Science.gov

Sample records for realtime 3d tracking

  1. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    NASA Astrophysics Data System (ADS)

    Kanphet, J.; Suriyapee, S.; Dumrongkijudom, N.; Sanghangthum, T.; Kumkhwao, J.; Wisetrintong, M.

    2016-03-01

    The purpose of this study was to determine patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, which has less impact owing to the stability of organ position under DIBH. The systematic error is likewise about half of the random error, because modern linac machines can reduce systematic uncertainty effectively, while random errors remain uncontrollable.
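
    The population statistics quoted above are usually aggregated in the standard way: the group systematic error Σ is the standard deviation of the per-patient mean setup errors, and the group random error σ is the root mean square of the per-patient standard deviations. A minimal sketch of that bookkeeping, with hypothetical per-fraction data (not the study's measurements):

    ```python
    import numpy as np

    # Hypothetical per-fraction setup errors (mm) in one direction for six patients.
    errors_mm = [
        np.array([0.4, -0.2, 0.9, 0.1]),   # patient 1
        np.array([-0.6, 0.3, -0.1, 0.5]),  # patient 2
        np.array([1.1, 0.7, 0.2, -0.3]),   # patient 3
        np.array([0.0, -0.4, 0.6, 0.2]),   # patient 4
        np.array([-0.8, -0.5, 0.1, 0.4]),  # patient 5
        np.array([0.3, 0.9, -0.2, 0.6]),   # patient 6
    ]

    patient_means = np.array([e.mean() for e in errors_mm])
    patient_sds = np.array([e.std(ddof=1) for e in errors_mm])

    group_mean = patient_means.mean()                # M: overall mean setup error
    systematic = patient_means.std(ddof=1)           # Sigma: SD of per-patient means
    random_err = np.sqrt(np.mean(patient_sds ** 2))  # sigma: RMS of per-patient SDs

    print(f"M = {group_mean:.2f} mm, Sigma = {systematic:.2f} mm, sigma = {random_err:.2f} mm")
    ```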

  2. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples of medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the integer-pixel search computations. Experiments were carried out, and the results indicated that the new method improved computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed at orthognathic surgery navigation, in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10⁻³ mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated great potential for tracking muscle and skin movements.

  3. Passive markers for tracking surgical instruments in real-time 3-D ultrasound imaging.

    PubMed

    Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E

    2012-03-01

    A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts.

  4. Real-time 3D motion tracking for small animal brain PET

    NASA Astrophysics Data System (ADS)

    Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.

    2008-05-01

    High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.

  5. Accuracy of real-time single- and multi-beat 3-d speckle tracking echocardiography in vitro.

    PubMed

    Hjertaas, Johannes Just; Fosså, Henrik; Dybdahl, Grete Lunestad; Grüner, Renate; Lunde, Per; Matre, Knut

    2013-06-01

    With little data published on the accuracy of cardiac 3-D strain measurements, we investigated the agreement between 3-D echocardiography and sonomicrometry in an in vitro model with a polyvinyl alcohol phantom. A cardiac scanner with a 3-D probe was used to acquire recordings at 15 different stroke volumes at a heart rate of 60 beats/min, and at eight different stroke volumes at a heart rate of 120 beats/min. Sonomicrometry was used as a reference, monitoring longitudinal, circumferential and radial lengths. Both single- and multi-beat acquisitions were recorded. Strain values were compared with sonomicrometer strain using linear correlation coefficients and Bland-Altman analysis. Multi-beat acquisition showed good agreement, whereas real-time images showed less agreement. The best correlation was obtained for a heart rate of 60 beats/min at a volume rate of 36.6 volumes/s.
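
    The agreement analysis mentioned above (linear correlation plus Bland-Altman bias and limits of agreement) is a few lines of arithmetic; the arrays below are hypothetical stand-ins for paired echo and sonomicrometry strain values, not the study's data:

    ```python
    import numpy as np

    # Hypothetical paired strain measurements (%).
    echo_strain = np.array([-18.2, -15.1, -12.4, -9.8, -7.5, -20.3, -16.7, -11.0])
    sono_strain = np.array([-19.0, -15.8, -12.0, -10.5, -8.1, -21.1, -17.2, -11.6])

    # Linear correlation coefficient
    r = np.corrcoef(echo_strain, sono_strain)[0, 1]

    # Bland-Altman: bias and 95% limits of agreement
    diff = echo_strain - sono_strain
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)

    print(f"r = {r:.3f}, bias = {bias:.2f}%, LoA = {bias - loa:.2f}% to {bias + loa:.2f}%")
    ```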

  6. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    SciTech Connect

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, existing techniques have limitations in either registration speed or performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As the feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava; since the latter is especially useful in patients with diffuse liver disease, the authors propose its use here. In the intraoperative real-time stage, they acquire 2D US images in real time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  7. Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry

    2016-03-01

    In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. However, these techniques often complicate interventions by requiring additional steps to manually define and initialize the virtual models. Furthermore, overlaying virtual elements onto real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work in augmented virtuality techniques, and considerable improvement over standard-of-care ultrasound guidance.

  8. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    PubMed

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach for tracking deformable structures in 3D ultrasound sequences. Our method obtains the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. The results demonstrate that this novel approach has the advantage of providing correct motion estimation in the presence of different ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges.

  9. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions

    NASA Astrophysics Data System (ADS)

    Wiersma, R. D.; Riaz, N.; Dieterich, Sonja; Suh, Yelin; Xing, L.

    2009-01-01

    The integration of onboard kV imaging together with an MV electronic portal imaging device (EPID) on linear accelerators (LINACs) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated with simultaneous imaging by both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of the target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruptions, or MV/kV beam interleaving, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data from five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have <=1 mm root mean squared error (RMSE) in all three spatial directions. In addition to increasing the robustness of

  10. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV kV imaging

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.

    2008-12-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real-time
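
    The MV-kV localization step amounts to triangulating the marker from two back-projected rays, one per imager, once the pixel-to-room-coordinate calibration described above is available. A generic least-squares two-ray triangulation sketch (hypothetical source geometry, not the authors' calibration code):

    ```python
    import numpy as np

    def closest_point_between_rays(p1, d1, p2, d2):
        """Least-squares 3D point closest to two rays p_i + t_i * d_i."""
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        # Minimize |(p1 + t1*d1) - (p2 + t2*d2)|^2 over t1, t2.
        a = np.array([[d1 @ d1, -d1 @ d2],
                      [d1 @ d2, -d2 @ d2]])
        b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
        t1, t2 = np.linalg.solve(a, b)
        return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

    # Hypothetical geometry (mm): MV source along +y, kV source along +x.
    mv_src, kv_src = np.array([0.0, 1000.0, 0.0]), np.array([1000.0, 0.0, 0.0])
    marker = np.array([3.0, -2.0, 5.0])                 # ground-truth marker position
    mv_dir, kv_dir = marker - mv_src, marker - kv_src   # rays through the detected pixels

    print(closest_point_between_rays(mv_src, mv_dir, kv_src, kv_dir))  # ~[3, -2, 5]
    ```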

  11. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    PubMed

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general

  12. Three-Dimensional Rotation, Twist and Torsion Analyses Using Real-Time 3D Speckle Tracking Imaging: Feasibility, Reproducibility, and Normal Ranges in Pediatric Population

    PubMed Central

    Han, Wei; Gao, Jun; He, Lin; Yang, Yali; Yin, Ping; Xie, Mingxing; Ge, Shuping

    2016-01-01

    Background and Objective: The specific aim of this study was to evaluate the feasibility, reproducibility and maturational changes of LV rotation, twist and torsion variables by real-time 3D speckle-tracking echocardiography (RT3DSTE) in children. Methods: A prospective study was conducted in 347 consecutive healthy subjects (181 males/156 females, mean age 7.12 ± 5.3 years, range from birth to 18 years) using RT 3D echocardiography (3DE). The LV rotation, twist and torsion measurements were made off-line using TomTec software. Manual landmark selection and endocardial border editing were performed in 3 planes (apical “2”-, “4”-, and “3”-chamber views), and semi-automated tracking yielded LV rotation, twist and torsion measurements. LV rotation, twist and torsion analysis by RT3DSTE was feasible in 307 out of 347 subjects (88.5%). Results: There was no correlation between rotation or twist and age, height, weight, BSA or heart rate. However, there was a statistically significant but very modest correlation between LV torsion and age (R² = 0.036, P < 0.001). Normal ranges were defined for rotation and twist in this cohort, and for torsion for each age group. The intra-observer and inter-observer variabilities for apical and basal rotation, twist and torsion ranged from 7.3% ± 3.8% to 12.3% ± 8.8% and from 8.8% ± 4.6% to 15.7% ± 10.1%, respectively. Conclusions: We conclude that analysis of LV rotation, twist and torsion by this new RT3DSTE is feasible and reproducible in a pediatric population. There is no maturational change in rotation and twist, but torsion decreases with age in this cohort. Further refinement is warranted to validate the utility of this new methodology for more sensitive and quantitative evaluation of congenital and acquired heart diseases in children. PMID:27427968
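
    For reference, LV twist and torsion are conventionally derived from the apical and basal rotation traces as twist = apical rotation − basal rotation, and torsion = twist divided by end-diastolic LV length. A toy calculation with hypothetical rotation values (not the study's data):

    ```python
    import numpy as np

    apical_rot = np.array([0.0, 2.1, 4.8, 7.6, 9.3])     # hypothetical rotation over systole (deg)
    basal_rot  = np.array([0.0, -1.0, -2.4, -3.9, -4.7])
    lv_length_cm = 6.2                                    # hypothetical end-diastolic LV length

    twist = apical_rot - basal_rot        # deg
    torsion = twist / lv_length_cm        # deg/cm

    print(f"peak twist = {twist.max():.1f} deg, peak torsion = {torsion.max():.2f} deg/cm")
    ```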

  13. Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Unmanned Aerial System Metrology

    DTIC Science & Technology

    2013-10-18

    area of 3D point estimation of flapping-wing UASs. The benefits of designing and developing such a system is instrumental in researching various...are many benefits to using SIFT in tracking. It detects features that are invariant to image scale and rotation, and are shown to provide robust...provided to estimate background motion for optical flow background subtraction. The experiments with the static background showed minute benefit in

  14. Automatic respiration tracking for radiotherapy using optical 3D camera

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact or radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, the application of O3D imaging techniques to image-guided radiotherapy has been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight change, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within a treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. Presently, a viable solution is lacking for in-vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas of the 3D surface to track surface motion. The configuration of these marks or areas may change with time, which makes quantifying and interpreting the respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality reduction technique, principal component analysis (PCA). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigenspace spanned by eigenvectors). New
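
    The decomposition described above is ordinary PCA applied to the frame-by-vertex surface matrix: the leading eigenvector is the dominant motion pattern and the projection of each frame onto it gives a respiration trace. A minimal sketch with synthetic surface data (not the authors' implementation):

    ```python
    import numpy as np

    # Synthetic sequence: T frames of a surface sampled at N vertices (height values only).
    T, N = 200, 500
    t = np.linspace(0, 20, T)
    breathing = np.sin(2 * np.pi * 0.25 * t)                     # ~15 breaths per minute
    surface = np.outer(breathing, np.random.rand(N)) + 0.01 * np.random.randn(T, N)

    # PCA via SVD of the mean-centred frame matrix.
    centred = surface - surface.mean(axis=0)
    u, s, vt = np.linalg.svd(centred, full_matrices=False)

    explained = s ** 2 / np.sum(s ** 2)
    respiration_trace = centred @ vt[0]    # projection onto the first motion pattern

    print(f"First component explains {100 * explained[0]:.1f}% of surface motion variance")
    ```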

  15. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.

  16. Light driven micro-robotics with holographic 3D tracking

    NASA Astrophysics Data System (ADS)

    Glückstad, Jesper

    2016-04-01

    We recently pioneered the concept of light-driven micro-robotics, including new and disruptive 3D-printed micro-tools coined Wave-guided Optical Waveguides that can be optically trapped in real time and "remote-controlled" in a volume with six degrees of freedom. Exploring the full potential of this new drone-like 3D light robotics approach in challenging microscopic geometries requires a versatile and real-time reconfigurable light coupling that can dynamically track a plurality of "light robots" in 3D to ensure continuous optimal light coupling on the fly. Our latest developments in this new and exciting area are reviewed in this invited paper.

  17. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  18. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2016-07-12

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  19. Tracking earthquake source evolution in 3-D

    NASA Astrophysics Data System (ADS)

    Kennett, B. L. N.; Gorbatov, A.; Spiliopoulos, S.

    2014-08-01

    Starting from the hypocentre, the point of initiation of seismic energy, we seek to estimate the subsequent trajectory of the points of emission of high-frequency energy in 3-D, which we term the `evocentres'. We track these evocentres as a function of time by energy stacking for putative points on a 3-D grid around the hypocentre that is expanded as time progresses, selecting the location of maximum energy release as a function of time. The spatial resolution in the neighbourhood of a target point can be simply estimated by spatial mapping using the properties of isochrons from the stations. The mapping of a seismogram segment to space is by inverse slowness, and thus more distant stations have a broader spatial contribution. As in hypocentral estimation, the inclusion of a wide azimuthal distribution of stations significantly enhances 3-D capability. We illustrate this approach to tracking source evolution in 3-D by considering two major earthquakes, the 2007 Mw 8.1 Solomon Islands event that ruptured across a plate boundary and the 2013 Mw 8.3 event 610 km beneath the Sea of Okhotsk. In each case we are able to provide estimates of the evolution of high-frequency energy that tally well with alternative schemes, but also to provide information on the 3-D characteristics that is not available from backprojection from distant networks. We are able to demonstrate that the major characteristics of event rupture can be captured using just a few azimuthally distributed stations, which opens the opportunity for the approach to be used in a rapid mode immediately after a major event to provide guidance for, for example, tsunami warning for megathrust events.
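
    The energy-stacking idea can be illustrated schematically: for each candidate grid point, sum the station energy envelopes at their predicted arrival times and keep the point with the largest stack. The sketch below uses impulsive synthetic envelopes and a crude constant-velocity travel time; it is only an illustration of the stacking step, not the authors' algorithm.

    ```python
    import numpy as np

    def evocentre(envelopes, dt, stations, grid, origin_time, velocity=8.0):
        """Grid point whose predicted arrivals stack to maximum energy.

        envelopes : (n_sta, n_samples) high-frequency energy envelopes
        stations  : (n_sta, 3) station coordinates (km)
        grid      : (n_pts, 3) candidate emission points (km)
        """
        power = np.zeros(len(grid))
        for i, x in enumerate(grid):
            for env, st in zip(envelopes, stations):
                t_arr = origin_time + np.linalg.norm(x - st) / velocity  # crude travel time (s)
                k = int(round(t_arr / dt))
                if 0 <= k < env.size:
                    power[i] += env[k]
        return grid[np.argmax(power)], power

    # Tiny synthetic check: 4 stations, impulsive arrivals from a source at (10, 0, 5) km.
    stations = np.array([[100., 0., 0.], [0., 100., 0.], [-100., 0., 0.], [0., -100., 0.]])
    src = np.array([10., 0., 5.])
    dt, n = 0.05, 800
    envelopes = np.zeros((4, n))
    for j, st in enumerate(stations):
        envelopes[j, int(round(np.linalg.norm(src - st) / 8.0 / dt))] = 1.0

    grid = np.array([[x, 0., 5.] for x in np.arange(-20., 21., 1.)])
    best, _ = evocentre(envelopes, dt, stations, grid, origin_time=0.0)
    print(best)   # -> close to [10, 0, 5]
    ```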

  20. 3D hand tracking using Kalman filter in depth space

    NASA Astrophysics Data System (ADS)

    Park, Sangheon; Yu, Sunjin; Kim, Joongrock; Kim, Sungjin; Lee, Sangyoun

    2012-12-01

    Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using the Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with that of a visual-based method. Experimental results show that the proposed method outperforms the visual-based method.
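
    A constant-velocity Kalman filter is the usual choice for smoothing a 3D position stream from a depth sensor; the generic sketch below is not the authors' exact model or noise parameters.

    ```python
    import numpy as np

    class KalmanTracker3D:
        """Constant-velocity Kalman filter for a 3D position measured by a depth sensor."""

        def __init__(self, dt=1 / 30, process_var=1e-2, meas_var=1e-1):
            self.x = np.zeros(6)                                 # state: [px, py, pz, vx, vy, vz]
            self.P = np.eye(6)
            self.F = np.eye(6)
            self.F[:3, 3:] = dt * np.eye(3)                      # position += velocity * dt
            self.H = np.hstack([np.eye(3), np.zeros((3, 3))])    # only position is observed
            self.Q = process_var * np.eye(6)
            self.R = meas_var * np.eye(3)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:3]

        def update(self, z):
            y = z - self.H @ self.x                              # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)             # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(6) - K @ self.H) @ self.P
            return self.x[:3]

    # Usage with hypothetical noisy depth-camera hand positions (m).
    tracker = KalmanTracker3D()
    for z in np.array([[0.10, 0.20, 0.80], [0.12, 0.21, 0.79], [0.15, 0.22, 0.78]]):
        tracker.predict()
        print(tracker.update(z))
    ```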

  1. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

    As its clinical applications grow, there is rapid technical development of 3-D ultrasound imaging. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we propose a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and lower cost. We designed a sliding track with a linear position sensor attached, which transmitted positional data via a wireless communication module based on Bluetooth, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired when moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs.

  2. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end- to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  3. 3D Tracking via Shoe Sensing

    PubMed Central

    Li, Fangmin; Liu, Guo; Liu, Jian; Chen, Xiaochuang; Ma, Xiaolin

    2016-01-01

    Most location-based services are based on a global positioning system (GPS), which only works well in outdoor environments. Compared to outdoor environments, indoor localization has created more buzz in recent years, as people spend most of their time indoors, working in offices, shopping at malls, etc. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful due to the mobile devices' random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing stairs, a state classification is designed to distinguish the walking status, including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the vertical distance accumulation error. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy. PMID:27801839
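
    The short-time energy step segmentation referred to above is essentially a sliding-window energy measure on the accelerometer signal with a threshold on its rising edges. A hedged sketch with synthetic data (window length, threshold and signal are all hypothetical choices, not the authors' parameters):

    ```python
    import numpy as np

    def short_time_energy(signal, win):
        """Sliding-window energy of a zero-mean 1D signal."""
        sq = (signal - signal.mean()) ** 2
        kernel = np.ones(win) / win
        return np.convolve(sq, kernel, mode="same")

    def detect_steps(accel_mag, fs, win_s=0.2, thresh_ratio=0.5):
        """Return sample indices where an energy burst (candidate step) begins."""
        energy = short_time_energy(accel_mag, int(win_s * fs))
        above = energy > thresh_ratio * energy.max()
        return np.flatnonzero(np.diff(above.astype(int)) == 1)   # rising edges = step onsets

    # Synthetic accelerometer magnitude: ~2 Hz bursts on top of gravity.
    fs = 100
    t = np.arange(0, 5, 1 / fs)
    accel = 9.8 + 2.0 * (np.sin(2 * np.pi * 2 * t) > 0.9) * np.random.rand(t.size)
    print(detect_steps(accel, fs))
    ```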

  4. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers enhanced experience in 3D visualization of the real world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring the realistic viewing experience to viewers as if they are viewing real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  5. Tracked 3D ultrasound in radio-frequency liver ablation

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Fichtinger, Gabor; Taylor, Russell H.; Choti, Michael A.

    2003-05-01

    Recent studies have shown that radio frequency (RF) ablation is a simple, safe and potentially effective treatment for selected patients with liver metastases. Despite recent therapeutic advancements, however, intra-procedural target localization and precise, consistent placement of the tissue ablator device remain unsolved problems. Various imaging modalities, including ultrasound (US) and computed tomography (CT), have been tried as guidance modalities. Transcutaneous US imaging, due to its real-time nature, may be beneficial, but unfortunately it often fails to adequately visualize the tumor. Intraoperative or laparoscopic US, on the other hand, provides improved visualization and target imaging. This paper describes a system for computer-assisted RF ablation of liver tumors, combining navigational tracking of a conventional imaging ultrasound probe to produce 3D ultrasound imaging with a tracked RF ablation device supported by a passive mechanical arm and spatially registered to the ultrasound volume.

  6. Speeding up 3D speckle tracking using PatchMatch

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; O'Donnell, Matthew

    2016-03-01

    Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) the inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand; and 2) for typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
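
    The core PatchMatch idea (random initialization, propagation of good offsets from already-visited neighbours, and a shrinking random search) can be sketched in 2D; the paper applies it to 3D RF volumes, so the block below is only a schematic analogue with hypothetical parameters.

    ```python
    import numpy as np

    def patchmatch_2d(ref, tgt, patch=5, iters=4, radius=16, rng=np.random.default_rng(0)):
        """Schematic 2D PatchMatch: random init, neighbour propagation, shrinking random search."""
        h, w = ref.shape
        r = patch // 2
        off = rng.integers(-radius, radius + 1, size=(h, w, 2))   # initial random offsets (dy, dx)

        def cost(y, x, dy, dx):
            y2, x2 = y + dy, x + dx
            if not (r <= y2 < h - r and r <= x2 < w - r):
                return np.inf
            a = ref[y - r:y + r + 1, x - r:x + r + 1]
            b = tgt[y2 - r:y2 + r + 1, x2 - r:x2 + r + 1]
            return np.sum((a - b) ** 2)

        for it in range(iters):
            step = 1 if it % 2 == 0 else -1                       # alternate scan order
            ys = range(r, h - r) if step == 1 else range(h - r - 1, r - 1, -1)
            xs = range(r, w - r) if step == 1 else range(w - r - 1, r - 1, -1)
            for y in ys:
                for x in xs:
                    best = off[y, x]
                    best_c = cost(y, x, *best)
                    # propagation: try offsets of already-visited neighbours
                    for ny, nx in ((y - step, x), (y, x - step)):
                        c = cost(y, x, *off[ny, nx])
                        if c < best_c:
                            best, best_c = off[ny, nx], c
                    # random search with exponentially shrinking radius
                    rad = radius
                    while rad >= 1:
                        cand = best + rng.integers(-rad, rad + 1, size=2)
                        c = cost(y, x, *cand)
                        if c < best_c:
                            best, best_c = cand, c
                        rad //= 2
                    off[y, x] = best
        return off

    # Tiny usage example with a shifted random image (hypothetical data).
    ref = np.random.rand(48, 48)
    tgt = np.roll(ref, shift=(3, -2), axis=(0, 1))
    offsets = patchmatch_2d(ref, tgt)
    print(np.median(offsets[8:-8, 8:-8], axis=(0, 1)))   # ≈ [3, -2]
    ```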

  7. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor.

  8. Real-time catheter tracking for high-dose-rate prostate brachytherapy using an electromagnetic 3D-guidance device: A preliminary performance study

    SciTech Connect

    Zhou Jun; Sebastian, Evelyn; Mangona, Victor; Yan Di

    2013-02-15

    Purpose: In order to increase the accuracy and speed of catheter reconstruction in a high-dose-rate (HDR) prostate implant procedure, an automatic tracking system has been developed using an electromagnetic (EM) device (trakSTAR, Ascension Technology, VT). The performance of the system, including the accuracy and noise level with various tracking parameters and conditions, was investigated. Methods: A direct current (dc) EM transmitter (midrange model) and a sensor with diameter of 1.3 mm (Model 130) were used in the trakSTAR system for tracking catheter position during HDR prostate brachytherapy. Localization accuracy was assessed under both static and dynamic analysis conditions. For the static analysis, a calibration phantom was used to investigate error dependency on operating room (OR) table height (bottom vs midposition vs top), sensor position (distal tip of catheter vs connector end of catheter), direction [left-right (LR) vs anterior-posterior (AP) vs superior-inferior (SI)], sampling frequency (40 vs 80 vs 120 Hz), and interference from OR equipment (present vs absent). The mean and standard deviation of the localization offset in each direction and the corresponding error vectors were calculated. For the dynamic analysis, the paths of five straight catheters were tracked to study the effects of direction, sampling frequency, and interference of the EM field. Statistical analysis was conducted to compare the results in different configurations. Results: When interference was present in the static analysis, the error vectors were significantly higher at the top table position (3.3 ± 1.3 vs 1.8 ± 0.9 mm at bottom and 1.7 ± 1.0 mm at middle, p < 0.001), at the catheter end position (3.1 ± 1.1 vs 1.4 ± 0.7 mm at the tip position, p < 0.001), and at 40 Hz sampling frequency (2.6 ± 1.1 vs 2.4 ± 1.5 mm at 80 Hz and 1.8 ± 1.1 at 160 Hz, p < 0.001). So did the mean offset errors in the LR direction (-1.7 ± 1.4 vs 0.4 ± 0.5 mm in AP and 0

  9. Electrically tunable lens speeds up 3D orbital tracking

    PubMed Central

    Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico

    2015-01-01

    3D orbital particle tracking is a versatile and effective microscopy technique that allows following fast-moving fluorescent objects within living cells and reconstructing complex 3D shapes using laser scanning microscopes. We demonstrated notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing the commonly used piezoelectric stages with an electrically tunable lens (ETL), which eliminates mechanical movement of the objective lens. This allowed tracking and shape reconstruction of structures extending 500 microns in the axial direction. Using the ETL, we tracked fluorescently labeled genomic loci within the nucleus of living cells at high speed with an unprecedented temporal resolution of 8 ms using a 1.42 NA oil-immersion objective. The presented technology is cost effective and allows easy upgrading of scanning microscopes for fast 3D orbital tracking. PMID:26114037

  10. Real-time 3D video conference on generic hardware

    NASA Astrophysics Data System (ADS)

    Desurmont, X.; Bruyelle, J. L.; Ruiz, D.; Meessen, J.; Macq, B.

    2007-02-01

    Nowadays, video conferencing is increasingly advantageous because of the economic and ecological cost of transport. Several platforms exist. The goal of the TIFANIS immersive platform is to let users interact as if they were physically together. Unlike previous tele-immersion systems, TIFANIS uses generic hardware to achieve an economically realistic implementation. The basic functions of the system are to capture the scene, transmit it through digital networks to other partners, and then render it according to each partner's viewing characteristics. The image processing part should run in real time. We propose to analyze the whole system. It can be split into different services such as the central processing unit (CPU), graphical rendering, direct memory access (DMA), and communications through the network. Most of the processing is done by the CPU. It is composed of the 3D reconstruction and the detection and tracking of faces from the video stream. However, the processing needs to be parallelized into several threads that have as few dependencies as possible. In this paper, we present these issues and the way we deal with them.

  11. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how users gaze in 3D directions in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for Nvidia 3D Vision(®) to be used with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.

  12. Multiview 3-D Echocardiography Fusion with Breath-Hold Position Tracking Using an Optical Tracking System.

    PubMed

    Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; McNulty, Alexander; Biamonte, Marina; He, Allen; Noga, Michelle; Boulanger, Pierre; Becher, Harald

    2016-08-01

    Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, one of the major limitations of 3-D echocardiography is the limited field of view, which results in an acquisition insufficient to cover the whole geometry of the heart. This study proposes the novel approach of fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to infer that the heart remains at the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field of view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvement in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to individual views.

  13. Real-time structured light intraoral 3D measurement pipeline

    NASA Astrophysics Data System (ADS)

    Gheorghe, Radu; Tchouprakov, Andrei; Sokolov, Roman

    2013-02-01

    Computer-aided design and manufacturing (CAD/CAM) is increasingly becoming a standard feature and service provided to patients in dentist offices and denture manufacturing laboratories. Although the quality of the tools and data has slowly improved in recent years, due to various surface measurement challenges, practical, accurate, in-vivo, real-time, high-quality 3D data acquisition and processing still need improving. Advances in GPU computational power have allowed near real-time 3D intraoral in-vivo scanning of a patient's teeth. We explore in this paper, from a real-time perspective, a hardware-software-GPU solution that addresses all the requirements mentioned before. Moreover, we exemplify and quantify the hard and soft deadlines required by such a system and illustrate how they are supported in our implementation.

  14. Deployment of a 3D tag tracking method utilising RFID

    NASA Astrophysics Data System (ADS)

    Wasif Reza, Ahmed; Yun, Teoh Wei; Dimyati, Kaharudin; Geok Tan, Kim; Ariffin Noordin, Kamarul

    2012-04-01

    Recent trends show that one of the crucial problems faced when using radio frequency to track objects is the inconsistency of signal strength reception, mainly due to environmental factors and blockage, which have the greatest impact on tracking accuracy. Moreover, three dimensions are more relevant to warehouse scanning. Therefore, this study proposes a highly accurate, new three-dimensional (3D) radio frequency identification-based indoor tracking system that takes different attenuation factors and obstacles into consideration. The obtained results show that the proposed system yields high-quality performance with an average error as low as 0.27 m (without obstacles and attenuation effects). The obtained results also show that the proposed tracking technique can achieve relatively low errors (0.4 and 0.36 m, respectively) even in the presence of the highest attenuation effect, e = 3.3, or when the environment is largely affected by 50% of the obstacles. Furthermore, the superiority of the proposed 3D tracking system has been demonstrated by comparison with other existing approaches. The 3D tracking system proposed in this study is applicable to warehouse scanning.
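
    A common building block for RSSI-based 3D localization of this kind is a log-distance path-loss model (with the attenuation exponent playing the role of the factor e above) combined with linear least-squares trilateration from several readers. The sketch below is generic, with hypothetical reader positions and a noiseless tag, and is not the authors' algorithm:

    ```python
    import numpy as np

    def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=3.3):
        """Invert the log-distance path-loss model: RSSI = RSSI(1 m) - 10*n*log10(d)."""
        return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

    def trilaterate_3d(anchors, dists):
        """Linear least-squares position from >= 4 anchor/range pairs."""
        a0, d0 = anchors[0], dists[0]
        A = 2 * (anchors[1:] - a0)
        b = (d0 ** 2 - dists[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
        return np.linalg.lstsq(A, b, rcond=None)[0]

    # Hypothetical readers at warehouse corners (m) and a tag at (2, 3, 1.5).
    anchors = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 5.], [10., 10., 5.]])
    tag = np.array([2., 3., 1.5])
    true_d = np.linalg.norm(anchors - tag, axis=1)
    rssi = -40.0 - 10 * 3.3 * np.log10(true_d)                 # noiseless synthetic readings
    print(trilaterate_3d(anchors, rssi_to_distance(rssi)))     # ≈ [2, 3, 1.5]
    ```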

  15. Autonomous Real-Time Interventional Scan Plane Control With a 3-D Shape-Sensing Needle

    PubMed Central

    Plata, Juan Camilo; Holbrook, Andrew B.; Park, Yong-Lae; Pauly, Kim Butts; Daniel, Bruce L.; Cutkosky, Mark R.

    2016-01-01

    This study demonstrates real-time scan plane control dependent on three-dimensional needle bending, as measured from magnetic resonance imaging (MRI)-compatible optical strain sensors. A biopsy needle with embedded fiber Bragg grating (FBG) sensors to measure surface strains is used to estimate its full 3-D shape and control the imaging plane of an MR scanner in real-time, based on the needle’s estimated profile. The needle and scanner coordinate frames are registered to each other via miniature radio-frequency (RF) tracking coils, and the scan planes autonomously track the needle as it is deflected, keeping its tip in view. A 3-D needle annotation is superimposed over MR-images presented in a 3-D environment with the scanner’s frame of reference. Scan planes calculated based on the FBG sensors successfully follow the tip of the needle. Experiments using the FBG sensors and RF coils to track the needle shape and location in real-time had an average root mean square error of 4.2 mm when comparing the estimated shape to the needle profile as seen in high resolution MR images. This positional variance is less than the image artifact caused by the needle in high resolution SPGR (spoiled gradient recalled) images. Optical fiber strain sensors can estimate a needle’s profile in real-time and be used for MRI scan plane control to potentially enable faster and more accurate physician response. PMID:24968093
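
    The shape estimation above exploits the fact that bending strain varies linearly across the needle cross-section, so a few fibers at known angular positions suffice to recover the two bending curvatures, which can then be integrated along the shaft. A simplified, hedged sketch (one common sign convention, hypothetical strain readings, constant curvature assumed; not the authors' reconstruction):

    ```python
    import numpy as np

    def curvature_from_strains(strains, fiber_angles_rad, fiber_radius):
        """Solve eps_j = eps0 + kx * r*cos(theta_j) + ky * r*sin(theta_j)
        for the axial strain and the two bending curvatures."""
        A = np.column_stack([
            np.ones_like(fiber_angles_rad),
            fiber_radius * np.cos(fiber_angles_rad),
            fiber_radius * np.sin(fiber_angles_rad),
        ])
        eps0, kx, ky = np.linalg.lstsq(A, strains, rcond=None)[0]
        return kx, ky

    def deflection_profile(kx, ky, length, n=100):
        """Small-deflection shape for uniform curvature: deflection grows as 0.5*k*s^2."""
        s = np.linspace(0.0, length, n)
        return np.column_stack([0.5 * kx * s ** 2, 0.5 * ky * s ** 2, s])

    # Hypothetical layout: three fibers at 120 deg spacing, 0.2 mm from the needle axis.
    angles = np.deg2rad([0.0, 120.0, 240.0])
    kx, ky = curvature_from_strains(np.array([120e-6, -40e-6, -50e-6]), angles, 0.2)
    print(deflection_profile(kx, ky, length=100.0)[-1])   # estimated tip position (mm)
    ```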

  16. Autonomous real-time interventional scan plane control with a 3-D shape-sensing needle.

    PubMed

    Elayaperumal, Santhi; Plata, Juan Camilo; Holbrook, Andrew B; Park, Yong-Lae; Pauly, Kim Butts; Daniel, Bruce L; Cutkosky, Mark R

    2014-11-01

    This study demonstrates real-time scan plane control dependent on three-dimensional needle bending, as measured from magnetic resonance imaging (MRI)-compatible optical strain sensors. A biopsy needle with embedded fiber Bragg grating (FBG) sensors to measure surface strains is used to estimate its full 3-D shape and control the imaging plane of an MR scanner in real-time, based on the needle's estimated profile. The needle and scanner coordinate frames are registered to each other via miniature radio-frequency (RF) tracking coils, and the scan planes autonomously track the needle as it is deflected, keeping its tip in view. A 3-D needle annotation is superimposed over MR-images presented in a 3-D environment with the scanner's frame of reference. Scan planes calculated based on the FBG sensors successfully follow the tip of the needle. Experiments using the FBG sensors and RF coils to track the needle shape and location in real-time had an average root mean square error of 4.2 mm when comparing the estimated shape to the needle profile as seen in high resolution MR images. This positional variance is less than the image artifact caused by the needle in high resolution SPGR (spoiled gradient recalled) images. Optical fiber strain sensors can estimate a needle's profile in real-time and be used for MRI scan plane control to potentially enable faster and more accurate physician response.

  17. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

    Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework is able to track 3-D human walking poses in a 3-D environment exploring only a 4-D state space with success. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for the monocular sequences with a high perspective effect from the CAVIAR dataset.

  18. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL and runs on Windows, Linux, and Macintosh.
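
    To give a flavour of the programming model described above, here is a minimal bouncing-ball sketch written against the current open-source vpython package (rather than the original Visual module); the loop is pure computation, and the 3D rendering happens automatically:

    ```python
    from vpython import sphere, vector, rate, color

    # A ball falling under gravity and bouncing on a floor at y = 0.
    ball = sphere(pos=vector(0, 5, 0), radius=0.4, color=color.red, make_trail=True)
    velocity = vector(0, 0, 0)
    g, dt = vector(0, -9.8, 0), 0.01

    while True:
        rate(100)                      # limit the loop to ~100 iterations per second
        velocity = velocity + g * dt
        ball.pos = ball.pos + velocity * dt
        if ball.pos.y < ball.radius:   # simple floor bounce
            velocity.y = -velocity.y
    ```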

  19. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects of monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors, and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose the radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. After etching, the plastics were treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  20. Real-time rendering method and performance evaluation of composable 3D lenses for interactive VR.

    PubMed

    Borst, Christoph W; Tiesel, Jan-Phillip; Best, Christopher M

    2010-01-01

    We present and evaluate a new approach for real-time rendering of composable 3D lenses for polygonal scenes. Such lenses, usually called "volumetric lenses," are an extension of 2D Magic Lenses to 3D volumes in which effects are applied to scene elements. Although the composition of 2D lenses is well known, 3D composition was long considered infeasible due to both geometric and semantic complexity. Nonetheless, for a scene with multiple interactive 3D lenses, the problem of intersecting lenses must be considered. Intersecting 3D lenses in meaningful ways supports new interfaces such as hierarchical 3D windows, 3D lenses for managing and composing visualization options, or interactive shader development by direct manipulation of lenses providing component effects. Our 3D volumetric lens approach differs from other approaches and is one of the first to address efficient composition of multiple lenses. It is well-suited to head-tracked VR environments because it requires no view-dependent generation of major data structures, allowing caching and reuse of full or partial results. A Composite Shader Factory module composes shader programs for rendering composite visual styles and geometry of intersection regions. Geometry is handled by Boolean combinations of region tests in fragment shaders, which allows both convex and nonconvex CSG volumes for lens shape. Efficiency is further addressed by a Region Analyzer module and by broad-phase culling. Finally, we consider the handling of order effects for composed 3D lenses.

  1. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Bachmair, F.; Bäni, L.; Bergonzo, P.; Caylar, B.; Forcolin, G.; Haughton, I.; Hits, D.; Kagan, H.; Kass, R.; Li, L.; Oh, A.; Phan, S.; Pomorski, M.; Smith, D. S.; Tyzhnevyi, V.; Wallny, R.; Whitehead, D.

    2015-06-01

    A novel device using single-crystal chemical vapour deposited diamond and resistive electrodes in the bulk forming a 3D diamond detector is presented. The electrodes of the device were fabricated with laser assisted phase change of diamond into a combination of diamond-like carbon, amorphous carbon and graphite. The connections to the electrodes of the device were made using a photo-lithographic process. The electrical and particle detection properties of the device were investigated. A prototype detector system consisting of the 3D device connected to a multi-channel readout was successfully tested with 120 GeV protons proving the feasibility of the 3D diamond detector concept for particle tracking applications for the first time.

  2. Characterisation of walking loads by 3D inertial motion tracking

    NASA Astrophysics Data System (ADS)

    Van Nimmen, K.; Lombaert, G.; Jonkers, I.; De Roeck, G.; Van den Broeck, P.

    2014-09-01

    The present contribution analyses the walking behaviour of pedestrians in situ by 3D inertial motion tracking. The technique is first tested in laboratory experiments with simultaneous registration of the ground reaction forces. The registered motion of the pedestrian allows for the identification of stride-to-stride variations, which are usually disregarded in the simulation of walking forces. Subsequently, motion tracking is used to register the walking behaviour of (groups of) pedestrians during in situ measurements on a footbridge. The calibrated numerical model of the structure and the information gathered using the motion tracking system enable a detailed simulation of the step-by-step pedestrian-induced vibrations. Accounting for the in situ identified walking variability of the test subjects leads to a significantly improved agreement between the measured and the simulated structural response.

  3. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed, in which video streams are synchronized and displayed in a 3D model, by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we experienced that more situational awareness was created by the 3D model, which made it easier to track people across multiple video streams. Based on all experiences from the experimental setup and the case, recommendations are formulated for use in practice.

  4. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but prior to this the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer, to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly, with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  5. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of the standard model-based approach lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem where GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.

  6. Design and Performance Evaluation on Ultra-Wideband Time-Of-Arrival 3D Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Dusl, John

    2012-01-01

    A three-dimensional (3D) Ultra-Wideband (UWB) Time-of-Arrival (TOA) tracking system has been studied at NASA Johnson Space Center (JSC) to provide the tracking capability inside the International Space Station (ISS) modules for various applications. One application is to locate and report the places where crew members experienced possibly elevated levels of carbon dioxide and felt unwell. In order to accurately locate those places in a multipath-intensive environment like the ISS modules, a robust real-time location system (RTLS) is required which can provide the necessary accuracy and update rate. A 3D UWB TOA tracking system with two-way ranging has been proposed and studied. The designed system will be tested in the Wireless Habitat Testbed, which simulates the ISS module environment. In this presentation, we discuss the 3D TOA tracking algorithm and the performance evaluation based on different tracking baseline configurations. The simulation results show that two configurations of the tracking baseline are feasible. With a 100 picosecond standard deviation (STD) of the TOA estimates, an average tracking error of 0.2392 feet (about 7 centimeters) can be achieved for the "Twisted Rectangle" configuration, while an average tracking error of 0.9183 feet (about 28 centimeters) can be achieved for the "Slightly-Twisted Top Rectangle" configuration. The tracking accuracy can be further improved with the improvement of the STD of the TOA estimates. With a 10 picosecond STD of the TOA estimates, an average tracking error of 0.0239 feet (less than 1 centimeter) can be achieved for the "Twisted Rectangle" configuration.
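
    The abstract reports accuracy figures but not the estimator itself; the sketch below is a generic linearized least-squares TOA multilateration, not the JSC algorithm. The anchor layout and tag position are invented; only the 100 ps TOA noise level is taken from the abstract.

        # Linearized least-squares TOA multilateration (illustrative sketch).
        import numpy as np

        anchors = np.array([[0.0, 0.0, 0.0],     # known receiver positions (m), invented
                            [4.0, 0.0, 0.5],
                            [4.0, 3.0, 2.5],
                            [0.0, 3.0, 3.0]])
        tag = np.array([1.7, 1.2, 1.5])          # "true" position to recover

        c = 0.299792458                          # metres per nanosecond
        toa_std_ns = 0.1                         # 100 ps TOA noise, as in the abstract
        ranges = np.linalg.norm(anchors - tag, axis=1) + c * toa_std_ns * np.random.randn(4)

        # |x - a_i|^2 = r_i^2 ; subtracting the i = 0 equation removes the |x|^2 term
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2)
             - ranges[1:]**2 + ranges[0]**2)
        est, *_ = np.linalg.lstsq(A, b, rcond=None)

        print("estimated position:", est, " error (m):", np.linalg.norm(est - tag))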

  7. 3D visualisation and analysis of single and coalescing tracks in Solid state Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, David; Gillmore, Gavin; Brown, Louise; Petford, Nick

    2010-05-01

    Exposure to radon gas (222Rn) and associated ionising decay products can cause lung cancer in humans (1). Solid state Nuclear Track Detectors (SSNTDs) can be used to monitor radon concentrations (2). Alpha particles from radon decay form tracks in the detectors, and these tracks can be etched in order to enable 2D surface image analysis. We have previously shown that confocal microscopy can be used for 3D visualisation of etched SSNTDs (3). The aim of the study was to further investigate track angles and patterns in SSNTDs. A 'LEXT' confocal laser scanning microscope (Olympus Corporation, Japan) was used to acquire 3D image datasets of five CR-39 plastic SSNTDs. The resultant 3D visualisations were analysed by eye and inclination angles assessed on selected tracks. From visual assessment, single isolated tracks as well as coalescing tracks were observed on the etched detectors. In addition, varying track inclination angles were observed. Several different patterns of track formation were seen, such as single isolated and double coalescing tracks. The observed track angles of inclination may help to assess the angle at which alpha particles hit the detector. (1) Darby, S., et al. Radon in homes and risk of lung cancer: collaborative analysis of individual data from 13 European case-control studies. British Medical Journal 2005; 330: 223-226. (2) Phillips, P.S., Denman, A.R., Crockett, R.G.M., Gillmore, G., Groves-Kirkby, C.J., Woolridge, A. Comparative Analysis of Weekly vs. Three Monthly Radon Measurements in Dwellings. DEFRA Report No. DEFRA/RAS/03.006 (2004). (3) Wertheim, D., Gillmore, G., Brown, L., and Petford, N. A new method of imaging particle tracks in Solid State Nuclear Track Detectors. Journal of Microscopy 2010; 237: 1-6.

  8. Holographic microscopy for 3D tracking of bacteria

    NASA Astrophysics Data System (ADS)

    Nadeau, Jay; Cho, Yong Bin; El-Kholy, Marwan; Bedrossian, Manuel; Rider, Stephanie; Lindensmith, Christian; Wallace, J. Kent

    2016-03-01

    Understanding when, how, and if bacteria swim is key to understanding critical ecological and biological processes, from carbon cycling to infection. Imaging motility by traditional light microscopy is limited by focus depth, requiring cells to be constrained in z. Holographic microscopy offers an instantaneous 3D snapshot of a large sample volume, and is therefore ideal in principle for quantifying unconstrained bacterial motility. However, resolving and tracking individual cells is difficult due to the low amplitude and phase contrast of the cells; the index of refraction of typical bacteria differs from that of water only at the second decimal place. In this work we present a combination of optical and sample-handling approaches to facilitate bacterial tracking by holographic phase imaging. The first is the design of the microscope, which is an off-axis design with the optics along a common path, which minimizes alignment issues while providing all of the advantages of off-axis holography. Second, we use anti-reflective coated etalon glass in the design of sample chambers, which reduces internal reflections. The improvement from the anti-reflective coating is seen primarily in phase imaging, and its quantification is presented here. Finally, dyes may be used to increase phase contrast according to the Kramers-Kronig relations. Results using three test strains are presented, illustrating the different types of bacterial motility characterized by an enteric organism (Escherichia coli), an environmental organism (Bacillus subtilis), and a marine organism (Vibrio alginolyticus). Data processing steps to increase the quality of the phase images and facilitate tracking are also discussed.

  9. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiducial markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved through 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phone's display, as if the display was a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.

  10. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    PubMed

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamic environments in real time. Our approach utilizes an inverted pendulum model to adjust, online, the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers, whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and the dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
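
    As a rough illustration of the velocity-driven tracking idea (torques computed from desired joint angular velocities rather than PD terms on joint angles), here is a toy single-joint sketch. The gain, inertia, and velocity profile are invented; this is not the paper's controller.

        # Toy velocity-driven controller for one revolute joint: torque is
        # proportional to the error between desired and current angular velocity.
        import numpy as np

        I = 0.8            # joint inertia (kg m^2), invented
        k_v = 12.0         # velocity-tracking gain, invented
        dt = 1.0 / 240.0   # simulation time step

        theta, omega = 0.0, 0.0
        for step in range(2400):
            t = step * dt
            omega_des = 2.0 * np.cos(2.0 * t)   # desired angular velocity profile
            tau = k_v * (omega_des - omega)     # velocity-driven torque
            omega += (tau / I) * dt             # forward dynamics (no gravity)
            theta += omega * dt

        print("final joint angle (rad): %.3f" % theta)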

  11. Comparison of 2D and 3D modeled tumor motion estimation/prediction for dynamic tumor tracking during arc radiotherapy.

    PubMed

    Liu, Wu; Ma, Xiangyu; Yan, Huagang; Chen, Zhe; Nath, Ravinder; Li, Haiyun

    2017-03-06

    Many real-time imaging techniques have been developed to localize the target in 3D space or in the 2D beam's eye view (BEV) plane for intrafraction motion tracking in radiation therapy. With tracking system latency, the 3D-modeled method is expected to be more accurate, even in terms of 2D BEV tracking error. No quantitative analysis, however, has been reported. In this study, we simulated co-planar arc deliveries using respiratory motion data acquired from 42 patients to quantitatively compare the accuracy between 2D BEV and 3D-modeled tracking in arc therapy and determine whether 3D information is needed for motion tracking. We used our previously developed low kV dose adaptive MV-kV imaging and motion compensation framework as a representative of 3D-modeled methods. It optimizes the balance between additional kV imaging dose and 3D tracking accuracy and solves the MLC blockage issue. With simulated Gaussian marker detection errors (zero mean and 0.39 mm standard deviation) and ~155/310/460 ms tracking system latencies, the mean percentages of time that the target moved >2 mm from the predicted 2D BEV position are 1.1%/4.0%/7.8% and 1.3%/5.8%/11.6% for 3D-modeled and 2D-only tracking, respectively. The corresponding average BEV RMS errors are 0.67/0.90/1.13 mm and 0.79/1.10/1.37 mm. Compared to the 2D method, the 3D method reduced the average RMS unresolved motion along the beam direction from ~3 mm to ~1 mm, resulting in only a <1% dosimetric advantage in the depth direction on average. Only for a small fraction of the patients, when the tracking latency is long, did the 3D-modeled method show significant improvement of BEV tracking accuracy, indicating a potential dosimetric advantage. However, if the tracking latency is short (~150 ms or less), those improvements are limited. Therefore, 2D BEV tracking has sufficient targeting accuracy for most clinical cases. The 3D technique is, however, still important in solving the MLC blockage problem during 2D BEV tracking.

  12. Tracking tissue section surfaces for automated 3D confocal cytometry

    NASA Astrophysics Data System (ADS)

    Agustin, Ramses; Price, Jeffrey H.

    2002-05-01

    Three-dimensional cytometry, whereby large volumes of tissue would be measured automatically, requires a computerized method for detecting the upper and lower tissue boundaries. In conventional confocal microscopy, the user interactively sets limits for axial scanning for each field of view. Biological specimens vary in section thickness, thereby driving the requirement for setting vertical scan limits. Limits could be set arbitrarily large to ensure the entire tissue is scanned, but automatic surface identification would eliminate storing undue numbers of empty optical sections and forms the basis for incorporating lateral microscope stage motion to collect unlimited numbers of stacks. This walk-away automation of 3D confocal scanning for biological imaging is the first step towards practical, computerized statistical sampling from arbitrarily large tissue volumes. Preliminary results for automatic tissue surface tracking were obtained for phase-contrast microscopy by measuring focus sharpness (previously used for high-speed autofocus by our group). Measurements were taken from 5×5 fields of view from hamster liver sections varying from five to twenty microns in thickness, then smoothed to lessen variations of in-focus information at each axial position. Because image sharpness (as the power of high spatial frequency components) drops across the axial boundaries of a tissue section, mathematical quantities including the full-width at half-maximum, extrema of the first derivative, and the second derivative were used to locate the proximal and distal surfaces of a tissue. Results from these tests were evaluated against manual (i.e., visual) determination of section boundaries.
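
    A minimal sketch of the boundary-detection idea described above: build an axial sharpness profile, smooth it, and read the proximal and distal surfaces off the full-width-at-half-maximum. The synthetic profile and smoothing kernel are illustrative only.

        # Locate tissue surfaces from an axial focus-sharpness profile (synthetic data).
        import numpy as np

        z = np.arange(0.0, 40.0, 0.5)                    # axial positions (microns)
        sharpness = np.exp(-((z - 18.0) / 6.0) ** 2)     # synthetic in-focus power
        sharpness += 0.02 * np.random.rand(z.size)       # measurement noise

        # light smoothing, as in the abstract, to lessen slice-to-slice variation
        smooth = np.convolve(sharpness, np.ones(5) / 5.0, mode="same")

        half_max = 0.5 * smooth.max()
        above = np.where(smooth >= half_max)[0]
        proximal_z, distal_z = z[above[0]], z[above[-1]]

        print("proximal surface ~ %.1f um, distal surface ~ %.1f um" % (proximal_z, distal_z))
        print("estimated section thickness ~ %.1f um" % (distal_z - proximal_z))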

  13. LayTracks3D: A new approach for meshing general solids using medial axis transform

    SciTech Connect

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  14. A Hidden Markov Model for 3D Catheter Tip Tracking with 2D X-ray Catheterization Sequence and 3D Rotational Angiography.

    PubMed

    Ambrosini, Pierre; Smal, Ihor; Ruijters, Daniel; Niessen, Wiro; Moelker, Adriaan; van Walsum, Theo

    2016-11-07

    In minimally invasive image-guided catheterization procedures, physicians require information about the catheter position with respect to the patient's vasculature. However, in fluoroscopic images, visualization of the vasculature requires a toxic contrast agent. Static vasculature roadmapping, which can reduce the usage of iodine contrast, is hampered by the breathing motion in abdominal catheterization. In this paper, we propose a method to track the catheter tip inside the patient's 3D vessel tree using intra-operative single-plane 2D X-ray image sequences and a peri-operative 3D rotational angiography (3DRA). The method is based on a hidden Markov model (HMM) where the states of the model are the possible positions of the catheter tip inside the 3D vessel tree. The transitions from state to state model the probabilities for the catheter tip to move from one position to another. The HMM is updated following the observation scores, based on the registration between the 2D catheter centerline extracted from the 2D X-ray image and the 2D projection of the 3D vessel tree centerline extracted from the 3DRA. The method is extensively evaluated on simulated and clinical datasets acquired during liver abdominal catheterization. The evaluations show a median 3D tip tracking error of 2.3 mm with optimal settings in simulated data. The registered vessels close to the tip have a median distance error of 4.7 mm with angiographic data and optimal settings. Such accuracy is sufficient to help the physicians with an up-to-date roadmap. The method tracks the catheter tip in real time and enables roadmapping during catheterization procedures.
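
    A toy sketch of the HMM machinery described above: states are discrete positions along a vessel centerline, the transition model favours small forward displacements of the tip, and a synthetic Gaussian score stands in for the 2D/3D registration-based observation. All numbers are invented.

        # Toy HMM forward update for catheter-tip tracking along a discretized centerline.
        import numpy as np

        N = 50                                   # candidate tip positions along the vessel
        positions = np.arange(N)

        # Transition model: the tip tends to stay put or advance a little.
        trans = np.zeros((N, N))
        for i in range(N):
            for j in range(max(0, i - 1), min(N, i + 4)):
                trans[i, j] = np.exp(-0.5 * (j - i - 1) ** 2)
        trans /= trans.sum(axis=1, keepdims=True)

        def observation_scores(true_pos, sigma=2.0):
            """Stand-in for the registration score of each candidate position."""
            return np.exp(-0.5 * ((positions - true_pos) / sigma) ** 2)

        belief = np.full(N, 1.0 / N)             # uniform prior over positions
        for true_pos in [5, 7, 8, 10, 13, 15]:   # simulated tip motion between frames
            belief = trans.T @ belief                    # predict
            belief *= observation_scores(true_pos)       # weight by the observation
            belief /= belief.sum()                       # normalize
            print("frame estimate:", int(np.argmax(belief)), " true:", true_pos)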

  15. A 3D feature point tracking method for ion radiation

    NASA Astrophysics Data System (ADS)

    Kouwenberg, Jasper J. M.; Ulrich, Leonie; Jäkel, Oliver; Greilich, Steffen

    2016-06-01

    A robust and computationally efficient algorithm for automated tracking of high densities of particles travelling in (semi-)straight lines is presented. It extends the implementation of Sbalzarini and Koumoutsakos (2005) and is intended for use in the analysis of single ion track detectors. By including information about existing tracks in the exclusion criteria and a recursive cost minimization function, the algorithm is robust to variations in the measured particle tracks. A trajectory relinking algorithm was included to resolve the crossing of tracks in high particle density images. Validation of the algorithm was performed using fluorescent nuclear track detectors (FNTD) irradiated with high and low (heavy) ion fluences and showed less than 1% faulty trajectories in the latter.

  16. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display provides users great enjoyment of stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be simultaneously displayed without degrading resolution or increasing display cost unacceptably. An alternative to multiple-view presentation is to measure the position of the observer with a viewer-tracking sensor. The viewer-tracking component is essential for fluently rendering and accurately projecting the stereo video. In order to render stereo content with respect to the user's viewpoint and to optically project the content onto the left and right eyes of the user accurately, a real-time viewer-tracking technique that allows the user to move around freely when watching the autostereoscopic display is developed in this study. It comprises face detection using multiple eigenspaces for various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. An Edge Orientation Histogram (EOH) with Real AdaBoost is also applied to improve the performance of the original AdaBoost algorithm, which uses Haar features from Intel's OpenCV library to detect human faces; accuracy is further enhanced when the image is rotated. The frame rate of the viewer-tracking process can reach up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still influenced by varying environmental conditions, the accuracy, robustness and efficiency of the viewer-tracking system are evaluated in this study.

  17. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  18. Real-time hardware for a new 3D display

    NASA Astrophysics Data System (ADS)

    Kaufmann, B.; Akil, M.

    2006-02-01

    We describe in this article a new multi-view auto-stereoscopic display system with a real-time architecture to generate images of n different points of view of a 3D scene. This architecture generates all the different points of view with only one generation process; the different pictures are not generated independently but all at the same time. The architecture generates a frame buffer that contains all the voxels with their three dimensions and regenerates the different pictures on demand from this frame buffer. The memory requirement is reduced because there is no redundant information in the buffer.

  19. Nonintrusive viewpoint tracking for 3D for perception in smart video conference

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Martinez-Ponte, Isabel; Meessen, Jerome; Delaigle, Jean-François

    2006-02-01

    Globalisation of people's interaction in the industrial world and the ecological cost of transport make video conferencing an interesting solution for collaborative work. However, the lack of immersive perception makes video conferencing less appealing. The TIFANIS tele-immersion system was conceived to let users interact as if they were physically together. In this paper, we focus on an important feature of the immersive system: the automatic tracking of the user's point of view in order to render correctly in his display the scene from the other site. Viewpoint information has to be computed in a very short time and the detection system should be non-intrusive, otherwise it would become cumbersome for the user, i.e. he would lose the feeling of "being there". The viewpoint detection system consists of several modules. First, an analysis module identifies and follows regions of interest (ROI) where faces are detected. We will show the cooperative approach between spatial detection and temporal tracking. Secondly, an eye detector finds the position of the eyes within faces. Then, the 3D positions of the eyes are deduced using stereoscopic images from a binocular camera. Finally, the 3D scene is rendered in real time according to the new point of view.

  20. Real-Time Camera Guidance for 3d Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Schindler, F.; Förstner, W.

    2012-07-01

    We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, as well as online flight planning of unmanned aerial vehicles.

  1. Ebstein's anomaly assessed by real-time 3-D echocardiography.

    PubMed

    Acar, Philippe; Abadir, Sylvia; Roux, Daniel; Taktak, Assaad; Dulac, Yves; Glock, Yves; Fournial, Gerard

    2006-08-01

    The outcome of patients with Ebstein's malformation depends mainly on the severity of the tricuspid valve malformation. Accurate description of the tricuspid anatomy by two-dimensional echocardiography remains difficult. We applied real-time three-dimensional echocardiography to 3 patients with Ebstein's anomaly. Preoperative and postoperative descriptions of the tricuspid valve were obtained from views taken inside the right ventricle. The surfaces of the leaflets as well as the commissures were obtained by three-dimensional echocardiography. Real-time three-dimensional echocardiography is a promising tool, providing new views that will help to evaluate the ability and efficiency of surgical valve repair in patients with Ebstein's malformation.

  2. Real-time 3-D ultrasound scan conversion using a multicore processor.

    PubMed

    Zhuang, Bo; Shamdasani, Vijay; Sikdar, Siddhartha; Managuli, Ravi; Kim, Yongmin

    2009-07-01

    Real-time 3-D ultrasound scan conversion (SC) in software has not been practical due to its high computation and I/O data handling requirements. In this paper, we describe software-based 3-D SC with high volume rates using a multicore processor, Cell. We have implemented both 3-D SC approaches: 1) the separable 3-D SC where two 2-D coordinate transformations in orthogonal planes are performed in sequence and 2) the direct 3-D SC where the coordinate transformation is directly handled in 3-D. One Cell processor can scan-convert a 192 x 192 x 192 16-bit volume at 87.8 volumes/s with the separable 3-D SC algorithm and 28 volumes/s with the direct 3-D SC algorithm.
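
    A rough sketch of the direct 3D scan-conversion idea: each Cartesian output voxel is mapped back to acoustic (range, azimuth, elevation) coordinates and sampled from the acquired volume. The acquisition geometry is invented and nearest-neighbour sampling replaces proper interpolation; this is not the Cell implementation described above.

        # Direct 3D scan conversion sketch (nearest-neighbour lookup).
        import numpy as np

        n_r, n_az, n_el = 128, 64, 64                  # acoustic volume dimensions
        acoustic = np.random.rand(n_r, n_az, n_el)     # stand-in for beamformed data
        r_max, az_max, el_max = 0.10, np.pi / 4, np.pi / 4   # 10 cm depth, +/- 45 degrees

        n = 64                                         # Cartesian output size
        xs = np.linspace(-0.07, 0.07, n)
        ys = np.linspace(-0.07, 0.07, n)
        zs = np.linspace(0.001, r_max, n)
        X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")

        R = np.sqrt(X**2 + Y**2 + Z**2)                # range
        AZ = np.arctan2(X, Z)                          # azimuth angle
        EL = np.arctan2(Y, Z)                          # elevation angle

        ir = np.clip(R / r_max * (n_r - 1), 0, n_r - 1).astype(int)
        ia = np.clip((AZ + az_max) / (2 * az_max) * (n_az - 1), 0, n_az - 1).astype(int)
        ie = np.clip((EL + el_max) / (2 * el_max) * (n_el - 1), 0, n_el - 1).astype(int)

        inside = (R <= r_max) & (np.abs(AZ) <= az_max) & (np.abs(EL) <= el_max)
        volume = np.where(inside, acoustic[ir, ia, ie], 0.0)
        print("scan-converted volume shape:", volume.shape)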

  3. Realization of real-time interactive 3D image holographic display [Invited].

    PubMed

    Chen, Jhen-Si; Chu, Daping

    2016-01-20

    Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements will be reviewed and discussed, while algorithms of rapid hologram calculation will be presented with the corresponding results. Our vision of interactive holographic 3D displays will be discussed.

  4. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Artuso, M.; Bachmair, F.; Bäni, L.; Bartosik, M.; Beacham, J.; Bellini, V.; Belyaev, V.; Bentele, B.; Berdermann, E.; Bergonzo, P.; Bes, A.; Brom, J.-M.; Bruzzi, M.; Cerv, M.; Chau, C.; Chiodini, G.; Chren, D.; Cindro, V.; Claus, G.; Collot, J.; Costa, S.; Cumalat, J.; Dabrowski, A.; D`Alessandro, R.; de Boer, W.; Dehning, B.; Dobos, D.; Dünser, M.; Eremin, V.; Eusebi, R.; Forcolin, G.; Forneris, J.; Frais-Kölbl, H.; Gan, K. K.; Gastal, M.; Goffe, M.; Goldstein, J.; Golubev, A.; Gonella, L.; Gorišek, A.; Graber, L.; Grigoriev, E.; Grosse-Knetter, J.; Gui, B.; Guthoff, M.; Haughton, I.; Hidas, D.; Hits, D.; Hoeferkamp, M.; Hofmann, T.; Hosslet, J.; Hostachy, J.-Y.; Hügging, F.; Jansen, H.; Janssen, J.; Kagan, H.; Kanxheri, K.; Kasieczka, G.; Kass, R.; Kassel, F.; Kis, M.; Kramberger, G.; Kuleshov, S.; Lacoste, A.; Lagomarsino, S.; Lo Giudice, A.; Maazouzi, C.; Mandic, I.; Mathieu, C.; McFadden, N.; McGoldrick, G.; Menichelli, M.; Mikuž, M.; Morozzi, A.; Moss, J.; Mountain, R.; Murphy, S.; Oh, A.; Olivero, P.; Parrini, G.; Passeri, D.; Pauluzzi, M.; Pernegger, H.; Perrino, R.; Picollo, F.; Pomorski, M.; Potenza, R.; Quadt, A.; Re, A.; Riley, G.; Roe, S.; Sapinski, M.; Scaringella, M.; Schnetzer, S.; Schreiner, T.; Sciortino, S.; Scorzoni, A.; Seidel, S.; Servoli, L.; Sfyrla, A.; Shimchuk, G.; Smith, D. S.; Sopko, B.; Sopko, V.; Spagnolo, S.; Spanier, S.; Stenson, K.; Stone, R.; Sutera, C.; Taylor, A.; Traeger, M.; Tromson, D.; Trischuk, W.; Tuve, C.; Uplegger, L.; Velthuis, J.; Venturi, N.; Vittone, E.; Wagner, S.; Wallny, R.; Wang, J. C.; Weilhammer, P.; Weingarten, J.; Weiss, C.; Wengler, T.; Wermes, N.; Yamouni, M.; Zavrtanik, M.

    2016-07-01

    In the present study, results towards the development of a 3D diamond sensor are presented. Conductive channels are produced inside the sensor bulk using a femtosecond laser. This electrode geometry allows full charge collection even for low quality diamond sensors. Results from testbeam show that charge is collected by these electrodes. In order to understand the channel growth parameters, with the goal of producing low resistivity channels, the conductive channels produced with a different laser setup are evaluated by Raman spectroscopy.

  5. Real-time 3D vibration measurements in microstructures

    NASA Astrophysics Data System (ADS)

    Kowarsch, Robert; Ochs, Wanja; Giesen, Moritz; Dräbenstedt, Alexander; Winter, Marcus; Rembe, Christian

    2012-04-01

    The real-time measurement of three-dimensional vibrations is currently a major interest of academic research and industrial device characterization. The most common and practical solution used so far consists of three single-point laser-Doppler vibrometers which measure vibrations of a scattering surface from three directions. The resulting three velocity vectors are transformed into a Cartesian coordinate system. This technique also works for microstructures but has some drawbacks: (1) the surface needs to scatter light, (2) the three laser beams can generate optical crosstalk if at least two laser frequencies match within the demodulation bandwidth, and (3) the laser beams have to be separated on the surface under test to minimize optical crosstalk such that reliable measurements are possible. We present a novel optical approach, based on the direction-dependent Doppler effect, which overcomes all the drawbacks of the current technology. We have realized a demonstrator with a measurement spot of < 3.5 μm diameter that does not suffer from optical crosstalk because only one laser beam impinges on the specimen surface while the light is collected from three different directions.
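
    The conventional three-vibrometer arrangement mentioned above reduces to a small linear solve: each vibrometer measures the projection of the surface velocity onto its beam direction, and the Cartesian vector is recovered by inverting the 3x3 direction matrix. A minimal sketch with invented beam directions:

        # Recover a 3D velocity vector from three single-axis laser-Doppler measurements.
        import numpy as np

        # Unit measurement directions of the three beams (rows), invented here.
        D = np.array([[1.0, 0.0, 0.0],
                      [0.0, 0.8, 0.6],
                      [0.6, 0.0, 0.8]])
        D = D / np.linalg.norm(D, axis=1, keepdims=True)

        v_true = np.array([0.3, -0.1, 0.7])    # surface velocity (mm/s), invented
        m = D @ v_true                         # each vibrometer measures D_i . v

        v_est = np.linalg.solve(D, m)          # invert the 3x3 direction matrix
        print("reconstructed velocity:", v_est)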

  6. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  7. High resolution 3D insider detection and tracking.

    SciTech Connect

    Nelson, Cynthia Lee

    2003-09-01

    Vulnerability analysis studies show that one of the worst threats against a facility is that of an active insider during an emergency evacuation. When a criticality or other emergency alarm occurs, employees immediately proceed along evacuation routes to designated areas. Procedures are then implemented to account for all material, classified parts, etc. The 3-Dimensional Video Motion Detection (3DVMD) technology could be used to detect and track possible insider activities during alarm situations, as just described, as well as during normal operating conditions. The 3DVMD technology uses multiple cameras to create 3-dimensional detection volumes or zones. Movement throughout detection zones is tracked and high-level information, such as the number of people and their direction of motion, is extracted. In the described alarm scenario, deviances of evacuation procedures taken by an individual could be immediately detected and relayed to a central alarm station. The insider could be tracked and any protected items removed from the area could be flagged. The 3DVMD technology could also be used to monitor such items as machines that are used to build classified parts. During an alarm, detections could be made if items were removed from the machine. Overall, the use of 3DVMD technology during emergency evacuations would help to prevent the loss of classified items and would speed recovery from emergency situations. Further security could also be added by analyzing tracked behavior (motion) as it corresponds to predicted behavior, e.g., behavior corresponding with the execution of required procedures. This information would be valuable for detecting a possible insider not only during emergency situations, but also during times of normal operation.

  8. An optical real-time 3D measurement for analysis of facial shape and movement

    NASA Astrophysics Data System (ADS)

    Zhang, Qican; Su, Xianyu; Chen, Wenjing; Cao, Yiping; Xiang, Liqun

    2003-12-01

    Optical non-contact 3-D shape measurement provides a novel and useful tool for the analysis of facial shape and movement in regular presurgical and postsurgical checks. In this article we present a system which allows a precise 3-D visualization of the patient's face before and after craniofacial surgery. We discuss the real-time 3-D image capture and processing, and the 3-D phase unwrapping method used to recover complex shape deformation during movement of the mouth. The results of real-time measurement of facial shape and movement will be helpful in achieving better outcomes in plastic surgery.

  9. Tracking 3-D body motion for docking and robot control

    NASA Technical Reports Server (NTRS)

    Donath, M.; Sorensen, B.; Yang, G. B.; Starr, R.

    1987-01-01

    An advanced method of tracking the three-dimensional motion of bodies has been developed. This system has the potential to dynamically characterize machine and other structural motion, even in the presence of structural flexibility, thus facilitating closed-loop structural motion control. The system's operation is based on the concept that the intersection of three planes defines a point. Three rotating planes of laser light, fixed and moving photovoltaic diode targets, and a pipelined architecture of analog and digital electronics are used to locate multiple targets whose number is limited only by available computer memory. Data collection rates are a function of the laser scan rotation speed and are currently selectable up to 480 Hz. The tested performance on a preliminary prototype designed for 0.1-inch accuracy (for tracking human motion) at a 480 Hz data rate includes a worst-case resolution of 0.8 mm (0.03 inches), a repeatability of plus or minus 0.635 mm (plus or minus 0.025 inches), and an absolute accuracy of plus or minus 2.0 mm (plus or minus 0.08 inches) within an eight cubic meter volume, with all results applicable at the 95 percent level of confidence along each coordinate direction. The full six degrees of freedom of a body can be computed by attaching three or more target detectors to the body of interest.
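
    The "three planes define a point" principle described above amounts to solving a 3x3 linear system built from the plane equations n_i . x = d_i; a minimal sketch with invented plane parameters (not the scanner's actual geometry):

        # Intersect three planes n_i . x = d_i to obtain the target position.
        import numpy as np

        N = np.array([[1.0, 0.2, 0.0],     # rows: plane normals n_i (invented)
                      [0.1, 1.0, 0.3],
                      [0.0, 0.2, 1.0]])
        d = np.array([0.5, 1.2, 2.0])      # plane offsets d_i (invented)

        x = np.linalg.solve(N, d)          # unique point if the normals are independent
        print("target position:", x)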

  10. On the dynamics of jellyfish locomotion via 3D particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Piper, Matthew; Kim, Jin-Tae; Chamorro, Leonardo P.

    2016-11-01

    The dynamics of jellyfish (Aurelia aurita) locomotion is experimentally studied via 3D particle tracking velocimetry. 3D locations of the bell tip are tracked over 1.5 cycles to describe the jellyfish path. Multiple positions of the jellyfish bell margin are initially tracked in 2D from four independent planes and individually projected into 3D based on the jellyfish path and the geometrical properties of the setup. A cubic spline interpolation and the exponentially weighted moving average are used to estimate derived quantities, including the velocity and acceleration of the jellyfish locomotion. We will discuss distinctive features of the jellyfish 3D motion at various swimming phases, and will provide insight into the 3D contraction and relaxation in terms of the locomotion, the steadiness of the bell margin eccentricity, and the local Reynolds number based on the instantaneous mean diameter of the bell.
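
    A small sketch of the processing chain mentioned above, cubic-spline interpolation followed by an exponentially weighted moving average, applied to synthetic bell-tip positions; the data and smoothing factor are invented.

        # Velocity/acceleration estimation from tracked 3D positions.
        import numpy as np
        from scipy.interpolate import CubicSpline

        t = np.linspace(0.0, 1.5, 60)                          # 1.5 cycles (s)
        pos = np.column_stack([0.01 * np.sin(2 * np.pi * t),   # synthetic bell-tip path (m)
                               0.002 * np.cos(4 * np.pi * t),
                               0.03 * t])

        spline = CubicSpline(t, pos, axis=0)
        vel = spline(t, 1)                 # first derivative: velocity
        acc = spline(t, 2)                 # second derivative: acceleration

        def ewma(x, alpha=0.3):
            """Exponentially weighted moving average along the time axis."""
            y = np.empty_like(x)
            y[0] = x[0]
            for i in range(1, len(x)):
                y[i] = alpha * x[i] + (1.0 - alpha) * y[i - 1]
            return y

        vel_s, acc_s = ewma(vel), ewma(acc)
        print("peak smoothed speed (m/s): %.3f" % np.linalg.norm(vel_s, axis=1).max())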

  11. Ultra-Wideband Time-Difference-of-Arrival High Resolution 3D Proximity Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dekome, Kent; Dusl, John

    2010-01-01

    This paper describes a research and development effort for a prototype ultra-wideband (UWB) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being studied for use in tracking of lunar/Mars rovers and astronauts during early exploration missions when satellite navigation systems are not available. UWB impulse radio (UWB-IR) technology is exploited in the design and implementation of the prototype location and tracking system. A three-dimensional (3D) proximity tracking prototype design using commercially available UWB products is proposed to implement the Time-Difference-Of-Arrival (TDOA) tracking methodology in this research effort. The TDOA tracking algorithm is utilized for location estimation in the prototype system, not only to exploit the precise time resolution possible with UWB signals, but also to eliminate the need for synchronization between the transmitter and the receiver. Simulations show that the TDOA algorithm can achieve fine tracking resolution with low-noise TDOA estimates for close-in tracking. Field tests demonstrated that this prototype UWB TDOA High Resolution 3D Proximity Tracking System is feasible for providing positioning-awareness information in a 3D space to a robotic control system. This 3D tracking system was developed for a robotic control system in a facility called "Moonyard" at Honeywell Defense & System in Arizona under a Space Act Agreement.

  12. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

    Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and also in conjunction with other modalities like CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found to be useful in real clinical settings, it is difficult to augment their functionality and integrate them into versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image-guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (i.e. dual-view visualization, registration, real-time tracking, segmentation, etc.) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.

  13. Tensor3D: A computer graphics program to simulate 3D real-time deformation and visualization of geometric bodies

    NASA Astrophysics Data System (ADS)

    Pallozzi Lavorante, Luca; Dirk Ebert, Hans

    2008-07-01

    Tensor3D is a geometric modeling program with the capacity to simulate and visualize in real time the deformation, specified through a tensor matrix and applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shear. Besides the geometric objects that are immediately available in the program window, the program can read other models from disk, thus being able to import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well, to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later and deformed again, in order to study different steps of progressive strain, or to make this data available to other programs. The shape of stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, allied to the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for the study of geometric deformations directly in three dimensions in teaching as well as research activities.
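
    The core operation described above, applying a deformation tensor to the vertices of a triangulated model and deriving the strain ellipsoid, can be sketched in a few lines; the simple-shear tensor and random vertices below are invented examples, not output from Tensor3D.

        # Apply a deformation-gradient tensor F to vertices and compute the
        # principal axes of the resulting finite-strain ellipsoid.
        import numpy as np

        F = np.array([[1.0, 0.5, 0.0],            # simple shear in the x-y plane
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])

        vertices = np.random.rand(100, 3) - 0.5   # stand-in for a triangulated body
        deformed = vertices @ F.T                 # x' = F x for every vertex

        # Strain ellipsoid: eigen-decomposition of the left Cauchy-Green tensor B = F F^T
        B = F @ F.T
        eigvals, eigvecs = np.linalg.eigh(B)
        semi_axes = np.sqrt(eigvals)              # lengths of the principal strain axes

        print("principal strain axis lengths:", np.round(semi_axes, 3))
        print("principal axis directions (columns):")
        print(np.round(eigvecs, 3))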

  14. Defense Additive Manufacturing: DOD Needs to Systematically Track Department-wide 3D Printing Efforts

    DTIC Science & Technology

    2015-10-01

    (Figure caption: Clip Additively Manufactured.) The Navy installed a 3D printer aboard the USS Essex to demonstrate the ability to additively develop and produce... desired result and vision to have the capability on the fleet. These officials stated that the Navy plans to install 3D printers on two additional...

  15. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    DTIC Science & Technology

    2015-06-01

    ...printed using the Fortus 400mc 3D rapid-prototyping printer of the NPS Space Systems Academic Group, while the internal structure is made of aluminum... (Master's thesis, Naval Postgraduate School, Monterey, California; approved for public release, distribution is unlimited.)

  16. FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Castro-Pareja, Carlos R.; Dandekar, Omkar S.; Shekhar, Raj

    2005-02-01

    Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography, obstetrics and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.
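
    The abstract does not spell out the filter; a standard Perona-Malik-style 3D anisotropic diffusion iteration, which is the kind of computation such an architecture accelerates, looks roughly as follows. Parameters are illustrative, boundaries are handled periodically for brevity, and this software sketch is not the FPGA design.

        # Perona-Malik-style 3D anisotropic diffusion in numpy (illustrative).
        import numpy as np

        def anisotropic_diffusion_3d(vol, n_iter=5, kappa=30.0, lam=0.1):
            vol = vol.astype(np.float32).copy()
            for _ in range(n_iter):
                update = np.zeros_like(vol)
                for axis in range(3):
                    fwd = np.roll(vol, -1, axis=axis) - vol   # forward difference
                    bwd = np.roll(vol, 1, axis=axis) - vol    # backward difference
                    update += np.exp(-(fwd / kappa) ** 2) * fwd   # edge-stopping conduction
                    update += np.exp(-(bwd / kappa) ** 2) * bwd
                vol += lam * update
            return vol

        speckled = 100.0 * np.random.rand(64, 64, 64)         # stand-in ultrasound volume
        smoothed = anisotropic_diffusion_3d(speckled)
        print("std before: %.1f  after: %.1f" % (speckled.std(), smoothed.std()))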

  17. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated contemporary views of the same object can simultaneously be seen on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra equipment such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactic animations and movies have been produced as well.

  18. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate.

    PubMed

    Trache, Tudor; Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-12-01

    Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of the 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area strain and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2D speckle tracking of triplane datasets showed similar results to that of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values.

  19. A Framework for 3D Model-Based Visual Tracking Using a GPU-Accelerated Particle Filter.

    PubMed

    Brown, J A; Capson, D W

    2012-01-01

    A novel framework for acceleration of particle filtering approaches to 3D model-based, markerless visual tracking in monocular video is described. Specifically, we present a methodology for partitioning and mapping the computationally expensive weight-update stage of a particle filter to a graphics processing unit (GPU) to achieve particle- and pixel-level parallelism. Nvidia CUDA and Direct3D are employed to harness the massively parallel computational power of modern GPUs for simulation (3D model rendering) and evaluation (segmentation, feature extraction, and weight calculation) of hundreds of particles at high speeds. The proposed framework addresses the computational intensity that is intrinsic to all particle filter approaches, including those that have been modified to minimize the number of particles required for a particular task. Performance and tracking quality results for rigid object and articulated hand tracking experiments demonstrate markerless, model-based visual tracking on consumer-grade graphics hardware with pixel-level accuracy up to 95 percent at 60+ frames per second. The framework accelerates particle evaluation up to 49 times over a comparable CPU-only implementation, providing an increased particle count while maintaining real-time frame rates.
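
    A generic sketch of the particle-filter loop whose weight-update stage the paper maps onto the GPU: predict each particle, weight it against the observation, normalize, and resample. Here a toy 1D state and a synthetic Gaussian likelihood stand in for the model rendering and image comparison performed on the GPU.

        # Generic particle-filter predict/weight/resample loop (toy 1D state).
        import numpy as np

        n_particles = 500
        particles = np.random.uniform(-5.0, 5.0, n_particles)   # initial hypotheses
        weights = np.full(n_particles, 1.0 / n_particles)

        true_state = 0.0
        for frame in range(20):
            true_state += 0.3 + 0.05 * np.random.randn()           # object motion
            observation = true_state + 0.2 * np.random.randn()     # noisy measurement

            particles += 0.3 + 0.1 * np.random.randn(n_particles)  # predict (motion model)
            likelihood = np.exp(-0.5 * ((particles - observation) / 0.2) ** 2)
            weights = likelihood / likelihood.sum()                 # weight update

            idx = np.random.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]                              # resample
            weights.fill(1.0 / n_particles)

        print("estimate: %.2f   true: %.2f" % (particles.mean(), true_state))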

  20. Holovideo: Real-time 3D range video encoding and decoding on GPU

    NASA Astrophysics Data System (ADS)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1, Holovideo to OBJ file format, the 3D geometry quality drops at a negligible level. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL Shaders (GLSL), the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) with a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with this same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.

  1. Head Tracking for 3D Audio Using a GPS-Aided MEMS IMU

    DTIC Science & Technology

    2005-03-01

    Keywords: Aircraft, Directional Signals, GPS/INS Fusion, GPS/INS Integration, Head Tracking Systems, IMU (Inertial Measurement Unit), Inertial Sensors, MEMS. Thesis by Jacque M. Joffrion, Captain, USAF, AFIT/GE/ENG/05-09, Department of the Air Force.

  2. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed using fluoroscopic image guidance, is gaining increasing importance. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 mm +/- 0.4 mm and an average 3-D tracking error of 0.8 mm +/- 0.5 mm. These results demonstrate that model-based motion compensation based on 2-D/3-D registration is both feasible and accurate.
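
    The 3-D tracking step above relies on bi-plane views of the catheter. As a generic illustration of the geometry involved (not the authors' 2-D/3-D registration), the sketch below shows standard linear (DLT) triangulation of a single 3-D point from two calibrated views, assuming the 3x4 projection matrices of both C-arm planes are known.

        # Linear (DLT) triangulation of one 3-D point from two calibrated views,
        # e.g. a catheter electrode seen in both planes of a bi-plane C-arm.
        # P1 and P2 are assumed known 3x4 projection matrices; this is a generic
        # textbook step, not the authors' full 2-D/3-D registration pipeline.
        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """x1, x2: (u, v) pixel coordinates of the same point in the two views."""
            A = np.array([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]   # homogeneous -> Euclidean 3-D point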

  3. LayTracks3D: A new approach for meshing general solids using medial axis transform

    DOE PAGES

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  4. 3D-printed concentrators for tracking-integrated CPV modules

    NASA Astrophysics Data System (ADS)

    Apostoleris, Harry; Leland, Julian; Chiesa, Matteo; Stefancich, Marco

    2016-09-01

    We demonstrate 3D-printed nonimaging concentrators and propose a tracking integration scheme to reduce the external tracking requirements of CPV modules. In the proposed system, internal sun tracking is achieved by rotation of the mini-concentrators inside the module by small motors. We discuss the design principles employed in the development of the system, experimentally evaluate the performance of the concentrator prototypes, and propose practical modifications that may be made to improve on-site performance of the devices.

  5. High-resolution real-time 3D shape measurement on a portable device

    NASA Astrophysics Data System (ADS)

    Karpinsky, Nikolaus; Hoke, Morgan; Chen, Vincent; Zhang, Song

    2013-09-01

    Recent advances in technology have enabled the acquisition of high-resolution 3D models in real time through the use of structured light scanning techniques. While these advances are impressive, they require large amounts of computing power, thus being limited to using large desktop computers with high-end CPUs and sometimes GPUs. This is undesirable in making high-resolution real-time 3D scanners ubiquitous in our mobile lives. To address this issue, this work describes and demonstrates a real-time 3D scanning system that is realized on a mobile device, namely a laptop computer, which can achieve 3D acquisition speeds of 20 fps at a resolution of 640x480 per frame. By utilizing a graphics processing unit (GPU) as a multipurpose parallel processor, along with a parallel phase shifting technique, we are able to realize the entire 3D processing pipeline in parallel. To mitigate high-speed camera transfer problems, which typically require a dedicated frame grabber, we make use of USB 3.0 along with direct memory access (DMA) to transfer camera images to the GPU. To demonstrate the effectiveness of the technique, we experiment with the scanner on both static geometry of a statue and dynamic geometry of a deforming material sample in front of the system.

  6. Confocal fluorometer for diffusion tracking in 3D engineered tissue constructs

    NASA Astrophysics Data System (ADS)

    Daly, D.; Zilioli, A.; Tan, N.; Buttenschoen, K.; Chikkanna, B.; Reynolds, J.; Marsden, B.; Hughes, C.

    2016-03-01

    We present results of the development of a non-contacting instrument, called fScan, based on scanning confocal fluorometry for assessing the diffusion of materials through a tissue matrix. There are many areas in healthcare diagnostics and screening where it is now widely accepted that the need for new quantitative monitoring technologies is a major pinch point in patient diagnostics and in vitro testing. With the increasing need to interpret 3D responses, this commonly involves tracking the diffusion of compounds, pharma-active species and cells through a 3D matrix of tissue. Methods are available, but to support the advances that are currently only promised, this monitoring needs to be real-time, non-invasive, and economical. At the moment commercial meters tend to be invasive and usually require a sample of the medium to be removed and processed prior to testing. This methodology clearly has a number of significant disadvantages. fScan combines a fiber-based optical arrangement with a compact, free-space optical front end that has been integrated so that the sample's diffusion can be measured without interference. This architecture is particularly important due to the "wet" nature of the samples. fScan is designed to measure constructs located within standard well plates and a 2-D motion stage locates the required sample with respect to the measurement system. Results are presented that show how the meter has been used to evaluate movements of samples through collagen constructs in situ without disturbing their kinetic characteristics. These kinetics were little understood prior to these measurements.

  7. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore the 3D compression space from a different angle, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
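
    As a rough illustration of the second scheme's depth path, the sketch below quantises a depth map, run-length encodes it and entropy-codes the result; zlib is used here as a stand-in for the Huffman coder, and the frame size and quantisation step are assumptions rather than the values used in TEEVE.

        # Sketch of per-frame depth compression in the spirit of the schemes
        # described above: quantise the depth map, run-length encode it, then
        # entropy-code the result (zlib stands in for the Huffman coder).
        # Frame size, quantisation step and data layout are assumptions.
        import numpy as np
        import zlib

        def rle_encode(flat):
            """Run-length encode a 1-D integer array into (value, run) pairs."""
            change = np.flatnonzero(np.diff(flat)) + 1
            starts = np.concatenate(([0], change))
            runs = np.diff(np.concatenate((starts, [flat.size])))
            return np.stack([flat[starts], runs], axis=1)

        def compress_depth(depth_m, step=0.005):
            """depth_m: float depth in metres; step: 5 mm quantisation."""
            q = np.round(depth_m / step).astype(np.uint16)
            pairs = rle_encode(q.ravel()).astype(np.uint32)
            return zlib.compress(pairs.tobytes(), 6)

        yy, xx = np.mgrid[0:240, 0:320]
        depth = (1.0 + 0.002 * xx + 0.001 * yy).astype(np.float32)  # synthetic smooth scene
        blob = compress_depth(depth)
        print(len(blob), "compressed bytes vs", depth.size * 4, "bytes of raw float32 depth")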

  8. Real-Time 3D Contrast-Enhanced Transcranial Ultrasound and Aberration Correction

    PubMed Central

    Ivancevich, Nikolas M.; Pinton, Gianmarco F.; Nicoletto, Heather A.; Bennett, Ellen; Laskowitz, Daniel T.; Smith, Stephen W.

    2008-01-01

    Contrast-enhanced (CE) transcranial ultrasound (US) and reconstructed 3D transcranial ultrasound have shown advantages over traditional methods in a variety of cerebrovascular diseases. We present the results from a novel ultrasound technique, namely real-time 3D contrast-enhanced transcranial ultrasound. Using real-time 3D (RT3D) ultrasound and micro-bubble contrast agent, we scanned 17 healthy volunteers via a single temporal window and 9 via the sub-occipital window and report our detection rates for the major cerebral vessels. In 71% of subjects, both of our observers identified the ipsilateral circle of Willis from the temporal window, and in 59% we imaged the entire circle of Willis. From the sub-occipital window, both observers detected the entire vertebrobasilar circulation in 22% of subjects, and in 44% the basilar artery. After performing phase aberration correction on one subject, we were able to increase the diagnostic value of the scan, detecting a vessel not present in the uncorrected scan. These preliminary results suggest that RT3D CE transcranial US and RT3D CE transcranial US with phase aberration correction have the potential to greatly impact the field of neurosonology. PMID:18395321

  9. Towards real-time MRI-guided 3D localization of deforming targets for non-invasive cardiac radiosurgery.

    PubMed

    Ipsen, S; Blanck, O; Lowther, N J; Liney, G P; Rai, R; Bode, F; Dunst, J; Schweikard, A; Keall, P J

    2016-11-21

    Radiosurgery to the pulmonary vein antrum in the left atrium (LA) has recently been proposed for non-invasive treatment of atrial fibrillation (AF). Precise real-time target localization during treatment is necessary due to complex respiratory and cardiac motion and high radiation doses. To determine the 3D position of the LA for motion compensation during radiosurgery, a tracking method based on orthogonal real-time MRI planes was developed for AF treatments with an MRI-guided radiotherapy system. Four healthy volunteers underwent cardiac MRI of the LA. Contractile motion was quantified on 3D LA models derived from 4D scans with 10 phases acquired in end-exhalation. Three localization strategies were developed and tested retrospectively on 2D real-time scans (sagittal, temporal resolution 100 ms, free breathing). The best-performing method was then used to measure 3D target positions in 2D-2D orthogonal planes (sagittal-coronal, temporal resolution 200-252 ms, free breathing) in 20 configurations of a digital phantom and in the volunteer data. The 3D target localization accuracy was quantified in the phantom and qualitatively assessed in the real data. Mean cardiac contraction was  ⩽  3.9 mm between maximum dilation and contraction but anisotropic. A template matching approach with two distinct template phases and ECG-based selection yielded the highest 2D accuracy of 1.2 mm. 3D target localization showed a mean error of 3.2 mm in the customized digital phantoms. Our algorithms were successfully applied to the 2D-2D volunteer data in which we measured a mean 3D LA motion extent of 16.5 mm (SI), 5.8 mm (AP) and 3.1 mm (LR). Real-time target localization on orthogonal MRI planes was successfully implemented for highly deformable targets treated in cardiac radiosurgery. The developed method measures target shifts caused by respiration and cardiac contraction. If the detected motion can be compensated accordingly, an MRI-guided radiotherapy

  10. Towards real-time MRI-guided 3D localization of deforming targets for non-invasive cardiac radiosurgery

    NASA Astrophysics Data System (ADS)

    Ipsen, S.; Blanck, O.; Lowther, N. J.; Liney, G. P.; Rai, R.; Bode, F.; Dunst, J.; Schweikard, A.; Keall, P. J.

    2016-11-01

    Radiosurgery to the pulmonary vein antrum in the left atrium (LA) has recently been proposed for non-invasive treatment of atrial fibrillation (AF). Precise real-time target localization during treatment is necessary due to complex respiratory and cardiac motion and high radiation doses. To determine the 3D position of the LA for motion compensation during radiosurgery, a tracking method based on orthogonal real-time MRI planes was developed for AF treatments with an MRI-guided radiotherapy system. Four healthy volunteers underwent cardiac MRI of the LA. Contractile motion was quantified on 3D LA models derived from 4D scans with 10 phases acquired in end-exhalation. Three localization strategies were developed and tested retrospectively on 2D real-time scans (sagittal, temporal resolution 100 ms, free breathing). The best-performing method was then used to measure 3D target positions in 2D-2D orthogonal planes (sagittal-coronal, temporal resolution 200-252 ms, free breathing) in 20 configurations of a digital phantom and in the volunteer data. The 3D target localization accuracy was quantified in the phantom and qualitatively assessed in the real data. Mean cardiac contraction was  ⩽  3.9 mm between maximum dilation and contraction but anisotropic. A template matching approach with two distinct template phases and ECG-based selection yielded the highest 2D accuracy of 1.2 mm. 3D target localization showed a mean error of 3.2 mm in the customized digital phantoms. Our algorithms were successfully applied to the 2D-2D volunteer data in which we measured a mean 3D LA motion extent of 16.5 mm (SI), 5.8 mm (AP) and 3.1 mm (LR). Real-time target localization on orthogonal MRI planes was successfully implemented for highly deformable targets treated in cardiac radiosurgery. The developed method measures target shifts caused by respiration and cardiac contraction. If the detected motion can be compensated accordingly, an MRI-guided radiotherapy
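
    The localization strategy above is built on template matching in 2D real-time slices. The generic sketch below shows plain normalized cross-correlation template matching, the underlying building block; the ECG-gated selection between two templates is omitted, and the exhaustive search and array shapes are assumptions. In practice the search would be restricted to a small window around the previous position to stay within the ~100-250 ms frame budget.

        # Generic normalised cross-correlation template matching, the building
        # block behind template-based target localisation on 2-D real-time MRI
        # slices. ECG-gated template selection is omitted; in practice the
        # search is limited to a window around the previous target position.
        import numpy as np

        def ncc(patch, template):
            a = patch - patch.mean()
            b = template - template.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
            return float((a * b).sum() / denom)

        def match(image, template):
            """Return ((row, col), score) of the best-matching template position."""
            th, tw = template.shape
            best, best_pos = -np.inf, (0, 0)
            for r in range(image.shape[0] - th + 1):
                for c in range(image.shape[1] - tw + 1):
                    score = ncc(image[r:r + th, c:c + tw], template)
                    if score > best:
                        best, best_pos = score, (r, c)
            return best_pos, best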

  11. 3D tracking of mating events in wild swarms of the malaria mosquito Anopheles gambiae.

    PubMed

    Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Yaro, Alpha S; Dao, Adama; Traoré, Sekou F; Ribeiro, José M; Lehmann, Tovi; Paley, Derek A

    2011-01-01

    We describe an automated tracking system that allows us to reconstruct the 3D kinematics of individual mosquitoes in swarms of Anopheles gambiae. The inputs to the tracking system are video streams recorded from a stereo camera system. The tracker uses a two-pass procedure to automatically localize and track mosquitoes within the swarm. A human-in-the-loop step verifies the estimates and connects broken tracks. The tracker performance is illustrated using footage of mating events filmed in Mali in August 2010.

  12. High-throughput 3D tracking of bacteria on a standard phase contrast microscope

    NASA Astrophysics Data System (ADS)

    Taute, K. M.; Gude, S.; Tans, S. J.; Shimizu, T. S.

    2015-11-01

    Bacteria employ diverse motility patterns in traversing complex three-dimensional (3D) natural habitats. 2D microscopy misses crucial features of 3D behaviour, but the applicability of existing 3D tracking techniques is constrained by their performance or ease of use. Here we present a simple, broadly applicable, high-throughput 3D bacterial tracking method for use in standard phase contrast microscopy. Bacteria are localized at micron-scale resolution over a range of 350 × 300 × 200 μm by maximizing image cross-correlations between their observed diffraction patterns and a reference library. We demonstrate the applicability of our technique to a range of bacterial species and exploit its high throughput to expose hidden contributions of bacterial individuality to population-level variability in motile behaviour. The simplicity of this powerful new tool for bacterial motility research renders 3D tracking accessible to a wider community and paves the way for investigations of bacterial motility in complex 3D environments.
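
    The core lookup described above, matching an observed diffraction pattern against a pre-recorded reference library of patterns at known depths, can be sketched as follows. Sub-micron interpolation between library entries and the lateral (x, y) localisation are omitted, and the function names are illustrative assumptions.

        # Sketch of depth lookup by maximising the image cross-correlation of an
        # observed (defocused) cell image against a reference library of
        # diffraction patterns recorded at known z positions. Sub-micron
        # interpolation and lateral localisation are omitted.
        import numpy as np

        def normalised(img):
            z = img - img.mean()
            return z / (np.linalg.norm(z) + 1e-12)

        def lookup_z(observation, library, z_positions):
            """library: (N, H, W) reference patterns recorded at z_positions (N,)."""
            obs = normalised(observation).ravel()
            scores = np.array([normalised(ref).ravel() @ obs for ref in library])
            return z_positions[int(np.argmax(scores))], scores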

  13. Optimal Local Searching for Fast and Robust Textureless 3D Object Tracking in Highly Cluttered Backgrounds.

    PubMed

    Seo, Byung-Kuk; Park, Jong-Il; Hinterstoisser, Stefan; Ilic, Slobodan

    2013-06-13

    Edge-based tracking is a fast and plausible approach for textureless 3D object tracking, but its robustness is still very challenging in highly cluttered backgrounds due to numerous local minima. To overcome this problem, we propose a novel method for fast and robust textureless 3D object tracking in highly cluttered backgrounds. The proposed method is based on optimal local searching of 3D-2D correspondences between a known 3D object model and 2D scene edges in an image with heavy background clutter. In our searching scheme, searching regions are partitioned into three levels (interior, contour, and exterior) with respect to the previous object region, and confident searching directions are determined by evaluating candidates of correspondences on their region levels; thus, the correspondences are searched among likely candidates in only the confident directions instead of searching through all candidates. To ensure the confident searching direction, we also adopt the region appearance, which is efficiently modeled on a newly defined local space (called a searching bundle). Experimental results and performance evaluations demonstrate that our method fully supports fast and robust textureless 3D object tracking even in highly cluttered backgrounds.

  14. Optimal local searching for fast and robust textureless 3D object tracking in highly cluttered backgrounds.

    PubMed

    Seo, Byung-Kuk; Park, Hanhoon; Park, Jong-Il; Hinterstoisser, Stefan; Ilic, Slobodan

    2014-01-01

    Edge-based tracking is a fast and plausible approach for textureless 3D object tracking, but its robustness is still very challenging in highly cluttered backgrounds due to numerous local minima. To overcome this problem, we propose a novel method for fast and robust textureless 3D object tracking in highly cluttered backgrounds. The proposed method is based on optimal local searching of 3D-2D correspondences between a known 3D object model and 2D scene edges in an image with heavy background clutter. In our searching scheme, searching regions are partitioned into three levels (interior, contour, and exterior) with respect to the previous object region, and confident searching directions are determined by evaluating candidates of correspondences on their region levels; thus, the correspondences are searched among likely candidates in only the confident directions instead of searching through all candidates. To ensure the confident searching direction, we also adopt the region appearance, which is efficiently modeled on a newly defined local space (called a searching bundle). Experimental results and performance evaluations demonstrate that our method fully supports fast and robust textureless 3D object tracking even in highly cluttered backgrounds.

  15. Improving segmentation of 3D touching cell nuclei using flow tracking on surface meshes.

    PubMed

    Li, Gang; Guo, Lei

    2012-01-01

    Automatic segmentation of touching cell nuclei in 3D microscopy images is of great importance in bioimage informatics and computational biology. This paper presents a novel method for improving 3D touching cell nuclei segmentation. Given binary touching nuclei by the method in Li et al. (2007), our method herein consists of several steps: surface mesh reconstruction and curvature information estimation; direction field diffusion on surface meshes; flow tracking on surface meshes; and projection of surface mesh segmentation to volumetric images. The method is validated on both synthesised and real 3D touching cell nuclei images, demonstrating its validity and effectiveness.

  16. Real-time 3-d intracranial ultrasound with an endoscopic matrix array transducer.

    PubMed

    Light, Edward D; Mukundan, Srinivasan; Wolf, Patrick D; Smith, Stephen W

    2007-08-01

    A transducer originally designed for transesophageal echocardiography (TEE) was adapted for real-time volumetric endoscopic imaging of the brain. The transducer consists of a 36 x 36 array with an interelement spacing of 0.18 mm. There are 504 transmitting and 252 receive channels placed in a regular pattern in the array. The operating frequency is 4.5 MHz with a -6 dB bandwidth of 30%. The transducer is fabricated on a 10-layer flexible circuit from Microconnex (Snoqualmie, WA, USA). The purpose of this study is to evaluate the clinical feasibility of real-time 3-D intracranial ultrasound with this device. The Volumetrics Medical Imaging (Durham, NC, USA) 3-D scanner was used to obtain images in a canine model. A transcalvarial acoustic window was created under general anesthesia in the animal laboratory by placing a 10-mm burr hole in the high parietal calvarium of a 50-kg canine subject. The burr-hole was placed in a left parasagittal location to avoid the sagittal sinus, and the transducer was placed against the intact dura mater for ultrasound imaging. Images of the lateral ventricles were produced, including real-time 3-D guidance of a needle puncture of one ventricle. In a second canine subject, contrast-enhanced 3-D Doppler color flow images were made of the cerebral vessels including the complete Circle of Willis. Clinical applications may include real-time 3-D guidance of cerebrospinal fluid extraction from the lateral ventricles and bedside evaluation of critically ill patients where computed tomography and magnetic resonance imaging techniques are unavailable.

  17. Detailed Evaluation of Five 3D Speckle Tracking Algorithms Using Synthetic Echocardiographic Recordings.

    PubMed

    Alessandrini, Martino; Heyde, Brecht; Queiros, Sandro; Cygan, Szymon; Zontak, Maria; Somphone, Oudom; Bernard, Olivier; Sermesant, Maxime; Delingette, Herve; Barbosa, Daniel; De Craene, Mathieu; ODonnell, Matthew; Dhooge, Jan

    2016-08-01

    A plethora of techniques for cardiac deformation imaging with 3D ultrasound, typically referred to as 3D speckle tracking techniques, are available from academia and industry. Although the benefits of single methods over alternative ones have been reported in separate publications, the intrinsic differences in the data and definitions used makes it hard to compare the relative performance of different solutions. To address this issue, we have recently proposed a framework to simulate realistic 3D echocardiographic recordings and used it to generate a common set of ground-truth data for 3D speckle tracking algorithms, which was made available online. The aim of this study was therefore to use the newly developed database to contrast non-commercial speckle tracking solutions from research groups with leading expertise in the field. The five techniques involved cover the most representative families of existing approaches, namely block-matching, radio-frequency tracking, optical flow and elastic image registration. The techniques were contrasted in terms of tracking and strain accuracy. The feasibility of the obtained strain measurements to diagnose pathology was also tested for ischemia and dyssynchrony.

  18. Hierarchical storage and visualization of real-time 3D data

    NASA Astrophysics Data System (ADS)

    Parry, Mitchell; Hannigan, Brendan; Ribarsky, William; Shaw, Christopher D.; Faust, Nickolas L.

    2001-08-01

    In this paper 'real-time 3D data' refers to volumetric data that are acquired and used as they are produced. Large scale, real-time data are difficult to store and analyze, either visually or by some other means, within the time frames required. Yet this is often quite important to do when decision-makers must receive and quickly act on new information. An example is weather forecasting, where forecasters must act on information received on severe storm development and movement. To meet the real-time requirements crude heuristics are often used to gather information from the original data. This is in spite of the fact that better and better real-time data are becoming available, the full use of which could significantly improve decisions. The work reported here addresses these issues by providing comprehensive data acquisition, analysis, and storage components with time budgets for the data management of each component. These components are put into a global geospatial hierarchical structure. The volumetric data are placed into this global structure, and it is shown how levels of detail can be derived and used within this structure. A volumetric visualization procedure is developed that conforms to the hierarchical structure and uses the levels of detail. These general methods are focused on the specific case of the VGIS global hierarchical structure and rendering system. The real-time data considered are from collections of time-dependent 3D Doppler radars, although the methods described here apply more generally to time-dependent volumetric data. This paper reports on the design and construction of the above hierarchical structures and volumetric visualizations, and presents results for the specific application of 3D Doppler radar displayed over photo-textured terrain height fields.

  19. Note: Time-gated 3D single quantum dot tracking with simultaneous spinning disk imaging

    SciTech Connect

    DeVore, M. S.; Stich, D. G.; Keller, A. M.; Phipps, M. E.; Hollingsworth, J. A.; Goodwin, P. M.; Werner, J. H.; Cleyrat, C.; Lidke, D. S.; Wilson, B. S.

    2015-12-15

    We describe recent upgrades to a 3D tracking microscope to include simultaneous Nipkow spinning disk imaging and time-gated single-particle tracking (SPT). Simultaneous 3D molecular tracking and spinning disk imaging enable the visualization of cellular structures and proteins around a given fluorescently labeled target molecule. The addition of photon time-gating to the SPT hardware improves signal to noise by discriminating against Raman scattering and short-lived fluorescence. In contrast to camera-based SPT, single-photon arrival times are recorded, enabling time-resolved spectroscopy (e.g., measurement of fluorescence lifetimes and photon correlations) to be performed during single molecule/particle tracking experiments.

  20. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    NASA Astrophysics Data System (ADS)

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High-speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, no 4DMI technique can track organ motion in real time, and no camera system has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement by combined use of the real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated for validation of its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of any mobile tumors. The technique can be extended for surface-guided stereotactic needle insertion in biopsy of small lung nodules.

  1. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, computation efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced a 3-mm distance error and a 2.5-degree orientation error on average. The sources of these errors come from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.

  2. A Gaussian process guided particle filter for tracking 3D human pose in video.

    PubMed

    Sedai, Suman; Bennamoun, Mohammed; Huynh, Du Q

    2013-11-01

    In this paper, we propose a hybrid method that combines Gaussian process learning, a particle filter, and annealing to track the 3D pose of a human subject in video sequences. Our approach, which we refer to as annealed Gaussian process guided particle filter, comprises two steps. In the training step, we use a supervised learning method to train a Gaussian process regressor that takes the silhouette descriptor as an input and produces multiple output poses modeled by a mixture of Gaussian distributions. In the tracking step, the output pose distributions from the Gaussian process regression are combined with the annealed particle filter to track the 3D pose in each frame of the video sequence. Our experiments show that the proposed method does not require initialization and does not lose tracking of the pose. We compare our approach with a standard annealed particle filter using the HumanEva-I dataset and with other state of the art approaches using the HumanEva-II dataset. The evaluation results show that our approach can successfully track the 3D human pose over long video sequences and give more accurate pose tracking results than the annealed particle filter.

  3. Dynamic tracking of a deformable tissue based on 3D-2D MR-US image registration

    NASA Astrophysics Data System (ADS)

    Marami, Bahram; Sirouspour, Shahin; Fenster, Aaron; Capson, David W.

    2014-03-01

    Real-time registration of pre-operative magnetic resonance (MR) or computed tomography (CT) images with intra-operative ultrasound (US) images can be a valuable tool in image-guided therapies and interventions. This paper presents an automatic method for dynamically tracking the deformation of a soft tissue based on registering pre-operative three-dimensional (3D) MR images to intra-operative two-dimensional (2D) US images. The registration algorithm is based on concepts in state estimation where a dynamic finite element (FE)-based linear elastic deformation model correlates the imaging data in the spatial and temporal domains. A Kalman-like filtering process estimates the unknown deformation states of the soft tissue using the deformation model and a measure of error between the predicted and the observed intra-operative imaging data. The error is computed based on an intensity-based distance metric, namely the modality independent neighborhood descriptor (MIND), and no segmentation or feature extraction from images is required. The performance of the proposed method is evaluated by dynamically deforming 3D pre-operative MR images of a breast phantom tissue based on real-time 2D images obtained from a US probe. Experimental results on different registration scenarios showed that deformation tracking converges in a few iterations. The average target registration error on the plane of 2D US images for manually selected fiducial points was between 0.3 and 1.5 mm depending on the size of deformation.

  4. Towards real-time change detection in videos based on existing 3D models

    NASA Astrophysics Data System (ADS)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3D objects, such as buildings, may lead to parallax artifacts in case of inaccurate or missing 3D information, which may distort the results in the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detection of changes in the 3D structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which a 3D change detection can be performed against an existing 3D model. Our approach is capable of change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps by an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3D model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
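
    The final comparison step can be sketched as a per-pixel difference between the estimated and the rendered depth maps; the tolerance and the handling of invalid depth values below are illustrative assumptions, not the parameters used in the paper. A real pipeline would typically also suppress small isolated components in the resulting mask.

        # Sketch of the depth-map comparison step: pixels whose estimated depth
        # deviates from the depth rendered out of the existing 3D model by more
        # than a tolerance are flagged as changes. Threshold and invalid-value
        # handling are illustrative assumptions.
        import numpy as np

        def detect_changes(depth_estimated, depth_rendered, tol_m=0.5, invalid=0.0):
            valid = (depth_estimated != invalid) & (depth_rendered != invalid)
            diff = np.abs(depth_estimated - depth_rendered)
            change_mask = valid & (diff > tol_m)
            return change_mask, diff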

  5. Real-time 3D human capture system for mixed-reality art and entertainment.

    PubMed

    Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu

    2005-01-01

    A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine cameras surrounding her. Looking through a head-mounted display with a camera in front pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques for improving image quality and speeding up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaborating system, we also describe an application of the system in art and entertainment, named Magic Land, which is a mixed reality environment where captured human avatars and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human-computer interaction: mixed reality, tangible interaction, and 3D communication. The result of the user study not only emphasizes the benefits, but also addresses some issues of these technologies.

  6. Real-time 3D surface-image-guided beam setup in radiotherapy of breast cancer

    SciTech Connect

    Djajaputra, David; Li Shidong

    2005-01-01

    We describe an approach for external beam radiotherapy of breast cancer that utilizes the three-dimensional (3D) surface information of the breast. The surface data of the breast are obtained from a 3D optical camera that is rigidly mounted on the ceiling of the treatment vault. This 3D camera utilizes light in the visible range and therefore introduces no ionizing radiation to the patient. In addition to the surface topographical information of the treated area, the camera also captures gray-scale information that is overlaid on the 3D surface image. This allows us to visualize the skin markers and automatically determine the isocenter position and the beam angles in the breast tangential fields. The field sizes and shapes of the tangential, supraclavicular, and internal mammary gland fields can all be determined according to the 3D surface image of the target. A least-squares method is first introduced for the tangential-field setup that is useful for compensation of the target shape changes. The entire process of capturing the 3D surface data and subsequent calculation of beam parameters typically requires less than 1 min. Our tests on phantom experiments and patient images have achieved an accuracy of 1 mm in shift and 0.5 deg. in rotation. Importantly, the target shape and position changes in each treatment session can both be corrected through this real-time image-guided system.
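
    The least-squares fit mentioned above is, in spirit, a rigid alignment between measured surface points and a reference. A standard least-squares rigid fit (Kabsch/SVD) over matched 3D points is sketched below; it is a generic formulation and not necessarily the paper's exact method.

        # Standard least-squares rigid alignment (Kabsch/SVD) between matched
        # 3D surface points, the kind of fit used to recover the shift and
        # rotation of a surface relative to a reference. Generic formulation,
        # not necessarily the exact method of the paper.
        import numpy as np

        def rigid_fit(src, dst):
            """src, dst: (N, 3) matched points. Returns R (3x3), t (3,) with dst ~ R @ src + t."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = dst_c - R @ src_c
            return R, t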

  7. Real-time 3D reconstruction for collision avoidance in interventional environments.

    PubMed

    Ladikos, Alexander; Benhimane, Selim; Navab, Nassir

    2008-01-01

    With the increased presence of automated devices such as C-arms and medical robots and the introduction of a multitude of surgical tools, navigation systems and patient monitoring devices, collision avoidance has become an issue of practical value in interventional environments. In this paper, we present a real-time 3D reconstruction system for interventional environments which aims at predicting collisions by building a 3D representation of all the objects in the room. The 3D reconstruction is used to determine whether other objects are in the working volume of the device and to alert the medical staff before a collision occurs. In the case of C-arms, this allows faster rotational and angular movement which could for instance be used in 3D angiography to obtain a better reconstruction of contrasted vessels. The system also prevents staff from unknowingly entering the working volume of a device. This is of relevance in complex environments with many devices. The recovered 3D representation also opens the path to many new applications utilizing this data such as workflow analysis, 3D video generation or interventional room planning. To validate our claims, we performed several experiments with a real C-arm that demonstrate the validity of the approach. This system is currently being transferred to an interventional room in our university hospital.

  8. Geometric-model-free tracking of extended targets using 3D lidar measurements

    NASA Astrophysics Data System (ADS)

    Steinemann, Philipp; Klappstein, Jens; Dickmann, Juergen; von Hundelshausen, Felix; Wünsche, Hans-Joachim

    2012-06-01

    Tracking of extended targets in high-definition, 360-degree 3D-LIDAR (Light Detection and Ranging) measurements is a challenging task and a current research topic. It is a key component in robotic applications, and is relevant to path planning and collision avoidance. This paper proposes a new method without a geometric model to simultaneously track and accumulate 3D-LIDAR measurements of an object. The method itself is based on a particle filter and uses an object-related local 3D grid for each object. No geometric object hypothesis is needed. Accumulation allows coping with occlusions. The prediction step of the particle filter is governed by a motion model consisting of a deterministic and a probabilistic part. Since this paper is focused on tracking ground vehicles, a bicycle model is used for the deterministic part. The probabilistic part depends on the current state of each particle. A function for calculating the current probability density function for state transition is developed. It is derived in detail and based on a database consisting of vehicle dynamics measurements over several hundreds of kilometers. The adaptive probability density function narrows down the gating area for measurement data association. The second part of the proposed method addresses weighting the particles with a cost function. Different 3D-grid-dependent cost functions are presented and evaluated. Evaluations with real 3D-LIDAR measurements show the performance of the proposed method. The results are also compared to ground truth data.
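
    A minimal sketch of the deterministic part of the prediction step, propagating each particle with a kinematic bicycle model, is given below. The empirically learned, state-dependent transition density described above is replaced here by simple Gaussian noise, and the state layout and parameter values are assumptions.

        # Prediction step of a particle filter using a kinematic bicycle model
        # for ground vehicles. The learned, state-dependent transition density
        # described above is replaced by fixed Gaussian noise; the state layout
        # [x, y, heading, speed, steer] and all constants are assumptions.
        import numpy as np

        def predict(particles, dt, wheelbase=2.7, rng=np.random.default_rng()):
            x, y, psi, v, delta = particles.T
            x_new = x + v * np.cos(psi) * dt
            y_new = y + v * np.sin(psi) * dt
            psi_new = psi + v / wheelbase * np.tan(delta) * dt
            out = np.column_stack([x_new, y_new, psi_new, v, delta])
            noise = rng.normal(0.0, [0.05, 0.05, 0.01, 0.3, 0.02], size=out.shape)
            return out + noise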

  9. 3D real-time measurement system of seam with laser

    NASA Astrophysics Data System (ADS)

    Huang, Min-shuang; Huang, Jun-fen

    2014-02-01

    A 3-D real-time measurement system for seam outlines based on Moiré projection is proposed and designed. The system is composed of an LD, a grating, a CCD, a video A/D converter, an FPGA, a DSP and an output interface. The principle and hardware makeup of the high-speed, real-time image processing circuit based on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA) are introduced. The noise generation mechanism under poor welding field conditions is analyzed when Moiré stripes are projected onto a welding workpiece surface. A median filter is adopted to smooth the acquired laser image of the seam, and measurement results for a 3-D outline image of the weld groove are then provided.
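
    The smoothing step can be illustrated with an off-the-shelf median filter followed by a naive per-column stripe extraction; the kernel size and the peak-picking rule below are assumptions, not the paper's FPGA/DSP implementation.

        # Median filtering of the acquired laser-stripe image to suppress the
        # impulsive noise (arc light, spatter) typical of welding scenes, before
        # extracting the seam profile. The 5x5 kernel is an assumed choice.
        import numpy as np
        from scipy.ndimage import median_filter

        def smooth_stripe_image(image, kernel=5):
            return median_filter(image, size=kernel)

        def extract_stripe_peaks(image):
            """Naive per-column stripe extraction: row index of maximum intensity."""
            return np.argmax(image, axis=0)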

  10. The BaBar Level 1 Drift-Chamber Trigger Upgrade With 3D Tracking

    SciTech Connect

    Chai, X.D.; /Iowa U.

    2005-11-29

    At BABAR, the Level 1 Drift Chamber trigger is being upgraded to reduce increasing background rates while the PEP-II luminosity keeps improving. This upgrade uses the drift time information and stereo wires in the drift chamber to perform a 3D track reconstruction that effectively rejects background events spread out along the beam line.

  11. A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics.

    PubMed

    Yoshino, Masanori; Saito, Toki; Kin, Taichi; Nakagawa, Daichi; Nakatomi, Hirofumi; Oyama, Hiroshi; Saito, Nobuhito

    2015-01-01

    Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread because existing commercial operative navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG. This article presents the technical details of our microscopic optically tracking navigation system. Our navigation system consists of three components: the operative microscope, registration, and the image display system. An optical tracker was attached to the microscope to monitor the position and attitude of the microscope in real time; point-pair registration was used to register the operation room coordinate system, and the image coordinate system; and the image display system showed the 3D CG image in the field-of-view of the microscope. Ten neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model. Accuracy of our system was compared with the commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. Target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operation position and the surrounding structures. Systems like this may reduce intraoperative complications.

  12. Moving Human Path Tracking Based on Video Surveillance in 3d Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose, we integrate video surveillance data with a 3D indoor model of the building and develop a method for tracking the moving path of a single person. We process the surveillance videos to detect single-person movement traces; then we match the depth information of the 3D scenes to the constructed 3D indoor network model and define the human traces in the 3D indoor space. Finally, the single-person traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. Experiments with a single person have verified the effectiveness and robustness of the method.

  13. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision is now widely known as a familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods of displaying 3D images; we focused on one that works by ray reproduction. This method needs many viewpoint images to achieve full parallax, because it displays a different viewpoint image depending on the viewpoint. We proposed to reduce wasted rays by limiting the projector's rays to the area around the viewer using a spinning mirror, thereby increasing the effectiveness of the display device and achieving a full-parallax 3D display. We propose a method that uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array whose elements have different vertical slopes and are arranged circumferentially, and a cylindrical mirror. For the proposed method, in simulation we confirmed the scanning range and the locus of the horizontal movement of the rays. In addition, we confirmed the switching of viewpoints and the convergence performance of the rays in the vertical direction. We therefore confirmed that a full-parallax display can be realized.

  14. Handling Motion-Blur in 3D Tracking and Rendering for Augmented Reality.

    PubMed

    Park, Youngmin; Lepetit, Vincent; Woo, Woontack

    2012-09-01

    The contribution of this paper is two-fold. First, we show how to extend the ESM algorithm to handle motion blur in 3D object tracking. ESM is a powerful algorithm for template matching-based tracking, but it can fail under motion blur. We introduce an image formation model that explicitly considers the possibility of blur and show that it results in a generalization of the original ESM algorithm. This allows the tracker to converge faster, more accurately and more robustly even under large amounts of blur. Our second contribution is an efficient method for rendering the virtual objects under the estimated motion blur. It renders two images of the object under 3D perspective and warps them to create many intermediate images. By fusing these images we obtain a final image of the virtual objects blurred consistently with the captured image. Because warping is much faster than 3D rendering, we can create realistically blurred images at a very low computational cost.

  15. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography.

    PubMed

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J; French, Paul M W; McGinty, James

    2015-04-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound.

  16. Surveillance, detection, and 3D infrared tracking of bullets, rockets, mortars, and artillery

    NASA Astrophysics Data System (ADS)

    Leslie, Daniel H.; Hyman, Howard; Moore, Fritz; Squire, Mark D.

    2001-09-01

    We describe test results using the FIRST (Fast InfraRed Sniper Tracker) to detect, track, and range to bullets in flight for determining the location of the bullet launch point. The technology developed for the FIRST system can be used to provide detection and accurate 3D track data for other small threat objects including rockets, mortars, and artillery in addition to bullets. We discuss the radiometry and detection range for these objects, and discuss the trade-offs involved in design of the very fast optical system for acquisition, tracking, and ranging of these targets.

  17. Exploring Drug Dosing Regimens In Vitro Using Real-Time 3D Spheroid Tumor Growth Assays.

    PubMed

    Lal-Nag, Madhu; McGee, Lauren; Titus, Steven A; Brimacombe, Kyle; Michael, Sam; Sittampalam, Gurusingham; Ferrer, Marc

    2017-03-01

    Two-dimensional monolayer cell proliferation assays for cancer drug discovery have made the implementation of large-scale screens feasible but only seem to reflect a simplified view that oncogenes or tumor suppressor genes are the genetic drivers of cancer cell proliferation. However, there is now increased evidence that the cellular and physiological context in which these oncogenic events occur play a key role in how they drive tumor growth in vivo and, therefore, in how tumors respond to drug treatments. In vitro 3D spheroid tumor models are being developed to better mimic the physiology of tumors in vivo, in an attempt to improve the predictability and efficiency of drug discovery for the treatment of cancer. Here we describe the establishment of a real-time 3D spheroid growth, 384-well screening assay. The cells used in this study constitutively expressed green fluorescent protein (GFP), which enabled the real-time monitoring of spheroid formation and the effect of chemotherapeutic agents on spheroid size at different time points of sphere growth and drug treatment. This real-time 3D spheroid assay platform represents a first step toward the replication in vitro of drug dosing regimens being investigated in vivo. We hope that further development of this assay platform will allow the investigation of drug dosing regimens, efficacy, and resistance before preclinical and clinical studies.

  18. Particle Filters and Occlusion Handling for Rigid 2D-3D Pose Tracking

    PubMed Central

    Lee, Jehoon; Sandhu, Romeil; Tannenbaum, Allen

    2013-01-01

    In this paper, we address the problem of 2D-3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and to estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and then extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting an exact region properties of the object, which plays a pivotal role for region-based tracking methods in maintaining the track. To this end, a dynamical choice of how to invoke the objective functional is performed online based on the degree of dependencies between predictions and measurements of the system in accordance with the degree of occlusion and the variation of the object’s pose. This scheme provides the robustness to deal with occlusions of an obstacle with different statistical properties from that of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios. PMID:24058277

  19. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

    In order to fully navigate using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. Firstly, we proposed a target detection strategy over a sequence of several images from the 3D model to initialize the tracking. The overall purpose of such an approach is to robustly match each image with the model views of the target. Thus we designed a line segment detection and matching method based on multi-scale space technology. Experiments on real images showed that our method is highly robust under various image changes. Secondly, we proposed a method based on a 3D particle filter (PF) coupled with M-estimation to track and estimate the pose of the target efficiently. In the proposed approach, a similarity observation model was designed according to a new distance function of line segments. Then, based on the tracking results of the PF, the pose was optimized using M-estimation. Experiments indicated that the proposed method can effectively track and accurately estimate the pose of a freely moving target in an unconstrained environment.

  20. Structured light 3D tracking system for measuring motions in PET brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Jørgensen, Morten R.; Paulsen, Rasmus R.; Højgaard, Liselotte; Roed, Bjarne; Larsen, Rasmus

    2010-02-01

    Patient motion during scanning deteriorates image quality, especially for high resolution PET scanners. A new proposal for a 3D head tracking system for motion correction in high resolution PET brain imaging is set up and demonstrated. A prototype tracking system based on structured light with a DLP projector and a CCD camera is set up on a model of the High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo vision procedure where the projector is treated as a camera. Additionally, the surface reconstructions are corrected for the non-linear projector output prior to image capture. The results are convincing and a first step toward a fully automated tracking system for measuring head motions in PET imaging.
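
    At the heart of the phase-shifting reconstruction is the wrapped-phase computation from three fringe images shifted by 2π/3; a minimal sketch is given below. Phase unwrapping and the phase-to-height calibration that a complete tracking system needs are omitted.

        # Wrapped-phase computation for three-step phase shifting (shifts of
        # -2*pi/3, 0, +2*pi/3), the core of the PSI surface reconstruction
        # mentioned above. Phase unwrapping and phase-to-height calibration
        # are omitted in this sketch.
        import numpy as np

        def wrapped_phase(i1, i2, i3):
            """i1, i2, i3: the three fringe images as float arrays."""
            return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)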

  1. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    SciTech Connect

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk

    2015-03-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT scanner (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system, with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of less than 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator.

  2. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.

  3. Eye Tracking to Explore the Impacts of Photorealistic 3d Representations in Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

    Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload than traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representations can facilitate the processes of map reading and navigation in digital environments, using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective, less efficient, and required a higher cognitive workload than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. The empirical results can be helpful for improving the usability of pedestrian navigation maps in future designs.

  4. Real-Time Large Scale 3d Reconstruction by Fusing Kinect and Imu Data

    NASA Astrophysics Data System (ADS)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes, such as robot navigation and augmented reality. However, generating dense 3D maps of large scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) large scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) the coarse-to-fine iterative closest point (ICP) algorithm, SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides an incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which reduces the long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images into the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures and has smaller drift than state-of-the-art systems in large scale reconstruction.
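
    The cascade of motion estimators described in the abstract (ICP, then SIFT odometry, then IMU integration) can be summarized with the hedged sketch below; the three estimator functions are placeholders, not the authors' implementation.

      def estimate_incremental_motion(icp, sift_odometry, imu_odometry, prev_frame, frame):
          """Fallback cascade for relative motion estimation between two RGB-D frames.

          Each estimator is assumed to return (success_flag, 4x4 relative transform).
          """
          ok, t_rel = icp(prev_frame, frame)                 # routine Kinect tracking
          if not ok:
              ok, t_rel = sift_odometry(prev_frame, frame)   # fallback in planar areas
          if not ok:
              _, t_rel = imu_odometry()                      # last resort: IMU-only motion
          return t_rel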

  5. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
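
    A hedged sketch of the kind of name-based abstraction layer described above is given below (Python); the class and method names are illustrative and do not reproduce the MetaTracker API.

      from typing import Callable, Dict, Tuple

      Pose = Tuple[float, float, float, float, float, float]  # x, y, z, roll, pitch, yaw

      class CombinedTracker:
          """Expose several tracking backends behind a single name-based lookup."""

          def __init__(self):
              self.backends: Dict[str, Callable[[], Dict[str, Pose]]] = {}

          def register(self, name: str, poll_fn: Callable[[], Dict[str, Pose]]):
              """poll_fn returns {object_name: pose} for one hardware tracking system."""
              self.backends[name] = poll_fn

          def poll(self) -> Dict[str, Pose]:
              """Merge all backends; an object keeps its name as it moves between areas."""
              merged: Dict[str, Pose] = {}
              for poll_fn in self.backends.values():
                  merged.update(poll_fn())  # later backends override earlier ones
              return merged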

  6. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four dimensional image is 3D volume data that varies with time. It is used to represent deforming or moving objects, for example in virtual surgery or 4D ultrasound. It is difficult to render 4D images by conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by exploiting the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. The volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If a brick passes the test, it is defined as a 3D texture using OpenGL functions. The texture slices of the brick are then mapped onto polygons and blended using OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
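
    The brick-level coherence test can be sketched as follows; the similarity metric, the threshold, and the interpretation that unchanged bricks skip re-upload are assumptions made for illustration, and the actual OpenGL texture calls are omitted.

      import numpy as np

      def upload_changed_bricks(new_bricks, cached_bricks, upload_fn, tol=1e-2):
          """Re-define 3D textures only for bricks that differ from the cached volume.

          new_bricks / cached_bricks: dict of brick_id -> 3D numpy array
          upload_fn: callback that (re)defines a brick as a 3D texture (e.g. via OpenGL)
          tol: illustrative similarity threshold
          """
          for brick_id, brick in new_bricks.items():
              cached = cached_bricks.get(brick_id)
              # Mean absolute difference as a simple coherence measure.
              if cached is None or np.mean(np.abs(brick - cached)) > tol:
                  upload_fn(brick_id, brick)        # only changed bricks are re-uploaded
                  cached_bricks[brick_id] = brick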

  7. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.

    PubMed

    Erdem, Arif Tanju; Ercan, Ali Özer

    2015-02-01

    In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well-known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused at the EKF with the camera measurements in either the correction stage (as measurement inputs) or the prediction stage (as control inputs). In the literature, generally only one type of inertial sensor is employed in the EKF, or when both are employed they are both fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs, using the same data set collected at different motion speeds. In particular, we compare the performances of the different approaches based on 3D pose errors, in addition to the camera reprojection errors commonly found in the literature, which provides further insight into the strengths and weaknesses of the different approaches. We show using both simulated and real data that it is always better to fuse both sensors in the measurement stage and that, in particular, the accelerometer helps more with the 3D position tracking accuracy, whereas the gyroscope helps more with the 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
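
    The distinction the paper studies is where the inertial data enter the filter. A generic EKF sketch makes the two options concrete: data used as a control input enter the prediction step, while data used as a measurement enter the correction step. The models f, h and their Jacobians F, H, as well as the noise covariances Q and R, are problem-specific and not taken from the paper.

      import numpy as np

      def ekf_predict(x, P, f, F, Q, u=None):
          """Prediction step; u is a control input (e.g. gyro/accel fused as control)."""
          x = f(x, u)                      # propagate the state through the motion model
          P = F @ P @ F.T + Q              # propagate the covariance with the Jacobian F
          return x, P

      def ekf_update(x, P, z, h, H, R):
          """Correction step; z is a measurement (e.g. camera, gyro or accel readings)."""
          y = z - h(x)                     # innovation
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
          x = x + K @ y
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P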

  8. Alignment of 3D Building Models and TIR Video Sequences with Line Tracking

    NASA Astrophysics Data System (ADS)

    Iwaszczuk, D.; Stilla, U.

    2014-11-01

    Thermal infrared imagery of urban areas has become of interest for urban climate investigations and thermal building inspections. Using a flying platform such as a UAV or a helicopter for the acquisition, and combining the thermal data with 3D building models via texturing, delivers a valuable groundwork for large-area building inspections. However, such thermal textures are useful for further analysis only if they are extracted with correct geometry. This requires a good coregistration between the 3D building models and the thermal images, which cannot be achieved by direct georeferencing alone. Hence, this paper presents a methodology for the alignment of 3D building models and oblique TIR image sequences taken from a flying platform. In a single image, line correspondences between model edges and image line segments are found using an accumulator approach, and based on these correspondences an optimal camera pose is calculated to ensure the best match between the projected model and the image structures. Across the sequence, the linear features are tracked based on visibility prediction. The results of the proposed methodology are presented using a TIR image sequence taken from a helicopter over a densely built-up urban area. The novelty of this work lies in employing the uncertainty of the 3D building models and in an innovative tracking strategy based on a priori knowledge from the 3D building model and on visibility checking.

  9. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    PubMed

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time.

  10. 3D Orbital Tracking in a Modified Two-photon Microscope: An Application to the Tracking of Intracellular Vesicles

    PubMed Central

    Gratton, Enrico

    2014-01-01

    The objective of this video protocol is to discuss how to perform and analyze a three-dimensional fluorescent orbital particle tracking experiment using a modified two-photon microscope(1). As opposed to conventional approaches (raster scan or wide field based on a stack of frames), 3D orbital tracking makes it possible to localize and follow the 3D displacement of a moving fluorescent particle with high spatial (10 nm accuracy) and temporal (50 Hz frequency response) resolution over length-scales of hundreds of microns(2). The method is based on a feedback algorithm that controls the hardware of a two-photon laser scanning microscope in order to perform a circular orbit around the object to be tracked: the feedback mechanism maintains the fluorescent object in the center by controlling the displacement of the scanning beam(3-5). To demonstrate the advantages of this technique, we followed a fast moving organelle, the lysosome, within a living cell(6,7). Cells were plated according to standard protocols and stained using a commercially available lysosome dye. We briefly discuss the hardware configuration and, in more detail, the control software needed to perform a 3D orbital tracking experiment inside living cells. We discuss in detail the parameters required to control the scanning microscope and enable the motion of the beam in a closed orbit around the particle. We conclude by demonstrating how this method can be effectively used to track the fast motion of a labeled lysosome along microtubules in 3D within a live cell. Lysosomes can move with speeds in the range of 0.4-0.5 µm/sec, typically displaying directed motion along the microtubule network(8). PMID:25350070

  11. 3D orbital tracking in a modified two-photon microscope: an application to the tracking of intracellular vesicles.

    PubMed

    Anzalone, Andrea; Annibale, Paolo; Gratton, Enrico

    2014-10-01

    The objective of this video protocol is to discuss how to perform and analyze a three-dimensional fluorescent orbital particle tracking experiment using a modified two-photon microscope(1). As opposed to conventional approaches (raster scan or wide field based on a stack of frames), 3D orbital tracking makes it possible to localize and follow the 3D displacement of a moving fluorescent particle with high spatial (10 nm accuracy) and temporal (50 Hz frequency response) resolution over length-scales of hundreds of microns(2). The method is based on a feedback algorithm that controls the hardware of a two-photon laser scanning microscope in order to perform a circular orbit around the object to be tracked: the feedback mechanism maintains the fluorescent object in the center by controlling the displacement of the scanning beam(3-5). To demonstrate the advantages of this technique, we followed a fast moving organelle, the lysosome, within a living cell(6,7). Cells were plated according to standard protocols and stained using a commercially available lysosome dye. We briefly discuss the hardware configuration and, in more detail, the control software needed to perform a 3D orbital tracking experiment inside living cells. We discuss in detail the parameters required to control the scanning microscope and enable the motion of the beam in a closed orbit around the particle. We conclude by demonstrating how this method can be effectively used to track the fast motion of a labeled lysosome along microtubules in 3D within a live cell. Lysosomes can move with speeds in the range of 0.4-0.5 µm/sec, typically displaying directed motion along the microtubule network(8).

  12. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction

    PubMed Central

    Bao, Qian; Jiang, Chenglong; Lin, Yun; Tan, Weixian; Wang, Zhirui; Hong, Wen

    2016-01-01

    With a short linear array configured in the cross-track direction, downward looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) can obtain a 3-D image of an imaging scene. To improve the cross-track resolution, sparse recovery methods have been investigated in recent years. In the compressive sensing (CS) framework, the reconstruction performance depends on the properties of the measurement matrix. This paper concerns techniques to optimize the measurement matrix and to deal with the mismatch problem of the measurement matrix caused by off-grid scatterers. In the cross-track reconstruction model, the measurement matrix is mainly affected by the configuration of the antenna phase centers (APC); thus, two mutual-coherence-based criteria are proposed to optimize the configuration of the APCs. On the other hand, to compensate for the mismatch of the measurement matrix, a sparse Bayesian inference based method is introduced into the cross-track reconstruction that jointly estimates the scatterers and the off-grid error. Experiments demonstrate the performance of the proposed APC configuration schemes and the proposed cross-track reconstruction method. PMID:27556471

  13. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction.

    PubMed

    Bao, Qian; Jiang, Chenglong; Lin, Yun; Tan, Weixian; Wang, Zhirui; Hong, Wen

    2016-08-22

    With a short linear array configured in the cross-track direction, downward looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) can obtain a 3-D image of an imaging scene. To improve the cross-track resolution, sparse recovery methods have been investigated in recent years. In the compressive sensing (CS) framework, the reconstruction performance depends on the properties of the measurement matrix. This paper concerns techniques to optimize the measurement matrix and to deal with the mismatch problem of the measurement matrix caused by off-grid scatterers. In the cross-track reconstruction model, the measurement matrix is mainly affected by the configuration of the antenna phase centers (APC); thus, two mutual-coherence-based criteria are proposed to optimize the configuration of the APCs. On the other hand, to compensate for the mismatch of the measurement matrix, a sparse Bayesian inference based method is introduced into the cross-track reconstruction that jointly estimates the scatterers and the off-grid error. Experiments demonstrate the performance of the proposed APC configuration schemes and the proposed cross-track reconstruction method.

  14. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems than global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three spatial directions as well as the diffusivity parallel to the channel axis, while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is also suitable for tracking fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
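
    As a simple illustration of the kind of local analysis described above, the mean squared displacement of a single trajectory can be computed as a function of lag time and fitted at short lags to obtain a local diffusion coefficient (MSD ≈ 2dDτ for free diffusion in d dimensions). The sketch below assumes evenly sampled positions and is not the authors' code.

      import numpy as np

      def msd_curve(positions, max_lag):
          """Mean squared displacement vs. lag for one evenly sampled trajectory.

          positions: (T, d) array of particle positions (d = 1, 2 or 3)
          returns:   (max_lag,) array with msd[k-1] = <|r(t+k) - r(t)|^2>
          """
          msd = np.empty(max_lag)
          for lag in range(1, max_lag + 1):
              disp = positions[lag:] - positions[:-lag]
              msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
          return msd

      # For free diffusion in d dimensions, MSD(tau) ~ 2*d*D*tau, so a linear fit to the
      # short-lag part of the curve yields a local estimate of the diffusion coefficient D.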

  15. Display of real-time 3D sensor data in a DVE system

    NASA Astrophysics Data System (ADS)

    Völschow, Philipp; Münsterer, Thomas; Strobel, Michael; Kuhn, Michael

    2016-05-01

    This paper describes the implementation of displaying real-time processed LiDAR 3D data in a DVE pilot assistance system. The goal is to display to the pilot a comprehensive image of the surrounding world without misleading or cluttering information. 3D data which can be attributed, i.e. classified, to terrain or to predefined obstacle classes are depicted differently from data belonging to elevated objects that could not be classified. Display techniques may differ between head-down and head-up displays to avoid cluttering the outside view in the latter case. While terrain is shown as shaded surfaces with grid structures or as grid structures alone, classified obstacles are typically displayed with obstacle symbols only. Data from objects elevated above ground are displayed as shaded 3D points in space. In addition, the displayed 3D points are accumulated over a certain time frame, which on the one hand allows a cohesive structure to be displayed and on the other hand ensures that moving objects are rendered correctly. Furthermore, color coding or texturing can be applied based on known terrain features such as land use.

  16. A 3D front tracking method on a CPU/GPU system

    SciTech Connect

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe the method to port a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are performed on a GPU. Interface mesh adaptation is performed on a CPU. The convergence of the method is assessed from the test problems with given velocity fields. Performance results show overall speedups from 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.

  17. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.

  18. Holographic multi-focus 3D two-photon polymerization with real-time calculated holograms.

    PubMed

    Vizsnyiczai, Gaszton; Kelemen, Lóránd; Ormos, Pál

    2014-10-06

    Two-photon polymerization enables the fabrication of micron sized structures with submicron resolution. Spatial light modulators (SLM) have already been used to create multiple polymerizing foci in the photoresist by holographic beam shaping, thus enabling the parallel fabrication of multiple microstructures. Here we demonstrate the parallel two-photon polymerization of single 3D microstructures by multiple holographically translated foci. Multiple foci were created by phase holograms, which were calculated in real time on an NVIDIA CUDA GPU and displayed on an electronically addressed SLM. A demonstration 3D structure was designed, built up from a nested set of dodecahedron frames of decreasing size. Each individual microstructure was fabricated with the parallel and coordinated motion of 5 holographic foci. The reproducibility and the high uniformity of features of the microstructures were verified by scanning electron microscopy.

  19. Cooperative Wall-climbing Robots in 3D Environments for Surveillance and Target Tracking

    DTIC Science & Technology

    2009-02-08

    distribution of impeller vanes, volume of the chamber, and sealing effect, etc. Fig. 5 and 6 show some exemplary simulation results. In paper [11], we ... multiple nonholonomic mobile robots using Cartesian coordinates. Based on the special feature ... gamma-ray or x-ray cargo inspection system. Three-dimensional (3D) measurements of the objects inside a cargo can be obtained by effectively ...

  20. 3D imaging of semiconductor colloid nanocrystals: on the way to nanodiagnostics of track membranes

    NASA Astrophysics Data System (ADS)

    Kulyk, S. I.; Eremchev, I. Y.; Gorshelev, A. A.; Naumov, A. V.; Zagorsky, D. L.; Kotova, S. P.; Volostnikov, V. G.; Vorontsov, E. N.

    2016-12-01

    This work concerns the feasibility of 3D optical diagnostics of porous media with subdiffraction spatial resolution via epi-luminescence microscopy of single semiconductor colloid nanocrystals (quantum dots, QD) CdSe/ZnS used as emitting labels/nanoprobes. Nanometre-precise reconstruction of the axial coordinate is provided by the double-helix point spread function (DH-PSF) technique. Results of QD localization in a polycarbonate track membrane (TM) are presented.

  1. Demonstration of digital hologram recording and 3D-scenes reconstruction in real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Kulakov, Mikhail N.; Kurbatova, Ekaterina A.; Molodtsov, Dmitriy Y.; Rodin, Vladislav G.

    2016-04-01

    Digital holography is a technique that allows information about 2D objects and 3D scenes to be reconstructed. This is achieved by registering the interference pattern formed by two beams: an object beam and a reference beam. The pattern registered by the digital camera is processed, which yields the amplitude and phase of the object beam. Reconstruction of the shape of 2D objects and 3D scenes can be performed numerically (using a computer) or optically (using spatial light modulators - SLMs). In this work, a Megaplus II ES11000 camera was used for digital hologram recording. The camera has 4008 × 2672 pixels with sizes of 9 μm × 9 μm. For hologram recording, a 50 mW frequency-doubled Nd:YAG laser with a wavelength of 532 nm was used. A liquid crystal on silicon SLM, the HoloEye PLUTO VIS, was used for optical reconstruction of the digital holograms. The SLM has 1920 × 1080 pixels with sizes of 8 μm × 8 μm. For object reconstruction, a 10 mW He-Ne laser with a wavelength of 632.8 nm was used. The setups for digital hologram recording and for optical reconstruction with the SLM were combined as follows. The MegaPlus Central Control Software displays the frames registered by the camera on the computer monitor with a small delay, and the SLM can act as an additional monitor. As a result, the registered frames can be shown on the SLM display in near real-time, so that recording and reconstruction of the 3D scenes is achieved in real time. The resolution of the displayed frames was chosen to equal that of the SLM; the number of pixels was therefore limited by the SLM resolution, and the frame rate by that of the camera. This holographic video setup was applied without additional software processing that would increase the time delay between hologram recording and object reconstruction. The setup was demonstrated for the reconstruction of 3D scenes.

  2. 3D Visualization of Monte-Carlo Simulation's of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy-ion biophysics is important for space radiation risk assessment [1] and hadron-therapy [2]. The characteristics of heavy-ion tracks include a very high energy deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to the biomolecules and the tissue matrix, considered the most important lesions for acute and long term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy-ion biophysics.

  3. A Comprehensive Software System for Interactive, Real-time, Visual 3D Deterministic and Stochastic Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Li, S.

    2002-05-01

    Taking advantage of recent developments in groundwater modeling research and in computer, image and graphics processing, and object-oriented programming technologies, Dr. Li and his research group have recently developed a comprehensive software system for unified deterministic and stochastic groundwater modeling. Characterized by a new real-time modeling paradigm and improved computational algorithms, the software simulates 3D unsteady flow and reactive transport in general groundwater formations subject to both systematic and "randomly" varying stresses and geological and chemical heterogeneity. The software system has the following distinct features and capabilities: interactive simulation and real-time visualization and animation of flow in response to deterministic as well as stochastic stresses; interactive, visual, and real-time particle tracking, random walk, and reactive plume modeling in both systematically and randomly fluctuating flow; interactive statistical inference, scattered data interpolation, regression, and ordinary and universal Kriging, conditional and unconditional simulation; real-time, visual and parallel conditional flow and transport simulations; interactive water and contaminant mass balance analysis and visual and real-time flux update; interactive, visual, and real-time monitoring of head and flux hydrographs and concentration breakthroughs; real-time modeling and visualization of aquifer transition from confined to unconfined to partially de-saturated or completely dry and rewetting; simultaneous and embedded subscale models, with automatic and real-time regional-to-local data extraction; multiple subscale flow and transport models; and real-time modeling of steady and transient vertical flow patterns on multiple arbitrarily-shaped cross-sections, with simultaneous visualization of aquifer stratigraphy, properties, hydrological features (rivers, lakes, wetlands, wells, drains, surface seeps), and dynamically adjusted surface flooding area

  4. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    PubMed

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors.

  5. Registration of 2D cardiac images to real-time 3D ultrasound volumes for 3D stress echocardiography

    NASA Astrophysics Data System (ADS)

    Leung, K. Y. Esther; van Stralen, Marijn; Voormolen, Marco M.; van Burken, Gerard; Nemes, Attila; ten Cate, Folkert J.; Geleijnse, Marcel L.; de Jong, Nico; van der Steen, Antonius F. W.; Reiber, Johan H. C.; Bosch, Johan G.

    2006-03-01

    Three-dimensional (3D) stress echocardiography is a novel technique for diagnosing cardiac dysfunction by comparing wall motion of the left ventricle under different stages of stress. For quantitative comparison of this motion, it is essential to register the ultrasound data. We propose an intensity-based rigid registration method to retrieve two-dimensional (2D) four-chamber (4C), two-chamber, and short-axis planes from the 3D data set acquired in the stress stage, using manually selected 2D planes in the rest stage as the reference. The algorithm uses Nelder-Mead simplex optimization to find the optimal transformation, described by one uniform scaling, three rotation, and three translation parameters. We compared registration using the SAD, SSD, and NCC metrics, performed on four resolution levels of a Gaussian pyramid. The registration's effectiveness was assessed by comparing the 3D positions of the registered apex and mitral valve midpoints and the 4C direction with the manually selected results. The registration was tested on data from 20 patients. The best results were found using the NCC metric on data downsampled by a factor of two: mean registration errors were 8.1 mm, 5.4 mm, and 8.0° in the apex position, mitral valve position, and 4C direction, respectively. The errors were close to the interobserver (7.1 mm, 3.8 mm, 7.4°) and intraobserver (5.2 mm, 3.3 mm, 7.0°) variability, and better than the error before registration (9.4 mm, 9.0 mm, 9.9°). We demonstrated that the registration algorithm visually and quantitatively improves the alignment of rest and stress data sets, performing similarly to manual alignment. This will improve automated analysis in 3D stress echocardiography.
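
    A hedged sketch of such a registration loop is shown below: Nelder-Mead simplex optimization over the seven transform parameters, maximizing normalized cross-correlation. The function that resamples a 2D plane from the stress volume for a given parameter vector is a placeholder, since that step depends on the ultrasound geometry and is not detailed here.

      import numpy as np
      from scipy.optimize import minimize

      def ncc(a, b):
          """Normalized cross-correlation between two equally sized images."""
          a = (a - a.mean()) / (a.std() + 1e-12)
          b = (b - b.mean()) / (b.std() + 1e-12)
          return np.mean(a * b)

      def register_plane(reference_2d, resample_fn, x0):
          """Find the 7 parameters (1 scaling, 3 rotations, 3 translations) that maximize
          NCC between the reference plane and the plane resampled from the stress volume.

          resample_fn(params) -> 2D image is a placeholder for the volume resampling step.
          """
          cost = lambda p: -ncc(reference_2d, resample_fn(p))
          result = minimize(cost, x0, method="Nelder-Mead")
          return result.x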

  6. 3D shape tracking of minimally invasive medical instruments using optical frequency domain reflectometry

    NASA Astrophysics Data System (ADS)

    Parent, Francois; Kanti Mandal, Koushik; Loranger, Sebastien; Watanabe Fernandes, Eric Hideki; Kashyap, Raman; Kadoury, Samuel

    2016-03-01

    We propose here a new alternative for providing real-time device tracking during minimally invasive interventions, using a truly distributed strain sensor based on optical frequency domain reflectometry (OFDR) in optical fibers. The guidance of minimally invasive medical instruments such as needles or catheters (e.g., by adding a piezoelectric coating) has been the focus of extensive research in the past decades. Real-time tracking of instruments in medical interventions facilitates image guidance and helps the user to reach a pre-localized target more precisely. Image-guided systems using ultrasound imaging and shape sensors based on fiber Bragg grating (FBG)-embedded optical fibers can provide retroactive feedback to the user in order to reach the targeted areas with even more precision. However, ultrasound imaging with electromagnetic tracking cannot be used in the magnetic resonance imaging (MRI) suite, while shape sensors based on FBGs embedded in optical fibers provide only discrete values of the instrument position, which requires approximations to be made to evaluate its global shape. This is why a truly distributed strain sensor based on OFDR could enhance the tracking accuracy. In both cases, since the strain is proportional to the local curvature of the fiber, a strain sensor can provide the three-dimensional shape of medical instruments by simply inserting fibers inside the devices. To faithfully follow the shape of the needle in the tracking frame, 3 fibers glued in a specific geometry are used, providing 3 degrees of freedom along the fiber. Near real-time tracking of medical instruments is thus obtained, offering clear advantages for clinical monitoring in remotely controlled catheter or needle guidance. We present results demonstrating the promising aspects of this approach as well as the limitations of using the OFDR technique.

  7. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. The results show that 1) removal of our relatively small sensor spatial distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  8. Automated 3-D tracking of centrosomes in sequences of confocal image stacks.

    PubMed

    Kerekes, Ryan A; Gleason, Shaun S; Trivedi, Niraj; Solecki, David J

    2009-01-01

    In order to facilitate the study of neuron migration, we propose a method for 3-D detection and tracking of centrosomes in time-lapse confocal image stacks of live neuron cells. We combine Laplacian-based blob detection, adaptive thresholding, and the extraction of scale and roundness features to find centrosome-like objects in each frame. We link these detections using the joint probabilistic data association filter (JPDAF) tracking algorithm with a Newtonian state-space model tailored to the motion characteristics of centrosomes in live neurons. We apply our algorithm to image sequences containing multiple cells, some of which had been treated with motion-inhibiting drugs. We provide qualitative results and quantitative comparisons to manual segmentation and tracking results showing that our average motion estimates agree to within 13% of those computed manually by neurobiologists.
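
    As an illustration of the blob-detection stage only (the adaptive thresholding, feature extraction and JPDAF linking are not shown), a Laplacian-of-Gaussian detector can be applied to a confocal stack with scikit-image; the sigma range and the fixed threshold below are assumptions, and isotropic voxel spacing is assumed.

      from skimage.feature import blob_log

      def detect_centrosome_candidates(stack, min_sigma=1, max_sigma=4, threshold=0.02):
          """Detect bright blob-like objects in a 3D confocal image stack.

          stack: 3D array (z, y, x) with intensities scaled to [0, 1]
          returns: array of (z, y, x, sigma) blob candidates
          """
          return blob_log(stack, min_sigma=min_sigma, max_sigma=max_sigma,
                          threshold=threshold)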

  9. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.

  10. Meanie3D - a mean-shift based, multivariate, multi-scale clustering and tracking algorithm

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Malte, Diederich; Silke, Troemel

    2014-05-01

    Project OASE is one of 5 work groups at the HErZ (Hans Ertel Centre for Weather Research), an ongoing effort by the German weather service (DWD) to further weather prediction research at universities. The goal of project OASE is to gain an object-based perspective on convective events by identifying them early in the onset of convective initiation and following them through the entire lifecycle. The ability to follow objects in this fashion requires new ways of object definition and tracking, which incorporate all the available data sets of interest, such as satellite imagery, weather radar or lightning counts. The Meanie3D algorithm provides the necessary tool for this purpose. Core features of this new approach to clustering (object identification) and tracking are the ability to identify objects using the mean-shift algorithm applied to a multitude of variables (multivariate), as well as the ability to detect objects on various scales (multi-scale) using elements of scale-space theory. The algorithm works in 2D as well as 3D without modification. It is an extension of a method well known from the field of computer vision and image processing, which has been tailored to serve the needs of the meteorological community. In spite of the specific application demonstrated here (convective initiation), the algorithm is easily tailored to provide clustering and tracking for a wide class of data sets and problems. In this talk, the demonstration is carried out on two of the OASE group's own composite data sets: one is a 2D nationwide composite of Germany including C-band radar and satellite information; the other is a 3D local composite of the Bonn/Jülich area containing a high-resolution 3D X-band radar composite.
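
    A minimal (and deliberately naive) sketch of the mean-shift idea underlying this kind of clustering is given below: every point in the joint feature space (spatial coordinates plus data variables) is shifted toward the local mean of its neighbours until it converges onto a density mode, and points sharing a mode form one cluster. The flat kernel and brute-force neighbour search are simplifications, not the Meanie3D implementation.

      import numpy as np

      def mean_shift(points, bandwidth, n_iter=20, tol=1e-3):
          """Shift feature-space points toward local means until they reach density modes.

          points:    (N, d) array combining spatial coordinates and data variables
          bandwidth: radius of the flat kernel
          """
          modes = points.copy()
          for _ in range(n_iter):
              shifted = np.empty_like(modes)
              for i, p in enumerate(modes):
                  nbrs = points[np.linalg.norm(points - p, axis=1) < bandwidth]
                  shifted[i] = nbrs.mean(axis=0) if len(nbrs) else p
              if np.max(np.linalg.norm(shifted - modes, axis=1)) < tol:
                  modes = shifted
                  break
              modes = shifted
          return modes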

  11. Twin-beam real-time position estimation of micro-objects in 3D

    NASA Astrophysics Data System (ADS)

    Gurtner, Martin; Zemánek, Jiří

    2016-12-01

    Various optical methods for measuring the positions of micro-objects in 3D have been reported in the literature. Nevertheless, the majority of them are not suitable for real-time operation, which is needed, for example, for feedback position control. In this paper, we present a method for real-time estimation of the position of micro-objects in 3D; the method is based on twin-beam illumination and requires only a very simple hardware setup whose essential part is a standard image sensor without any lens. The performance of the proposed method is tested during a micro-manipulation task in which the estimated position serves as feedback for the controller. The experiments show that the estimate is accurate to within ∼3 μm in the lateral position and ∼7 μm in the axial distance at a refresh rate of 10 Hz. Although the experiments were done using spherical objects, the presented method could be modified to handle non-spherical objects as well.

  12. In vivo real-time 3-D intracardiac echo using PMUT arrays.

    PubMed

    Dausch, David E; Gilchrist, Kristin H; Carlson, James B; Hall, Stephen D; Castellucci, John B; von Ramm, Olaf T

    2014-10-01

    Piezoelectric micromachined ultrasound transducer (PMUT) matrix arrays were fabricated containing novel through-silicon interconnects and integrated into intracardiac catheters for in vivo real-time 3-D imaging. PMUT arrays with rectangular apertures containing 256 and 512 active elements were fabricated and operated at 5 MHz. The arrays were bulk micromachined in silicon-on-insulator substrates, and contained flexural unimorph membranes comprising the device silicon, lead zirconate titanate (PZT), and electrode layers. Through-silicon interconnects were fabricated by depositing a thin-film conformal copper layer in the bulk micromachined via under each PMUT membrane and photolithographically patterning this copper layer on the back of the substrate to facilitate contact with the individually addressable matrix array elements. Cable assemblies containing insulated 45-AWG copper wires and a termination silicon substrate were thermocompression bonded to the PMUT substrate for signal wire interconnection to the PMUT array. Side-viewing 14-Fr catheters were fabricated and introduced through the femoral vein in an adult porcine model. Real-time 3-D images were acquired from the right atrium using a prototype ultrasound scanner. Full 60° × 60° volume sectors were obtained with penetration depth of 8 to 10 cm at frame rates of 26 to 31 volumes per second.

  13. Ring array transducers for real-time 3-D imaging of an atrial septal occluder.

    PubMed

    Light, Edward D; Lindsey, Brooks D; Upchurch, Joseph A; Smith, Stephen W

    2012-08-01

    We developed new miniature ring array transducers integrated into interventional device catheters such as those used to deploy atrial septal occluders. Each ring array consisted of 55 elements operating near 5 MHz with an interelement spacing of 0.20 mm. It was constructed on a flat piece of copper-clad polyimide and then wrapped around an 11 French O.D. catheter. We used a braided cabling technology from Tyco Electronics Corporation to connect the elements to the Volumetric Medical Imaging (VMI) real-time 3-D ultrasound scanner. Transducer performance yielded a -6 dB fractional bandwidth of 20% centered at 4.7 MHz without a matching layer vs. an average bandwidth of 60% centered at 4.4 MHz with a matching layer. Real-time 3-D rendered images of an en face view of a Gore Helex septal occluder in a water tank showed a finer texture of the device surface from the ring array with the matching layer.

  14. Laser 3-D measuring system and real-time visual feedback for teaching and correcting breathing.

    PubMed

    Povšič, Klemen; Fležar, Matjaž; Možina, Janez; Jezeršek, Matija

    2012-03-01

    We present a novel method for real-time 3-D body-shape measurement during breathing based on the laser multiple-line triangulation principle. The laser projector illuminates the measured surface with a pattern of 33 equally inclined light planes. Simultaneously, the camera records the distorted light pattern from a different viewpoint. The acquired images are transferred to a personal computer, where the 3-D surface reconstruction, shape analysis, and display are performed in real time. The measured surface displacements are displayed with a color palette, which enables visual feedback to the patient while breathing is being taught. The measuring range is approximately 400×600×500 mm in width, height, and depth, respectively, and the accuracy of the calibrated apparatus is ±0.7 mm. The system was evaluated by means of its capability to distinguish between different breathing patterns. The accuracy of the measured volumes of chest-wall deformation during breathing was verified using standard methods of volume measurements. The results show that the presented 3-D measuring system with visual feedback has great potential as a diagnostic and training assistance tool when monitoring and evaluating the breathing pattern, because it offers a simple and effective method of graphical communication with the patient.

  15. A brain-computer interface method combined with eye tracking for 3D interaction.

    PubMed

    Lee, Eui Chul; Woo, Jin Cheol; Kim, Jong Hwa; Whang, Mincheol; Park, Kang Ryoung

    2010-07-15

    With the recent increase in the number of three-dimensional (3D) applications, the need for interfaces to these applications has increased. Although the eye tracking method has been widely used as an interaction interface for hand-disabled persons, this approach cannot be used for depth directional navigation. To solve this problem, we propose a new brain-computer interface (BCI) method in which the BCI and eye tracking are combined: the BCI handles depth navigation, including selection, while eye tracking provides the two-dimensional (2D) gaze direction. The proposed method is novel in the following five ways compared to previous works. First, a device to measure both the gaze direction and an electroencephalogram (EEG) pattern is proposed, with the sensors needed to measure the EEG attached to a head-mounted eye tracking device. Second, the reliability of the BCI interface is verified by demonstrating that there is no difference between real and imaginary movements for the same task in terms of the EEG power spectrum. Third, depth control for the 3D interaction interface is implemented by an imaginary arm-reaching movement. Fourth, a selection method is implemented by an imaginary hand-grabbing movement. Finally, for the independent operation of gazing and the BCI, a mode selection method is proposed that measures a user's concentration by analyzing the pupil accommodation speed, which is not affected by the operation of gazing and the BCI. According to the experimental results, we confirmed the feasibility of the proposed 3D interaction method using eye tracking and a BCI.

  16. A real-time cardiac surface tracking system using Subspace Clustering.

    PubMed

    Singh, Vimal; Tewfik, Ahmed H; Gowreesunker, B

    2010-01-01

    Catheter-based radio frequency ablation of atrial fibrillation requires real-time 3D tracking of cardiac surfaces with sub-millimeter accuracy. To the best of our knowledge, there are no commercial or non-commercial systems capable of doing so. In this paper, a system for high-accuracy 3D tracking of cardiac surfaces in real time is proposed, and results from its application to a real patient dataset are presented. The proposed system uses a subspace clustering algorithm to identify the potential deformation subspaces of the cardiac surfaces during a training phase, using a training set derived from pre-operative MRI scans. In the tracking phase, using low-density outer cardiac surface samples, the active deformation subspace is identified and the complete inner and outer cardiac surfaces are reconstructed in real time under a least-squares formulation.

  17. Simultaneous real-time 3D photoacoustic tomography and EEG for neurovascular coupling study in an animal model of epilepsy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Xiao, Jiaying; Jiang, Huabei

    2014-08-01

    Objective. Neurovascular coupling in epilepsy is poorly understood; its study requires simultaneous monitoring of hemodynamic changes and neural activity in the brain. Approach. Here, for the first time, we present a combined real-time 3D photoacoustic tomography (PAT) and electrophysiology/electroencephalography (EEG) system for the study of neurovascular coupling in epilepsy, whose capability was demonstrated with a pentylenetetrazol (PTZ) induced generalized seizure model in rats. Two groups of experiments were carried out with different wavelengths to detect the changes of oxy-hemoglobin (HbO2) and deoxy-hemoglobin (HbR) signals in the rat brain. We extracted the average PAT signals of the superior sagittal sinus (SSS) and compared them with the EEG signal. Main results. The results showed that the seizure process can be divided into three stages. A 'dip' lasting for 1-2 min in the first stage and a subsequent hyperperfusion in the second stage were observed. The HbO2 and HbR signals were generally negatively correlated. The change in blood flow was also estimated. All the results acquired here were in accordance with other published results. Significance. Compared to other existing functional neuroimaging tools, the method proposed here enables reliable tracking of hemodynamic signals with both high spatial and high temporal resolution in 3D, so it is more suitable for studying neurovascular coupling in epilepsy.

  18. Coordination of gaze and hand movements for tracking and tracing in 3D.

    PubMed

    Gielen, Constantinus C A M; Dijkstra, Tjeerd M H; Roozen, Irene J; Welten, Joke

    2009-03-01

    In this study we have investigated movements in three-dimensional space. Since most studies have investigated planar movements (like ellipses, cloverleaf shapes and "figure eights"), we have compared two generalizations of the two-thirds power law to three dimensions. In particular, we have tested whether the two-thirds power law is best described by tangential velocity and curvature in a plane (compatible with the idea of planar segmentation) or whether tangential velocity and curvature should be calculated in three dimensions. We defined total curvature in three dimensions as the square root of the sum of curvature squared and torsion squared. The results demonstrate that most of the variance is explained by tangential velocity and total curvature. This indicates that all three orthogonal components of movements in 3D are equally important and that movements are truly 3D and do not reflect a concatenation of 2D planar movement segments. In addition, we have studied the coordination of eye and hand movements in 3D by measuring binocular eye movements while subjects moved the finger along a curved path. The results show that the directional component of gaze and finger position almost superimpose when subjects track a target moving in 3D. However, the vergence component of gaze leads finger position by about 250 msec. For drawing (tracing) the path of a visible 3D shape, the directional component of gaze leads finger position by about 225 msec, and the vergence component leads finger position by about 400 msec. These results are compatible with the idea that gaze leads hand position during drawing movements to assist prediction and planning of hand position in 3D space.
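
    Written out, the total curvature defined in the abstract and the power-law forms being compared can be expressed as follows (assuming the classical two-thirds power-law exponent; the exponents actually fitted in this study are not reported here):

      C(t) = \sqrt{\kappa(t)^{2} + \tau(t)^{2}}, \qquad
      v(t) = \gamma\,\kappa(t)^{-1/3} \;\;\text{(planar law)}, \qquad
      v(t) = \gamma\,C(t)^{-1/3} \;\;\text{(3D generalization with total curvature)}

    where κ is the curvature, τ the torsion, v the tangential velocity, and γ the velocity gain factor.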

  19. Management of three-dimensional intrafraction motion through real-time DMLC tracking.

    PubMed

    Sawant, Amit; Venkat, Raghu; Srivastava, Vikram; Carlson, David; Povzner, Sergey; Cattell, Herb; Keall, Paul

    2008-05-01

    Tumor tracking using a dynamic multileaf collimator (DMLC) represents a promising approach for intrafraction motion management in thoracic and abdominal cancer radiotherapy. In this work, we develop, empirically demonstrate, and characterize a novel 3D tracking algorithm for real-time, conformal, intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)-based radiation delivery to targets moving in three dimensions. The algorithm obtains real-time information on target location from an independent position monitoring system and dynamically calculates MLC leaf positions to account for changes in target position. Initial studies were performed to evaluate the geometric accuracy of DMLC tracking of 3D target motion. In addition, dosimetric studies were performed on a clinical linac to evaluate the impact of real-time DMLC tracking for conformal, step-and-shoot (S-IMRT), dynamic (D-IMRT), and VMAT deliveries to a moving target. The efficiency of conformal and IMRT delivery in the presence of tracking was determined. Results show that submillimeter geometric accuracy in all three dimensions is achievable with DMLC tracking. Significant dosimetric improvements were observed in the presence of tracking for conformal and IMRT deliveries to moving targets. A gamma-index evaluation with a 3%-3 mm criterion showed that deliveries without DMLC tracking exhibit between 1.7 (S-IMRT) and 4.8 (D-IMRT) times more dose points that fail the evaluation than corresponding deliveries with tracking. The efficiency of IMRT delivery, as measured in the lab, was significantly lower when tracking target motion perpendicular to MLC leaf travel than for motion parallel to leaf travel. Nevertheless, these early results indicate that accurate, real-time DMLC tracking of 3D tumor motion is feasible and can potentially result in significant geometric and dosimetric advantages, leading to more effective management of intrafraction motion.

  20. Realistic 3D Terrain Roaming and Real-Time Flight Simulation

    NASA Astrophysics Data System (ADS)

    Que, Xiang; Liu, Gang; He, Zhenwen; Qi, Guang

    2014-12-01

    This paper presents an integrated method that provides access to the current flight status and a dynamic visible-scanning topography to enhance interactivity during terrain roaming and real-time flight simulation. An algorithm that integrates a digital elevation model with digital orthophoto map data serves as the basis of our approach to building a realistic 3D terrain scene. A new technique based on render-to-texture and a head-up display is used to generate the navigation pane. During flight simulation, to eliminate flight "jump", we employ multidimensional linear interpolation to adjust the camera parameters dynamically and smoothly. Meanwhile, based on the principle of scanning laser imaging, we draw pseudo-color figures by scanning the topography in different directions according to the real-time flight status. Simulation results demonstrate that the proposed algorithm is promising for applications and that the method improves the visual effect and enhances dynamic interaction during real-time flight.
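
    To illustrate the camera-smoothing idea, the sketch below applies simple multidimensional linear interpolation to keyframed camera parameters. The parameter layout (x, y, z, heading) and the keyframe values are hypothetical, and linear interpolation of a heading angle ignores 360° wrap-around, so this is only a first-order illustration of the technique named in the abstract.

```python
import numpy as np

def interpolate_camera(keyframes, times, t_query):
    """Linearly interpolate camera parameters between keyframes.

    keyframes : (n, k) array of camera parameters per keyframe
                (illustrative layout: x, y, z, heading)
    times     : (n,) keyframe timestamps
    t_query   : query time(s)
    """
    keyframes = np.asarray(keyframes, dtype=float)
    return np.stack(
        [np.interp(t_query, times, keyframes[:, j]) for j in range(keyframes.shape[1])],
        axis=-1,
    )

# Hypothetical keyframes: position (x, y, z) plus heading in degrees
times = np.array([0.0, 1.0, 2.0])
keyframes = np.array([
    [0.0,   0.0, 100.0,  0.0],
    [50.0, 10.0, 120.0, 15.0],
    [100.0, 30.0, 110.0, 40.0],
])
print(interpolate_camera(keyframes, times, 1.5))  # smooth in-between camera state
```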

  1. Analysis of thoracic aorta hemodynamics using 3D particle tracking velocimetry and computational fluid dynamics.

    PubMed

    Gallo, Diego; Gülan, Utku; Di Stefano, Antonietta; Ponzini, Raffaele; Lüthi, Beat; Holzner, Markus; Morbiducci, Umberto

    2014-09-22

    In parallel with the massive use of image-based computational hemodynamics to study the complex flow establishing in the human aorta, the need for suitable experimental techniques and ad hoc cases for the validation and benchmarking of numerical codes has grown steadily. Here we present a study in which the 3D pulsatile flow in an anatomically realistic phantom of the human ascending aorta is investigated both experimentally and computationally. The experimental study uses 3D particle tracking velocimetry (PTV) to characterize the flow field in vitro, while the finite volume method is applied to numerically solve the governing equations of motion in the same domain, under the same conditions. Our findings show excellent agreement between computed and measured flow fields during the forward flow phase, while the agreement is poorer during the reverse flow phase. In conclusion, we demonstrate that 3D PTV is well suited to the detailed study of complex unsteady flows such as those in the aorta and to the validation of computational models of aortic hemodynamics. In a future step, it will be possible to take advantage of the ability of 3D PTV to evaluate velocity fluctuations and, in this way, to gain further knowledge of the process of transition to turbulence occurring in the thoracic aorta.

  2. 3-D Flow Field Diagnostics and Validation Studies using Stereoscopic Tracking Velocimetry

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung Stephen; Ramachandran, Narayanan; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    The measurement of 3-D three-component velocity fields is of great importance in both ground and space experiments for understanding materials processing and fluid physics. Here, we present the results of an investigation of stereoscopic tracking velocimetry (STV) for measuring 3-D velocity fields. The effort includes diagnostic technology development, experimental velocity measurement, and comparison with analytical and numerical computation. The advantages of STV stem from its system simplicity, which allows compact hardware, and its software efficiency, which allows continual near-real-time process monitoring. It also has illumination flexibility for observing volumetric flow fields from arbitrary directions. STV is based on stereoscopic CCD observations of particles seeded in a flow. Neural networks are used for data analysis. The developed diagnostic tool was tested with a simple directional solidification apparatus using succinonitrile. The 3-D velocity field in the liquid phase was measured and compared with results from detailed numerical computations. Our theoretical, numerical, and experimental effort has shown STV to be a viable candidate for reliably quantifying the 3-D flow field in materials processing and fluids experiments.

  3. The CT-PPS tracking system with 3D pixel detectors

    NASA Astrophysics Data System (ADS)

    Ravera, F.

    2016-11-01

    The CMS-TOTEM Precision Proton Spectrometer (CT-PPS) detector will be installed in Roman pots (RP) positioned on either side of CMS, at about 210 m from the interaction point. This detector will measure leading protons, allowing detailed studies of diffractive physics and central exclusive production in standard LHC running conditions. An essential component of the CT-PPS apparatus is the tracking system, which consists of two detector stations per arm equipped with six 3D silicon pixel-sensor modules, each read out by six PSI46dig chips. The front-end electronics has been designed to fulfill the mechanical constraints of the RP and to be as compatible as possible with the readout chain of the CMS pixel detector. The tracking system is currently under construction and will be installed by the end of 2016. In this contribution the final design and the expected performance of the CT-PPS tracking system are presented. A summary of the studies performed, before and after irradiation, on the 3D detectors produced for CT-PPS is given.

  4. Methods for using 3-D ultrasound speckle tracking in biaxial mechanical testing of biological tissue samples.

    PubMed

    Yap, Choon Hwai; Park, Dae Woo; Dutta, Debaditya; Simon, Marc; Kim, Kang

    2015-04-01

    Being multilayered and anisotropic, biological tissues such as cardiac and arterial walls are structurally complex, making the full assessment and understanding of their mechanical behavior challenging. Current standard mechanical testing uses surface markers to track tissue deformations and does not provide deformation data below the surface. In the study described here, we found that combining mechanical testing with 3-D ultrasound speckle tracking could overcome this limitation. Rat myocardium was tested with a biaxial tester and was concurrently scanned with high-frequency ultrasound in three dimensions. The strain energy function was computed from stresses and strains using an iterative non-linear curve-fitting algorithm. Because the strain energy function consists of terms for the base matrix and for embedded fibers, spatially varying fiber orientation was also computed by curve fitting. Using finite-element simulations, we first validated the accuracy of the non-linear curve-fitting algorithm. Next, we compared experimentally measured rat myocardium strain energy function values with those in the literature and found a matching order of magnitude. Finally, we retained samples after the experiments for fiber orientation quantification using histology and found that the results satisfactorily matched those computed in the experiments. We conclude that 3-D ultrasound speckle tracking can be a useful addition to traditional mechanical testing of biological tissues and may provide the benefit of enabling fiber orientation computation.
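
    The iterative non-linear curve fitting mentioned above can be sketched as follows. The constitutive form (an exponential "matrix" term plus a fiber term at a fixed fiber angle), the parameter names and the synthetic biaxial data are all assumptions for illustration; the paper's actual strain energy function and its simultaneous fiber-angle fit are not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def model_stress(params, e11, e22, theta):
    """Illustrative biaxial stress model: isotropic exponential matrix term plus an
    exponential fiber term acting along the fiber direction theta."""
    c_m, b_m, c_f, b_f = params
    matrix_11 = c_m * (np.exp(b_m * e11) - 1.0)
    matrix_22 = c_m * (np.exp(b_m * e22) - 1.0)
    e_fiber = e11 * np.cos(theta) ** 2 + e22 * np.sin(theta) ** 2  # fiber-direction strain
    fiber = c_f * (np.exp(b_f * e_fiber) - 1.0)
    return matrix_11 + fiber * np.cos(theta) ** 2, matrix_22 + fiber * np.sin(theta) ** 2

def residuals(params, e11, e22, s11, s22, theta):
    m11, m22 = model_stress(params, e11, e22, theta)
    return np.concatenate([m11 - s11, m22 - s22])

# Hypothetical biaxial data: strains and "measured" stresses generated from known parameters
e11 = np.linspace(0.0, 0.15, 20)
e22 = np.linspace(0.0, 0.10, 20)
theta = np.deg2rad(30)
true_params = (2.0, 8.0, 5.0, 12.0)
s11, s22 = model_stress(true_params, e11, e22, theta)

fit = least_squares(residuals, x0=[1.0, 5.0, 1.0, 5.0],
                    args=(e11, e22, s11, s22, theta), bounds=(0, np.inf))
print("true parameters:  ", true_params)
print("fitted parameters:", np.round(fit.x, 2))
```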

  5. Experimental analysis of mechanical response of stabilized occipitocervical junction by 3D mark tracking technique

    NASA Astrophysics Data System (ADS)

    Germaneau, A.; Doumalin, P.; Dupré, J. C.; Brèque, C.; Brémand, F.; D'Houtaud, S.; Rigoard, P.

    2010-06-01

    This study presents a biomechanical comparison of stabilization solutions for the occipitocervical junction. Four kinds of occipito-cervical fixation are analysed in this work: lateral plates fixed by two kinds of screws, lateral plates fixed by hooks, and a median plate. To study the mechanical rigidity of each, tests were performed on human skulls by applying loads and studying the mechanical response of the fixations and the bone. For this experimental analysis, a specific setup was developed to impose a load corresponding to the flexion-extension physiological movements. The 3D mark tracking technique is employed to measure 3D displacement fields on the bone and on the fixations. Observations of the displacement evolution on the bone for each fixation show the different rigidities provided by each solution.

  6. Real-time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction.

    PubMed

    Povšič, K; Jezeršek, M; Možina, J

    2015-07-01

    Real-time 3D visualization of breathing displacements can be a useful diagnostic tool for immediately observing the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of the attached passive markers, the torso movement and deformation are compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid models. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during breathing exercise on an indoor bicycle or a treadmill.
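
    A common way to extract the rigid component of torso movement from attached passive markers is a Kabsch/Procrustes fit, sketched below; the marker coordinates and the 5° test rotation are hypothetical, and the paper's non-rigid (deformation) model is not shown.

```python
import numpy as np

def rigid_transform(markers_ref, markers_cur):
    """Best-fit rotation R and translation t mapping reference markers to current
    markers (Kabsch algorithm), a standard way to estimate rigid torso movement
    from attached passive markers."""
    ref_c = markers_ref.mean(axis=0)
    cur_c = markers_cur.mean(axis=0)
    H = (markers_ref - ref_c).T @ (markers_cur - cur_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur_c - R @ ref_c
    return R, t

# Hypothetical markers: reference positions and positions after a 5-degree yaw + shift
rng = np.random.default_rng(1)
markers_ref = rng.uniform(-100, 100, size=(6, 3))
R_true = np.array([[0.9962, -0.0872, 0.0],
                   [0.0872,  0.9962, 0.0],
                   [0.0,     0.0,    1.0]])
t_true = np.array([3.0, -2.0, 1.0])
markers_cur = markers_ref @ R_true.T + t_true

R, t = rigid_transform(markers_ref, markers_cur)

# Remove the rigid component from measured surface points so that only
# breathing-related displacements remain (inverse transform: R^T (x - t))
surface_cur = rng.uniform(-150, 150, size=(1000, 3))
surface_compensated = (surface_cur - t) @ R
print("recovered translation:", np.round(t, 3))
```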

  7. Swimming Behavior of Pseudomonas aeruginosa Studied by Holographic 3D Tracking

    PubMed Central

    Vater, Svenja M.; Weiße, Sebastian; Maleschlijski, Stojan; Lotz, Carmen; Koschitzki, Florian; Schwartz, Thomas; Obst, Ursula; Rosenhahn, Axel

    2014-01-01

    Holographic 3D tracking was applied to record and analyze the swimming behavior of Pseudomonas aeruginosa. The obtained trajectories allow the free swimming behavior of the bacterium to be analyzed qualitatively and quantitatively, and it can be classified into five distinct swimming patterns. In addition to the previously reported smooth and oscillatory swimming motions, three additional patterns are distinguished. We show that Pseudomonas aeruginosa performs helical movements, which so far had been described only for larger microorganisms. The occurrence of the swimming patterns was determined and the transitions between the patterns were analyzed. PMID:24498187

  8. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-07-01

    Recently, tremendous improvements have been achieved in the precision of localization of single fluorescent molecules, allowing localization and tracking of biomolecules at the nm level. Since the behaviour of proteins and biological molecules is tightly influenced by the cell's environment, a growing number of microscopy techniques are moving from in vitro to live-cell experiments. Looking at both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images and high temporal resolution (on the order of ms). To satisfy these requirements we developed an automated routine that allows 3D tracking of single fluorescent molecules in living cells with nanometer accuracy, by exploiting the properties of the point-spread function of out-of-focus quantum dots bound to the protein of interest.

  9. A portable instrument for 3-D dynamic robot measurements using triangulation and laser tracking

    SciTech Connect

    Mayer, J.R.R. (Mechanical Engineering Dept.); Parker, G.A. (Dept. of Mechanical Engineering)

    1994-08-01

    The paper describes the development and validation of a 3-D measurement instrument capable of determining the static and dynamic performance of industrial robots to ISO standards. Using two laser beams to track an optical target attached to the robot end-effector, the target position coordinates may be estimated, relative to the instrument coordinate frame, to a high accuracy using triangulation principles. The effect of variations in the instrument geometry from the nominal model is evaluated through a kinematic model of the tracking head. Significant improvements of the measurement accuracy are then obtained by a simple adjustment of the main parameters. Extensive experimental test results are included to demonstrate the instrument performance. Finally typical static and dynamic measurement results for an industrial robot are presented to illustrate the effectiveness and usefulness of the instrument.

  10. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    SciTech Connect

    Brix, Lau; Ringgaard, Steffen; Sørensen, Thomas Sangild; Poulsen, Per Rugaard

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Results: Axial, sagittal
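
    Step (3) of the method, selecting the template and in-plane position with the highest normalized cross-correlation, can be sketched as below. The helper track_3d, the toy frame and templates, and the through-plane offsets are hypothetical; reslicing the 3D scan into the template library (step 2) is not shown.

```python
import numpy as np
from skimage.feature import match_template  # normalized cross-correlation

def track_3d(frame_2d, template_library, template_offsets):
    """Pick the template with the highest normalized cross-correlation and return
    an estimated 3D position.

    frame_2d         : real-time 2D MRI image
    template_library : list of 2D templates resliced from the 3D scan
    template_offsets : through-plane offset (mm) of the tracked structure per template
    """
    best = (-np.inf, None, None)
    for k, tmpl in enumerate(template_library):
        ncc = match_template(frame_2d, tmpl, pad_input=True)
        ij = np.unravel_index(np.argmax(ncc), ncc.shape)
        if ncc[ij] > best[0]:
            best = (ncc[ij], k, ij)
    score, k, (row, col) = best
    return {"in_plane_px": (row, col), "through_plane_mm": template_offsets[k], "ncc": score}

# Hypothetical data: a frame containing a bright blob, and two candidate templates
frame = np.zeros((64, 64))
frame[30:36, 40:46] = 1.0
templates = [np.pad(np.ones((6, 6)), 2), np.pad(np.ones((4, 4)), 3)]
print(track_3d(frame, templates, template_offsets=[-2.0, +2.0]))
```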

  11. Adaptive Kalman snake for semi-autonomous 3D vessel tracking.

    PubMed

    Lee, Sang-Hoon; Lee, Sanghoon

    2015-10-01

    In this paper, we propose a robust semi-autonomous algorithm for 3D vessel segmentation and tracking based on an active contour model and a Kalman filter. For each computed tomography angiography (CTA) slice, we use the active contour model to segment the vessel boundary and the Kalman filter to track position and shape variations of the vessel boundary between slices. For successful segmentation via the active contour, we select an adequate number of initial points from the contour of the first slice. The points are set manually by user input for the first slice. For the remaining slices, the initial contour position is estimated autonomously based on the segmentation results of the previous slice. To obtain refined segmentation results, an adaptive control spacing algorithm is introduced into the active contour model. Moreover, a block search-based initial contour estimation procedure is proposed to ensure that the initial contour of each slice can be near the vessel boundary. Experiments were performed on synthetic and real chest CTA images. Compared with the well-known Chan-Vese (CV) model, the proposed algorithm exhibited better performance in segmentation and tracking. In particular, receiver operating characteristic analysis on the synthetic and real CTA images demonstrated the time efficiency and tracking robustness of the proposed model. In terms of computational cost, processing time can be reduced by approximately 20%.
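
    A minimal sketch of the Kalman-filter idea, using a constant-velocity model to propagate the vessel-contour centroid from slice to slice, is given below; the state layout, noise covariances and centroid measurements are assumptions for illustration and do not reproduce the paper's exact formulation, which tracks shape as well as position.

```python
import numpy as np

# Constant-velocity Kalman filter for a contour centroid (x, y); state is [x, y, vx, vy].
F = np.eye(4); F[0, 2] = F[1, 3] = 1.0            # state transition (one slice per step)
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0      # only the centroid is observed
Q = 0.01 * np.eye(4)                               # process noise
R = 1.0 * np.eye(2)                                # measurement noise

x = np.array([50.0, 50.0, 0.0, 0.0])               # initial state from the first slice
P = 10.0 * np.eye(4)

measurements = [(50.8, 50.4), (51.9, 50.9), (52.7, 51.6)]  # hypothetical centroids per slice
for z in measurements:
    # predict: where the contour centroid should lie on the next slice
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the centroid measured by the active contour on that slice
    y = np.asarray(z) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    print("filtered centroid:", np.round(x[:2], 2))
```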

  12. 3D Fluorescent and Reflective Imaging of Whole Stardust Tracks in Aerogel

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2011-11-07

    The NASA Stardust mission returned to earth in 2006 with the cometary collector having captured over 1,000 particles in an aerogel medium at a relative velocity of 6.1 km/s. Particles captured in aerogel were heated, disaggregated and dispersed along 'tracks' or cavities in aerogel, singular tracks representing a history of one capture event. It has been our focus to chemically and morphologically characterize whole tracks in 3-dimensions, utilizing solely non-destructive methods. To this end, we have used a variety of methods: 3D Laser Scanning Confocal Microscopy (LSCM), synchrotron X-ray fluorescence (SXRF), and synchrotron X-ray diffraction (SXRD). In the past months we have developed two new techniques to aid in data collection. (1) We have received a new confocal microscope which has enabled autofluorescent and spectral imaging of aerogel samples. (2) We have developed a stereo-SXRF technique to chemically identify large grains in SXRF maps in 3-space. The addition of both of these methods to our analytic abilities provides a greater understanding of the mechanisms and results of track formation.

  13. Quantifying the 3D Odorant Concentration Field Used by Actively Tracking Blue Crabs

    NASA Astrophysics Data System (ADS)

    Webster, D. R.; Dickman, B. D.; Jackson, J. L.; Weissburg, M. J.

    2007-11-01

    Blue crabs and other aquatic organisms locate food and mates by tracking turbulent odorant plumes. The odorant concentration fluctuates unpredictably due to turbulent transport, and many characteristics of the fluctuation pattern have been hypothesized as useful cues for orienting to the odorant source. To make a direct linkage between tracking behavior and the odorant concentration signal, we developed a measurement system based on the laser-induced fluorescence technique to quantify the instantaneous 3D concentration field surrounding actively tracking blue crabs. The data suggest a correlation between upstream walking speed and the concentration of the odorant signal arriving at the antennule chemosensors, which are located near the mouth region. More specifically, we note an increase in upstream walking speed when high-concentration bursts arrive at the location of the antennules. We also test hypotheses regarding the ability of blue crabs to steer relative to the plume centerline based on the signal contrast between the chemosensors located on their leg appendages. These chemosensors are located much closer to the substrate compared to the antennules and are separated by the width of the blue crab. In this case, it appears that blue crabs use the bilateral signal comparison to track along the edge of the plume.

  14. 3D tracking the Brownian motion of colloidal particles using digital holographic microscopy and joint reconstruction.

    PubMed

    Verrier, Nicolas; Fournier, Corinne; Fournel, Thierry

    2015-06-01

    In-line digital holography is a valuable tool for sizing, locating, and tracking micro- or nano-objects in a volume. When a parametric imaging model is available, inverse problem approaches provide a straightforward estimate of the object parameters by fitting data with the model, thereby allowing accurate reconstruction. As recently proposed and demonstrated, combining pixel super-resolution techniques with inverse problem approaches improves the estimation of particle size and 3D position. Here, we demonstrate the accurate tracking of colloidal particles in Brownian motion. Particle size and 3D position are jointly optimized from video holograms acquired with a digital holographic microscopy setup based on a low-end microscope objective (×20, NA 0.5). Exploiting information redundancy makes it possible to characterize particles with a standard deviation of 15 nm in size and a theoretical resolution of 2×2×5 nm³ for position under an additive white Gaussian noise assumption.

  15. Designing a high accuracy 3D auto stereoscopic eye tracking display, using a common LCD monitor

    NASA Astrophysics Data System (ADS)

    Taherkhani, Reza; Kia, Mohammad

    2012-09-01

    This paper describes the design and construction of a low-cost, practical autostereoscopic display that does not require the viewer to wear special glasses and uses eye tracking to give a large degree of freedom to viewer (or viewers') movement while displaying the minimum amount of information. The parallax barrier technique is employed to turn an LCD into an autostereoscopic display. The stereo image pair is shown on an ordinary liquid crystal display simultaneously but in different columns of pixels. Controlling the display at the level of red-green-blue sub-pixels increases the accuracy of the light projection direction to less than 2 degrees without losing too much of the LCD's resolution; an eye-tracking system determines the correct angle at which to project the images along the viewer's eye pupils, and an image processing system puts the 3D image data into the correct R-G-B sub-pixels. A light-direction control accuracy of 1.6 degrees was achieved in practice. The 3D monitor is made simply by applying simple optical materials to an ordinary LCD of normal resolution.

  16. The role of 3D and speckle tracking echocardiography in cardiac amyloidosis: a case report.

    PubMed

    Nucci, E M; Lisi, M; Cameli, M; Baldi, L; Puccetti, L; Mondillo, S; Favilli, R; Lunghetti, S

    2014-01-01

    Cardiac amyloidosis (CA) is a disorder characterized by amyloid fibril deposition in the cardiac interstitium; it results in a restrictive cardiomyopathy with heart failure (HF) and conduction abnormalities. The "gold standard" for the diagnosis of CA is myocardial biopsy, but possible sampling errors and procedural risks limit its use. Magnetic resonance imaging (MRI) offers more information than traditional echocardiography and allows the diagnosis of CA, but it is often impossible to perform. We report the case of a man with HF and symptomatic bradyarrhythmia that required an urgent pacemaker implant. Echocardiography was strongly suggestive of CA, but it was not possible to perform MRI to confirm this hypothesis because the patient had been implanted with a permanent pacemaker. Therefore, speckle tracking echocardiography (STE) and 3D echocardiography were performed: STE allows CA to be differentiated from other hypertrophic cardiomyopathies by a longitudinal strain value < 12%, and 3D echocardiography shows regional left ventricular dyssynchrony with a characteristic temporal pattern of dispersion of regional volume systolic change. On the basis of these results, an endomyocardial biopsy was finally performed, which confirmed the diagnosis of CA. This case underlines the importance of new, noninvasive techniques such as 3D echocardiography and STE for the early diagnosis of CA, especially when MRI cannot be performed.

  17. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems offer advantages such as high acquisition speed and the potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical-vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for the high accuracy of the 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  18. 3D Visualization of near real-time remote-sensing observation for hurricanes field campaign using Google Earth API

    NASA Astrophysics Data System (ADS)

    Li, P.; Turk, J.; Vu, Q.; Knosp, B.; Hristova-Veleva, S. M.; Lambrigtsen, B.; Poulsen, W. L.; Licata, S.

    2009-12-01

    NASA is planning a new field experiment, the Genesis and Rapid Intensification Processes (GRIP), in the summer of 2010 to better understand how tropical storms form and develop into major hurricanes. The DC-8 aircraft and the Global Hawk Unmanned Airborne System (UAS) will be deployed loaded with instruments for measurements including lightning, temperature, 3D wind, precipitation, liquid and ice water contents, aerosol and cloud profiles. During the field campaign, both the spaceborne and the airborne observations will be collected in real-time and integrated with the hurricane forecast models. This observation-model integration will help the campaign achieve its science goals by allowing team members to effectively plan the mission with current forecasts. To support the GRIP experiment, JPL developed a website for interactive visualization of all related remote-sensing observations in the GRIP’s geographical domain using the new Google Earth API. All the observations are collected in near real-time (NRT) with 2 to 5 hour latency. The observations include a 1KM blended Sea Surface Temperature (SST) map from GHRSST L2P products; 6-hour composite images of GOES IR; stability indices, temperature and vapor profiles from AIRS and AMSU-B; microwave brightness temperature and rain index maps from AMSR-E, SSMI and TRMM-TMI; ocean surface wind vectors, vorticity and divergence of the wind from QuikSCAT; the 3D precipitation structure from TRMM-PR and vertical profiles of cloud and precipitation from CloudSAT. All the NRT observations are collected from the data centers and science facilities at NASA and NOAA, subsetted, re-projected, and composited into hourly or daily data products depending on the frequency of the observation. The data products are then displayed on the 3D Google Earth plug-in at the JPL Tropical Cyclone Information System (TCIS) website. The data products offered by the TCIS in the Google Earth display include image overlays, wind vectors, clickable

  19. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, for example fluid dynamics in microfluidic devices, bacterial taxis, and cellular trafficking. The 3D position can be determined using 2D imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenge of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source, cross-platform software packages only, making it easy to distribute and modify. It is applied in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate exceeding 94% with only 1% false detections.
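
    For readers unfamiliar with the underlying transform, the sketch below runs a plain circle Hough transform (via scikit-image) on a synthetic edge image containing two rings; it illustrates the ring-detection principle only and is not the optimized classification-based algorithm described above. The radius-to-depth calibration that would turn ring radius into an axial position is assumed and not shown.

```python
import numpy as np
from skimage.draw import circle_perimeter
from skimage.transform import hough_circle, hough_circle_peaks

# Hypothetical edge image containing two diffraction-like rings
edges = np.zeros((128, 128), dtype=bool)
for (r, c, rad) in [(40, 40, 18), (80, 90, 25)]:
    rr, cc = circle_perimeter(r, c, rad, shape=edges.shape)
    edges[rr, cc] = True

radii = np.arange(10, 35)                      # candidate ring radii to test
accumulator = hough_circle(edges, radii)       # one accumulator plane per radius
_, cx, cy, found_radii = hough_circle_peaks(accumulator, radii, total_num_peaks=2)

# The ring radius encodes defocus and hence the particle's axial (z) position;
# a calibration curve radius -> z (not shown) would complete the 3D localization.
for x, y, rad in zip(cx, cy, found_radii):
    print(f"ring centre=({x}, {y}), radius={rad}")
```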

  20. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, for example fluid dynamics in microfluidic devices, bacterial taxis, and cellular trafficking. The 3D position can be determined using 2D imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenge of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source, cross-platform software packages only, making it easy to distribute and modify. It is applied in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate exceeding 94% with only 1% false detections. PMID:26329642

  1. Does the mitral annulus shrink or enlarge during systole? A real-time 3D echocardiography study.

    PubMed

    Kwan, Jun; Jeon, Min-Jae; Kim, Dae-Hyeok; Park, Keum-Soo; Lee, Woo-Hyung

    2009-04-01

    This study was conducted to explore the geometrical changes of the mitral annulus (MA) during systole. The 3D shape of the mitral annulus was reconstructed in 13 normal subjects with normal mitral apparatus structure using real-time 3D echocardiography (RT3DE) and 3D computer software. The two orthogonal (antero-posterior and commissure-commissure) dimensions, the areas (2D projected and 3D surface) and the non-planarity of the mitral annulus were estimated during early, mid and late systole. We demonstrated that the MA had a "saddle shape" appearance and consistently enlarged, mainly in the antero-posterior direction, from early to late systole with lessening of its non-planarity, as determined by 3D reconstruction using RT3DE and 3D computer software.

  2. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2009-03-19

    We present a nondestructive 3D system for the analysis of whole Stardust tracks, using a combination of laser confocal scanning microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 µm. It has been our goal to perform a total non-destructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-ray fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 µm/pixel, without the use of oil-based lenses. A full textural analysis of track No. 82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No. 128, No. 129 and No. 140). We present a method for removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information, while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analysis.
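
    The axial-distortion removal by 3D deconvolution can be illustrated with an off-the-shelf Richardson-Lucy routine, as below; the anisotropic Gaussian PSF and the toy volume are assumptions for illustration and stand in for the computed point spread functions mentioned above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

# Hypothetical 3D confocal stack with stronger blur along the optical (z) axis,
# mimicking the axial distortion of LCSM; the PSF is a simple anisotropic Gaussian,
# not a measured or computed microscope PSF.
rng = np.random.default_rng(0)
volume = np.zeros((32, 64, 64))
volume[16, 20:44, 30:34] = 1.0                       # a thin "track"-like feature

zyx_sigma = (4.0, 1.0, 1.0)                          # axial blur >> lateral blur
blurred = gaussian_filter(volume, sigma=zyx_sigma)
blurred += rng.normal(scale=1e-3, size=blurred.shape)
blurred = np.clip(blurred, 0, None)

# Small discrete PSF kernel built from the same Gaussian
psf = np.zeros((17, 9, 9))
psf[8, 4, 4] = 1.0
psf = gaussian_filter(psf, sigma=zyx_sigma)
psf /= psf.sum()

restored = richardson_lucy(blurred, psf, clip=False)
before = int((blurred.max(axis=(1, 2)) > 0.5 * blurred.max()).sum())
after = int((restored.max(axis=(1, 2)) > 0.5 * restored.max()).sum())
print("axial extent (planes above half max) before/after:", before, after)
```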

  3. Segmentation and tracking of adherens junctions in 3D for the analysis of epithelial tissue morphogenesis.

    PubMed

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-04-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT).

  4. Segmentation and Tracking of Adherens Junctions in 3D for the Analysis of Epithelial Tissue Morphogenesis

    PubMed Central

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-01-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT) PMID:25884654

  5. Simulations of Coalescence and Breakup of Interfaces Using a 3D Front-tracking Method

    NASA Astrophysics Data System (ADS)

    Lu, Jiacai; Tryggvason, Gretar

    2015-11-01

    Direct Numerical Simulations (DNS) of complex multiphase flows with coalescing and breaking-up interfaces are conducted using a 3D front-tracking method. The front-tracking method has been used successfully in DNS of turbulent channel bubbly flows and many other multiphase flows, but as the void fraction increases, changes in the interface topology, through coalescence and breakup, become more common and have to be accounted for. Topology changes have often been identified as a challenge for front tracking, where the interface is represented using a triangular mesh, but here we present an efficient algorithm to change the topology of the triangular elements of interfaces. In the current implementation we have not included any small-scale attractive forces, so thin films coalesce either at prescribed times or when their thickness reaches a given value. Simulations of the collisions of two drops and comparisons with experimental results have been used to validate the algorithm, but the main applications have been to flow regime transitions in gas-liquid flows in pressure-driven channel flows. The evolution of the flow, including flow rate, wall shear, projected interface areas, pseudo-turbulence, and the average size of the various flow structures, is examined as the topology of the interface changes through coalescence and breakup. Research supported by DOE (CASL).

  6. Monitoring the effects of doxorubicin on 3D-spheroid tumor cells in real-time

    PubMed Central

    Baek, NamHuk; Seo, Ok Won; Kim, MinSung; Hulme, John; An, Seong Soo A

    2016-01-01

    Recently, increasing numbers of cell culture experiments with 3D spheroids have shown better correlation with in vivo results than traditional 2D cell culture systems. 3D spheroids could offer a simple and highly reproducible model that exhibits many characteristics of natural tissue, such as the production of extracellular matrix. In this paper numerous cell lines were screened and selected depending on their ability to form and maintain a spherical shape. The effects of increasing concentrations of doxorubicin (DXR) on the integrity and viability of the selected spheroids were then measured at regular intervals and in real time. In total 12 cell lines, adenocarcinomic alveolar basal epithelial (A549), muscle (C2C12), prostate (DU145), testis (F9), pituitary epithelial-like (GH3), cervical cancer (HeLa), HeLa contaminant (HEp2), embryo (NIH3T3), embryo (PA317), neuroblastoma (SH-SY5Y), osteosarcoma (U2OS), and embryonic kidney cells (293T), were screened. Out of the 12, 8 cell lines, NIH3T3, C2C12, 293T, SH-SY5Y, A549, HeLa, PA317, and U2OS, formed regular spheroids and the effects of DXR on these structures were measured at regular intervals. Finally, 5 cell lines, A549, HeLa, SH-SY5Y, U2OS, and 293T, were selected for real-time monitoring and the effects of DXR treatment on their behavior were continuously recorded for 5 days. A potential correlation between the effects of DXR on spheroid viability and ATP production was measured on days 1, 3, and 5. Cytotoxicity of DXR seemed to occur after endocytosis, since cellular activity and ATP production were still viable after 1 day of treatment in all spheroids except SH-SY5Y. Both cellular activity and ATP production were halted 3 and 5 days from the start of the treatment in all spheroids. All cell lines maintained their spheroid shape except SH-SY5Y, which behaved in an unpredictable manner when exposed to toxic concentrations of DXR. Cytotoxic effects of DXR towards SH-SY5Y seemed to cause degradation of

  7. Real-time 3D Fourier-domain optical coherence tomography guided microvascular anastomosis

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Ibrahim, Zuhaib; Lee, W. P. A.; Brandacher, Gerald; Kang, Jin U.

    2013-03-01

    Vascular and microvascular anastomosis is considered to be the foundation of plastic and reconstructive surgery, hand surgery, transplant surgery, vascular surgery and cardiac surgery. In the last two decades innovative techniques, such as vascular coupling devices, thermo-reversible poloxamers and suture-less cuff have been introduced. Intra-operative surgical guidance using a surgical imaging modality that provides in-depth view and 3D imaging can improve outcome following both conventional and innovative anastomosis techniques. Optical coherence tomography (OCT) is a noninvasive high-resolution (micron level), high-speed, 3D imaging modality that has been adopted widely in biomedical and clinical applications. In this work we performed a proof-of-concept evaluation study of OCT as an assisted intraoperative and post-operative imaging modality for microvascular anastomosis of rodent femoral vessels. The OCT imaging modality provided lateral resolution of 12 μm and 3.0 μm axial resolution in air and 0.27 volume/s imaging speed, which could provide the surgeon with clearly visualized vessel lumen wall and suture needle position relative to the vessel during intraoperative imaging. Graphics processing unit (GPU) accelerated phase-resolved Doppler OCT (PRDOCT) imaging of the surgical site was performed as a post-operative evaluation of the anastomosed vessels and to visualize the blood flow and thrombus formation. This information could help surgeons improve surgical precision in this highly challenging anastomosis of rodent vessels with diameter less than 0.5 mm. Our imaging modality could not only detect accidental suture through the back wall of lumen but also promptly diagnose and predict thrombosis immediately after reperfusion. Hence, real-time OCT can assist in decision-making process intra-operatively and avoid post-operative complications.
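
    The phase-resolved Doppler step can be sketched as a phase difference between successive complex A-lines, as below; the array shapes, the 0.4 rad synthetic flow signal and the helper name doppler_phase_shift are assumptions for illustration, and the GPU implementation and the conversion of phase shift to absolute velocity are not shown.

```python
import numpy as np

def doppler_phase_shift(a_lines):
    """Phase-resolved Doppler estimate: phase difference between successive complex
    A-lines at the same lateral position. a_lines is (n_alines, n_depth) complex.
    Generic sketch of the principle, not the paper's GPU implementation."""
    product = a_lines[1:] * np.conj(a_lines[:-1])
    return np.angle(product)             # radians per inter-A-line interval

# Hypothetical complex OCT data: static background plus a "flow" region whose phase
# advances by 0.4 rad between A-lines
rng = np.random.default_rng(0)
n_alines, n_depth = 64, 256
phase = np.zeros((n_alines, n_depth))
phase[:, 100:140] = 0.4 * np.arange(n_alines)[:, None]   # moving scatterers
a_lines = np.exp(1j * phase) + 0.05 * (rng.normal(size=(n_alines, n_depth))
                                        + 1j * rng.normal(size=(n_alines, n_depth)))

dphi = doppler_phase_shift(a_lines)
print("mean phase shift in flow region:", np.round(dphi[:, 100:140].mean(), 3))
```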

  8. Application of 3d-ptv To Track Particle Moving Inside Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Cenedese, A.; Cushman, J. H.; Moroni, M.

    There exist a number of imaging-based measurement techniques for determining 3D velocity fields in an observation volume. Among these are: a) scanning techniques (Guezennec et al. 1994, Moroni and Cushman, 2001); b) holographic techniques (Hinsch and Hinrichs 1996); c) defocusing techniques (Willert and Gharib 1992); d) stereoscopic techniques (Maas et al. 1993, Kasagi and Nishino 1990). We have focused our attention on 3D-PTV, which is an experimental technique based on reconstructing 3D trajectories of reflecting tracer particles through a stereoscopic recording of image sequences. Coordinates are determined first and then trajectories are defined. 3D-PTV requires the operator to light a volume of the test section as opposed to 2D techniques that require a light sheet. Stereoscopic methods share the following basic steps (Papantoniou, 1990): a) stereoscopic calibrated imaging and recording of a suitably illuminated particle flow; b) subsequent photogrammetric analysis of the resulting images to derive the instantaneous 3-D particle positions; and c) tracking of the 3-D coordinate sets in time to derive the tracer trajectories. The ideal setup for obtaining highly accurate trajectories requires the cameras to be mounted with the distance between them equal to the distance to the center of the measurement volume (with three cameras this requires a hexagonal cell). But the camera arrangement is usually a compromise between ideal geometrical conditions for a homogeneous distribution of accuracies in the measuring volume and practical restrictions associated with the experiment. The position of the cameras in object space (exterior orientation) and the parameters of each camera (interior orientation) are needed to reconstruct the 3D objects. These parameters can be calculated simultaneously in a so-called "bundle adjustment" or by pre-calibration. A matched index (of refraction) porous medium heterogeneous at the bench scale has been constructed by filling
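
    The photogrammetric core of step (b), turning matched image coordinates from two calibrated cameras into a 3D particle position, can be sketched with a linear (DLT) triangulation, as below; the camera matrices and image points are made up, and calibration, refractive-index matching and the tracking step (c) are not covered.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of a 3D point from two calibrated cameras.
    P1, P2 are 3x4 projection matrices; uv1, uv2 are pixel coordinates of the same
    particle in the two views. Lens distortion and bundle adjustment are ignored."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical camera setup: identical intrinsics, second camera shifted along x
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

X_true = np.array([20.0, -10.0, 500.0, 1.0])         # homogeneous world point
uv1 = P1 @ X_true; uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ X_true; uv2 = uv2[:2] / uv2[2]
print(np.round(triangulate(P1, P2, uv1, uv2), 2))     # recovers (20, -10, 500)
```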

  9. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    SciTech Connect

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S; Senneville, B Denis de

    2015-06-15

    Recent developments have made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about the changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data were sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of 2.5 mm³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data were downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. At the kidney-liver boundaries and in the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of the high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the

  10. A real-time emergency response workstation using a 3-D numerical model initialized with sodar

    SciTech Connect

    Lawver, B.S.; Sullivan, T.J.; Baskett, R.L.

    1993-01-28

    Many emergency response dispersion modeling systems provide simple Gaussian models driven by single meteorological tower inputs to estimate the downwind consequences from accidental spills or stack releases. Complex meteorological or terrain settings demand more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion. Mountain valleys and sea breeze flows are two common examples of such settings. To address these complexities, the authors have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on a workstation for use in real-time emergency response modeling. MATHEW/ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability (ARAC) project. The models are initialized using an array of surface wind measurements from meteorological towers coupled with vertical profiles from an acoustic sounder (sodar). The workstation automatically acquires the meteorological data every 15 minutes. A source term is generated using either defaults or a real-time stack monitor. Model outputs include contoured isopleths displayed on site geography or plume densities shown over 3-D color shaded terrain. The models are automatically updated every 15 minutes to provide the emergency response manager with a continuous display of potentially hazardous ground-level conditions if an actual release were to occur. Model run time is typically less than 2 minutes on 6 megaflop (~30 MIPS) workstations. Data acquisition, limited by dial-up modem communications, requires 3 to 5 minutes.

  11. Application of 3D WebGIS and real-time technique in earthquake information publishing and visualization

    NASA Astrophysics Data System (ADS)

    Li, Boren; Wu, Jianping; Pan, Mao; Huang, Jing

    2015-06-01

    In hazard management, earthquake researchers have utilized GIS to ease the process of managing disasters, and WebGIS is used to assess hazards and seismic risk. Although such systems provide a visual analysis platform based on GIS technology, the extensibility of WebGIS for processing dynamic data, especially real-time data, has received little attention. In this paper, we propose a novel approach for a real-time 3D visual earthquake information publishing model based on WebGIS and a digital globe to improve the ability of WebGIS-based systems to process real-time data. On the basis of the model, we implement a real-time 3D earthquake information publishing system, EqMap3D. The system can not only publish real-time earthquake information but also display these data and their background geoscience information in a 3D scene. It provides a powerful tool for display, analysis, and decision-making for researchers and administrators, and facilitates better communication between researchers engaged in the geosciences and the interested public.

  12. Application of 3D hydrodynamic and particle tracking models for better environmental management of finfish culture

    NASA Astrophysics Data System (ADS)

    Moreno Navas, Juan; Telfer, Trevor C.; Ross, Lindsay G.

    2011-04-01

    Hydrographic conditions, and particularly current speeds, have a strong influence on the management of fish cage culture. These hydrodynamic conditions can be used to predict particle movement within the water column, and the results used to optimise environmental conditions for effective site selection, the setting of environmental quality standards, waste dispersion, and potential disease transfer. To this end, a 3D hydrodynamic model, MOHID, has been coupled to a particle tracking model to study the effects of mean current speed, quiescent water periods and bulk water circulation in Mulroy Bay, Co. Donegal, Ireland, an Irish fjard (shallow fjordic system) important to the aquaculture industry. A Lagrangian method simulated the instantaneous release of "particles" emulating discharge from finfish cages to show the behaviour of waste in terms of water circulation and water exchange. The 3D spatial models were used to identify areas of mixed and stratified water using a version of the Simpson-Hunter criterion, and to use this in conjunction with models of current flow for appropriate site selection for salmon aquaculture. The modelled outcomes for stratification were in good agreement with direct measurements of water column stratification based on observed density profiles. Calculation of the Simpson-Hunter tidal parameter indicated that most of Mulroy Bay was potentially stratified, with a well-mixed region over the shallow channels where the water flows faster. The fjard was characterised by areas of both very low and high mean current speeds, with some areas having long periods of quiescent water. The residual current and the particle tracking animations created with the models revealed an anticlockwise eddy that may influence waste dispersion and the potential for disease transfer among salmon cages, and which extends the retention time of waste substances from the cages. The hydrodynamic model results were incorporated into the ArcView TM GIS
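
    The Simpson-Hunter criterion referred to above is essentially the stratification parameter log10(h/u³) evaluated over the model grid. The sketch below computes it for a hypothetical grid of depths and tidal current speeds, with an illustrative threshold only, since the critical contour separating mixed from stratified water depends on the formulation and the site.

```python
import numpy as np

def simpson_hunter_parameter(depth_m, tidal_speed_ms):
    """Simpson-Hunter stratification parameter log10(h / u^3).
    Larger values indicate a tendency to stratify; smaller values indicate tidal mixing."""
    u = np.maximum(np.abs(tidal_speed_ms), 1e-6)   # avoid division by zero in quiescent cells
    return np.log10(depth_m / u**3)

# Hypothetical model grid: depths (m) and depth-averaged tidal current amplitudes (m/s)
depth = np.array([[5.0, 12.0], [25.0, 40.0]])
speed = np.array([[0.8, 0.5], [0.2, 0.05]])
S = simpson_hunter_parameter(depth, speed)
print(np.round(S, 2))
print("potentially stratified cells:", S > 3.0)    # illustrative threshold only
```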

  13. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

    We studied defragmented-image-based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of the PB angular orientation with respect to the display panel. This was critical both for image color balancing and for minimizing the image resolution mismatch between the horizontal and vertical directions. For evaluating the uniformity of image brightness, we applied optical ray tracing simulations. The simulations took the effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity around the sweet spots in the viewing zones. However, this was contradicted by the real experimental results. We offer a quantitative treatment of the illuminance uniformity of view images to estimate the misalignment of PB orientation, which could account for the brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of PB orientation, due to practical restrictions on adjustment accuracy, can induce substantial non-uniformity of view-image brightness. We find that image brightness non-uniformity critically depends on the misalignment of PB angular orientation, for example as slight as ≤ 0.01° in our system. This reveals that reducing the misalignment of PB angular orientation from the order of 10⁻² to 10⁻³ degrees can greatly improve the brightness uniformity.

  14. Using an automated 3D-tracking system to record individual and shoals of adult zebrafish.

    PubMed

    Maaswinkel, Hans; Zhu, Liqun; Weng, Wei

    2013-12-05

    Like many aquatic animals, zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both individual adult zebrafish and shoals. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about the vertical and horizontal distribution of the zebrafish and about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle), and it provides the data necessary to calculate parameters for social cohesion when testing shoals.
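
    As an illustration of the kinematic quantities mentioned in this record (locomotion, velocity, acceleration, turning angle), a minimal post-processing sketch in Python/NumPy might look as follows; the fixed frame rate, the array layout and the function name are assumptions for illustration, not part of the published system.

```python
import numpy as np

def kinematics_3d(xyz, fps=30.0):
    """Derive per-step kinematic parameters from a reconstructed 3D trajectory.

    xyz : (N, 3) array of reconstructed positions (hypothetical units, e.g. cm).
    fps : assumed recording frame rate in frames per second.
    """
    dt = 1.0 / fps
    disp = np.diff(xyz, axis=0)                 # frame-to-frame displacement vectors
    speed = np.linalg.norm(disp, axis=1) / dt   # locomotion speed per step
    accel = np.diff(speed) / dt                 # scalar acceleration
    # turning angle between consecutive displacement vectors
    u, v = disp[:-1], disp[1:]
    cosang = np.einsum('ij,ij->i', u, v) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1) + 1e-12)
    turn = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    path_length = speed.sum() * dt              # total distance travelled
    return speed, accel, turn, path_length
```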

  15. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate the 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability.

  16. Real-time motion- and B0-correction for LASER-localized spiral-accelerated 3D-MRSI of the brain at 3T.

    PubMed

    Bogner, Wolfgang; Hess, Aaron T; Gagoski, Borjan; Tisdall, M Dylan; van der Kouwe, Andre J W; Trattnig, Siegfried; Rosen, Bruce; Andronesi, Ovidiu C

    2014-03-01

    The full potential of magnetic resonance spectroscopic imaging (MRSI) is often limited by localization artifacts, motion-related artifacts, scanner instabilities, and long measurement times. Localized adiabatic selective refocusing (LASER) provides accurate B1-insensitive spatial excitation even at high magnetic fields. Spiral encoding accelerates MRSI acquisition, and thus enables 3D coverage without compromising spatial resolution. Real-time position- and shim/frequency-tracking using MR navigators corrects motion- and scanner instability-related artifacts. Each of these three advanced MRI techniques provides superior MRSI data compared to commonly used methods. In this work, we integrated these three promising approaches in a single pulse sequence. Real-time correction of motion, shim, and frequency drifts using volumetric dual-contrast echo planar imaging-based navigators was implemented in an MRSI sequence that uses low-power gradient-modulated short-echo time LASER localization and time-efficient spiral readouts, in order to provide fast and robust 3D-MRSI in the human brain at 3T. The proposed sequence was demonstrated to be insensitive to motion- and scanner drift-related degradations of MRSI data in both phantoms and volunteers. Motion and scanner drift artifacts were eliminated and excellent spectral quality was recovered in the presence of strong movement. Our results confirm the expected benefits of combining a spiral 3D-LASER-MRSI sequence with real-time correction. The new sequence provides accurate, fast, and robust 3D metabolic imaging of the human brain at 3T. This will further facilitate the use of 3D-MRSI for neuroscience and clinical applications.

  17. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which would facilitate the analysis of the dynamic motion during facial animations.
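
    A minimal sketch of the kind of discrepancy computation reported in this record (per-axis differences and 3D distances between manually digitised and automatically tracked landmarks); the array shapes, units and function name are assumed for illustration.

```python
import numpy as np

def landmark_discrepancy(manual, tracked):
    """manual, tracked : (n_landmarks, 3) arrays of x, y, z coordinates in mm."""
    diff = tracked - manual                   # per-axis discrepancies
    per_axis = np.abs(diff).mean(axis=0)      # mean |dx|, |dy|, |dz|
    distances = np.linalg.norm(diff, axis=1)  # 3D distance per landmark
    return per_axis, distances.mean(), distances.max()
```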

  18. Registration of Real-Time 3-D Ultrasound to Tomographic Images of the Abdominal Aorta.

    PubMed

    Brekken, Reidar; Iversen, Daniel Høyer; Tangen, Geir Arne; Dahl, Torbjørn

    2016-08-01

    The purpose of this study was to develop an image-based method for registration of real-time 3-D ultrasound to computed tomography (CT) of the abdominal aorta, targeting future use in ultrasound-guided endovascular intervention. We proposed a method in which a surface model of the aortic wall was segmented from CT, and the approximate initial location of this model relative to the ultrasound volume was manually indicated. The model was iteratively transformed to automatically optimize correspondence to the ultrasound data. Feasibility was studied using data from a silicon phantom and in vivo data from a volunteer with previously acquired CT. Through visual evaluation, the ultrasound and CT data were seen to correspond well after registration. Both aortic lumen and branching arteries were well aligned. The processing was done offline, and the registration took approximately 0.2 s per ultrasound volume. The results encourage further patient studies to investigate accuracy, robustness and clinical value of the approach.

  19. Eulerian and Lagrangian methods for vortex tracking in 2D and 3D flows

    NASA Astrophysics Data System (ADS)

    Huang, Yangzi; Green, Melissa

    2014-11-01

    Coherent structures are a key component of unsteady flows in shear layers. Improvement of experimental techniques has led to larger amounts of data and requires automated procedures for vortex tracking. Many vortex criteria are Eulerian, and identify the structures by an instantaneous local swirling motion in the field, indicated by closed or spiral streamlines or pathlines in a reference frame. Alternatively, a Lagrangian Coherent Structures (LCS) analysis is a Lagrangian method based on quantities calculated along fluid particle trajectories. In the current work, vortex detection is demonstrated on data from the simulation of two cases: a 2D flow with a flat plate undergoing a 45° pitch-up maneuver and a 3D wall-bounded turbulent channel flow. Vortices are visualized and tracked by their centers and boundaries using Γ1, the Q criterion, and LCS saddle points. In the 2D flow case, the saddle-point traces showed a rapid acceleration of the structure, which indicates shedding from the plate. For the channel flow, the saddle-point traces show that the average structure convection speed exhibits a trend similar to the mean velocity profile as a function of wall-normal distance, and leads to statistical quantities of vortex dynamics. Dr. Jeff Eldredge and his research group at UCLA are gratefully acknowledged for sharing the simulation database for the current research. This work was supported by the Air Force Office of Scientific Research under AFOSR Award No. FA9550-14-1-0210.
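
    As a hedged illustration of one of the Eulerian criteria named in this record, the NumPy sketch below evaluates the Q criterion for a 2D velocity field on a regular grid; the grid layout and thresholding convention are assumptions, and the cited work may compute it differently.

```python
import numpy as np

def q_criterion_2d(u, v, dx, dy):
    """Q criterion for a 2D velocity field sampled on a regular grid.

    u, v : (ny, nx) arrays of velocity components.
    Positive Q marks regions where rotation dominates strain (candidate vortex cores).
    """
    dudy, dudx = np.gradient(u, dy, dx)   # derivatives along y (axis 0) and x (axis 1)
    dvdy, dvdx = np.gradient(v, dy, dx)
    # squared Frobenius norms of the strain-rate (symmetric) and rotation (antisymmetric) tensors
    s_norm2 = dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx)**2
    o_norm2 = 0.5 * (dvdx - dudy)**2
    return 0.5 * (o_norm2 - s_norm2)

# a vortex would then be detected as a connected region where Q exceeds a chosen threshold
```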

  20. Infrared tomographic PIV and 3D motion tracking system applied to aquatic predator-prey interaction

    NASA Astrophysics Data System (ADS)

    Adhikari, Deepak; Longmire, Ellen K.

    2013-02-01

    Infrared tomographic PIV and 3D motion tracking are combined to measure evolving volumetric velocity fields and organism trajectories during aquatic predator-prey interactions. The technique was used to study zebrafish foraging on both non-evasive and evasive prey species. Measurement volumes of 22.5 mm × 10.5 mm × 12 mm were reconstructed from images captured on a set of four high-speed cameras. To obtain accurate fluid velocity vectors within each volume, fish were first masked out using an automated visual hull method. Fish and prey locations were identified independently from the same image sets and tracked separately within the measurement volume. Experiments demonstrated that fish were not influenced by the infrared laser illumination or the tracer particles. Results showed that the zebrafish used different strategies, suction and ram feeding, for successful capture of non-evasive and evasive prey, respectively. The two strategies yielded different variations in fluid velocity between the fish mouth and the prey. In general, the results suggest that the local flow field, the direction of prey locomotion with respect to the predator and the relative accelerations and speeds of the predator and prey may all be significant in determining predation success.

  1. Real-time markerless tracking for augmented reality: the virtual visual servoing framework.

    PubMed

    Comport, Andrew I; Marchand, Eric; Pressigout, Muriel; Chaumette, François

    2006-01-01

    Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see-through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving-edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
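
    The robustness mechanism described here (an M-estimator embedded via iteratively reweighted least squares) can be sketched generically as below; this is a standalone IRLS solver with a Tukey biweight and a MAD scale estimate, assumed for illustration rather than the exact weighting used in the cited visual control law (where A would be the stacked interaction matrix and b the feature errors).

```python
import numpy as np

def irls(A, b, n_iter=10, c=4.685):
    """Robustly solve A x ≈ b with a Tukey biweight M-estimator via IRLS."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # ordinary least-squares start
    for _ in range(n_iter):
        r = b - A @ x                                  # residuals
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        u = r / (c * s)
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2)**2, 0.0)       # Tukey weights
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]  # weighted solve
    return x
```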

  2. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    NASA Astrophysics Data System (ADS)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  3. A smart homecage system with 3D tracking for long-term behavioral experiments.

    PubMed

    Lee, Byunghun; Kiani, Mehdi; Ghovanloo, Maysam

    2014-01-01

    A wirelessly-powered homecage system, called the EnerCage-HC, that is equipped with multi-coil wireless power transfer, closed-loop power control, optical behavioral tracking, and a graphic user interface (GUI) is presented for long-term electrophysiology experiments. The EnerCage-HC system can wirelessly power a mobile unit attached to a small animal subject and also track its behavior in real-time as it is housed inside a standard homecage. The EnerCage-HC system is equipped with one central and four overlapping slanted wire-wound coils (WWCs) with optimal geometries to form 3- and 4-coil power transmission links while operating at 13.56 MHz. Utilizing multi-coil links increases the power transfer efficiency (PTE) compared to conventional 2-coil links and also reduces the number of power amplifiers (PAs) to only one, which significantly reduces the system complexity, cost, and dissipated heat. A Microsoft Kinect installed 90 cm above the homecage localizes the animal position and orientation with 1.6 cm accuracy. An in vivo experiment was conducted on a freely behaving rat by continuously delivering 24 mW to the mobile unit for > 7 hours inside a standard homecage.

  4. Tracking Human Faces in Real-Time,

    DTIC Science & Technology

    1995-11-01

    human-computer interactive applications such as lip-reading and gaze tracking. The principle in developing this system can be extended to other tracking problems such as tracking the human hand for gesture recognition .

  5. Discovery of a biofilm electrocline using real-time 3D metabolite analysis.

    PubMed

    Koley, Dipankar; Ramsey, Matthew M; Bard, Allen J; Whiteley, Marvin

    2011-12-13

    Bacteria are social organisms that possess multiple pathways for sensing and responding to small molecules produced by other microbes. Most bacteria in nature exist in sessile communities called biofilms, and the ability of biofilm bacteria to sense and respond to small molecule signals and cues produced by neighboring biofilm bacteria is particularly important. To understand microbial interactions between biofilms, it is necessary to perform rapid, real-time spatial quantification of small molecules in microenvironments immediately surrounding biofilms; however, such measurements have been elusive. In this study, scanning electrochemical microscopy was used to quantify small molecules surrounding a biofilm in 3D space. Measuring concentrations of the redox-active signaling molecule pyocyanin (PYO) produced by biofilms of the bacterium Pseudomonas aeruginosa revealed a high concentration of PYO that is actively maintained in the reduced state proximal to the biofilm. This gradient results in a reduced layer of PYO that we have termed the PYO "electrocline," a gradient of redox potential, which extends several hundred microns from the biofilm surface. We also demonstrate that the PYO electrocline is formed under electron acceptor-limiting conditions, and that growth conditions favoring formation of the PYO electrocline correlate to an increase in soluble iron. Additionally, we have taken a "reactive image" of a biofilm surface, demonstrating the rate of bacterial redox activity across a 2D surface. These studies establish methodology for spatially coordinated concentration and redox status measurements of microbe-produced small molecules and provide exciting insights into the roles these molecules play in microbial competition and nutrient acquisition.

  6. Evaluation and comparison of current biopsy needle localization and tracking methods using 3D ultrasound.

    PubMed

    Zhao, Yue; Shen, Yi; Bernard, Adeline; Cachard, Christian; Liebgott, Hervé

    2017-01-01

    This article compares four different biopsy needle localization algorithms in both 3D and 4D situations to evaluate their accuracy and execution time. The localization algorithms were: Principal component analysis (PCA), random Hough transform (RHT), parallel integral projection (PIP) and ROI-RK (ROI-based RANSAC and Kalman filter). To enhance the contrast between the biopsy needle and the background tissue, a line filtering pre-processing step was implemented. To make the PCA, RHT and PIP algorithms comparable with the ROI-RK method, a region of interest (ROI) strategy was added. Simulated and ex-vivo data were used to evaluate the performance of the different biopsy needle localization algorithms. The resolutions of the sectorial and cylindrical volumes were 0.3 mm × 0.4 mm × 0.6 mm and 0.1 mm × 0.1 mm × 0.2 mm (axial × lateral × azimuthal), respectively. As the simulation and experimental results show, the ROI-RK method successfully located and tracked the biopsy needle in both 3D and 4D situations. The tip localization error was within 1.5 mm and the axis accuracy was within 1.6 mm. To the best of our knowledge, considering both localization accuracy and execution time, the ROI-RK was the most stable and time-saving method. Normally, accuracy comes at the expense of time. However, the ROI-RK method was able to locate the biopsy needle with high accuracy in real time, which makes it a promising method for clinical applications.
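
    As a rough illustration of the RANSAC component of the ROI-RK approach, the sketch below fits a 3D line (the needle axis) to candidate voxels; the iteration count and inlier threshold are illustrative assumptions, and the Kalman-filter tracking step described in the record is omitted.

```python
import numpy as np

def ransac_line_3d(pts, n_iter=500, tol=1.0, seed=None):
    """Fit a 3D line to candidate needle voxels with RANSAC.

    pts : (N, 3) coordinates (e.g. in mm) of voxels surviving the line filter.
    tol : inlier distance threshold to the candidate axis.
    Returns a point on the line, its unit direction, and the inlier mask.
    """
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(pts), dtype=bool))
    for _ in range(n_iter):
        p1, p2 = pts[rng.choice(len(pts), 2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        v = pts - p1
        # perpendicular distance of every point to the candidate axis
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        inliers = dist < tol
        if inliers.sum() > best[2].sum():
            best = (p1, d, inliers)
    return best
```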

  7. Automated 3D motion tracking using Gabor filter bank, robust point matching, and deformable models.

    PubMed

    Chen, Ting; Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2010-01-01

    Tagged magnetic resonance imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the robust point matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of 1) through-plane motion and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the moving least square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method

  8. 3-D geometry calibration and markerless electromagnetic tracking with a mobile C-arm

    NASA Astrophysics Data System (ADS)

    Cheryauka, Arvi; Barrett, Johnny; Wang, Zhonghua; Litvin, Andrew; Hamadeh, Ali; Beaudet, Daniel

    2007-03-01

    The design of mobile X-ray C-arm equipment with image tomography and surgical guidance capabilities involves the retrieval of repeatable gantry positioning in three-dimensional space. Geometry misrepresentations can cause degradation of the reconstruction results with the appearance of blurred edges, image artifacts, and even false structures. It may also amplify surgical instrument tracking errors leading to improper implant placement. In our prior publications we have proposed a C-arm 3D positioner calibration method comprising separate intrinsic and extrinsic geometry calibration steps. Following this approach, in the present paper, we extend the intrinsic geometry calibration of C-gantry beyond angular positions in the orbital plane into angular positions on a unit sphere of isocentric rotation. Our method makes deployment of markerless interventional tool guidance with use of high-resolution fluoro images and electromagnetic tracking feasible at any angular position of the tube-detector assembly. Variations of the intrinsic parameters associated with C-arm motion are measured off-line as functions of orbital and lateral angles. The proposed calibration procedure provides better accuracy, and prevents unnecessary workflow steps for surgical navigation applications. With a slight modification, the Misalignment phantom, a tool for intrinsic geometry calibration, is also utilized to obtain an accurate 'image-to-sensor' mapping. We show simulation results, image quality and navigation accuracy estimates, and feasibility data acquired with the prototype system. The experimental results show the potential of high-resolution CT imaging (voxel size below 0.5 mm) and confident navigation in an interventional surgery setting with a mobile C-arm.

  9. Model Estimation and Selection towards Unconstrained Real-Time Tracking and Mapping.

    PubMed

    Gauglitz, Steffen; Sweeney, Chris; Ventura, Jonathan; Turk, Matthew; Höllerer, Tobias

    2014-06-01

    We present an approach and prototype implementation to initialization-free real-time tracking and mapping that supports any type of camera motion in 3D environments, that is, parallax-inducing as well as rotation-only motions. Our approach effectively behaves like a keyframe-based Simultaneous Localization and Mapping system or a panorama tracking and mapping system, depending on the camera movement. It seamlessly switches between the two modes and is thus able to track and map through arbitrary sequences of parallax-inducing and rotation-only camera movements. The system integrates both model-based and model-free tracking, automatically choosing between the two depending on the situation, and subsequently uses the "Geometric Robust Information Criterion" to decide whether the current camera motion can best be represented as a parallax-inducing motion or a rotation-only motion. It continues to collect and map data after tracking failure by creating separate tracks which are later merged if they are found to overlap. This is in contrast to most existing tracking and mapping systems, which suspend tracking and mapping and thus discard valuable data until relocalization with respect to the initial map is successful. We tested our prototype implementation on a variety of video sequences, successfully tracking through different camera motions and fully automatically building combinations of panoramas and 3D structure.

  10. Real-Time Feature Tracking Using Homography

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel S.; Cheng, Yang; Ansar, Adnan I.; Trotz, David C.; Padgett, Curtis W.

    2010-01-01

    This software finds feature point correspondences in sequences of images. It is designed for feature matching in aerial imagery. Feature matching is a fundamental step in a number of important image processing operations: calibrating the cameras in a camera array, stabilizing images in aerial movies, geo-registration of images, and generating high-fidelity surface maps from aerial movies. The method uses a Shi-Tomasi corner detector and normalized cross-correlation. This process is likely to produce some mismatches. The feature set is cleaned up using the assumption that there is a large planar patch visible in both images. At high altitude, this assumption is often reasonable. A mathematical transformation, called a homography, is developed that allows us to predict the position in image 2 of any point on the plane in image 1. Any feature pair that is inconsistent with the homography is thrown out. The output of the process is a set of feature pairs, and the homography. The algorithms in this innovation are well known, but the new implementation improves the process in several ways. It runs in real time at 2 Hz on 64-megapixel imagery. The new Shi-Tomasi corner detector tries to produce the requested number of features by automatically adjusting the minimum distance between found features. The homography-finding code now uses an implementation of the RANSAC algorithm that adjusts the number of iterations automatically to achieve a pre-set probability of missing a set of inliers. The new interface allows the caller to pass in a set of predetermined points in one of the images. This makes it possible to track the same set of points through multiple frames.
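
    A hedged sketch of the overall pipeline described in this record (Shi-Tomasi corners, normalized cross-correlation matching, homography-based outlier rejection) using OpenCV; the patch, search-window and score-threshold values are illustrative assumptions and do not reproduce the flight-tested implementation.

```python
import cv2
import numpy as np

def match_features_homography(img1, img2, n_features=400, patch=15, search=40,
                              reproj_thresh=3.0):
    """Return matched corner pairs consistent with a RANSAC-estimated homography."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(g1, n_features, 0.01, 10).reshape(-1, 2)

    src, dst = [], []
    h = patch // 2
    for x, y in corners:
        x, y = int(round(x)), int(round(y))
        tmpl = g1[y - h:y + h + 1, x - h:x + h + 1]
        win = g2[max(0, y - search):y + search + 1, max(0, x - search):x + search + 1]
        if tmpl.shape != (patch, patch) or win.shape[0] <= patch or win.shape[1] <= patch:
            continue  # skip corners too close to the image border
        res = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score < 0.8:
            continue  # reject weak correlation peaks
        src.append((x, y))
        dst.append((max(0, x - search) + loc[0] + h, max(0, y - search) + loc[1] + h))

    src, dst = np.float32(src), np.float32(dst)
    # homography estimated with RANSAC; features inconsistent with it are thrown out
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    keep = inlier_mask.ravel().astype(bool)
    return src[keep], dst[keep], H
```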

  11. Real-Time Climate Simulations in the Interactive 3D Game Universe Sandbox ²

    NASA Astrophysics Data System (ADS)

    Goldenson, N. L.

    2014-12-01

    Exploration in an open-ended computer game is an engaging way to explore climate and climate change. Everyone can explore physical models with real-time visualization in the educational simulator Universe Sandbox ² (universesandbox.com/2), which includes basic climate simulations on planets. I have implemented a time-dependent, one-dimensional meridional heat transport energy balance model to run and be adjustable in real time in the midst of a larger simulated system. Universe Sandbox ² is based on the original game - at its core a gravity simulator - with other new physically-based content for stellar evolution, and handling collisions between bodies. Existing users are mostly science enthusiasts in informal settings. We believe that this is the first climate simulation to be implemented in a professionally developed computer game with modern 3D graphical output in real time. The type of simple climate model we've adopted helps us depict the seasonal cycle and the more drastic changes that come from changing the orbit or other external forcings. Users can alter the climate as the simulation is running by altering the star(s) in the simulation, dragging to change orbits and obliquity, adjusting the climate simulation parameters directly or changing other properties like CO2 concentration that affect the model parameters in representative ways. Ongoing visuals of the expansion and contraction of sea ice and snow-cover respond to the temperature calculations, and make it accessible to explore a variety of scenarios and intuitive to understand the output. Variables like temperature can also be graphed in real time. We balance computational constraints with the ability to capture the physical phenomena we wish to visualize, giving everyone access to a simple open-ended meridional energy balance climate simulation to explore and experiment with. The software lends itself to labs at a variety of levels about climate concepts including seasons, the Greenhouse effect
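
    For readers unfamiliar with the model class, one explicit time step of a diffusive one-dimensional meridional energy balance model is sketched below; the parameter values are generic Budyko/Sellers-style numbers assumed for illustration and are not those used in Universe Sandbox ².

```python
import numpy as np

def step_ebm(T, dt, S0=1365.0, A=203.3, B=2.09, D=0.555, C=4.0e7,
             alpha_warm=0.3, alpha_ice=0.6, T_ice=-10.0):
    """One explicit time step of a 1D meridional energy balance model.

    T  : zonal-mean temperature (degC) on n equal-area latitude bands (x = sin(lat)).
    dt : time step in seconds.  All parameter values are illustrative only.
    """
    n = T.size
    x = np.linspace(-1 + 1.0 / n, 1 - 1.0 / n, n)               # sine of band-centre latitude
    dx = x[1] - x[0]
    S = S0 / 4.0 * (1.0 - 0.482 * 0.5 * (3 * x**2 - 1))         # annual-mean insolation (P2 form)
    albedo = np.where(T < T_ice, alpha_ice, alpha_warm)         # crude ice-albedo feedback
    absorbed = S * (1.0 - albedo)
    olr = A + B * T                                              # linearized outgoing longwave
    # meridional heat diffusion: d/dx[(1 - x^2) dT/dx], with no-flux poles
    flux = (1 - (x[:-1] + dx / 2)**2) * np.diff(T) / dx
    flux = np.concatenate(([0.0], flux, [0.0]))
    transport = D * np.diff(flux) / dx
    return T + dt / C * (absorbed - olr + transport)

# example: spin up from a uniform 10 degC state with daily steps
T = np.full(90, 10.0)
for _ in range(3650):
    T = step_ebm(T, dt=86400.0)
```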

  12. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  13. Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events

    NASA Astrophysics Data System (ADS)

    Javidi, Bahram; Yeom, Seokwon; Moon, Inkyu; Daneshpanah, Mehdi

    2006-05-01

    In this paper, we present an overview of three-dimensional (3D) optical imaging techniques for real-time automated sensing, visualization, and recognition of dynamic biological microorganisms. Real-time sensing and 3D reconstruction of the dynamic biological microscopic objects can be performed by single-exposure on-line (SEOL) digital holographic microscopy. A coherent 3D microscope-based interferometer is constructed to record digital holograms of dynamic microbiological events. Complex amplitude 3D images of the biological microorganisms are computationally reconstructed at different depths by digital signal processing. Bayesian segmentation algorithms are applied to identify regions of interest for further processing. A number of pattern recognition approaches are addressed to identify and recognize the microorganisms. One uses the 3D morphology of the microorganisms by analyzing 3D geometrical shapes composed of magnitude and phase. Segmentation, feature extraction, graph matching, feature selection, and training and decision rules are used to recognize the biological microorganisms. In a different approach, a 3D technique is used that is tolerant to the varying shapes of the non-rigid biological microorganisms. After segmentation, a number of sampling patches are arbitrarily extracted from the complex amplitudes of the reconstructed 3D biological microorganism. These patches are processed using a number of cost functions and statistical inference theory for the equality of means and equality of variances between the sampling segments. Also, we discuss the possibility of employing computational integral imaging for 3D sensing, visualization, and recognition of biological microorganisms illuminated under incoherent light. Experimental results with several biological microorganisms are presented to illustrate detection, segmentation, and identification of microbiological events.

  14. Mapping 3D Strains with Ultrasound Speckle Tracking: Method Validation and Initial Results in Porcine Scleral Inflation.

    PubMed

    Cruz Perez, Benjamin; Pavlatos, Elias; Morris, Hugh J; Chen, Hong; Pan, Xueliang; Hart, Richard T; Liu, Jun

    2016-07-01

    This study aimed to develop and validate a high frequency ultrasound method for measuring distributive, 3D strains in the sclera during elevations of intraocular pressure. A 3D cross-correlation based speckle-tracking algorithm was implemented to compute the 3D displacement vector and strain tensor at each tracking point. Simulated ultrasound radiofrequency data from a sclera-like structure at undeformed and deformed states with known strains were used to evaluate the accuracy and signal-to-noise ratio (SNR) of strain estimation. An experimental high frequency ultrasound (55 MHz) system was built to acquire 3D scans of porcine eyes inflated from 15 to 17 and then 19 mmHg. Simulations confirmed good strain estimation accuracy and SNR (e.g., the axial strains had less than 4.5% error with SNRs greater than 16.5 for strains from 0.005 to 0.05). Experimental data in porcine eyes showed increasing tensile, compressive, and shear strains in the posterior sclera during inflation, with a volume ratio close to one suggesting near-incompressibility. This study established the feasibility of using high frequency ultrasound speckle tracking for measuring 3D tissue strains and its potential to characterize physiological deformations in the posterior eye.
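
    As a rough illustration of the cross-correlation-based speckle tracking described in this record, the sketch below estimates the integer-voxel displacement of a single tracking point between two ultrasound volumes; subvoxel refinement and the strain-tensor computation from the resulting displacement field are omitted, and the kernel and search sizes are assumptions.

```python
import numpy as np

def track_subvolume(vol0, vol1, center, half=8, search=4):
    """Integer-voxel displacement of one tracking point by normalized cross-correlation.

    vol0, vol1 : 3D arrays (undeformed and deformed radiofrequency volumes).
    center     : (z, y, x) voxel index of the tracking point in vol0.
    """
    z, y, x = center
    ker = vol0[z - half:z + half, y - half:y + half, x - half:x + half].astype(float)
    ker = (ker - ker.mean()) / (ker.std() + 1e-12)
    best, best_score = (0, 0, 0), -np.inf
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = vol1[z + dz - half:z + dz + half,
                            y + dy - half:y + dy + half,
                            x + dx - half:x + dx + half].astype(float)
                cand = (cand - cand.mean()) / (cand.std() + 1e-12)
                score = (ker * cand).mean()          # zero-mean normalized cross-correlation
                if score > best_score:
                    best, best_score = (dz, dy, dx), score
    return np.array(best), best_score
```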

  15. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    PubMed

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

    For a direct-detection 3D imaging lidar, the use of Geiger mode avalanche photodiode (Gm-APD) could greatly enhance the detection sensitivity of the lidar system since each range measurement requires a single detected photon. Furthermore, Gm-APD offers significant advantages in reducing the size, mass, power and complexity of the system. However the inevitable noise, including the background noise, the dark count noise and so on, remains a significant challenge to obtain a clear 3D image of the target of interest. This paper presents a smart strategy, which can filter out false alarms in the stage of acquisition of raw time of flight (TOF) data and obtain a clear 3D image in real time. As a result, a clear 3D image is taken from the experimental system despite the background noise of the sunny day.

  16. 3D surface real-time measurement using phase-shifted interference fringe technique for craniofacial identification

    NASA Astrophysics Data System (ADS)

    Levin, Gennady G.; Vishnyakov, Gennady N.; Naumov, Alexey V.; Abramov, Sergey

    1998-03-01

    We propose using real-time 3D surface profile measurement based on a phase-shifted interference fringe projection technique for craniofacial identification. Our system realizes the profile measurement by projecting an interference fringe pattern onto the object surface and observing the deformed fringe pattern from a direction different from the projection. Fringes are formed by a Michelson interferometer with one mirror mounted on a piezoelectric translator. A four-step self-calibrating phase-shift method was used.
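
    The four-step phase recovery itself is compact; a minimal sketch, assuming fringe images shifted by 0°, 90°, 180° and 270° (phase unwrapping and the triangulation calibration that converts phase to height are not shown):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 0, 90, 180, 270 degrees.

    For I_k = a + b*cos(phi + k*pi/2), the standard four-step formula is
    phi = arctan2(I4 - I2, I1 - I3).
    """
    return np.arctan2(i4.astype(float) - i2, i1.astype(float) - i3)
```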

  17. 3D real-time visualization of blood flow in cerebral aneurysms by light field particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart

    2016-04-01

    Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement on a microscopic level has not been possible so far. This would allow better individualized treatment planning and improve the design of devices. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis, to empirically assess a high number of different anatomic shapes and the corresponding effect of different devices, would require a fast, reliable, low-cost method with high-throughput assessment. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of structures has set the limits for conventional 3D-imaging camera set-ups. Online flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. generated by a light field camera capturing the 3D information by plenoptic imaging of complex flow processes. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences. Averaging across a sequence of

  18. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    SciTech Connect

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-07-15

    Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during the treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images during the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and therefore can be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not
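
    A simplified sketch of the geometric core of the idea: with a Gaussian prior over the 3D tumor position (built from setup projections), the unresolved coordinate along the current imaging ray can be taken as the point on that ray with minimal Mahalanobis distance to the prior. The Gaussian assumption and the omission of the sequential likelihood update are simplifications made for illustration only.

```python
import numpy as np

def map_point_on_ray(mu, Sigma, p0, d):
    """MAP 3D position given a Gaussian prior N(mu, Sigma) and a back-projected
    imaging ray p0 + t*d obtained from the current kV detection."""
    d = d / np.linalg.norm(d)
    Sinv = np.linalg.inv(Sigma)
    # t minimizing (p0 + t*d - mu)^T Sinv (p0 + t*d - mu)
    t = d @ Sinv @ (mu - p0) / (d @ Sinv @ d)
    return p0 + t * d
```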

  19. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800

  20. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phone. We integrate an eye detector, corner-eye center and iso-center to improve pupil detection. The optical flow information is used for eye tracking. We develop a robust eye tracking system that integrates eye detection and optical-flow based image tracking. In addition, we further incorporate the orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on some public video sequences as well as videos acquired directly from mobile phone.

  1. GPU-accelerated 3D mipmap for real-time visualization of ultrasound volume data.

    PubMed

    Kwon, Koojoo; Lee, Eun-Seok; Shin, Byeong-Seok

    2013-10-01

    Ultrasound volume rendering is an efficient method for visualizing the shape of fetuses in obstetrics and gynecology. However, in order to obtain high-quality ultrasound volume rendering, noise removal and coordinates conversion are essential prerequisites. Ultrasound data needs to undergo a noise filtering process; otherwise, artifacts and speckle noise cause quality degradation in the final images. Several two-dimensional (2D) noise filtering methods have been used to reduce this noise. However, these 2D filtering methods ignore relevant information in-between adjacent 2D-scanned images. Although three-dimensional (3D) noise filtering methods are used, they require more processing time than 2D-based methods. In addition, the sampling position in the ultrasonic volume rendering process has to be transformed between conical ultrasound coordinates and Cartesian coordinates. We propose a 3D-mipmap-based noise reduction method that uses graphics hardware, as a typical 3D mipmap requires less time to be generated and less storage capacity. In our method, we compare the density values of the corresponding points on consecutive mipmap levels and find the noise area using the difference in the density values. We also provide a noise detector for adaptively selecting the mipmap level using the difference of two mipmap levels. Our method can visualize 3D ultrasound data in real time with 3D noise filtering.

  2. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often not able to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec.

  3. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose.

    PubMed

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-21

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that using our hybrid approach, a smaller than 1.2 mm or 1.5 mm root-mean-square tracking error can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required
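
    The selection rule described in this record (the point on the MV projection line closest to the first principal-component segment of the recent trajectory) can be sketched as follows; the segment sampling and input conventions are assumptions made for illustration, and the auto-regressive prediction step is not shown.

```python
import numpy as np

def point_on_projection_line(p0, d, history):
    """Estimate the 3D marker position from a single MV projection line.

    p0, d   : a point on the back-projected MV line and its direction.
    history : (N, 3) previous 3D trajectory points used to build the first
              principal-component line segment.
    Returns the point on the projection line closest to that segment.
    """
    mean = history.mean(axis=0)
    _, _, vt = np.linalg.svd(history - mean)
    e = vt[0]                                   # first principal component
    proj = (history - mean) @ e                 # extent of trajectory along it
    a, b = mean + proj.min() * e, mean + proj.max() * e

    d = d / np.linalg.norm(d)
    best, best_dist = None, np.inf
    # sample the segment and keep the foot point on the line that is nearest
    for s in np.linspace(0.0, 1.0, 200):
        q = a + s * (b - a)
        t = (q - p0) @ d
        x = p0 + t * d                          # foot of q on the projection line
        dist = np.linalg.norm(x - q)
        if dist < best_dist:
            best, best_dist = x, dist
    return best
```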

  4. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose

    NASA Astrophysics Data System (ADS)

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-01

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that using our hybrid approach, a smaller than 1.2 mm or 1.5 mm root-mean-square tracking error can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required with the

  5. 3D Ultrasonic Needle Tracking with a 1.5D Transducer Array for Guidance of Fetal Interventions

    PubMed Central

    West, Simeon J.; Mari, Jean-Martial; Ourselin, Sebastien; David, Anna L.; Desjardins, Adrien E.

    2016-01-01

    Ultrasound image guidance is widely used in minimally invasive procedures, including fetal surgery. In this context, maintaining visibility of medical devices is a significant challenge. Needles and catheters can readily deviate from the ultrasound imaging plane as they are inserted. When the medical device tips are not visible, they can damage critical structures, with potentially profound consequences including loss of pregnancy. In this study, we performed 3D ultrasonic tracking of a needle using a novel probe with a 1.5D array of transducer elements that was driven by a commercial ultrasound system. A fiber-optic hydrophone integrated into the needle received transmissions from the probe, and data from this sensor was processed to estimate the position of the hydrophone tip in the coordinate space of the probe. Golay coding was used to increase the signal-to-noise ratio (SNR). The relative tracking accuracy was better than 0.4 mm in all dimensions, as evaluated using a water phantom. To obtain a preliminary indication of the clinical potential of 3D ultrasonic needle tracking, an intravascular needle insertion was performed in an in vivo pregnant sheep model. The SNR values ranged from 12 to 16 at depths of 20 to 31 mm and at an insertion angle of 49° relative to the probe surface normal. The results of this study demonstrate that 3D ultrasonic needle tracking with a fiber-optic hydrophone sensor and a 1.5D array is feasible in clinically realistic environments. PMID:28111644

  6. Real-Time Tracking of Knee Adduction Moment in Patients with Knee Osteoarthritis

    PubMed Central

    Kang, Sang Hoon; Lee, Song Joo; Zhang, Li-Qun

    2014-01-01

    Background The external knee adduction moment (EKAM) is closely associated with the presence, progression, and severity of knee osteoarthritis (OA). However, there is a lack of a convenient and practical method to estimate and track, in real time, the EKAM of patients with knee OA for clinical evaluation and gait training, especially outside of gait laboratories. New Method A real-time EKAM estimation method was developed and applied to track and investigate the EKAM and other knee moments during stepping on an elliptical trainer in both healthy subjects and a patient with knee OA. Results Substantial changes were observed in the EKAM and other knee moments during stepping in the patient with knee OA. Comparison with Existing Method(s) This is the first study to develop and test the feasibility of a real-time EKAM tracking method in patients with knee OA using 3-D inverse dynamics. Conclusions The study provides an accurate and practical method to evaluate in real time the critical EKAM associated with knee OA, which is expected to help diagnose and evaluate patients with knee OA and to provide them with real-time EKAM feedback during rehabilitation training. PMID:24361759

  7. Touring Mars Online, Real-time, in 3D for Math and Science Educators and Students

    ERIC Educational Resources Information Center

    Jones, Greg; Kalinowski, Kevin

    2007-01-01

    This article discusses a project that placed over 97% of Mars' topography made available from NASA into an interactive 3D multi-user online learning environment beginning in 2003. In 2005 curriculum materials that were created to support middle school math and science education were developed. Research conducted at the University of North Texas…

  8. Real-time, high-accuracy 3D imaging and shape measurement.

    PubMed

    Nguyen, Hieu; Nguyen, Dung; Wang, Zhaoyang; Kieu, Hien; Le, Minh

    2015-01-01

    In spite of the recent advances in 3D shape measurement and geometry reconstruction, simultaneously achieving fast-speed and high-accuracy performance remains a big challenge in practice. In this paper, a 3D imaging and shape measurement system is presented to tackle such a challenge. The fringe-projection-profilometry-based system employs a number of advanced approaches, such as: composition of phase-shifted fringe patterns, externally triggered synchronization of system components, generalized system setup, ultrafast phase-unwrapping algorithm, flexible system calibration method, robust gamma correction scheme, multithread computation and processing, and graphics-processing-unit-based image display. Experiments have shown that the proposed system can acquire and display high-quality 3D reconstructed images and/or video stream at a speed of 45 frames per second with relative accuracy of 0.04% or at a reduced speed of 22.5 frames per second with enhanced accuracy of 0.01%. The 3D imaging and shape measurement system shows great promise of satisfying the ever-increasing demands of scientific and engineering applications.

  9. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    NASA Astrophysics Data System (ADS)

    Peng, Hanchuan; Long, Fuhui

    Everyone understands seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruitfly brain. V3D can be easily extended using a simple-to-use and comprehensive plugin interface.

  10. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
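    To make the look-up-table idea concrete, the following is a minimal sketch of a histogram-built 3D RGB LUT and the per-pixel lookup used for detection. The bin count, the occurrence threshold and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def build_lut(fruit_pixels, bins=64, min_count=5):
    """Build a boolean 3D LUT over a quantized RGB cube from training fruit pixels (Nx3 uint8)."""
    counts = np.zeros((bins, bins, bins), dtype=np.int64)
    idx = (fruit_pixels.astype(np.int64) * bins) // 256          # Nx3 bin indices
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    return counts >= min_count

def detect(image, lut, bins=64):
    """Classify an HxWx3 uint8 RGB image with one LUT lookup per pixel; returns a boolean mask."""
    q = (image.astype(np.int64) * bins) // 256
    return lut[q[..., 0], q[..., 1], q[..., 2]]
```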

  11. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040

  12. Using 3D Glyph Visualization to Explore Real-time Seismic Data on Immersive and High-resolution Display Systems

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Lindquist, K.; Kilb, D.; Newman, R.; Vernon, F.; Leigh, J.; Johnson, A.; Renambot, L.

    2003-12-01

    The study of time-dependent, three-dimensional natural phenomena like earthquakes can be enhanced with innovative and pertinent 3D computer graphics. Here we display seismic data as 3D glyphs (graphics primitives or symbols with various geometric and color attributes), allowing us to visualize the measured, time-dependent, 3D wave field from an earthquake recorded by a certain seismic network. In addition to providing a powerful state-of-health diagnostic of the seismic network, the graphical result presents an intuitive understanding of the real-time wave field that is hard to achieve with traditional 2D visualization methods. We have named these 3D icons `seismoglyphs' to suggest visual objects built from three components of ground motion data (north-south, east-west, vertical) recorded by a seismic sensor. A seismoglyph changes color with time, spanning the spectrum, to indicate when the seismic amplitude is largest. The spatial extent of the glyph indicates the polarization of the wave field as it arrives at the recording station. We compose seismoglyphs using the real time ANZA broadband data (http://www.eqinfo.ucsd.edu) to understand the 3D behavior of a seismic wave field in Southern California. Fifteen seismoglyphs are drawn simultaneously with a 3D topography map of Southern California, as real time data is piped into the graphics software using the Antelope system. At each station location, the seismoglyph evolves with time and this graphical display allows a scientist to observe patterns and anomalies in the data. The display also provides visual clues to indicate wave arrivals and ~real-time earthquake detection. Future work will involve adding phase detections, network triggers and near real-time 2D surface shaking estimates. The visuals can be displayed in an immersive environment using the passive stereoscopic Geowall (http://www.geowall.org). The stereographic projection allows for a better understanding of attenuation due to distance and earth

  13. 3D-SURFER 2.0: web platform for real-time search and characterization of protein surfaces.

    PubMed

    Xiong, Yi; Esquivel-Rodriguez, Juan; Sael, Lee; Kihara, Daisuke

    2014-01-01

    The increasing number of uncharacterized protein structures necessitates the development of computational approaches for function annotation using the protein tertiary structures. Protein structure database search is the basis of any structure-based functional elucidation of proteins. 3D-SURFER is a web platform for real-time protein surface comparison of a given protein structure against the entire PDB using 3D Zernike descriptors. It can smoothly navigate the protein structure space in real-time from one query structure to another. A major new feature of Release 2.0 is the ability to compare the protein surface of a single chain, a single domain, or a single complex against databases of protein chains, domains, complexes, or a combination of all three in the latest PDB. Additionally, two types of protein structures can now be compared: all-atom-surface and backbone-atom-surface. The server can also accept a batch job for a large number of database searches. Pockets in protein surfaces can be identified by VisGrid and LIGSITE (csc) . The server is available at http://kiharalab.org/3d-surfer/.

  14. Real-time 3D ultrasound fetal image enhancement techniques using motion-compensated frame rate up-conversion

    NASA Astrophysics Data System (ADS)

    Lee, Gun-Ill; Park, Rae-Hong; Song, Young-Seuk; Kim, Cheol-An; Hwang, Jae-Sub

    2003-05-01

    In this paper, we present a motion-compensated frame rate up-conversion method for real-time three-dimensional (3-D) ultrasound fetal image enhancement. The conventional mechanical scan method with one-dimensional (1-D) array converters used for 3-D volume data acquisition has a slow frame rate of multi-planar images. This drawback is not an issue for stationary objects; however, in ultrasound images showing a fetus of more than about 25 weeks, we perceive abrupt changes due to fast motions. To compensate for this defect, we propose a frame rate up-conversion method in which new interpolated frames are inserted between two input frames, giving smooth renditions to human eyes. More natural motions can be obtained by frame rate up-conversion. In the proposed algorithm, we employ forward motion estimation (ME), in which motion vectors (MVs) are estimated using a block matching algorithm (BMA). To smooth MVs over neighboring blocks, vector median filtering is performed. Using these smoothed MVs, interpolated frames are reconstructed by motion compensation (MC). The undesirable blocking artifacts due to blockwise processing are reduced by block boundary filtering using a Gaussian low pass filter (LPF). The proposed method can be used in computer aided diagnosis (CAD), where more natural 3-D ultrasound images are displayed in real-time. Simulation results with several real test sequences show the effectiveness of the proposed algorithm.
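    The abstract names the building blocks (block-matching motion estimation, vector median filtering, motion compensation) without code; below is a brute-force sketch of the block-matching step only, assuming grayscale frames. The block size, search range and SAD cost are assumptions, and the filtering and interpolation stages are omitted.

```python
import numpy as np

def block_match(prev, curr, block=16, search=8):
    """Full-search block matching between two grayscale frames.

    Returns an (H//block, W//block, 2) array of (dy, dx) motion vectors that map
    each block of `prev` to its minimum-SAD match in `curr`."""
    H, W = prev.shape
    mvs = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(int)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(curr[y:y + block, x:x + block].astype(int) - ref).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs
```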

  15. The 3D Tele Motion Tracking for the Orthodontic Facial Analysis

    PubMed Central

    Nota, Alessandro; Marchetti, Enrico; Padricelli, Giuseppe; Marzo, Giuseppe

    2016-01-01

    Aim. This study aimed to evaluate the reliability of 3D-TMT, previously used only for dynamic testing, in a static cephalometric evaluation. Material and Method. A group of 40 patients (20 males and 20 females; mean age 14.2 ± 1.2 years; 12–18 years old) was included in the study. For each subject, the measurements obtained by the 3D-TMT cephalometric analysis were compared with a conventional frontal cephalometric analysis. Nine passive reflective markers were positioned on the facial skin to detect the patient's profile. Through the acquisition of these points, the corresponding planes for three-dimensional posterior-anterior cephalometric analysis were found. Results. The cephalometric results obtained with 3D-TMT and with traditional posterior-anterior cephalometric analysis showed that the 3D-TMT values are slightly, but statistically significantly, higher than the values measured on radiographs; nevertheless, their correlation is very high. Conclusion. The values recorded using the 3D-TMT analysis were correlated with the cephalometric analysis, with small but statistically significant differences. The Dahlberg errors were always lower than the mean difference between the 2D and 3D measurements. A clinician should always use the same method during the clinical monitoring of a patient, to avoid comparing different millimeter magnitudes. PMID:28044130

  16. Accurate and high-performance 3D position measurement of fiducial marks by stereoscopic system for railway track inspection

    NASA Astrophysics Data System (ADS)

    Gorbachev, Alexey A.; Serikova, Mariya G.; Pantyushina, Ekaterina N.; Volkova, Daria A.

    2016-04-01

    Modern demands for railway track measurements require high accuracy (about 2-5 mm) of rail placement along the track to ensure smooth, safe and fast transportation. As a means of railway geometry measurement, we propose a stereoscopic system that measures the 3D position of fiducial marks arranged along the track using image processing algorithms. The system accuracy was verified during laboratory tests by comparison with precise laser tracker indications. An accuracy of +/-1.5 mm within a measurement volume of 150×400×5000 mm was achieved during the tests. This confirms that the stereoscopic system demonstrates good measurement accuracy and can potentially be used as a fully automated means of railway track inspection.

  17. Real-time registration by tracking for MR-guided cardiac interventions

    NASA Astrophysics Data System (ADS)

    Chung, Desmond; Satkunasingham, Janakan; Wright, Graham; Radau, Perry

    2006-03-01

    Cardiac interventional procedures such as myocardial stem cell delivery and radiofrequency ablation require a high degree of accuracy and efficiency. Real-time, 2-D MR technology is being developed to guide such procedures; the associated challenges include the relatively low resolution and image quality of real-time images. Real-time MR guidance can be enhanced by acquiring a 4-D (3-D + phase) volume prior to the procedure and aligning it to the 2-D real-time images, so that corresponding features in the prior volume can be integrated into the real-time image visualization. This technique provides spatial context with high resolution and SNR. A left ventricular (LV) myocardial wall contour tracking system was developed to maintain spatial alignment of prior volume images to real-time MR images. Over 9 test image sequences, each comprising 100 frames of simulated respiratory motion, the tracker maintained alignment with a mean displacement error of 1.61 mm in a region of interest around the LV, compared to a mean displacement error of 5.2 mm without tracking.

  18. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.

  19. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    SciTech Connect

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography to the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed a few years ago for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for CT image reconstruction. In this paper the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  20. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    NASA Astrophysics Data System (ADS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin, Vibin

    2008-09-01

    The application of Digital Radiography to the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed a few years ago for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for CT image reconstruction. In this paper the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  1. Real-time forecasting of Hong Kong beach water quality by 3D deterministic model.

    PubMed

    Chan, S N; Thoe, W; Lee, J H W

    2013-03-15

    Bacterial level (e.g. Escherichia coli) is generally adopted as the key indicator of beach water quality due to its high correlation with swimming-associated illnesses. A 3D deterministic hydrodynamic model is developed to provide daily water quality forecasting for eight marine beaches in Tsuen Wan, which are only about 8 km from the Harbour Area Treatment Scheme (HATS) outfall discharging 1.4 million m³/d of partially-treated sewage. The fate and transport of the HATS effluent and its impact on the E. coli level at nearby beaches are studied. The model features the seamless coupling of near field jet mixing and the far field transport and dispersion of wastewater discharge from submarine outfalls, and a spatial-temporal dependent E. coli decay rate formulation specifically developed for sub-tropical Hong Kong waters. The model prediction of beach water quality has been extensively validated against field data both before and after disinfection of the HATS effluent. Compared with daily beach E. coli data during August-November 2011, the model achieves an overall accuracy of 81-91% in forecasting compliance/exceedance of the beach water quality standard. The 3D deterministic model has been most valuable in the interpretation of the complex variation of beach water quality, which depends on tidal level, solar radiation and other hydro-meteorological factors. The model can also be used in optimization of disinfection dosage and in emergency response situations.

  2. Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I

    NASA Astrophysics Data System (ADS)

    Gonthier, David L.; Veron, Harry

    1998-04-01

    A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in stealth mode or as a player which includes battlefield simulations, such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high quality and fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows95 and WindowsNT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare is used with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in a real-time collaborative simulation with other platforms is thus achieved.

  3. A spheroid toxicity assay using magnetic 3D bioprinting and real-time mobile device-based imaging

    PubMed Central

    Tseng, Hubert; Gage, Jacob A.; Shen, Tsaiwei; Haisler, William L.; Neeley, Shane K.; Shiao, Sue; Chen, Jianbo; Desai, Pujan K.; Liao, Angela; Hebel, Chris; Raphael, Robert M.; Becker, Jeanne L.; Souza, Glauco R.

    2015-01-01

    An ongoing challenge in biomedical research is the search for simple, yet robust assays using 3D cell cultures for toxicity screening. This study addresses that challenge with a novel spheroid assay, wherein spheroids, formed by magnetic 3D bioprinting, contract immediately as cells rearrange and compact the spheroid in relation to viability and cytoskeletal organization. Thus, spheroid size can be used as a simple metric for toxicity. The goal of this study was to validate spheroid contraction as a cytotoxic endpoint using 3T3 fibroblasts in response to 5 toxic compounds (all-trans retinoic acid, dexamethasone, doxorubicin, 5′-fluorouracil, forskolin), sodium dodecyl sulfate (+control), and penicillin-G (−control). Real-time imaging was performed with a mobile device to increase throughput and efficiency. All compounds but penicillin-G significantly slowed contraction in a dose-dependent manner (Z’ = 0.88). Cells in 3D were more resistant to toxicity than cells in 2D, whose toxicity was measured by the MTT assay. Fluorescent staining and gene expression profiling of spheroids confirmed these findings. The results of this study validate spheroid contraction within this assay as an easy, biologically relevant endpoint for high-throughput compound screening in representative 3D environments. PMID:26365200

  4. Application of 3D digital image correlation to track displacements and strains of canvas paintings exposed to relative humidity changes.

    PubMed

    Malowany, Krzysztof; Tymińska-Widmer, Ludmiła; Malesa, Marcin; Kujawińska, Małgorzata; Targowski, Piotr; Rouba, Bogumiła J

    2014-03-20

    This paper introduces a methodology for tracking displacements in canvas paintings exposed to relative humidity changes. Displacements are measured by means of the 3D digital image correlation method that is followed by a postprocessing of displacement data, which allows the separation of local displacements from global displacement maps. The applicability of this methodology is tested on measurements of a model painting on canvas with introduced defects causing local inhomogeneity. The method allows the evaluation of conservation methods used for repairing canvas supports.

  5. Atmospheric Motion Vectors from INSAT-3D: Initial quality assessment and its impact on track forecast of cyclonic storm NANAUK

    NASA Astrophysics Data System (ADS)

    Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham

    2016-03-01

    The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder and is placed at 82°E over the Indian Ocean region. Advances in the retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for a better understanding of the different tropical atmospheric processes over this region. The retrieval technique and accuracy of one such parameter, Atmospheric Motion Vectors (AMV), have improved significantly with the availability of higher-spatial-resolution data and more spectral channel options in the INSAT-3D imager. The present work is mainly focused on providing a brief description of INSAT-3D data and the AMV derivation process using these data. It also presents an initial quality assessment of INSAT-3D AMVs for a six-month period from 01 February 2014 to 31 July 2014 against other independent observations: i) Meteosat-7 AMVs available over this region, ii) in-situ radiosonde wind measurements, iii) cloud-tracked winds from the Multi-angle Imaging Spectro-Radiometer (MISR) and iv) numerical model analysis. It is observed from this study that the quality of the newly derived INSAT-3D AMVs is comparable with the two existing versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, INSAT-3D AMVs are assimilated in the Weather Research and Forecasting (WRF) model, and it is found that the assimilation of the newly derived AMVs helped reduce the track forecast errors of the recent cyclonic storm NANAUK over the Arabian Sea. Although the present study is limited to one case, it provides some guidance to operational agencies for implementing this new AMV dataset in future Numerical Weather Prediction (NWP) applications over the South Asia region.

  6. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-03-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in realtime by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  7. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures.

    PubMed

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R

    2012-02-23

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  8. 3D Near Infrared and Ultrasound Imaging of Peripheral Blood Vessels for Real-Time Localization and Needle Guidance

    PubMed Central

    Chen, Alvin I.; Balter, Max L.; Maguire, Timothy J.; Yarmush, Martin L.

    2016-01-01

    This paper presents a portable imaging device designed to detect peripheral blood vessels for cannula insertion that are otherwise difficult to visualize beneath the skin. The device combines near infrared stereo vision, ultrasound, and real-time image analysis to map the 3D structure of subcutaneous vessels. We show that the device can identify adult forearm vessels and be used to guide manual insertions in tissue phantoms with increased first-stick accuracy compared to unassisted cannulation. We also demonstrate that the system may be coupled with a robotic manipulator to perform automated, image-guided venipuncture. PMID:27981261

  9. 3D tracking and phase-contrast imaging by twin-beams digital holographic microscope in microfluidics

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Finizio, A.; Paturzo, M.; Merola, F.; Grilli, S.; Ferraro, P.

    2012-06-01

    A compact twin-beam interferometer that can be adopted as a flexible diagnostic tool in microfluidic platforms is presented. The device has two functionalities, as explained below, and can be easily integrated into a microfluidic chip. The configuration allows 3D tracking of micro-particles and, at the same time, furnishes Quantitative Phase-Contrast maps of the tracked micro-objects by interference microscopy. Experimental demonstration of its effectiveness and compatibility with the biological field is given for in vitro cells in a microfluidic environment. Nowadays, several microfluidic configurations exist and many of them are commercially available; their development is driven by the possibility of manipulating droplets, handling micro- and nano-objects, visualizing and quantifying processes occurring in small volumes and, clearly, by direct applications in lab-on-a-chip devices. In the microfluidics research field, optical/photonic approaches are the most suitable because they are non-contact, full-field and non-invasive, and can be packaged thanks to the development of integrable optics. Moreover, phase-contrast approaches, adapted to lab-on-a-chip configurations, make it possible to obtain quantitative information with remarkable lateral and vertical resolution directly in situ, without the need to dye and/or kill cells. Furthermore, numerical techniques for tracking micro-objects need to be developed for measuring velocity fields, trajectory patterns, the motility of cancer cells and so on. Here, we present a compact holographic microscope that can ensure, with the same configuration and simultaneously, accurate 3D tracking and quantitative phase-contrast analysis. The system, simple and solid, is based on twin laser beams coming from a single laser source. Through an easy conceptual design, we show how these two different functionalities can be accomplished by the same optical setup. The working principle, the optical setup and the mathematical

  10. Real-time optical holographic tracking of multiple objects.

    PubMed

    Chao, T H; Liu, H K

    1989-01-15

    A coherent optical correlation technique for real-time simultaneous tracking of several different objects making independent movements is described, and experimental results are presented. An evaluation of this system compared with digital computing systems is made. The real-time processing capability is obtained through the use of a liquid crystal television spatial light modulator and a dichromated gelatin multifocus hololens. A coded reference beam is utilized in the separation of the output correlation plane associated with each input target so that independent tracking can be achieved.

  11. Handheld portable real-time tracking and communications device

    DOEpatents

    Wiseman, James M [Albuquerque, NM; Riblett, Jr., Loren E.; Green, Karl L [Albuquerque, NM; Hunter, John A [Albuquerque, NM; Cook, III, Robert N.; Stevens, James R [Arlington, VA

    2012-05-22

    Portable handheld real-time tracking and communications devices include: a controller module, a communications module (including global positioning and a mesh network radio), a data transfer and storage module, and a user interface module, enclosed in a water-resistant enclosure. Real-time tracking and communications devices can be used by protective force, security and first responder personnel to provide situational awareness, allowing for enhanced coordination and effectiveness in rapid response situations. Such devices communicate with other authorized devices via mobile ad-hoc wireless networks, and do not require fixed infrastructure for their operation.

  12. Real-time optical holographic tracking of multiple objects

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1989-01-01

    A coherent optical correlation technique for real-time simultaneous tracking of several different objects making independent movements is described, and experimental results are presented. An evaluation of this system compared with digital computing systems is made. The real-time processing capability is obtained through the use of a liquid crystal television spatial light modulator and a dichromated gelatin multifocus hololens. A coded reference beam is utilized in the separation of the output correlation plane associated with each input target so that independent tracking can be achieved.

  13. Real-time seam tracking for rocket thrust chamber manufacturing

    SciTech Connect

    Schmitt, D.J.; Novak, J.L.; Starr, G.P.; Maslakowski, J.E.

    1993-11-01

    A sensor-based control approach for real-time seam tracking of rocket thrust chamber assemblies has been developed to enable automation of a braze paste dispensing process. This approach utilizes a non-contact Multi-Axis Seam Tracking (MAST) sensor to track the seams. The MAST sensor measures capacitance variations between the sensor and the workpiece and produces four varying voltages which are read directly into the robot controller. A PID control algorithm which runs at the application program level has been designed based upon a simple dynamic model of the combined robot and sensor plant. The control algorithm acts on the incoming sensor signals in real-time to guide the robot motion along the seam path. Experiments demonstrate that seams can be tracked at 100 mm/sec within the accuracy required for braze paste dispensing.
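    As a minimal sketch of the PID law described above, the snippet below assumes the four MAST voltages have already been reduced to a single lateral offset from the seam; the gains, units and class interface are illustrative rather than the published controller.

```python
class SeamTrackingPID:
    """Discrete PID controller producing a lateral correction from a measured seam offset."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def correction(self, offset_mm):
        """offset_mm: measured lateral deviation from the seam.
        Returns the velocity correction to superimpose on the nominal robot path."""
        self.integral += offset_mm * self.dt
        derivative = (offset_mm - self.prev_error) / self.dt
        self.prev_error = offset_mm
        return (self.kp * offset_mm
                + self.ki * self.integral
                + self.kd * derivative)
```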

  14. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interfacing devices: mice, joysticks, MIDI sliders, and so on. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game which requires very fast movement of the human character.

  15. Registration of clinical volumes to beams-eye-view images for real-time tracking

    SciTech Connect

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.; Mishra, Pankaj; Berbeco, Ross I.; Keall, Paul J.

    2014-12-15

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations.
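    The registration step hinges on normalized cross correlation between a DRR and the end-of-exhale BEV image. The sketch below shows that score together with an exhaustive integer-translation search; rotation is omitted and the search range is an assumption, not the authors' optimizer.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_inplane_shift(drr, bev, max_shift=20):
    """Exhaustive search over integer in-plane translations of the DRR against the BEV image.

    np.roll wraps pixels around the border, which is acceptable for a small-shift sketch;
    a real implementation would crop or pad instead and also search over rotation."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = ncc(np.roll(drr, (dy, dx), axis=(0, 1)), bev)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score
```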

  16. Registration of clinical volumes to beams-eye-view images for real-time tracking

    PubMed Central

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.; Mishra, Pankaj; Keall, Paul J.; Berbeco, Ross I.

    2014-01-01

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations. PMID:25471950

  17. Real-time 3D imaging of microstructure growth in battery cells using indirect MRI

    PubMed Central

    Ilott, Andrew J.; Mohammadi, Mohaddese; Chang, Hee Jung; Grey, Clare P.; Jerschow, Alexej

    2016-01-01

    Lithium metal is a promising anode material for Li-ion batteries due to its high theoretical specific capacity and low potential. The growth of dendrites is a major barrier to the development of high capacity, rechargeable Li batteries with lithium metal anodes, and hence, significant efforts have been undertaken to develop new electrolytes and separator materials that can prevent this process or promote smooth deposits at the anode. Central to these goals, and to the task of understanding the conditions that initiate and propagate dendrite growth, is the development of analytical and nondestructive techniques that can be applied in situ to functioning batteries. MRI has recently been demonstrated to provide noninvasive imaging methodology that can detect and localize microstructure buildup. However, until now, monitoring dendrite growth by MRI has been limited to observing the relatively insensitive metal nucleus directly, thus restricting the temporal and spatial resolution and requiring special hardware and acquisition modes. Here, we present an alternative approach to detect a broad class of metallic dendrite growth via the dendrites’ indirect effects on the surrounding electrolyte, allowing for the application of fast 3D 1H MRI experiments with high resolution. We use these experiments to reconstruct 3D images of growing Li dendrites from MRI, revealing details about the growth rate and fractal behavior. Radiofrequency and static magnetic field calculations are used alongside the images to quantify the amount of the growing structures. PMID:27621444

  18. Computer Vision Tracking Using Particle Filters for 3D Position Estimation

    DTIC Science & Technology

    2014-03-27

    Photogrammetry is the process of determining 3-D coordinates through images. The mathematical underpinnings of photogrammetry are rooted in the 1480s with Leonardo da Vinci's study of perspectives [8, p. 1]. However, digital photogrammetry did not emerge

  19. 3D GABA imaging with real-time motion correction, shim update and reacquisition of adiabatic spiral MRSI.

    PubMed

    Bogner, Wolfgang; Gagoski, Borjan; Hess, Aaron T; Bhat, Himanshu; Tisdall, M Dylan; van der Kouwe, Andre J W; Strasser, Bernhard; Marjańska, Małgorzata; Trattnig, Siegfried; Grant, Ellen; Rosen, Bruce; Andronesi, Ovidiu C

    2014-12-01

    Gamma-aminobutyric acid (GABA) and glutamate (Glu) are the major neurotransmitters in the brain. They are crucial for the functioning of the healthy brain and their alteration is a major mechanism in the pathophysiology of many neuro-psychiatric disorders. Magnetic resonance spectroscopy (MRS) is the only way to measure GABA and Glu non-invasively in vivo. GABA detection is particularly challenging and requires special MRS techniques. The most popular is MEscher-GArwood (MEGA) difference editing with single-voxel Point RESolved Spectroscopy (PRESS) localization. This technique has three major limitations: a) MEGA editing is a subtraction technique, and hence is very sensitive to scanner instabilities and motion artifacts. b) PRESS is prone to localization errors at high fields (≥3T) that compromise accurate quantification. c) Single-voxel spectroscopy can (similar to a biopsy) only probe steady GABA and Glu levels in a single location at a time. To mitigate these problems, we implemented a 3D MEGA-editing MRS imaging sequence with the following three features: a) Real-time motion correction, dynamic shim updates, and selective reacquisition to eliminate subtraction artifacts due to scanner instabilities and subject motion. b) Localization by Adiabatic SElective Refocusing (LASER) to improve the localization accuracy and signal-to-noise ratio. c) K-space encoding via a weighted stack of spirals provides 3D metabolic mapping with flexible scan times. Simulations, phantom and in vivo experiments prove that our MEGA-LASER sequence enables 3D mapping of GABA+ and Glx (Glutamate+Glutamine), by providing a 1.66 times larger signal for the 3.02 ppm multiplet of GABA+ compared to MEGA-PRESS, leading to clinically feasible scan times for 3D brain imaging. Hence, our sequence allows accurate and robust 3D mapping of brain GABA+ and Glx levels to be performed on clinical 3T MR scanners for use in neuroscience and clinical applications.

  20. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

    We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.
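    MobileFusion's own pipeline is not published as code, but the volumetric fusion stage it borrows from KinectFusion can be sketched as a per-voxel truncated-signed-distance update. The array names, the truncation distance and the simple +1 weighting below are assumptions for illustration.

```python
import numpy as np

def fuse_depth(tsdf, weight, origin, voxel_size, depth, K, T_wc, trunc=0.03):
    """Fold one depth map into a TSDF voxel grid (one KinectFusion-style update).

    tsdf, weight : (X, Y, Z) float arrays holding the running signed distance and weight
    origin       : world coordinates of voxel (0, 0, 0); voxel_size in metres
    depth        : HxW depth image in metres; K: 3x3 intrinsics; T_wc: 4x4 camera-to-world pose"""
    H, W = depth.shape
    T_cw = np.linalg.inv(T_wc)                                   # world -> camera
    ii, jj, kk = np.indices(tsdf.shape)
    pts_w = origin + voxel_size * np.stack([ii, jj, kk], -1).reshape(-1, 3)
    pts_c = (T_cw[:3, :3] @ pts_w.T + T_cw[:3, 3:4]).T
    z = pts_c[:, 2]
    front = z > 1e-6
    u = np.zeros_like(z, dtype=int)
    v = np.zeros_like(z, dtype=int)
    u[front] = np.round(K[0, 0] * pts_c[front, 0] / z[front] + K[0, 2]).astype(int)
    v[front] = np.round(K[1, 1] * pts_c[front, 1] / z[front] + K[1, 2]).astype(int)
    valid = front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    sdf = np.where(valid, 0.0, np.nan)
    sdf[valid] = depth[v[valid], u[valid]] - z[valid]            # distance to surface along the ray
    keep = valid & (sdf > -trunc)
    new = np.clip(sdf[keep] / trunc, -1.0, 1.0)
    t, w = tsdf.reshape(-1), weight.reshape(-1)                  # flat views into the grids
    t[keep] = (w[keep] * t[keep] + new) / (w[keep] + 1.0)        # running weighted average
    w[keep] += 1.0
```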

  1. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; therefore a trade-off between accuracy and speed, and vice versa, is required. This paper aims to achieve the two requirements together by implementing an accurate and time-efficient tracking system. In this paper, an eye-to-hand visual system that has the ability to automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected by using a new colour selection process; the process relies on the use of a colour segmentation method (Delta E) with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the six-degree-of-freedom (DOF) robot end-effector that performs a pick-and-place task. Two eye-to-hand cameras with image averaging filters are used in cooperation to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed for the calculation of the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot
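    As an illustration of restricting the Circular Hough Transform to a controlled search window, the OpenCV sketch below detects the spherical target inside a rectangular ROI; the Hough parameters, the blur kernel and the ROI handling are assumptions, and the Delta E colour pre-segmentation is omitted.

```python
import cv2

def track_sphere(frame_bgr, roi):
    """Detect the spherical target inside a rectangular region of interest.

    roi = (x, y, w, h) in full-image pixels; restricting the Circular Hough Transform
    to this window is the essence of the search-window speed-up. Returns (cx, cy, r)
    in full-image coordinates, or None if no circle is found."""
    x, y, w, h = roi
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=80)
    if circles is None:
        return None
    cx, cy, r = circles[0][0]
    return (x + cx, y + cy, r)
```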

  2. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks.

    PubMed

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P

    2017-01-07

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; therefore a trade-off between accuracy and speed, and vice versa, is required. This paper aims to achieve the two requirements together by implementing an accurate and time-efficient tracking system. In this paper, an eye-to-hand visual system that has the ability to automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected by using a new colour selection process; the process relies on the use of a colour segmentation method (Delta E) with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the six-degree-of-freedom (DOF) robot end-effector that performs a pick-and-place task. Two eye-to-hand cameras with image averaging filters are used in cooperation to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed for the calculation of the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot

  3. Feasibility of modulation-encoded TOBE CMUTS for real-time 3-D imaging.

    PubMed

    Chee, Ryan K W; Zemp, Roger J

    2015-04-01

    Modulation-encoded top orthogonal to bottom electrode (TOBE) capacitive micromachined ultrasound transducers (CMUTs) are proposed 2-D ultrasound transducer arrays that could allow 3-D images to be acquired in a single acquisition using only N channels for an N × N array. In the proposed modulation-encoding scheme, columns are not only biased, but also modulated with a different frequency for each column. The modulation frequencies are higher than the passband of the CMUT membranes and mix nonlinearly in CMUT cells with acoustic signals to produce acoustic signal sidebands around the modulation carriers in the frequency domain. Thus, signals from elements along a row may be read out simultaneously via frequency-domain multiplexing. We present the theory and feasibility data behind modulation-encoded TOBE CMUTs. We also present experiments showing necessary modifications to the current TOBE design that would allow for crosstalk-mitigated modulation-encoding.

  4. Novel real-time 3D radiological mapping solution for ALARA maximization, D and D assessments and radiological management

    SciTech Connect

    Dubart, Philippe; Hautot, Felix; Morichi, Massimo; Abou-Khalil, Roger

    2015-07-01

    Good management of dismantling and decontamination (D and D) operations and activities requires safety, time savings and perfect radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the operational approach to nuclear measurement, requiring, in such emergency situations, fast deployment and intervention, quick analysis and fast scenario definition. Drawing on the experience gained from its activities at Fukushima and at D and D sites, AREVA has developed, as part of its D and D research approach and methods, a novel multi-sensor solution: a system providing real-time 3D photo-realistic spatial radiation-distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile device (e.g. robot, drone). In this paper, we present our current development based on SLAM (Simultaneous Localization And Mapping) technology and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisition. This enabling technology permits 3D gamma activity cartography in real-time. (authors)

  5. Registration and real-time visualization of transcranial magnetic stimulation with 3-D MR images.

    PubMed

    Noirhomme, Quentin; Ferrant, Matthieu; Vandermeeren, Yves; Olivier, Etienne; Macq, Benoît; Cuisenaire, Olivier

    2004-11-01

    This paper describes a method for registering and visualizing in real-time the results of transcranial magnetic stimulation (TMS) in physical space on the corresponding anatomical locations in MR images of the brain. The method proceeds in three main steps. First, the patient's scalp is digitized in physical space with a magnetic-field digitizer, following a specific digitization pattern. Second, a registration process minimizes the mean square distance between those points and a segmented scalp surface extracted from the magnetic resonance image. Following this registration, the physician can follow the change in coil position in real-time through the visualization interface and adjust the coil position to the desired anatomical location. Third, amplitudes of motor evoked potentials can be projected onto the segmented brain in order to create functional brain maps. The registration has subpixel accuracy in a study with simulated data, while we obtain a point-to-surface root-mean-square error of 1.17+/-0.38 mm in a 24-subject study.
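    The registration above minimizes the mean-square distance between digitized scalp points and the segmented MRI scalp surface. Each iteration of such a scheme typically uses the closed-form Kabsch/SVD rigid fit shown below; the closest-point search on the surface is omitted and all names are illustrative.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q (both Nx3).

    In this setting P would be the digitized scalp points and Q their current closest
    points on the segmented MRI scalp surface; re-computing Q and iterating this step
    gives the familiar ICP scheme (the surface search itself is not shown here)."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = Qc - R @ Pc
    return R, t
```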

  6. Real-time 3D imaging of Haines jumps in porous media flow

    PubMed Central

    Berg, Steffen; Ott, Holger; Klapp, Stephan A.; Schwing, Alex; Neiteler, Rob; Brussee, Niels; Makurat, Axel; Leu, Leon; Enzmann, Frieder; Schwarz, Jens-Oliver; Kersten, Michael; Irvine, Sarah; Stampanoni, Marco

    2013-01-01

    Newly developed high-speed, synchrotron-based X-ray computed microtomography enabled us to directly image pore-scale displacement events in porous rock in real time. Common approaches to modeling macroscopic fluid behavior are phenomenological, have many shortcomings, and lack consistent links to elementary pore-scale displacement processes, such as Haines jumps and snap-off. Unlike the common singular pore jump paradigm based on observations of restricted artificial capillaries, we found that Haines jumps typically cascade through 10–20 geometrically defined pores per event, accounting for 64% of the energy dissipation. Real-time imaging provided a more detailed fundamental understanding of the elementary processes in porous media, such as hysteresis, snap-off, and nonwetting phase entrapment, and it opens the way for a rigorous process for upscaling based on thermodynamic models. PMID:23431151

  7. Real-time Awake Animal Motion Tracking System for SPECT Imaging

    SciTech Connect

    Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon; Weisenberger, A G; Stolin, A; McKisson, J; Smith, M F

    2008-01-01

    Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur in each set of images. The system, using the three cameras, automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in the methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to the scanner operation. The system has undergone testing using both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.
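    The trinocular marker correspondence and 3D pose calculation ultimately rest on triangulating each marker from the calibrated cameras. The standard linear (DLT) triangulation below is offered as a stand-in, since the system's own solver is not published; the camera matrices and pixel observations are assumed already matched.

```python
import numpy as np

def triangulate(projections, pixels):
    """Linear (DLT) triangulation of one marker from several calibrated cameras.

    projections : list of 3x4 camera projection matrices
    pixels      : matching list of (u, v) observations of the same marker
    Returns the marker's 3-D position in world coordinates."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean 3-D coordinates
```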

  8. 3D Markov Process for Traffic Flow Prediction in Real-Time

    PubMed Central

    Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi

    2016-01-01

    Recently, the correct estimation of traffic flow has come to be considered an essential component of intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially- and temporally-adjacent traffic states; and (2) the relationship between adjacent roads in the spatiotemporal domain is represented by cliques in a Markov random field (MRF), and the clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using expressway traffic data provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further. PMID:26821025

  9. Real-time sensing of mouth 3-D position and orientation

    NASA Astrophysics Data System (ADS)

    Burdea, Grigore C.; Dunn, Stanley M.; Mallik, Matsumita; Jun, Heesung

    1990-07-01

    A key problem in using digital subtraction radiography in dentistry is the ability to reposition the X-ray source and patient so as to reproduce an identical imaging geometry. In this paper we describe an approach to solving this problem based on real time sensing of the 3-D position and orientation of the patient's mouth. The research described here is part of a program which has a long term goal to develop an automated digital subtraction radiography system. This will allow the patient and X-ray source to be accurately repositioned without the mechanical fixtures that are presently used to preserve the imaging geometry. If we can measure the position and orientation of the mouth, then the desired position of the source can be computed as the product of the transformation matrices describing the desired imaging geometry and the position vector of the targeted tooth. Position and orientation of the mouth is measured by a real time sensing device using low-frequency magnetic field technology. We first present the problem of repositioning the patient and source and then outline our analytic solution. Then we describe an experimental setup to measure the accuracy, reproducibility and resolution of the sensor and present results of preliminary experiments.
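
    The repositioning computation described above reduces to composing rigid-body transforms: the sensed mouth pose chained with the stored source-to-mouth imaging geometry gives the target source pose. Below is a minimal Python sketch of that idea, assuming 4×4 homogeneous matrices; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def desired_source_pose(T_mouth_in_world, T_source_rel_mouth):
    """Compose the sensed mouth pose with the stored imaging geometry.

    T_mouth_in_world   : 4x4 pose of the mouth reported by the magnetic sensor (assumed)
    T_source_rel_mouth : 4x4 source-relative-to-mouth transform recorded at the baseline exam
    Returns the world-frame pose at which the X-ray source should be placed so that
    the original imaging geometry is reproduced.
    """
    return T_mouth_in_world @ T_source_rel_mouth

# Example: mouth rotated 10 deg about z and shifted; baseline geometry 20 cm along z.
th = np.deg2rad(10.0)
T_mouth = np.array([[np.cos(th), -np.sin(th), 0.0, 0.05],
                    [np.sin(th),  np.cos(th), 0.0, 0.02],
                    [0.0, 0.0, 1.0, 0.00],
                    [0.0, 0.0, 0.0, 1.00]])
T_geom = np.eye(4)
T_geom[2, 3] = 0.20
print(desired_source_pose(T_mouth, T_geom))
```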

  10. Concept for an airborne real-time ISR system with multi-sensor 3D data acquisition

    NASA Astrophysics Data System (ADS)

    Haraké, Laura; Schilling, Hendrik; Blohm, Christian; Hillemann, Markus; Lenz, Andreas; Becker, Merlin; Keskin, Göksu; Middelmann, Wolfgang

    2016-10-01

    In modern aerial Intelligence, Surveillance and Reconnaissance operations, precise 3D information becomes inevitable for increased situation awareness. In particular, object geometries represented by texturized digital surface models constitute an alternative to a pure evaluation of radiometric measurements. Besides the 3D data's level of detail aspect, its availability is time-relevant in order to make quick decisions. Expanding the concept of our preceding remote sensing platform developed together with OHB System AG and Geosystems GmbH, in this paper we present an airborne multi-sensor system based on a motor glider equipped with two wing pods; one carries the sensors, whereas the second pod downlinks sensor data to a connected ground control station by using the Aerial Reconnaissance Data System of OHB. An uplink is created to receive remote commands from the manned mobile ground control station, which on its part processes and evaluates incoming sensor data. The system allows the integration of efficient image processing and machine learning algorithms. In this work, we introduce a near real-time approach for the acquisition of a texturized 3D data model with the help of an airborne laser scanner and four high-resolution multi-spectral (RGB, near-infrared) cameras. Image sequences from nadir and off-nadir cameras permit to generate dense point clouds and to texturize also facades of buildings. The ground control station distributes processed 3D data over a linked geoinformation system with web capabilities to off-site decision-makers. As the accurate acquisition of sensor data requires boresight calibrated sensors, we additionally examine the first steps of a camera calibration workflow.

  11. Real-Time Estimation of 3-D Needle Shape and Deflection for MRI-Guided Interventions

    PubMed Central

    Park, Yong-Lae; Elayaperumal, Santhi; Daniel, Bruce; Ryu, Seok Chang; Shin, Mihye; Savall, Joan; Black, Richard J.; Moslehi, Behzad; Cutkosky, Mark R.

    2015-01-01

    We describe an MRI-compatible biopsy needle instrumented with optical fiber Bragg gratings for measuring bending deflections of the needle as it is inserted into tissues. During procedures, such as diagnostic biopsies and localized treatments, it is useful to track any tool deviation from the planned trajectory to minimize positioning errors and procedural complications. The goal is to display tool deflections in real time, with greater bandwidth and accuracy than when viewing the tool in MR images. A standard 18 ga × 15 cm inner needle is prepared using a fixture, and 350-μm-deep grooves are created along its length. Optical fibers are embedded in the grooves. Two sets of sensors, located at different points along the needle, provide an estimate of the bent profile, as well as temperature compensation. Tests of the needle in a water bath showed that it produced no adverse imaging artifacts when used with the MR scanner. PMID:26405428
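
    As a rough illustration of how a bent profile can be recovered from sensors at two axial stations, the sketch below integrates a small-deflection beam model. It assumes the Bragg-grating strains have already been converted to curvatures, that curvature varies linearly along the shaft, and that the needle is clamped at its base; this is a hedged stand-in for the authors' reconstruction, and all names and numbers are invented.

```python
import numpy as np

def tip_deflection(s_sensors, curvatures, length, n=200):
    """Estimate needle deflection by integrating an assumed linear curvature profile.

    s_sensors  : axial positions (m) of the two sensor stations
    curvatures : bending curvatures (1/m) measured at those positions
    length     : distance from the clamped base to the tip (m)
    Small-deflection beam theory: integrate curvature twice with zero slope and
    zero deflection at the base.
    """
    slope_coeff, intercept = np.polyfit(s_sensors, curvatures, 1)  # linear curvature fit
    s = np.linspace(0.0, length, n)
    kappa = slope_coeff * s + intercept
    ds = s[1] - s[0]
    slope = np.cumsum(kappa) * ds        # first integration  -> slope(s)
    deflection = np.cumsum(slope) * ds   # second integration -> deflection(s)
    return s, deflection

s, w = tip_deflection(s_sensors=[0.04, 0.10], curvatures=[0.8, 0.3], length=0.15)
print(f"estimated tip deflection: {w[-1] * 1000:.2f} mm")
```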

  12. Intracellular nanomanipulation by a photonic-force microscope with real-time acquisition of a 3D stiffness matrix

    NASA Astrophysics Data System (ADS)

    Bertseva, E.; Singh, A. S. G.; Lekki, J.; Thévenaz, P.; Lekka, M.; Jeney, S.; Gremaud, G.; Puttini, S.; Nowak, W.; Dietler, G.; Forró, L.; Unser, M.; Kulik, A. J.

    2009-07-01

    A traditional photonic-force microscope (PFM) results in huge sets of data, which requires tedious numerical analysis. In this paper, we propose instead an analog signal processor to attain real-time capabilities while retaining the richness of the traditional PFM data. Our system is devoted to intracellular measurements and is fully interactive through the use of a haptic joystick. Using our specialized analog hardware along with a dedicated algorithm, we can extract the full 3D stiffness matrix of the optical trap in real time, including the off-diagonal cross-terms. Our system is also capable of simultaneously recording data for subsequent offline analysis. This allows us to check that a good correlation exists between the classical analysis of stiffness and our real-time measurements. We monitor the PFM beads using an optical microscope. The force-feedback mechanism of the haptic joystick helps us in interactively guiding the bead inside living cells and collecting information from its (possibly anisotropic) environment. The instantaneous stiffness measurements are also displayed in real time on a graphical user interface. The whole system has been built and is operational; here we present early results that confirm the consistency of the real-time measurements with offline computations.
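
    For the offline cross-check mentioned above (the "classical analysis of stiffness"), the full 3×3 stiffness matrix of the trap, including the off-diagonal cross-terms, can be estimated from recorded bead positions with the equipartition relation K = k_B T C⁻¹, where C is the position covariance. The following minimal sketch uses synthetic data; it is not the analog real-time processor described in the record.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stiffness_matrix(positions, temperature=300.0):
    """Equipartition estimate of the 3x3 optical-trap stiffness matrix.

    positions   : (N, 3) bead positions in metres, relative to the trap centre
    temperature : bath temperature in kelvin
    K = k_B * T * C^-1, with C the sample covariance of the positions, yields the
    full stiffness matrix including off-diagonal cross-terms.
    """
    C = np.cov(positions, rowvar=False)
    return K_B * temperature * np.linalg.inv(C)

# Synthetic check: sample positions from a known anisotropic trap and recover K.
rng = np.random.default_rng(0)
true_K = np.diag([2.0e-6, 1.0e-6, 0.5e-6])                 # N/m
C_true = K_B * 300.0 * np.linalg.inv(true_K)
samples = rng.multivariate_normal(np.zeros(3), C_true, size=20000)
print(stiffness_matrix(samples))                            # close to true_K
```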

  13. 3D Real-Time Echocardiography Combined with Mini Pressure Wire Generate Reliable Pressure-Volume Loops in Small Hearts

    PubMed Central

    Linden, Katharina; Dewald, Oliver; Gatzweiler, Eva; Seehase, Matthias; Duerr, Georg Daniel; Dörner, Jonas; Kleppe, Stephanie

    2016-01-01

    Background Pressure-volume loops (PVL) provide vital information regarding ventricular performance and pathophysiology in cardiac disease. Unfortunately, acquisition of PVL by conductance technology is not feasible in neonates and small children due to the available human catheter size and resulting invasiveness. The aim of the study was to validate the accuracy of PVL in small hearts using volume data obtained by real-time three-dimensional echocardiography (3DE) and simultaneously acquired pressure data. Methods In 17 piglets (weight range: 3.6–8.0 kg) left ventricular PVL were generated by 3DE and simultaneous recordings of ventricular pressure using a mini pressure wire (PVL3D). PVL3D were compared to conductance catheter measurements (PVLCond) under various hemodynamic conditions (baseline, alpha-adrenergic stimulation with phenylephrine, beta-adrenoreceptor-blockage using esmolol). In order to validate the accuracy of 3D volumetric data, cardiac magnetic resonance imaging (CMR) was performed in another 8 piglets. Results Correlation between CMR- and 3DE-derived volumes was good (end-diastolic volume: mean bias -0.03 ± 1.34 ml). Computation of PVL3D in small hearts was feasible and comparable to results obtained by conductance technology. Bland-Altman analysis showed a low bias between PVL3D and PVLCond. Systolic and diastolic parameters were closely associated (Intraclass-Correlation Coefficient for: systolic myocardial elastance 0.95, arterial elastance 0.93, diastolic relaxation constant tau 0.90, indexed end-diastolic volume 0.98). Hemodynamic changes under different conditions were well detected by both methods (ICC 0.82 to 0.98). Inter- and intra-observer coefficients of variation were below 5% for all parameters. Conclusions PVL3D generated from 3DE combined with mini pressure wire represent a novel, feasible and reliable method to assess different hemodynamic conditions of cardiac function in hearts comparable to neonate and infant size. This
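
    Once the 3DE volumes and wire pressures are synchronized, a pressure-volume loop is simply the closed curve they trace over one beat, and stroke work is the area it encloses. A minimal sketch of that bookkeeping follows (shoelace formula, idealised numbers); it is illustrative only and not the authors' analysis pipeline.

```python
import numpy as np

def stroke_work(pressure_mmHg, volume_ml):
    """Area enclosed by one pressure-volume loop (shoelace formula).

    pressure_mmHg, volume_ml : synchronized samples tracing the loop once
    Returns stroke work in mmHg*ml (multiply by ~1.333e-4 to convert to joules).
    """
    p = np.asarray(pressure_mmHg, float)
    v = np.asarray(volume_ml, float)
    return 0.5 * abs(np.sum(v * np.roll(p, -1) - p * np.roll(v, -1)))

# Idealised rectangular loop: EDV 20 ml, ESV 12 ml, pressures 8 and 90 mmHg.
volumes = [20, 20, 12, 12]
pressures = [8, 90, 90, 8]
print(stroke_work(pressures, volumes))   # (20-12)*(90-8) = 656 mmHg*ml
```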

  14. An improved real-time visual tracking method for space non-cooperative target

    NASA Astrophysics Data System (ADS)

    Zhang, Limin; Zhu, Feng; Hao, Yingming

    2016-10-01

    In order to enable the non-cooperative rendezvous, capture, and removal of large space debris, robust and fast tracking of the non-cooperative target is needed. This paper proposes an improved real-time visual tracking algorithm for space non-cooperative targets based on a three-dimensional model, and it does not require any artificial markers. The non-cooperative target is assumed to have a known 3D model and to remain constantly in the field of view of the camera mounted on the chaser. Space non-cooperative targets are regarded as weakly textured manmade objects whose 3D model design documents are available. Space appears black, so we can assume that only the object is visible and that the background of the image is dark. Because edge features offer good invariance to illumination changes and image noise, our method relies on monocular vision and uses 3D-2D correspondences between the 3D model and its corresponding 2D edges in the image. The paper proposes to remove the sample points that are susceptible to false matches based on geometrical distance due to perspective projection of the 3D model. To improve robustness, we compare local region similarity to obtain better matches between sample points and edge points. Our algorithm proves efficient and shows improved accuracy without significant computational burden. The results show potential tracking performance with mean errors of < 3 degrees and < 1.5% of range.

  15. Real-time 3-D SAFT-UT system evaluation and validation

    SciTech Connect

    Doctor, S.R.; Schuster, G.J.; Reid, L.D.; Hall, T.E.

    1996-09-01

    SAFT-UT technology is shown to provide significant enhancements to the inspection of materials used in US nuclear power plants. This report provides guidelines for the implementation of SAFT-UT technology and shows the results from its application. An overview of the development of SAFT-UT is provided so that the reader may become familiar with the technology. Then the basic fundamentals are presented with an extensive list of references. A comprehensive operating procedure, which is used in conjunction with the SAFT-UT field system developed by Pacific Northwest Laboratory (PNL), provides the recipe for both SAFT data acquisition and analysis. The specification for the hardware implementation is provided for the SAFT-UT system along with a description of the subsequent developments and improvements. One development of technical interest is the SAFT real time processor. Performance of the real-time processor is impressive and comparison is made of this dedicated parallel processor to a conventional computer and to the newer high-speed computer architectures designed for image processing. Descriptions of other improvements, including a robotic scanner, are provided. Laboratory parametric and application studies, performed by PNL and not previously reported, are discussed followed by a section on field application work in which SAFT was used during inservice inspections of operating reactors.

  16. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged both by 2D echocardiography with off-line 3D reconstruction and by RT3DE. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). Dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows navigation into the reconstructed volume and display of any section of the volume.

  17. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. Accuracy and agreement were assessed on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  18. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscope, barometers and most importantly cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high performance platforms for low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on tracking using stereo vision and inertial navigation to determine movement of the system as well as create a model of the environment sensed by the system.

  19. Laetoli’s lost tracks: 3D generated mean shape and missing footprints

    PubMed Central

    Bennett, M. R.; Reynolds, S. C.; Morse, S. A.; Budka, M.

    2016-01-01

    The Laetoli site (Tanzania) contains the oldest known hominin footprints, and their interpretation remains open to debate, despite over 35 years of research. The two hominin trackways present are parallel to one another, one of which is a composite formed by at least two individuals walking in single file. Most researchers have focused on the single, clearly discernible G1 trackway while the G2/3 trackway has been largely dismissed due to its composite nature. Here we report the use of a new technique that allows us to decouple the G2 and G3 tracks for the first time. In so doing we are able to quantify the mean footprint topology of the G3 trackway and render it useable for subsequent data analyses. By restoring the effectively ‘lost’ G3 track, we have doubled the available data on some of the rarest traces directly associated with our Pliocene ancestors. PMID:26902912

  20. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. Thus, this information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
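
    The block-wise base-height estimation rests on the parallax a cloud layer shows between imagers with a known separation. The sketch below illustrates only that geometric idea with a crude mask-alignment search along the baseline; it is not the paper's algorithm, and every number in it is invented.

```python
import numpy as np

def cloud_base_height(mask_a, mask_b, baseline_m, rad_per_pixel):
    """Rough cloud-base height from parallax between two zenith-pointing sky imagers.

    mask_a, mask_b : binary cloud masks of the same sky region from two TSIs
    baseline_m     : horizontal separation of the imagers (m)
    rad_per_pixel  : assumed (linearised) angular resolution near zenith
    Finds the pixel shift along the baseline axis that best aligns the two masks
    and converts the resulting angular parallax to height (near-zenith geometry).
    """
    best_shift, best_score = 1, -np.inf
    for dx in range(1, mask_a.shape[1] // 4):
        score = np.sum(mask_a[:, :-dx] == mask_b[:, dx:])
        if score > best_score:
            best_shift, best_score = dx, score
    return baseline_m / np.tan(best_shift * rad_per_pixel)

# Synthetic cloud edge displaced by 25 pixels between imagers 500 m apart.
a = np.zeros((200, 200)); a[:, 100:] = 1
b = np.zeros((200, 200)); b[:, 125:] = 1
print(f"estimated base height: {cloud_base_height(a, b, 500.0, 0.01):.0f} m")
```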

  1. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    SciTech Connect

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. Thus, this information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  2. Tracking immune-related cell responses to drug delivery microparticles in 3D dense collagen matrix.

    PubMed

    Obarzanek-Fojt, Magdalena; Curdy, Catherine; Loggia, Nicoletta; Di Lena, Fabio; Grieder, Kathrin; Bitar, Malak; Wick, Peter

    2016-10-01

    Beyond the therapeutic purpose, the impact of drug delivery microparticles on the local tissue and inflammatory responses remains to be further elucidated specifically for reactions mediated by the host immune cells. Such immediate and prolonged reactions may adversely influence the release efficacy and intended therapeutic pathway. The lack of suitable in vitro platforms limits our ability to gain insight into the nature of immune responses at a single cell level. In order to establish an in vitro 3D system mimicking the connective host tissue counterpart, we utilized reproducible, compressed, rat-tail collagen polymerized matrices. THP1 cells (human acute monocytic leukaemia cells) differentiated into macrophage-like cells were chosen as cell model and their functionality was retained in the dense rat-tail collagen matrix. Placebo microparticles were later combined in the immune cell seeded system during collagen polymerization and secreted pro-inflammatory factors: TNFα and IL-8 were used as immune response readout (ELISA). Our data showed an elevated TNFα and IL-8 secretion by macrophage THP1 cells indicating that Placebo microparticles trigger certain immune cell responses under 3D in vivo like conditions. Furthermore, we have shown that the system is sensitive to measure the differences in THP1 macrophage pro-inflammatory responses to Active Pharmaceutical Ingredient (API) microparticles with different API release kinetics. We have successfully developed a tissue-like, advanced, in vitro system enabling selective "readouts" of specific responses of immune-related cells. Such system may provide the basis of an advanced toolbox enabling systemic evaluation and prediction of in vivo microparticle reactions on human immune-related cells.

  3. Incorporation of 3-D Scanning Lidar Data into Google Earth for Real-time Air Pollution Observation

    NASA Astrophysics Data System (ADS)

    Chiang, C.; Nee, J.; Das, S.; Sun, S.; Hsu, Y.; Chiang, H.; Chen, S.; Lin, P.; Chu, J.; Su, C.; Lee, W.; Su, L.; Chen, C.

    2011-12-01

    A 3-D Differential Absorption Scanning Lidar (DIASL) system has been designed to be small, lightweight, and suitable for installation in various vehicles and locations for monitoring air pollutants, and it displays detailed real-time temporal and spatial variability of trace gases via Google Earth. The fast scanning techniques and visual information can rapidly identify the locations and sources of polluted gases and assess the most affected areas. This helps the Environmental Protection Agency (EPA) protect public health and abate air pollution as quickly as possible. The distributions of atmospheric pollutants and their relationship with local meteorological parameters measured with ground-based instruments will also be discussed. Details will be presented in the upcoming symposium.

  4. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying datasets being generated demand new visualization and quantification techniques. Visualization alone is not sufficient; without proper measurement information and computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology which allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates on data in situ: where it is stored and when it is computed.

  5. Lagrangian 3D particle tracking in high-speed flows: Shake-The-Box for multi-pulse systems

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Schanz, Daniel; Reuther, Nico; Kähler, Christian J.; Schröder, Andreas

    2016-08-01

    The Shake-The-Box (STB) particle tracking technique, recently introduced for time-resolved 3D particle image velocimetry (PIV) images, is applied here to data from a multi-pulse investigation of a turbulent boundary layer flow with adverse pressure gradient in air at 36 m/s (Re_τ = 10,650). The multi-pulse acquisition strategy allows for the recording of four-pulse long time-resolved sequences with a time separation of a few microseconds. The experimental setup consists of a dual-imaging system and a dual-double-cavity laser emitting orthogonal polarization directions to separate the four pulses. The STB particle triangulation and tracking strategy is adapted here to cope with the limited amount of realizations available along the time sequence and to take advantage of the ghost track reduction offered by the use of two independent imaging systems. Furthermore, a correction scheme to compensate for camera vibrations is discussed, together with a method to accurately identify the position of the wall within the measurement domain. Results show that approximately 80,000 tracks can be instantaneously reconstructed within the measurement volume, enabling the evaluation of both dense velocity fields, suitable for spatial gradients evaluation, and highly spatially resolved boundary layer profiles. Turbulent boundary layer profiles obtained from ensemble averaging of the STB tracks are compared to results from 2D-PIV and long-range micro particle tracking velocimetry; the comparison shows the capability of the STB approach in delivering accurate results across a wide range of scales.

  6. The birth of a dinosaur footprint: Subsurface 3D motion reconstruction and discrete element simulation reveal track ontogeny

    PubMed Central

    2014-01-01

    Locomotion over deformable substrates is a common occurrence in nature. Footprints represent sedimentary distortions that provide anatomical, functional, and behavioral insights into trackmaker biology. The interpretation of such evidence can be challenging, however, particularly for fossil tracks recovered at bedding planes below the originally exposed surface. Even in living animals, the complex dynamics that give rise to footprint morphology are obscured by both foot and sediment opacity, which conceals animal–substrate and substrate–substrate interactions. We used X-ray reconstruction of moving morphology (XROMM) to image and animate the hind limb skeleton of a chicken-like bird traversing a dry, granular material. Foot movement differed significantly from walking on solid ground; the longest toe penetrated to a depth of ∼5 cm, reaching an angle of 30° below horizontal before slipping backward on withdrawal. The 3D kinematic data were integrated into a validated substrate simulation using the discrete element method (DEM) to create a quantitative model of limb-induced substrate deformation. Simulation revealed that despite sediment collapse yielding poor quality tracks at the air–substrate interface, subsurface displacements maintain a high level of organization owing to grain–grain support. Splitting the substrate volume along “virtual bedding planes” exposed prints that more closely resembled the foot and could easily be mistaken for shallow tracks. DEM data elucidate how highly localized deformations associated with foot entry and exit generate specific features in the final tracks, a temporal sequence that we term “track ontogeny.” This combination of methodologies fosters a synthesis between the surface/layer-based perspective prevalent in paleontology and the particle/volume-based perspective essential for a mechanistic understanding of sediment redistribution during track formation. PMID:25489092

  7. Dynamic shape modeling of the mitral valve from real-time 3D ultrasound images using continuous medial representation

    NASA Astrophysics Data System (ADS)

    Pouch, Alison M.; Yushkevich, Paul A.; Jackson, Benjamin M.; Gorman, Joseph H., III; Gorman, Robert C.; Sehgal, Chandra M.

    2012-03-01

    Purpose: Patient-specific shape analysis of the mitral valve from real-time 3D ultrasound (rt-3DUS) has broad application to the assessment and surgical treatment of mitral valve disease. Our goal is to demonstrate that continuous medial representation (cm-rep) is an accurate valve shape representation that can be used for statistical shape modeling over the cardiac cycle from rt-3DUS images. Methods: Transesophageal rt-3DUS data acquired from 15 subjects with a range of mitral valve pathology were analyzed. User-initialized segmentation with level sets and symmetric diffeomorphic normalization delineated the mitral leaflets at each time point in the rt-3DUS data series. A deformable cm-rep was fitted to each segmented image of the mitral leaflets in the time series, producing a 4D parametric representation of valve shape in a single cardiac cycle. Model fitting accuracy was evaluated by the Dice overlap, and shape interpolation and principal component analysis (PCA) of 4D valve shape were performed. Results: Of the 289 3D images analyzed, the average Dice overlap between each fitted cm-rep and its target segmentation was 0.880+/-0.018 (max=0.912, min=0.819). The results of PCA represented variability in valve morphology and localized leaflet thickness across subjects. Conclusion: Deformable medial modeling accurately captures valve geometry in rt-3DUS images over the entire cardiac cycle and enables statistical shape analysis of the mitral valve.
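
    The Dice overlap used above as the fitting-accuracy metric is straightforward to compute from voxelised masks: Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch with synthetic volumes (names illustrative):

```python
import numpy as np

def dice_overlap(fitted_mask, segmented_mask):
    """Dice coefficient between a fitted model volume and its target segmentation.

    Both inputs are boolean 3-D arrays (e.g. voxelised leaflet masks).
    Returns 1.0 for perfect overlap and 0.0 for disjoint volumes.
    """
    a = np.asarray(fitted_mask, bool)
    b = np.asarray(segmented_mask, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two overlapping synthetic blobs, one shifted by two voxels.
a = np.zeros((50, 50, 50), bool); a[10:40, 10:40, 10:40] = True
b = np.zeros((50, 50, 50), bool); b[12:42, 10:40, 10:40] = True
print(f"Dice = {dice_overlap(a, b):.3f}")   # about 0.933
```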

  8. SIMULTANEOUS BILATERAL REAL-TIME 3-D TRANSCRANIAL ULTRASOUND IMAGING AT 1 MHZ THROUGH POOR ACOUSTIC WINDOWS

    PubMed Central

    Lindsey, Brooks D.; Nicoletto, Heather A.; Bennett, Ellen R.; Laskowitz, Daniel T.; Smith, Stephen W.

    2013-01-01

    Ultrasound imaging has been proposed as a rapid, portable alternative imaging modality to examine stroke patients in pre-hospital or emergency room settings. However, in performing transcranial ultrasound examinations, 8%–29% of patients in a general population may present with window failure, in which case it is not possible to acquire clinically useful sonographic information through the temporal bone acoustic window. In this work, we describe the technical considerations, design and fabrication of low-frequency (1.2 MHz), large aperture (25.3 mm) sparse matrix array transducers for 3-D imaging in the event of window failure. These transducers are integrated into a system for real-time 3-D bilateral transcranial imaging—the ultrasound brain helmet—and color flow imaging capabilities at 1.2 MHz are directly compared with arrays operating at 1.8 MHz in a flow phantom with attenuation comparable to the in vivo case. Contrast-enhanced imaging allowed visualization of arteries of the Circle of Willis in 5 of 5 subjects and 8 of 10 sides of the head despite probe placement outside of the acoustic window. Results suggest that this type of transducer may allow acquisition of useful images either in individuals with poor windows or outside of the temporal acoustic window in the field. PMID:23415287

  9. Simultaneous bilateral real-time 3-d transcranial ultrasound imaging at 1 MHz through poor acoustic windows.

    PubMed

    Lindsey, Brooks D; Nicoletto, Heather A; Bennett, Ellen R; Laskowitz, Daniel T; Smith, Stephen W

    2013-04-01

    Ultrasound imaging has been proposed as a rapid, portable alternative imaging modality to examine stroke patients in pre-hospital or emergency room settings. However, in performing transcranial ultrasound examinations, 8%-29% of patients in a general population may present with window failure, in which case it is not possible to acquire clinically useful sonographic information through the temporal bone acoustic window. In this work, we describe the technical considerations, design and fabrication of low-frequency (1.2 MHz), large aperture (25.3 mm) sparse matrix array transducers for 3-D imaging in the event of window failure. These transducers are integrated into a system for real-time 3-D bilateral transcranial imaging-the ultrasound brain helmet-and color flow imaging capabilities at 1.2 MHz are directly compared with arrays operating at 1.8 MHz in a flow phantom with attenuation comparable to the in vivo case. Contrast-enhanced imaging allowed visualization of arteries of the Circle of Willis in 5 of 5 subjects and 8 of 10 sides of the head despite probe placement outside of the acoustic window. Results suggest that this type of transducer may allow acquisition of useful images either in individuals with poor windows or outside of the temporal acoustic window in the field.

  10. Multisensor 3D tracking for counter small unmanned air vehicles (CSUAV)

    NASA Astrophysics Data System (ADS)

    Vasquez, Juan R.; Tarplee, Kyle M.; Case, Ellen E.; Zelnio, Anne M.; Rigling, Brian D.

    2008-04-01

    A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. The typical large UAV is state owned, whereas small UAVs (SUAVs) may be in the form of widely available remote-controlled aircraft. The potential threat of these SUAVs to both the military and civilian populace has led to research efforts to counter these assets via track, ID, and attack. Difficulties arise from the small size and low radar cross section when attempting to detect and track these targets with a single sensor such as radar or video cameras. In addition, clutter objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an acoustic array, and future inclusion of a radar. A sensor resource management concept is presented along with preliminary results from three of the sensors.

  11. A Detailed Study of FDIRC Prototype with Waveform Digitizing Electronics in Cosmic Ray Telescope Using 3D Tracks.

    SciTech Connect

    Nishimura, K

    2012-07-01

    We present a detailed study of a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC) with waveform digitizing electronics. In this test study, the FDIRC prototype has been instrumented with seven Hamamatsu H-8500 MaPMTs. Waveforms from ~450 pixels are digitized with waveform sampling electronics based on the BLAB2 ASIC, operating at a sampling speed of ~2.5 GSa/s. The FDIRC prototype was tested in a large cosmic ray telescope (CRT) providing 3D muon tracks with ~1.5 mrad angular resolution and muon energy of Emuon greater than 1.6 GeV. In this study we provide a detailed analysis of the tails in the Cherenkov angle distribution as a function of various variables, compare experimental results with simulation, and identify the major contributions to the tails. We demonstrate that to see the full impact of these tails on the Cherenkov angle resolution, it is crucial to use 3D tracks, and have a full understanding of the role of ambiguities. These issues could not be fully explored in previous FDIRC studies where the beam was perpendicular to the quartz radiator bars. This work is relevant for the final FDIRC prototype of the PID detector at SuperB, which will be tested this year in the CRT setup.

  12. A Detailed Study of FDIRC Prototype with Waveform Digitizing Electronics in Cosmic Ray Telescope Using 3D Tracks

    SciTech Connect

    Nishimura, K.; Dey, B.; Aston, D.; Leith, D.W.G.S.; Ratcliff, B.; Roberts, D.; Ruckman, L.; Shtol, D.; Varner, G.S.; Va'vra, J.; Vavra, Jerry; /SLAC

    2012-07-30

    We present a detailed study of a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC) with waveform digitizing electronics. In this test study, the FDIRC prototype has been instrumented with seven Hamamatsu H-8500 MaPMTs. Waveforms from ~450 pixels are digitized with waveform sampling electronics based on the BLAB2 ASIC, operating at a sampling speed of ~2.5 GSa/s. The FDIRC prototype was tested in a large cosmic ray telescope (CRT) providing 3D muon tracks with ~1.5 mrad angular resolution and muon energy of E_muon > 1.6 GeV. In this study we provide a detailed analysis of the tails in the Cherenkov angle distribution as a function of various variables, compare experimental results with simulation, and identify the major contributions to the tails. We demonstrate that to see the full impact of these tails on the Cherenkov angle resolution, it is crucial to use 3D tracks, and have a full understanding of the role of ambiguities. These issues could not be fully explored in previous FDIRC studies where the beam was perpendicular to the quartz radiator bars. This work is relevant for the final FDIRC prototype of the PID detector at SuperB, which will be tested this year in the CRT setup.

  13. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    SciTech Connect

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; Gable, Carl W.; Karra, Satish

    2015-09-16

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.
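
    The velocity-reconstruction step, recovering a Darcy velocity at a control-volume centroid from the scalar fluxes on its faces, can be posed as a small least-squares problem with one equation per face. The sketch below shows that step in isolation for a single two-dimensional control volume in a fracture plane, assuming the solver reports flux per unit face area; it is a simplified stand-in, not the authors' implementation.

```python
import numpy as np

def centroid_velocity(face_normals, face_fluxes):
    """Least-squares Darcy velocity from control-volume face fluxes.

    face_normals : (F, 2) unit outward normals of the faces (2-D fracture plane)
    face_fluxes  : (F,) normal flux per unit face area from the flow solver
    Solves n_f . v = q_f for all faces in the least-squares sense.
    """
    A = np.asarray(face_normals, float)
    q = np.asarray(face_fluxes, float)
    v, *_ = np.linalg.lstsq(A, q, rcond=None)
    return v

# Square control volume in a uniform flow of (1.0, 0.5); outward normals +/- x, +/- y.
normals = [(1, 0), (-1, 0), (0, 1), (0, -1)]
fluxes = [1.0, -1.0, 0.5, -0.5]
print(centroid_velocity(normals, fluxes))   # ~[1.0, 0.5]
```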

  14. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    DOE PAGES

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; ...

    2015-09-16

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.

  15. Ultra-high-speed 3D astigmatic particle tracking velocimetry: application to particle-laden supersonic impinging jets

    NASA Astrophysics Data System (ADS)

    Buchmann, N. A.; Cierpka, C.; Kähler, C. J.; Soria, J.

    2014-11-01

    The paper demonstrates ultra-high-speed three-component, three-dimensional (3C3D) velocity measurements of micron-sized particles suspended in a supersonic impinging jet flow. Understanding the dynamics of individual particles in such flows is important for the design of particle impactors for drug delivery or cold gas dynamic spray processing. The underexpanded jet flow is produced via a converging nozzle, and micron-sized particles (d_p = 110 μm) are introduced into the gas flow. The supersonic jet impinges onto a flat surface, and the particle impact velocity and particle impact angle are studied for a range of flow conditions and impingement distances. The imaging system consists of an ultra-high-speed digital camera (Shimadzu HPV-1) capable of recording rates of up to 1 Mfps. Astigmatism particle tracking velocimetry (APTV) is used to measure the 3D particle position (Cierpka et al., Meas Sci Technol 21(045401):13, 2010) by coding the particle depth location in the 2D images by adding a cylindrical lens to the high-speed imaging system. Based on the reconstructed 3D particle positions, the particle trajectories are obtained via a higher-order tracking scheme that takes advantage of the high temporal resolution to increase robustness and accuracy of the measurement. It is shown that the particle velocity and impingement angle are affected by the gas flow in a manner depending on the nozzle pressure ratio and stand-off distance where higher pressure ratios and stand-off distances lead to higher impact velocities and larger impact angles.
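
    In astigmatism particle tracking velocimetry the out-of-plane coordinate is encoded in the shape of the defocused particle image, so depth recovery comes down to a calibration that maps the image's axis ratio to known traverse positions. The one-dimensional calibration sketch below is a simplification of the cited method (which uses both image axes), and every number in it is invented.

```python
import numpy as np

def calibrate_depth(axis_ratio_cal, z_cal, deg=3):
    """Fit a polynomial calibration z(a_x / a_y) for astigmatic particle images.

    axis_ratio_cal : measured major/minor axis ratios at known depths
    z_cal          : known depths of the calibration particles (m)
    Returns a callable mapping a measured axis ratio to depth.
    """
    return np.poly1d(np.polyfit(axis_ratio_cal, z_cal, deg))

# Synthetic, monotonic calibration over a 400-micron depth range.
z_known = np.linspace(-200e-6, 200e-6, 21)
ratio_known = 1.0 + 2.5e3 * z_known + 4.0e6 * z_known**2   # hypothetical optics response
z_of_ratio = calibrate_depth(ratio_known, z_known)
print(z_of_ratio(1.25))   # depth (m) of a particle with measured axis ratio 1.25
```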

  16. Readily Accessible Multiplane Microscopy: 3D Tracking the HIV-1 Genome in Living Cells.

    PubMed

    Itano, Michelle S; Bleck, Marina; Johnson, Daniel S; Simon, Sanford M

    2016-02-01

    Human immunodeficiency virus (HIV)-1 infection and the associated disease AIDS are a major cause of human death worldwide with no vaccine or cure available. The trafficking of HIV-1 RNAs from sites of synthesis in the nucleus, through the cytoplasm, to sites of assembly at the plasma membrane are critical steps in HIV-1 viral replication, but are not well characterized. Here we present a broadly accessible microscopy method that captures multiple focal planes simultaneously, which allows us to image the trafficking of HIV-1 genomic RNAs with high precision. This method utilizes a customization of a commercial multichannel emission splitter that enables high-resolution 3D imaging with single-macromolecule sensitivity. We show with high temporal and spatial resolution that HIV-1 genomic RNAs are most mobile in the cytosol, and undergo confined mobility at sites along the nuclear envelope and in the nucleus and nucleolus. These provide important insights regarding the mechanism by which the HIV-1 RNA genome is transported to the sites of assembly of nascent virions.

  17. Automatic Tracking Of Markers From 3D-Measurement Of Human Body Movements During Walking

    NASA Astrophysics Data System (ADS)

    Elsner, Thomas; Meier, G.; Baumann, Juerg U.

    1989-04-01

    For human motion analysis, the spatio-temporal resolution of cinematographic registrations of body marker positions is still higher than the results of the best optoelectronic systems available for this purpose today. So far, the need for manual digitization of several thousand marker positions per tested person has made this method impractical for regular applications. An interactive and largely automated system for marker recognition and tracking from 16 mm film images, based on progress in digital image processing, has been developed and tested. Projected pictures are digitized with a high-resolution CCD camera (1320 × 1035 pixels), processed, analyzed, and serially evaluated with the interactive image analysis system SIGNUM IS200.

  18. Fusion of current technologies with real-time 3D MEMS ladar for novel security and defense applications

    NASA Astrophysics Data System (ADS)

    Siepmann, James P.

    2006-05-01

    Through the utilization of scanning MEMS mirrors in ladar devices, a whole new range of potential military, Homeland Security, law enforcement, and civilian applications is now possible. Currently, ladar devices are typically large (>15,000 cc), heavy (>15 kg), and expensive (>$100,000), while current MEMS ladar designs are more than an order of magnitude less, opening up a myriad of potential new applications. One such application with current technology is a GPS-integrated MEMS ladar unit, which could be used for real-time border monitoring or the creation of virtual 3D battlefields after being dropped or propelled into hostile territory. Another current technology that can be integrated into a MEMS ladar unit is digital video, which can give high resolution and true color to a picture that is then enhanced with range information in a real-time display format that is easier for the user to understand and assimilate than typical gray-scale or false-color images. The problem with using 2-axis MEMS mirrors in ladar devices is that in order to have a resonance frequency capable of practical real-time scanning, they must either be quite small and/or have a low maximum tilt angle. Typically, this value has been less than or equal to 10 mg·mm²·kHz²·degrees. We have been able to solve this problem by using angle amplification techniques that utilize a series of MEMS mirrors and/or a specialized set of optics to achieve a broad field of view. These techniques and some of the novel applications mentioned will be explained and discussed herein.

  19. DLP technology application: 3D head tracking and motion correction in medical brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Wilm, Jakob; Paulsen, Rasmus R.; Højgaard, Liselotte; Larsen, Rasmus

    2014-03-01

    In this paper we present a novel sensing system, robust Near-infrared Structured Light Scanning (NIRSL), for three-dimensional human model scanning. Human model scanning has long been a challenging task because of the varied appearance of hair and clothing and because of body motion. Previous structured light scanning methods typically emitted visible coded light patterns onto static, opaque objects to establish correspondence between a projector and a camera for triangulation. The success of these methods relies on scanning objects with surfaces that reflect visible light well, such as plaster or light-colored cloth. For human model scanning, however, conventional methods suffer from a low signal-to-noise ratio caused by the low contrast of visible light over the human body. The proposed robust NIRSL, implemented with near-infrared light, is capable of recovering dark surfaces, such as hair, dark jeans and black shoes, under visible illumination. Moreover, a successful structured light scan relies on the assumption that the subject is static during scanning, an assumption that body motion makes difficult to maintain for human subjects. The proposed sensing system, by utilizing the new near-infrared-capable high-speed LightCrafter DLP projector, is robust to motion and provides an accurate, high-resolution three-dimensional point cloud, making our system more efficient and robust for human model reconstruction. Experimental results demonstrate that our system effectively and efficiently scans real human models with dark hair, jeans and shoes, is robust to human body motion, and produces accurate, high-resolution 3D point clouds.

  20. Real-time cardiac surface tracking from sparse samples using subspace clustering and maximum-likelihood linear regressors

    NASA Astrophysics Data System (ADS)

    Singh, Vimal; Tewfik, Ahmed H.

    2011-03-01

    Cardiac minimally invasive surgeries, such as catheter-based radiofrequency ablation of atrial fibrillation, require high-precision tracking of inner cardiac surfaces in order to ascertain constant electrode-surface contact. The majority of cardiac motion tracking systems are either limited to the outer surface or track only limited slices/sectors of the inner surface in echocardiography data, which is unrealizable in MIS due to the varying resolution of ultrasound with depth and the speckle effect. In this paper, a system for high-accuracy, real-time 3D tracking of both cardiac surfaces using sparse samples of the outer surface only is presented. It models cardiac inner-surface deformations as simple functions of outer-surface deformations in the spherical harmonic domain using multiple maximum-likelihood (ML) linear regressors. The tracking system uses subspace clustering to identify potential deformation spaces for outer surfaces and trains ML linear regressors using a pre-operative MRI/CT scan based training set. During tracking, sparse samples from the outer surface are used to identify the active outer-surface deformation space and reconstruct the outer surface in real time under a least-squares formulation. The inner surface is reconstructed from the tracked outer surface with the trained ML linear regressors. High-precision tracking and robustness of the proposed system are demonstrated on a real patient dataset with tracking root-mean-square errors of ≤ (0.23 ± 0.04) mm and ≤ (0.30 ± 0.07) mm for the outer and inner surfaces, respectively.
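
    The core regression step, mapping outer-surface shape coefficients to inner-surface coefficients, can be sketched as an ordinary least-squares linear map trained on pre-operative shape pairs. The snippet below omits the subspace clustering and the spherical-harmonic parameterisation of the paper and runs on synthetic coefficient vectors; names are illustrative.

```python
import numpy as np

def train_linear_regressor(outer_coeffs, inner_coeffs):
    """Least-squares linear map from outer- to inner-surface shape coefficients.

    outer_coeffs : (N, Do) training matrix, one row per pre-operative shape sample
    inner_coeffs : (N, Di) corresponding inner-surface coefficients
    Returns W, b such that inner ≈ outer @ W + b.
    """
    X = np.hstack([outer_coeffs, np.ones((outer_coeffs.shape[0], 1))])  # append bias column
    Wb, *_ = np.linalg.lstsq(X, inner_coeffs, rcond=None)
    return Wb[:-1], Wb[-1]

rng = np.random.default_rng(1)
outer = rng.normal(size=(120, 20))
true_W = rng.normal(size=(20, 15))
inner = outer @ true_W + 0.01 * rng.normal(size=(120, 15))
W, b = train_linear_regressor(outer, inner)
print(np.abs(W - true_W).max())   # small residual -> map recovered
```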

  1. Helicopter Flight Test of a Compact, Real-Time 3-D Flash Lidar for Imaging Hazardous Terrain During Planetary Landing

    NASA Technical Reports Server (NTRS)

    Roback, Vincent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.

    2013-01-01

    A second generation, compact, real-time, air-cooled 3-D imaging Flash Lidar sensor system, developed from a number of cutting-edge components from industry and NASA, is lab characterized and helicopter flight tested under the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project. The ALHAT project is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar incorporates a 3-D imaging video camera based on Indium-Gallium-Arsenide Avalanche Photo Diode and novel micro-electronic technology for a 128 x 128 pixel array operating at a video rate of 20 Hz, a high pulse-energy 1.06 µm Neodymium-doped: Yttrium Aluminum Garnet (Nd:YAG) laser, a remote laser safety termination system, high performance transmitter and receiver optics with one and five degrees field-of-view (FOV), enhanced onboard thermal control, as well as a compact and self-contained suite of support electronics housed in a single box and built around a PC-104 architecture to enable autonomous operations. The Flash Lidar was developed and then characterized at two NASA-Langley Research Center (LaRC) outdoor laser test range facilities both statically and dynamically, integrated with other ALHAT GN&C subsystems from partner organizations, and installed onto a Bell UH-1H Iroquois "Huey" helicopter at LaRC. The integrated system was flight tested at the NASA-Kennedy Space Center (KSC) on simulated lunar approach to a custom hazard field consisting of rocks, craters, hazardous slopes, and safe-sites near the Shuttle Landing Facility runway starting at slant ranges of 750 m. In order to evaluate different methods of achieving hazard detection, the lidar, in conjunction with the ALHAT hazard detection and GN&C system, operates in both a narrow 1deg FOV raster

  2. Method for dose-reduced 3D catheter tracking on a scanning-beam digital x-ray system using dynamic electronic collimation

    PubMed Central

    Dunkerley, David A. P.; Funk, Tobias; Speidel, Michael A.

    2016-01-01

    Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3D catheter tracking. This work proposes a method of dose-reduced 3D tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. Positions in the 2D focal spot array are selectively activated to create a region-of-interest (ROI) x-ray field around the tracked catheter. The ROI position is updated for each frame based on a motion vector calculated from the two most recent 3D tracking results. The technique was evaluated with SBDX data acquired as a catheter tip inside a chest phantom was pulled along a 3D trajectory. DEC scans were retrospectively generated from the detector images stored for each focal spot position. DEC imaging of a catheter tip in a volume measuring 11.4 cm across at isocenter required 340 active focal spots per frame, versus 4473 spots in full-FOV mode. The dose-area-product (DAP) and peak skin dose (PSD) for DEC versus full field-of-view (FOV) scanning were calculated using an SBDX Monte Carlo simulation code. DAP was reduced to 7.4% to 8.4% of the full-FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full-FOV value. The root-mean-squared-deviation between DEC-based 3D tracking coordinates and full-FOV 3D tracking coordinates was less than 0.1 mm. The 3D distance between the tracked tip and the sheath centerline averaged 0.75 mm. Dynamic electronic collimation can reduce dose with minimal change in tracking performance. PMID:27375314
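
    The ROI update rule quoted above, a motion vector computed from the two most recent 3D tracking results, amounts to constant-velocity extrapolation of the tracked tip position. A minimal sketch (coordinates in mm, names illustrative):

```python
import numpy as np

def next_roi_center(p_prev, p_curr):
    """Predict the next ROI centre from the two most recent tracked tip positions.

    p_prev, p_curr : 3-D catheter-tip positions from frames k-1 and k
    Extrapolates one frame ahead with the constant-velocity motion vector p_curr - p_prev.
    """
    p_prev = np.asarray(p_prev, float)
    p_curr = np.asarray(p_curr, float)
    return p_curr + (p_curr - p_prev)

print(next_roi_center([10.0, 5.0, 120.0], [10.6, 5.1, 119.2]))   # -> [11.2  5.2 118.4]
```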

  3. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical in the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time
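
    The pairing of downhill simplex search with geometrical feature templates can be illustrated by fitting a parametric template to a measured profile with Nelder-Mead. The groove template, starting guess, and noise level below are all hypothetical stand-ins for the wire-geometry features described in the record.

```python
import numpy as np
from scipy.optimize import minimize

def fit_template(profile_x, profile_z, template, x0):
    """Fit a geometric feature template to a measured surface profile.

    profile_x, profile_z : scan coordinates and heights from the profilometer
    template(x, params)  : returns template heights for trial parameters
    x0                   : starting guess for the template parameters
    Minimises the RMS residual with the downhill simplex (Nelder-Mead) method.
    """
    def cost(params):
        return np.sqrt(np.mean((profile_z - template(profile_x, params)) ** 2))
    return minimize(cost, x0=x0, method="Nelder-Mead")

# Hypothetical indent template: a Gaussian groove with centre, width and depth.
def groove(x, p):
    centre, width, depth = p
    return -depth * np.exp(-((x - centre) / width) ** 2)

x = np.linspace(-5.0, 5.0, 400)
z = groove(x, [0.4, 1.2, 0.08]) + 0.002 * np.random.default_rng(2).normal(size=x.size)
print(fit_template(x, z, groove, x0=[0.0, 1.0, 0.1]).x)   # recovers ~[0.4, 1.2, 0.08]
```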

  4. A 3D-printed polymer micro-gripper with self-defined electrical tracks and thermal actuator

    NASA Astrophysics Data System (ADS)

    Alblalaihid, Khalid; Overton, James; Lawes, Simon; Kinnell, Peter

    2017-04-01

    This paper presents a simple fabrication process that allows for isolated metal tracks to be easily defined on the surface of 3D printed micro-scale polymer components. The process makes use of a standard low cost conformal sputter coating system to quickly deposit thin film metal layers on to the surface of 3D printed polymer micro parts. The key novelty lies in the inclusion of inbuilt masking features, on the surface of the polymer parts, to ensure that the conformal metal layer can be effectively broken to create electrically isolated metal features. The presented process is extremely flexible, and it is envisaged that it may be applied to a wide range of sensor and actuator applications. To demonstrate the process, a polymer micro-scale gripper with an inbuilt thermal actuator is designed and fabricated. In this work, the design methodology for creating the micro-gripper is presented, illustrating how the rapid and flexible manufacturing process allows for fast cycle-time design iterations to be performed. In addition, the compatibility of this approach with traditional design and analysis techniques, such as basic finite element simulation, is also demonstrated, with simulation results in reasonable agreement with experimental performance data for the micro-gripper.

  5. Real-time WAMI streaming target tracking in fog

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Blasch, Erik; Chen, Ning; Deng, Anna; Ling, Haibin; Chen, Genshe

    2016-05-01

    Real-time information fusion based on WAMI (Wide-Area Motion Imagery), FMV (Full Motion Video), and Text data is highly desired for many mission critical emergency or security applications. Cloud Computing has been considered promising to achieve big data integration from multi-modal sources. In many mission critical tasks, however, powerful Cloud technology cannot satisfy the tight latency tolerances because the servers are located far from the sensing platform; moreover, there is no guaranteed connection in emergency situations. Therefore, data processing, information fusion, and decision making are required to be executed on-site (i.e., near the data collection). Fog Computing, a recently proposed extension and complement for Cloud Computing, enables computing on-site without outsourcing jobs to a remote Cloud. In this work, we have investigated the feasibility of processing streaming WAMI in the Fog for real-time, online, uninterrupted target tracking. Using a single target tracking algorithm, we studied the performance of a Fog Computing prototype. The experimental results are very encouraging and validate the effectiveness of our Fog approach in achieving real-time frame rates.

  6. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem.

    PubMed

    McClay, Wilbert A; Yadav, Nancy; Ozbek, Yusuf; Haas, Andy; Attias, Hagaii T; Nagarajan, Srikantan S

    2015-09-30

    Ecumenically, the fastest growing segment of Big Data is human biology-related data and the annual data creation is on the order of zettabytes. The implications are global across industries, of which the treatment of brain related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices are acquiring and storing massive amounts of patient related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI as mouse button presses for real-time use in visual simulations. This process has been added into a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user's intent for specific keyboard strikes or mouse button presses. The BCI's data analytics of a subject's MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick retrieval data warehouse.

  7. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    PubMed Central

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-01-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat affected zone, and dendrites in a laser-assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during the experiment is crucial (for instance, time-resolved non-reversible experiments). PMID:27302087
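
    A minimal sketch of the plotting idea, computing a per-pattern average recorded intensity and an average filtered intensity over a raster scan, is given below. The Gaussian high-pass background estimate and all names are assumptions made for illustration; the paper does not specify this exact filter.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def pattern_statistics(pattern, background_sigma=10.0):
        """Return (mean intensity, mean high-pass intensity) for one Laue pattern.

        The high-pass image emphasizes sharp diffraction peaks by subtracting a
        Gaussian-blurred background estimate (the filter choice is an assumption).
        """
        mean_raw = pattern.mean()
        highpass = pattern - gaussian_filter(pattern, background_sigma)
        mean_filtered = np.clip(highpass, 0, None).mean()
        return mean_raw, mean_filtered

    def intensity_maps(patterns, scan_shape):
        """Build raster maps of both statistics from a sequence of Laue patterns."""
        stats = np.array([pattern_statistics(p) for p in patterns])
        return stats[:, 0].reshape(scan_shape), stats[:, 1].reshape(scan_shape)

    # toy usage: a 4 x 5 raster scan of synthetic 256 x 256 patterns
    rng = np.random.default_rng(0)
    fake_patterns = [rng.poisson(50, (256, 256)).astype(float) for _ in range(20)]
    raw_map, peak_map = intensity_maps(fake_patterns, scan_shape=(4, 5))
    print(raw_map.shape, peak_map.shape)
    ```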

  8. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem

    PubMed Central

    McClay, Wilbert A.; Yadav, Nancy; Ozbek, Yusuf; Haas, Andy; Attias, Hagaii T.; Nagarajan, Srikantan S.

    2015-01-01

    Ecumenically, the fastest growing segment of Big Data is human biology-related data and the annual data creation is on the order of zettabytes. The implications are global across industries, of which the treatment of brain related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices are acquiring and storing massive amounts of patient related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI as mouse button presses for real-time use in visual simulations. This process has been added into a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user’s intent for specific keyboard strikes or mouse button presses. The BCI’s data analytics of a subject’s MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick retrieval data warehouse. PMID:26437432

  9. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    NASA Astrophysics Data System (ADS)

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-06-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat affected zone, and dendrites in a laser-assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during the experiment is crucial (for instance, time-resolved non-reversible experiments).

  10. A quantitative study of 3D-scanning frequency and Δd of tracking points on the tooth surface

    PubMed Central

    Li, Hong; Lyu, Peijun; Sun, Yuchun; Wang, Yong; Liang, Xiaoyue

    2015-01-01

    Micro-movement of human jaws in the resting state might influence the accuracy of direct three-dimensional (3D) measurement. Providing a reference for sampling frequency settings of intraoral scanning systems to overcome this influence is important. In this study, we measured micro-movement, or change in distance (∆d), as the change in position of a single tracking point from one sampling time point to another in five human subjects. ∆d of tracking points on incisors at 7 sampling frequencies was judged against the clinical accuracy requirement to select proper sampling frequency settings. A curve was then fit between the median ∆d and the sampling frequency to predict the trend of ∆d with increasing frequency. The differences in ∆d among the subjects and between the upper and lower incisor feature points of the same subject were analyzed with a non-parametric test (α = 0.05). Significant differences in ∆d of incisor feature points were noted among different subjects and between upper and lower jaws of the same subject (P < 0.01). Overall, ∆d decreased with increasing frequency. When the frequency was 60 Hz, ∆d nearly reached the clinical accuracy requirement. Frequencies higher than 60 Hz did not significantly decrease ∆d further. PMID:26400112

  11. Real-Time Gaze Tracking for Public Displays

    NASA Astrophysics Data System (ADS)

    Sippl, Andreas; Holzmann, Clemens; Zachhuber, Doris; Ferscha, Alois

    In this paper, we explore the real-time tracking of human gazes in front of large public displays. The aim of our work is to estimate at which area of a display one or more people are looking at a time, independently of the distance and angle to the display as well as the height of the tracked people. Gaze tracking is relevant for a variety of purposes, including the automatic recognition of the user's focus of attention, or the control of interactive applications with gaze gestures. The present paper focuses on the former, and we show how gaze tracking can be used for implicit interaction in the pervasive advertising domain. We have developed a prototype for this purpose, which (i) uses an overhead mounted camera to distinguish four gaze areas on a large display, (ii) works for a wide range of positions in front of the display, and (iii) provides an estimation of the currently gazed quarters in real time. A detailed description of the prototype as well as the results of a user study with 12 participants, which show the recognition accuracy for different positions in front of the display, are presented.

  12. WE-AB-BRB-00: Session in Memory of Robert J. Shalek: High Resolution Dosimetry from 2D to 3D to Real-Time 3D.

    PubMed

    Li, Harold

    2016-06-01

    Despite widespread IMRT treatments at modern radiation therapy clinics, precise dosimetric commissioning of an IMRT system remains a challenge. In the most recent report from the Radiological Physics Center (RPC), nearly 20% of institutions failed an end-to-end test with an anthropomorphic head and neck phantom, a test that has rather lenient dose difference and distance-to-agreement criteria of 7% and 4 mm. The RPC report provides strong evidence that IMRT implementation is prone to error and that improved quality assurance tools are required. At the heart of radiation therapy dosimetry is the multidimensional dosimeter. However, due to the limited availability of water-equivalent dosimetry materials, research and development in this important field is challenging. In this session, we will review a few dosimeter developments that are either in the laboratory phase or in the pre-commercialization phase. 1) Radiochromic plastic. Novel formulations exhibit light absorbing optical contrast with very little scatter, enabling faster, broad beam optical CT design. 2) Storage phosphor. After irradiation, the dosimetry panels will be read out using a dedicated 2D scanning apparatus in a non-invasive, electro-optic manner and immediately restored for further use. 3) Liquid scintillator. Scintillators convert the energy from x-rays and proton beams into visible light, which can be recorded with a scientific camera (CCD or CMOS) from multiple angles. The 3D shape of the dose distribution can then be reconstructed. 4) Cherenkov emission imaging. Gated intensified imaging allows video-rate passive detection of Cherenkov emission during radiation therapy with the room lights on.

  13. Exploring single-molecule interactions through 3D optical trapping and tracking: From thermal noise to protein refolding

    NASA Astrophysics Data System (ADS)

    Wong, Wesley Philip

    The focus of this thesis is the development and application of a novel technique for investigating the structure and dynamics of weak interactions between and within single molecules. This approach is designed to explore unusual features in bi-directional transitions near equilibrium. The basic idea is to infer molecular events by observing changes in the three-dimensional Brownian fluctuations of a functionalized microsphere held weakly near a reactive substrate. Experimentally, I have developed a unique optical tweezers system that combines an interference technique for accurate 3D tracking (~1 nm vertically and ~2-3 nm laterally) with a continuous autofocus system which stabilizes the trap height to within 1-2 nm over hours. A number of different physical and biological systems were investigated with this instrument. Data interpretation was assisted by a multi-scale Brownian Dynamics simulation that I have developed. I have explored the 3D signatures of different molecular tethers, distinguishing between single and multiple attachments, as well as between stiff and soft linkages. As well, I have developed a technique for measuring the force-dependent compliance of molecular tethers from thermal noise fluctuations and demonstrated this with a short ssDNA oligomer. Another practical approach that I have developed for extracting information from fluctuation measurements is Inverse Brownian Dynamics, which yields the underlying potential of mean force and position-dependent diffusion coefficient from the Brownian motion of a particle. I have also developed a new force calibration method that takes into account video motion blur, and that uses this information to measure bead dynamics. Perhaps most significantly, I have made the first direct observations of the refolding of spectrin repeats under mechanical force, and investigated the force-dependent kinetics of this transition.

  14. Breakup of Finite-Size Colloidal Aggregates in Turbulent Flow Investigated by Three-Dimensional (3D) Particle Tracking Velocimetry.

    PubMed

    Saha, Debashish; Babler, Matthaus U; Holzner, Markus; Soos, Miroslav; Lüthi, Beat; Liberzon, Alex; Kinzelbach, Wolfgang

    2016-01-12

    Aggregates grown in mild shear flow are released, one at a time, into homogeneous isotropic turbulence, where their motion and intermittent breakup is recorded by three-dimensional particle tracking velocimetry (3D-PTV). The aggregates have an open structure with a fractal dimension of ∼2.2, and their size is 1.4 ± 0.4 mm, which is large compared to the Kolmogorov length scale (η = 0.15 mm). 3D-PTV of flow tracers allows for the simultaneous measurement of aggregate trajectories and the full velocity gradient tensor along their pathlines, which enables us to access the Lagrangian stress history of individual breakup events. From this data, we found no consistent pattern that relates breakup to the local flow properties at the point of breakup. Also, the correlation between the aggregate size and both shear stress and normal stress at the location of breakage is found to be weaker when compared with the correlation between size and drag stress. The analysis suggests that the aggregates are mostly broken due to the accumulation of the drag stress over a time lag on the order of the Kolmogorov time scale. This finding is explained by the fact that the aggregates are large, which gives their motion inertia and increases the time for stress propagation inside the aggregate. Furthermore, it is found that the scaling of the largest fragment and the accumulated stress at breakup follows an earlier established power law, i.e., d_frag ∼ σ^(-0.6), obtained from laminar nozzle experiments. This indicates that, despite the large size and the different type of hydrodynamic stress, the microscopic mechanism causing breakup is consistent over a wide range of aggregate size and stress magnitude.

  15. Fast leaf-fitting with generalized underdose/overdose constraints for real-time MLC tracking

    SciTech Connect

    Moore, Douglas Sawant, Amit; Ruan, Dan

    2016-01-15

    Purpose: Real-time multileaf collimator (MLC) tracking is a promising approach to the management of intrafractional tumor motion during thoracic and abdominal radiotherapy. MLC tracking is typically performed in two steps: transforming a planned MLC aperture in response to patient motion and refitting the leaves to the newly generated aperture. One of the challenges of this approach is the inability to faithfully reproduce the desired motion-adapted aperture. This work presents an optimization-based framework with which to solve this leaf-fitting problem in real time. Methods: This optimization framework is designed to facilitate the determination of leaf positions in real time while accounting for the trade-off between coverage of the PTV and avoidance of organs at risk (OARs). Derived within this framework, an algorithm is presented that can account for general linear transformations of the planned MLC aperture, particularly 3D translations and in-plane rotations. This algorithm, together with algorithms presented in Sawant et al. [“Management of three-dimensional intrafraction motion through real-time DMLC tracking,” Med. Phys. 35, 2050–2061 (2008)] and Ruan and Keall [Presented at the 2011 IEEE Power Engineering and Automation Conference (PEAM) (2011) (unpublished)], was applied to apertures derived from eight lung intensity modulated radiotherapy plans subjected to six-degree-of-freedom motion traces acquired from lung cancer patients using the kilovoltage intrafraction monitoring system developed at the University of Sydney. A quality-of-fit metric was defined, and each algorithm was evaluated in terms of quality-of-fit and computation time. Results: This algorithm is shown to perform leaf-fittings of apertures, each with 80 leaf pairs, in 0.226 ms on average, as compared to 0.082 and 64.2 ms for the algorithms of Sawant et al. and of Ruan and Keall, respectively. The algorithm shows approximately 12% improvement in quality-of-fit over the Sawant et al. algorithm.
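
    The underlying leaf-fitting trade-off can be illustrated with a toy, single-leaf-pair sketch that penalizes underdose (blocked target) more heavily than overdose (exposed normal tissue). This brute-force version and its weights are purely illustrative; it is not the optimization algorithm of the paper.

    ```python
    import numpy as np

    def fit_leaf_pair(target_open, w_under=3.0, w_over=1.0):
        """Fit one leaf pair (left edge, right edge) to a desired 1D aperture row.

        target_open : boolean array, True where the motion-adapted aperture is open.
        The cost penalizes uncovered target (underdose) more heavily than exposed
        normal tissue (overdose); the weights are illustrative, not from the paper.
        """
        n = target_open.size
        best = (0, 0, np.inf)
        for left in range(n + 1):              # candidate left-leaf edge
            for right in range(left, n + 1):   # candidate right-leaf edge
                open_mask = np.zeros(n, dtype=bool)
                open_mask[left:right] = True
                under = np.sum(target_open & ~open_mask)   # target blocked by leaves
                over = np.sum(~target_open & open_mask)    # non-target exposed
                cost = w_under * under + w_over * over
                if cost < best[2]:
                    best = (left, right, cost)
        return best

    # toy usage: an aperture row shifted by patient motion
    row = np.zeros(40, dtype=bool)
    row[12:30] = True
    print(fit_leaf_pair(row))   # (left_edge, right_edge, cost)
    ```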

  16. Real-time 3D image reconstruction of a 24×24 row-column addressing array: from raw data to image

    NASA Astrophysics Data System (ADS)

    Li, Chunyu; Yang, Jiali; Li, Xu; Zhong, Xiaoli; Song, Junjie; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    This paper presents real-time 3-D image reconstruction for a 7.5-MHz, 24×24 row-column addressing array transducer. The transducer works with a predesigned transmit/receive module. After the raw data are captured by the NI PXIe data acquisition (DAQ) module, the following processing procedures are performed: delay and sum (DAS), base-line calibration, envelope detection, logarithm compression, down-sampling, gray-scale mapping and 3-D display. These procedures are optimized for obtaining real-time 3-D images. A fixed-point focusing scheme is applied in the delay and sum (DAS) step to obtain line data from channel data. A zero-phase high-pass filter is used to calibrate the base-line shift of the echo. The classical Hilbert transformation is adopted to detect the envelopes of the echo. Logarithmic compression is applied to amplify the weak signals and narrow the gap to the strong ones. Down-sampling reduces the amount of data to improve the processing speed. Linear gray-scale mapping is introduced so that the weakest signal is mapped to 0 and the strongest signal to 255. The real-time 3-D images are displayed in multi-planar mode, which shows three orthogonal sections (vertical section, coronal section, transverse section). A trigger signal is sent from the transmit/receive module to the DAQ module at the start of each volume data generation to ensure synchronization between these two modules. All procedures, including data acquisition (DAQ), signal processing and image display, are programmed on the platform of LabVIEW. 675 MB of raw echo data are acquired per minute to generate 24×24×48-voxel 3-D images at 27 fps. An experiment on a strong reflection object (an aluminum slice) shows the feasibility of the whole process from raw data to real-time 3-D images.
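
    A minimal sketch of the per-line signal chain described above (Hilbert-transform envelope detection, logarithmic compression and linear gray-scale mapping) is given below. The dynamic-range value and the synthetic input are assumptions for illustration, and the DAS, base-line calibration and down-sampling stages are omitted.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def line_to_gray(rf_line, dynamic_range_db=40.0):
        """Convert one beamformed RF line to 8-bit gray values.

        Steps follow the abstract: envelope detection (Hilbert transform),
        logarithmic compression, and linear mapping to 0..255. The dynamic
        range value is an assumption for illustration.
        """
        envelope = np.abs(hilbert(rf_line))                           # envelope detection
        env_db = 20.0 * np.log10(envelope / envelope.max() + 1e-12)   # log compression
        env_db = np.clip(env_db, -dynamic_range_db, 0.0)
        gray = (env_db + dynamic_range_db) / dynamic_range_db * 255.0 # linear gray mapping
        return gray.astype(np.uint8)

    # toy usage: a decaying 7.5 MHz burst sampled at 40 MHz
    fs, f0 = 40e6, 7.5e6
    t = np.arange(0, 20e-6, 1 / fs)
    rf = np.exp(-t / 5e-6) * np.sin(2 * np.pi * f0 * t)
    print(line_to_gray(rf)[:10])
    ```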

  17. GPU Based Real-time Instrument Tracking with Three Dimensional Ultrasound

    PubMed Central

    Novotny, Paul M.; Stoll, Jeff A.; Vasilyev, Nikolay V.; Del Nido, Pedro J.; Dupont, Pierre E.; Howe, Robert D.

    2009-01-01

    Real-time three-dimensional ultrasound enables new intra-cardiac surgical procedures, but the distorted appearance of instruments in ultrasound poses a challenge to surgeons. This paper presents a detection technique that identifies the position of the instrument within the ultrasound volume. The algorithm uses a form of the generalized Radon transform to search for long straight objects in the ultrasound image, a feature characteristic of instruments and not found in cardiac tissue. When combined with passive markers placed on the instrument shaft, the full position and orientation of the instrument is found in 3D space. This detection technique is amenable to rapid execution on the current generation of personal computer graphics processor units (GPU). Our GPU implementation detected a surgical instrument in 31 ms, sufficient for real-time tracking at the 25 volumes per second rate of the ultrasound machine. A water tank experiment found instrument orientation errors of 1.1 degrees and tip position errors of less than 1.8 mm. Finally, an in vivo study demonstrated successful instrument tracking inside a beating porcine heart. PMID:17681483

  18. Real-Time Tumor Tracking in the Lung Using an Electromagnetic Tracking System

    SciTech Connect

    Shah, Amish P.; Kupelian, Patrick A.; Waghorn, Benjamin J.; Willoughby, Twyla R.; Rineer, Justin M.; Mañon, Rafael R.; Vollenweider, Mark A.; Meeks, Sanford L.

    2013-07-01

    Purpose: To describe the first use of the commercially available Calypso 4D Localization System in the lung. Methods and Materials: Under an institutional review board-approved protocol and an investigational device exemption from the US Food and Drug Administration, the Calypso system was used with nonclinical methods to acquire real-time 4-dimensional lung tumor tracks for 7 lung cancer patients. The aims of the study were to investigate (1) the potential for bronchoscopic implantation; (2) the stability of smooth-surface beacon transponders (transponders) after implantation; and (3) the ability to acquire tracking information within the lung. Electromagnetic tracking was not used for any clinical decision making and could only be performed before any radiation delivery in a research setting. All motion tracks for each patient were reviewed, and values of the average displacement, amplitude of motion, period, and associated correlation to a sinusoidal model (R^2) were tabulated for all 42 tracks. Results: For all 7 patients at least 1 transponder was successfully implanted. To assist in securing the transponder at the tumor site, it was necessary to implant a secondary fiducial for most transponders owing to the transponder's smooth surface. For 3 patients, insertion into the lung proved difficult, with only 1 transponder remaining fixed during implantation. One patient developed a pneumothorax after implantation of the secondary fiducial. Once implanted, 13 of 14 transponders remained stable within the lung and were successfully tracked with the tracking system. Conclusions: Our initial experience with electromagnetic guidance within the lung demonstrates that transponder implantation and tracking are achievable, though not clinically available. This research investigation proved that lung tumor motion exhibits large variations from fraction to fraction within a single patient and that improvements to both the transponder and the tracking system are still necessary.

  19. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation, are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get around this issue, formulating hologram reconstruction as a parametric inverse problem has been shown to accurately estimate the 3D positions and sizes of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it led to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σx × σy × σz = 0.15 × 0.15 × 1 pixels.

  20. Robust real-time instrument tracking in ultrasound images

    NASA Astrophysics Data System (ADS)

    Ortmaier, Tobias; Vitrani, Marie-Aude; Morel, Guillaume; Pinault, Samuel

    2005-04-01

    Minimally invasive surgery in combination with ultrasound (US) imaging imposes high demands on the surgeon's hand-eye-coordination capabilities. A possible solution to reduce these requirements is minimally invasive robotic surgery in which the instrument is guided by visual servoing towards the goal defined by the surgeon in the US image. This approach requires robust tracking of the instrument in the US image sequences, which is known to be difficult due to poor image quality. This paper presents algorithms and results of first tracking experiments. Adaptive thresholding based on Otsu's method allows the system to cope with large intensity variations of the instrument echo. Median filtering of the binary image and subsequently applied morphological operations suppress noise and echo artefacts. A fast run-length-code based labelling algorithm allows real-time labelling of the regions. A heuristic exploiting region size and region velocity helps to overcome ambiguities. The overall computation time is less than 20 ms per frame on a standard PC. The tracking algorithm requires no information about texture and shape, which are known to be very unreliable in US image sequences. Experimental results for two different instrument materials (polyvinyl chloride and polyurethane) are given, showing the performance of the proposed approach. With the appropriate material, trajectories are smooth and only a few outliers occur.
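
    A hedged sketch of a similar per-frame chain (Otsu thresholding, median filtering, morphological cleanup and connected-component labelling) using OpenCV is shown below; the area threshold is arbitrary and the region-velocity heuristic from the paper is not reproduced.

    ```python
    import cv2
    import numpy as np

    def detect_instrument(us_frame, min_area=200):
        """Locate the largest bright region in one 8-bit ultrasound frame.

        Follows the chain described in the abstract: adaptive (Otsu) thresholding,
        median filtering, morphological cleanup, and connected-component labelling;
        the region-velocity heuristic from the paper is not reproduced here.
        """
        _, binary = cv2.threshold(us_frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        binary = cv2.medianBlur(binary, 5)                      # suppress speckle noise
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
        # pick the largest foreground component above the area threshold
        areas = stats[1:, cv2.CC_STAT_AREA]
        if areas.size == 0 or areas.max() < min_area:
            return None
        return tuple(centroids[1 + int(areas.argmax())])        # (x, y) centroid

    # toy usage with a synthetic bright bar on a noisy background
    frame = (np.random.rand(480, 640) * 60).astype(np.uint8)
    cv2.rectangle(frame, (200, 230), (440, 250), 220, -1)
    print(detect_instrument(frame))
    ```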

  1. A heterogeneous sensor network simulation system with integrated terrain data for real-time target detection in 3D space

    NASA Astrophysics Data System (ADS)

    Lin, Hong; Tanner, Steve; Rushing, John; Graves, Sara; Criswell, Evans

    2008-03-01

    Large scale sensor networks composed of many low-cost small sensors networked together with a small number of high fidelity position sensors can provide a robust, fast and accurate air defense and warning system. The team has been developing simulations of such large networks, and is now adding terrain data in an effort to provide more realistic analysis of the approach. In this work, a heterogeneous sensor network simulation system with integrated terrain data for real-time target detection in a three-dimensional environment is presented. The sensor network can be composed of large numbers of low fidelity binary and bearing-only sensors, and small numbers of high fidelity position sensors, such as radars. The binary and bearing-only sensors are randomly distributed over a large geographic region, while the position sensors are distributed evenly. The elevations of the sensors are determined through the use of the DTED Level 0 dataset. The targets are located through fusing measurement information from all types of sensors modeled by the simulation. The network simulation utilizes the same search-based optimization algorithm as our previous two-dimensional sensor network simulation, with some significant modifications. The fusion algorithm is parallelized using a spatial decomposition approach: the entire surveillance area is divided into small regions and each region is assigned to one compute node. Each node processes sensor measurements and terrain data only for its assigned sub-region. A master process combines the information from all the compute nodes to get the overall network state. The simulation results indicate that the distributed fusion algorithm is efficient enough that an optimal solution can be reached within a reasonable time interval, before the arrival of the next sensor data, so that real-time target detection can be achieved. The simulation was performed on a Linux cluster with communication between nodes facilitated by the Message Passing Interface (MPI).
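
    A minimal sketch of the spatial-decomposition layout, with one strip of the surveillance area per compute node and the master gathering the per-region results, might look as follows using mpi4py. The strip geometry, synthetic detections and the trivial "local estimate" are illustrative assumptions, not the simulation's actual fusion algorithm.

    ```python
    # Hypothetical spatial-decomposition sketch; run with e.g.: mpiexec -n 4 python fusion_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Divide a 100 km x 100 km surveillance square into vertical strips, one per node.
    x_min, x_max = 0.0, 100.0
    strip = (x_max - x_min) / size
    my_x_lo, my_x_hi = x_min + rank * strip, x_min + (rank + 1) * strip

    # Each node keeps only the sensor reports whose x coordinate falls in its strip
    # (random synthetic detections stand in for binary/bearing-only measurements).
    rng = np.random.default_rng(rank)
    detections = rng.uniform(0.0, 100.0, size=(50, 2))
    local = detections[(detections[:, 0] >= my_x_lo) & (detections[:, 0] < my_x_hi)]

    # Stand-in for the local fusion step: report a crude local target estimate.
    local_estimate = local.mean(axis=0) if len(local) else None

    # The master combines the per-region estimates into the overall network state.
    all_estimates = comm.gather(local_estimate, root=0)
    if rank == 0:
        print("per-region estimates:", all_estimates)
    ```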

  2. Quantification of Shunt Volume Through Ventricular Septal Defect by Real-Time 3-D Color Doppler Echocardiography: An in Vitro Study.

    PubMed

    Zhu, Meihua; Ashraf, Muhammad; Tam, Lydia; Streiff, Cole; Kimura, Sumito; Shimada, Eriko; Sahn, David J

    2016-05-01

    Quantification of shunt volume is important for ventricular septal defects (VSDs). The aim of the in vitro study described here was to test the feasibility of using real-time 3-D color Doppler echocardiography (RT3-D-CDE) to quantify shunt volume through a modeled VSD. Eight porcine heart phantoms with VSDs ranging in diameter from 3 to 25 mm were studied. Each phantom was passively driven at five different stroke volumes from 30 to 70 mL and two stroke rates, 60 and 120 strokes/min. RT3-D-CDE full volumes were obtained at color Doppler volume rates of 15, 20 and 27 volumes/s. Shunt flow derived from RT3-D-CDE was linearly correlated with pump-driven stroke volume (R = 0.982). RT3-D-CDE-derived shunt volumes from the three color Doppler volume rate settings and two stroke rate acquisitions did not differ (p > 0.05). The use of RT3-D-CDE to determine shunt volume through VSDs is feasible. Different color volume rates and heart rates within the clinically/physiologically relevant range had no effect on VSD 3-D shunt volume determination.

  3. Real-Time 3D Fluoroscopy-Guided Large Core Needle Biopsy of Renal Masses: A Critical Early Evaluation According to the IDEAL Recommendations

    SciTech Connect

    Kroeze, Stephanie G. C.; Huisman, Merel; Verkooijen, Helena M.; Diest, Paul J. van; Ruud Bosch, J. L. H.; Bosch, Maurice A. A. J. van den

    2012-06-15

    Introduction: Three-dimensional (3D) real-time fluoroscopy cone beam CT is a promising new technique for image-guided biopsy of solid tumors. We evaluated the technical feasibility, diagnostic accuracy, and complications of this technique for guidance of large-core needle biopsy in patients with suspicious renal masses. Methods: Thirteen patients with 13 suspicious renal masses underwent large-core needle biopsy under 3D real-time fluoroscopy cone beam CT guidance. Image acquisition and subsequent 3D reconstruction were done with a mobile flat-panel detector (FD) C-arm system to plan the needle path. Large-core needle biopsies were taken by the interventional radiologist. Technical success, accuracy, and safety were evaluated according to the Innovation, Development, Exploration, Assessment, Long-term study (IDEAL) recommendations. Results: Median tumor size was 2.6 (range, 1.0-14.0) cm. In ten (77%) patients, the histological diagnosis corresponded to the imaging findings: five were malignancies, five benign lesions. Technical feasibility was 77% (10/13); in three patients biopsy results were inconclusive. The lesion size of these three patients was <2.5 cm. One patient developed a minor complication. Median follow-up was 16.0 (range, 6.4-19.8) months. Conclusions: 3D real-time fluoroscopy cone beam CT-guided biopsy of renal masses is feasible and safe. However, these first results suggest that diagnostic accuracy may be limited in patients with renal masses <2.5 cm.

  4. MO-FG-BRD-01: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: Introduction and KV Tracking

    SciTech Connect

    Fahimian, B.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  5. MO-FG-BRD-02: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MV Tracking

    SciTech Connect

    Berbeco, R.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  6. MO-FG-BRD-04: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MR Tracking

    SciTech Connect

    Low, D.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  7. MO-FG-BRD-03: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: EM Tracking

    SciTech Connect

    Keall, P.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  8. Real-time skeleton tracking for embedded systems

    NASA Astrophysics Data System (ADS)

    Coleca, Foti; Klement, Sascha; Martinetz, Thomas; Barth, Erhardt

    2013-03-01

    Touch-free gesture technology is beginning to become more popular with consumers and may have a significant future impact on interfaces for digital photography. However, almost every commercial software framework for gesture and pose detection is aimed at either desktop PCs or high-powered GPUs, making mobile implementations for gesture recognition an attractive area for research and development. In this paper we present an algorithm for hand skeleton tracking and gesture recognition that runs on an ARM-based platform (Pandaboard ES, OMAP 4460 architecture). The algorithm uses self-organizing maps to fit a given topology (skeleton) into a 3D point cloud. This is a novel way of approaching the problem of pose recognition as it does not employ complex optimization techniques or data-based learning. After an initial background segmentation step, the algorithm is run in parallel with heuristics, which detect and correct artifacts arising from insufficient or erroneous input data. We then optimize the algorithm for the ARM platform using fixed-point computation and the NEON SIMD architecture that the OMAP4460 provides. We tested the algorithm with two different depth-sensing devices (Microsoft Kinect, PMD Camboard). For both input devices we were able to accurately track the skeleton at the native framerate of the cameras.
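
    A toy sketch of the core idea, fitting a one-dimensional chain of self-organizing-map nodes to a 3D point cloud, is shown below. The chain topology, learning rate and neighborhood schedule are illustrative assumptions, and none of the platform-specific heuristics or fixed-point/NEON optimizations are reproduced.

    ```python
    import numpy as np

    def fit_chain_som(points, n_nodes=10, epochs=5, lr=0.3, radius=2.0):
        """Fit a 1-D chain of SOM nodes to a 3D point cloud (toy stand-in for a skeleton).

        Each sample pulls its winning node, and neighbors along the chain, toward it;
        the neighborhood shrinks over epochs. All parameter values are illustrative.
        """
        # initialize nodes along the bounding-box diagonal of the cloud
        lo, hi = points.min(axis=0), points.max(axis=0)
        nodes = np.linspace(lo, hi, n_nodes)
        idx = np.arange(n_nodes)
        for epoch in range(epochs):
            sigma = radius * (1.0 - epoch / epochs) + 0.5
            for p in np.random.permutation(points):
                winner = np.argmin(np.linalg.norm(nodes - p, axis=1))
                h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))  # chain neighborhood
                nodes += lr * h[:, None] * (p - nodes)
        return nodes

    # toy usage: a noisy bent "finger" in 3D
    t = np.linspace(0, 1, 400)
    cloud = np.stack([t, t ** 2, 0.2 * t], axis=1) + np.random.normal(0, 0.01, (400, 3))
    print(fit_chain_som(cloud).round(2))
    ```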

  9. Real-Time Bioluminescent Tracking of Cellular Population Dynamics

    SciTech Connect

    Close, Dan; Sayler, Gary Steven; Xu, Tingting; Ripp, Steven Anthony

    2014-01-01

    Cellular population dynamics are routinely monitored across many diverse fields for a variety of purposes. In general, these dynamics are assayed either through the direct counting of cellular aliquots followed by extrapolation to the total population size, or through the monitoring of signal intensity from any number of externally stimulated reporter proteins. While both are viable methods, here we describe a novel technique that allows for the automated, non-destructive tracking of cellular population dynamics in real time. This method, which relies on the detection of a continuous bioluminescent signal produced through expression of the bacterial luciferase gene cassette, provides a low-cost, low time-intensive means for generating additional data compared to alternative methods.

  10. Real-Time Bioluminescent Tracking of Cellular Population Dynamics

    PubMed Central

    Close, Dan; Xu, Tingling; Ripp, Steven; Sayler, Gary

    2015-01-01

    Cellular population dynamics are routinely monitored across many diverse fields for a variety of purposes. In general, these dynamics are assayed either through the direct counting of cellular aliquots followed by extrapolation to the total population size, or through the monitoring of signal intensity from any number of externally stimulated reporter proteins. While both are viable methods, here we describe a novel technique that allows for the automated, non-destructive tracking of cellular population dynamics in real time. This method, which relies on the detection of a continuous bioluminescent signal produced through expression of the bacterial luciferase gene cassette, provides a low-cost, low time-intensive means for generating additional data compared to alternative methods. PMID:24166372

  11. Optical eye tracking system for real-time noninvasive tumor localization in external beam radiotherapy

    SciTech Connect

    Via, Riccardo Fassi, Aurora; Fattori, Giovanni; Fontana, Giulia; Pella, Andrea; Tagaste, Barbara; Ciocca, Mario; Riboldi, Marco; Baroni, Guido; Orecchia, Roberto

    2015-05-15

    Purpose: External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesion position based on a priori knowledge coming from volumetric imaging. Methods: Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Results: Measurements on the phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by a 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring at up to 50 Hz. Conclusions: A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring.
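
    The 3D localization of a feature such as the pupil center from calibrated stereo cameras can be sketched generically as below, using OpenCV triangulation. The projection matrices and pixel coordinates are hypothetical, and the corneal-reflection gaze model of the paper is not reproduced.

    ```python
    import cv2
    import numpy as np

    def triangulate_pupil(P_left, P_right, uv_left, uv_right):
        """Triangulate a 3D pupil-center position from calibrated stereo detections.

        P_left, P_right : 3x4 camera projection matrices from calibration.
        uv_left, uv_right : (x, y) pixel coordinates of the pupil center in each view.
        """
        pts_l = np.asarray(uv_left, dtype=float).reshape(2, 1)
        pts_r = np.asarray(uv_right, dtype=float).reshape(2, 1)
        X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # homogeneous 4x1
        return (X_h[:3] / X_h[3]).ravel()                            # 3D point

    # toy usage: two cameras 60 mm apart looking down the z axis (hypothetical calibration)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_r = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])
    point = np.array([10.0, -5.0, 300.0, 1.0])
    uv_l = (P_l @ point)[:2] / (P_l @ point)[2]
    uv_r = (P_r @ point)[:2] / (P_r @ point)[2]
    print(triangulate_pupil(P_l, P_r, uv_l, uv_r))   # should recover ~[10, -5, 300]
    ```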

  12. Real-time probabilistic covariance tracking with efficient model update.

    PubMed

    Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li

    2012-05-01

    The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
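
    For background, a basic covariance region descriptor and the affine-invariant (generalized-eigenvalue) distance commonly used to compare such descriptors can be sketched as follows. The feature set [x, y, I, |Ix|, |Iy|] is one common choice and the incremental covariance tensor learning of the paper is not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from scipy.ndimage import sobel

    def covariance_descriptor(patch):
        """Covariance region descriptor over per-pixel features [x, y, I, |Ix|, |Iy|]."""
        h, w = patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                          np.abs(sobel(patch, axis=1)).ravel(),
                          np.abs(sobel(patch, axis=0)).ravel()], axis=0).astype(float)
        return np.cov(feats) + 1e-6 * np.eye(5)     # small ridge keeps it positive definite

    def riemannian_distance(C1, C2):
        """Affine-invariant distance between two covariance matrices."""
        lam = eigh(C1, C2, eigvals_only=True)       # generalized eigenvalues
        return np.sqrt(np.sum(np.log(lam) ** 2))

    # toy usage: compare a patch with a shifted copy of itself
    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    d = riemannian_distance(covariance_descriptor(img[:32, :32]),
                            covariance_descriptor(img[16:48, 16:48]))
    print(f"descriptor distance: {d:.3f}")
    ```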

  13. Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array.

    PubMed

    Kim, Do-Hyeong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Jeong, Ji-Seong; Lee, Jae-Won; Kim, Kyung-Ah; Kim, Nam; Yoo, Kwan-Hee

    2013-12-01

    This paper proposes an Open Computing Language (OpenCL) parallel processing method to generate the elemental image arrays (EIAs) for a hexagonal lens array from a three-dimensional (3D) object such as volume data. A hexagonal lens array has a higher fill factor compared to the rectangular lens array case; however, each pixel of an elemental image must be determined to belong to a single hexagonal lens. Therefore, generating the entire EIA requires very heavy computation. The proposed method reduces the processing time for the EIAs for a given hexagonal lens array. The proposed image-space parallel processing (ISPP) method enhances the processing speed enough to generate the 3D display for real-time interactive integral imaging with a hexagonal lens array. In our experiment, we generated the EIAs for a hexagonal lens array in real time and obtained good processing times for large volume data across multiple lens-array configurations.

  14. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced virtual 3D Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  15. Bilateral outflow obstructions without ventricular septal defect in an adult: Illustrated by real-time 3D echocardiography

    PubMed Central

    Mohan, Jagdish C.; Mohan, Vishwas

    2015-01-01

    Double-chambered right ventricle with discrete subaortic stenosis without ventricular septal defect is rare in adults. This report shows incremental value of 3D echocardiography in delineating the pathoanatomy of these lesions. PMID:26304572

  16. Crosstalk minimization in autostereoscopic multiview 3D display by eye tracking and fusion (overlapping) of viewing zones

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ki-Hyuk

    2012-06-01

    An autostereoscopic 3D display provides binocular perception without eyeglasses, but crosstalk can reduce the 3D effect and induce dizziness. Crosstalk-related problems degrade the depth effect, clearness, and realism of the 3D image. A novel method of reducing the crosstalk is designed and tested; the method is based on the fusion of viewing zones and the real-time eye position. It is shown experimentally that the crosstalk is effectively reduced at any position around the optimal viewing distance.

  17. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    Through real-time visual feedback of a 3D scatter diagram of pulsatile tissue motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt sensor to measure the probe orientation. The real-time 3D visualization was realized by developing an extended version of the PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues was continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the circle of Willis, and the cerebellar arteries, whose blood flow is of great interest to pediatricians because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue motion is due to local blood flow, it can be concluded that the system developed in this work is very useful to assist freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  18. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    PubMed

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
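
    As background on FFT-based hologram computation, a generic angular-spectrum propagation of a complex field with NumPy is sketched below. This is a single-plane illustration under assumed wavelength and pixel pitch, not the authors' CUDA/CUFFT pipeline for generating holograms from IP images.

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, pitch, distance):
        """Propagate a complex field by `distance` with the FFT-based angular spectrum method.

        A generic single-plane propagation sketch; `field` is the complex amplitude
        sampled with pixel `pitch` (meters). Not the authors' GPU implementation.
        """
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        # transfer function of free space (evanescent components suppressed)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        H = np.exp(1j * 2 * np.pi * distance / wavelength * np.sqrt(np.maximum(arg, 0.0)))
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # toy usage: propagate a small aperture by 10 cm at 532 nm with 8 um pixels
    aperture = np.zeros((512, 512), dtype=complex)
    aperture[240:272, 240:272] = 1.0
    holo_plane = angular_spectrum_propagate(aperture, 532e-9, 8e-6, 0.10)
    print(np.abs(holo_plane).max())
    ```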

  19. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D is the next expected step in the telecommunication world. Some work has already been done to make this transition technically possible, but the interaction of the third dimension with humans is not yet clear. Previously, it was found that any increased load on the visual system, such as prolonged TV watching, computer work or video gaming, can create visual fatigue. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of the third dimension based on characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue and its assessment for the two types of content. In order to perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and gave subjective evaluations of how they felt. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  20. Real-time 3D video utilizing a compressed sensing time-of-flight single-pixel camera

    NASA Astrophysics Data System (ADS)

    Edgar, Matthew P.; Sun, Ming-Jie; Gibson, Graham M.; Spalding, Gabriel C.; Phillips, David B.; Padgett, Miles J.

    2016-09-01

    Time-of-flight 3D imaging is an important tool for applications such as remote sensing, machine vision and autonomous navigation. Conventional time-of-flight three-dimensional imaging systems, which use a raster-scanned laser to measure the range of each pixel in the scene sequentially, inherently have acquisition times that scale directly with the resolution. Here we show a modified time-of-flight 3D camera employing structured illumination, which uses a visible camera to enable a novel compressed sensing technique, minimising the acquisition time as well as providing a high-resolution reflectivity map for image overlay. Furthermore, a quantitative assessment of the 3D imaging performance is provided.

  1. Capturing geometry in real-time using a tracked Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Tenedorio, Daniel; Fecho, Marlena; Schwartzhaupt, Jorge; Pardridge, Robert; Lue, James; Schulze, Jürgen P.

    2012-03-01

    We investigate the suitability of the Microsoft Kinect device for capturing real-world objects and places. Our new geometry scanning system permits the user to obtain detailed triangle models of non-moving objects with a tracked Kinect. The system generates a texture map for the triangle mesh using video frames from the Kinect's color camera and displays a continually-updated preview of the textured model in real-time, allowing the user to re-scan the scene from any direction to fill holes or increase the texture resolution. We also present filtering methods to maintain a high-quality model of reasonable size by removing overlapping or low-precision range scans. Our approach works well in the presence of degenerate geometry or when closing loops about the scanned subject. We demonstrate the ability of our system to acquire 3D models at human scale with a prototype implementation in the StarCAVE, a virtual reality environment at the University of California, San Diego. We designed the capturing algorithm to support the scanning of large areas, provided that accurate tracking is available.

  2. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed the applications of echography to be extended, non-invasively and in real time, to new fields of study such as functional imaging of the brain, cardiac electrophysiology, and quantitative real-time imaging of the intrinsic mechanical properties of tumors. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  3. Imaging approach to real-time tracking of submarine pipeline

    NASA Astrophysics Data System (ADS)

    Zingaretti, Primo; Tascini, Guido; Puliti, Paolo; Zanoli, Silvia

    1996-03-01

    This work presents a real-time underwater imaging system for identification and tracking of a submarine pipeline in a sequence of recorded images. The main novelty of this work lies in adopting an automatic approach that is entirely based on the analysis and interpretation of visual data, in spite of the various limitations on the ability to image underwater objects. The analysis of the data starts from image processing operations (such as filtering, profile analysis, and feature enhancement) implemented on a dedicated board. The system then employs an efficient dynamic process for recognizing the two contours of the pipeline: in each frame it determines the equations of the two straight lines corresponding to the pipeline contours. The system achieves satisfactory performance in real-time operation: up to eight frames per second on a Pentium-based PC. The results are all the more meaningful because the input images were acquired by three cameras mounted on a remotely operated vehicle travelling at one nautical mile per hour, without any attention to either illumination conditions or camera stability. This work originated from the interest of Snamprogetti in enhancing the level of automation in submarine pipeline inspection.
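
    The abstract states that each frame yields the equations of the two straight lines corresponding to the pipeline contours but does not detail the fitting step; the sketch below shows one common way such lines can be recovered from edge points, a coarse Hough accumulator (a hypothetical stand-in, not necessarily the authors' dynamic recognition process).

        import numpy as np

        def hough_lines(edge_points, img_diag, n_theta=180, n_rho=400, n_lines=2):
            """Toy Hough accumulator: return the n_lines strongest (rho, theta) pairs.

            edge_points: (N, 2) array of (x, y) edge pixels from the filtered frame.
            The strongest peaks are taken as the pipeline contours; peak suppression
            and sub-bin refinement are omitted for brevity.
            """
            thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
            rhos = np.linspace(-img_diag, img_diag, n_rho)
            acc = np.zeros((n_rho, n_theta), dtype=np.int32)
            cos_t, sin_t = np.cos(thetas), np.sin(thetas)
            for x, y in edge_points:
                r = x * cos_t + y * sin_t                  # rho for every theta
                idx = np.clip(np.digitize(r, rhos) - 1, 0, n_rho - 1)
                acc[idx, np.arange(n_theta)] += 1
            peaks = np.argsort(acc.ravel())[-n_lines:]
            return [(rhos[p // n_theta], thetas[p % n_theta]) for p in peaks]

        # Hypothetical edge points lying near two parallel pipeline contours.
        xs = np.arange(0, 200)
        pts = np.concatenate([np.stack([xs, 0.5 * xs + 20], 1),
                              np.stack([xs, 0.5 * xs + 80], 1)])
        print(hough_lines(pts, img_diag=400))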

  4. Unstructured grids in 3D and 4D for a time-dependent interface in front tracking with improved accuracy

    SciTech Connect

    Glimm, J.; Grove, J. W.; Li, X. L.; Li, Y.; Xu, Z.

    2002-01-01

    Front tracking traces the dynamic evolution of an interface separating different materials or fluid components. In this paper, the authors describe three types of grid generation methods used in the front tracking method. The first is an unstructured surface grid. The second is a structured grid-based reconstruction method. The third is a time-space grid, also grid-based, for a conservative tracking algorithm with improved accuracy.

  5. MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management

    SciTech Connect

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) Understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery workflow.

  6. Detailed Measurement of Wall Strain with 3D Speckle Tracking in the Aortic Root: A Case of Bionic Support for Clinical Decision Making

    PubMed Central

    Vogt, Sebastian; Karatolios, Konstantinos; Wittek, Andreas; Blasé, Christopher; Ramaswamy, Anette; Mirow, Nikolas; Moosdorf, Rainer

    2016-01-01

    Three-dimensional (3D) wall motion tracking (WMT) based on ultrasound imaging enables estimation of aortic wall motion and deformation. It provides insights into changes in vascular compliance and vessel wall properties essential for understanding the pathogenesis and progression of aortic diseases. In this report, we employed the novel 3D WMT analysis on an ascending aortic aneurysm (AA) to estimate local aortic wall motion and strain in a patient scheduled for replacement of the aortic root. Although progression of the diameter indicates surgical therapy, we addressed the question of the optimal surgical time point. According to the data, the AA in our case has an enlarged diameter and correspondingly reduced circumferential wall strain, but the area tracking data reveal almost normal elastic properties. Virtual remodeling of the aortic root allows different loading conditions to be explored to determine the optimal time for surgical intervention. PMID:28018834

  7. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become increasingly important in recent years. Several well-established methods already yield impressive results. However, even under quite common conditions such as object movement or complex shapes, most methods become unsatisfactory. Thus, 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables motion-robust 3D acquisition of complex-shaped object surfaces with a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole "sensor zoo" for different object sizes is presented. In conclusion, an overview of current and future fields of investigation is given.

  8. Probing the benefits of real-time tracking during cancer care.

    PubMed

    Patel, Rupa A; Klasnja, Predrag; Hartzler, Andrea; Unruh, Kenton T; Pratt, Wanda

    2012-01-01

    People with cancer experience many unanticipated symptoms and struggle to communicate them to clinicians. Although researchers have developed patient-reported outcome (PRO) tools to address this problem, such tools capture retrospective data intended for clinicians to review. In contrast, real-time tracking tools with visible results for patients could improve health outcomes and communication with clinicians, while also enhancing patients' symptom management. To understand potential benefits of such tools, we studied the tracking behaviors of 25 women with breast cancer. We provided 10 of these participants with a real-time tracking tool that served as a "technology probe" to uncover behaviors and benefits from voluntary use. Our findings showed that while patients' tracking behaviors without a tool were fragmented and sporadic, these behaviors with a tool were more consistent. Participants also used tracked data to see patterns among symptoms, feel psychosocial comfort, and improve symptom communication with clinicians. We conclude with design implications for future real-time tracking tools.

  9. Real-time monitoring of quorum sensing in 3D-printed bacterial aggregates using scanning electrochemical microscopy.

    PubMed

    Connell, Jodi L; Kim, Jiyeon; Shear, Jason B; Bard, Allen J; Whiteley, Marvin

    2014-12-23

    Microbes frequently live in nature as small, densely packed aggregates containing ∼10^1-10^5 cells. These aggregates not only display distinct phenotypes, including resistance to antibiotics, but also serve as building blocks for larger biofilm communities. Aggregates within these larger communities display nonrandom spatial organization, and recent evidence indicates that this spatial organization is critical for fitness. Studying single aggregates as well as spatially organized aggregates remains challenging because of the technical difficulties associated with manipulating small populations. Micro-3D printing is a lithographic technique capable of creating aggregates in situ by printing protein-based walls around individual cells or small populations. This 3D-printing strategy can organize bacteria in complex arrangements to investigate how spatial and environmental parameters influence social behaviors. Here, we combined micro-3D printing and scanning electrochemical microscopy (SECM) to probe quorum sensing (QS)-mediated communication in the bacterium Pseudomonas aeruginosa. Our results reveal that QS-dependent behaviors are observed within aggregates as small as 500 cells; however, aggregates larger than 2,000 bacteria are required to stimulate QS in neighboring aggregates positioned 8 μm away. These studies provide a powerful system to analyze the impact of spatial organization and aggregate size on microbial behaviors.

  10. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  11. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes.

  12. Real-time 3D millimeter wave imaging based FMCW using GGD focal plane array as detectors

    NASA Astrophysics Data System (ADS)

    Levanon, Assaf; Rozban, Daniel; Kopeika, Natan S.; Yitzhaky, Yitzhak; Abramovich, Amir

    2014-03-01

    Millimeter wave (MMW) imaging systems are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is relatively low. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. A 3D MMW imaging system based on chirp radar was previously studied using a scanning imaging system with a single detector. The system presented here proposes to employ a chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA) of plasma-based detectors. Each point on the object corresponds to a point in the image and includes the distance information, which enables 3D MMW imaging. The radar system requires that the millimeter wave detector (GDD) be able to operate as a heterodyne detector. Since the source of radiation is a frequency-modulated continuous wave (FMCW), the heterodyne-detected signal gives the object's depth information according to the value of the difference frequency, in addition to the reflectance of the image. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of GDD devices. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
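
    The depth recovery rests on the standard FMCW relation between the heterodyne difference (beat) frequency and range, R = c * f_beat * T_chirp / (2 * B). A minimal sketch, with chirp parameters chosen purely for illustration rather than taken from the GDD system:

        # Standard FMCW relation: beat (difference) frequency maps to range.
        C = 3.0e8            # speed of light, m/s

        def fmcw_range(f_beat_hz, bandwidth_hz, chirp_time_s):
            """Range from the heterodyne difference frequency of a linear FMCW chirp.

            R = c * f_beat * T_chirp / (2 * B) -- the parameters below are
            illustrative assumptions, not the values used with the GDD focal-plane array.
            """
            return C * f_beat_hz * chirp_time_s / (2.0 * bandwidth_hz)

        # Example: a 2 GHz sweep over 1 ms; a 133.3 kHz beat corresponds to ~10 m.
        print(fmcw_range(133.3e3, 2.0e9, 1.0e-3))   # ~10.0 m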

  13. C-ME: A 3D Community-Based, Real-Time Collaboration Tool for Scientific Research and Training

    PubMed Central

    Kolatkar, Anand; Kennedy, Kevin; Halabuk, Dan; Kunken, Josh; Marrinucci, Dena; Bethel, Kelly; Guzman, Rodney; Huckaby, Tim; Kuhn, Peter

    2008-01-01

    The need for effective collaboration tools is growing as multidisciplinary proteome-wide projects and distributed research teams become more common. The resulting data is often quite disparate, stored in separate locations, and not contextually related. Collaborative Molecular Modeling Environment (C-ME) is an interactive community-based collaboration system that allows researchers to organize information, visualize data on a two-dimensional (2-D) or three-dimensional (3-D) basis, and share and manage that information with collaborators in real time. C-ME stores the information in industry-standard databases that are immediately accessible by appropriate permission within the computer network directory service or anonymously across the internet through the C-ME application or through a web browser. The system addresses two important aspects of collaboration: context and information management. C-ME allows a researcher to use a 3-D atomic structure model or a 2-D image as a contextual basis on which to attach and share annotations to specific atoms or molecules or to specific regions of a 2-D image. These annotations provide additional information about the atomic structure or image data that can then be evaluated, amended or added to by other project members. PMID:18286178

  14. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    NASA Astrophysics Data System (ADS)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

    The lack of accessible and approachable ways of training surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are not widely available due to the high cost of the equipment. Using modern technologies such as virtual reality and hand-movement tracking, we want to create an innovative method of learning the techniques of conducting operations in a 3D game format, which can make the education process interesting and effective. Creating a 3D virtual simulator will solve several conceptual problems at once: practical skills can be improved without time limits and without risk to the patient; the operating-room environment and anatomical structures can be rendered with high realism; game mechanics ease information perception and accelerate the memorization of methods; and the program is accessible.

  15. A Real-Time Skin Dose Tracking System for Biplane Neuro-Interventional Procedures

    PubMed Central

    Rana, Vijay K.; Rudin, Stephen; Bednarek, Daniel R.

    2015-01-01

    A biplane dose-tracking system (Biplane-DTS) that provides a real-time display of the skin-dose distribution on a 3D-patient graphic during neuro-interventional fluoroscopic procedures was developed. Biplane-DTS calculates patient skin dose using geometry and exposure information for the two gantries of the imaging system acquired from the digital system bus. The dose is calculated for individual points on the patient graphic surface for each exposure pulse and cumulative dose for both x-ray tubes is displayed as color maps on a split screen showing frontal and lateral projections of a 3D-humanoid graphic. Overall peak skin dose (PSD), FOV-PSD and current dose rates for the two gantries are also displayed. Biplane-DTS uses calibration files of mR/mAs for the frontal and lateral tubes measured with and without the table in the beam at the entrance surface of a 20 cm thick PMMA phantom placed 15 cm tube-side of the isocenter. For neuro-imaging, conversion factors are applied as a function of entrance field area to scale the calculated dose to that measured with a Phantom Laboratory head phantom which contains a human skull to account for differences in backscatter between PMMA and the human head. The software incorporates inverse-square correction to each point on the skin and corrects for angulation of the beam through the table. Dose calculated by Biplane DTS and values measured by a 6-cc ionization chamber placed on the head phantom at multiple points agree within a range of −3% to +7% with a standard deviation for all points of less than 3%. PMID:26430290
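
    A simplified sketch of the per-pulse dose update described above: a calibration factor in mR/mAs, an inverse-square correction from the calibration distance to each skin point, and field-area and table attenuation factors. All names and numbers are placeholders, not the Biplane-DTS calibration data.

        import numpy as np

        def skin_dose_per_pulse(mAs, cal_mR_per_mAs, ref_dist_cm, skin_point, source_pos,
                                field_area_factor=1.0, table_factor=1.0):
            """Simplified per-pulse skin dose for one graphic point (arbitrary dose units).

            Mirrors the ingredients named in the abstract -- calibration in mR/mAs,
            inverse-square correction from the calibration distance to the skin point,
            a field-area conversion factor and a table attenuation factor -- with
            placeholder values rather than the actual Biplane-DTS calibration files.
            """
            d = np.linalg.norm(np.asarray(skin_point, float) - np.asarray(source_pos, float))
            inverse_square = (ref_dist_cm / d) ** 2
            return mAs * cal_mR_per_mAs * inverse_square * field_area_factor * table_factor

        # Accumulate cumulative dose over a run of exposure pulses for one skin point.
        cumulative = 0.0
        for pulse_mAs in [2.0] * 150:                     # hypothetical pulse sequence
            cumulative += skin_dose_per_pulse(pulse_mAs, cal_mR_per_mAs=0.5,
                                              ref_dist_cm=60.0, skin_point=(0, 0, 55.0),
                                              source_pos=(0, 0, 0), field_area_factor=0.9,
                                              table_factor=0.85)
        print(round(cumulative, 1))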

  16. A real-time skin dose tracking system for biplane neuro-interventional procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay K.; Rudin, Stephen R.; Bednarek, Daniel R.

    2015-03-01

    A biplane dose-tracking system (Biplane-DTS) that provides a real-time display of the skin-dose distribution on a 3D-patient graphic during neuro-interventional fluoroscopic procedures was developed. Biplane-DTS calculates patient skin dose using geometry and exposure information for the two gantries of the imaging system acquired from the digital system bus. The dose is calculated for individual points on the patient graphic surface for each exposure pulse and cumulative dose for both x-ray tubes is displayed as color maps on a split screen showing frontal and lateral projections of a 3D-humanoid graphic. Overall peak skin dose (PSD), FOV-PSD and current dose rates for the two gantries are also displayed. Biplane-DTS uses calibration files of mR/mAs for the frontal and lateral tubes measured with and without the table in the beam at the entrance surface of a 20 cm thick PMMA phantom placed 15 cm tube-side of the isocenter. For neuro-imaging, conversion factors are applied as a function of entrance field area to scale the calculated dose to that measured with a Phantom Laboratory head phantom which contains a human skull to account for differences in backscatter between PMMA and the human head. The software incorporates inverse-square correction to each point on the skin and corrects for angulation of the beam through the table. Dose calculated by Biplane DTS and values measured by a 6-cc ionization chamber placed on the head phantom at multiple points agree within a range of -3% to +7% with a standard deviation for all points of less than 3%.

  17. Towards real-time 3D US to CT bone image registration using phase and curvature feature based GMM matching.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2011-01-01

    In order to use pre-operatively acquired computed tomography (CT) scans to guide surgical tool movements in orthopaedic surgery, the CT scan must first be registered to the patient's anatomy. Three-dimensional (3D) ultrasound (US) could potentially be used for this purpose if the registration process could be made sufficiently automatic, fast and accurate, but existing methods have difficulties meeting one or more of these criteria. We propose a near-real-time US-to-CT registration method that matches point clouds extracted from local phase images with points selected in part on the basis of local curvature. The point clouds are represented as Gaussian Mixture Models (GMM) and registration is achieved by minimizing the statistical dissimilarity between the GMMs using an L2 distance metric. We present quantitative and qualitative results on both phantom and clinical pelvis data and show a mean registration time of 2.11 s with a mean accuracy of 0.49 mm.
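
    The registration cost is the L2 distance between Gaussian-mixture representations of the two point clouds. The sketch below evaluates that cost in closed form for isotropic, equal-weight components; the rigid-transform optimizer and the phase/curvature point selection described above are omitted, and the point clouds are made up.

        import numpy as np

        def gmm_l2_cross_term(points_a, points_b, sigma):
            """Cross term of the L2 distance between two isotropic, equal-weight GMMs.

            Each point is a Gaussian component with covariance sigma^2 I. The closed
            form of the Gaussian product integral gives sum_ij N(mu_i - mu_j; 0, 2 sigma^2 I).
            Minimizing the full L2 distance over a rigid transform amounts to maximizing
            this cross term; the optimizer itself is omitted in this sketch.
            """
            d = points_a.shape[1]
            diff = points_a[:, None, :] - points_b[None, :, :]
            sq = np.sum(diff ** 2, axis=-1)
            norm = (4.0 * np.pi * sigma ** 2) ** (-d / 2.0)
            return np.sum(norm * np.exp(-sq / (4.0 * sigma ** 2)))

        def l2_distance(points_a, points_b, sigma):
            """Full L2 distance <f,f> - 2<f,g> + <g,g> (up to constant component weights)."""
            return (gmm_l2_cross_term(points_a, points_a, sigma)
                    - 2.0 * gmm_l2_cross_term(points_a, points_b, sigma)
                    + gmm_l2_cross_term(points_b, points_b, sigma))

        # Hypothetical US and CT bone-surface point clouds (mm); lower is better aligned.
        us_pts = np.random.rand(200, 3) * 40.0
        ct_pts = us_pts + np.array([0.5, -0.3, 0.2])      # small misalignment
        print(l2_distance(us_pts, ct_pts, sigma=2.0))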

  18. [Measurement of left atrial and ventricular volumes in real-time 3D echocardiography. Validation by nuclear magnetic resonance]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Qin, J. X.; White, R. D.; Thomas, J. D.

    2001-01-01

    The measurement of the left ventricular ejection fraction is important for the evaluation of cardiomyopathy and depends on the measurement of left ventricular volumes. There are no existing conventional echocardiographic means of measuring the true left atrial and ventricular volumes without mathematical approximations. The aim of this study was to test a new real-time 3-dimensional echocardiographic system for calculating left atrial and ventricular volumes in 40 patients, after in vitro validation. The volumes of the left atrium and ventricle, acquired from real-time 3D echocardiography in the apical view, were calculated in 7 sections parallel to the surface of the probe and compared with atrial (10 patients) and ventricular (30 patients) volumes calculated by nuclear magnetic resonance with the Simpson method, and with volumes of water in balloons placed in a cistern. Linear regression analysis showed an excellent correlation between the real volume of water in the balloons and the volumes given by real-time 3-dimensional echocardiography (y = 0.94x + 5.5, r = 0.99, p < 0.001, D = -10 +/- 4.5 ml). A good correlation was observed between real-time 3-dimensional echocardiography and nuclear magnetic resonance for the measurement of left atrial and ventricular volumes (y = 0.95x - 10, r = 0.91, p < 0.001, D = -14.8 +/- 19.5 ml and y = 0.87x + 10, r = 0.98, p < 0.001, D = -8.3 +/- 18.7 ml, respectively). The authors conclude that real-time three-dimensional echocardiography allows accurate measurement of left heart volumes, underlining the clinical potential of this new 3D method.
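
    One plausible reading of the volume computation, seven sections parallel to the probe face, is a disc-summation of slice areas; the minimal sketch below uses made-up slice areas (this is an assumption about the method, not a detail given in the abstract).

        # Disc-summation volume from parallel cross-sections, as a rough illustration
        # of computing a chamber volume from slices parallel to the probe face.
        def volume_from_sections(areas_cm2, slice_thickness_cm):
            """Sum of slice areas times slice spacing; the areas below are hypothetical."""
            return sum(areas_cm2) * slice_thickness_cm

        lv_areas = [2.1, 6.8, 10.4, 12.0, 11.2, 7.9, 3.0]   # cm^2, 7 sections (made up)
        print(volume_from_sections(lv_areas, slice_thickness_cm=1.2), "ml")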

  19. Experimental evaluations of the accuracy of 3D and 4D planning in robotic tracking stereotactic body radiotherapy for lung cancers

    SciTech Connect

    Chan, Mark K. H.; Kwong, Dora L. W.; Ng, Sherry C. Y.; Tong, Anthony S. M.; Tam, Eric K. W.

    2013-04-15

    Purpose: Due to the complexity of 4D target tracking radiotherapy, the accuracy of this treatment strategy should be experimentally validated against established standard 3D technique. This work compared the accuracy of 3D and 4D dose calculations in respiration tracking stereotactic body radiotherapy (SBRT). Methods: Using the 4D planning module of the CyberKnife treatment planning system, treatment plans for a moving target and a static off-target cord structure were created on different four-dimensional computed tomography (4D-CT) datasets of a thorax phantom moving in different ranges. The 4D planning system used B-splines deformable image registrations (DIR) to accumulate dose distributions calculated on different breathing geometries, each corresponding to a static 3D-CT image of the 4D-CT dataset, onto a reference image to compose a 4D dose distribution. For each motion, 4D optimization was performed to generate a 4D treatment plan of the moving target. For comparison with standard 3D planning, each 4D plan was copied to the reference end-exhale images and a standard 3D dose calculation was followed. Treatment plans of the off-target structure were first obtained by standard 3D optimization on the end-exhale images. Subsequently, they were applied to recalculate the 4D dose distributions using DIRs. All dose distributions that were initially obtained using the ray-tracing algorithm with equivalent path-length heterogeneity correction (3D_EPL and 4D_EPL) were recalculated by a Monte Carlo algorithm (3D_MC and 4D_MC) to further investigate the effects of dose calculation algorithms. The calculated 3D_EPL, 3D_MC, 4D_EPL, and 4D_MC dose distributions were compared to measurements by Gafchromic EBT2 films in the axial and coronal planes of the moving target object, and the coronal plane for the static off-target object, based on the γ metric at 5%/3mm criteria (γ_5%/3mm). Treatment plans were considered
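
    Film and calculated dose distributions are compared with the γ metric at 5%/3 mm. A toy 1D global gamma evaluation is sketched below; real film comparisons are 2D and include registration and normalization steps that are omitted here.

        import numpy as np

        def gamma_1d(positions_mm, dose_meas, dose_calc, dose_crit=0.05, dta_mm=3.0):
            """Toy 1D global gamma evaluation at 5%/3 mm.

            For each measured point, gamma is the minimum over calculated points of
            sqrt((dr/DTA)^2 + (dD/(crit*Dmax))^2); the pass rate is the fraction <= 1.
            """
            d_norm = dose_crit * np.max(dose_meas)
            dr = (positions_mm[:, None] - positions_mm[None, :]) / dta_mm
            dd = (dose_calc[None, :] - dose_meas[:, None]) / d_norm
            gamma = np.sqrt(dr ** 2 + dd ** 2).min(axis=1)
            return gamma, np.mean(gamma <= 1.0)

        x = np.linspace(-50, 50, 201)
        measured = np.exp(-x ** 2 / (2 * 15.0 ** 2))              # film-like profile
        calculated = np.exp(-(x - 1.0) ** 2 / (2 * 15.0 ** 2))    # plan shifted by 1 mm
        g, pass_rate = gamma_1d(x, measured, calculated)
        print(round(pass_rate, 3))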

  20. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    NASA Astrophysics Data System (ADS)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H.; Meeks, Sanford L.; Kupelian, Patrick A.

    2010-09-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated with the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
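
    The delivered dose is described as the sum, over discrete tumor-motion steps within the beam-on period, of the planned dose seen by the displaced target. A toy 1D version of that accumulation, with a made-up dose profile and breathing trace, is sketched below.

        import numpy as np

        def accumulate_delivered_dose(static_dose_profile, grid_mm, tumor_offsets_mm, beam_on_mask):
            """Toy accumulation of delivered dose over discrete tumor-motion steps.

            At each motion step during beam-on, the planned (static) dose profile is
            sampled at the displaced tumor position and added as an equal per-step
            fraction. The motion amplitude and dose profile are illustrative assumptions.
            """
            total = 0.0
            n_on = max(1, int(np.sum(beam_on_mask)))
            for offset, on in zip(tumor_offsets_mm, beam_on_mask):
                if on:
                    total += np.interp(offset, grid_mm, static_dose_profile) / n_on
            return total

        grid = np.linspace(-30, 30, 121)
        planned = 2.0 * np.exp(-grid ** 2 / (2 * 10.0 ** 2))      # Gy, beam centered at 0
        t = np.linspace(0, 2 * np.pi, 20)                          # one breath, 20 motion steps
        offsets = 8.0 * np.sin(t)                                  # mm, breathing motion
        print(accumulate_delivered_dose(planned, grid, offsets, beam_on_mask=np.ones_like(t, bool)))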

  1. Towards real-time 3D US-CT registration on the beating heart for guidance of minimally invasive cardiac interventions

    NASA Astrophysics Data System (ADS)

    Li, Feng; Lang, Pencilla; Rajchl, Martin; Chen, Elvis C. S.; Guiraudon, Gerard; Peters, Terry M.

    2012-02-01

    Compared to conventional open-heart surgeries, minimally invasive cardiac interventions cause less trauma and fewer side effects for patients. However, the direct view of surgical targets and tools is usually not available in minimally invasive procedures, which makes image-guided navigation systems essential. The choice of imaging modalities used in the navigation systems must consider the capability of imaging soft tissues, spatial and temporal resolution, compatibility and flexibility in the OR, and financial cost. In this paper, we propose a new means of guidance for minimally invasive cardiac interventions using 3D real-time ultrasound images to show the intra-operative heart motion together with pre-operative CT image(s) employed to demonstrate high-quality 3D anatomical context. We also develop a method to register intra-operative ultrasound and pre-operative CT images in close to real-time. The registration method has two stages. In the first, anatomical features are segmented from the first frame of ultrasound images and the CT image(s). A feature-based registration is used to align those features. The result of this is used as an initialization in the second stage, in which a mutual information based registration is used to register every ultrasound frame to the CT image(s). A GPU-based implementation is used to accelerate the registration.
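
    The second registration stage maximizes mutual information between each ultrasound frame and the CT. The sketch below computes only the mutual-information similarity from a joint intensity histogram; the feature-based initialization, the transform model and the GPU acceleration are omitted, and the images are synthetic.

        import numpy as np

        def mutual_information(img_a, img_b, bins=32):
            """Mutual information of two equally sized images from their joint histogram.

            This is the similarity metric only; in the described pipeline it would be
            evaluated inside an optimizer over rigid-transform parameters, seeded by
            the feature-based initialization and computed on the GPU.
            """
            hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = hist_2d / hist_2d.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        # Hypothetical US frame and resliced CT plane of the same size.
        us_frame = np.random.rand(128, 128)
        ct_slice = 0.7 * us_frame + 0.3 * np.random.rand(128, 128)
        print(mutual_information(us_frame, ct_slice))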

  2. Longitudinal, label-free, quantitative tracking of cell death and viability in a 3D tumor model with OCT

    NASA Astrophysics Data System (ADS)

    Jung, Yookyung; Klein, Oliver J.; Wang, Hequn; Evans, Conor L.

    2016-06-01

    Three-dimensional in vitro tumor models are highly useful tools for studying tumor growth and treatment response of malignancies such as ovarian cancer. Existing viability and treatment assessment assays, however, face shortcomings when applied to these large, complex, and heterogeneous culture systems. Optical coherence tomography (OCT) is a noninvasive, label-free, optical imaging technique that can visualize live cells and tissues over time with subcellular resolution and millimeters of optical penetration depth. Here, we show that OCT is capable of carrying out high-content, longitudinal assays of 3D culture treatment response. We demonstrate the usage and capability of OCT for the dynamic monitoring of individual and combination therapeutic regimens in vitro, including both chemotherapy drugs and photodynamic therapy (PDT) for ovarian cancer. OCT was validated against the standard LIVE/DEAD Viability/Cytotoxicity Assay in small tumor spheroid cultures, showing excellent correlation with existing standards. Importantly, OCT was shown to be capable of evaluating 3D spheroid treatment response even when traditional viability assays failed. OCT 3D viability imaging revealed synergy between PDT and the standard-of-care chemotherapeutic carboplatin that evolved over time. We believe the efficacy and accuracy of OCT in vitro drug screening will greatly contribute to the field of cancer treatment and therapy evaluation.

  3. Longitudinal, label-free, quantitative tracking of cell death and viability in a 3D tumor model with OCT

    PubMed Central

    Jung, Yookyung; Klein, Oliver J.; Wang, Hequn; Evans, Conor L.

    2016-01-01

    Three-dimensional in vitro tumor models are highly useful tools for studying tumor growth and treatment response of malignancies such as ovarian cancer. Existing viability and treatment assessment assays, however, face shortcomings when applied to these large, complex, and heterogeneous culture systems. Optical coherence tomography (OCT) is a noninvasive, label-free, optical imaging technique that can visualize live cells and tissues over time with subcellular resolution and millimeters of optical penetration depth. Here, we show that OCT is capable of carrying out high-content, longitudinal assays of 3D culture treatment response. We demonstrate the usage and capability of OCT for the dynamic monitoring of individual and combination therapeutic regimens in vitro, including both chemotherapy drugs and photodynamic therapy (PDT) for ovarian cancer. OCT was validated against the standard LIVE/DEAD Viability/Cytotoxicity Assay in small tumor spheroid cultures, showing excellent correlation with existing standards. Importantly, OCT was shown to be capable of evaluating 3D spheroid treatment response even when traditional viability assays failed. OCT 3D viability imaging revealed synergy between PDT and the standard-of-care chemotherapeutic carboplatin that evolved over time. We believe the efficacy and accuracy of OCT in vitro drug screening will greatly contribute to the field of cancer treatment and therapy evaluation. PMID:27248849

  4. Quantification of Coupled Stiffness and Fiber Orientation Remodeling in Hypertensive Rat Right-Ventricular Myocardium Using 3D Ultrasound Speckle Tracking with Biaxial Testing

    PubMed Central

    Park, Dae Woo; Sebastiani, Andrea; Yap, Choon Hwai; Simon, Marc A.; Kim, Kang

    2016-01-01

    Mechanical and structural changes of right ventricular (RV) in response to pulmonary hypertension (PH) are inadequately understood. While current standard biaxial testing provides information on the mechanical behavior of RV tissues using surface markers, it is unable to fully assess structural and mechanical properties across the full tissue thickness. In this study, the mechanical and structural properties of normotensive and pulmonary hypertension right ventricular (PHRV) myocardium through its full thickness were examined using mechanical testing combined with 3D ultrasound speckle tracking (3D-UST). RV pressure overload was induced in Sprague–Dawley rats by pulmonary artery (PA) banding. The second Piola–Kirchhoff stress tensors and Green-Lagrangian strain tensors were computed in the RV myocardium using the biaxial testing combined with 3D-UST. A previously established non-linear curve-fitting algorithm was applied to fit experimental data to a Strain Energy Function (SEF) for computation of myofiber orientation. The fiber orientations obtained by the biaxial testing with 3D-UST compared well with the fiber orientations computed from the histology. In addition, the re-orientation of myofiber in the right ventricular free wall (RVFW) along longitudinal direction (apex-to-outflow-tract direction) was noticeable in response to PH. For normotensive RVFW samples, the average fiber orientation angles obtained by 3D-UST with biaxial test spiraled from 20° at the endo-cardium to -42° at the epi-cardium (Δ = 62°). For PHRV samples, the average fiber orientation angles obtained by 3D-UST with biaxial test had much less spiral across tissue thickness: 3° at endo-cardium to -7° at epi-cardium (Δ = 10°, P<0.005 compared to normotensive). PMID:27780271

  5. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    PubMed Central

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and the X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872

  6. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90ms (11Hz). This is suitable to track tumors moving due to respiration (~0.3Hz) or heartbeat (~1Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies, patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.

  7. Real-time self-calibration of a tracked augmented reality display

    NASA Astrophysics Data System (ADS)

    Baum, Zachary; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor

    2016-03-01

    PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system. METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, without the use of any temporary markers, was designed to achieve an accurate calibration of the system. Markers attached to the viewbox and patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real-time. The software was built using the open-source 3D Slicer application platform's SlicerIGT extension and the PLUS toolkit. RESULTS: The accuracy of the image overlay with image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in-plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out-of-plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections. CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.
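
    The reported accuracy is split into components in and out of the displayed image plane. A small sketch of that decomposition, projecting a position-error vector onto the image-plane normal, is given below with hypothetical vectors rather than data from the study.

        import numpy as np

        def decompose_error(target_pos, measured_pos, plane_normal):
            """Split an absolute position error into in-plane and out-of-plane parts.

            The out-of-plane component is the projection of the error vector onto the
            image-plane normal; the in-plane component is the remainder. Vectors below
            are hypothetical, not data from the overlay study.
            """
            err = np.asarray(measured_pos, float) - np.asarray(target_pos, float)
            n = np.asarray(plane_normal, float)
            n = n / np.linalg.norm(n)
            out_of_plane = float(np.dot(err, n))
            in_plane = float(np.linalg.norm(err - out_of_plane * n))
            return in_plane, abs(out_of_plane)

        print(decompose_error(target_pos=(10.0, 5.0, 0.0),
                              measured_pos=(10.8, 5.3, 0.5),
                              plane_normal=(0.0, 0.0, 1.0)))   # -> (~0.85 mm, 0.5 mm)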

  8. 3D visualisation of the stochastic patterns of the radial dose in nano-volumes by a Monte Carlo simulation of HZE ion track structure.

    PubMed

    Plante, Ianik; Ponomarev, Artem; Cucinotta, Francis A

    2011-02-01

    The description of energy deposition by high charge and energy (HZE) nuclei is of importance for space radiation risk assessment and due to their use in hadrontherapy. Such ions deposit a large fraction of their energy within the so-called core of the track and a smaller proportion in the penumbra (or track periphery). We study the stochastic patterns of the radial dependence of energy deposition using the Monte Carlo track structure codes RITRACKS and RETRACKS, which were used to simulate HZE tracks and calculate energy deposition in voxels of 40 nm. The simulation of a (56)Fe(26+) ion of 1 GeV u(-1) revealed zones of high energy deposition which may be found as far as a few millimetres away from the track core in some simulations. The calculation also showed that ∼43 % of the energy was deposited in the penumbra. These 3D stochastic simulations combined with a visualisation interface are a powerful tool for biophysicists which may be used to study radiation-induced biological effects such as double strand breaks and oxidative damage and the subsequent cellular and tissue damage processing and signalling.

  9. Tracking time interval changes of pulmonary nodules on follow-up 3D CT images via image-based risk score of lung cancer

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2013-03-01

    In this paper, we present a computer-aided follow-up (CAF) scheme to support physicians in tracking interval changes of pulmonary nodules on three-dimensional (3D) CT images and in deciding treatment strategies without under- or over-treatment. Our scheme involves analyzing CT histograms to evaluate the volumetric distribution of CT values within pulmonary nodules. A variational Bayesian mixture modeling framework translates the image-derived features into an image-based risk score for predicting the patient's recurrence-free survival. By applying our scheme to follow-up 3D CT images of pulmonary nodules, we demonstrate the potential usefulness of the CAF scheme, which can provide trajectories that characterize time interval changes of pulmonary nodules.

  10. VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Civet, François; Le Mouélic, Stéphane

    2015-04-01

    During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX and HiRISE instruments allowed the computation of Digital Elevation Models with a resolution from hundreds of meters up to 1 meter per pixel, and corresponding orthoimages with a resolution from a few hundred meters up to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real Martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The actual rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.

  11. Exploration of the potential of liquid scintillators for real-time 3D dosimetry of intensity modulated proton beams

    PubMed Central

    Beddar, Sam; Archambault, Louis; Sahoo, Narayan; Poenisch, Falk; Chen, George T.; Gillin, Michael T.; Mohan, Radhe

    2009-01-01

    In this study, the authors investigated the feasibility of using a 3D liquid scintillator (LS) detector system for the verification and characterization of proton beams in real time for intensity and energy-modulated proton therapy. A plastic tank filled with liquid scintillator was irradiated with pristine proton Bragg peaks. Scintillation light produced during the irradiation was measured with a CCD camera. Acquisition rates of 20 and 10 frames per second (fps) were used to image consecutive frame sequences. These measurements were then compared to ion chamber measurements and Monte Carlo simulations. The light distribution measured from the images acquired at rates of 20 and 10 fps have standard deviations of 1.1% and 0.7%, respectively, in the plateau region of the Bragg curve. Differences were seen between the raw LS signal and the ion chamber due to the quenching effects of the LS and due to the optical properties of the imaging system. The authors showed that this effect can be accounted for and corrected by Monte Carlo simulations. The liquid scintillator detector system has a good potential for performing fast proton beam verification and characterization. PMID:19544791

  12. Acoustic and hybrid 3D-printed electrochemical biosensors for the real-time immunodetection of liver cancer cells (HepG2).

    PubMed

    Damiati, Samar; Küpcü, Seta; Peacock, Martin; Eilenberger, Christoph; Zamzami, Mazin; Qadri, Ishtiaq; Choudhry, Hani; Sleytr, Uwe B; Schuster, Bernhard

    2017-03-21

    This study presents efficient acoustic and hybrid three-dimensional (3D)-printed electrochemical biosensors for the detection of liver cancer cells. The biosensors function by recognizing the highly expressed tumor marker CD133, which is located on the surface of liver cancer cells. Detection was achieved by recrystallizing a recombinant S-layer fusion protein (rSbpA/ZZ) on the surface of the sensors. The fused ZZ-domain enables immobilization of the anti-CD133 antibody in a defined manner. These highly accessible anti-CD133 antibodies were employed as a sensing layer, thereby enabling the efficient detection of liver cancer cells (HepG2). The recognition of HepG2 cells was investigated in situ using a quartz crystal microbalance with dissipation monitoring (QCM-D), which enabled the label-free, real-time detection of living cells on the modified sensor surface under controlled conditions. Furthermore, the hybrid 3D additive printing strategy for biosensors facilitates both rapid development and small-scale manufacturing. The hybrid strategy of combining 3D-printed parts and more traditionally fabricated parts enables the use of optimal materials: a ceramic substrate with noble metals for the sensing element and 3D-printed capillary channels to guide and constrain the clinical sample. Cyclic voltammetry (CV) measurements confirmed the efficiency of the fabricated sensors. Most importantly, these sensors offer low-cost and disposable detection platforms for real-world applications. Thus, as demonstrated in this study, both fabricated acoustic and electrochemical sensing platforms can detect cancer cells and therefore may have further potential in other clinical applications and drug-screening studies.

  13. Real-time tracking of objects for space applications using a laser range scanner

    NASA Technical Reports Server (NTRS)

    Blais, F.; Couvillon, R. A.; Rioux, M.; Maclean, S. G.

    1994-01-01

    Real-time tracking of multiple targets and three dimensional object features was demonstrated using a laser range scanner. The prototype was immune to ambient illumination and sun interference. Tracking error feedback was simultaneously obtained from individual targets, global predicted target position, and the human operator. A more complete study of calibration parameters and temperature variations on the scanner is needed to determine the exact performance of the sensor. Lissajous patterns used in three-dimensional real-time tracking prove helpful given their high resolution. The photogrammetry-based Advanced Space Vision System (ASVS) is discussed in combination with the laser range scanner.

  14. Real-time object tracking for moving target auto-focus in digital camera

    NASA Astrophysics Data System (ADS)

    Guan, Haike; Niinami, Norikatsu; Liu, Tong

    2015-02-01

    Accurately focusing on a moving object is difficult but important for successfully photographing the target with a digital camera. Because the object often moves randomly and changes its shape frequently, the position and distance of the target should be estimated in real time so as to focus on the object precisely. We propose a new method of real-time object tracking to perform auto-focus on a moving target in a digital camera. The video stream in the camera is used for moving-target tracking. A particle filter is used to deal with the target object's random movement and shape change. Color and edge features are used as measurements of the object's states. A parallel processing algorithm is developed to realize real-time particle filter object tracking easily in the hardware environment of the digital camera. A movement prediction algorithm is also proposed to remove the focus error caused by the difference between the tracking result and the target object's real position when the photo is taken. Simulation and experimental results in a digital camera demonstrate the effectiveness of the proposed method. We embedded the real-time object tracking algorithm in the digital camera. The position and distance of the moving target are obtained accurately by object tracking from the video stream. A SIMD processor is applied to enable parallel real-time processing. A processing time of less than 60 ms per frame is obtained in the digital camera, whose CPU runs at only 162 MHz.
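
    The abstract names a particle filter driven by color and edge features. The sketch below shows a generic predict-weight-resample cycle on a 1D position-and-velocity state, with a Gaussian likelihood standing in for the camera's color/edge measurement model; all noise levels and measurements are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def particle_filter_step(particles, weights, measurement, motion_std=2.0, meas_std=3.0):
            """One predict-weight-resample cycle of a bootstrap particle filter.

            particles: (N, 2) array of [position, velocity] hypotheses. The Gaussian
            likelihood below stands in for the color/edge similarity score that the
            camera would compute; noise levels are illustrative assumptions.
            """
            # Predict: constant-velocity motion plus process noise.
            particles[:, 0] += particles[:, 1] + rng.normal(0, motion_std, len(particles))
            particles[:, 1] += rng.normal(0, 0.5, len(particles))
            # Weight: likelihood of the observed target position under each hypothesis.
            weights *= np.exp(-0.5 * ((particles[:, 0] - measurement) / meas_std) ** 2)
            weights += 1e-300
            weights /= weights.sum()
            # Resample: multinomial resampling to focus particles on likely states.
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))

        particles = np.column_stack([rng.normal(0, 10, 500), rng.normal(0, 1, 500)])
        weights = np.full(500, 1.0 / 500)
        for frame, z in enumerate([3.0, 6.5, 9.8, 13.1, 16.0]):   # noisy target positions
            particles, weights = particle_filter_step(particles, weights, z)
            print(frame, round(np.average(particles[:, 0], weights=weights), 2))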

  15. Tracking Accuracy of a Real-Time Fiducial Tracking System for Patient Positioning and Monitoring in Radiation Therapy

    SciTech Connect

    Shchory, Tal; Schifter, Dan; Lichtman, Rinat; Neustadter, David; Corn, Benjamin W.

    2010-11-15

    Purpose: In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. Methods and Materials: The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. Results: The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. Conclusions: This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy.

  16. Applications of 3D hydrodynamic and particle tracking models in the San Francisco bay-delta estuary

    USGS Publications Warehouse

    Smith, P.E.; Donovan, J.M.; Wong, H.F.N.

    2005-01-01

    Three applications of three-dimensional hydrodynamic and particle-tracking models are currently underway by the United States Geological Survey in the San Francisco Bay-Delta Estuary. The first application is to the San Francisco Bay and a portion of the coastal ocean. The second application is to an important, gated control channel called the Delta Cross Channel, located within the northern portion of the Sacramento-San Joaquin River Delta. The third application is to a reach of the San Joaquin River near Stockton, California where a significant dissolved oxygen problem exists due, in part, to conditions associated with the deep-water ship channel for the Port of Stockton, California. This paper briefly discusses the hydrodynamic and particle tracking models being used and the three applications. Copyright ASCE 2005.

  17. Analysis of the real-time 3D display system based on the reconstruction of parallax rays

    NASA Astrophysics Data System (ADS)

    Yamada, Kenji; Takahashi, Hideya; Shimizu, Eiji

    2002-11-01

    Several types of auto-stereoscopic display systems have been developed. We have also developed a real-time color auto-stereoscopic display system that uses a reconstruction method of parallax rays. Our system consists of an optical element (such as a lens array, a pinhole array, or HOEs), a spatial light modulator (SLM), and an image-processing unit. With our system, pseudoscopic images cannot appear; the algorithm that solves this problem runs in the image-processing unit. The resolution limitation of integral photography (IP) has been studied by Hoshino, Burckhardt, and Okoshi, who designed the optimum width of the lens or aperture. However, those theories cannot be applied directly to our system. Therefore, we consider not only the spatial frequency measured at the viewpoint but also the performance of our system. In this paper we present an analysis of the resolution of our system. First, we consider the spatial frequency along the depth and horizontal directions according to geometrical optics and wave optics. Next, we study the performance of our system; in particular, we estimate the crosstalk caused by the point sources from pixels on the SLM, again from the standpoints of geometrical optics and wave optics.

  18. A Feature-adaptive Subdivision Method for Real-time 3D Reconstruction of Repeated Topology Surfaces

    NASA Astrophysics Data System (ADS)

    Lin, Jinhua; Wang, Yanjie; Sun, Honghai

    2017-03-01

    Rendering large numbers of triangles with GPU hardware tessellation has made great progress. However, owing to the fixed nature of the GPU pipeline, many off-line methods that perform well cannot meet on-line requirements. In this paper, an optimized feature-adaptive subdivision method is proposed that is better suited to reconstructing surfaces with repeated cusps or creases. An octree primitive is established in irregular regions that share the same sharp vertices or creases, so that neighboring geometry information can be found quickly. Because the octree primitive and the feature region have the same topological structure, the octree feature points can be matched to arbitrary vertices in the feature region more precisely. Meanwhile, the patches are re-encoded in the octree primitive using a breadth-first strategy, yielding a meta-table that allows real-time reconstruction by the GPU hardware tessellation unit. Only one feature region needs to be computed per octree primitive; other regions with the same repeated feature generate their meta-tables directly, which greatly reduces the reconstruction time for this step. For meshes with a large number of repeated topological features, our algorithm improves the subdivision time by 17.575% and increases the average frame drawing time by 0.2373 ms compared with traditional FAS (feature-adaptive subdivision), while the model can still be reconstructed in a watertight manner.
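    To make the meta-table idea concrete, here is a small, hedged sketch (not the paper's GPU tessellation code) of flattening an octree-structured feature region into a table by a breadth-first traversal; identical repeated regions could then reuse the same table. Class and field names are illustrative assumptions.

    from collections import deque

    class OctreeNode:
        """One node of the octree primitive; 'patch_id' indexes a surface patch."""
        def __init__(self, patch_id, children=None):
            self.patch_id = patch_id
            self.children = children or []   # up to 8 sub-nodes

    def build_meta_table(root):
        """Breadth-first flattening: one row (node_index, parent_index, patch_id) per node."""
        table, queue = [], deque([(root, -1)])
        while queue:
            node, parent = queue.popleft()
            idx = len(table)
            table.append((idx, parent, node.patch_id))
            for child in node.children:
                queue.append((child, idx))
        return table

    # Regions that repeat the same topological feature can reuse one such table
    # instead of re-running the traversal for every occurrence.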

  19. Real-Time Display Of 3-D Computed Holograms By Scanning The Image Of An Acousto-Optic Modulator

    NASA Astrophysics Data System (ADS)

    Kollin, Joel S.; Benton, Stephen A.; Jepsen, Mary Lou

    1989-10-01

    The invention of holography sparked hopes for a three-dimensional electronic imaging system analogous to television. Unfortunately, the extraordinary spatial detail of ordinary holographic recordings requires unattainable bandwidth and display resolution for three-dimensional moving imagery, effectively preventing their commercial development. However, the essential bandwidth of holographic images can be reduced enough to permit their transmission through fiber-optic or coaxial cable, and the required resolution, or space-bandwidth product, of the display can be obtained by raster scanning the image of a commercially available acousto-optic modulator. No film recording or other photographic intermediate step is necessary, as the projected modulator image is viewed directly. The design and construction of a working demonstration of the principles involved is also presented, along with a discussion of engineering considerations in the system design. Finally, the theoretical and practical limitations of the system are addressed in the context of extending it to real-time transmission of moving holograms synthesized from views of real and computer-generated three-dimensional scenes.

  20. Real-time x-ray fluoroscopy-based catheter detection and tracking for cardiac electrophysiology interventions

    SciTech Connect

    Ma Yingliang; Housden, R. James; Razavi, Reza; Rhode, Kawal S.; Gogin, Nicolas; Cathier, Pascal; Gijsbers, Geert; Cooklin, Michael; O'Neill, Mark; Gill, Jaswinder; Rinaldi, C. Aldo

    2013-07-15

    Purpose: X-ray fluoroscopically guided cardiac electrophysiology (EP) procedures are commonly carried out to treat patients with arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of a three-dimensional (3D) roadmap derived from preprocedural volumetric images can be used to add anatomical information. It is useful to know the position of the catheter electrodes relative to the cardiac anatomy, for example, to record ablation therapy locations during atrial fibrillation therapy. Also, the electrode positions of the coronary sinus (CS) catheter or lasso catheter can be used for road map motion correction. Methods: In this paper, the authors present a novel unified computational framework for image-based catheter detection and tracking without any user interaction. The proposed framework includes fast blob detection, shape-constrained searching and model-based detection. In addition, catheter tracking methods were designed based on the customized catheter models input from the detection method. Three real-time detection and tracking methods are derived from the computational framework to detect or track the three most common types of catheters in EP procedures: the ablation catheter, the CS catheter, and the lasso catheter. Since the proposed methods use the same blob detection method to extract key information from x-ray images, the ablation, CS, and lasso catheters can be detected and tracked simultaneously in real-time. Results: The catheter detection methods were tested on 105 different clinical fluoroscopy sequences taken from 31 clinical procedures. Two-dimensional (2D) detection errors of 0.50 ± 0.29, 0.92 ± 0.61, and 0.63 ± 0.45 mm as well as success rates of 99.4%, 97.2%, and 88.9% were achieved for the CS catheter, ablation catheter, and lasso catheter, respectively. With the tracking method, accuracies were increased to 0.45 ± 0.28, 0.64 ± 0.37, and 0.53 ± 0.38 mm and success rates increased to 100%, 99

  1. 3D tracking of single nanoparticles and quantum dots in living cells by out-of-focus imaging with diffraction pattern recognition.

    PubMed

    Gardini, Lucia; Capitanio, Marco; Pavone, Francesco S

    2015-11-03

    Live cells are three-dimensional environments where biological molecules move to find their targets and accomplish their functions. However, up to now, most single-molecule investigations have been limited to two-dimensional studies owing to the complexity of 3D-tracking techniques. Here, we present a novel method for three-dimensional localization of single nano-emitters based on automatic recognition of out-of-focus diffraction patterns. Our technique can be applied to track the movements of single molecules in living cells using a conventional epifluorescence microscope. We first demonstrate three-dimensional localization of fluorescent nanobeads over a 4 micron depth with accuracy below 2 nm in vitro. Remarkably, we also establish three-dimensional tracking of Quantum Dots, overcoming their anisotropic emission, by adopting a ligation strategy that allows rotational freedom of the emitter combined with proper pattern recognition. We localize commercially available Quantum Dots in living cells with accuracy better than 7 nm over a 2 micron depth. We validate our technique by tracking the three-dimensional movements of single protein-conjugated Quantum Dots in living cells. Moreover, we find that substantial localization errors can occur in off-focus imaging when it is improperly calibrated, and we give indications for avoiding them. Finally, we share a Matlab script that allows ready application of our technique by other laboratories.
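    The core idea of matching an out-of-focus diffraction pattern against a pre-recorded calibration stack can be sketched as follows (in Python rather than the authors' Matlab script); lateral fitting, drift correction and the Quantum-Dot ligation strategy are omitted, and the interpolation scheme is an assumption.

    import numpy as np

    def _normalized(img):
        img = img - img.mean()
        return img / (np.linalg.norm(img) + 1e-12)

    def localize_z(pattern, calib_stack, calib_z):
        """
        pattern     : 2D array, observed out-of-focus pattern of one emitter
        calib_stack : 3D array (n_z, h, w), patterns recorded at known depths
        calib_z     : 1D array (n_z,), uniformly spaced calibration depths (nm)
        returns     : estimated z (nm) from the correlation peak, refined by a
                      three-point parabolic interpolation
        """
        p = _normalized(pattern)
        scores = np.array([np.sum(p * _normalized(s)) for s in calib_stack])
        k = int(np.argmax(scores))
        if 0 < k < len(scores) - 1:
            denom = scores[k - 1] - 2.0 * scores[k] + scores[k + 1]
            shift = 0.5 * (scores[k - 1] - scores[k + 1]) / (denom + 1e-12)
            return calib_z[k] + shift * (calib_z[1] - calib_z[0])
        return calib_z[k]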

  2. Real-time edge tracking using a tactile sensor

    NASA Technical Reports Server (NTRS)

    Berger, Alan D.; Volpe, Richard; Khosla, Pradeep K.

    1989-01-01

    Object recognition through the use of input from multiple sensors is an important aspect of an autonomous manipulation system. In tactile object recognition, it is necessary to determine the location and orientation of object edges and surfaces. A controller is proposed that utilizes a tactile sensor in the feedback loop of a manipulator to track along edges. In the control system, the data from the tactile sensor is first processed to find edges. The parameters of these edges are then used to generate a control signal to a hybrid controller. Theory is presented for tactile edge detection and an edge tracking controller. In addition, experimental verification of the edge tracking controller is presented.

  3. 3-D diffusion tensor axonal tracking shows distinct SMA and pre-SMA projections to the human striatum.

    PubMed

    Lehéricy, Stéphane; Ducros, Mathieu; Krainik, Alexandre; Francois, Chantal; Van de Moortele, Pierre-François; Ugurbil, Kamil; Kim, Dae-Shik

    2004-12-01

    Studies in non-human primates have shown that medial premotor projections to the striatum are characterized as a set of distinct circuits conveying different types of information. This study assesses the anatomical projections from the supplementary motor area (SMA), pre-SMA and motor cortex (MC) to the human striatum using diffusion tensor imaging (DTI) axonal tracking. Eight right-handed volunteers were studied at 1.5 T using DTI axonal tracking. A connectivity matrix was computed, which tested for connections between cortical areas (MC, SMA and pre-SMA) and subcortical areas (posterior, middle and anterior putamen and the head of the caudate nucleus) in each hemisphere. Pre-SMA projections to the striatum were located rostral to SMA projections to the striatum. The SMA and the MC were similarly connected to the posterior and middle putamen and not to the anterior striatum. These data show that the MC and SMA have connections with similar parts of the sensorimotor compartment of the human striatum, whereas the pre-SMA sends connections to more rostral parts of the striatum, including the associative compartment.

  4. Nanoelectronic three-dimensional (3D) nanotip sensing array for real-time, sensitive, label-free sequence specific detection of nucleic acids.

    PubMed

    Esfandyarpour, Rahim; Yang, Lu; Koochak, Zahra; Harris, James S; Davis, Ronald W

    2016-02-01

    The improvements in our ability to sequence and genotype DNA have opened up numerous avenues in the understanding of human biology and medicine, with various applications, especially in medical diagnostics. However, the realization of label-free, real-time, high-throughput and low-cost biosensing platforms that detect molecular interactions with a high level of sensitivity has so far been hindered by two factors: first, slow binding kinetics caused by the lack of probe molecules on the sensors, and second, limited mass transport due to the planar (two-dimensional) structure of current biosensors. Here we present a novel three-dimensional (3D), highly sensitive, real-time, inexpensive and label-free nanotip array as a rapid and direct platform for sequence-specific DNA screening. Our nanotip sensors are designed to have a nano-sized thin film as their sensing area (~20 nm), sandwiched between two sensing electrodes. The tip is then conjugated to a DNA oligonucleotide complementary to the sequence of interest, which is electrochemically detected in real time via impedance changes upon the formation of a double-stranded helix at the sensor interface. This 3D configuration is specifically designed to improve the biomolecular hit rate and the detection speed. We demonstrate that our nanotip array effectively detects oligonucleotides in a sequence-specific and highly sensitive manner, yielding concentration-dependent impedance-change measurements with a target concentration as low as 10 pM and discrimination against even a single mismatch. Notably, our nanotip sensors achieve this accurate, sensitive detection without relying on signal indicators or enhancing molecules like fluorophores. The array can also easily be scaled for highly multiplexed detection with up to 5000 sensors per square centimeter and integrated into microfluidic devices. The versatile, rapid, and sensitive performance of the nanotip array makes it an excellent candidate for point-of-care diagnostics, and high

  5. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by using deformation analysis exploiting directly the original 3D point clouds assuming some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the

  6. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    NASA Astrophysics Data System (ADS)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, so a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining an active shape model (ASM) with optimal graph search, the latter being used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various kinds of information (gradient, intensity distributions, and regional-property terms) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  7. NASA's "Eyes On The Solar System:" A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    NASA Astrophysics Data System (ADS)

    Hussey, K.

    2014-12-01

    NASA's Jet Propulsion Laboratory is using video game technology to immerse students, the general public and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that can run on-line or as a stand-alone "video game," is of particular interest to educators looking for inviting tools that capture students' interest in a format they like and understand (eyes.nasa.gov). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft, planetary bodies and NASA/ESA missions in action. Key scientific results illustrated with video presentations, supporting imagery and web links are embedded contextually into the solar system. Educators who want an interactive, game-based approach to engage students in learning Planetary Science will see how "Eyes" can be effectively used to teach its principles to grades 3 through 14. The presentation will include a detailed demonstration of the software along with a description and demonstration of how this technology is being adapted for education. There will also be a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D" and "Eyes on Exoplanets," which can be viewed at eyes.nasa.gov/earth and eyes.nasa.gov/exoplanets.

  8. Real-Time High Resolution 3D Imaging of the Lyme Disease Spirochete Adhering to and Escaping from the Vasculature of a Living Host

    PubMed Central

    Colarusso, Pina; Bankhead, Troy; Kubes, Paul; Chaconas, George

    2008-01-01

    Pathogenic spirochetes are bacteria that cause a number of emerging and re-emerging diseases worldwide, including syphilis, leptospirosis, relapsing fever, and Lyme borreliosis. They navigate efficiently through dense extracellular matrix and cross the blood–brain barrier by unknown mechanisms. Due to their slender morphology, spirochetes are difficult to visualize by standard light microscopy, impeding studies of their behavior in situ. We engineered a fluorescent infectious strain of Borrelia burgdorferi, the Lyme disease pathogen, which expressed green fluorescent protein (GFP). Real-time 3D and 4D quantitative analysis of fluorescent spirochete dissemination from the microvasculature of living mice at high resolution revealed that dissemination was a multi-stage process that included transient tethering-type associations, short-term dragging interactions, and stationary adhesion. Stationary adhesions and extravasating spirochetes were most commonly observed at endothelial junctions, and translational motility of spirochetes appeared to play an integral role in transendothelial migration. To our knowledge, this is the first report of high resolution 3D and 4D visualization of dissemination of a bacterial pathogen in a living mammalian host, and provides the first direct insight into spirochete dissemination in vivo. PMID:18566656

  9. Real-time high resolution 3D imaging of the lyme disease spirochete adhering to and escaping from the vasculature of a living host.

    PubMed

    Moriarty, Tara J; Norman, M Ursula; Colarusso, Pina; Bankhead, Troy; Kubes, Paul; Chaconas, George

    2008-06-20

    Pathogenic spirochetes are bacteria that cause a number of emerging and re-emerging diseases worldwide, including syphilis, leptospirosis, relapsing fever, and Lyme borreliosis. They navigate efficiently through dense extracellular matrix and cross the blood-brain barrier by unknown mechanisms. Due to their slender morphology, spirochetes are difficult to visualize by standard light microscopy, impeding studies of their behavior in situ. We engineered a fluorescent infectious strain of Borrelia burgdorferi, the Lyme disease pathogen, which expressed green fluorescent protein (GFP). Real-time 3D and 4D quantitative analysis of fluorescent spirochete dissemination from the microvasculature of living mice at high resolution revealed that dissemination was a multi-stage process that included transient tethering-type associations, short-term dragging interactions, and stationary adhesion. Stationary adhesions and extravasating spirochetes were most commonly observed at endothelial junctions, and translational motility of spirochetes appeared to play an integral role in transendothelial migration. To our knowledge, this is the first report of high resolution 3D and 4D visualization of dissemination of a bacterial pathogen in a living mammalian host, and provides the first direct insight into spirochete dissemination in vivo.

  10. Real-time photothermoplastic 8-inch camera with an emphasis on the visualization of 3D digital data by holographic means

    NASA Astrophysics Data System (ADS)

    Cherkasov, Yuri A.; Alexandrova, Elena L.; Rumjantsev, Alexander G.; Smirnov, Mikhail V.

    1995-04-01

    The development and investigation of a large-format (8-inch) real-time photothermoplastic (PTP) camera are reported. The PTP camera is used for the operative recording of 3D images by means of compound and digital holography and for the visualization of these holograms as static 3D images. The recording regimes are optimized using the model of relief-phase PTP image thermodevelopment proposed by the authors. According to that model, the maximal deformation (diffraction efficiency) is achieved by increasing the charge contrast of the electrostatic latent image formed before the viscosity decreases during the thermodevelopment process; this is accomplished by controlling the thermodevelopment regime. The possibilities of increasing the camera size (to 14 inches), raising the photosensitivity and widening its spectral range, creating Benton holograms, and increasing the response speed to 25 Hz are also discussed.

  11. 3D-localization microscopy and tracking of FoF1-ATP synthases in living bacteria

    NASA Astrophysics Data System (ADS)

    Renz, Anja; Renz, Marc; Klütsch, Diana; Deckers-Hebestreit, Gabriele; Börsch, Michael

    2015-03-01

    FoF1-ATP synthases are membrane-embedded protein machines that catalyze the synthesis of adenosine triphosphate. Using photoactivation-based localization microscopy (PALM) in TIR-illumination as well as structured illumination microscopy (SIM), we explore the spatial distribution and track single FoF1-ATP synthases in living E. coli cells under physiological conditions at different temperatures. For quantitative diffusion analysis by mean-squared-displacement measurements, the limited size of the observation area in the membrane with its significant membrane curvature has to be considered. Therefore, we applied a 'sliding observation window' approach (M. Renz et al., Proc. SPIE 8225, 2012) and obtained the one-dimensional diffusion coefficient of FoF1-ATP synthase diffusing on the long axis in living E. coli cells.

  12. FFT integration of instantaneous 3D pressure gradient fields measured by Lagrangian particle tracking in turbulent flows

    NASA Astrophysics Data System (ADS)

    Huhn, F.; Schanz, D.; Gesemann, S.; Schröder, A.

    2016-09-01

    Pressure gradient fields in unsteady flows can be estimated through flow measurements of the material acceleration in the fluid and the assumption of the governing momentum equation. In order to derive pressure from its gradient, almost exclusively two numerical methods have been used to spatially integrate the pressure gradient until now: first, direct path integration in the spatial domain, and second, the solution of the Poisson equation for pressure. Instead, we propose an alternative third method that integrates the pressure gradient field in Fourier space. Using a FFT function, the method is fast and easy to implement in programming languages for scientific computing. We demonstrate the accuracy of the integration scheme on a synthetic pressure field and apply it to an experimental example based on time-resolved material acceleration data from high-resolution Lagrangian particle tracking with the Shake-The-Box method.
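    The Fourier-space integration step can be written very compactly. The following is a two-dimensional NumPy sketch of the idea (the paper works with three-dimensional, Lagrangian-particle-tracking data, so this is an illustration of the principle rather than the authors' implementation):

    import numpy as np

    def integrate_gradient_fft(dpdx, dpdy, dx, dy):
        """Recover p (up to an additive constant) from its measured gradient."""
        ny, nx = dpdx.shape
        kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)      # i * wavenumber in x
        ky = 2j * np.pi * np.fft.fftfreq(ny, d=dy)      # i * wavenumber in y
        KX, KY = np.meshgrid(kx, ky)
        gx_hat, gy_hat = np.fft.fft2(dpdx), np.fft.fft2(dpdy)
        k2 = KX ** 2 + KY ** 2                          # equals -(kx^2 + ky^2)
        k2[0, 0] = 1.0                                  # avoid division by zero
        p_hat = (KX * gx_hat + KY * gy_hat) / k2        # least-squares solution
        p_hat[0, 0] = 0.0                               # fix the free constant (zero mean)
        return np.real(np.fft.ifft2(p_hat))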

  13. A comprehensive method for magnetic sensor calibration: a precise system for 3-D tracking of the tongue movements.

    PubMed

    Farajidavar, Aydin; Block, Jacob M; Ghovanloo, Maysam

    2012-01-01

    Magnetic localization has been used in a variety of applications, including the medical field. Small magnetic tracers are often modeled as dipoles and localization has been achieved by solving well-defined dipole equations. However, in practice, the precise calculation of the tracer location not only depends on solving the highly nonlinear dipole equations through numerical algorithms but also on the precision of the magnetic sensor, accuracy of the tracer magnetization, and the earth magnetic field (EMF) measurements. We have developed and implemented a comprehensive calibration method that addresses all of the aforementioned factors. We evaluated this method in a bench-top setting by moving the tracer along controlled trajectories. We also conducted several experiments to track the tongue movement in a human subject.
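    As a rough illustration of the localization step that the calibration supports, the sketch below fits the standard point-dipole field model to multi-sensor measurements with non-linear least squares. It assumes the sensor gains, tracer magnetization and Earth-field compensation have already been handled, which is exactly the calibration problem the paper addresses; all names and units are assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    MU0_4PI = 1e-7  # mu_0 / (4*pi) in T*m/A

    def dipole_field(p, m, sensors):
        """Field of a point dipole at position p (m) with moment m (A*m^2) at each sensor."""
        r = sensors - p                                   # (n_sensors, 3)
        rn = np.linalg.norm(r, axis=1, keepdims=True)
        rhat = r / rn
        return MU0_4PI * (3.0 * (rhat @ m)[:, None] * rhat - m) / rn ** 3

    def localize_tracer(b_meas, sensors, m, p0):
        """Solve the dipole equations for the tracer position given measured fields b_meas (n, 3)."""
        residual = lambda p: (dipole_field(p, m, sensors) - b_meas).ravel()
        return least_squares(residual, p0).x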

  14. Incorporating system latency associated with real-time target tracking radiotherapy in the dose prediction step

    NASA Astrophysics Data System (ADS)

    Roland, Teboh; Mavroidis, Panayiotis; Shi, Chengyu; Papanikolaou, Nikos

    2010-05-01

    System latency introduces geometric errors in the course of real-time target tracking radiotherapy. This effect can be minimized, for example by the use of predictive filters, but cannot be completely avoided. In this work, we present a convolution technique that can incorporate the effect as part of the treatment planning process. The method can be applied independently or in conjunction with the predictive filters to compensate for residual latency effects. The implementation was performed on TrackBeam (Initia Ltd, Israel), a prototype real-time target tracking system assembled and evaluated at our Cancer Institute. For the experimental system settings examined, a Gaussian distribution attributable to the TrackBeam latency was derived with σ = 3.7 mm. The TrackBeam latency, expressed as an average response time, was deduced to be 172 ms. Phantom investigations were further performed to verify the convolution technique. In addition, patient studies involving 4DCT volumes of previously treated lung cancer patients were performed to incorporate the latency effect in the dose prediction step. This also enabled us to effectively quantify the dosimetric and radiobiological impact of the TrackBeam and other higher latency effects on the clinical outcome of a real-time target tracking delivery.
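    In one dimension, folding the derived latency blur into a planned dose profile amounts to a convolution with a Gaussian kernel; a hedged NumPy/SciPy sketch follows, using the σ = 3.7 mm reported above but an entirely illustrative dose grid:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    dx = 0.5                                             # assumed dose-grid spacing (mm)
    x = np.arange(-50.0, 50.0, dx)                       # 1D dose axis (mm)
    planned_dose = np.where(np.abs(x) < 20.0, 1.0, 0.0)  # idealized flat field

    sigma_mm = 3.7                                       # latency blur derived in the study
    delivered_dose = gaussian_filter1d(planned_dose, sigma=sigma_mm / dx)
    # 'delivered_dose' approximates the dose smeared by the residual tracking latency.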

  15. Combining 3D tracking and surgical instrumentation to determine the stiffness of spinal motion segments: a validation study.

    PubMed

    Reutlinger, C; Gédet, P; Büchler, P; Kowal, J; Rudolph, T; Burger, J; Scheffler, K; Hasler, C

    2011-04-01

    The spine is a complex structure that provides motion in three directions: flexion and extension, lateral bending, and axial rotation. So far, the investigation of the mechanical and kinematic behavior of the basic unit of the spine, a motion segment, has predominantly been the domain of in vitro experiments on spinal loading simulators. Most existing approaches to measuring spinal stiffness intraoperatively in an in vivo environment use a distractor; however, these concepts usually assume planar loading and motion. The objective of our study was to develop and validate an apparatus that allows intraoperative in vivo measurement of both the applied force and the resulting motion in three-dimensional space. The proposed setup combines force measurement with an instrumented distractor and motion tracking with an optoelectronic system. Because the orientation of the applied force and the three-dimensional motion are known, not only force-displacement but also moment-angle relations can be determined. The validation was performed using three cadaveric lumbar ovine spines. The lateral bending stiffness of two motion segments per specimen was determined with the proposed concept and compared with the stiffness acquired on a spinal loading simulator, which was considered the gold standard. The mean stiffness values computed with the proposed concept were within ±15% of the data obtained with the spinal loading simulator under applied loads of less than 5 Nm.
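    Conceptually, combining the instrumented distractor with optoelectronic tracking reduces to computing moments from the tracked lever arm and fitting a moment-angle slope. The snippet below is a simplified, assumed version of that calculation, not the authors' processing pipeline:

    import numpy as np

    def bending_stiffness(angles_deg, forces_N, lever_arms_m):
        """
        angles_deg   : (n,) tracked lateral-bending angle of the motion segment
        forces_N     : (n, 3) force vectors from the instrumented distractor
        lever_arms_m : (n, 3) tracked vector from the segment's rotation centre
                       to the point of force application
        returns      : stiffness (Nm/deg) as the slope of a linear moment-angle fit
        """
        moments = np.cross(lever_arms_m, forces_N)        # (n, 3) moments in Nm
        bending_moment = np.linalg.norm(moments, axis=1)  # moment magnitude
        slope, _ = np.polyfit(angles_deg, bending_moment, 1)
        return slope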

  16. A 3D front-tracking approach for simulation of a two-phase fluid with insoluble surfactant

    NASA Astrophysics Data System (ADS)

    de Jesus, Wellington C.; Roma, Alexandre M.; Pivello, Márcio R.; Villar, Millena M.; da Silveira-Neto, Aristeu

    2015-01-01

    Surface active agents play a significant role in interfacial dynamics of multiphase systems. While the understanding of their behavior is crucial to many important practical applications, realistic mathematical modeling and computer simulation represent an extraordinary task. By employing a front-tracking method with Eulerian adaptive mesh refinement capabilities in concert with a finite volume scheme for solving an advection-diffusion equation constrained to a moving and deforming interface, the numerical challenges posed by the full three-dimensional computer simulation of transient, incompressible two-phase flows with an insoluble surfactant are efficiently and accurately tackled in the present work. The individual numerical components forming the resulting methodology are here combined and applied for the first time. Verification tests to check the accuracy and the simulation of the deformation of a droplet in simple shear flow in the presence of an insoluble surfactant are performed, the results being compared to laboratory experiments as well as to other numerical data. In all the cases considered, the methodology presents excellent conservation properties for the total surfactant mass (even to machine precision under certain circumstances).

  17. Tracking Down the Causes of Recent Induced and Natural Intraplate Earthquakes with 3D Seismological Analyses in Northwest Germany

    NASA Astrophysics Data System (ADS)

    Uta, P.; Brandes, C.; Boennemann, C.; Plenefisch, T.; Winsemann, J.

    2015-12-01

    Northwest Germany is a typical low strain intraplate region with a low seismic activity. Nevertheless, 58 well documented earthquakes with magnitudes of 0.5 - 4.3 affected the area in the last 40 years. Most of the epicenters were located in the vicinity of active natural gas fields and some inside. Accordingly, the earthquakes were interpreted as a consequence of hydrocarbon recovery (e.g. Dahm et al. 2007, Bischoff et al. 2013) and classified as induced events in the bulletins of the Federal Institute for Geosciences and Natural Resources (BGR). The two major ones have magnitudes of 4.3 and 4.0. They are the strongest earthquakes ever recorded in Northern Germany. Consequently, these events raise the question whether the ongoing extraction itself can cause them or if other natural tectonic processes like glacial isostatic adjustment may considerably contribute to their initiation. Recent studies of Brandes et al. (2012) imply that lithospheric stress changes due to post glacial isostatic adjustment might be also a potential natural cause for earthquakes in Central Europe. In order to better analyse the earthquakes and to test this latter hypothesis we performed a relocalization of the events with the NonLinLoc (Lomax et al. 2000) program package and two differently scaled 3D P-wave velocity models. Depending on the station coverage for a distinct event, either a fine gridded local model (88 x 73 x 15 km, WEG-model, made available by the industry) or a coarse regional model (1600 x 1600 x 45 km, data from CRUST1.0, Laske et al. 2013) and for some cases a combination of both models was used for the relocalization. The results confirm the trend of the older routine analysis: The majority of the events are located at the margins of the natural gas fields, some of them are now located closer to them. Focal depths mostly vary between 3.5 km and 10 km. However, for some of the events, especially for the older events with relatively bad station coverage, the error bars

  18. Testbeam results of the first real-time embedded tracking system with artificial retina

    NASA Astrophysics Data System (ADS)

    Neri, N.; Abba, A.; Caponio, F.; Citterio, M.; Coelli, S.; Fu, J.; Merli, A.; Monti, M.; Petruzzo, M.

    2017-02-01

    We present the testbeam results of the first real-time embedded tracking system based on artificial retina algorithm. The tracking system prototype is capable of fast track reconstruction with a latency of the response below 1 μs and track parameter resolutions that are comparable with the offline results. The artificial retina algorithm was implemented in hardware in a custom data acquisition board based on commercial FPGA. The system was tested successfully using a 180 GeV/c proton beam at the CERN SPS with a maximum track rate of about 280 kHz. Online track parameters were found in good agreement with offline results and with the simulated response.

  19. Real-time particle tracking for studying intracellular trafficking of pharmaceutical nanocarriers.

    PubMed

    Huang, Feiran; Watson, Erin; Dempsey, Christopher; Suh, Junghae

    2013-01-01

    Real-time particle tracking is a technique that combines fluorescence microscopy with object tracking and computing and can be used to extract quantitative transport parameters for small particles inside cells. Since the success of a nanocarrier can often be determined by how effectively it delivers cargo to the target organelle, understanding the complex intracellular transport of pharmaceutical nanocarriers is critical. Real-time particle tracking provides insight into the dynamics of the intracellular behavior of nanoparticles, which may lead to significant improvements in the design and development of novel delivery systems. Unfortunately, this technique is not often fully understood, limiting its implementation by researchers in the field of nanomedicine. In this chapter, one of the most complicated aspects of particle tracking, the mean square displacement (MSD) calculation, is explained in a simple manner designed for the novice particle tracker. Pseudo code for performing the MSD calculation in MATLAB is also provided. This chapter contains clear and comprehensive instructions for a series of basic procedures in the technique of particle tracking. Instructions for performing confocal microscopy of nanoparticle samples are provided, and two methods of determining particle trajectories that do not require commercial particle-tracking software are provided. Trajectory analysis and determination of the tracking resolution are also explained. By providing comprehensive instructions needed to perform particle-tracking experiments, this chapter will enable researchers to gain new insight into the intracellular dynamics of nanocarriers, potentially leading to the development of more effective and intelligent therapeutic delivery vectors.
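    The mean square displacement calculation that the chapter walks through can be condensed into a few lines; the version below is in Python/NumPy rather than the MATLAB pseudo code referred to in the text, with the diffusion-coefficient fit left as a comment:

    import numpy as np

    def mean_square_displacement(trajectory, dt):
        """
        trajectory : (n_frames, d) particle positions (d = 2 or 3)
        dt         : time between frames (s)
        returns    : lag times (s) and the MSD at each lag, averaged over all starts
        """
        n = len(trajectory)
        lags = np.arange(1, n)
        msd = np.empty(len(lags))
        for i, lag in enumerate(lags):
            disp = trajectory[lag:] - trajectory[:-lag]   # every displacement at this lag
            msd[i] = np.mean(np.sum(disp ** 2, axis=1))
        return lags * dt, msd

    # For free diffusion MSD(t) ~ 2*d*D*t, so D follows from a linear fit to the
    # first few lag times; anomalous or directed transport shows up as curvature.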

  20. Probing the benefits of real-time tracking during cancer care

    PubMed Central

    Patel, Rupa A.; Klasnja, Predrag; Hartzler, Andrea; Unruh, Kenton T.; Pratt, Wanda

    2012-01-01

    People with cancer experience many unanticipated symptoms and struggle to communicate them to clinicians. Although researchers have developed patient-reported outcome (PRO) tools to address this problem, such tools capture retrospective data intended for clinicians to review. In contrast, real-time tracking tools with visible results for patients could improve health outcomes and communication with clinicians, while also enhancing patients’ symptom management. To understand potential benefits of such tools, we studied the tracking behaviors of 25 women with breast cancer. We provided 10 of these participants with a real-time tracking tool that served as a “technology probe” to uncover behaviors and benefits from voluntary use. Our findings showed that while patients’ tracking behaviors without a tool were fragmented and sporadic, these behaviors with a tool were more consistent. Participants also used tracked data to see patterns among symptoms, feel psychosocial comfort, and improve symptom communication with clinicians. We conclude with design implications for future real-time tracking tools. PMID:23304413

  1. Real-Time Visual Tracking through Fusion Features

    PubMed Central

    Ruan, Yang; Wei, Zhenzhong

    2016-01-01

    Due to their high speed, correlation filters for object tracking have begun to receive increasing attention. Traditional object trackers based on correlation filters typically use a single type of feature. In this paper, we attempt to integrate multiple feature types to improve the performance, and we propose a new DD-HOG fusion feature that consists of discriminative descriptors (DDs) and histograms of oriented gradients (HOG). However, fusion features as multi-vector descriptors cannot be directly used in prior correlation filters. To overcome this difficulty, we propose a multi-vector correlation filter (MVCF) that can directly convolve with a multi-vector descriptor to obtain a single-channel response that indicates the location of an object. Experiments on the CVPR2013 tracking benchmark with the evaluation of state-of-the-art trackers show the effectiveness and speed of the proposed method. Moreover, we show that our MVCF tracker, which uses the DD-HOG descriptor, outperforms the structure-preserving object tracker (SPOT) in multi-object tracking because of its high speed and ability to address heavy occlusion. PMID:27347951
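    For orientation, the single-channel correlation filter that the MVCF generalizes can be trained in closed form in the Fourier domain. The sketch below follows the familiar MOSSE-style formulation; preprocessing such as cosine windowing, and the summation over DD-HOG channels that defines the authors' multi-vector filter, are omitted, and the regularization value is an assumption:

    import numpy as np

    def gaussian_response(h, w, sigma=2.0):
        """Desired response: a Gaussian peak at the patch centre (Fourier domain)."""
        ys, xs = np.mgrid[0:h, 0:w]
        g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2.0 * sigma ** 2))
        return np.fft.fft2(g)

    def train_filter(patch, lam=1e-2):
        """Closed-form correlation filter H* from a single training patch."""
        F = np.fft.fft2(patch)
        G = gaussian_response(*patch.shape)
        return (G * np.conj(F)) / (F * np.conj(F) + lam)

    def response_map(H_star, patch):
        """Correlation response; the target sits at the argmax of this map."""
        return np.real(np.fft.ifft2(np.fft.fft2(patch) * H_star))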

  2. Optical coherence tomography for ultrahigh-resolution 3D imaging of cell development and real-time guiding for photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Wang, Tianshi; Zhen, Jinggao; Wang, Bo; Xue, Ping

    2009-11-01

    Optical coherence tomography (OCT) is an emerging technique for cross-sectional imaging with high spatial resolution on the micrometer scale. It enables in vivo, non-invasive imaging without contacting the sample and is widely used in biological and clinical applications. In this paper optical coherence tomography is demonstrated for both biological and clinical applications. For the biological application, a white-light interference microscope is developed for ultrahigh-resolution full-field optical coherence tomography (full-field OCT) to implement 3D imaging of biological tissue. A spatial resolution of 0.9 μm × 1.1 μm (transverse × axial) is achieved, and a system sensitivity of 85 dB is obtained at an acquisition time of 5 s per image. The development of a mouse embryo is studied layer by layer with our ultrahigh-resolution full-field OCT. For the clinical application, a handheld optical coherence tomography system is designed for real-time, in situ imaging of port wine stain (PWS) patients and for providing surgical guidance for photodynamic therapy (PDT) treatment. A light source with a center wavelength of 1310 nm, a -3 dB wavelength range of 90 nm and an optical power of 9 mW is used. A lateral resolution of 8 μm and an axial resolution of 7 μm at a rate of 2 frames per second with 102 dB sensitivity are achieved in biological tissue. The OCT images distinguish normal and PWS tissues very well in the clinic and serve as a valuable diagnostic tool for PDT treatment.

  3. An Agile Framework for Real-Time Visual Tracking in Videos

    DTIC Science & Technology

    2012-09-05

    IMPLEMENTATION OF OUR APPROACH We implemented tracking in C++ using the OpenCV library for real-time computer vision. The ensemble in our case consisted...of the algorithm,” OpenCV Document, Intel, Microprocessor Research Labs, 2000. [6] Kaiki Huang and Tieniu Tan, “Vs-star: A Visual Interpretation

  4. Real-time passive tracking for multi-touch medical modeling and simulation.

    PubMed

    Guerra, Christopher J; Hackett, Matthew; Carrola, John T; Couvillion, Warren C; Chambers, David R; Desai, Sapna A

    2014-01-01

    The military medical community expects to minimize use of live tissue and cadavers for training purposes. This research demonstrates an innovative use of synthetic tissue and passive tracking computer vision to create a real-time, interactive environment for training medical staff.

  5. Real-Time Multi-Resolution Blob Tracking

    DTIC Science & Technology

    2004-04-01

    tennis and racquetball videos. 1 Introduction A large number of works in the vision community have focused on video analysis, especially for video...challenging. Small blobs might be noise, but might also be important features of the scene (e.g. in a tennis match, it is imperative not to discard the...few frames. IRIS-04-422 c©2004 ARJF 8 Figure 5: Segmentation and tracking of the players and the ball in professional tennis broadcast video. Since the

  6. 3D Printed Microfluidic Device with Microporous Mn2O3-Modified Screen Printed Electrode for Real-Time Determination of Heavy Metal Ions.

    PubMed

    Hong, Ying; Wu, Meiyan; Chen, Guangwei; Dai, Ziyang; Zhang, Yizhou; Chen, Guosong; Dong, Xiaochen

    2016-12-07

    Fabricating portable devices for the determination of heavy metal ions is an ongoing challenge. Here, a 3D printing approach was adopted to fabricate a microfluidic electrochemical sensor with the desired shape, in which the model for velocity profiles in microfluidic cells was built and optimized by the finite element method (FEM). The electrode in the microfluidic cell was a flexible screen-printed electrode (SPE) modified with porous Mn2O3 derived from a manganese-containing metal-organic framework (Mn-MOF). The microfluidic device presented superior electrochemical detection properties toward heavy metal ions. The calibration curves at the modified SPE for Cd(II) and Pb(II) covered two linear ranges, from 0.5 to 8 and 10 to 100 μg L(-1), respectively. The limits of detection were estimated to be 0.5 μg L(-1) for Cd(II) and 0.2 μg L(-1) for Pb(II), which are about 6 and 50 times lower than the guideline values proposed by the World Health Organization. Furthermore, the microfluidic device was connected to an iPad via USB to enable real-time household applications. Additionally, the sensing system exhibited better stability and reproducibility than traditional detection systems, offering a promising prospect for the detection of heavy metal ions, especially in household and resource-limited settings.

  7. Real-time and in situ enzyme inhibition assay for the flux of hydrogen sulfide based on 3D interwoven AuPd-reduced graphene oxide network.

    PubMed

    Yang, Hongmei; Zhang, Yan; Li, Li; Sun, Guoqiang; Zhang, Lina; Ge, Shenguang; Yu, Jinghua

    2017-01-15

    A highly sensitive enzyme-inhibition analytical platform was established, based for the first time on a paper-supported 3D interwoven AuPd-reduced graphene oxide (rGO) network (NW), for real-time and in situ analysis of H2S released from cancer cells. The novel paper working electrode (PWE), with large electrical conductivity, effective surface area and unusual biocompatibility, was fabricated by controllably assembling rGO and AuPd alloy nanoparticles onto the surface of cellulose fibers and into the macropores of the paper; it was employed as an affinity matrix for horseradish peroxidase (HRP) loading and cell capture. The superior performance of the AuPd-rGO-NW-PWE enabled the loaded HRP to exhibit excellent electrocatalytic behavior toward H2O2, bringing a rapid enhancement of the current response. After H2S is released, the current response decreases markedly owing to the efficient inhibition effect of H2S on HRP activity. The degree of HRP inhibition is directly proportional to the amount of H2S, so the flux of H2S released from cells can be recorded reliably. Thus, the proposed enzyme-inhibition cyto-sensor can be applied to efficiently record the release of H2S, with potential utility in cellular biology and pathophysiology.

  8. 4D ICE: A 2D Array Transducer with Integrated ASIC in a 10 Fr Catheter for Real-Time 3D Intracardiac Echocardiography.

    PubMed

    Wildes, Douglas; Lee, Warren; Haider, Bruno; Cogan, Scott; Sundaresan, Krishnakumar; Mills, David; Yetter, Christopher; Hart, Patrick; Haun, Christopher; Concepcion, Mikael; Kirkhorn, Johan; Bitoun, Marc

    2016-10-12

    We developed a 2.5 x 6.6 mm 2D array transducer with integrated transmit/receive ASIC for 4D ICE (real-time 3D IntraCardiac Echocardiography) applications. The ASIC and transducer design were optimized so that the high voltage transmit, low-voltage TGC (time-gain control) and preamp, subaperture beamformer, and digital control circuits for each transducer element all fit within the 0.019 mm2 area of the element. The transducer assembly was deployed in a 10 Fr (3.3 mm diameter) catheter, integrated with a GE Vivid1 E9 ultrasound imaging system, and evaluated in three pre-clinical studies. 2D image quality and imaging modes were comparable to commercial 2D ICE catheters. The 4D field of view was at least 90° x 60° x 8 cm and could be imaged at 30 volumes/sec, sufficient to visualize cardiac anatomy and other diagnostic and therapy catheters. 4D ICE should significantly reduce X-ray fluoroscopy use and dose during electrophysiology (EP) ablation procedures. 4D ICE may be able to replace trans-esophageal echocardiography (TEE), and the associated risks and costs of general anesthesia, for guidance of some structural heart procedures.

  9. Real-Time Location Tracking of Multiple Construction Laborers

    PubMed Central

    Lim, Jin-Sun; Song, Ki-Il; Lee, Hang-Lo

    2016-01-01

    A real-time location (RTL) system was developed to improve safety for multiple laborers in confined construction sites. The RTL system can monitor the location and movement of multiple laborers in real time. A portable RTL system with a low-battery mode was developed to accommodate various constraints in the construction site. A conventional RTL system that uses radio signal strength indicators (RSSIs) has high error, so an accelerometer with Bluetooth Low Energy (BLE) was added, and a calculation process is suggested. Field tests were performed for validation in underground construction and bridge overlay sites. The results show that the accelerometer and BLE can be used as effective sensors to detect the movement of laborers. When the sensor is fixed, the average error ranges 0.2–0.22 m, and when the sensor is moving, the average error ranges 0.1–0.47 m. PMID:27827973
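    The two sensing ingredients the system combines, BLE ranging and accelerometer-based movement detection, can be illustrated with textbook formulas; the constants below (reference RSSI, path-loss exponent, motion threshold) are illustrative assumptions, not calibration values from the paper:

    import numpy as np

    def rssi_to_distance(rssi_dbm, rssi_ref_dbm=-60.0, d_ref_m=1.0, n=2.2):
        """Log-distance path-loss model: RSSI = RSSI_ref - 10*n*log10(d / d_ref)."""
        return d_ref_m * 10 ** ((rssi_ref_dbm - rssi_dbm) / (10.0 * n))

    def is_moving(accel_samples, g=9.81, threshold=0.4):
        """Flag movement when the acceleration magnitude fluctuates around gravity."""
        mag = np.linalg.norm(accel_samples, axis=1)       # (n, 3) samples in m/s^2
        return np.std(mag - g) > threshold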

  10. Real-time tracking of deformable objects based on combined matching-and-tracking

    NASA Astrophysics Data System (ADS)

    Yan, Junhua; Wang, Zhigang; Wang, Shunfei

    2016-03-01

    Visual tracking is very challenging due to the existence of several sources of variations, such as partial occlusion, deformation, scale variation, rotation, and background clutter. A model-free tracking method based on fusing accelerated features using fast explicit diffusion in nonlinear scale spaces (AKAZE) and KLT features is presented. First, matching-keypoints are generated by finding corresponding keypoints from the consecutive frames and the object template, then tracking-keypoints are generated using the forward-backward flow tracking method, and at last, credible keypoints are obtained by AKAZE-KLT tracking (AKT) algorithm. To avoid the instability of a statistical method, the median method is adopted to compute the object's location, scale, and rotation in each frame. The experimental results show that the AKT algorithm has strong robustness and can achieve accurate tracking especially under conditions of partial occlusion, scale variation, rotation, and deformation. The tracking performance shows higher robustness and accuracy in a variety of datasets and the average frame rate reaches 78 fps, showing good performance in real time.
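    The forward-backward consistency check used to keep only credible tracking-keypoints is straightforward to reproduce with OpenCV's pyramidal Lucas-Kanade tracker; the snippet below is a minimal sketch of that step (the AKAZE matching against the object template and the median-based pose estimate are not shown, and the threshold is an assumption):

    import numpy as np
    import cv2

    def forward_backward_track(prev_gray, curr_gray, pts, fb_thresh=1.0):
        """Track pts forward then backward; keep points whose round-trip error is small."""
        pts = pts.astype(np.float32).reshape(-1, 1, 2)
        fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
        bwd, st2, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None)
        fb_err = np.linalg.norm(pts - bwd, axis=2).ravel()
        good = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < fb_thresh)
        return fwd.reshape(-1, 2)[good], good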

  11. Real-time tumor tracking with an artificial neural networks-based method: a feasibility study.

    PubMed

    Seregni, Matteo; Pella, Andrea; Riboldi, Marco; Orecchia, Roberto; Cerveri, Pietro; Baroni, Guido

    2013-01-01

    The purpose of this study was to develop and assess the performance of a tumor tracking method designed for application in radiation therapy. This motion compensation strategy is currently applied clinically only in conventional photon radiotherapy but not in particle therapy, as greater accuracy in dose delivery is required. We proposed a tracking method that exploits artificial neural networks to estimate the internal tumor trajectory as a function of external surrogate signals. The developed algorithm was tested by means of a retrospective clinical data analysis in 20 patients, who were treated with state of the art infra-red motion tracking for photon radiotherapy, which is used as a benchmark. Integration into a hardware platform for motion tracking in particle therapy was performed and then tested on a moving phantom, specifically developed for this purpose. Clinical data show that a median tracking error reduction up to 0.7 mm can be achieved with respect to state of the art technologies. The phantom study demonstrates that a real-time tumor position estimation is feasible when the external signals are acquired at 60 Hz. The results of this work show that neural networks can be considered a valuable tool for the implementation of high accuracy real-time tumor tracking methodologies.

  12. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network.

    PubMed

    Bukhari, W; Hong, S-M

    2016-03-07

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient's breathing cycle. The algorithm, named EKF-GPRN(+) , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN(+) prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN(+) implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN(+) . The experimental results show that the EKF-GPRN(+) algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN(+) algorithm can further reduce the prediction error by employing the gating
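    As a stripped-down stand-in for the Kalman stage of EKF-GPRN(+), the sketch below runs a constant-velocity Kalman filter along one coordinate and emits look-ahead predictions; the GPRN correction and the covariance-trace gating rule are not reproduced, and the noise parameters are assumptions:

    import numpy as np

    def kalman_lookahead_1d(z, dt, lookahead_steps, q=1e-2, r=0.25):
        """
        z : (n,) measured marker positions (mm) sampled every dt seconds
        returns the position predicted 'lookahead_steps' samples ahead of each measurement
        """
        F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity transition
        H = np.array([[1.0, 0.0]])                   # only position is observed
        Q, R = q * np.eye(2), np.array([[r]])        # assumed noise covariances
        x, P = np.array([z[0], 0.0]), np.eye(2)
        preds = []
        for zk in z:
            # measurement update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([zk]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            # look-ahead prediction to compensate the system latency
            preds.append((np.linalg.matrix_power(F, lookahead_steps) @ x)[0])
            # one-step time update for the next iteration
            x, P = F @ x, F @ P @ F.T + Q
        return np.array(preds)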

  13. Real-time robust target tracking in videos via graph-cuts

    NASA Astrophysics Data System (ADS)

    Fishbain, Barak; Hochbaum, Dorit S.; Yang, Yan T.

    2013-02-01

    Video tracking is a fundamental problem in computer vision with many applications. The goal of video tracking is to isolate a target object from its background across a sequence of frames. Tracking is inherently a three dimensional problem in that it incorporates the time dimension. As such, the computational efficiency of video segmentation is a major challenge. In this paper we present a generic and robust graph-theory-based tracking scheme in videos. Unlike previous graph-based tracking methods, the suggested approach treats motion as a pixel's property (like color or position) rather than as consistency constraints (i.e., the location of the object in the current frame is constrained to appear around its location in the previous frame shifted by the estimated motion) and solves the tracking problem optimally (i.e., neither heuristics nor approximations are applied). The suggested scheme is so robust that it allows for incorporating the computationally cheaper MPEG-4 motion estimation schemes. Although block matching techniques generate noisy and coarse motion fields, their use allows faster computation times as broad variety of off-the-shelf software and hardware components that specialize in performing this task are available. The evaluation of the method on standard and non-standard benchmark videos shows that the suggested tracking algorithm can support a fast and accurate video tracking, thus making it amenable to real-time applications.

  14. Real-time tracking mitochondrial dynamic remodeling with two-photon phosphorescent iridium (III) complexes.

    PubMed

    Huang, Huaiyi; Yang, Liang; Zhang, Pingyu; Qiu, Kangqiang; Huang, Juanjuan; Chen, Yu; Diao, JiaJie; Liu, Jiankang; Ji, Liangnian; Long, Jiangang; Chao, Hui

    2016-03-01

    Mitochondrial fission and fusion control the shape, size, number, and function of mitochondria in the cells of organisms from yeast to mammals. The disruption of mitochondrial fission and fusion is involved in severe human diseases such as Parkinson's disease, Alzheimer's disease, metabolic diseases, and cancers. Agents that can track mitochondrial dynamics in real time are therefore of great importance. However, the short excitation wavelengths and rapid photo-bleaching of commercial mitochondrial dyes render them unsuitable for tracking mitochondrial dynamics. Thus, mitochondrial targeting agents that exhibit superior photo-stability under continual light irradiation, deep tissue penetration and intrinsically high three-dimensional resolution are urgently needed. Two-photon-excited compounds employ low-energy near-infrared light and have emerged as a non-invasive tool for real-time cell imaging. Here, cyclometalated Ir(III) complexes (Ir1-Ir5) are demonstrated as one- and two-photon phosphorescent probes for the real-time imaging and tracking of mitochondrial fission and fusion. The results indicate that Ir2 is well suited for two-photon phosphorescent tracking of mitochondrial fission and fusion in living cells and in Caenorhabditis elegans (C. elegans). This study provides a practical use for mitochondria-targeting two-photon phosphorescent Ir(III) complexes.

  15. Study on Sensor Design Technique for Real-Time Robotic Welding Tracking System

    NASA Astrophysics Data System (ADS)

    Liu, C. J.; Li, Y. B.; Zhu, J. G.; Ye, S. H.

    2006-10-01

    Based on visual measurement techniques, the real-time robotic welding tracking system achieves real-time adjustment of robotic welding according to position and shape changes of the workpiece. In the system design, the sensor design technique is critical because its performance directly affects the precision and stability of the tracking system. Using active visual measurement technology, a camera unit for real-time sampling is built with multiple-strip structured light and a high-performance 1.3-megapixel CMOS image sensor; to realize real-time data processing and transmission, an image processing unit is built with an FPGA and a DSP. Experiments show that the precision of this sensor reaches 0.3 mm and the data rate reaches 10 Mbps, which effectively improves robotic welding quality. With the development of advanced manufacturing technology, automatic, flexible and intelligent welding manufacture has become an inexorable trend. With the advantages of interchangeability and reliability, robotic welding can boost productivity, improve working conditions, stabilize and guarantee weld quality, and realize welding automation for short-run products [1]. At present, robotic welding has already become the application trend of automatic welding technology. Traditional welding robots are play-back machines, which cannot adapt to the environment or to weld distortion. Especially with the increasingly widespread use of arc welding, the deficiencies and limitations of play-back welding technology become more prominent because of changeable welding conditions. Eliminating or decreasing the uncertain influences on weld quality, such as changing welding conditions, has become one of the key technologies influencing the development of modern robotic welding [2]. Based on the visual measuring principle, this paper adopts active visual measuring technology, combined with high-speed image processing and transmission technology, to construct a tracking sensor to realize

  16. WE-A-17A-10: Fast, Automatic and Accurate Catheter Reconstruction in HDR Brachytherapy Using An Electromagnetic 3D Tracking System

    SciTech Connect

    Poulin, E; Racine, E; Beaulieu, L; Binnekamp, D

    2014-06-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are slow and error-prone. The purpose of this study was to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for improved catheter reconstruction in HDR-B protocols. Methods: For this proof-of-principle, a total of 10 catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a Philips-design 18G biopsy needle (used as an EM stylet) and the second generation Aurora Planar Field Generator from Northern Digital Inc. The Aurora EM system exploits alternating current technology and generates 3D points at 40 Hz. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical CT system with resolutions of 0.089 mm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, 5 catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 seconds or less. This would imply that for a typical clinical implant of 17 catheters, the total reconstruction time would be less than 3 minutes. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.92 ± 0.37 mm and 1.74 ± 1.39 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be significantly more accurate (unpaired t-test, p < 0.05). A mean difference of less than 0.5 mm was found between successive EM reconstructions. Conclusion: The EM reconstruction was found to be faster, more accurate and more robust than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator. We would like to disclose that the equipment used in this study comes from a collaboration with Philips Medical.

  17. Transient imaging for real-time tracking around a corner

    NASA Astrophysics Data System (ADS)

    Klein, Jonathan; Laurenzis, Martin; Hullin, Matthias

    2016-10-01

    Non-line-of-sight imaging is a fascinating emerging area of research and is expected to have an impact in numerous application fields including civilian and military sensing. Performance of human perception and situational awareness can be extended by the sensing of shapes and movement around a corner in future scenarios. Rather than seeing through obstacles directly, non-line-of-sight imaging relies on analyzing indirect reflections of light that traveled around the obstacle. In previous work, transient imaging was established as the key mechanism enabling the extraction of useful information from such reflections. So far, a number of different approaches based on transient imaging have been proposed, with back projection being the most prominent one. Different hardware setups were used for the acquisition of the required data; however, all of them have severe drawbacks such as limited image quality, long capture times or very high cost. In this paper we propose the analysis of synthetic transient renderings to gain more insights into the transient light transport. With this simulated data, we are no longer bound to the imperfect data of real systems and gain more flexibility and control over the analysis. In the second part, we use the insights of our analysis to formulate a novel reconstruction algorithm. It uses an adapted light simulation to formulate an inverse problem which is solved in an analysis-by-synthesis fashion. Through rigorous optimization of the reconstruction, it then becomes possible to track known objects outside the line of sight in real time. Due to the forward formulation of the light transport, the algorithm is easily expandable to more general scenarios or different hardware setups. We therefore expect it to become a viable alternative to the classic back projection approach in the future.

  18. An Automated Pipeline for Dendrite Spine Detection and Tracking of 3D Optical Microscopy Neuron Images of In Vivo Mouse Models

    PubMed Central

    Fan, Jing; Zhou, Xiaobo; Dy, Jennifer G.; Zhang, Yong; Wong, Stephen T. C.

    2009-01-01

    The variations in dendritic branch morphology and spine density provide insightful information about brain function and possible treatments for neurodegenerative disease, for example when investigating structural plasticity during the course of Alzheimer's disease. Most automated image processing methods aiming at analyzing these problems are developed for in vitro data. However, in vivo neuron images provide real-time information and direct observation of the dynamics of a disease process in a live animal model. This paper presents an automated approach for detecting spines and tracking spine evolution over time with in vivo image data in an animal model of Alzheimer's disease. We propose an automated pipeline starting with curvilinear structure detection to determine the medial axis of the dendritic backbone and spines connected to the backbone. We then propose the adaptive local binary fitting (aLBF) energy level set model to accurately locate the boundary of dendritic structures using the central line of the curvilinear structure as initialization. To track the growth or loss of spines, we present a maximum-likelihood-based technique to find the graph homomorphism between two image graph structures at different time points. We employ dynamic programming to search for the optimum solution. The pipeline enables us to extract dynamically changing information from real-time in vivo data. We validate our proposed approach by comparing with manual results generated by neurologists. In addition, we discuss the performance of 3D-based segmentation and conclude that our method is more accurate in identifying weak spines. Experiments show that our approach can quickly and accurately detect and quantify spines of in vivo neuron images and is able to identify spine elimination and formation. PMID:19434521

  19. A low-cost test-bed for real-time landmark tracking

    NASA Astrophysics Data System (ADS)

    Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher

    2007-04-01

    A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio controlled (RC) car. A microcontroller board and onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints, generating coordinates used to calculate relative motion and to visually servo to science targets. A limitation of the current system is its serial computation: each additional landmark is tracked in sequence. However, since each landmark is tracked independently, adding targets would not significantly diminish system speed if the algorithms were transferred to appropriate parallel hardware.
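
    A dead-reckoning pose update of the kind described (wheel encoders plus a single-axis gyroscope) can be sketched in a few lines. The fragment below is an illustrative sketch with made-up parameter names and values, not the test-bed's actual software.

        import math

        def dead_reckon(pose, ticks_left, ticks_right, gyro_rate, dt,
                        ticks_per_m=1000.0):
            """Update (x, y, heading) from wheel-encoder ticks and a gyro yaw rate.
            Heading comes from the gyro; distance from the encoder average."""
            x, y, theta = pose
            d_left = ticks_left / ticks_per_m
            d_right = ticks_right / ticks_per_m
            dist = 0.5 * (d_left + d_right)   # distance travelled this step
            theta += gyro_rate * dt           # integrate yaw rate
            x += dist * math.cos(theta)
            y += dist * math.sin(theta)
            return (x, y, theta)

        # drive straight for 10 steps, then arc for 10 steps
        pose = (0.0, 0.0, 0.0)
        for _ in range(10):
            pose = dead_reckon(pose, 50, 50, 0.0, 0.1)
        for _ in range(10):
            pose = dead_reckon(pose, 50, 60, 0.2, 0.1)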

  20. Real-time motion compensation for EM bronchoscope tracking with smooth output - ex-vivo validation

    NASA Astrophysics Data System (ADS)

    Reichl, Tobias; Gergel, Ingmar; Menzel, Manuela; Hautmann, Hubert; Wegner, Ingmar; Meinzer, Hans-Peter; Navab, Nassir

    2012-02-01

    Navigated bronchoscopy provides benefits for endoscopists and patients, but accurate tracking information is needed. We present a novel real-time approach for bronchoscope tracking combining electromagnetic (EM) tracking, airway segmentation, and a continuous model of output. We augment a previously published approach by including segmentation information in the tracking optimization instead of image similarity. Thus, the new approach is feasible in real-time. Since the true bronchoscope trajectory is continuous, the output is modeled using splines and the control points are optimized with respect to displacement from EM tracking measurements and spatial relation to segmented airways. Accuracy of the proposed method and its components is evaluated on a ventilated porcine ex-vivo lung with respect to ground truth data acquired from a human expert. We demonstrate the robustness of the output of the proposed method against added artificial noise in the input data. Smoothness in terms of inter-frame distance is shown to remain below 2 mm, even when up to 5 mm of Gaussian noise are added to the input. The approach is shown to be easily extensible to include other measures like image similarity.

  1. SU-E-J-240: Development of a Novel 4D MRI Sequence for Real-Time Liver Tumor Tracking During Radiotherapy

    SciTech Connect

    Zhuang, L; Burmeister, J; Ye, Y

    2015-06-15

    Purpose: To develop a novel 4D MRI technique that is feasible for real-time liver tumor tracking during radiotherapy. Methods: A volunteer underwent an abdominal 2D fast EPI coronal scan on a 3.0T MRI scanner (Siemens Inc., Germany). An optimal set of parameters was determined based on image quality and scan time. A total of 23 slices were scanned to cover the whole liver in the test scan. For each scan position, the 2D images were retrospectively sorted into multiple phases based on a breathing signal extracted from the images. Consequently, the 2D slices with the same phase number were stacked to form one 3D image. Multiple phases of 3D images formed the 4D MRI sequence representing one breathing cycle. Results: The optimal set of scan parameters was: TR = 57 ms, TE = 19 ms, FOV read = 320 mm and flip angle = 30°, which resulted in a total scan time of 14 s for 200 frames (FMs) per slice and an image resolution of (2.5 mm, 2.5 mm, 5.0 mm) in the three directions. Ten phases of 3D images were generated, each of which had 23 slices. Based on our test scan, only 100 FMs were necessary for the phase sorting process, which may lower the scan time to 7 s/100 FMs/slice. For example, only 5 slices/35 s are necessary for a 4D MRI scan to cover a liver tumor ≤ 2 cm in size, leading to the possibility of tumor trajectory tracking every 35 s during treatment. Conclusion: The novel 4D MRI technique we developed can reconstruct a 4D liver MRI sequence representing one breathing cycle (7 s/slice) without an external monitor. This technique can potentially be used for real-time liver tumor tracking during radiotherapy.
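
    The retrospective sorting step (binning each slice's 2D frames by respiratory state and stacking same-phase slices into 3D volumes) can be outlined as below. This is a schematic sketch, not the sequence's reconstruction code: it bins by normalized breathing amplitude as a stand-in for the phase sorting described, and the fallback for empty bins is an assumption.

        import numpy as np

        def sort_into_phases(frames, resp_signal, n_phases=10):
            """frames: (n_frames, ny, nx) 2D images for one slice position;
            resp_signal: breathing amplitude extracted from the images.
            Returns one representative frame per respiratory bin."""
            s = (resp_signal - resp_signal.min()) / np.ptp(resp_signal)
            phase = np.minimum((s * n_phases).astype(int), n_phases - 1)
            sorted_frames = []
            for p in range(n_phases):
                idx = np.where(phase == p)[0]
                if idx.size == 0:
                    # empty bin: fall back to the frame nearest the bin centre
                    idx = np.array([np.argmin(np.abs(s * n_phases - (p + 0.5)))])
                sorted_frames.append(frames[idx].mean(axis=0))
            return np.stack(sorted_frames)          # (n_phases, ny, nx)

        def build_4d(slices, signals, n_phases=10):
            """Stack same-phase frames across slice positions into 3D volumes,
            giving a (n_phases, n_slices, ny, nx) 4D sequence."""
            per_slice = [sort_into_phases(f, s, n_phases)
                         for f, s in zip(slices, signals)]
            return np.stack(per_slice, axis=1)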

  2. On the local acceleration and flow trajectory of jet flows from circular and semi-circular pipes via 3D particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Tae; Liberzon, Alex; Chamorro, Leonardo P.

    2015-11-01

    The distinctive differences between two jet flows that share the same hydraulic diameter dh = 0.01 m and Re ~ 6000, but different (nozzle) shape, are explored via 3D Particle Tracking Velocimetry using OpenPTV (http://www.openptv.net). The two jets are formed from circular and semicircular pipes and released into a quiescent water tank 40 dh high, 40 dh wide, and 200 dh long. The recirculating system is seeded with 100 μm particles, and flow measurements are performed in the intermediate flow field (14.5 < x/dh < 18.5) at 550 Hz for a total of ~ 30,000 frames. Analysis is focused on the spatial distribution of the local flow acceleration and curvature of the Lagrangian trajectories. The velocity and acceleration of particles are estimated by low-pass filtering their position with a moving cubic spline fitting, while the curvature is obtained from the Frenet-Serret equations. Probability density functions (p.d.f.) of these quantities are obtained at various sub-volumes containing a given streamwise velocity range, and compared between the two cases to evaluate the memory effects in the intermediate flow field.
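
    The Lagrangian quantities mentioned above (velocity, acceleration and trajectory curvature) can be computed from a particle track with a few lines of numerical differentiation. The sketch below uses simple finite differences in place of the moving cubic-spline filter, and applies the Frenet-Serret relation kappa = |r' x r''| / |r'|^3; it is an illustration, not the OpenPTV post-processing.

        import numpy as np

        def trajectory_kinematics(pos, dt):
            """pos: (n, 3) particle positions sampled every dt seconds.
            Returns velocity, acceleration and Frenet-Serret curvature."""
            vel = np.gradient(pos, dt, axis=0)      # r'(t)
            acc = np.gradient(vel, dt, axis=0)      # r''(t)
            cross = np.cross(vel, acc)
            speed = np.linalg.norm(vel, axis=1)
            curvature = np.linalg.norm(cross, axis=1) / np.maximum(speed, 1e-12) ** 3
            return vel, acc, curvature

        # toy helical trajectory sampled at 550 Hz
        dt = 1.0 / 550.0
        t = np.arange(0, 1, dt)
        pos = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 0.1 * t], axis=1)
        vel, acc, kappa = trajectory_kinematics(pos, dt)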

  3. A real-time multiple-cell tracking platform for dielectrophoresis (DEP)-based cellular analysis

    NASA Astrophysics Data System (ADS)

    Prasad, Brinda; Du, Shan; Badawy, Wael; Kaler, Karan V. I. S.

    2005-04-01

    There is an increasing demand from biosciences to develop new and efficient techniques to assist in the preparation and analysis of biological samples such as cells in suspension. A dielectrophoresis (DEP)-based characterization and measurement technique on biological cells opens up a broader perspective for early diagnosis of diseases. An efficient real-time multiple-cell tracking platform coupled with DEP to capture and quantify the dynamics of cell motion and obtain cell viability information is presented. The procedure for tracking a single DEP-levitated Canola plant protoplast, using the motion-based segmentation algorithm hierarchical adaptive merge split mesh-based technique (HAMSM) for cell identification, has been enhanced for identifying and tracking multiple cells. The tracking technique relies on the deformation of mesh topology that is generated according to the movement of biological cells in a sequence of images that allows the simultaneous extraction of the biological cell from the image and the associated motion characteristics. Preliminary tests were conducted with yeast cells and then applied to a cancerous cell line subjected to DEP fields. Characteristics, such as cell count, velocity and size, were individually extracted from the tracked results of the cell sample. Tests were limited to eight yeast cells and two cancer cells. A performance analysis to assess tracking accuracy, computational effort and processing time was also conducted. The tracking technique employed on model intact cells in DEP fields proved to be accurate, reliable and robust.

  4. Real-time object tracking with correlation filtering and state prediction

    NASA Astrophysics Data System (ADS)

    Contreras, Viridiana; Díaz-Ramírez, Victor H.; Kober, Vitaly; Tapia-Armenta, Juan J.

    2013-09-01

    A real-time tracking system based on adaptive correlation filtering and state prediction is proposed. The system is able to estimate the position of multiple targets within the observed scene at a high rate by taking into account information from past and present scene frames. The position of the targets in the current frame is estimated with the help of a bank of composite correlation filters applied to several small regions taken from the observed scene. These small regions are updated in each frame according to information from a state predictor based on the motion model of targets in a two-dimensional plane. The proposed system is implemented on a graphics processing unit to take advantage of massive parallelism. Computer simulation results obtained with the proposed system are presented and discussed in terms of tracking accuracy and real-time operation efficiency.
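
    A toy sketch of the two ingredients named above follows: the target is located inside a small search region by FFT-based correlation against a template, and a constant-velocity predictor decides where to place the next search region. It stands in for the composite filter bank and GPU implementation described in the record; the blob, template and region sizes are invented.

        import numpy as np

        def correlate_region(region, template):
            """Locate the template inside a small region by FFT cross-correlation.
            Returns the (row, col) of the best match within the region."""
            corr = np.fft.ifft2(np.fft.fft2(region) *
                                np.conj(np.fft.fft2(template, region.shape))).real
            return np.unravel_index(np.argmax(corr), corr.shape)

        def predict_next(pos, prev_pos):
            """Constant-velocity prediction of the next search-region centre."""
            vy, vx = pos[0] - prev_pos[0], pos[1] - prev_pos[1]
            return (pos[0] + vy, pos[1] + vx)

        # toy frame: bright blob at (40, 55); template is a matching small patch
        frame = np.zeros((128, 128)); frame[40:44, 55:59] = 1.0
        template = np.ones((4, 4))
        region = frame[30:70, 45:85]      # search region around a prior estimate
        peak = correlate_region(region, template)
        target = (30 + peak[0], 45 + peak[1])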

  5. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain- modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  6. Real-time optical multiple-object recognition and tracking demonstration: A friendly challenge to the digital field

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1980-01-01

    Researchers demonstrated the first optical multiple-object tracking system. The system is capable of simultaneously tracking multiple objects, each with independent movement, in real time, limited only by the TV frame rate (one frame per 30 ms). In order to perform a similar tracking operation, a large computer system and very complex software would be needed. Although researchers have demonstrated the tracking of only 3 objects, the system capacity can easily be expanded by 2 orders of magnitude.

  7. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    PubMed Central

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-01-01

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
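
    The Kalman filter with a second-order motion model that supplies candidate patches to the CT tracker can be sketched as follows. This is an illustrative constant-acceleration filter for a single image coordinate with placeholder noise values, not the implementation used on the robot.

        import numpy as np

        class SecondOrderKF:
            """Kalman filter with a constant-acceleration (second-order) motion
            model for one image coordinate: state = [position, velocity, accel]."""
            def __init__(self, dt=1 / 30, q=1e-2, r=4.0):
                self.F = np.array([[1, dt, 0.5 * dt**2],
                                   [0, 1,  dt],
                                   [0, 0,  1]])
                self.H = np.array([[1.0, 0.0, 0.0]])
                self.Q = q * np.eye(3)
                self.R = np.array([[r]])
                self.x = np.zeros(3)
                self.P = np.eye(3) * 10.0

            def predict(self):
                """Propagate the state; the returned position is where the
                tracker would centre its candidate patches."""
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                return self.x[0]

            def update(self, z):
                """Correct the state with the tracker's measured position z."""
                y = z - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + (K @ y).ravel()
                self.P = (np.eye(3) - K @ self.H) @ self.P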

  8. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    PubMed

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.

  9. Real-time circumferential mapping catheter tracking for motion compensation in atrial fibrillation ablation procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Bourier, Felix; Wimmer, Andreas; Koch, Martin; Kiraly, Atilla; Liao, Rui; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert

    2012-02-01

    Atrial fibrillation (AFib) has been identified as a major cause of stroke. Radiofrequency catheter ablation has become an increasingly important treatment option, especially when drug therapy fails. Navigation under X-ray can be enhanced by using augmented fluoroscopy. It renders overlay images from pre-operative 3-D data sets which are then fused with X-ray images to provide more details about the underlying soft-tissue anatomy. Unfortunately, these fluoroscopic overlay images are compromised by respiratory and cardiac motion. Various methods to deal with motion have been proposed. To meet clinical demands, they have to be fast. Methods providing a processing frame rate of 3 frames-per-second (fps) are considered suitable for interventional electrophysiology catheter procedures if an acquisition frame rate of 2 fps is used. Unfortunately, when working at a processing rate of 3 fps, the delay until the actual motion compensated image can be displayed is about 300 ms. More recent algorithms can achieve frame rates of up to 20 fps, which reduces the lag to 50 ms. By using a novel approach involving a 3-D catheter model, catheter segmentation and a distance transform, we can speed up motion compensation to 25 fps which results in a display delay of only 40 ms on a standard workstation for medical applications. Our method uses a constrained 2-D/3-D registration to perform catheter tracking, and it obtained a 2-D tracking error of 0.61 mm.
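
    The role of the distance transform in the catheter registration can be illustrated as below: the segmented catheter in the X-ray frame is turned into a distance map once, so that evaluating how well a projected 3-D catheter model fits costs only a handful of look-ups. This is a schematic fragment with an invented segmentation and trial poses, not the published constrained 2-D/3-D registration.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def registration_cost(dist_map, projected_points):
            """Mean distance-map value under the projected catheter model points.
            Low cost means the model lies on (or near) the segmented catheter."""
            rows = np.clip(projected_points[:, 0].astype(int), 0, dist_map.shape[0] - 1)
            cols = np.clip(projected_points[:, 1].astype(int), 0, dist_map.shape[1] - 1)
            return dist_map[rows, cols].mean()

        # binary segmentation of the mapping catheter in one fluoroscopic frame
        seg = np.zeros((256, 256), bool)
        seg[100, 80:180] = True
        # distance to the nearest segmented pixel, computed once per frame
        dist_map = distance_transform_edt(~seg)
        # candidate projections of the 3-D catheter model at two trial poses
        good = np.column_stack([np.full(100, 100), np.arange(80, 180)])
        bad = good + np.array([7, 0])
        print(registration_cost(dist_map, good), registration_cost(dist_map, bad))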

  10. Imaging the behavior of molecules in biological systems: breaking the 3D speed barrier with 3D multi-resolution microscopy.

    PubMed

    Welsher, Kevin; Yang, Haw

    2015-01-01

    The overwhelming effort in the development of new microscopy methods has been focused on increasing the spatial and temporal resolution in all three dimensions to enable the measurement of the molecular scale phenomena at the heart of biological processes. However, there exists a significant speed barrier to existing 3D imaging methods, which is associated with the overhead required to image large volumes. This overhead can be overcome to provide nearly unlimited temporal precision by simply focusing on a single molecule or particle via real-time 3D single-particle tracking and the newly developed 3D Multi-resolution Microscopy (3D-MM). Here, we investigate the optical and mechanical limits of real-time 3D single-particle tracking in the context of other methods. In particular, we investigate the use of an optical cantilever for position sensitive detection, finding that this method yields system magnifications of over 3000×. We also investigate the ideal PID control parameters and their effect on the power spectrum of simulated trajectories. Taken together, these data suggest that the speed limit in real-time 3D single particle-tracking is a result of slow piezoelectric stage response as opposed to optical sensitivity or PID control.
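
    The interplay between PID gains and stage response discussed above can be illustrated with a minimal one-dimensional simulation of a piezo stage chasing a diffusing particle. The gains, stage time constant and noise levels below are arbitrary placeholders, not values from the study.

        import numpy as np

        def simulate_tracking(kp=0.8, ki=0.05, kd=0.1, dt=1e-4, n=20000,
                              stage_tau=2e-3, diffusion=0.05):
            """1D toy model: a particle diffuses while a PID controller drives a
            first-order (band-limited) piezo stage to keep it centred."""
            rng = np.random.default_rng(0)
            particle, stage = 0.0, 0.0
            integral, prev_err = 0.0, 0.0
            err_trace = np.empty(n)
            for i in range(n):
                particle += rng.normal(0.0, diffusion * np.sqrt(dt))  # Brownian step
                err = particle - stage            # error seen by the position detector
                integral += err * dt
                deriv = (err - prev_err) / dt
                setpoint = stage + kp * err + ki * integral + kd * deriv
                # first-order piezo response: the slow stage lags its setpoint
                stage += (setpoint - stage) * (dt / stage_tau)
                prev_err = err
                err_trace[i] = err
            return err_trace

        errors = simulate_tracking()
        print("rms tracking error:", np.sqrt(np.mean(errors ** 2)))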

  11. Quantification of Left Ventricular Linear, Areal and Volumetric Dimensions: A Phantom and in Vivo Comparison of 2-D and Real-Time 3-D Echocardiography with Cardiovascular Magnetic Resonance.

    PubMed

    Polte, Christian L; Lagerstrand, Kerstin M; Gao, Sinsia A; Lamm, Carl R; Bech-Hanssen, Odd

    2015-07-01

    Two-dimensional echocardiography and real-time 3-D echocardiography have been reported to underestimate human left ventricular volumes significantly compared with cardiovascular magnetic resonance. We investigated the ability of 2-D echocardiography, real-time 3-D echocardiography and cardiovascular magnetic resonance to delineate dimensions of increasing complexity (diameter-area-volume) in a multimodality phantom model and in vivo, with the aim of elucidating the main cause of underestimation. All modalities were able to delineate phantom dimensions with high precision. In vivo, 2-D and real-time 3-D echocardiography underestimated short-axis end-diastolic linear and areal and all left ventricular volumetric dimensions significantly compared with cardiovascular magnetic resonance, but not short-axis end-systolic linear and areal dimensions. Underestimation increased successively from linear to volumetric left ventricular dimensions. When analyzed according to the same principles, 2-D and real-time 3-D echocardiography provided similar left ventricular volumes. In conclusion, echocardiographic underestimation of left ventricular dimensions is due mainly to inherent technical differences in the ability to differentiate trabeculated from compact myocardium. Identical endocardial border definition criteria are needed to minimize differences between the modalities and to ensure better comparability in clinical practice.

  12. Real-time active MR-tracking of metallic stylets in MR-guided radiation therapy

    PubMed Central

    Wang, Wei; Dumoulin, Charles L.; Viswanathan, Akila N.; Tse, Zion T. H.; Mehrtash, Alireza; Loew, Wolfgang; Norton, Isaiah; Tokuda, Junichi; Seethamraju, Ravi T.; Kapur, Tina; Damato, Antonio L.; Cormack, Robert A.; Schmidt, Ehud J.

    2014-01-01

    Purpose To develop an active MR-tracking system to guide placement of metallic devices for radiation therapy. Methods An actively tracked metallic stylet for brachytherapy was constructed by adding printed-circuit micro-coils to a commercial stylet. The coil design was optimized by electromagnetic simulation, and has a radio-frequency lobe pattern extending ~5 mm beyond the strong B0 inhomogeneity region near the metal surface. An MR-tracking sequence with phase-field dithering was used to overcome residual effects of B0 and B1 inhomogeneities caused by the metal, as well as from inductive coupling to surrounding metallic stylets. The tracking system was integrated with a graphical workstation for real-time visualization. 3T MRI catheter-insertion procedures were tested in phantoms and ex-vivo animal tissue, and then performed in three patients during interstitial brachytherapy. Results The tracking system provided high-resolution (0.6 × 0.6 × 0.6 mm3) and rapid (16 to 40 frames per second, with three to one phase-field dithering directions) catheter localization in phantoms, animals, and three gynecologic cancer patients. Conclusion This is the first demonstration of active tracking of the shaft of metallic stylet in MR-guided brachytherapy. It holds the promise of assisting physicians to achieve better targeting and improving outcomes in interstitial brachytherapy. PMID:24903165

  13. Performance of a wavelength-diversified FSO tracking algorithm for real-time battlefield communications

    NASA Astrophysics Data System (ADS)

    Al-Akkoumi, Mouhammad K.; Harris, Alan; Huck, Robert C.; Sluss, James J., Jr.; Giuma, Tayeb A.

    2008-02-01

    Free-space optical (FSO) communications links are envisioned as a viable option for the provision of temporary high-bandwidth communication links between moving platforms, especially for deployment in battlefield situations. For successful deployment in such real-time environments, fast and accurate alignment and tracking of the FSO equipment is essential. In this paper, a two-wavelength diversity scheme using 1.55 μm and 10 μm is investigated in conjunction with a previously described tracking algorithm to maintain line-of-sight connectivity in battlefield scenarios. An analytical model of a mobile FSO communications link is described. Following the analytical model, simulation results are presented for an FSO link between an unmanned aerial surveillance vehicle, the Global Hawk, and a mobile ground vehicle, an M1 Abrams Main Battle Tank. The scenario is analyzed under varying weather conditions to verify that continuous connectivity is available through the tracking algorithm. Simulation results are generated to describe the performance of the tracking algorithm with respect to both received optical power levels and variations in beam divergence. Advances to any proposed tracking algorithm due to these power and divergence variations are described for future tracking algorithm development.

  14. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.

  15. Accuracy of a Real-Time, Computerized, Binocular, Three-Dimensional Trajectory-Tracking Device for Recording Functional Mandibular Movements

    PubMed Central

    Zhao, Tian; Yang, Huifang; Sui, Huaxin; Salvi, Satyajeet Sudhir; Wang, Yong; Sun, Yuchun

    2016-01-01

    Objective Developments in digital technology have permitted researchers to study mandibular movements. Here, the accuracy of a real-time, computerized, binocular, three-dimensional (3D) trajectory-tracking device for recording functional mandibular movements was evaluated. Methods An occlusal splint without the occlusal region was created based on a plaster cast of the lower dentition. The splint was rigidly connected with a target on its labial side and seated on the cast. The cast was then rigidly attached to the stage of a high-precision triaxial electronic translator, which was used to move the target-cast-stage complex. Half-circular movements (5.00-mm radius) in three planes (XOY, XOZ, YOZ) and linear movements along the x-axis were performed at 5.00 mm/s. All trajectory points were recorded with the binocular 3D trajectory-tracking device and fitted to arcs or lines, respectively, with the Imageware software. To analyze the accuracy of the trajectory-tracking device, the mean distances between the trajectory points and the fitted arcs or lines were measured, and the mean differences between the lengths of the fitted arcs’ radii and a set value (5.00 mm) were then calculated. A one-way analysis of variance was used to evaluate the spatial consistency of the recording accuracy in three different planes. Results The mean distances between the trajectory points and fitted arcs or lines were 0.076 ± 0.033 mm or 0.089 ± 0.014 mm. The mean difference between the lengths of the fitted arcs’ radii and the set value (5.00 mm) was 0.025 ± 0.071 mm. A one-way ANOVA showed that the recording errors in three different planes were not statistically significant. Conclusion These results suggest that the device can record certain movements at 5.00 mm/s, which is similar to the speed of functional mandibular movements. In addition, the recordings had an error of <0.1 mm and good spatial consistency. Thus, the device meets some of the requirements necessary for
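
    The accuracy analysis described above (fitting recorded trajectory points to an arc and comparing the fitted radius with the 5.00 mm set value) can be reproduced in outline with an algebraic least-squares circle fit. The sketch below works on 2D points in the plane of the test motion and uses a Kasa-style fit; it is an illustration, not the Imageware procedure used in the study.

        import numpy as np

        def fit_circle(points):
            """Algebraic (Kasa) least-squares circle fit to 2D points.
            Returns centre (cx, cy), radius, and per-point radial residuals."""
            x, y = points[:, 0], points[:, 1]
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b = x**2 + y**2
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            r = np.sqrt(c + cx**2 + cy**2)
            residuals = np.abs(np.hypot(x - cx, y - cy) - r)
            return (cx, cy), r, residuals

        # half-circular test motion: radius 5.00 mm plus 0.05 mm recording noise
        theta = np.linspace(0, np.pi, 200)
        pts = 5.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
        pts += np.random.normal(0, 0.05, pts.shape)
        centre, radius, res = fit_circle(pts)
        print("radius error (mm):", abs(radius - 5.0), "mean residual (mm):", res.mean())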

  16. Real-time auto-adaptive margin generation for MLC-tracked radiotherapy.

    PubMed

    Glitzner, M; Fast, M F; de Senneville, B Denis; Nill, S; Oelfke, U; Lagendijk, J J W; Raaymakers, B W; Crijns, S P M

    2017-01-07

    In radiotherapy, abdominal and thoracic sites are candidates for performing motion tracking. With real-time control it is possible to adjust the multileaf collimator (MLC) position to the target position. However, positions are not perfectly matched and position errors arise from system delays and complicated response of the electromechanic MLC system. Although, it is possible to compensate parts of these errors by using predictors, residual errors remain and need to be compensated to retain target coverage. This work presents a method to statistically describe tracking errors and to automatically derive a patient-specific, per-segment margin to compensate the arising underdosage on-line, i.e. during plan delivery. The statistics of the geometric error between intended and actual machine position are derived using kernel density estimators. Subsequently a margin is calculated on-line according to a selected coverage parameter, which determines the amount of accepted underdosage. The margin is then applied onto the actual segment to accommodate the positioning errors in the enlarged segment. The proof-of-concept was tested in an on-line tracking experiment and showed the ability to recover underdosages for two test cases, increasing V90% in the underdosed area by about 47% and 41%, respectively. The used dose model was able to predict the loss of dose due to tracking errors and could be used to infer the necessary margins. The implementation had a running time of 23 ms which is compatible with real-time requirements of MLC tracking systems. The auto-adaptivity to machine and patient characteristics makes the technique a generic yet intuitive candidate to avoid underdosages due to MLC tracking errors.

  17. Real-time auto-adaptive margin generation for MLC-tracked radiotherapy

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; Fast, M. F.; de Senneville, B. Denis; Nill, S.; Oelfke, U.; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2017-01-01

    In radiotherapy, abdominal and thoracic sites are candidates for performing motion tracking. With real-time control it is possible to adjust the multileaf collimator (MLC) position to the target position. However, positions are not perfectly matched and position errors arise from system delays and complicated response of the electromechanic MLC system. Although, it is possible to compensate parts of these errors by using predictors, residual errors remain and need to be compensated to retain target coverage. This work presents a method to statistically describe tracking errors and to automatically derive a patient-specific, per-segment margin to compensate the arising underdosage on-line, i.e. during plan delivery. The statistics of the geometric error between intended and actual machine position are derived using kernel density estimators. Subsequently a margin is calculated on-line according to a selected coverage parameter, which determines the amount of accepted underdosage. The margin is then applied onto the actual segment to accommodate the positioning errors in the enlarged segment. The proof-of-concept was tested in an on-line tracking experiment and showed the ability to recover underdosages for two test cases, increasing V90% in the underdosed area by about 47% and 41%, respectively. The used dose model was able to predict the loss of dose due to tracking errors and could be used to infer the necessary margins. The implementation had a running time of 23 ms which is compatible with real-time requirements of MLC tracking systems. The auto-adaptivity to machine and patient characteristics makes the technique a generic yet intuitive candidate to avoid underdosages due to MLC tracking errors.
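
    The margin-generation idea (estimate the density of recent tracking errors with a kernel density estimator, then pick a margin so that only an accepted fraction of errors falls outside it) can be sketched as follows. The coverage threshold, error trace and grid are placeholders and the one-sided absolute-error reading of the coverage parameter is an assumption; this is not the authors' implementation.

        import numpy as np
        from scipy.stats import gaussian_kde

        def auto_margin(errors, coverage=0.95, grid_mm=np.linspace(0, 10, 1000)):
            """Derive a per-segment margin (mm) so that `coverage` of the
            estimated error distribution is covered, using a Gaussian KDE of
            recent geometric errors between intended and actual MLC positions."""
            kde = gaussian_kde(np.abs(errors))
            pdf = kde(grid_mm)
            cdf = np.cumsum(pdf)
            cdf /= cdf[-1]
            return grid_mm[np.searchsorted(cdf, coverage)]

        # placeholder error trace: delay-dominated tracking errors in mm
        rng = np.random.default_rng(1)
        errors = rng.normal(0.0, 1.2, 500) + 0.3 * np.sin(np.linspace(0, 20, 500))
        print("per-segment margin (mm):", auto_margin(errors))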

  18. Quantitative Evaluation of 3D Mouse Behaviors and Motor Function in the Open-Field after Spinal Cord Injury Using Markerless Motion Tracking

    PubMed Central

    Sheets, Alison L.; Lai, Po-Lun; Fisher, Lesley C.; Basso, D. Michele

    2013-01-01

    Thousands of scientists strive to identify cellular mechanisms that could lead to breakthroughs in developing ameliorative treatments for debilitating neural and muscular conditions such as spinal cord injury (SCI). Most studies use rodent models to test hypotheses, and these are all limited by the methods available to evaluate animal motor function. This study’s goal was to develop a behavioral and locomotor assessment system in a murine model of SCI that enables quantitative kinematic measurements to be made automatically in the open-field by applying markerless motion tracking approaches. Three-dimensional movements of eight naïve, five mild, five moderate, and four severe SCI mice were recorded using 10 cameras (100 Hz). Background subtraction was used in each video frame to identify the animal’s silhouette, and the 3D shape at each time was reconstructed using shape-from-silhouette. The reconstructed volume was divided into front and back halves using k-means clustering. The animal’s front Center of Volume (CoV) height and whole-body CoV speed were calculated and used to automatically classify animal behaviors including directed locomotion, exploratory locomotion, meandering, standing, and rearing. More detailed analyses of CoV height, speed, and lateral deviation during directed locomotion revealed behavioral differences and functional impairments in animals with mild, moderate, and severe SCI when compared with naïve animals. Naïve animals displayed the widest variety of behaviors including rearing and crossing the center of the open-field, the fastest speeds, and tallest rear CoV heights. SCI reduced the range of behaviors, and decreased speed (r = .70 p<.005) and rear CoV height (r = .65 p<.01) were significantly correlated with greater lesion size. This markerless tracking approach is a first step toward fundamentally changing how rodent movement studies are conducted. By providing scientists with sensitive, quantitative measurement
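
    The volume-processing step described above (splitting the reconstructed voxel cloud into front and back halves with k-means and reading off centre-of-volume height and speed) can be outlined as below. This is an illustrative sketch on a toy point cloud; the choice of x as the body axis and z as height is an assumption, and it is not the published pipeline.

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def cov_metrics(voxels_t, dt):
            """voxels_t: list of (n_i, 3) reconstructed voxel coordinates per frame
            (x assumed along the body, z assumed vertical). Returns the front-half
            CoV height per frame and the whole-body CoV speed between frames."""
            front_heights, cov_positions = [], []
            for vox in voxels_t:
                # split the reconstructed volume into two halves with k-means
                centroids, labels = kmeans2(vox.astype(float), 2, minit='points')
                front_label = np.argmax(centroids[:, 0])   # cluster with larger x = front
                front = vox[labels == front_label]
                front_heights.append(front[:, 2].mean())   # front CoV height
                cov_positions.append(vox.mean(axis=0))     # whole-body CoV
            cov_positions = np.array(cov_positions)
            speed = np.linalg.norm(np.diff(cov_positions, axis=0), axis=1) / dt
            return np.array(front_heights), speed

        # toy reconstruction: 3 frames of an elongated voxel cloud drifting in x
        rng = np.random.default_rng(2)
        frames = [rng.uniform([s, 0, 0], [10 + s, 3, 4], size=(500, 3))
                  for s in (0.0, 0.5, 1.0)]
        heights, speed = cov_metrics(frames, dt=0.01)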

  19. Self consistent particles dynamics in/out of the cusp region by using back tracking technics; a global 3D PIC simulation approach

    NASA Astrophysics Data System (ADS)

    Esmaeili, A.; Cai, D.; Lembege, B.; Nishikawa, K.

    2013-12-01

    Large-scale three-dimensional PIC (particle-in-cell) simulations are presently used in order to analyze the global solar wind-terrestrial magnetosphere interaction within a fully self-consistent approach, where both electrons and ions are treated as an assembly of individual particles. This 3D kinetic approach allows us to analyze in particular the dynamics and the fine structures of the cusp region when including self-consistently not only its whole neighborhood (in the terrestrial magnetosphere) but also the impact of the solar wind and the interplanetary magnetic field (IMF) features. Herein, we focus our attention on the cusp region and in particular on the acceleration and precipitation of particles (both ions and electrons) within the cusp. In the present simulations, the IMF is chosen northward (i.e. where the X-reconnection region is just above the cusp, in the meridian plane). Back-tracking of self-consistent particles is analyzed in detail in order to determine (i) which particles (just above the cusp) are precipitated deeply into the cusp, (ii) which populations are injected from the cusp into the nearby tail, (iii) where the particles suffer the largest energisation along their self-consistent trajectories, (iv) where these populations accumulate, and (v) where the most energetic particles are originally coming from. This approach allows particles to be tracked within the scenario "solar wind-magnetosheath-cusp-nearby tail"; moreover, it strongly differs from the standard test-particle technique and provides information not accessible when using a full MHD approach. Keywords: Tracing Particles, Particle In Cell (PIC) simulation, double cusp, test particles method, IMF, Solar wind, Magnetosphere
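
    Back-tracking a particle through given electromagnetic fields amounts to integrating its equation of motion with a negative time step. The fragment below is a minimal non-relativistic sketch with simple Euler stepping and placeholder uniform fields, far simpler than the self-consistent PIC fields used in the study.

        import numpy as np

        def back_track(x0, v0, E_func, B_func, q_over_m, dt=1e-3, steps=5000):
            """Integrate dx/dt = v, dv/dt = (q/m)(E + v x B) backwards in time
            (negative step) to find where a particle observed at x0, v0 came from."""
            x, v = np.array(x0, float), np.array(v0, float)
            path = [x.copy()]
            for _ in range(steps):
                a = q_over_m * (E_func(x) + np.cross(v, B_func(x)))
                v -= a * dt        # step the velocity backwards in time
                x -= v * dt        # step the position backwards in time
                path.append(x.copy())
            return np.array(path)

        # placeholder fields: uniform B along z, no E (pure gyration)
        E = lambda x: np.zeros(3)
        B = lambda x: np.array([0.0, 0.0, 1.0])
        trajectory = back_track([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], E, B, q_over_m=1.0)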

  20. Left Atrial Deformation Analysis in Patients with Corrected Tetralogy of Fallot by 3D Speckle-Tracking Echocardiography (from the MAGYAR-Path Study)

    PubMed Central

    Havasi, Kálmán; Domsik, Péter; Kalapos, Anita; McGhie, Jackie S.; Roos-Hesselink, Jolien W.; Forster, Tamás; Nemes, Attila

    2017-01-01

    Background Three-dimensional (3D) echocardiography coupled with speckle-tracking echocardiographic (STE) capability is a novel methodology which has been demonstrated to be useful for the assessment of left atrial (LA) volumes and functional properties. There is increased scientific interest in myocardial deformation analysis in adult patients with corrected tetralogy of Fallot (cTOF). Objectives To compare LA volumes, volume-based functional properties and strain parameters between cTOF patients and age- and gender-matched healthy controls. Methods The study population consisted of 19 consecutive adult patients with cTOF in sinus rhythm followed at the University of Szeged, Hungary (mean age: 37.9 ± 11.3 years, 8 men, who had repair at the age of 4.1 ± 2.5 years). They all had undergone standard transthoracic two-dimensional Doppler echocardiographic study extended with 3DSTE. Their results were compared to 23 age- and gender-matched healthy controls (mean age: 39.2 ± 10.6 years, 14 men). Results Increased LA volumes and reduced LA emptying fractions with respect to the cardiac cycle could be demonstrated in cTOF patients compared to controls. LA stroke volumes featuring all LA functions showed no differences between the 2 groups examined. LA global and mean segmental uni- and multidirectional peak strains featuring LA reservoir function were found to be diminished in adult patients with cTOF as compared to controls. Similarly to peak strains, reduced global and mean segmental LA strains at atrial contraction, characterizing atrial booster pump function, could be demonstrated in cTOF patients as compared to controls. Conclusions Significant deterioration of all LA functions could be demonstrated in adult patients with cTOF late after repair. PMID:28327874

  1. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    SciTech Connect

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang; Liou, Shu-Cheng; Nath, Ravinder; Liu Wu

    2013-01-15

    Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching needs to match an object shape that changes significantly with different implantations and projection angles. Although these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computational load, because they all require an exhaustive search in the region of interest. The authors solve this problem by synergetic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers using discriminant analysis for initialization and mean-shift feature-space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered as the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The

  2. A monoscopic method for real-time tumour tracking using combined occasional x-ray imaging and continuous respiratory monitoring

    NASA Astrophysics Data System (ADS)

    Cho, Byungchul; Suh, Yelin; Dieterich, Sonja; Keall, Paul J.

    2008-06-01

    Three major linear accelerator vendors offer gantry-mounted single (monoscopic) x-ray imagers. The use of monoscopic imaging to estimate three-dimensional (3D) target positions has not been fully explored. The purpose of this work is to develop and investigate a robust monoscopic method for real-time tumour tracking, combining occasional x-ray imaging and continuous external respiratory monitoring, and compare this with an established stereoscopic method. Monoscopic estimation of 3D target positions is a two-step procedure. Step (1) is similar to the stereoscopic approach using combined occasional x-ray imaging and real-time external respiratory monitoring, i.e. to establish the correlation between the target coordinates T(x, y, z) and the external respiratory signal (R) (sECM: stereoscopic external correlation model). However, in monoscopic estimation, the correlation between the two coordinates (xp, yp) projected on the imager plane and the external respiratory signal (mECM: monoscopic external correlation model) is established. With only a single projection, the component of the 3D target position, which is along the x-ray imaging direction, is unresolved. Therefore, step (2) is used to estimate the unresolved component (zpar) by building a correlation model between the unresolved component and the two other components projected on the imager (ICM: internal correlation model) with a prior 3D target trajectory that may be obtained by 4DCT, MV/kV imaging or 4DCBCT. At the time of prediction, (xp, yp) are estimated from (R) using the correlation model in step (1), and then zpar is estimated from the estimated (xp, yp) using the correlation model in step (2). The performance of the proposed method was evaluated under various model update intervals and compared with the stereoscopic estimation method using 160 tumour trajectory and external respiratory motion data recorded at 25 Hz from 46 thoracic and abdominal cancer patients who underwent hypofractionated
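
    The two-step estimation can be sketched with ordinary least squares: step (1) maps the external respiratory signal R to the projected coordinates (xp, yp), and step (2) maps (xp, yp) to the unresolved component zpar using a prior 3D trajectory. The linear models, synthetic training data and variable names below are illustrative simplifications of the correlation models described, not the published method.

        import numpy as np

        def fit_linear(X, y):
            """Least-squares fit y ~ [1, X]; returns the coefficient vector."""
            A = np.column_stack([np.ones(len(X)), X])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coef

        def predict_linear(coef, X):
            return np.column_stack([np.ones(len(X)), X]) @ coef

        # synthetic training data: external signal R, projected (xp, yp) from
        # occasional imaging, and a prior 3D trajectory giving zpar
        R_train = np.random.rand(50, 1)
        xp_train = 3.0 * R_train[:, 0] + 0.1 * np.random.randn(50)
        yp_train = -2.0 * R_train[:, 0] + 0.1 * np.random.randn(50)
        zpar_train = 1.5 * xp_train - 0.5 * yp_train + 0.1 * np.random.randn(50)

        mECM_x = fit_linear(R_train, xp_train)     # step (1): R -> xp
        mECM_y = fit_linear(R_train, yp_train)     # step (1): R -> yp
        ICM = fit_linear(np.column_stack([xp_train, yp_train]), zpar_train)  # step (2)

        # at prediction time only the external signal is available
        R_new = np.array([[0.4], [0.7]])
        xp = predict_linear(mECM_x, R_new)
        yp = predict_linear(mECM_y, R_new)
        zpar = predict_linear(ICM, np.column_stack([xp, yp]))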

  3. Real-time tumor tracking using implanted positron emission markers: concept and simulation study.

    PubMed

    Xu, Tong; Wong, Jerry T; Shikhaliev, Polad M; Ducote, Justin L; Al-Ghazi, Muthana S; Molloi, Sabee

    2006-07-01

    The delivery accuracy of radiation therapy for pulmonary and abdominal tumors suffers from tumor motion due to respiration. Respiratory gating should be applied to avoid the use of a large target volume margin that results in a substantial dose to the surrounding normal tissue. Precise respiratory gating requires the exact spatial position of the tumor to be determined in real time during treatment. Usually, fiducial markers are implanted inside or next to the tumor to provide both accurate patient setup and real-time tumor tracking. However, current tumor tracking systems require either substantial x-ray exposure to the patient or large fiducial markers that limit the value of their application for pulmonary tumors. We propose a real-time tumor tracking system using implanted positron emission markers (PeTrack). Each marker will be labeled with low activity positron emitting isotopes, such as 124I, 74As, or 84Rb. These isotopes have half-lives comparable to the duration of radiation therapy (from a few days to a few weeks). The size of the proposed PeTrack marker will be 0.5-0.8 mm, which is approximately one-half the size of markers currently employed in other techniques. By detecting annihilation gammas using position-sensitive detectors, multiple positron emission markers can be tracked in real time. A multimarker localization algorithm was developed using an Expectation-Maximization clustering technique. A Monte Carlo simulation model was developed for the PeTrack system. Patient dose, detector sensitivity, and scatter fraction were evaluated. Depending on the isotope, the lifetime dose from a 3.7 MBq PeTrack marker was determined to be 0.7-5.0 Gy at 10 mm from the marker. At the center of the field of view (FOV), the sensitivity of the PeTrack system was 240-320 counts/s per 1 MBq marker activity within a 30 cm thick patient. The sensitivity was reduced by 45% when the marker was near the edge of the FOV. The scatter fraction ranged from 12% (124I, 74As) to

  4. Real-time tumor tracking using implanted positron emission markers: Concept and simulation study

    SciTech Connect

    Xu Tong; Wong, Jerry T.; Shikhaliev, Polad M.; Ducote, Justin L.; Al-Ghazi, Muthana S.; Molloi, Sabee

    2006-07-15

    The delivery accu