Science.gov

Sample records for realtime 3d tracking

  1. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    PubMed

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region-based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level-set-based pose estimation but completely avoid the typically required explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor, which helps us resolve the tracking ambiguities inherent to region-based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variation regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm, and we show that, similar to tracking, the integration of per-voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.
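
    The per-voxel posteriors mentioned above follow from Bayes' rule applied to foreground/background appearance models. Below is a minimal, hedged sketch of that per-pixel posterior computation for a region-based tracker; the histogram-based appearance model and all names are illustrative assumptions, not the authors' implementation.

```python
# Sketch: per-pixel foreground posteriors for region-based tracking,
# assuming pre-quantised colours and per-bin foreground/background histograms.
import numpy as np

def pixel_posteriors(image_bins, fg_hist, bg_hist, eta_f=0.5, eta_b=0.5):
    """Return P(foreground | colour) for every pixel.

    image_bins : HxW array of histogram bin indices (pre-quantised colours)
    fg_hist    : per-bin likelihood P(colour | foreground)
    bg_hist    : per-bin likelihood P(colour | background)
    eta_f/b    : prior area fractions of foreground and background
    """
    pf = fg_hist[image_bins]            # likelihood under the foreground model
    pb = bg_hist[image_bins]            # likelihood under the background model
    denom = eta_f * pf + eta_b * pb + 1e-12
    return eta_f * pf / denom           # Bayes' rule -> per-pixel posterior

# Toy example with a 2-bin colour model
img = np.array([[0, 1], [1, 1]])
print(pixel_posteriors(img, np.array([0.9, 0.1]), np.array([0.2, 0.8])))
```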

  2. Real-time 3D target tracking in MRI guided focused ultrasound ablations in moving tissues.

    PubMed

    Ries, Mario; de Senneville, Baudouin Denis; Roujol, Sébastien; Berber, Yasmina; Quesson, Bruno; Moonen, Chrit

    2010-12-01

    Magnetic resonance imaging-guided high intensity focused ultrasound is a promising method for the noninvasive ablation of pathological tissue in abdominal organs such as the liver and kidney. Due to the high perfusion rates of these organs, sustained sonications are required to achieve a sufficiently high temperature elevation to induce necrosis. However, the constant displacement of the target due to the respiratory cycle renders continuous ablations challenging, since dynamic repositioning of the focal point is required. This study demonstrates subsecond 3D high intensity focused ultrasound beam steering under magnetic resonance guidance for the real-time compensation of respiratory motion. The target is observed in 3D space by coupling rapid 2D magnetic resonance imaging with prospective slice tracking based on pencil-beam navigator echoes. The magnetic resonance data are processed in real-time by a computationally efficient reconstruction pipeline, which provides the position, the temperature and the thermal dose on-the-fly, and which feeds corrections into the high intensity focused ultrasound ablator. The effect of the residual update latency is reduced by using a 3D Kalman predictor for trajectory anticipation. The suggested method is characterized with phantom experiments and verified in vivo on porcine kidney. The results show that for update frequencies of more than 10 Hz and latencies of less than 114 msec, temperature elevations can be achieved which are comparable to static experiments. Copyright © 2010 Wiley-Liss, Inc.
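
    The 3D Kalman predictor used to compensate the residual update latency can be illustrated with a constant-velocity filter that extrapolates the filtered target position ahead by the known latency. The sketch below is a simplification under assumed model order and noise settings; function names are not from the paper.

```python
# Minimal sketch of latency compensation with a constant-velocity Kalman predictor.
import numpy as np

def make_cv_model(dt, q=1.0, r=1.0, dims=3):
    """Constant-velocity model for `dims` spatial dimensions, state [x, vx, y, vy, z, vz]."""
    F1 = np.array([[1.0, dt], [0.0, 1.0]])
    F = np.kron(np.eye(dims), F1)                       # state transition
    H = np.kron(np.eye(dims), np.array([[1.0, 0.0]]))   # observe positions only
    Q = q * np.eye(2 * dims)                            # process noise (simplified)
    R = r * np.eye(dims)                                # measurement noise
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new target position measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def predict_ahead(x, dt_latency, dims=3):
    """Extrapolate the filtered state by the known update latency."""
    F1 = np.array([[1.0, dt_latency], [0.0, 1.0]])
    return np.kron(np.eye(dims), F1) @ x
```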

  3. Automatic alignment of standard views in 3D echocardiograms using real-time tracking

    NASA Astrophysics Data System (ADS)

    Orderud, Fredrik; Torp, Hans; Rabben, Stein Inge

    2009-02-01

    In this paper, we present an automatic approach for alignment of standard apical and short-axis slices, and correcting them for out-of-plane motion in 3D echocardiography. This is enabled by using real-time Kalman tracking to perform automatic left ventricle segmentation using a coupled deformable model, consisting of a left ventricle model, as well as structures for the right ventricle and left ventricle outflow tract. Landmark points from the segmented model are then used to generate standard apical and short-axis slices. The slices are automatically updated after tracking in each frame to correct for out-of-plane motion caused by longitudinal shortening of the left ventricle. Results from a dataset of 35 recordings demonstrate the potential for automating apical slice initialization and dynamic short-axis slices. Apical 4-chamber, 2-chamber and long-axis slices are generated based on an assumption of fixed angle between the slices, and short-axis slices are generated so that they follow the same myocardial tissue over the entire cardiac cycle. The error compared to manual annotation was 8.4 +/- 3.5 mm for apex, 3.6 +/- 1.8 mm for mitral valve and 8.4 +/- 7.4 for apical 4-chamber view. The high computational efficiency and automatic behavior of the method enables it to operate in real-time, potentially during image acquisition.

  4. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    NASA Astrophysics Data System (ADS)

    Kanphet, J.; Suriyapee, S.; Dumrongkijudom, N.; Sanghangthum, T.; Kumkhwao, J.; Wisetrintong, M.

    2016-03-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. The interfraction motion was more pronounced than the intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, and has less impact owing to the stability of organ movement under DIBH. The systematic error is likewise about half of the random error, because the high accuracy of a modern linac can reduce systematic uncertainty effectively, while the random error is uncontrollable.
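
    The record does not spell out its formulas, but a common convention (van Herk-style) takes the population systematic error as the standard deviation of per-patient mean errors and the random error as the root mean square of per-patient standard deviations. A hedged sketch for one direction, under that assumption:

```python
# Sketch of one common convention for population setup errors (van Herk-style).
import numpy as np

def setup_error_summary(errors_by_patient):
    """errors_by_patient: list of 1-D arrays, one array of setup errors (mm)
    per patient, for a single direction (e.g. vertical)."""
    means = np.array([e.mean() for e in errors_by_patient])
    sds = np.array([e.std(ddof=1) for e in errors_by_patient])
    group_mean = means.mean()                  # overall mean error M
    systematic = means.std(ddof=1)             # Sigma: SD of per-patient means
    random_err = np.sqrt(np.mean(sds ** 2))    # sigma: RMS of per-patient SDs
    return group_mean, systematic, random_err

# toy example: 3 patients, a few fractions each (made-up numbers)
pats = [np.array([0.2, -0.1, 0.4]), np.array([-0.3, 0.1, 0.0]), np.array([0.5, 0.6, 0.3])]
print(setup_error_summary(pats))
```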

  5. Real-time 2D/3D registration for tumor motion tracking during radiotherapy

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Bloch, C.; Spoerk, J.; Pawiro, S. A.; Weber, C.; Figl, M.; Stock, M.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2012-02-01

    Organ motion during radiotherapy is one of the causes of uncertainty in dose delivery. To cope with this, the planning target volume (PTV) has to be larger than needed to guarantee full tumor irradiation. Existing methods deal with the problem by performing tumor tracking using implanted fiducial markers or magnetic sensors. In this work, we investigate the feasibility of using x-ray based real-time 2D/3D registration for non-invasive tumor motion tracking during radiotherapy. Our method uses purely intensity-based techniques, thus avoiding markers or fiducials. X-rays are acquired during treatment at a rate of 5.4 Hz. We iteratively compare each x-ray with a set of digitally reconstructed radiographs (DRR) generated from the planning volume dataset, finding the optimal match between the x-ray and one of the DRRs. The DRRs are generated using a ray-casting algorithm, implemented with general purpose computation on graphics hardware (GPGPU) programming techniques using CUDA for greater performance. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the PTV. The phantom motion is measured with an rms error of 2.1 mm and mean registration time is 220 ms. For the patient data sets, a sinusoidal movement that clearly correlates to the breathing cycle is seen. Mean registration time is always under 105 ms, which is well suited for our purposes. These results demonstrate that real-time organ motion monitoring using image-based markerless registration is feasible.
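
    The core loop of intensity-based 2D/3D registration can be outlined as: render a DRR for a candidate pose, score it against the live x-ray ROI, and optimise the pose. The sketch below uses normalised cross-correlation and a generic optimiser; `render_drr` is a hypothetical placeholder for the GPU ray-casting step, not a real API.

```python
# Illustrative outline of intensity-based 2D/3D registration.
import numpy as np
from scipy.optimize import minimize

def ncc(a, b):
    """Normalised cross-correlation between two same-sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.mean(a * b)

def register(xray_roi, volume, pose0, render_drr):
    """Find the rigid pose whose DRR best matches the x-ray ROI.

    render_drr(volume, pose) is a hypothetical ray-casting renderer
    returning an image with the same shape as xray_roi.
    """
    def cost(pose):
        drr = render_drr(volume, pose)
        return -ncc(xray_roi, drr)           # maximise similarity
    res = minimize(cost, pose0, method="Powell")
    return res.x
```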

  6. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real-time using 3D digital image correlation, with two examples of medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integral pixel search. Experiments were carried out and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system is aimed at orthognathic surgery navigation in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated great potential for tracking muscle and skin movements.

  7. A photogrammetric approach for real-time 3D localization and tracking of pedestrians in monocular infrared imagery

    NASA Astrophysics Data System (ADS)

    Kundegorski, Mikolaj E.; Breckon, Toby P.

    2014-10-01

    Target tracking within conventional video imagery poses a significant challenge that is increasingly being addressed via complex algorithmic solutions. The complexity of this problem can be fundamentally attributed to the ambiguity associated with the actual 3D scene position of a given tracked object in relation to its observed position in 2D image space. We propose an approach that challenges the current trend in complex tracking solutions by addressing this fundamental ambiguity head-on. In contrast to prior work in the field, we leverage the key advantages of thermal-band infrared (IR) imagery for pedestrian localization to show that the robust localization and foreground target separation afforded by such imagery facilitates accurate 3D position estimation to within the error bounds of conventional Global Positioning System (GPS) positioning. This work investigates the accuracy of classical photogrammetry, within the context of current target detection and classification techniques, as a means of recovering the true 3D position of pedestrian targets within the scene. Based on photogrammetric estimation of target position, we then illustrate the efficiency of regular Kalman filter based tracking operating on actual 3D pedestrian scene trajectories. We present both a statistical and experimental analysis of the associated errors of this approach, in addition to real-time 3D pedestrian tracking using monocular infrared (IR) imagery from a thermal-band camera.
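
    The photogrammetric range recovery rests on similar triangles: with an assumed pedestrian height and a calibrated focal length, the bounding-box height in pixels yields the distance, which can then be back-projected to a 3D camera-frame position. A minimal sketch under those assumptions (parameter names are illustrative, not from the paper):

```python
# Simple pinhole-camera range estimate from a detected pedestrian bounding box.
import numpy as np

def pedestrian_range(bbox_height_px, focal_length_px, assumed_height_m=1.7):
    """Distance along the optical axis (m) from similar triangles."""
    return assumed_height_m * focal_length_px / bbox_height_px

def backproject(u, v, depth, fx, fy, cx, cy):
    """Recover a 3D camera-frame point from pixel (u, v) and estimated depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

print(pedestrian_range(bbox_height_px=120, focal_length_px=800))  # ~11.3 m
```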

  8. Passive Markers for Tracking Surgical Instruments in Real-Time 3-D Ultrasound Imaging

    PubMed Central

    Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E.

    2013-01-01

    A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts. PMID:22042148

  9. Passive markers for tracking surgical instruments in real-time 3-D ultrasound imaging.

    PubMed

    Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E

    2012-03-01

    A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts.

  10. Real-time 3D motion tracking for small animal brain PET

    NASA Astrophysics Data System (ADS)

    Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.

    2008-05-01

    High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.

  11. Real-time tracking of the left ventricle in 3D echocardiography using a state estimation approach.

    PubMed

    Orderud, Fredrik; Hansgård, Jøger; Rabben, Stein I

    2007-01-01

    In this paper we present a framework for real-time tracking of deformable contours in volumetric datasets. The framework supports composite deformation models, controlled by parameters for contour shape in addition to global pose. Tracking is performed in a sequential state estimation fashion, using an extended Kalman filter, with measurement processing in information space to effectively predict and update contour deformations in real-time. A deformable B-spline surface coupled with a global pose transform is used to model shape changes of the left ventricle of the heart. Successful tracking of global motion and local shape changes without user intervention is demonstrated on a dataset consisting of 21 3D echocardiography recordings. Real-time tracking using the proposed approach requires a modest CPU load of 13% on a modern computer. The segmented volumes compare to a semi-automatic segmentation tool with 95% limits of agreement in the interval 4.1 +/- 24.6 ml (r = 0.92).
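
    Processing measurements in information space means accumulating the contributions of many edge detections as H^T R^-1 H and H^T R^-1 v before performing a single state update. The sketch below shows that information-form update for a generic Kalman filter; it is a simplification, not the paper's contour-tracking framework.

```python
# Sketch of assimilating many independent edge measurements in information form.
import numpy as np

def assimilate_measurements(x_pred, P_pred, H_rows, v_rows, r_var):
    """x_pred, P_pred : predicted state and covariance
    H_rows : (m, n) measurement Jacobians, one row per edge detection
    v_rows : (m,) innovations (measured minus predicted edge position)
    r_var  : scalar measurement noise variance"""
    H = np.asarray(H_rows)
    v = np.asarray(v_rows)
    info_matrix = H.T @ H / r_var               # accumulated H^T R^-1 H
    info_vector = H.T @ v / r_var               # accumulated H^T R^-1 v
    P_upd = np.linalg.inv(np.linalg.inv(P_pred) + info_matrix)
    x_upd = x_pred + P_upd @ info_vector        # equivalent to the Kalman update
    return x_upd, P_upd
```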

  12. Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Unmanned Aerial System Metrology

    DTIC Science & Technology

    2013-10-18

    Excerpt (AFIT dissertation, report no. AFIT-ENY-DS-13-D-): "Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Small Unmanned Aircraft System Metrology." Cited reference fragment: Zhang, J., Y. Wang, J. Chen, and K. Xue, "A framework of surveillance system using a PTZ camera," Computer Science and Information Technology, 2012.

  13. Spatial and temporal performance of 3D optical surface imaging for real-time head position tracking.

    PubMed

    Wiersma, Rodney D; Tomarken, S L; Grelewicz, Zachary; Belcher, Andrew H; Kang, Hyejoo

    2013-11-01

    The spatial and temporal tracking performance of a commercially available 3D optical surface imaging system is evaluated for its potential use in frameless stereotactic radiosurgery head tracking applications. Both 3D surface and infrared (IR) marker tracking were performed simultaneously on a head phantom mounted on an xyz motion stage and on four human subjects. To allow spatial and temporal comparison on human subjects, three points were simultaneously monitored, including the upper facial region (3D surface), a dental plate (IR markers), and the upper forehead (IR markers). For both static and dynamic phantom studies, the 3D surface tracker was found to have a root mean squared error (RMSE) of approximately 0.30 mm for region-of-interest (ROI) surface sizes greater than 1000 vertex points. Although the processing period (1/fps) of the 3D surface system was found to increase linearly as a function of the number of ROI vertex points, the tracking accuracy was found to be independent of ROI size provided that the ROI was sufficiently large and contained features for registration. For human subjects, the RMSE between the 3D surface tracking and IR marker tracking modalities was 0.22 mm left-right (x-axis), 0.44 mm superior-inferior (y-axis), 0.27 mm anterior-posterior (z-axis), 0.29° pitch (around x-axis), 0.18° roll (around y-axis), and 0.15° yaw (around z-axis). 3D surface imaging has the potential to provide submillimeter-level head motion tracking, provided that a highly accurate camera-to-LINAC frame-of-reference calibration can be performed and that the reference ROI is of sufficient size and contains suitable surface features for registration.

  14. Accuracy of real-time single- and multi-beat 3-d speckle tracking echocardiography in vitro.

    PubMed

    Hjertaas, Johannes Just; Fosså, Henrik; Dybdahl, Grete Lunestad; Grüner, Renate; Lunde, Per; Matre, Knut

    2013-06-01

    With little data published on the accuracy of cardiac 3-D strain measurements, we investigated the agreement between 3-D echocardiography and sonomicrometry in an in vitro model with a polyvinyl alcohol phantom. A cardiac scanner with a 3-D probe was used to acquire recordings at 15 different stroke volumes at a heart rate of 60 beats/min, and eight different stroke volumes at a heart rate of 120 beats/min. Sonomicrometry was used as a reference, monitoring longitudinal, circumferential and radial lengths. Both single- and multi-beat acquisitions were recorded. Strain values were compared with sonomicrometer strain using linear correlation coefficients and Bland-Altman analysis. Multi-beat acquisition showed good agreement, whereas real-time images showed less agreement. The best correlation was obtained for a heart rate of 60 beats/min at a volume rate of 36.6 volumes/s.
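
    Bland-Altman analysis summarises agreement as the mean difference (bias) and the 95% limits of agreement. A minimal sketch with made-up strain values, not data from the study:

```python
# Minimal Bland-Altman summary: bias and 95% limits of agreement.
import numpy as np

def bland_altman(method_a, method_b):
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

echo = [-18.2, -15.1, -20.4, -12.9]   # strain (%), method under test (illustrative)
sono = [-17.5, -15.8, -19.6, -13.4]   # strain (%), sonomicrometry reference (illustrative)
print(bland_altman(echo, sono))
```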

  15. Performance and suitability assessment of a real-time 3D electromagnetic needle tracking system for interstitial brachytherapy

    PubMed Central

    Boutaleb, Samir; Fillion, Olivier; Bonillas, Antonio; Hautvast, Gilion; Binnekamp, Dirk; Beaulieu, Luc

    2015-01-01

    Purpose Accurate insertion and overall needle positioning are key requirements for effective brachytherapy treatments. This work aims at demonstrating the accuracy performance and the suitability of the Aurora® V1 Planar Field Generator (PFG) electromagnetic tracking system (EMTS) for real-time treatment assistance in interstitial brachytherapy procedures. Material and methods The system's performance was characterized in two distinct studies. First, in an environment free of EM disturbance, the boundaries of the detection volume of the EMTS were characterized and a tracking error analysis was performed. Second, a distortion analysis was conducted as a means of assessing the tracking accuracy performance of the system in the presence of potential EM disturbance generated by the proximity of standard brachytherapy components. Results The tracking accuracy experiments showed that positional errors were typically 2 ± 1 mm in a zone restricted to the first 30 cm of the detection volume. However, at the edges of the detection volume, sensor position errors of up to 16 mm were recorded. On the other hand, orientation errors remained low at ± 2° for most of the measurements. The EM distortion analysis showed that the presence of typical brachytherapy components in the vicinity of the EMTS had little influence on tracking accuracy. Position errors of less than 1 mm were recorded with all components except a metallic arm support, which induced a mean absolute error of approximately 1.4 mm when located 10 cm away from the needle sensor. Conclusions The Aurora® V1 PFG EMTS possesses great potential for real-time treatment assistance in general interstitial brachytherapy. In view of our experimental results, we however recommend that the needle axis remain as parallel as possible to the generator surface during treatment and that the tracking zone be restricted to the first 30 cm from the generator surface. PMID:26622231

  16. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    SciTech Connect

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  17. Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry

    2016-03-01

    In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. However, these techniques often complicate interventions by requiring additional steps to manually define and initialize virtual models. Furthermore, overlaying virtual elements onto real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work in augmented virtuality techniques, and considerable improvement over standard-of-care ultrasound guidance.

  18. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    PubMed

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach that allows tracking of deformable structures in 3D ultrasound sequences. Our method consists of obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. The results demonstrate that this novel approach has the advantage of providing correct motion estimation with respect to different ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges.

  19. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions.

    PubMed

    Wiersma, R D; Riaz, N; Dieterich, Sonja; Suh, Yelin; Xing, L

    2009-01-07

    The integration of onboard kV imaging together with an MV electronic portal imaging device (EPID) on linear accelerators (LINAC) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated with simultaneous imaging by both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data of five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have ≤ 1 mm root mean squared error (RMSE) in all three spatial directions. In addition to increasing the robustness
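
    One way to realise such a moving correlation model is to fit, over a sliding window of frames in which both imagers saw the marker, a linear map from the visible imager's marker coordinates to the occluded imager's coordinates, and to evaluate that map during the interruption. The sketch below makes assumptions about window length and model order that are not stated in the abstract.

```python
# Sketch of a moving-window correlation model between two imagers.
import numpy as np

class MovingCorrelationModel:
    def __init__(self, window=50):
        self.window = window
        self.visible_hist, self.missing_hist = [], []

    def update(self, visible_uv, missing_uv):
        """Store a frame where both imagers saw the marker."""
        self.visible_hist.append(visible_uv)
        self.missing_hist.append(missing_uv)
        self.visible_hist = self.visible_hist[-self.window:]
        self.missing_hist = self.missing_hist[-self.window:]

    def predict(self, visible_uv):
        """Estimate the occluded imager's marker position by least squares."""
        X = np.hstack([np.asarray(self.visible_hist),
                       np.ones((len(self.visible_hist), 1))])   # affine term
        Y = np.asarray(self.missing_hist)
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return np.append(np.asarray(visible_uv), 1.0) @ coef
```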

  20. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions

    NASA Astrophysics Data System (ADS)

    Wiersma, R. D.; Riaz, N.; Dieterich, Sonja; Suh, Yelin; Xing, L.

    2009-01-01

    The integration of onboard kV imaging together with an MV electronic portal imaging device (EPID) on linear accelerators (LINAC) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated with simultaneous imaging by both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data of five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have ≤ 1 mm root mean squared error (RMSE) in all three spatial directions. In addition to increasing the robustness of

  1. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could accurately be extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.

  2. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    PubMed

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general
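
    The geometric core of MV-kV triangulation is intersecting the two back-projected rays; since calibrated rays rarely intersect exactly, the marker position is commonly taken as the midpoint of their common perpendicular. A hedged sketch of that step (source positions and ray directions would come from the system calibration described above):

```python
# Midpoint-of-common-perpendicular triangulation of two imaging rays.
import numpy as np

def triangulate(p1, d1, p2, d2):
    """p1, p2: ray origins (source positions); d1, d2: ray directions."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b + 1e-12
    s = (b * e - c * d) / denom          # parameter along ray 1
    t = (a * e - b * d) / denom          # parameter along ray 2
    closest1, closest2 = p1 + s * d1, p2 + t * d2
    return 0.5 * (closest1 + closest2)   # midpoint of the common perpendicular

# orthogonal toy geometry: rays along -z and -x that meet at the origin
print(triangulate(np.array([0., 0., 1000.]), np.array([0., 0., -1.]),
                  np.array([1000., 0., 0.]), np.array([-1., 0., 0.])))
```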

  3. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV kV imaging

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.

    2008-12-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real-time

  4. Making Tracks on Mars 3-D

    NASA Image and Video Library

    2004-08-12

    NASA's Mars Exploration Rover Spirit has been making tracks on Mars for seven months now, well beyond its original 90-day mission, and has reached the Columbia Hills. 3D glasses are necessary to view this image.

  5. Three-Dimensional Rotation, Twist and Torsion Analyses Using Real-Time 3D Speckle Tracking Imaging: Feasibility, Reproducibility, and Normal Ranges in Pediatric Population

    PubMed Central

    Han, Wei; Gao, Jun; He, Lin; Yang, Yali; Yin, Ping; Xie, Mingxing; Ge, Shuping

    2016-01-01

    Background and Objective The specific aim of this study was to evaluate the feasibility, reproducibility and maturational changes of LV rotation, twist and torsion variables by real-time 3D speckle-tracking echocardiography (RT3DSTE) in children. Methods A prospective study was conducted in 347 consecutive healthy subjects (181 males/156 females, mean age 7.12 ± 5.3 years, range from birth to 18 years) using RT 3D echocardiography (3DE). The LV rotation, twist and torsion measurements were made off-line using TomTec software. Manual landmark selection and endocardial border editing were performed in 3 planes (apical "2"-, "4"-, and "3"-chamber views) and semi-automated tracking yielded LV rotation, twist and torsion measurements. LV rotation, twist and torsion analysis by RT3DSTE was feasible in 307 out of 347 subjects (88.5%). Results There was no correlation between rotation or twist and age, height, weight, BSA or heart rate, respectively. However, there was a statistically significant but very modest correlation between LV torsion and age (R2 = 0.036, P < 0.001). The normal ranges were defined for rotation and twist in this cohort, and for torsion for each age group. The intra-observer and inter-observer variabilities for apical and basal rotation, twist and torsion ranged from 7.3% ± 3.8% to 12.3% ± 8.8% and from 8.8% ± 4.6% to 15.7% ± 10.1%, respectively. Conclusions We conclude that analysis of LV rotation, twist and torsion by this new RT3D STE is feasible and reproducible in the pediatric population. There is no maturational change in rotation and twist, but torsion decreases with age in this cohort. Further refinement is warranted to validate the utility of this new methodology in more sensitive and quantitative evaluation of congenital and acquired heart diseases in children. PMID:27427968

  6. Automatic respiration tracking for radiotherapy using optical 3D camera

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real-time, with no patient contact or radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within any treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of marker or positioning device, etc. Presently, a viable solution is lacking for in-vivo tracking of target motion and anatomy changes during the beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas in the 3D surface to track surface motion. The configuration of the marks or areas may change with time, which makes it inconsistent in quantifying and interpreting the respiration patterns. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality reduction technique, PCA (principal component analysis). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigen-space spanned by eigen-vectors). New
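
    Conceptually, the proposed analysis flattens each 3D surface frame into a vector, learns a few principal motion modes from a training sequence, and reads a respiration signal off the leading mode coefficient. A toy sketch with synthetic data (not the authors' pipeline):

```python
# PCA of flattened surface frames: motion modes and a respiration signal.
import numpy as np

def fit_motion_modes(frames, n_modes=3):
    """frames: (T, N) array, one flattened surface per time sample."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centred data gives the principal motion patterns (eigen-surfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

def respiration_signal(frames, mean, modes):
    """Coefficient of each frame along the dominant motion mode."""
    return (frames - mean) @ modes[0]

# synthetic breathing-like data: 200 frames of a 300-point surface
T, N = 200, 300
t = np.linspace(0, 20, T)
frames = np.outer(np.sin(2 * np.pi * 0.25 * t), np.random.rand(N))
mean, modes = fit_motion_modes(frames)
signal = respiration_signal(frames, mean, modes)
```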

  7. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
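
    The point-to-line-segment distance underlying the proposed similarity metric is standard geometry: project the point onto the segment's supporting line, clamp the projection to the segment, and take the Euclidean distance. A small sketch (generic geometry, not the paper's full evaluation scheme):

```python
# Point-to-line-segment distance in 2-D or 3-D.
import numpy as np

def point_to_segment_distance(p, a, b):
    """Distance from point p to the segment with endpoints a and b."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    denom = float(ab @ ab)
    if denom == 0.0:                                  # degenerate segment
        return float(np.linalg.norm(p - a))
    t = np.clip((p - a) @ ab / denom, 0.0, 1.0)       # clamp projection to the segment
    return float(np.linalg.norm(p - (a + t * ab)))

print(point_to_segment_distance([2, 1], [0, 0], [4, 0]))  # 1.0
```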

  8. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2016-07-12

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  9. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  10. Light driven micro-robotics with holographic 3D tracking

    NASA Astrophysics Data System (ADS)

    Glückstad, Jesper

    2016-04-01

    We recently pioneered the concept of light-driven micro-robotics, including new and disruptive 3D-printed micro-tools coined Wave-guided Optical Waveguides that can be optically trapped in real time and "remote-controlled" in a volume with six degrees of freedom. Exploring the full potential of this new drone-like 3D light robotics approach in challenging microscopic geometries requires versatile and real-time reconfigurable light coupling that can dynamically track a plurality of "light robots" in 3D to ensure continuous optimal light coupling on the fly. Our latest developments in this new and exciting area will be reviewed in this invited paper.

  11. Tracking earthquake source evolution in 3-D

    NASA Astrophysics Data System (ADS)

    Kennett, B. L. N.; Gorbatov, A.; Spiliopoulos, S.

    2014-08-01

    Starting from the hypocentre, the point of initiation of seismic energy, we seek to estimate the subsequent trajectory of the points of emission of high-frequency energy in 3-D, which we term the `evocentres'. We track these evocentres as a function of time by energy stacking for putative points on a 3-D grid around the hypocentre that is expanded as time progresses, selecting the location of maximum energy release as a function of time. The spatial resolution in the neighbourhood of a target point can be simply estimated by spatial mapping using the properties of isochrons from the stations. The mapping of a seismogram segment to space is by inverse slowness, and thus more distant stations have a broader spatial contribution. As in hypocentral estimation, the inclusion of a wide azimuthal distribution of stations significantly enhances 3-D capability. We illustrate this approach to tracking source evolution in 3-D by considering two major earthquakes, the 2007 Mw 8.1 Solomon Islands event that ruptured across a plate boundary and the 2013 Mw 8.3 event 610 km beneath the Sea of Okhotsk. In each case we are able to provide estimates of the evolution of high-frequency energy that tally well with alternative schemes, but also to provide information on the 3-D characteristics that is not available from backprojection from distant networks. We are able to demonstrate that the major characteristics of event rupture can be captured using just a few azimuthally distributed stations, which opens the opportunity for the approach to be used in a rapid mode immediately after a major event to provide guidance for, for example, tsunami warning for megathrust events.
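
    The energy-stacking idea can be sketched as: for each candidate emission time and grid point, sum the seismogram energy at the predicted arrival times over all stations, and keep the maximising point as the evocentre. The toy code below assumes a user-supplied travel-time function (a hypothetical constant-velocity model is given) and ignores the adaptive grid expansion described in the abstract.

```python
# Toy energy stacking over a fixed 3-D grid of candidate evocentres.
import numpy as np

def track_evocentres(grid_points, stations, energy, t0, dt, travel_time, step=10):
    """grid_points: (G, 3) candidate points; stations: (S, 3) positions;
    energy: (S, T) squared seismogram amplitudes sampled every dt from t0."""
    n_t = energy.shape[1]
    evocentres = []
    for emit_idx in range(0, n_t, step):            # candidate emission times
        t_emit = t0 + emit_idx * dt
        stack = np.zeros(len(grid_points))
        for g, gp in enumerate(grid_points):
            for s, sta in enumerate(stations):
                idx = int(round((t_emit + travel_time(gp, sta) - t0) / dt))
                if 0 <= idx < n_t:
                    stack[g] += energy[s, idx]      # stack energy at predicted arrival
        evocentres.append(grid_points[np.argmax(stack)])
    return np.array(evocentres)

# hypothetical constant-velocity travel time (8 km/s), coordinates in km
travel_time = lambda src, sta: np.linalg.norm(np.asarray(src) - np.asarray(sta)) / 8.0
```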

  12. 3D hand tracking using Kalman filter in depth space

    NASA Astrophysics Data System (ADS)

    Park, Sangheon; Yu, Sunjin; Kim, Joongrock; Kim, Sungjin; Lee, Sangyoun

    2012-12-01

    Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using the Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with that of the visual-based method. Experimental results show that the proposed method outperforms the visual-based method.

  13. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

    As its clinical applications grow, there is rapid technical development of 3-D ultrasound imaging. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we propose a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and reduced cost. We designed a sliding track with a linear position sensor attached, which transmitted positional data via a wireless communication module based on Bluetooth, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired when moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs.

  14. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  15. 3D Tracking via Shoe Sensing

    PubMed Central

    Li, Fangmin; Liu, Guo; Liu, Jian; Chen, Xiaochuang; Ma, Xiaolin

    2016-01-01

    Most location-based services are based on a global positioning system (GPS), which only works well in outdoor environments. Compared to outdoor environments, indoor localization has created more buzz in recent years as people spend most of their time indoors, working in offices and shopping at malls, etc. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful due to the mobile devices' random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing stairs, a state classification is designed to distinguish the walking status, including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the vertical distance accumulation error. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy. PMID:27801839
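
    The short-time energy step can be illustrated as windowed sums of squared acceleration magnitude with a threshold to flag candidate steps; the window length and threshold below are arbitrary choices, not the paper's parameters.

```python
# Short-time energy of accelerometer magnitude with a crude step detector.
import numpy as np

def short_time_energy(accel_mag, win=25):
    """accel_mag: 1-D array of acceleration magnitudes; returns per-window energy."""
    n = len(accel_mag) // win
    segments = accel_mag[:n * win].reshape(n, win)
    return np.sum(segments ** 2, axis=1)

def detect_steps(accel_mag, win=25, thresh=None):
    energy = short_time_energy(accel_mag, win)
    if thresh is None:
        thresh = energy.mean() + energy.std()     # crude adaptive threshold
    return np.where(energy > thresh)[0]           # indices of high-energy windows
```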

  16. Lagrangian 3D tracking of fluorescent microscopic objects in motion.

    PubMed

    Darnige, T; Figueroa-Morales, N; Bohec, P; Lindner, A; Clément, E

    2017-05-01

    We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing that determines the displacement of an x-y mechanical stage needed to keep the chosen object at a fixed position in the observation frame. The z displacement is based on refocusing of the fluorescent object, which determines the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of hertz. This device is particularly well adapted to obtaining trajectories of motile micro-organisms in microfluidic devices with or without flow.
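
    The x-y part of such a tracking loop amounts to measuring how far the object's centroid has drifted from the image centre and commanding a proportional stage correction. The sketch below is purely conceptual; the segmentation, gain and stage interface are assumptions, not the instrument's actual control code.

```python
# Conceptual proportional re-centering step for a stage-based tracking loop.
import numpy as np

def centering_correction(image, pixel_size_um, gain=0.8):
    """Return the (dx, dy) stage move (um) that re-centres the bright object."""
    ys, xs = np.nonzero(image > image.mean() + 2 * image.std())  # crude segmentation
    if len(xs) == 0:
        return 0.0, 0.0                            # nothing detected; hold position
    cx, cy = xs.mean(), ys.mean()                  # object centroid (pixels)
    h, w = image.shape
    err_x, err_y = cx - w / 2.0, cy - h / 2.0      # offset from frame centre (pixels)
    return -gain * err_x * pixel_size_um, -gain * err_y * pixel_size_um
```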

  17. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen can bring a realistic viewing experience to viewers, as if they were viewing a real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  18. Tracked 3D ultrasound in radio-frequency liver ablation

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Fichtinger, Gabor; Taylor, Russell H.; Choti, Michael A.

    2003-05-01

    Recent studies have shown that radio frequency (RF) ablation is a simple, safe and potentially effective treatment for selected patients with liver metastases. Despite all recent therapeutic advancements, however, intra-procedural target localization and precise, consistent placement of the tissue ablator device are still unsolved problems. Various imaging modalities, including ultrasound (US) and computed tomography (CT), have been tried as guidance modalities. Transcutaneous US imaging, due to its real-time nature, may be beneficial, but unfortunately fails to adequately visualize the tumor in many cases. Intraoperative or laparoscopic US, on the other hand, provides improved visualization and target imaging. This paper describes a system for computer-assisted RF ablation of liver tumors, combining navigational tracking of a conventional imaging ultrasound probe to produce 3D ultrasound imaging with a tracked RF ablation device supported by a passive mechanical arm and spatially registered to the ultrasound volume.

  19. Speeding up 3D speckle tracking using PatchMatch

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; O'Donnell, Matthew

    2016-03-01

    Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame-rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) Inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand. 2) For typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
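
    A highly simplified PatchMatch-style search is sketched below for a single (axial) offset component, showing the two ingredients named above: propagation of offsets from already-matched neighbours and a shrinking random search. Real 3-D speckle tracking would estimate full 3-D offsets and typically use normalised correlation rather than SSD.

```python
# Simplified 1-D-offset PatchMatch-style block matching between two 3-D volumes.
import numpy as np

def ssd(a, b):
    return float(np.sum((a - b) ** 2))

def patchmatch_z(vol0, vol1, kernel=5, iters=3, max_off=8, rng=None):
    """Estimate an axial (z) offset for each kernel-sized block of vol0."""
    rng = rng or np.random.default_rng(0)
    nz = vol0.shape[2] // kernel
    offsets = rng.integers(-max_off, max_off + 1, size=nz)      # random init

    def cost(iz, off):
        z0 = iz * kernel
        z1 = np.clip(z0 + off, 0, vol1.shape[2] - kernel)
        return ssd(vol0[:, :, z0:z0 + kernel], vol1[:, :, z1:z1 + kernel])

    for _ in range(iters):
        for iz in range(nz):
            # propagation: try the neighbour's current offset
            for cand in ([offsets[iz - 1]] if iz > 0 else []):
                if cost(iz, cand) < cost(iz, offsets[iz]):
                    offsets[iz] = cand
            # random search with exponentially shrinking radius
            radius = max_off
            while radius >= 1:
                cand = offsets[iz] + rng.integers(-radius, radius + 1)
                if cost(iz, cand) < cost(iz, offsets[iz]):
                    offsets[iz] = cand
                radius //= 2
    return offsets
```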

  20. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor.

  1. A novel class of machine-learning-driven real-time 2D/3D tracking methods: texture model registration (TMR)

    NASA Astrophysics Data System (ADS)

    Steininger, Philipp; Neuner, Markus; Fritscher, Karl; Sedlmayer, Felix; Deutschmann, Heinrich

    2011-03-01

    We present a novel view on 2D/3D image registration by introducing a generic algorithmic framework that is based on supervised machine learning (SML). First and foremost, this class of algorithms, referred to as texture model registration (TMR), aims at making 2D/3D registration applicable to time-critical image-guided medical procedures. TMR methods are two-stage. In a first offline pre-computational stage, a prediction rule is derived from a pre-interventional 3D image and associated geometric constraints. This is achieved by computing digitally reconstructed radiographs, pre-processing them, extracting their texture, and applying SML methods. In a second online stage, the inferred rule is used for predicting the spatial rigid transformation of unseen intra-interventional 2D images. A first simple concrete TMR implementation, referred to as TMR-PCR, is introduced. This approach involves principal component regression (PCR) and simple intermediate pre-processing steps. Using TMR-PCR, first experimental results on five clinical IGRT 3D data sets and synthetic intra-interventional images are presented. The implementation showed an average registration rate of 48 Hz over 40000 registrations, and succeeded in the majority of cases with a mean target registration error smaller than 2 mm. Finally, the potential and characteristics of the proposed methodical framework are discussed.
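
    As a rough illustration of the two-stage TMR-PCR idea, the sketch below fits a principal component regression offline from pre-processed DRR texture vectors to known pose parameters and then predicts the rigid-transformation parameters of an unseen image online. The DRR rendering and pre-processing are assumed to be provided elsewhere, and all names are illustrative, not the authors' code.

      import numpy as np

      def fit_tmr_pcr(textures, poses, n_components=20):
          """Offline stage: textures is an (N, P) array of pre-processed DRR texture
          vectors; poses is (N, 6) rigid-transformation parameters used to render them."""
          mean = textures.mean(axis=0)
          X = textures - mean
          # Principal components via SVD of the centred texture matrix.
          U, S, Vt = np.linalg.svd(X, full_matrices=False)
          comps = Vt[:n_components]                  # (n_components, P)
          scores = X @ comps.T                       # (N, n_components)
          # Least-squares regression from PCA scores (plus bias) to pose parameters.
          A = np.hstack([scores, np.ones((scores.shape[0], 1))])
          W, *_ = np.linalg.lstsq(A, poses, rcond=None)
          return mean, comps, W

      def predict_pose(image_vec, mean, comps, W):
          """Online stage: predict the 6-DoF pose of an unseen 2D image."""
          s = (image_vec - mean) @ comps.T
          return np.hstack([s, 1.0]) @ W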

  2. Real-time catheter tracking for high-dose-rate prostate brachytherapy using an electromagnetic 3D-guidance device: A preliminary performance study

    SciTech Connect

    Zhou Jun; Sebastian, Evelyn; Mangona, Victor; Yan Di

    2013-02-15

    Purpose: In order to increase the accuracy and speed of catheter reconstruction in a high-dose-rate (HDR) prostate implant procedure, an automatic tracking system has been developed using an electromagnetic (EM) device (trakSTAR, Ascension Technology, VT). The performance of the system, including the accuracy and noise level with various tracking parameters and conditions, were investigated. Methods: A direct current (dc) EM transmitter (midrange model) and a sensor with diameter of 1.3 mm (Model 130) were used in the trakSTAR system for tracking catheter position during HDR prostate brachytherapy. Localization accuracy was assessed under both static and dynamic analyses conditions. For the static analysis, a calibration phantom was used to investigate error dependency on operating room (OR) table height (bottom vs midposition vs top), sensor position (distal tip of catheter vs connector end of catheter), direction [left-right (LR) vs anterior-posterior (AP) vs superior-inferior (SI)], sampling frequency (40 vs 80 vs 120 Hz), and interference from OR equipment (present vs absent). The mean and standard deviation of the localization offset in each direction and the corresponding error vectors were calculated. For dynamic analysis, the paths of five straight catheters were tracked to study the effects of directions, sampling frequency, and interference of EM field. Statistical analysis was conducted to compare the results in different configurations. Results: When interference was present in the static analysis, the error vectors were significantly higher at the top table position (3.3 ± 1.3 vs 1.8 ± 0.9 mm at bottom and 1.7 ± 1.0 mm at middle, p < 0.001), at catheter end position (3.1 ± 1.1 vs 1.4 ± 0.7 mm at the tip position, p < 0.001), and at 40 Hz sampling frequency (2.6 ± 1.1 vs 2.4 ± 1.5 mm at 80 Hz and 1.8 ± 1.1 at 160 Hz, p < 0.001). So did the mean offset errors in the LR direction (-1.7 ± 1.4 vs 0.4 ± 0.5 mm in AP and 0

  3. Real-time catheter tracking for high-dose-rate prostate brachytherapy using an electromagnetic 3D-guidance device: a preliminary performance study.

    PubMed

    Zhou, Jun; Sebastian, Evelyn; Mangona, Victor; Yan, Di

    2013-02-01

    In order to increase the accuracy and speed of catheter reconstruction in a high-dose-rate (HDR) prostate implant procedure, an automatic tracking system has been developed using an electromagnetic (EM) device (trakSTAR, Ascension Technology, VT). The performance of the system, including the accuracy and noise level with various tracking parameters and conditions, were investigated. A direct current (dc) EM transmitter (midrange model) and a sensor with diameter of 1.3 mm (Model 130) were used in the trakSTAR system for tracking catheter position during HDR prostate brachytherapy. Localization accuracy was assessed under both static and dynamic analyses conditions. For the static analysis, a calibration phantom was used to investigate error dependency on operating room (OR) table height (bottom vs midposition vs top), sensor position (distal tip of catheter vs connector end of catheter), direction [left-right (LR) vs anterior-posterior (AP) vs superior-inferior (SI)], sampling frequency (40 vs 80 vs 120 Hz), and interference from OR equipment (present vs absent). The mean and standard deviation of the localization offset in each direction and the corresponding error vectors were calculated. For dynamic analysis, the paths of five straight catheters were tracked to study the effects of directions, sampling frequency, and interference of EM field. Statistical analysis was conducted to compare the results in different configurations. When interference was present in the static analysis, the error vectors were significantly higher at the top table position (3.3 ± 1.3 vs 1.8 ± 0.9 mm at bottom and 1.7 ± 1.0 mm at middle, p < 0.001), at catheter end position (3.1 ± 1.1 vs 1.4 ± 0.7 mm at the tip position, p < 0.001), and at 40 Hz sampling frequency (2.6 ± 1.1 vs 2.4 ± 1.5 mm at 80 Hz and 1.8 ± 1.1 at 160 Hz, p < 0.001). So did the mean offset errors in the LR direction (-1.7 ± 1.4 vs 0.4 ± 0.5 mm in AP and 0.8 ± 0.8 mm in SI directions, p < 0.001). The error

  4. Electrically tunable lens speeds up 3D orbital tracking

    PubMed Central

    Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico

    2015-01-01

    3D orbital particle tracking is a versatile and effective microscopy technique that allows following fast-moving fluorescent objects within living cells and reconstructing complex 3D shapes using laser scanning microscopes. We demonstrated notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing commonly used piezoelectric stages with an Electrically Tunable Lens (ETL) that eliminates mechanical movement of objective lenses. This allowed tracking and reconstructing the shape of structures extending 500 microns in the axial direction. Using the ETL, we tracked fluorescently labeled genomic loci within the nucleus of living cells at high speed, with an unprecedented temporal resolution of 8 ms, using a 1.42 NA oil-immersion objective. The presented technology is cost-effective and allows easy upgrading of scanning microscopes for fast 3D orbital tracking. PMID:26114037

  5. Ct3d: tracking microglia motility in 3D using a novel cosegmentation approach.

    PubMed

    Xiao, Hang; Li, Ying; Du, Jiulin; Mosig, Axel

    2011-02-15

    Cell tracking is an important method to quantitatively analyze time-lapse microscopy data. While numerous methods and tools exist for tracking cells in 2D time-lapse images, only a few, very application-specific tracking tools are available for 3D time-lapse images, which are of high relevance in immunoimaging, in particular for studying the motility of microglia in vivo. We introduce a novel algorithm for tracking cells in 3D time-lapse microscopy data, based on computing cosegmentations between component trees representing individual time frames using so-called tree-assignments. For the first time, our method makes it possible to track microglia in three-dimensional confocal time-lapse microscopy images. We also evaluate our method on synthetically generated data, demonstrating that our algorithm is robust even in the presence of different types of inhomogeneous background noise. Our algorithm is implemented in the ct3d package, which is available under http://www.picb.ac.cn/patterns/Software/ct3d; supplementary videos are available from http://www.picb.ac.cn/patterns/Supplements/ct3d.

  6. Real-time 3D video conference on generic hardware

    NASA Astrophysics Data System (ADS)

    Desurmont, X.; Bruyelle, J. L.; Ruiz, D.; Meessen, J.; Macq, B.

    2007-02-01

    Nowadays, video-conferencing is increasingly advantageous because of the economic and ecological cost of transport. Several platforms exist. The goal of the TIFANIS immersive platform is to let users interact as if they were physically together. Unlike previous teleimmersion systems, TIFANIS uses generic hardware to achieve an economically realistic implementation. The basic functions of the system are to capture the scene, transmit it through digital networks to other partners, and then render it according to each partner's viewing characteristics. The image-processing part must run in real time. We analyze the whole system, which can be split into different services: central processing unit (CPU) load, graphical rendering, direct memory access (DMA), and communications through the network. Most of the processing is done by the CPU. It is composed of the 3D reconstruction and the detection and tracking of faces from the video stream. However, the processing needs to be parallelized in several threads that have as few dependencies as possible. In this paper, we present these issues, and the way we deal with them.

  7. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking this health issue into account, understanding how users gaze in 3D virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for Nvidia 3D Vision(®) for use with desktop stereoscopic displays. We propose an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
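
    The abstract does not spell out the geometric method itself; as a generic illustration of how a 3D point of regard can be measured from binocular gaze data, the sketch below triangulates the gaze point as the midpoint of the shortest segment between the left- and right-eye gaze rays. Eye positions and gaze directions are assumed inputs from a calibrated 2D gaze tracker, and none of this is taken from the paper.

      import numpy as np

      def gaze_point_3d(p_left, d_left, p_right, d_right):
          """Midpoint of the shortest segment between the two gaze rays.

          p_* are 3D eye (ray origin) positions, d_* are gaze direction vectors."""
          d1 = d_left / np.linalg.norm(d_left)
          d2 = d_right / np.linalg.norm(d_right)
          w0 = p_left - p_right
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          d, e = d1 @ w0, d2 @ w0
          denom = a * c - b * b
          if abs(denom) < 1e-9:                 # near-parallel gaze rays
              s, t = 0.0, e / c
          else:
              s = (b * e - c * d) / denom
              t = (a * e - b * d) / denom
          closest_left = p_left + s * d1
          closest_right = p_right + t * d2
          return 0.5 * (closest_left + closest_right)

      # Example: eyes 6 cm apart converging on a point about 50 cm in front of the viewer.
      pt = gaze_point_3d(np.array([-0.03, 0.0, 0.0]), np.array([0.06, 0.0, 1.0]),
                         np.array([0.03, 0.0, 0.0]), np.array([-0.06, 0.0, 1.0]))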

  8. Multiview 3-D Echocardiography Fusion with Breath-Hold Position Tracking Using an Optical Tracking System.

    PubMed

    Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; McNulty, Alexander; Biamonte, Marina; He, Allen; Noga, Michelle; Boulanger, Pierre; Becher, Harald

    2016-08-01

    Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, one of the major limitations of 3-D echocardiography is the limited field of view, which results in an acquisition insufficient to cover the whole geometry of the heart. This study proposes the novel approach of fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to infer that the heart remains at the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field of view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvement in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to individual views.

  9. A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.

    PubMed

    Mung, Jay; Vignon, Francois; Jain, Ameet

    2011-01-01

    In the past decade ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. Its main limitation, however, is poor visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools. As the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current-day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2 mm sensor show robust performance over a wide range of imaging conditions and tool positions/orientations. The 3D tracking accuracy was 0.36 ± 0.16 mm throughout the imaging volume of 55° × 27° × 150 mm. Additionally, the tool was successfully tracked inside a beating heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-millimeter accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large-scale clinical impact.

  10. Real-time structured light intraoral 3D measurement pipeline

    NASA Astrophysics Data System (ADS)

    Gheorghe, Radu; Tchouprakov, Andrei; Sokolov, Roman

    2013-02-01

    Computer-aided design and manufacturing (CAD/CAM) is increasingly becoming a standard feature and service provided to patients in dental offices and denture manufacturing laboratories. Although the quality of the tools and data has improved slowly in recent years, practical, accurate, in-vivo, real-time acquisition and processing of high-quality 3D data still needs improvement because of various surface-measurement challenges. Advances in GPU computational power have made near real-time 3D intraoral in-vivo scanning of a patient's teeth achievable. In this paper we explore, from a real-time perspective, a hardware-software-GPU solution that addresses these requirements. Moreover, we exemplify and quantify the hard and soft deadlines required by such a system and illustrate how they are supported in our implementation.

  11. PRIMAS: a real-time 3D motion-analysis system

    NASA Astrophysics Data System (ADS)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
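
    The reconstruction step of such a system typically reduces, per matched marker, to linear triangulation from the calibrated cameras. The sketch below shows a standard DLT triangulation for two views; the 3x4 projection matrices are assumed to come from the camera calibration mentioned above, and the code is a generic illustration rather than the PRIMAS implementation.

      import numpy as np

      def triangulate(P1, P2, uv1, uv2):
          """Linear (DLT) triangulation of one marker from two calibrated views.

          P1, P2 : 3x4 camera projection matrices.
          uv1, uv2 : (u, v) pixel coordinates of the same marker in each view."""
          u1, v1 = uv1
          u2, v2 = uv2
          A = np.stack([u1 * P1[2] - P1[0],
                        v1 * P1[2] - P1[1],
                        u2 * P2[2] - P2[0],
                        v2 * P2[2] - P2[1]])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]          # inhomogeneous 3D marker position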

  12. Deployment of a 3D tag tracking method utilising RFID

    NASA Astrophysics Data System (ADS)

    Wasif Reza, Ahmed; Yun, Teoh Wei; Dimyati, Kaharudin; Geok Tan, Kim; Ariffin Noordin, Kamarul

    2012-04-01

    Recent trends show that one of the crucial problems in using radio frequency to track objects is inconsistent signal-strength reception, caused mainly by environmental factors and blockage, which have the greatest impact on tracking accuracy. In addition, three-dimensional localization is more relevant for warehouse scanning. Therefore, this study proposes a new, highly accurate three-dimensional (3D) radio frequency identification-based indoor tracking system that takes different attenuation factors and obstacles into consideration. The obtained results show that the proposed system yields high-quality performance, with an average error as low as 0.27 m (without obstacles and attenuation effects). The results also show that the proposed tracking technique achieves relatively low errors (0.4 and 0.36 m, respectively) even in the presence of the highest attenuation effect, e = 3.3, or when the environment is largely affected by 50% of the obstacles. Furthermore, the superiority of the proposed 3D tracking system is demonstrated by comparison with other existing approaches. The 3D tracking system proposed in this study can be applied to warehouse scanning.

  13. Autonomous Real-Time Interventional Scan Plane Control With a 3-D Shape-Sensing Needle

    PubMed Central

    Plata, Juan Camilo; Holbrook, Andrew B.; Park, Yong-Lae; Pauly, Kim Butts; Daniel, Bruce L.; Cutkosky, Mark R.

    2016-01-01

    This study demonstrates real-time scan plane control dependent on three-dimensional needle bending, as measured from magnetic resonance imaging (MRI)-compatible optical strain sensors. A biopsy needle with embedded fiber Bragg grating (FBG) sensors to measure surface strains is used to estimate its full 3-D shape and control the imaging plane of an MR scanner in real-time, based on the needle’s estimated profile. The needle and scanner coordinate frames are registered to each other via miniature radio-frequency (RF) tracking coils, and the scan planes autonomously track the needle as it is deflected, keeping its tip in view. A 3-D needle annotation is superimposed over MR-images presented in a 3-D environment with the scanner’s frame of reference. Scan planes calculated based on the FBG sensors successfully follow the tip of the needle. Experiments using the FBG sensors and RF coils to track the needle shape and location in real-time had an average root mean square error of 4.2 mm when comparing the estimated shape to the needle profile as seen in high resolution MR images. This positional variance is less than the image artifact caused by the needle in high resolution SPGR (spoiled gradient recalled) images. Optical fiber strain sensors can estimate a needle’s profile in real-time and be used for MRI scan plane control to potentially enable faster and more accurate physician response. PMID:24968093

  14. Autonomous real-time interventional scan plane control with a 3-D shape-sensing needle.

    PubMed

    Elayaperumal, Santhi; Plata, Juan Camilo; Holbrook, Andrew B; Park, Yong-Lae; Pauly, Kim Butts; Daniel, Bruce L; Cutkosky, Mark R

    2014-11-01

    This study demonstrates real-time scan plane control dependent on three-dimensional needle bending, as measured from magnetic resonance imaging (MRI)-compatible optical strain sensors. A biopsy needle with embedded fiber Bragg grating (FBG) sensors to measure surface strains is used to estimate its full 3-D shape and control the imaging plane of an MR scanner in real-time, based on the needle's estimated profile. The needle and scanner coordinate frames are registered to each other via miniature radio-frequency (RF) tracking coils, and the scan planes autonomously track the needle as it is deflected, keeping its tip in view. A 3-D needle annotation is superimposed over MR-images presented in a 3-D environment with the scanner's frame of reference. Scan planes calculated based on the FBG sensors successfully follow the tip of the needle. Experiments using the FBG sensors and RF coils to track the needle shape and location in real-time had an average root mean square error of 4.2 mm when comparing the estimated shape to the needle profile as seen in high resolution MR images. This positional variance is less than the image artifact caused by the needle in high resolution SPGR (spoiled gradient recalled) images. Optical fiber strain sensors can estimate a needle's profile in real-time and be used for MRI scan plane control to potentially enable faster and more accurate physician response.

  15. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions are modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
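
    A minimal example in the spirit of the description above, written against the current vpython module (the original Visual module's import and API differ slightly): the loop below is purely computational, while the render thread redraws the objects many times per second.

      from vpython import sphere, box, vector, rate, color

      # Scene objects; the vpython render thread draws and updates them automatically.
      floor = box(pos=vector(0, -0.05, 0), size=vector(4, 0.1, 4), color=color.green)
      ball = sphere(pos=vector(0, 2, 0), radius=0.2, color=color.red)

      velocity = vector(0, 0, 0)
      g = vector(0, -9.8, 0)
      dt = 0.01

      while True:
          rate(100)                        # limit the computational loop to 100 steps/s
          velocity = velocity + g * dt     # simple explicit Euler integration
          ball.pos = ball.pos + velocity * dt
          if ball.pos.y < ball.radius and velocity.y < 0:
              velocity.y = -velocity.y     # elastic bounce off the floor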

  16. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

    Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low-dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework successfully tracks 3-D human walking poses in a 3-D environment while exploring only a 4-D state space. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for the monocular sequences with a high perspective effect from the CAVIAR dataset.

  17. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  18. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, where we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessel and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to vessel centerlines and surfaces. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to respiration phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred based on the respiration phase. Testing our method on real patient data, we found that the 3D position accuracy is within 3.79 mm and the processing time during tracking is 5.4 ms.
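
    The respiratory-phase step can be illustrated with generic template tracking; the sketch below uses OpenCV normalised cross-correlation to locate a diaphragm template in a cine frame and maps its vertical position to a phase in [0, 1]. The template, the calibration range and the linear phase mapping are assumptions for illustration, not the authors' method.

      import numpy as np
      import cv2

      def track_diaphragm(frame, template, phase_calibration):
          """Locate a diaphragm template in one 8-bit grayscale US cine frame via
          normalised cross-correlation and map its vertical position to a phase.

          phase_calibration : (y_min, y_max) template rows at full exhale / inhale."""
          result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
          _, score, _, top_left = cv2.minMaxLoc(result)
          y = top_left[1]
          y_min, y_max = phase_calibration
          phase = np.clip((y - y_min) / float(y_max - y_min), 0.0, 1.0)
          return phase, score, top_left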

  19. Real-time 3D Eye Performance Reconstruction for RGBD Cameras.

    PubMed

    Wen, Quan; Xu, Feng; Yong, Jun-Hai

    2016-12-19

    This paper proposes a real-time method for 3D eye performance reconstruction using a single RGBD sensor. Combined with facial surface tracking, our method generates more pleasing facial performance with vivid eye motions. In our method, a novel scheme is proposed to estimate eyeball motions by minimizing the differences between a rendered eyeball and the recorded image. Our method considers and handles different appearances of human irises, lighting variations and highlights on images via the proposed eyeball model and the L0-based optimization. Robustness and real-time optimization are achieved through the novel 3D Taylor expansion-based linearization. Furthermore, we propose an online bidirectional regression method to handle occlusions and other tracking failures on either of the two eyes from the information of the opposite eye. Experiments demonstrate that our technique achieves robust and accurate eye performance reconstruction for different iris appearances, with various head/face/eye motions, and under different lighting conditions.

  20. Real-time rendering method and performance evaluation of composable 3D lenses for interactive VR.

    PubMed

    Borst, Christoph W; Tiesel, Jan-Phillip; Best, Christopher M

    2010-01-01

    We present and evaluate a new approach for real-time rendering of composable 3D lenses for polygonal scenes. Such lenses, usually called "volumetric lenses," are an extension of 2D Magic Lenses to 3D volumes in which effects are applied to scene elements. Although the composition of 2D lenses is well known, 3D composition was long considered infeasible due to both geometric and semantic complexity. Nonetheless, for a scene with multiple interactive 3D lenses, the problem of intersecting lenses must be considered. Intersecting 3D lenses in meaningful ways supports new interfaces such as hierarchical 3D windows, 3D lenses for managing and composing visualization options, or interactive shader development by direct manipulation of lenses providing component effects. Our 3D volumetric lens approach differs from other approaches and is one of the first to address efficient composition of multiple lenses. It is well-suited to head-tracked VR environments because it requires no view-dependent generation of major data structures, allowing caching and reuse of full or partial results. A Composite Shader Factory module composes shader programs for rendering composite visual styles and geometry of intersection regions. Geometry is handled by Boolean combinations of region tests in fragment shaders, which allows both convex and nonconvex CSG volumes for lens shape. Efficiency is further addressed by a Region Analyzer module and by broad-phase culling. Finally, we consider the handling of order effects for composed 3D lenses.

  1. A Hierarchical Optimization Algorithm Based on GPU for Real-Time 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Lin, Jin-hua; Wang, Lu; Wang, Yan-jie

    2017-06-01

    In machine vision sensing systems, it is important to achieve high-quality real-time 3D reconstruction of large-scale scenes. Recent online approaches perform well, but scaling up the reconstruction causes pose-estimation drift and cumulative error, which usually requires a large amount of off-line processing to correct and reduces reconstruction performance. In order to optimize the traditional volume-fusion method and improve on the old frame-to-frame pose-estimation strategy, this paper presents a real-time CPU-to-GPU (Graphics Processing Unit) reconstruction system. Based on a robust camera pose-estimation strategy, the algorithm fuses all the RGB-D input into an effective hierarchical optimization framework and optimizes each frame according to the global camera pose, eliminating the serious dependence on tracking timeliness and continuously tracking globally optimized frames. The system estimates globally optimized camera poses (bundling) in real time, supports robust tracking recovery (re-positioning), and re-estimates large-scale 3D scenes to ensure global consistency. It uses a set of sparse corresponding features and geometric and ray-matching functions in one parallel optimization system. The experimental results show that the average reconstruction time is 415 ms per frame, and the ICP pose is estimated 20 times in 100.0 ms. For large-scale 3D reconstruction scenes, the system performs well in online reconstruction while maintaining reconstruction accuracy.

  2. Fast and reliable active appearance model search for 3-D face tracking.

    PubMed

    Dornaika, F; Ahlberg, J

    2004-08-01

    This paper addresses the three-dimensional (3-D) tracking of pose and animation of the human face in monocular image sequences using active appearance models. The major problem of the classical appearance-based adaptation is the high computational time resulting from the inclusion of a synthesis step in the iterative optimization. Whenever the dimension of the face space is large, real-time performance cannot be achieved. In this paper, we aim at designing a fast and stable active appearance model search for 3-D face tracking. The main contribution is a search algorithm whose CPU-time is not dependent on the dimension of the face space. Using this algorithm, we show that both the CPU-time and the likelihood of inaccurate tracking are reduced. Experiments evaluating the effectiveness of the proposed algorithm are reported, as well as method comparisons and tracking of synthetic and real image sequences.

  3. Tracking variable number of multiple subcellular structures in 3D.

    PubMed

    Wen, Quan; Gao, Jean

    2009-01-01

    With the introduction of sensitive and fast electronic imaging devices and the development of biological methods to tag proteins of interest with green fluorescent protein (GFP), it has now become critical to develop automatic quantitative data analysis tools to study live-cell dynamics at the subcellular level. In this paper, a sequential Monte Carlo (SMC) method to track a variable number of 3D subcellular structures is proposed. First, multiple subcellular structures are represented by a joint state. Then the distribution of the dimension-changing joint state is sampled efficiently by a reversible jump Markov chain Monte Carlo (RJMCMC) method designed with update, identity-switch, disappearing, and appearing moves. The experimental results show that the proposed method can successfully track multiple 3D subcellular structures with different motion modalities such as object appearing and disappearing.

  4. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Bachmair, F.; Bäni, L.; Bergonzo, P.; Caylar, B.; Forcolin, G.; Haughton, I.; Hits, D.; Kagan, H.; Kass, R.; Li, L.; Oh, A.; Phan, S.; Pomorski, M.; Smith, D. S.; Tyzhnevyi, V.; Wallny, R.; Whitehead, D.

    2015-06-01

    A novel device using single-crystal chemical vapour deposited diamond and resistive electrodes in the bulk forming a 3D diamond detector is presented. The electrodes of the device were fabricated with laser assisted phase change of diamond into a combination of diamond-like carbon, amorphous carbon and graphite. The connections to the electrodes of the device were made using a photo-lithographic process. The electrical and particle detection properties of the device were investigated. A prototype detector system consisting of the 3D device connected to a multi-channel readout was successfully tested with 120 GeV protons proving the feasibility of the 3D diamond detector concept for particle tracking applications for the first time.

  5. Characterisation of walking loads by 3D inertial motion tracking

    NASA Astrophysics Data System (ADS)

    Van Nimmen, K.; Lombaert, G.; Jonkers, I.; De Roeck, G.; Van den Broeck, P.

    2014-09-01

    The present contribution analyses the walking behaviour of pedestrians in situ by 3D inertial motion tracking. The technique is first tested in laboratory experiments with simultaneous registration of the ground reaction forces. The registered motion of the pedestrian allows for the identification of stride-to-stride variations, which is usually disregarded in the simulation of walking forces. Subsequently, motion tracking is used to register the walking behaviour of (groups of) pedestrians during in situ measurements on a footbridge. The calibrated numerical model of the structure and the information gathered using the motion tracking system enables detailed simulation of the step-by-step pedestrian induced vibrations. Accounting for the in situ identified walking variability of the test-subjects leads to a significantly improved agreement between the measured and the simulated structural response.

  6. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed in which video streams are synchronized and displayed in a 3D model by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a record of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we found that the 3D model created greater situational awareness, which made it easier to track people across multiple video streams. Based on the experiences from the experimental setup and the case, recommendations are formulated for use in practice.

  7. Coverage Assessment and Target Tracking in 3D Domains

    PubMed Central

    Boudriga, Noureddine; Hamdi, Mohamed; Iyengar, Sitharama

    2011-01-01

    Recent advances in integrated electronic devices motivated the use of Wireless Sensor Networks (WSNs) in many applications including domain surveillance and mobile target tracking, where a number of sensors are scattered within a sensitive region to detect the presence of intruders and forward related events to some analysis center(s). Obviously, sensor deployment should guarantee an optimal event detection rate and should reduce coverage holes. Most of the coverage control approaches proposed in the literature deal with two-dimensional zones and do not develop strategies to handle coverage in three-dimensional domains, which is becoming a requirement for many applications including water monitoring, indoor surveillance, and projectile tracking. This paper proposes efficient techniques to detect coverage holes in a 3D domain using a finite set of sensors, repair the holes, and track hostile targets. To this end, we use the concepts of Voronoi tessellation, Vietoris complex, and retract by deformation. We show in particular that, through a set of iterative transformations of the Vietoris complex corresponding to the deployed sensors, the number of coverage holes can be computed with a low complexity. Mobility strategies are also proposed to repair holes by moving appropriately sensors towards the uncovered zones. The tracking objective is to set a non-uniform WSN coverage within the monitored domain to allow detecting the target(s) by the set of sensors. We show, in particular, how the proposed algorithms adapt to cope with obstacles. Simulation experiments are carried out to analyze the efficiency of the proposed models. To our knowledge, repairing and tracking is addressed for the first time in 3D spaces with different sensor coverage schemes. PMID:22163733
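
    The paper's hole detection is built on Vietoris complexes and retracts by deformation; as a simpler, commonly used stand-in that makes the notion of a coverage hole concrete, the sketch below flags candidate 3D coverage holes at Voronoi vertices of the sensor set that lie farther than the sensing radius from every sensor (SciPy is assumed available). This is explicitly a different, simplified technique, not the authors' algorithm.

      import numpy as np
      from scipy.spatial import Voronoi, cKDTree

      def coverage_hole_candidates(sensors, sensing_radius, bounds):
          """Return Voronoi vertices inside `bounds` that no sensor covers.

          sensors : (N, 3) array of sensor positions.
          bounds  : ((xmin, xmax), (ymin, ymax), (zmin, zmax)) of the monitored box."""
          vor = Voronoi(sensors)
          verts = vor.vertices
          lo = np.array([b[0] for b in bounds])
          hi = np.array([b[1] for b in bounds])
          verts = verts[np.all((verts >= lo) & (verts <= hi), axis=1)]
          # A Voronoi vertex is locally farthest from its nearest sensors, so an
          # uncovered vertex indicates a coverage hole around it.
          dist, _ = cKDTree(sensors).query(verts)
          return verts[dist > sensing_radius]

      # Example: 40 random sensors with 0.3-radius coverage in the unit cube.
      rng = np.random.default_rng(0)
      holes = coverage_hole_candidates(rng.random((40, 3)), 0.3,
                                       ((0, 1), (0, 1), (0, 1)))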

  8. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but prior to this the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas and knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that they are operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  9. Coverage assessment and target tracking in 3D domains.

    PubMed

    Boudriga, Noureddine; Hamdi, Mohamed; Iyengar, Sitharama

    2011-01-01

    Recent advances in integrated electronic devices motivated the use of Wireless Sensor Networks (WSNs) in many applications including domain surveillance and mobile target tracking, where a number of sensors are scattered within a sensitive region to detect the presence of intruders and forward related events to some analysis center(s). Obviously, sensor deployment should guarantee an optimal event detection rate and should reduce coverage holes. Most of the coverage control approaches proposed in the literature deal with two-dimensional zones and do not develop strategies to handle coverage in three-dimensional domains, which is becoming a requirement for many applications including water monitoring, indoor surveillance, and projectile tracking. This paper proposes efficient techniques to detect coverage holes in a 3D domain using a finite set of sensors, repair the holes, and track hostile targets. To this end, we use the concepts of Voronoi tessellation, Vietoris complex, and retract by deformation. We show in particular that, through a set of iterative transformations of the Vietoris complex corresponding to the deployed sensors, the number of coverage holes can be computed with a low complexity. Mobility strategies are also proposed to repair holes by moving appropriately sensors towards the uncovered zones. The tracking objective is to set a non-uniform WSN coverage within the monitored domain to allow detecting the target(s) by the set of sensors. We show, in particular, how the proposed algorithms adapt to cope with obstacles. Simulation experiments are carried out to analyze the efficiency of the proposed models. To our knowledge, repairing and tracking is addressed for the first time in 3D spaces with different sensor coverage schemes.

  10. Real-time system for 3D neurosurgical planning

    NASA Astrophysics Data System (ADS)

    Goble, John C.; Snell, John W.; Hinckley, Ken; Kassell, Neal F.

    1994-09-01

    We have designed and implemented a computer-based system that permits rapid acquisition of digital medical images, multi-modality registration and segmentation, and 3D planning of minimally invasive neurosurgical procedures. The system, known as Netra, is optimized for real-time planning: imaging, pre-processing and planning are performed on the morning of surgery in clinically useful times. We have tested the system on procedures such as needle biopsies, depth electrode placements and craniectomies for arteriovenous malformations, aneurysms and tumors. We describe in this paper the core algorithms of our system, and discuss issues related to implementation, validation and user acceptance. We focus on techniques for physician interaction that encourage active participation by the surgeon as principal operator of the visualization and planning system.

  11. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of the standard model-based approach lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where a GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.
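
    A compact sketch of the kind of particle filtering step the abstract describes: a motion-model prediction, injection of a few externally supplied candidate poses (here assumed to come from the multiple-hypothesis edge tracker), likelihood weighting and systematic resampling. The likelihood callable and all parameters are placeholders, not the authors' formulation.

      import numpy as np

      def particle_filter_step(particles, weights, candidate_poses, likelihood,
                               motion_noise=0.02, inject_fraction=0.1, seed=None):
          """One update of a pose particle filter guided by candidate poses.

          particles : (N, 6) pose hypotheses (translation + rotation vector).
          candidate_poses : (M, 6) poses proposed by the multi-hypothesis tracker.
          likelihood : callable mapping an (N, 6) array to per-particle likelihoods."""
          rng = np.random.default_rng(seed)
          n = len(particles)

          # Prediction: diffuse particles with a simple random-walk motion model.
          particles = particles + rng.normal(scale=motion_noise, size=particles.shape)

          # Guidance: replace a fraction of particles with the candidate poses so the
          # particle set is drawn toward the peaks found by the edge-based tracker.
          if len(candidate_poses) > 0:
              k = min(int(inject_fraction * n), len(candidate_poses))
              idx = rng.choice(n, size=k, replace=False)
              particles[idx] = candidate_poses[:k]

          # Correction: weight by image likelihood and normalise.
          weights = weights * likelihood(particles)
          weights = weights / weights.sum()

          # Systematic resampling to avoid weight degeneracy.
          positions = (rng.random() + np.arange(n)) / n
          particles = particles[np.searchsorted(np.cumsum(weights), positions)]
          weights = np.full(n, 1.0 / n)
          return particles, weights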

  12. Design and Performance Evaluation on Ultra-Wideband Time-Of-Arrival 3D Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Dusl, John

    2012-01-01

    A three-dimensional (3D) Ultra-Wideband (UWB) Time-of-Arrival (TOA) tracking system has been studied at NASA Johnson Space Center (JSC) to provide tracking capability inside the International Space Station (ISS) modules for various applications. One application is to locate and report places where crew members experienced possibly high levels of carbon dioxide and felt unwell. In order to accurately locate those places in a multipath-intensive environment like the ISS modules, a robust real-time location system (RTLS) is required that can provide the necessary accuracy and update rate. A 3D UWB TOA tracking system with two-way ranging has been proposed and studied. The designed system will be tested in the Wireless Habitat Testbed, which simulates the ISS module environment. In this presentation, we discuss the 3D TOA tracking algorithm and the performance evaluation based on different tracking-baseline configurations. The simulation results show that two configurations of the tracking baseline are feasible. With a 100-picosecond standard deviation (STD) of the TOA estimates, an average tracking error of 0.2392 feet (about 7 centimeters) can be achieved for the "Twisted Rectangle" configuration, while an average tracking error of 0.9183 feet (about 28 centimeters) can be achieved for the "Slightly-Twisted Top Rectangle" configuration. The tracking accuracy can be further improved by improving the STD of the TOA estimates. With a 10-picosecond STD of TOA estimates, an average tracking error of 0.0239 feet (less than 1 centimeter) can be achieved for the "Twisted Rectangle" configuration.
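
    The abstract does not give the position solver; a common choice for a TOA/two-way-ranging system with a fixed receiver baseline is iterative (Gauss-Newton) least squares on range residuals, sketched below with purely illustrative anchor coordinates (not the actual "Twisted Rectangle" geometry).

      import numpy as np

      def toa_position(anchors, ranges, x0=None, iters=10):
          """Estimate a 3D tag position from ranges to known anchor positions.

          anchors : (K, 3) baseline receiver coordinates (K >= 4 for a 3D fix).
          ranges  : (K,) measured distances (speed of light x time of flight)."""
          x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
          for _ in range(iters):
              diff = x - anchors                      # (K, 3)
              pred = np.linalg.norm(diff, axis=1)     # predicted ranges
              J = diff / pred[:, None]                # Jacobian of range wrt position
              residual = ranges - pred
              dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
              x = x + dx
          return x

      # Example with a hypothetical 4-anchor baseline (metres).
      anchors = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.5],
                          [4.0, 3.0, 0.0], [0.0, 3.0, 0.5]])
      tag = np.array([1.5, 1.2, 0.8])
      ranges = np.linalg.norm(anchors - tag, axis=1)
      print(toa_position(anchors, ranges))            # close to the true tag position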

  13. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    PubMed

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal-US-based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slices registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D to slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max).

  14. 3D Tracking of small-scale convective upflows

    NASA Astrophysics Data System (ADS)

    Lemmerer, Birgit; Hanslmeier, Arnold; Veronig, Astrid; Muthsam, Herbert; Piantschitsch, Isabell

    2015-08-01

    High resolution simulations and observations of the solar photosphere and convection zone show a new population of small granules with diameters less than 800 km. The mechanism of formation and dissipation is still unclear. We developed automated detection and tracking algorithms to study their evolution as well as their physical and statistical properties in 2D. We found that small granules may not result from the fragmentation of larger granules because they show a small variation in size from the point of appearance at the photosphere until their dissolution. In this study we present a newly developed 3D segmentation and tracking algorithm for the analysis of small-scale convective cells in high resolution simulations. We study the 3D topology and evolution of convective upflows and their interaction with strong vortex motions and magnetic flux tubes. We show that the evolution of small-scale convective upflows in the convection zone is mainly governed by strong vortex motions within downdrafts rather than by strong magnetic fields.

  15. Handheld real-time volumetric 3-D gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Haefner, Andrew; Barnowski, Ross; Luke, Paul; Amman, Mark; Vetter, Kai

    2017-06-01

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information about the scene which, once acquired, can be reviewed and analyzed further. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.

  16. Real-time 3D avatars for tele-rehabilitation in virtual reality.

    PubMed

    Kurillo, Gregorij; Koritnik, Tomaz; Bajd, Tadej; Bajcsy, Ruzena

    2011-01-01

    We present work in progress on a tele-immersion system for telerehabilitation using real-time stereo vision and virtual environments. Stereo reconstruction is used to capture the user's 3D avatar in real time and project it into a shared virtual environment, enabling a patient and therapist to interact remotely. Captured data can also be used to analyze the movement and provide feedback to the patient, as we present in a preliminary study of a stepping-in-place task. Such a tele-presence system could in the future allow patients to interact remotely with a physical therapist and a virtual environment while their performance is objectively tracked.

  17. 3D visualisation and analysis of single and coalescing tracks in Solid state Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, David; Gillmore, Gavin; Brown, Louise; Petford, Nick

    2010-05-01

    Exposure to radon gas (222Rn) and associated ionising decay products can cause lung cancer in humans (1). Solid state Nuclear Track Detectors (SSNTDs) can be used to monitor radon concentrations (2). Radon particles form tracks in the detectors and these tracks can be etched in order to enable 2D surface image analysis. We have previously shown that confocal microscopy can be used for 3D visualisation of etched SSNTDs (3). The aim of the study was to further investigate track angles and patterns in SSNTDs. A 'LEXT' confocal laser scanning microscope (Olympus Corporation, Japan) was used to acquire 3D image datasets of five CR-39 plastic SSNTDs. The resultant 3D visualisations were analysed by eye and inclination angles assessed on selected tracks. From visual assessment, single isolated tracks as well as coalescing tracks were observed on the etched detectors. In addition, varying track inclination angles were observed. Several different patterns of track formation were seen, such as single isolated and double coalescing tracks. The observed track angles of inclination may help to assess the angle at which alpha particles hit the detector. (1) Darby, S. et al. Radon in homes and risk of lung cancer: collaborative analysis of individual data from 13 European case-control studies. British Medical Journal 2005; 330: 223-226. (2) Phillips, P.S., Denman, A.R., Crockett, R.G.M., Gillmore, G., Groves-Kirkby, C.J., Woolridge, A. Comparative analysis of weekly vs. three-monthly radon measurements in dwellings. DEFRA Report No. DEFRA/RAS/03.006 (2004). (3) Wertheim, D., Gillmore, G., Brown, L., Petford, N. A new method of imaging particle tracks in Solid State Nuclear Track Detectors. Journal of Microscopy 2010; 237: 1-6.

  18. Towards real-time 3D ultrasound planning and personalized 3D printing for breast HDR brachytherapy treatment.

    PubMed

    Poulin, Eric; Gardi, Lori; Fenster, Aaron; Pouliot, Jean; Beaulieu, Luc

    2015-03-01

    Two different end-to-end procedures were tested for real-time planning in breast HDR brachytherapy treatment. Both methods use a 3D ultrasound (3DUS) system and a freehand catheter optimization algorithm, and both were found to be fast and efficient. We demonstrated a proof-of-concept approach for personalized real-time guidance and planning of breast HDR brachytherapy treatments. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Real-time 3D dose imaging in water phantoms: reconstruction from simultaneous EPID-Cherenkov 3D imaging (EC3D)

    NASA Astrophysics Data System (ADS)

    Bruza, P.; Andreozzi, J. M.; Gladstone, D. J.; Jarvis, L. A.; Rottmann, J.; Pogue, B. W.

    2017-05-01

    Combination of electronic portal imaging device (EPID) transmission imaging with frontal Cherenkov imaging enabled real-time 3D dosimetry of clinical X-ray beams in water phantoms. The EPID provides a 2D transverse distribution of attenuation which can be back-projected to estimate accumulated dose, while the Cherenkov image provides an accurate lateral view of the dose versus depth. Assuming homogeneous density and composition of the phantom, both images can be linearly combined into a true 3D distribution of the deposited dose. We describe the algorithm for volumetric dose reconstruction, and demonstrate the results of a volumetric modulated arc therapy (VMAT) 3D dosimetry.
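
    A toy NumPy illustration of the linear combination mentioned above: under the stated homogeneity assumption, a back-projected transverse EPID map and a Cherenkov-derived depth-dose curve can be combined multiplicatively into a separable 3D dose estimate. The normalisation and the actual back-projection and calibration steps of the EC3D algorithm are omitted; this is only a schematic of the idea.

      import numpy as np

      def combine_epid_cherenkov(epid_xy, depth_dose_z):
          """Toy separable combination of a transverse map and a depth-dose curve.

          epid_xy      : (Nx, Ny) back-projected transverse attenuation/dose map.
          depth_dose_z : (Nz,) lateral (Cherenkov) dose-versus-depth profile."""
          epid = epid_xy / epid_xy.max()
          pdd = depth_dose_z / depth_dose_z.max()
          # Outer product: dose(x, y, z) proportional to transverse map times depth profile.
          return epid[:, :, None] * pdd[None, None, :]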

  20. Holographic microscopy for 3D tracking of bacteria

    NASA Astrophysics Data System (ADS)

    Nadeau, Jay; Cho, Yong Bin; El-Kholy, Marwan; Bedrossian, Manuel; Rider, Stephanie; Lindensmith, Christian; Wallace, J. Kent

    2016-03-01

    Understanding when, how, and if bacteria swim is key to understanding critical ecological and biological processes, from carbon cycling to infection. Imaging motility by traditional light microscopy is limited by focus depth, requiring cells to be constrained in z. Holographic microscopy offers an instantaneous 3D snapshot of a large sample volume, and is therefore ideal in principle for quantifying unconstrained bacterial motility. However, resolving and tracking individual cells is difficult due to the low amplitude and phase contrast of the cells; the index of refraction of typical bacteria differs from that of water only at the second decimal place. In this work we present a combination of optical and sample-handling approaches to facilitating bacterial tracking by holographic phase imaging. The first is the design of the microscope: an off-axis design with the optics along a common path, which minimizes alignment issues while providing all of the advantages of off-axis holography. Second, we use anti-reflective coated etalon glass in the design of sample chambers, reducing internal reflections. The improvement from the anti-reflective coating is seen primarily in phase imaging, and its quantification is presented here. Finally, dyes may be used to increase phase contrast according to the Kramers-Kronig relations. Results using three test strains are presented, illustrating the different types of bacterial motility characterized by an enteric organism (Escherichia coli), an environmental organism (Bacillus subtilis), and a marine organism (Vibrio alginolyticus). Data processing steps to increase the quality of the phase images and facilitate tracking are also discussed.

  1. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiducial markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phones' displays, as if the display were a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.
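
    A minimal sketch of the marker-based pose idea: given the known 3D corners of one fiducial marker and their detected 2D image locations, OpenCV's solvePnP recovers the phone camera's pose. The marker size, camera intrinsics and pixel coordinates below are made-up placeholders, and the prototype's actual pipeline is not reproduced here.

```python
import cv2
import numpy as np

# Known 3D corners of a 10 cm fiducial marker in room coordinates (metres)
# and their detected 2D pixel locations in the phone camera image (placeholders).
marker_corners_3d = np.array([[0.0, 0.0, 0.0],
                              [0.1, 0.0, 0.0],
                              [0.1, 0.1, 0.0],
                              [0.0, 0.1, 0.0]], dtype=np.float32)
detected_corners_2d = np.array([[312.0, 240.0],
                                [398.0, 236.0],
                                [402.0, 321.0],
                                [308.0, 325.0]], dtype=np.float32)
K = np.array([[800.0, 0.0, 320.0],   # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                   # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(marker_corners_3d, detected_corners_2d, K, dist)
R, _ = cv2.Rodrigues(rvec)               # camera rotation
phone_position = (-R.T @ tvec).ravel()   # camera centre in room coordinates
print(phone_position)
```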

  2. Robust 3D Object Tracking from Monocular Images using Stable Parts.

    PubMed

    Crivellaro, Alberto; Rad, Mahdi; Verdie, Yannick; Yi, Kwang Moo; Fua, Pascal; Lepetit, Vincent

    2017-05-26

    We present an algorithm for estimating the pose of a rigid object in real time under challenging conditions. Our method effectively handles poorly textured objects in cluttered, changing environments, even when their appearance is corrupted by large occlusions, and it relies on grayscale images to handle metallic environments on which depth cameras would fail. As a result, our method is suitable for practical Augmented Reality applications including industrial environments. At the core of our approach is a novel representation for the 3D pose of object parts: We predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation are three-fold: We can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can easily combine them to compute a better pose of the object; the 3D pose we obtain is usually very accurate, even when only a few parts are visible. We show how to use this representation in a robust 3D tracking framework. In addition to extensive comparisons with the state-of-the-art, we demonstrate our method on a practical Augmented Reality application for maintenance assistance in the ATLAS particle detector at CERN.

  3. Feasibility study: real-time 3-D ultrasound imaging of the brain.

    PubMed

    Smith, Stephen W; Chu, Kengyeh; Idriss, Salim F; Ivancevich, Nikolas M; Light, Edward D; Wolf, Patrick D

    2004-10-01

    We tested the feasibility of real-time, 3-D ultrasound (US) imaging in the brain. The 3-D scanner uses a matrix phased-array transducer of 512 transmit channels and 256 receive channels operating at 2.5 MHz with a 15-mm diameter footprint. The real-time system scans a 65 degrees pyramid, producing up to 30 volumetric scans per second, and features up to five image planes as well as 3-D rendering, 3-D pulsed-wave and color Doppler. In a human subject, the real-time 3-D scans produced simultaneous transcranial horizontal (axial), coronal and sagittal image planes and real-time volume-rendered images of the gross anatomy of the brain. In a transcranial sheep model, we obtained real-time 3-D color flow Doppler scans and perfusion images using bolus injection of contrast agents into the internal carotid artery.

  4. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    PubMed

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to adjust the desired motion trajectory online from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and the dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
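
    The sketch below shows a generic velocity-driven control law of the kind described above: joint torques proportional to the error between desired and current joint angular velocities. The gain values, joint names and numbers are invented for illustration and are not the paper's controller.

```python
import numpy as np

def velocity_driven_torques(omega_desired, omega_current, gains):
    """Toy velocity-driven joint control: torque proportional to the error
    between desired and current joint angular velocities."""
    return gains * (omega_desired - omega_current)

gains = np.array([40.0, 40.0, 25.0])       # hip, knee, ankle gains (assumed)
omega_des = np.array([1.2, -0.8, 0.3])     # rad/s from the adjusted trajectory
omega_cur = np.array([0.9, -0.5, 0.1])     # rad/s from the simulation state
print(velocity_driven_torques(omega_des, omega_cur, gains))
```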

  5. Comparison of 2D and 3D modeled tumor motion estimation/prediction for dynamic tumor tracking during arc radiotherapy.

    PubMed

    Liu, Wu; Ma, Xiangyu; Yan, Huagang; Chen, Zhe; Nath, Ravinder; Li, Haiyun

    2017-03-06

    Many real-time imaging techniques have been developed to localize the target in 3D space or in the 2D beam's eye view (BEV) plane for intrafraction motion tracking in radiation therapy. With tracking system latency, the 3D-modeled method is expected to be more accurate even in terms of 2D BEV tracking error. No quantitative analysis, however, has been reported. In this study, we simulated co-planar arc deliveries using respiratory motion data acquired from 42 patients to quantitatively compare the accuracy between 2D BEV and 3D-modeled tracking in arc therapy and determine whether 3D information is needed for motion tracking. We used our previously developed low kV dose adaptive MV-kV imaging and motion compensation framework as a representative of 3D-modeled methods. It optimizes the balance between additional kV imaging dose and 3D tracking accuracy and solves the MLC blockage issue. With simulated Gaussian marker detection errors (zero mean and 0.39 mm standard deviation) and ~155/310/460 ms tracking system latencies, the mean percentages of time that the target moved >2 mm from the predicted 2D BEV position are 1.1%/4.0%/7.8% and 1.3%/5.8%/11.6% for 3D-modeled and 2D-only tracking, respectively. The corresponding average BEV RMS errors are 0.67/0.90/1.13 mm and 0.79/1.10/1.37 mm. Compared to the 2D method, the 3D method reduced the average RMS unresolved motion along the beam direction from ~3 mm to ~1 mm, resulting in only a <1% dosimetric advantage in the depth direction on average. Only for a small fraction of the patients, when the tracking latency is long, did the 3D-modeled method show a significant improvement in BEV tracking accuracy, indicating a potential dosimetric advantage. However, if the tracking latency is short (~150 ms or less), those improvements are limited. Therefore, 2D BEV tracking has sufficient targeting accuracy for most clinical cases. The 3D technique is, however, still important in solving the MLC blockage problem during 2D BEV tracking.
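
    A crude illustration of how tracking-system latency degrades BEV accuracy, in the spirit of the comparison above: the "tracked" position simply lags the true position by the latency, with no motion prediction. The synthetic sine-wave breathing trace, sampling rate and thresholds are assumptions and have nothing to do with the patient data used in the study.

```python
import numpy as np

def bev_error_with_latency(positions, dt, latency):
    """RMS lag error (mm) and fraction of time the lagged estimate is >2 mm
    off, for a 1D motion trace sampled every dt seconds."""
    lag = int(round(latency / dt))
    err = np.abs(positions[lag:] - positions[:-lag])
    return np.sqrt(np.mean(err ** 2)), np.mean(err > 2.0)

t = np.arange(0, 60, 0.025)                      # 40 Hz sampling, 60 s
trace = 5.0 * np.sin(2 * np.pi * t / 4.0)        # 10 mm peak-to-peak, 4 s period
for latency in (0.155, 0.310, 0.460):
    rms, frac = bev_error_with_latency(trace, 0.025, latency)
    print(f"{latency*1000:.0f} ms latency: RMS {rms:.2f} mm, >2 mm {frac:.1%}")
```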

  6. Comparison of 2D and 3D modeled tumor motion estimation/prediction for dynamic tumor tracking during arc radiotherapy

    NASA Astrophysics Data System (ADS)

    Liu, Wu; Ma, Xiangyu; Yan, Huagang; Chen, Zhe; Nath, Ravinder; Li, Haiyun

    2017-05-01

    Many real-time imaging techniques have been developed to localize a target in 3D space or in a 2D beam’s eye view (BEV) plane for intrafraction motion tracking in radiation therapy. With tracking system latency, the 3D-modeled method is expected to be more accurate even in terms of 2D BEV tracking error. No quantitative analysis, however, has been reported. In this study, we simulated co-planar arc deliveries using respiratory motion data acquired from 42 patients to quantitatively compare the accuracy between 2D BEV and 3D-modeled tracking in arc therapy and to determine whether 3D information is needed for motion tracking. We used our previously developed low kV dose adaptive MV-kV imaging and motion compensation framework as a representative of 3D-modeled methods. It optimizes the balance between additional kV imaging dose and 3D tracking accuracy and solves the MLC blockage issue. With simulated Gaussian marker detection errors (zero mean and 0.39 mm standard deviation) and ~155/310/460 ms tracking system latencies, the mean percentages of time that the target moved >2 mm from the predicted 2D BEV position are 1.1%/4.0%/7.8% and 1.3%/5.8%/11.6% for the 3D-modeled and 2D-only tracking, respectively. The corresponding average BEV RMS errors are 0.67/0.90/1.13 mm and 0.79/1.10/1.37 mm. Compared to the 2D method, the 3D method reduced the average RMS unresolved motion along the beam direction from ~3 mm to ~1 mm, resulting in, on average, only a <1% dosimetric advantage in the depth direction. Only for a small fraction of the patients, when the tracking latency is long, did the 3D-modeled method show a significant improvement in BEV tracking accuracy, indicating a potential dosimetric advantage. However, if the tracking latency is short (~150 ms or less), those improvements are limited. Therefore, 2D BEV tracking has sufficient targeting accuracy for most clinical cases. The 3D technique is, however, still important in solving the MLC blockage problem during 2D BEV tracking.

  7. 3D imaging of particle tracks in Solid State Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2009-04-01

    Inhalation of radon gas (222Rn) and associated ionizing decay products is known to cause lung cancer in humans. In the U.K., it has been suggested that 3 to 5% of total lung cancer deaths can be linked to elevated radon concentrations in the home and/or workplace. Radon monitoring in buildings is therefore routinely undertaken in areas of known risk. Indeed, some organisations, such as the Radon Council in the UK and the Environmental Protection Agency in the USA, advocate a 'to test is best' policy. Radon gas occurs naturally, emanating from the decay of 238U in rock and soils. Its concentration can be measured using CR-39 plastic detectors which conventionally are assessed by 2D image analysis of the surface; however there can be some variation in outcomes/readings even in closely spaced detectors. A number of radon measurement methods are currently in use (for example, activated carbon and electrets) but the most widely used are CR-39 solid state nuclear track-etch detectors (SSNTDs). In this technique, heavily ionizing alpha particles leave tracks in the form of radiation damage (via interaction between alpha particles and the atoms making up the CR-39 polymer). 3D imaging of the tracks has the potential to provide information relating to the angle and energy of alpha particles, but this could be time consuming. Here we describe a new method for rapid high resolution 3D imaging of SSNTDs. A 'LEXT' OLS3100 confocal laser scanning microscope was used in confocal mode to successfully obtain 3D image data on four CR-39 plastic detectors. 3D visualisation and image analysis enabled characterisation of track features. This method may provide a means of rapid and detailed 3D analysis of SSNTDs. Keywords: Radon; SSNTDs; confocal laser scanning microscope; 3D imaging; LEXT

  8. Tracking tissue section surfaces for automated 3D confocal cytometry

    NASA Astrophysics Data System (ADS)

    Agustin, Ramses; Price, Jeffrey H.

    2002-05-01

    Three-dimensional cytometry, whereby large volumes of tissue would be measured automatically, requires a computerized method for detecting the upper and lower tissue boundaries. In conventional confocal microscopy, the user interactively sets limits for axial scanning for each field-of-view. Biological specimens vary in section thickness, thereby driving the requirement for setting vertical scan limits. Limits could be set arbitrarily large to ensure the entire tissue is scanned, but automatic surface identification would eliminate storing undue numbers of empty optical sections and forms the basis for incorporating lateral microscope stage motion to collect unlimited numbers of stacks. This walk-away automation of 3D confocal scanning for biological imaging is the first step towards practical, computerized statistical sampling from arbitrarily large tissue volumes. Preliminary results for automatic tissue surface tracking were obtained for phase-contrast microscopy by measuring focus sharpness (previously used for high-speed autofocus by our group). Measurements were taken from 5x5 fields-of-view from hamster liver sections, varying from five to twenty microns in thickness, then smoothed to lessen variations in in-focus information at each axial position. Because image sharpness (as the power of high spatial frequency components) drops across the axial boundaries of a tissue section, mathematical quantities including the full-width at half-maximum, extrema in the first derivative, and second derivative were used to locate the proximal and distal surfaces of a tissue. Results from these tests were evaluated against manual (i.e., visual) determination of section boundaries.
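
    As one simple concrete version of the boundary criteria mentioned above, the sketch below smooths an axial focus-sharpness profile and takes the full-width at half-maximum as the proximal and distal tissue surfaces. The smoothing length, threshold and synthetic profile are assumptions for illustration only.

```python
import numpy as np

def tissue_surfaces_from_sharpness(sharpness, z_positions, smooth=3):
    """Estimate upper and lower tissue surfaces from an axial focus-sharpness
    profile using its full-width at half-maximum."""
    kernel = np.ones(smooth) / smooth
    s = np.convolve(sharpness, kernel, mode="same")     # lessen axial noise
    half_max = s.min() + 0.5 * (s.max() - s.min())
    above = np.where(s >= half_max)[0]
    return z_positions[above[0]], z_positions[above[-1]]  # proximal, distal

z = np.arange(0, 40, 0.5)                                # axial positions, microns
profile = np.exp(-((z - 20) / 6.0) ** 2) + 0.05 * np.random.rand(z.size)
print(tissue_surfaces_from_sharpness(profile, z))
```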

  9. Registration of real-time 3-D ultrasound images of the heart for novel 3-D stress echocardiography.

    PubMed

    Shekhar, Raj; Zagrodsky, Vladimir; Garcia, Mario J; Thomas, James D

    2004-09-01

    Stress echocardiography is a routinely used clinical procedure to diagnose cardiac dysfunction by comparing wall motion information in prestress and poststress ultrasound images. Incomplete data, complicated imaging protocols and misaligned prestress and poststress views, however, are known limitations of conventional stress echocardiography. We discuss how the first two limitations are overcome via the use of real-time three-dimensional (3-D) ultrasound imaging, an emerging modality, and have called the new procedure "3-D stress echocardiography." We also show that the problem of misaligned views can be solved by registration of prestress and poststress 3-D image sequences. Such images are misaligned because of variations in placing the ultrasound transducer and stress-induced anatomical changes. We have developed a technique to temporally align 3-D images of the two sequences first and then to spatially register them to rectify probe placement error while preserving the stress-induced changes. The 3-D spatial registration is mutual information-based. Image registration used in conjunction with 3-D stress echocardiography can potentially improve the diagnostic accuracy of stress testing.
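
    For readers who want a starting point for mutual-information-based rigid 3D registration of the kind described here, the sketch below uses SimpleITK, a general-purpose library chosen purely for illustration (it is not mentioned in the record, the file names are hypothetical, and this is not the authors' implementation).

```python
import SimpleITK as sitk

# Rigid, mutual-information-driven registration of a poststress volume to a
# prestress volume to correct probe-placement differences.
fixed = sitk.ReadImage("prestress_3d.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("poststress_3d.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInterpolator(sitk.sitkLinear)
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)          # recovered rigid transform
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "poststress_aligned.nii.gz")
```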

  10. 3D Printed "Earable" Smart Devices for Real-Time Detection of Core Body Temperature.

    PubMed

    Ota, Hiroki; Chao, Minghan; Gao, Yuji; Wu, Eric; Tai, Li-Chia; Chen, Kevin; Matsuoka, Yasutomo; Iwai, Kosuke; Fahad, Hossain M; Gao, Wei; Nyein, Hnin Yin Yin; Lin, Liwei; Javey, Ali

    2017-07-28

    Real-time detection of basic physiological parameters such as blood pressure and heart rate is an important target in wearable smart devices for healthcare. Among these, the core body temperature is one of the most important basic medical indicators of fever, insomnia, fatigue, metabolic functionality, and depression. However, traditional wearable temperature sensors are based upon the measurement of skin temperature, which can vary dramatically from the true core body temperature. Here, we demonstrate a three-dimensional (3D) printed wearable "earable" smart device that is designed to be worn on the ear to track core body temperature from the tympanic membrane (i.e., ear drum) based on an infrared sensor. The device is fully integrated with data processing circuits and a wireless module for standalone functionality. Using this smart earable device, we demonstrate that the core body temperature can be accurately monitored regardless of the environment and activity of the user. In addition, a microphone and actuator are also integrated so that the device can also function as a bone conduction hearing aid. Using 3D printing as the fabrication method enables the device to be customized for the wearer for more personalized healthcare. This smart device provides an important advance in realizing personalized health care by enabling real-time monitoring of one of the most important medical parameters, core body temperature, employed in preliminary medical screening tests.

  11. Ion track reconstruction in 3D using alumina-based fluorescent nuclear track detectors

    NASA Astrophysics Data System (ADS)

    Niklas, M.; Bartz, J. A.; Akselrod, M. S.; Abollahi, A.; Jäkel, O.; Greilich, S.

    2013-09-01

    Fluorescent nuclear track detectors (FNTDs) based on Al2O3:C,Mg single crystal combined with confocal microscopy provide 3D information on ion tracks with a resolution only limited by light diffraction. FNTDs are also ideal substrates to be coated with cells to engineer cell-fluorescent ion track hybrid detectors (Cell-Fit-HD). This radiobiological tool enables a novel platform linking cell responses to physical dose deposition on a sub-cellular level in proton and heavy ion therapies. To achieve spatial correlation between single ion hits in the cell coating and its biological response, the ion traversals have to be reconstructed in 3D using the depth information gained by the FNTD read-out. FNTDs were coated with a confluent human lung adenocarcinoma epithelial (A549) cell layer. Carbon ion irradiation of the hybrid detector was performed perpendicular and at an angle to the detector surface. In situ imaging of the fluorescently labeled cell layer and the FNTD was performed in a sequential read-out. Making use of the trajectory information provided by the FNTD, the accuracy of 3D track reconstruction of single particles traversing the hybrid detector was studied. The accuracy is strongly influenced by the irradiation angle and therefore by the complexity of the FNTD signal. Perpendicular irradiation results in the highest accuracy, with an error smaller than 0.10°. The ability of FNTD technology to provide accurate 3D ion track reconstruction makes it a powerful tool for radiobiological investigations in clinical ion beams, either being used as a substrate to be coated with living tissue or being implanted in vivo.

  12. A real-time moment-tensor inversion system (GRiD-MT-3D) using 3-D Green's functions

    NASA Astrophysics Data System (ADS)

    Nagao, A.; Furumura, T.; Tsuruoka, H.

    2016-12-01

    We developed a real-time moment-tensor inversion system using 3-D Green's functions (GRiD-MT-3D) by improving the current system (GRiD-MT; Tsuruoka et al., 2009), which uses 1-D Green's functions for periods longer than 20 s. Our moment-tensor inversion is applied to the real-time monitoring of earthquakes occurring beneath the Kanto basin area. The basin, which consists of thick sediment layers, lies on the complex subduction of the Philippine-Sea Plate and the Pacific Plate that can significantly affect the seismic wave propagation. We compute 3-D Green's functions using finite-difference-method (FDM) simulations considering a 3-D velocity model, which is based on the Japan Integrated Velocity Structure Model (Koketsu et al., 2012), that includes crust, mantle, and subducting plates. The 3-D FDM simulations are computed over a volume of 468 km by 432 km by 120 km in the EW, NS, and depth directions, respectively, that is discretized into 0.25 km grids. Considering that the minimum S wave velocity of the sedimentary layer is 0.5 km/s, simulations can compute seismograms up to 0.5 Hz. We calculate Green's functions between 24,700 sources, which are distributed every 0.1° in the horizontal direction and every 9 km in the depth direction, and 13 F-net stations. To compute this large number of Green's functions, we used the EIC parallel computer of ERI. The reciprocity theorem, which switches the source and station positions, is used to reduce total computation costs. It took 156 hours to compute all the Green's functions. Results show that at long periods (T>15 s), only small differences are observed between the 3-D and 1-D Green's functions, as indicated by high correlation coefficients of 0.9 between the waveforms. However, at shorter periods (T<10 s), the differences become larger and the correlation coefficients drop to 0.5. The 3-D heterogeneous structure especially affects the Green's functions for ray paths that cross complex geological structures.
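
    Independently of the Green's-function details, the inversion step itself is a linear least-squares problem: the observed waveforms are modelled as the Green's-function matrix times the moment-tensor components. The toy sketch below shows only that generic step with synthetic numbers; it is not the GRiD-MT-3D code.

```python
import numpy as np

def invert_moment_tensor(G, d):
    """Least-squares moment-tensor inversion: d ~ G @ m, where each column of G
    is the station response to one of the six independent tensor components.
    Returns the estimated components and the variance reduction."""
    m, _, _, _ = np.linalg.lstsq(G, d, rcond=None)
    vr = 1.0 - np.sum((d - G @ m) ** 2) / np.sum(d ** 2)
    return m, vr

rng = np.random.default_rng(0)
G = rng.standard_normal((3000, 6))        # 3000 waveform samples x 6 components
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.1])
d = G @ m_true + 0.01 * rng.standard_normal(3000)
m_est, vr = invert_moment_tensor(G, d)
print(np.round(m_est, 3), round(vr, 4))
```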

  13. Real-Time Cell Cycle Imaging in a 3D Cell Culture Model of Melanoma.

    PubMed

    Spoerri, Loredana; Beaumont, Kimberley A; Anfosso, Andrea; Haass, Nikolas K

    2017-01-01

    Aberrant cell cycle progression is a hallmark of solid tumors; therefore, cell cycle analysis is an invaluable technique to study cancer cell biology. However, cell cycle progression has been most commonly assessed by methods that are limited to temporal snapshots or that lack spatial information. Here, we describe a technique that allows spatiotemporal real-time tracking of cell cycle progression of individual cells in a multicellular context. The power of this system lies in the use of 3D melanoma spheroids generated from melanoma cells engineered with the fluorescent ubiquitination-based cell cycle indicator (FUCCI). This technique allows us to gain further and more detailed insight into several relevant aspects of solid cancer cell biology, such as tumor growth, proliferation, invasion, and drug sensitivity.

  14. A 3D feature point tracking method for ion radiation

    NASA Astrophysics Data System (ADS)

    Kouwenberg, Jasper J. M.; Ulrich, Leonie; Jäkel, Oliver; Greilich, Steffen

    2016-06-01

    A robust and computationally efficient algorithm for automated tracking of high densities of particles travelling in (semi-) straight lines is presented. It extends the implementation of Sbalzarini and Koumoutsakos (2005) and is intended for use in the analysis of single ion track detectors. By including information from existing tracks in the exclusion criteria and a recursive cost minimization function, the algorithm is robust to variations in the measured particle tracks. A trajectory relinking algorithm was included to resolve the crossing of tracks in high particle density images. Validation of the algorithm was performed using fluorescent nuclear track detectors (FNTD) irradiated with high and low (heavy) ion fluences, and showed less than 1% faulty trajectories in the latter.
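
    To make the frame-to-frame linking idea concrete, the sketch below shows a greatly simplified greedy nearest-neighbour linker with a maximum-displacement cutoff; the published recursive cost minimization and trajectory relinking are not reproduced, and the example coordinates are arbitrary.

```python
import numpy as np

def link_frames(points_a, points_b, max_disp):
    """Greedy nearest-neighbour linking of detections between two frames.
    Returns (index in frame A, index in frame B) pairs."""
    cost = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    links, used = [], set()
    for i in np.argsort(cost.min(axis=1)):          # most confident links first
        for j in np.argsort(cost[i]):
            if int(j) in used:
                continue                            # detection already claimed
            if cost[i, j] <= max_disp:
                links.append((int(i), int(j)))
                used.add(int(j))
            break                                   # no better unused candidate
    return links

a = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 1.0]])
b = np.array([[5.2, 5.1, 1.3], [0.3, -0.1, 0.2]])
print(link_frames(a, b, max_disp=1.0))              # [(0, 1), (1, 0)]
```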

  15. A Hidden Markov Model for 3D Catheter Tip Tracking with 2D X-ray Catheterization Sequence and 3D Rotational Angiography.

    PubMed

    Ambrosini, Pierre; Smal, Ihor; Ruijters, Daniel; Niessen, Wiro; Moelker, Adriaan; van Walsum, Theo

    2016-11-07

    In minimally invasive image-guided catheterization procedures, physicians require information on the catheter position with respect to the patient's vasculature. However, in fluoroscopic images, visualization of the vasculature requires a toxic contrast agent. Static vasculature roadmapping, which can reduce the usage of iodine contrast, is hampered by the breathing motion in abdominal catheterization. In this paper, we propose a method to track the catheter tip inside the patient's 3D vessel tree using intra-operative single-plane 2D X-ray image sequences and a peri-operative 3D rotational angiography (3DRA). The method is based on a hidden Markov model (HMM) where the states of the model are the possible positions of the catheter tip inside the 3D vessel tree. The transitions from state to state model the probabilities for the catheter tip to move from one position to another. The HMM is updated following the observation scores, based on the registration between the 2D catheter centerline extracted from the 2D X-ray image and the 2D projection of the 3D vessel tree centerline extracted from the 3DRA. The method is extensively evaluated on simulated and clinical datasets acquired during liver abdominal catheterization. The evaluations show a median 3D tip tracking error of 2.3 mm with optimal settings in simulated data. The registered vessels close to the tip have a median distance error of 4.7 mm with angiographic data and optimal settings. Such accuracy is sufficient to help the physicians with an up-to-date roadmapping. The method tracks the catheter tip in real time and enables roadmapping during catheterization procedures.
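
    A generic HMM filtering step of the kind described above looks as follows: the prior over candidate tip positions is propagated through the transition matrix and reweighted by the per-position observation scores. The five-position toy vessel, transition probabilities and scores are invented for illustration and are not the authors' exact formulation.

```python
import numpy as np

def hmm_step(prior, transition, observation_scores):
    """One discrete HMM filtering update: prior over candidate tip positions,
    row-stochastic transition matrix, and per-position observation scores."""
    predicted = prior @ transition
    posterior = predicted * observation_scores
    return posterior / posterior.sum()

# toy vessel with 5 candidate positions; the tip most likely advances by one
prior = np.array([0.7, 0.2, 0.1, 0.0, 0.0])
T = np.array([[0.2, 0.6, 0.2, 0.0, 0.0],
              [0.0, 0.2, 0.6, 0.2, 0.0],
              [0.0, 0.0, 0.2, 0.6, 0.2],
              [0.0, 0.0, 0.0, 0.3, 0.7],
              [0.0, 0.0, 0.0, 0.1, 0.9]])
obs = np.array([0.05, 0.1, 0.6, 0.2, 0.05])   # 2D/3D registration scores
print(np.round(hmm_step(prior, T, obs), 3))
```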

  16. LayTracks3D: A new approach for meshing general solids using medial axis transform

    SciTech Connect

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  17. Oblique needle segmentation and tracking for 3D TRUS guided prostate brachytherapy

    SciTech Connect

    Wei Zhouping; Gardi, Lori; Downey, Donal B.; Fenster, Aaron

    2005-09-15

    An algorithm was developed in order to segment and track brachytherapy needles inserted along oblique trajectories. Three-dimensional (3D) transrectal ultrasound (TRUS) images of the rigid rod simulating the needle inserted into the tissue-mimicking agar and chicken breast phantoms were obtained to test the accuracy of the algorithm under ideal conditions. Because the robot possesses high positioning and angulation accuracies, we used the robot as a "gold standard," and compared the results of algorithm segmentation to the values measured by the robot. Our testing results showed that the accuracy of the needle segmentation algorithm depends on the needle insertion distance into the 3D TRUS image and the angulations with respect to the TRUS transducer, e.g., at a 10 deg. insertion angulation in agar phantoms, the error of the algorithm in determining the needle tip position was less than 1 mm when the insertion distance was greater than 15 mm. Near real-time needle tracking was achieved by scanning a small volume containing the needle. Our tests also showed that the segmentation time was less than 60 ms, and the scanning time was less than 1.2 s, when the insertion distance into the 3D TRUS image was less than 55 mm. In our needle tracking tests in chicken breast phantoms, the errors in determining the needle orientation were less than 2 deg. in robot yaw and 0.7 deg. in robot pitch orientations, for up to 20 deg. needle insertion angles with the TRUS transducer in the horizontal plane when the needle insertion distance was greater than 15 mm.

  18. Real-time hardware for a new 3D display

    NASA Astrophysics Data System (ADS)

    Kaufmann, B.; Akil, M.

    2006-02-01

    We describe in this article a new multi-view auto-stereoscopic display system with a real-time architecture to generate images of n different points of view of a 3D scene. This architecture generates all the different points of view with only one generation process: the different pictures are not generated independently but all at the same time. The architecture generates a frame buffer that contains all the voxels with their three dimensions and regenerates the different pictures on demand from this frame buffer. The memory requirement is reduced because there is no redundant information in the buffer.

  19. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  20. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display provides users the enjoyment of stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be simultaneously displayed without degrading resolution or increasing display cost unacceptably. An alternative to multiple-view presentation is to measure the position of the observer with a viewer-tracking sensor; this viewer-tracking component is essential for fluently rendering and accurately projecting the stereo video. In order to render stereo content with respect to the user's viewpoint and to optically project the content onto the left and right eyes of the user accurately, a real-time viewer-tracking technique that allows the user to move around freely when watching the autostereoscopic display is developed in this study. It comprises face detection using multiple eigenspaces for various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. The Edge Orientation Histogram (EOH) feature on Real AdaBoost is also applied to improve the performance of the original AdaBoost algorithm. The AdaBoost algorithm with Haar features from the OpenCV library developed by Intel is used to detect human faces, and accuracy is enhanced by rotating the image. The frame rate of the viewer-tracking process can reach up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still influenced by varying environmental conditions, the accuracy, robustness and efficiency of the viewer-tracking system are evaluated in this study.
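
    Since the study builds on OpenCV's Haar-cascade AdaBoost face detector, the sketch below shows that standard building block on a webcam stream. The Real AdaBoost / EOH extension and image-rotation step described in the abstract are not part of stock OpenCV and are not shown; the video source index and detector parameters are assumptions.

```python
import cv2

# Stock OpenCV Haar-cascade face detection, one common front end of a
# viewer-tracking pipeline.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("viewer tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```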

  1. 3D coronary motion tracking in swine models with MR tracking catheters.

    PubMed

    Schmidt, Ehud J; Yoneyama, Ryuichi; Dumoulin, Charles L; Darrow, Robert D; Klein, Eric; Kiruluta, Andrew J M; Hayase, Motoya

    2009-01-01

    To develop MR-tracked catheters to delineate the three-dimensional motion of coronary arteries at high spatial and temporal resolution. Catheters with three tracking microcoils were placed into nine swine. During breath-holds, electrocardiographic (ECG)-synchronized 3D motion was measured at varying vessel depths. 3D motion was measured in American Heart Association left anterior descending (LAD) segments 6-7, left circumflex (LCX) segments 11-15, and right coronary artery (RCA) segments 2-3, at 60-115 beats/min heart rates. Similar-length cardiac cycles were averaged. Intercoil cross-correlation identified early systolic phase (ES) and determined segment motion delay. Translational and rotational motion, as a function of cardiac phase, is shown, with directionality and amplitude varying along the vessel length. Rotation (peak-to-peak solid-angle RCA approximately 0.10, LAD approximately 0.06, LCX approximately 0.18 radian) occurs primarily during fast translational motion and increases distally. LCX displacement increases with heart rate by 18%. Phantom simulations of motion effects on high-resolution images, using RCA results, show artifacts due to translation and rotation. Magnetic resonance imaging (MRI) tracking catheters quantify motion at 20 fps and 1 mm³ resolution at multiple vessel depths, exceeding that available with other techniques. Imaging artifacts due to rotation are demonstrated. Motion-tracking catheters may provide physiological information during interventions and improve imaging spatial resolution.

  2. Real-Time Camera Guidance for 3d Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Schindler, F.; Förstner, W.

    2012-07-01

    We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both camera trajectory and 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after a few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots as well as online flight planning of unmanned aerial vehicles.

  3. Nonintrusive viewpoint tracking for 3D for perception in smart video conference

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Martinez-Ponte, Isabel; Meessen, Jerome; Delaigle, Jean-François

    2006-02-01

    Globalisation of people's interaction in the industrial world and the ecological cost of transport make video-conferencing an interesting solution for collaborative work. However, the lack of immersive perception makes video-conferencing unappealing. The TIFANIS tele-immersion system was conceived to let users interact as if they were physically together. In this paper, we focus on an important feature of the immersive system: the automatic tracking of the user's point of view in order to correctly render in his display the scene from the other site. Viewpoint information has to be computed in a very short time and the detection system should be non-intrusive; otherwise it would become cumbersome for the user, i.e. he would lose the feeling of "being there". The viewpoint detection system consists of several modules. First, an analysis module identifies and follows regions of interest (ROI) where faces are detected. We will show the cooperative approach between spatial detection and temporal tracking. Secondly, an eye detector finds the position of the eyes within faces. Then, the 3D positions of the eyes are deduced using stereoscopic images from a binocular camera. Finally, the 3D scene is rendered in real time according to the new point of view.
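
    The stereo step can be sketched with OpenCV's triangulation routine: given the projection matrices of the calibrated binocular pair and the detected eye centre in each image, the 3D eye position follows directly. The intrinsics, 12 cm baseline and pixel coordinates below are placeholders, not the TIFANIS calibration.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                     # left camera at origin
P_right = K @ np.hstack([np.eye(3), np.array([[-0.12], [0.0], [0.0]])])   # assumed 12 cm baseline

eye_left = np.array([[352.0], [251.0]])    # detected eye centre, left image (pixels)
eye_right = np.array([[298.0], [251.0]])   # detected eye centre, right image (pixels)

X_h = cv2.triangulatePoints(P_left, P_right, eye_left, eye_right)
eye_3d = (X_h[:3] / X_h[3]).ravel()        # metres, in the left-camera frame
print(eye_3d)
```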

  4. Ebstein's anomaly assessed by real-time 3-D echocardiography.

    PubMed

    Acar, Philippe; Abadir, Sylvia; Roux, Daniel; Taktak, Assaad; Dulac, Yves; Glock, Yves; Fournial, Gerard

    2006-08-01

    The outcome of patients with Ebstein's malformation depends mainly on the severity of the tricuspid valve malformation. Accurate description of the tricuspid anatomy by two-dimensional echocardiography remains difficult. We applied real-time three-dimensional echocardiography to 3 patients with Ebstein's anomaly. Preoperative and postoperative descriptions of the tricuspid valve were obtained from views taken inside the right ventricle. The surfaces of the leaflets as well as the commissures were obtained by three-dimensional echocardiography. Real-time three-dimensional echocardiography is a promising tool, providing new views that will help to evaluate the ability and efficiency of surgical valve repair in patients with Ebstein's malformation.

  5. Measuring anisotropy as a function of scale in turbulence using 3D particle tracking

    NASA Astrophysics Data System (ADS)

    Wijesinghe, Susantha; Voth, Greg

    2012-02-01

    We report the first full 3D experimental measurements of anisotropy as a function of scale in turbulence. From 3D particle tracks obtained with stereoscopic high-speed video, we measure the Eulerian structure functions and decompose them into irreducible representations of the SO(3) rotation group. This method allows us to quantify the anisotropy in different sectors, specified by the indices j and m of the spherical harmonics Yjm(θ,φ), at all scales in the flow. We study a turbulent flow between two oscillating grids in an octagonal tank filled with 1100 l of water with Rλ=265. An image compression system processes high-speed video from four cameras in real time, allowing us to acquire the huge data sets required for a full 3D measurement of anisotropy as a function of scale. Careful selection of a sample of measurements with isotropic orientations is necessary to ensure that anisotropy of the measurement system does not affect the measured anisotropy of the flow. Increasing j sectors show faster decay of anisotropy as scale decreases, consistent with the idea that the small scales should become isotropic at very high Reynolds number. However, conditioning the measured anisotropy on the instantaneous velocity reveals that characterization of anisotropy in an inhom

  6. Real-time 3-D ultrasound scan conversion using a multicore processor.

    PubMed

    Zhuang, Bo; Shamdasani, Vijay; Sikdar, Siddhartha; Managuli, Ravi; Kim, Yongmin

    2009-07-01

    Real-time 3-D ultrasound scan conversion (SC) in software has not been practical due to its high computation and I/O data handling requirements. In this paper, we describe software-based 3-D SC with high volume rates using a multicore processor, Cell. We have implemented both 3-D SC approaches: 1) the separable 3-D SC where two 2-D coordinate transformations in orthogonal planes are performed in sequence and 2) the direct 3-D SC where the coordinate transformation is directly handled in 3-D. One Cell processor can scan-convert a 192 x 192 x 192 16-bit volume at 87.8 volumes/s with the separable 3-D SC algorithm and 28 volumes/s with the direct 3-D SC algorithm.
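
    To illustrate what "direct 3D scan conversion" means in practice, the sketch below resamples a pyramidal acquisition indexed by (range, azimuth, elevation) onto a Cartesian grid in a single trilinear interpolation pass. It is written with NumPy/SciPy for clarity; the Cell-processor implementation, geometry parameters and grid sizes in the paper are not reproduced here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def scan_convert_3d(acq, r_max, theta_max, phi_max, out_n=128):
    """Direct 3D scan conversion: acq is sampled on (range, azimuth, elevation);
    the output is a Cartesian out_n**3 volume (zeros outside the pyramid)."""
    nr, nt, np_ = acq.shape
    half = r_max * np.sin(theta_max / 2)
    x = np.linspace(-half, half, out_n)
    y = np.linspace(-r_max * np.sin(phi_max / 2), r_max * np.sin(phi_max / 2), out_n)
    z = np.linspace(0.0, r_max, out_n)
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    theta = np.arctan2(X, Z)                 # azimuth
    phi = np.arctan2(Y, Z)                   # elevation
    idx = np.stack([r / r_max * (nr - 1),
                    (theta / theta_max + 0.5) * (nt - 1),
                    (phi / phi_max + 0.5) * (np_ - 1)])
    return map_coordinates(acq, idx, order=1, cval=0.0)

vol = np.random.rand(192, 192, 192).astype(np.float32)   # synthetic acquisition
cart = scan_convert_3d(vol, r_max=0.12,
                       theta_max=np.deg2rad(65), phi_max=np.deg2rad(65))
print(cart.shape)   # (128, 128, 128)
```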

  7. Realization of real-time interactive 3D image holographic display [Invited].

    PubMed

    Chen, Jhen-Si; Chu, Daping

    2016-01-20

    Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements will be reviewed and discussed, while algorithms of rapid hologram calculation will be presented with the corresponding results. Our vision of interactive holographic 3D displays will be discussed.

  8. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Artuso, M.; Bachmair, F.; Bäni, L.; Bartosik, M.; Beacham, J.; Bellini, V.; Belyaev, V.; Bentele, B.; Berdermann, E.; Bergonzo, P.; Bes, A.; Brom, J.-M.; Bruzzi, M.; Cerv, M.; Chau, C.; Chiodini, G.; Chren, D.; Cindro, V.; Claus, G.; Collot, J.; Costa, S.; Cumalat, J.; Dabrowski, A.; D`Alessandro, R.; de Boer, W.; Dehning, B.; Dobos, D.; Dünser, M.; Eremin, V.; Eusebi, R.; Forcolin, G.; Forneris, J.; Frais-Kölbl, H.; Gan, K. K.; Gastal, M.; Goffe, M.; Goldstein, J.; Golubev, A.; Gonella, L.; Gorišek, A.; Graber, L.; Grigoriev, E.; Grosse-Knetter, J.; Gui, B.; Guthoff, M.; Haughton, I.; Hidas, D.; Hits, D.; Hoeferkamp, M.; Hofmann, T.; Hosslet, J.; Hostachy, J.-Y.; Hügging, F.; Jansen, H.; Janssen, J.; Kagan, H.; Kanxheri, K.; Kasieczka, G.; Kass, R.; Kassel, F.; Kis, M.; Kramberger, G.; Kuleshov, S.; Lacoste, A.; Lagomarsino, S.; Lo Giudice, A.; Maazouzi, C.; Mandic, I.; Mathieu, C.; McFadden, N.; McGoldrick, G.; Menichelli, M.; Mikuž, M.; Morozzi, A.; Moss, J.; Mountain, R.; Murphy, S.; Oh, A.; Olivero, P.; Parrini, G.; Passeri, D.; Pauluzzi, M.; Pernegger, H.; Perrino, R.; Picollo, F.; Pomorski, M.; Potenza, R.; Quadt, A.; Re, A.; Riley, G.; Roe, S.; Sapinski, M.; Scaringella, M.; Schnetzer, S.; Schreiner, T.; Sciortino, S.; Scorzoni, A.; Seidel, S.; Servoli, L.; Sfyrla, A.; Shimchuk, G.; Smith, D. S.; Sopko, B.; Sopko, V.; Spagnolo, S.; Spanier, S.; Stenson, K.; Stone, R.; Sutera, C.; Taylor, A.; Traeger, M.; Tromson, D.; Trischuk, W.; Tuve, C.; Uplegger, L.; Velthuis, J.; Venturi, N.; Vittone, E.; Wagner, S.; Wallny, R.; Wang, J. C.; Weilhammer, P.; Weingarten, J.; Weiss, C.; Wengler, T.; Wermes, N.; Yamouni, M.; Zavrtanik, M.

    2016-07-01

    In the present study, results towards the development of a 3D diamond sensor are presented. Conductive channels are produced inside the sensor bulk using a femtosecond laser. This electrode geometry allows full charge collection even for low quality diamond sensors. Results from testbeam show that charge is collected by these electrodes. In order to understand the channel growth parameters, with the goal of producing low resistivity channels, the conductive channels produced with a different laser setup are evaluated by Raman spectroscopy.

  9. Real-time 3D vibration measurements in microstructures

    NASA Astrophysics Data System (ADS)

    Kowarsch, Robert; Ochs, Wanja; Giesen, Moritz; Dräbenstedt, Alexander; Winter, Marcus; Rembe, Christian

    2012-04-01

    The real-time measurement of three-dimensional vibrations is currently a major interest of academic research and industrial device characterization. The most common and practical solution used so far consists of three single-point laser-Doppler vibrometers which measure vibrations of a scattering surface from three directions. The resulting three velocity vectors are transformed into a Cartesian coordinate system. This technique does also work for microstructures but has some drawbacks: (1) The surface needs to scatter light, (2) the three laser beams can generate optical crosstalk if at least two laser frequencies match within the demodulation bandwidth, and (3) the laser beams have to be separated on the surface under test to minimize optical crosstalk such that reliable measurements are possible. We present a novel optical approach, based on the direction-dependent Doppler effect, which overcomes all the drawbacks of the current technology. We have realized a demonstrator with a measurement spot of < 3.5 μm diameter that does not suffer from optical crosstalk because only one laser beam impinges the specimen surface while the light is collected from three different directions.
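
    The conventional three-beam approach mentioned above reduces to a small linear algebra step: each vibrometer measures the projection of the velocity vector onto its beam direction, so three non-coplanar beams give a 3x3 system. The directions and measured values below are illustrative only.

```python
import numpy as np

# Unit vectors of the three measurement beams and the velocities measured
# along them (placeholder values).
directions = np.array([[0.0, 0.0, 1.0],
                       [0.8, 0.0, 0.6],
                       [0.0, 0.8, 0.6]])
measured = np.array([1.5, 2.1, 0.9])        # mm/s along each beam

# each measurement is d_i . v, so the Cartesian velocity solves a 3x3 system
v_xyz = np.linalg.solve(directions, measured)
print(v_xyz)
```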

  10. Advanced time-of-flight range camera with novel real-time 3D image processing

    NASA Astrophysics Data System (ADS)

    Koenig, Bernhard; Hosticka, Bedrich; Mengel, Peter; Listl, Ludwig

    2007-09-01

    We present a solid state range camera covering measurement distances from 2 m to 25 m and novel real-time 3D image processing algorithms for object detection, tracking and classification based on the three-dimensional features of the camera's output data. The technology is based on a 64x8 pixel array CMOS image sensor which is capable of capturing three-dimensional images by executing indirect time-of-flight (ToF) measurement of NIR laser pulses emitted by the camera and reflected by the objects in the camera's field of view. Here the so-called "multiple double short time integration" (MDSI) method enables unprecedented reliability and robustness with respect to suppression of background irradiance and insensitivity to reflectivity variations in the object scene. Output data are conventional intensity values and distance values with accuracies in the centimeter range at image repetition rates up to 100 Hz. An evaluation of the camera's performance in typical road safety related test scenarios is the subject of this paper. Furthermore we introduce real-time image processing of the output data stream of the camera aiming at the segmentation of objects located in the camera's surroundings and the derivation of reliable position, speed and acceleration estimates. The segmentation algorithm utilizes the position information of all three spatial dimensions as well as the intensity values and thus yields significant segmentation improvement compared to segmentation in conventional 2D pictures. Position, velocity and acceleration values of the segmented objects are estimated by means of Kalman filtering in 3D space. The filter dynamically adapts to the measurement conditions to take care of changes of the scene data properties. Flow and performance of the whole processing chain are presented by means of example scenes.
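
    The estimation step can be illustrated with a generic constant-acceleration Kalman filter in 3D: the state holds position, velocity and acceleration, and each segmented object contributes a 3D position measurement per frame. The process- and measurement-noise values are illustrative tuning numbers, not the adaptive scheme described in the paper.

```python
import numpy as np

def kalman_3d_step(x, P, z, dt, q=1.0, r=0.05):
    """One predict/update step of a constant-acceleration Kalman filter.
    State x = [position(3), velocity(3), acceleration(3)]; z = measured 3D position."""
    A1 = np.array([[1.0, dt, 0.5 * dt**2],
                   [0.0, 1.0, dt],
                   [0.0, 0.0, 1.0]])
    F = np.kron(A1, np.eye(3))                       # block form for x, y, z
    H = np.kron(np.array([[1.0, 0.0, 0.0]]), np.eye(3))
    Q, R = q * np.eye(9), r * np.eye(3)
    x = F @ x                                        # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                              # update with new measurement
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(9) - K @ H) @ P
    return x, P

x, P = np.zeros(9), np.eye(9)
x, P = kalman_3d_step(x, P, z=np.array([4.8, 0.2, 1.1]), dt=0.01)
print(np.round(x[:3], 3))
```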

  11. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework could track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  12. THE THOMSON SURFACE. III. TRACKING FEATURES IN 3D

    SciTech Connect

    Howard, T. A.; DeForest, C. E.; Tappin, S. J.; Odstrcil, D.

    2013-03-01

    In this, the final installment in a three-part series on the Thomson surface, we present simulated observations of coronal mass ejections (CMEs) observed by a hypothetical polarizing white light heliospheric imager. Thomson scattering yields a polarization signal that can be exploited to locate observed features in three dimensions relative to the Thomson surface. We consider how the appearance of the CME changes with the direction of trajectory, using simulations of a simple geometrical shape and also of a more realistic CME generated using the ENLIL model. We compare the appearance in both unpolarized B and polarized pB light, and show that there is a quantifiable difference in the measured brightness of a CME between unpolarized and polarized observations. We demonstrate a technique for using this difference to extract the three-dimensional (3D) trajectory of large objects such as CMEs. We conclude with a discussion on how a polarizing heliospheric imager could be used to extract 3D trajectory information about CMEs or other observed features.

  13. High resolution 3D insider detection and tracking.

    SciTech Connect

    Nelson, Cynthia Lee

    2003-09-01

    Vulnerability analysis studies show that one of the worst threats against a facility is that of an active insider during an emergency evacuation. When a criticality or other emergency alarm occurs, employees immediately proceed along evacuation routes to designated areas. Procedures are then implemented to account for all material, classified parts, etc. The 3-Dimensional Video Motion Detection (3DVMD) technology could be used to detect and track possible insider activities during alarm situations, as just described, as well as during normal operating conditions. The 3DVMD technology uses multiple cameras to create 3-dimensional detection volumes or zones. Movement throughout detection zones is tracked and high-level information, such as the number of people and their direction of motion, is extracted. In the described alarm scenario, deviances of evacuation procedures taken by an individual could be immediately detected and relayed to a central alarm station. The insider could be tracked and any protected items removed from the area could be flagged. The 3DVMD technology could also be used to monitor such items as machines that are used to build classified parts. During an alarm, detections could be made if items were removed from the machine. Overall, the use of 3DVMD technology during emergency evacuations would help to prevent the loss of classified items and would speed recovery from emergency situations. Further security could also be added by analyzing tracked behavior (motion) as it corresponds to predicted behavior, e.g., behavior corresponding with the execution of required procedures. This information would be valuable for detecting a possible insider not only during emergency situations, but also during times of normal operation.

  14. An optical real-time 3D measurement for analysis of facial shape and movement

    NASA Astrophysics Data System (ADS)

    Zhang, Qican; Su, Xianyu; Chen, Wenjing; Cao, Yiping; Xiang, Liqun

    2003-12-01

    Optical non-contact 3-D shape measurement provides a novel and useful tool for analysis of facial shape and movement in regular presurgical and postsurgical checks. In this article we present a system which allows a precise 3-D visualization of the patient's face before and after craniofacial surgery. We discuss the real-time 3-D image capture and processing, and the 3-D phase unwrapping method used to recover complex shape deformation during movement of the mouth. The results of real-time measurement of facial shape and movement should help achieve better outcomes in plastic surgery.

  15. Feature point based 3D tracking of multiple fish from multi-view images

    PubMed Central

    Qian, Zhi-Ming

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966

  16. Tracking 3-D body motion for docking and robot control

    NASA Technical Reports Server (NTRS)

    Donath, M.; Sorensen, B.; Yang, G. B.; Starr, R.

    1987-01-01

    An advanced method of tracking three-dimensional motion of bodies has been developed. This system has the potential to dynamically characterize machine and other structural motion, even in the presence of structural flexibility, thus facilitating closed loop structural motion control. The system's operation is based on the concept that the intersection of three planes defines a point. Three rotating planes of laser light, fixed and moving photovoltaic diode targets, and a pipe-lined architecture of analog and digital electronics are used to locate multiple targets whose number is only limited by available computer memory. Data collection rates are a function of the laser scan rotation speed and are currently selectable up to 480 Hz. The tested performance on a preliminary prototype designed for 0.1 in accuracy (for tracking human motion) at a 480 Hz data rate includes a worst case resolution of 0.8 mm (0.03 inches), a repeatability of plus or minus 0.635 mm (plus or minus 0.025 inches), and an absolute accuracy of plus or minus 2.0 mm (plus or minus 0.08 inches) within an eight cubic meter volume with all results applicable at the 95 percent level of confidence along each coordinate region. The full six degrees of freedom of a body can be computed by attaching three or more target detectors to the body of interest.
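
    The geometric core stated above, "the intersection of three planes defines a point", is a 3x3 linear solve: each plane is n·p = d, so stacking the three normals gives the target position directly. The plane parameters below are placeholders, not calibration data from the system.

```python
import numpy as np

def plane_intersection(normals, offsets):
    """Target position as the intersection of three light planes n_i . p = d_i."""
    return np.linalg.solve(np.asarray(normals), np.asarray(offsets))

normals = [[1.0, 0.0, 0.2],      # plane normals at one instant of the scan
           [0.0, 1.0, -0.1],
           [0.1, 0.1, 1.0]]
offsets = [0.52, 1.10, 2.03]     # n . p = d for each plane (metres)
print(plane_intersection(normals, offsets))
```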

  17. Modeling cell migration on filamentous tracks in 3D

    NASA Astrophysics Data System (ADS)

    Schwarz, J. M.

    2014-03-01

    Cell motility is integral to a number of physiological processes ranging from wound healing to immune response to cancer metastasis. Many studies of cell migration, both experimental and theoretical, have addressed various aspects of it in two dimensions, including protrusion and retraction at the level of single cells. However, the in vivo environment for a crawling cell is typically a three-dimensional environment, consisting of the extracellular matrix (ECM) and surrounding cells. Recent experiments demonstrate that some cells crawling along fibers of the ECM mimic the geometry of the fibers to become long and thin, as opposed to fan-like in two dimensions, and can remodel the ECM. Inspired by these experiments, a model cell consisting of beads and springs that moves along a tense semiflexible filamentous track is constructed and studied, paying particular attention to the mechanical feedback between the model cell and the track, as mediated by the active myosin-driven contractility and the catch/slip bond behavior of the focal adhesions, as the model cell crawls. This simple construction can then be scaled up to a model cell moving along a three-dimensional filamentous network, with a prescribed microenvironment, in order to make predictions for proposed experiments.

  18. Experimental characterization of 3D localization techniques for particle-tracking and super-resolution microscopy.

    PubMed

    Mlodzianoski, Michael J; Juette, Manuel F; Beane, Glen L; Bewersdorf, Joerg

    2009-05-11

    Three-dimensional (3D) particle localization at the nanometer scale plays a central role in 3D particle tracking and 3D localization-based super-resolution microscopy. Here we introduce a localization algorithm that is independent of theoretical models and therefore generally applicable to a large number of experimental realizations. Applying this algorithm and a convertible experimental setup we compare the performance of the two major 3D techniques based on astigmatic distortions and on multiplane detection. In both methods we obtain experimental 3D localization accuracies in agreement with theoretical predictions and characterize the depth dependence of the localization accuracy in detail.

  19. Analysis of a vibrating interventional device to improve 3-D colormark tracking.

    PubMed

    Fronheiser, Matthew P; Smith, Stephen W

    2007-08-01

    Ultrasound guidance of interventional devices during minimally invasive surgical procedures has been investigated by many researchers. Previously, we extended the methods used by the Colormark tracking system to several interventional devices using a real-time, three-dimensional (3-D) ultrasound system. These results showed that we needed to improve the efficiency and reliability of the tracking. In this paper, we describe an analytical model to predict the transverse vibrations along the length of an atrial septal puncture needle to enable design improvements of the tracking system. We assume the needle can be modeled as a hollow bar with a circular cross section with a fixed proximal end and a free distal end that is suspended vertically to ignore gravity effects. The initial results show an ability to predict the natural nodes and antinodes along the needle using the characteristic equation for free vibrations. Simulations show that applying a forcing function to the device at a natural antinode yields an order of magnitude larger vibration than when driving the device at a node. Pulsed wave spectral Doppler data was acquired along the distal portion of the needle in a water tank using a 2-D matrix array transesophageal echocardiography probe. This data was compared to simulations of forced vibrations from the model. These initial results suggest that the model is a good first order approximation of the vibrating device in a water tank. It is our belief that knowing the location of the natural nodes and antinodes will improve our ability to drive the device to ensure the vibrations at the proximal end will reach the tip of the device, which in turn should improve our ability to track the device in vivo.
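
    The fixed-free (cantilever) Euler-Bernoulli model invoked above has the standard characteristic equation cos(beta*L)*cosh(beta*L) + 1 = 0 for free vibrations, whose roots set the natural frequencies and hence the node/antinode pattern along the needle. A sketch of that calculation; the needle dimensions and material properties are placeholders, not the paper's values:

      import numpy as np
      from scipy.optimize import brentq

      # Characteristic equation for a fixed-free (cantilever) Euler-Bernoulli beam
      f = lambda x: np.cos(x) * np.cosh(x) + 1.0

      # Bracket the first three roots near their textbook values (1.875, 4.694, 7.855)
      roots = [brentq(f, lo, hi) for lo, hi in [(1.5, 2.5), (4.0, 5.5), (7.0, 8.5)]]

      # Placeholder needle properties (NOT the paper's values): stainless steel,
      # 1.2 mm outer / 0.9 mm inner diameter, 0.3 m free length.
      E, rho = 193e9, 8000.0            # Young's modulus [Pa], density [kg/m^3]
      Do, Di, L = 1.2e-3, 0.9e-3, 0.30  # outer/inner diameter, free length [m]
      I = np.pi * (Do**4 - Di**4) / 64.0    # second moment of area (hollow circle)
      A = np.pi * (Do**2 - Di**2) / 4.0     # cross-sectional area

      for n, bL in enumerate(roots, 1):
          omega = bL**2 * np.sqrt(E * I / (rho * A * L**4))  # natural frequency [rad/s]
          print(f"mode {n}: beta*L = {bL:.3f}, f = {omega / (2*np.pi):.1f} Hz")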

  20. Tracking Protein-coated Particles in 3D.

    NASA Astrophysics Data System (ADS)

    Gratton, Enrico

    2006-03-01

    The utilization of 2-photon microscopy in the field of Cell Biology is of increasing importance because it allows imaging of living cells, including those systems where UV imaging is not possible due to photobleaching or photodamage limitations. We propose a novel approach using 2-photon excitation based on the use of a scanner to produce an effective ``intensity trap''. As the particle moves in this trap (note that there is no force applied on the particle at the power level we are using for particle detection), the detection system continuously calculates the position of the particle in the trap. As the position of the particle is calculated with respect to the trap, the scanner position is moved to minimize the ``modulation'' of the light intensity in the trap. In practice, we set the scanner to perform an orbit around the particle in about 1 millisecond. The sampling rate is chosen such that many points (32 or 64) are acquired during the orbit. An FFT (Fast Fourier Transform) is performed on the points acquired during one orbit or after a series of orbits. The DC, AC and phase of the first harmonic of the FFT are calculated. The value of the modulation varies monotonically as the distance of the particle from the center of the orbit is increased so that for every value of the modulation we can estimate the value of the distance of the particle from the center of the orbit. The phase of the first harmonic gives the angular position of the particle with respect to the scanner zero phase which is known relative to the lab coordinates. The effective bandwidth of the tracking system depends on the maximum frequency for sinusoidal oscillation of the scanner, which is about 5 kHz for our galvano-scanner and on the number of photons needed for detecting the particle against the noise. Of course, there are other important considerations. First, if the motion of the particle is too fast such that after one orbit the particle moves too far from the new position calculated
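
    The per-orbit position estimate follows from the DC and first-harmonic terms of the FFT of the sampled intensity: the modulation (taken here as AC/DC) increases monotonically with the particle's distance from the orbit centre, and the first-harmonic phase gives its angular position. A sketch under those assumptions, on synthetic data rather than the instrument's:

      import numpy as np

      def orbit_estimate(intensity):
          """Estimate the modulation and angular position of a particle from
          fluorescence intensities sampled at equal angles along one scanner orbit."""
          n = len(intensity)
          spectrum = np.fft.rfft(intensity)
          dc = spectrum[0].real / n
          first = spectrum[1]                         # first harmonic
          modulation = 2.0 * np.abs(first) / n / dc   # grows with radial offset
          angle = -np.angle(first)                    # FFT phase negated -> particle angle
          return modulation, angle

      # Synthetic orbit: 64 samples, particle slightly off-centre at 30 degrees
      theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
      samples = 100.0 * (1.0 + 0.25 * np.cos(theta - np.deg2rad(30.0)))
      m, a = orbit_estimate(samples)
      print(round(m, 3), round(np.degrees(a), 1))     # ~0.25 and ~30.0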

  1. On the dynamics of jellyfish locomotion via 3D particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Piper, Matthew; Kim, Jin-Tae; Chamorro, Leonardo P.

    2016-11-01

    The dynamics of jellyfish (Aurelia aurita) locomotion is experimentally studied via 3D particle tracking velocimetry. 3D locations of the bell tip are tracked over 1.5 cycles to describe the jellyfish path. Multiple positions of the jellyfish bell margin are initially tracked in 2D from four independent planes and individually projected in 3D based on the jellyfish path and geometrical properties of the setup. A cubic spline interpolation and the exponentially weighted moving average are used to estimate derived quantities, including velocity and acceleration of the jellyfish locomotion. We will discuss distinctive features of the jellyfish 3D motion at various swimming phases, and will provide insight on the 3D contraction and relaxation in terms of the locomotion, the steadiness of the bell margin eccentricity, and local Reynolds number based on the instantaneous mean diameter of the bell.
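
    The derived-quantity step described above, cubic-spline interpolation of the tracked positions followed by exponentially weighted moving-average smoothing of the derivatives, can be sketched as follows; the synthetic trajectory and smoothing weight are illustrative assumptions:

      import numpy as np
      from scipy.interpolate import CubicSpline

      def smooth_kinematics(t, xyz, alpha=0.3):
          """Fit cubic splines to 3D positions, differentiate to get velocity and
          acceleration, then smooth both with an exponentially weighted moving average.
          t: (N,) sample times; xyz: (N, 3) tracked positions; alpha: EWMA weight."""
          spline = CubicSpline(t, xyz, axis=0)
          vel = spline(t, 1)     # first derivative  -> velocity
          acc = spline(t, 2)     # second derivative -> acceleration

          def ewma(x):
              y = np.empty_like(x)
              y[0] = x[0]
              for i in range(1, len(x)):
                  y[i] = alpha * x[i] + (1.0 - alpha) * y[i - 1]
              return y

          return ewma(vel), ewma(acc)

      # Synthetic bell-tip trajectory sampled at 50 Hz over 1.5 s
      t = np.linspace(0.0, 1.5, 75)
      xyz = np.stack([np.sin(2*np.pi*t), np.cos(2*np.pi*t), 0.1*t], axis=1)
      vel, acc = smooth_kinematics(t, xyz)
      print(vel.shape, acc.shape)   # (75, 3) (75, 3)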

  2. Tensor3D: A computer graphics program to simulate 3D real-time deformation and visualization of geometric bodies

    NASA Astrophysics Data System (ADS)

    Pallozzi Lavorante, Luca; Dirk Ebert, Hans

    2008-07-01

    Tensor3D is a geometric modeling program with the capacity to simulate and visualize, in real time, deformation specified through a tensor matrix and applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shears. Besides geometric objects that are immediately available in the program window, the program can read other models from disk, thus being able to import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later and deformed again, in order to study different steps of progressive strain, or to make this data available to other programs. The shape of stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, together with the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for the study of geometric deformations directly in three dimensions in teaching as well as research activities.
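
    The central operation, applying a deformation described by a tensor matrix to the vertices of a triangulated model and reading off the strain ellipsoid, can be sketched as below; the simple-shear tensor is an illustrative value, not taken from the program:

      import numpy as np

      def deform(vertices, F):
          """Apply the homogeneous deformation described by the 3x3 tensor F
          to an (N, 3) array of vertex coordinates."""
          return vertices @ F.T

      # Simple-shear tensor (gamma = 0.8 in the x-y plane) -- illustrative values only
      F = np.array([[1.0, 0.8, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])

      # The strain ellipsoid is the image of the unit sphere under F; its principal
      # stretches and axis directions follow from the SVD of F.
      U, s, Vt = np.linalg.svd(F)
      print("principal stretches:", s)       # semi-axis lengths of the strain ellipsoid
      print("principal directions:\n", U)    # orientation of the ellipsoid axes

      cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
      print(deform(cube, F))                 # sheared unit cube vertices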

  3. Ultra-Wideband Time-Difference-of-Arrival High Resolution 3D Proximity Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dekome, Kent; Dusl, John

    2010-01-01

    This paper describes a research and development effort for a prototype ultra-wideband (UWB) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being studied for use in tracking of lunar/Mars rovers and astronauts during early exploration missions when satellite navigation systems are not available. UWB impulse radio (UWB-IR) technology is exploited in the design and implementation of the prototype location and tracking system. A three-dimensional (3D) proximity tracking prototype design using commercially available UWB products is proposed to implement the Time-Difference-Of-Arrival (TDOA) tracking methodology in this research effort. The TDOA tracking algorithm is utilized for location estimation in the prototype system, not only to exploit the precise time resolution possible with UWB signals, but also to eliminate the need for synchronization between the transmitter and the receiver. Simulations show that the TDOA algorithm can achieve fine tracking resolution with low-noise TDOA estimates for close-in tracking. Field tests demonstrated that this prototype UWB TDOA High Resolution 3D Proximity Tracking System is feasible for providing positioning-awareness information in a 3D space to a robotic control system. This 3D tracking system is developed for a robotic control system in a facility called "Moonyard" at Honeywell Defense & System in Arizona under a Space Act Agreement.
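
    The TDOA location estimate amounts to finding the point whose predicted range differences to the receivers best match the measured ones. A minimal nonlinear least-squares sketch; the receiver layout, transmitter position and noise-free measurements are hypothetical, not the JSC prototype geometry:

      import numpy as np
      from scipy.optimize import least_squares

      def tdoa_residuals(p, anchors, range_diffs):
          """Residuals between measured range differences (c * TDOA, relative to
          receiver 0) and those predicted for candidate position p."""
          r = np.linalg.norm(anchors - p, axis=1)
          return (r[1:] - r[0]) - range_diffs

      # Hypothetical receiver positions [m] and a true transmitter location
      anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
      true_pos = np.array([3.0, 4.0, 2.0])
      r = np.linalg.norm(anchors - true_pos, axis=1)
      range_diffs = r[1:] - r[0]            # noise-free c * TDOA measurements

      est = least_squares(tdoa_residuals, x0=np.array([5.0, 5.0, 5.0]),
                          args=(anchors, range_diffs)).x
      print(est)                            # should recover approximately [3, 4, 2]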

  4. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

    Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and in conjunction with other modalities like CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found to be useful in real clinical settings, it is difficult to augment their functionality and integrate them into versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image-guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (e.g. dual-view visualization, registration, real-time tracking, segmentation, etc.) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.

  5. A new 3D tracking method exploiting the capabilities of digital holography in microscopy

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Merola, F.; Fusco, S.; Embrione, V.; Netti, P. A.; Ferraro, P.

    2013-04-01

    A method for 3D tracking has been developed exploiting Digital Holographic Microscopy (DHM) features. In the framework of a self-consistent platform for manipulation and measurement of biological specimens, we use DHM for quantitative and completely label-free analysis of specimens with low amplitude contrast. Tracking capability extends the potential of DHM by allowing the motion of appropriate probes to be monitored and correlated with sample properties. Complete 3D tracking has been obtained for the probes, avoiding the issue of amplitude refocusing in traditional tracking processing. Our technique belongs to the video tracking methods, which, unlike the quadrant photodiode method, open the possibility of tracking multiple probes. All the commonly used video tracking algorithms are based on numerical analysis of amplitude images in the focus plane, where the shift of the maxima in the image plane is measured after an appropriate threshold is applied. Our approach to video tracking uses a different theoretical basis. A set of interferograms is recorded and the complex wavefields are processed numerically to obtain the three-dimensional displacements of the probes. The procedure works properly on a higher number of probes and independently of their size. This method overcomes traditional video tracking issues such as the inability to measure axial movement and the need to choose a suitable threshold mask. The novel configuration allows 3D tracking of micro-particles and can simultaneously furnish quantitative phase-contrast maps of the tracked micro-objects by interference microscopy, without changing the configuration. In this paper, we show a new concept for a compact interferometric microscope that can ensure multifunctionality, accomplishing accurate 3D tracking and quantitative phase-contrast analysis. Experimental results are presented and discussed for in vitro cells. Through a very simple and compact optical arrangement we show how two different functionalities

  6. FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Castro-Pareja, Carlos R.; Dandekar, Omkar S.; Shekhar, Raj

    2005-02-01

    Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography, obstetrics and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.
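
    As a point of reference for the filtering the FPGA implements, a plain-CPU sketch of classical Perona-Malik anisotropic diffusion on a 3D volume is shown below; the parameter values and the 6-connected explicit scheme are assumptions for illustration, not the paper's hardware architecture:

      import numpy as np

      def perona_malik_3d(vol, n_iter=10, kappa=30.0, step=0.1):
          """CPU reference of Perona-Malik anisotropic diffusion on a 3D volume,
          using 6-connected finite differences and the exponential conductance g."""
          v = vol.astype(np.float32).copy()
          for _ in range(n_iter):
              total = np.zeros_like(v)
              for axis in range(3):
                  # forward/backward differences (zero flux at the volume borders)
                  fwd = np.diff(v, axis=axis, append=np.take(v, [-1], axis=axis))
                  bwd = np.diff(v, axis=axis, prepend=np.take(v, [0], axis=axis))
                  g_fwd = np.exp(-(fwd / kappa) ** 2)
                  g_bwd = np.exp(-(bwd / kappa) ** 2)
                  total += g_fwd * fwd - g_bwd * bwd   # divergence of the flux
              v += step * total
          return v

      # Speckle-like test volume
      rng = np.random.default_rng(0)
      vol = 100.0 * np.ones((64, 64, 64)) + rng.normal(0, 20, (64, 64, 64))
      print(vol.std(), perona_malik_3d(vol, n_iter=5).std())   # smoothing lowers the noise level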

  7. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work here described regards multi-viewer auto- stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. 2D images used for 3D reconstruction comes from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to a SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement multiview capability. A number of static or animated contemporary views of the same object can simultaneously be seen on the 3D-LCD screen by several observers, having a real 3D perception of the visualized scene without the use of extra media as dedicated glasses or head-mounted displays. Developed software applications allow real-time interaction with visualized 3D models, didactical animations and movies have been realized as well.

  8. Processing 3D flash LADAR point-clouds in real-time for flight applications

    NASA Astrophysics Data System (ADS)

    Craig, R.; Gravseth, I.; Earhart, R. P.; Bladt, J.; Barnhill, S.; Ruppert, L.; Centamore, C.

    2007-04-01

    Ball Aerospace & Technologies Corp. has demonstrated real-time processing of 3D imaging LADAR point-cloud data to produce the industry's first time-of-flight (TOF) 3D video capability. This capability is uniquely suited to the rigorous demands of space and airborne flight applications and holds great promise in the area of autonomous navigation. It will provide long-range, three dimensional video information to autonomous flight software or pilots for immediate use in rendezvous and docking, proximity operations, landing, surface vision systems, and automatic target recognition and tracking. This is enabled by our new generation of FPGA based "pixel-tube" processors, coprocessors and their associated algorithms which have led to a number of advancements in high-speed wavefront processing along with additional advances in dynamic camera control, and space laser designs based on Ball's CALIPSO LIDAR. This evolution in LADAR is made possible by moving the mechanical complexity required for a scanning system into the electronics, where production, integration, testing and life-cycle costs can be significantly reduced. This technique requires a state of the art TOF read-out integrated circuit (ROIC) attached to a sensor array to collect high resolution temporal data, which is then processed through FPGAs. The number of calculations required to process the data is greatly reduced thanks to the fact that all points are captured at the same time and thus correlated. This correlation allows extremely efficient FPGA processing. This capability has been demonstrated in prototype form at both Marshall Space Flight Center and Langley Research Center on targets that represent docking and landing scenarios. This report outlines many aspects of this work as well as aspects of our recent testing at Marshall's Flight Robotics Laboratory.

  9. Real-time 3D data acquisition for augmented-reality man and machine interfacing

    NASA Astrophysics Data System (ADS)

    Guan, Chun; Hassebrook, Laurence G.; Lau, Daniel L.

    2003-08-01

    Based on recent discoveries, we present a method to project a single structured pattern and then reconstruct the three-dimensional range from the distortions in the reflected and captured image. Traditional structured light methods require several different patterns to recover the depth, without ambiguity and albedo sensitivity, and are corrupted by object movement during the projection/capture process. Our method efficiently combines multiple patterns into a single composite pattern projection -- allowing for real-time implementations. Because structured light techniques require standard image capture and projection technology, unlike time of arrival techniques, they are relatively low cost. Attaining low cost 3D video acquisition would have a profound impact on most applications that are presently limited to 2D video imaging. Furthermore, it would enable many other applications. In particular, we are studying real time depth imagery for tracking hand motion and rotation as an interface to a virtual reality. Applications include remote controlled robotic interfacing in space, advanced cockpit controls and computer interfacing for the disabled.

  10. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems, where processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, the time-consuming correction algorithm cannot be performed directly during the real-time process. To cope with this issue, here we present a novel high-speed real-time 3-D coordinates measuring technique based on fringe projection with consideration of the camera lens distortion. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. A lookup table (LUT) method is also introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates has been reduced by one order of magnitude and the accuracy of the out-of-plane coordinate has been tripled after the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
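
    The pre-computed pixel-mapping idea can be sketched as a nearest-neighbour lookup table built once from a radial distortion model and applied per frame with a single gather; the camera intrinsics and distortion coefficients below are placeholders, not the calibrated values from the paper:

      import numpy as np

      def build_undistort_lut(shape, fx, fy, cx, cy, k1, k2):
          """Precompute a nearest-neighbour LUT mapping each pixel of the corrected
          image to its source pixel in the distorted image (radial model k1, k2)."""
          h, w = shape
          v, u = np.mgrid[0:h, 0:w].astype(np.float64)
          x = (u - cx) / fx                      # normalised image coordinates
          y = (v - cy) / fy
          r2 = x * x + y * y
          scale = 1.0 + k1 * r2 + k2 * r2 * r2   # radial distortion factor
          ud = np.clip(np.rint(fx * x * scale + cx), 0, w - 1).astype(np.intp)
          vd = np.clip(np.rint(fy * y * scale + cy), 0, h - 1).astype(np.intp)
          return vd, ud

      def undistort(img, lut):
          """Apply the precomputed LUT: a single fancy-indexing gather per frame."""
          vd, ud = lut
          return img[vd, ud]

      lut = build_undistort_lut((480, 640), fx=800, fy=800, cx=320, cy=240,
                                k1=-0.25, k2=0.08)
      frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
      corrected = undistort(frame, lut)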

  11. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate.

    PubMed

    Trache, Tudor; Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-12-01

    Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2DST of triplane datasets showed similar results to those of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values.

  12. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate

    PubMed Central

    Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-01-01

    Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2DST of triplane datasets showed similar results to those of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values. PMID:26693303

  13. Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy

    SciTech Connect

    Li Shidong; Liu Dezhi; Yin Gongjie; Zhuang Ping; Geng, Jason

    2006-02-15

    Accurate and precise head refixation in fractionated stereotactic radiotherapy has been achieved through alignment of real-time 3D-surface images with a reference surface image. The reference surface image is either a 3D optical surface image taken at simulation with the desired treatment position, or a CT/MRI-surface rendering in the treatment plan with corrections for patient motion during CT/MRI scans and partial volume effects. The real-time 3D surface images are rapidly captured by using a 3D video camera mounted on the ceiling of the treatment vault. Any facial expression such as mouth opening that affects surface shape and location can be avoided using a new facial monitoring technique. The image artifacts on the real-time surface can generally be removed by setting a threshold of jumps at the neighboring points while preserving detailed features of the surface of interest. Such a real-time surface image, registered in the treatment machine coordinate system, provides a reliable representation of the patient head position during the treatment. A fast automatic alignment between the real-time surface and the reference surface using a modified iterative-closest-point method leads to an efficient and robust surface-guided target refixation. Experimental and clinical results demonstrate the excellent efficacy of <2 min set-up time, the desired accuracy and precision of <1 mm in isocenter shifts, and <1 deg. in rotation.
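
    The alignment step is described as a modified iterative-closest-point method; a minimal point-to-point ICP illustrates the idea (this is the textbook algorithm, not the authors' modified variant, and the test point clouds are synthetic):

      import numpy as np
      from scipy.spatial import cKDTree

      def icp(source, target, n_iter=20):
          """Minimal point-to-point ICP: rigidly align `source` (N,3) to `target`
          (M,3); returns rotation R, translation t and the aligned points."""
          src = source.copy()
          tree = cKDTree(target)
          R_tot, t_tot = np.eye(3), np.zeros(3)
          for _ in range(n_iter):
              _, idx = tree.query(src)                 # closest-point correspondences
              matched = target[idx]
              mu_s, mu_t = src.mean(0), matched.mean(0)
              H = (src - mu_s).T @ (matched - mu_t)
              U, _, Vt = np.linalg.svd(H)
              D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
              R = Vt.T @ D @ U.T                       # optimal rotation (Kabsch)
              t = mu_t - R @ mu_s
              src = src @ R.T + t
              R_tot, t_tot = R @ R_tot, R @ t_tot + t
          return R_tot, t_tot, src

      # Synthetic check: recover a small rotation + shift applied to a point cloud
      rng = np.random.default_rng(1)
      target = rng.normal(size=(200, 3))
      c, s = np.cos(np.deg2rad(5.0)), np.sin(np.deg2rad(5.0))
      Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
      source = target @ Rz.T + np.array([0.05, -0.02, 0.03])
      _, _, aligned = icp(source, target)
      print(np.abs(aligned - target).max())            # max residual, should be near zero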

  14. A Framework for 3D Model-Based Visual Tracking Using a GPU-Accelerated Particle Filter.

    PubMed

    Brown, J A; Capson, D W

    2012-01-01

    A novel framework for acceleration of particle filtering approaches to 3D model-based, markerless visual tracking in monocular video is described. Specifically, we present a methodology for partitioning and mapping the computationally expensive weight-update stage of a particle filter to a graphics processing unit (GPU) to achieve particle- and pixel-level parallelism. Nvidia CUDA and Direct3D are employed to harness the massively parallel computational power of modern GPUs for simulation (3D model rendering) and evaluation (segmentation, feature extraction, and weight calculation) of hundreds of particles at high speeds. The proposed framework addresses the computational intensity that is intrinsic to all particle filter approaches, including those that have been modified to minimize the number of particles required for a particular task. Performance and tracking quality results for rigid object and articulated hand tracking experiments demonstrate markerless, model-based visual tracking on consumer-grade graphics hardware with pixel-level accuracy up to 95 percent at 60+ frames per second. The framework accelerates particle evaluation up to 49 times over a comparable CPU-only implementation, providing an increased particle count while maintaining real-time frame rates.
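
    A serial NumPy reference of the weight-update stage that the paper parallelises on the GPU, shown with a toy 1-D observation model; the Gaussian likelihood and its parameters are illustrative assumptions, not the paper's rendering-based evaluation:

      import numpy as np

      def weight_update(particles, observation, likelihood):
          """One weight-update step of a particle filter: score every particle
          against the current observation, normalise, and resample."""
          w = np.array([likelihood(p, observation) for p in particles])
          w /= w.sum()                                   # normalise weights
          n = len(particles)
          idx = np.random.choice(n, size=n, p=w)         # multinomial resampling
          return particles[idx], w

      # Toy example: 1-D state, Gaussian observation model (illustrative only)
      particles = np.random.normal(0.0, 2.0, size=(1000, 1))
      obs = np.array([1.3])
      gauss = lambda p, z: np.exp(-0.5 * np.sum((p - z) ** 2) / 0.25)
      resampled, weights = weight_update(particles, obs, gauss)
      print(resampled.mean())   # concentrates near the observation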

  15. Dense 3D Face Alignment from 2D Videos in Real-Time

    PubMed Central

    Jeni, László A.; Cohn, Jeffrey F.; Kanade, Takeo

    2016-01-01

    To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org. PMID:27293385

  16. Holovideo: Real-time 3D range video encoding and decoding on GPU

    NASA Astrophysics Data System (ADS)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1, Holovideo to OBJ file format, the 3D geometry quality drops at a negligible level. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL Shaders (GLSL), the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) with a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with this same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.

  17. Head Tracking for 3D Audio Using a GPS-Aided MEMS IMU

    DTIC Science & Technology

    2005-03-01

    Keywords: Aircraft, Directional Signals, GPS/INS Fusion, GPS/INS Integration, Head Tracking Systems, IMU (Inertial Measurement Unit), Inertial Sensors, MEMS. Thesis by Jacque M. Joffrion, Captain, USAF (AFIT/GE/ENG/05-09), Air Force Institute of Technology: "Head Tracking for 3D Audio Using a GPS-Aided MEMS IMU".

  18. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed using fluoroscopic image guidance, is gaining increasingly more importance. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 mm +/- 0.4 mm and an average 3-D tracking error of 0.8 mm +/- 0.5 mm. These results demonstrate that model-based motion-compensation based on 2-D/3-D registration is both feasible and accurate.

  19. LayTracks3D: A new approach for meshing general solids using medial axis transform

    DOE PAGES

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  20. 3D-printed concentrators for tracking-integrated CPV modules

    NASA Astrophysics Data System (ADS)

    Apostoleris, Harry; Leland, Julian; Chiesa, Matteo; Stefancich, Marco

    2016-09-01

    We demonstrate 3D-printed nonimaging concentrators and propose a tracking integration scheme to reduce the external tracking requirements of CPV modules. In the proposed system, internal sun tracking is achieved by rotation of the mini-concentrators inside the module by small motors. We discuss the design principles employed in the development of the system, experimentally evaluate the performance of the concentrator prototypes, and propose practical modifications that may be made to improve on-site performance of the devices.

  1. Real-time 3D ultrasound imaging of infant tongue movements during breast-feeding.

    PubMed

    Burton, Pat; Deng, Jing; McDonald, Daren; Fewtrell, Mary S

    2013-09-01

    Whether infants use suction or peristaltic tongue movements or a combination to extract milk during breast-feeding is controversial. The aims of this pilot study were 1] to evaluate the feasibility of using 3D ultrasound scanning to visualise infant tongue movements; and 2] to ascertain whether peristaltic tongue movements could be demonstrated during breast-feeding. 15 healthy term infants, aged 2 weeks to 4 months were scanned during breast-feeding, using a real-time 3D ultrasound system, with a 7 MHz transducer placed sub-mentally. 1] The method proved feasible, with 72% of bi-plane datasets and 56% of real-time 3D datasets providing adequate coverage [>75%] of the infant tongue. 2] Peristaltic tongue movement was observed in 13 of 15 infants [83%] from real-time or reformatted truly mid-sagittal views under 3D guidance. This is the first study to demonstrate the feasibility of using 3D ultrasound to visualise infant tongue movements during breast-feeding. Peristaltic infant tongue movement was present in the majority of infants when the image plane was truly mid-sagittal but was not apparent if the image was slightly off the mid-sagittal plane. This should be considered in studies investigating the relative importance of vacuum and peristalsis for milk transfer. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. High-resolution real-time 3D shape measurement on a portable device

    NASA Astrophysics Data System (ADS)

    Karpinsky, Nikolaus; Hoke, Morgan; Chen, Vincent; Zhang, Song

    2013-09-01

    Recent advances in technology have enabled the acquisition of high-resolution 3D models in real-time though the use of structured light scanning techniques. While these advances are impressive, they require large amounts of computing power, thus being limited to using large desktop computers with high end CPUs and sometimes GPUs. This is undesirable in making high-resolution real-time 3D scanners ubiquitous in our mobile lives. To address this issue, this work describes and demonstrates a real-time 3D scanning system that is realized on a mobile device, namely a laptop computer, which can achieve speeds of 20fps 3D at a resolution of 640x480 per frame. By utilizing a graphics processing unit (GPU) as a multipurpose parallel processor, along with a parallel phase shifting technique, we are able to realize the entire 3D processing pipeline in parallel. To mitigate high speed camera transfer problems, which typically require a dedicated frame grabber, we make use of USB 3.0 along with direct memory access (DMA) to transfer camera images to the GPU. To demonstrate the effectiveness of the technique, we experiment with the scanner on both static geometry of a statue and dynamic geometry of a deforming material sample in front of the system.

  3. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore the 3D compression space from a different angle, with factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
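
    The depth channel of the second scheme, run-length encoding followed by entropy coding, can be sketched for one quantised depth row as below; Huffman coding of the resulting symbols would follow, and the sample row is purely illustrative:

      import numpy as np

      def rle_encode_depth(depth_row):
          """Run-length encode one row of a quantised depth map: returns parallel
          arrays of run values and run lengths (entropy coding of these symbols
          would follow in the paper's second scheme)."""
          d = np.asarray(depth_row)
          change = np.flatnonzero(np.diff(d)) + 1        # indices where the value changes
          starts = np.concatenate(([0], change))
          lengths = np.diff(np.concatenate((starts, [len(d)])))
          return d[starts], lengths

      row = np.array([0, 0, 0, 7, 7, 7, 7, 3, 3, 0, 0, 0, 0], dtype=np.uint16)
      values, lengths = rle_encode_depth(row)
      print(values, lengths)   # [0 7 3 0] [3 4 2 4]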

  4. Confocal fluorometer for diffusion tracking in 3D engineered tissue constructs

    NASA Astrophysics Data System (ADS)

    Daly, D.; Zilioli, A.; Tan, N.; Buttenschoen, K.; Chikkanna, B.; Reynolds, J.; Marsden, B.; Hughes, C.

    2016-03-01

    We present results of the development of a non-contacting instrument, called fScan, based on scanning confocal fluorometry for assessing the diffusion of materials through a tissue matrix. There are many areas in healthcare diagnostics and screening where it is now widely accepted that the need for new quantitative monitoring technologies is a major pinch point in patient diagnostics and in vitro testing. With the increasing need to interpret 3D responses this commonly involves the need to track the diffusion of compounds, pharma-active species and cells through a 3D matrix of tissue. Methods are available but to support the advances that are currently only promised, this monitoring needs to be real-time, non-invasive, and economical. At the moment commercial meters tend to be invasive and usually require a sample of the medium to be removed and processed prior to testing. This methodology clearly has a number of significant disadvantages. fScan combines a fiber based optical arrangement with a compact, free space optical front end that has been integrated so that the sample's diffusion can be measured without interference. This architecture is particularly important due to the "wet" nature of the samples. fScan is designed to measure constructs located within standard well plates and a 2-D motion stage locates the required sample with respect to the measurement system. Results are presented that show how the meter has been used to evaluate movements of samples through collagen constructs in situ without disturbing their kinetic characteristics. These kinetics were little understood prior to these measurements.

  5. Real-Time 3D Contrast-Enhanced Transcranial Ultrasound and Aberration Correction

    PubMed Central

    Ivancevich, Nikolas M.; Pinton, Gianmarco F.; Nicoletto, Heather A.; Bennett, Ellen; Laskowitz, Daniel T.; Smith, Stephen W.

    2008-01-01

    Contrast-enhanced (CE) transcranial ultrasound (US) and reconstructed 3D transcranial ultrasound have shown advantages over traditional methods in a variety of cerebrovascular diseases. We present the results from a novel ultrasound technique, namely real-time 3D contrast-enhanced transcranial ultrasound. Using real-time 3D (RT3D) ultrasound and micro-bubble contrast agent, we scanned 17 healthy volunteers via a single temporal window and 9 via the sub-occipital window and report our detection rates for the major cerebral vessels. In 71% of subjects, both of our observers identified the ipsilateral circle of Willis from the temporal window, and in 59% we imaged the entire circle of Willis. From the sub-occipital window, both observers detected the entire vertebrobasilar circulation in 22% of subjects, and in 44% the basilar artery. After performing phase aberration correction on one subject, we were able to increase the diagnostic value of the scan, detecting a vessel not present in the uncorrected scan. These preliminary results suggest that RT3D CE transcranial US and RT3D CE transcranial US with phase aberration correction have the potential to greatly impact the field of neurosonology. PMID:18395321

  6. Towards real-time MRI-guided 3D localization of deforming targets for non-invasive cardiac radiosurgery.

    PubMed

    Ipsen, S; Blanck, O; Lowther, N J; Liney, G P; Rai, R; Bode, F; Dunst, J; Schweikard, A; Keall, P J

    2016-11-21

    Radiosurgery to the pulmonary vein antrum in the left atrium (LA) has recently been proposed for non-invasive treatment of atrial fibrillation (AF). Precise real-time target localization during treatment is necessary due to complex respiratory and cardiac motion and high radiation doses. To determine the 3D position of the LA for motion compensation during radiosurgery, a tracking method based on orthogonal real-time MRI planes was developed for AF treatments with an MRI-guided radiotherapy system. Four healthy volunteers underwent cardiac MRI of the LA. Contractile motion was quantified on 3D LA models derived from 4D scans with 10 phases acquired in end-exhalation. Three localization strategies were developed and tested retrospectively on 2D real-time scans (sagittal, temporal resolution 100 ms, free breathing). The best-performing method was then used to measure 3D target positions in 2D-2D orthogonal planes (sagittal-coronal, temporal resolution 200-252 ms, free breathing) in 20 configurations of a digital phantom and in the volunteer data. The 3D target localization accuracy was quantified in the phantom and qualitatively assessed in the real data. Mean cardiac contraction was  ⩽  3.9 mm between maximum dilation and contraction but anisotropic. A template matching approach with two distinct template phases and ECG-based selection yielded the highest 2D accuracy of 1.2 mm. 3D target localization showed a mean error of 3.2 mm in the customized digital phantoms. Our algorithms were successfully applied to the 2D-2D volunteer data in which we measured a mean 3D LA motion extent of 16.5 mm (SI), 5.8 mm (AP) and 3.1 mm (LR). Real-time target localization on orthogonal MRI planes was successfully implemented for highly deformable targets treated in cardiac radiosurgery. The developed method measures target shifts caused by respiration and cardiac contraction. If the detected motion can be compensated accordingly, an MRI-guided radiotherapy

  7. Towards real-time MRI-guided 3D localization of deforming targets for non-invasive cardiac radiosurgery

    NASA Astrophysics Data System (ADS)

    Ipsen, S.; Blanck, O.; Lowther, N. J.; Liney, G. P.; Rai, R.; Bode, F.; Dunst, J.; Schweikard, A.; Keall, P. J.

    2016-11-01

    Radiosurgery to the pulmonary vein antrum in the left atrium (LA) has recently been proposed for non-invasive treatment of atrial fibrillation (AF). Precise real-time target localization during treatment is necessary due to complex respiratory and cardiac motion and high radiation doses. To determine the 3D position of the LA for motion compensation during radiosurgery, a tracking method based on orthogonal real-time MRI planes was developed for AF treatments with an MRI-guided radiotherapy system. Four healthy volunteers underwent cardiac MRI of the LA. Contractile motion was quantified on 3D LA models derived from 4D scans with 10 phases acquired in end-exhalation. Three localization strategies were developed and tested retrospectively on 2D real-time scans (sagittal, temporal resolution 100 ms, free breathing). The best-performing method was then used to measure 3D target positions in 2D-2D orthogonal planes (sagittal-coronal, temporal resolution 200-252 ms, free breathing) in 20 configurations of a digital phantom and in the volunteer data. The 3D target localization accuracy was quantified in the phantom and qualitatively assessed in the real data. Mean cardiac contraction was  ⩽  3.9 mm between maximum dilation and contraction but anisotropic. A template matching approach with two distinct template phases and ECG-based selection yielded the highest 2D accuracy of 1.2 mm. 3D target localization showed a mean error of 3.2 mm in the customized digital phantoms. Our algorithms were successfully applied to the 2D-2D volunteer data in which we measured a mean 3D LA motion extent of 16.5 mm (SI), 5.8 mm (AP) and 3.1 mm (LR). Real-time target localization on orthogonal MRI planes was successfully implemented for highly deformable targets treated in cardiac radiosurgery. The developed method measures target shifts caused by respiration and cardiac contraction. If the detected motion can be compensated accordingly, an MRI-guided radiotherapy
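
    The best-performing 2D localization strategy, template matching with two template phases selected from the ECG, can be sketched as a windowed normalised cross-correlation plus a phase-based template switch; the window size, search radius and the 0.5 phase split are illustrative assumptions, not the study's settings:

      import numpy as np

      def select_template(templates, ecg_phase):
          """Pick the dilation or contraction template from the cardiac phase in
          [0, 1); the 0.5 split is an assumption for illustration."""
          return templates[0] if ecg_phase < 0.5 else templates[1]

      def ncc_localize(frame, template, center, radius):
          """Locate `template` in `frame` by normalised cross-correlation over a
          small search window around `center` (row, col of the top-left corner)."""
          th, tw = template.shape
          t = (template - template.mean()) / (template.std() + 1e-9)
          best_score, best_pos = -np.inf, center
          for dr in range(-radius, radius + 1):
              for dc in range(-radius, radius + 1):
                  r, c = center[0] + dr, center[1] + dc
                  if r < 0 or c < 0:
                      continue                          # window left the image
                  patch = frame[r:r + th, c:c + tw]
                  if patch.shape != (th, tw):
                      continue
                  p = (patch - patch.mean()) / (patch.std() + 1e-9)
                  score = float(np.mean(p * t))
                  if score > best_score:
                      best_score, best_pos = score, (r, c)
          return best_pos, best_score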

  8. Extraction and tracking of MRI tagging sheets using a 3D Gabor filter bank.

    PubMed

    Qian, Zhen; Metaxas, Dimitris N; Axel, Leon

    2006-01-01

    In this paper, we present a novel method for automatically extracting the tagging sheets in tagged cardiac MR images, and tracking their displacement during the heart cycle, using a tunable 3D Gabor filter bank. Tagged MRI is a non-invasive technique for the study of myocardial deformation. We design the 3D Gabor filter bank based on the geometric characteristics of the tagging sheets. The tunable parameters of the Gabor filter bank are used to adapt to the myocardium deformation. The whole 3D image dataset is convolved with each Gabor filter in the filter bank, in the Fourier domain. Then we impose a set of deformable meshes onto the extracted tagging sheets and track them over time. Dynamic estimation of the filter parameters and the mesh internal smoothness are used to help the tracking. Some very encouraging results are shown.
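
    One member of such a filter bank can be built as a Gaussian envelope modulated by a plane wave whose direction and frequency are the tunable parameters; the kernel size, bandwidth and tag spacing below are assumptions for illustration, not the paper's settings:

      import numpy as np

      def gabor_kernel_3d(size, sigma, freq, direction):
          """Build a 3D Gabor kernel: a Gaussian envelope (width `sigma`, voxels)
          modulated by a plane wave of spatial frequency `freq` (cycles/voxel)
          travelling along the unit vector `direction`."""
          half = size // 2
          z, y, x = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1]
          envelope = np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
          d = np.asarray(direction, float)
          d /= np.linalg.norm(d)
          phase = 2.0 * np.pi * freq * (x * d[0] + y * d[1] + z * d[2])
          return envelope * np.exp(1j * phase)

      # One filter of a bank tuned to tag sheets spaced ~6 voxels apart along x
      kernel = gabor_kernel_3d(size=15, sigma=4.0, freq=1.0 / 6.0, direction=(1, 0, 0))
      # Convolution with the tagged volume would be done in the Fourier domain,
      # e.g. via np.fft.fftn on the zero-padded volume and kernel.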

  9. 3D tracking of mating events in wild swarms of the malaria mosquito Anopheles gambiae.

    PubMed

    Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Yaro, Alpha S; Dao, Adama; Traoré, Sekou F; Ribeiro, José M; Lehmann, Tovi; Paley, Derek A

    2011-01-01

    We describe an automated tracking system that allows us to reconstruct the 3D kinematics of individual mosquitoes in swarms of Anopheles gambiae. The inputs to the tracking system are video streams recorded from a stereo camera system. The tracker uses a two-pass procedure to automatically localize and track mosquitoes within the swarm. A human-in-the-loop step verifies the estimates and connects broken tracks. The tracker performance is illustrated using footage of mating events filmed in Mali in August 2010.

  10. Real-time human pose detection and tracking for tele-rehabilitation in virtual reality.

    PubMed

    Obdržálek, Stěpán; Kurillo, Gregorij; Han, Jay; Abresch, Ted; Bajcsy, Ruzena

    2012-01-01

    We present a real-time algorithm for human pose detection and tracking from vision-based 3D data and its application to tele-rehabilitation in virtual environments. We employ stereo camera(s) to capture 3D avatars of geographically dislocated patient and therapist in real-time, while sending the data remotely and displaying it in a virtual scene. A pose detection and tracking algorithm extracts kinematic parameters from each participant and determines pose similarity. The pose similarity score is used to quantify patient's performance and provide real-time feedback for remote rehabilitation.

  11. Real-time 3-d intracranial ultrasound with an endoscopic matrix array transducer.

    PubMed

    Light, Edward D; Mukundan, Srinivasan; Wolf, Patrick D; Smith, Stephen W

    2007-08-01

    A transducer originally designed for transesophageal echocardiography (TEE) was adapted for real-time volumetric endoscopic imaging of the brain. The transducer consists of a 36 x 36 array with an interelement spacing of 0.18 mm. There are 504 transmitting and 252 receive channels placed in a regular pattern in the array. The operating frequency is 4.5 MHz with a -6 dB bandwidth of 30%. The transducer is fabricated on a 10-layer flexible circuit from Microconnex (Snoqualmie, WA, USA). The purpose of this study is to evaluate the clinical feasibility of real-time 3-D intracranial ultrasound with this device. The Volumetrics Medical Imaging (Durham, NC, USA) 3-D scanner was used to obtain images in a canine model. A transcalvarial acoustic window was created under general anesthesia in the animal laboratory by placing a 10-mm burr hole in the high parietal calvarium of a 50-kg canine subject. The burr-hole was placed in a left parasagittal location to avoid the sagittal sinus, and the transducer was placed against the intact dura mater for ultrasound imaging. Images of the lateral ventricles were produced, including real-time 3-D guidance of a needle puncture of one ventricle. In a second canine subject, contrast-enhanced 3-D Doppler color flow images were made of the cerebral vessels including the complete Circle of Willis. Clinical applications may include real-time 3-D guidance of cerebrospinal fluid extraction from the lateral ventricles and bedside evaluation of critically ill patients where computed tomography and magnetic resonance imaging techniques are unavailable.

  12. High-throughput 3D tracking of bacteria on a standard phase contrast microscope

    NASA Astrophysics Data System (ADS)

    Taute, K. M.; Gude, S.; Tans, S. J.; Shimizu, T. S.

    2015-11-01

    Bacteria employ diverse motility patterns in traversing complex three-dimensional (3D) natural habitats. 2D microscopy misses crucial features of 3D behaviour, but the applicability of existing 3D tracking techniques is constrained by their performance or ease of use. Here we present a simple, broadly applicable, high-throughput 3D bacterial tracking method for use in standard phase contrast microscopy. Bacteria are localized at micron-scale resolution over a range of 350 × 300 × 200 μm by maximizing image cross-correlations between their observed diffraction patterns and a reference library. We demonstrate the applicability of our technique to a range of bacterial species and exploit its high throughput to expose hidden contributions of bacterial individuality to population-level variability in motile behaviour. The simplicity of this powerful new tool for bacterial motility research renders 3D tracking accessible to a wider community and paves the way for investigations of bacterial motility in complex 3D environments.
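
    The localization step, maximising the image cross-correlation between an observed diffraction pattern and a library of reference patterns recorded at known depths, can be sketched as follows; it assumes same-size image crops centred on the cell and a pre-built library, and the names are illustrative:

      import numpy as np

      def estimate_z(observed, library, z_values):
          """Estimate a cell's depth by maximising the normalised cross-correlation
          between its observed diffraction pattern and a library of reference
          patterns recorded at known z offsets (one 2D array per offset)."""
          obs = (observed - observed.mean()) / (observed.std() + 1e-9)
          scores = []
          for ref in library:
              r = (ref - ref.mean()) / (ref.std() + 1e-9)
              scores.append(float(np.mean(obs * r)))
          best = int(np.argmax(scores))
          return z_values[best], scores[best]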

  13. High-throughput 3D tracking of bacteria on a standard phase contrast microscope

    PubMed Central

    Taute, K.M.; Gude, S.; Tans, S.J.; Shimizu, T.S.

    2015-01-01

    Bacteria employ diverse motility patterns in traversing complex three-dimensional (3D) natural habitats. 2D microscopy misses crucial features of 3D behaviour, but the applicability of existing 3D tracking techniques is constrained by their performance or ease of use. Here we present a simple, broadly applicable, high-throughput 3D bacterial tracking method for use in standard phase contrast microscopy. Bacteria are localized at micron-scale resolution over a range of 350 × 300 × 200 μm by maximizing image cross-correlations between their observed diffraction patterns and a reference library. We demonstrate the applicability of our technique to a range of bacterial species and exploit its high throughput to expose hidden contributions of bacterial individuality to population-level variability in motile behaviour. The simplicity of this powerful new tool for bacterial motility research renders 3D tracking accessible to a wider community and paves the way for investigations of bacterial motility in complex 3D environments. PMID:26522289

  14. Optimal Local Searching for Fast and Robust Textureless 3D Object Tracking in Highly Cluttered Backgrounds.

    PubMed

    Seo, Byung-Kuk; Park, Jong-Il; Hinterstoisser, Stefan; Ilic, Slobodan

    2013-06-13

    Edge-based tracking is a fast and plausible approach for textureless 3D object tracking, but its robustness is still very challenging in highly cluttered backgrounds due to numerous local minima. To overcome this problem, we propose a novel method for fast and robust textureless 3D object tracking in highly cluttered backgrounds. The proposed method is based on optimal local searching of 3D-2D correspondences between a known 3D object model and 2D scene edges in an image with heavy background clutter. In our searching scheme, searching regions are partitioned into three levels (interior, contour, and exterior) with respect to the previous object region, and confident searching directions are determined by evaluating candidates of correspondences on their region levels; thus, the correspondences are searched among likely candidates in only the confident directions instead of searching through all candidates. To ensure the confident searching direction, we also adopt the region appearance, which is efficiently modeled on a newly defined local space (called a searching bundle). Experimental results and performance evaluations demonstrate that our method fully supports fast and robust textureless 3D object tracking even in highly cluttered backgrounds.

  15. Optimal local searching for fast and robust textureless 3D object tracking in highly cluttered backgrounds.

    PubMed

    Seo, Byung-Kuk; Park, Hanhoon; Park, Jong-Il; Hinterstoisser, Stefan; Ilic, Slobodan

    2014-01-01

    Edge-based tracking is a fast and plausible approach for textureless 3D object tracking, but its robustness is still very challenging in highly cluttered backgrounds due to numerous local minima. To overcome this problem, we propose a novel method for fast and robust textureless 3D object tracking in highly cluttered backgrounds. The proposed method is based on optimal local searching of 3D-2D correspondences between a known 3D object model and 2D scene edges in an image with heavy background clutter. In our searching scheme, searching regions are partitioned into three levels (interior, contour, and exterior) with respect to the previous object region, and confident searching directions are determined by evaluating candidates of correspondences on their region levels; thus, the correspondences are searched among likely candidates in only the confident directions instead of searching through all candidates. To ensure the confident searching direction, we also adopt the region appearance, which is efficiently modeled on a newly defined local space (called a searching bundle). Experimental results and performance evaluations demonstrate that our method fully supports fast and robust textureless 3D object tracking even in highly cluttered backgrounds.

  16. Hierarchical storage and visualization of real-time 3D data

    NASA Astrophysics Data System (ADS)

    Parry, Mitchell; Hannigan, Brendan; Ribarsky, William; Shaw, Christopher D.; Faust, Nickolas L.

    2001-08-01

    In this paper 'real-time 3D data' refers to volumetric data that are acquired and used as they are produced. Large scale, real-time data are difficult to store and analyze, either visually or by some other means, within the time frames required. Yet this is often quite important to do when decision-makers must receive and quickly act on new information. An example is weather forecasting, where forecasters must act on information received on severe storm development and movement. To meet the real-time requirements, crude heuristics are often used to gather information from the original data. This is in spite of the fact that better and better real-time data are becoming available, the full use of which could significantly improve decisions. The work reported here addresses these issues by providing comprehensive data acquisition, analysis, and storage components with time budgets for the data management of each component. These components are put into a global geospatial hierarchical structure. The volumetric data are placed into this global structure, and it is shown how levels of detail can be derived and used within this structure. A volumetric visualization procedure is developed that conforms to the hierarchical structure and uses the levels of detail. These general methods are focused on the specific case of the VGIS global hierarchical structure and rendering system. The real-time data considered are from collections of time-dependent 3D Doppler radars, although the methods described here apply more generally to time-dependent volumetric data. This paper reports on the design and construction of the above hierarchical structures and volumetric visualizations. Results are presented for the specific application of 3D Doppler radar displayed over photo-textured terrain height fields.

  17. Improving segmentation of 3D touching cell nuclei using flow tracking on surface meshes.

    PubMed

    Li, Gang; Guo, Lei

    2012-01-01

    Automatic segmentation of touching cell nuclei in 3D microscopy images is of great importance in bioimage informatics and computational biology. This paper presents a novel method for improving 3D touching cell nuclei segmentation. Given binary masks of touching nuclei produced by the method of Li et al. (2007), our method consists of several steps: surface mesh reconstruction and curvature estimation; direction field diffusion on surface meshes; flow tracking on surface meshes; and projection of the surface mesh segmentation back to volumetric images. The method is validated on both synthesised and real 3D touching cell nuclei images, demonstrating its validity and effectiveness.

  18. Tracking 3D Picometer-Scale Motions of Single Nanoparticles with High-Energy Electron Probes

    PubMed Central

    Ogawa, Naoki; Hoshisashi, Kentaro; Sekiguchi, Hiroshi; Ichiyanagi, Kouhei; Matsushita, Yufuku; Hirohata, Yasuhisa; Suzuki, Seiichi; Ishikawa, Akira; Sasaki, Yuji C.

    2013-01-01

    We observed the high-speed anisotropic motion of an individual gold nanoparticle in 3D at the picometer scale using a high-energy electron probe. Diffracted electron tracking (DET) using the electron back-scattered diffraction (EBSD) patterns of labeled nanoparticles under wet-SEM allowed us to measure the time-resolved 3D motion of individual nanoparticles in aqueous conditions with very high accuracy. The highly precise DET data corresponded to 3D anisotropic log-normal Gaussian distributions over time at the millisecond scale. PMID:23868465

  19. Detailed Evaluation of Five 3D Speckle Tracking Algorithms Using Synthetic Echocardiographic Recordings.

    PubMed

    Alessandrini, Martino; Heyde, Brecht; Queiros, Sandro; Cygan, Szymon; Zontak, Maria; Somphone, Oudom; Bernard, Olivier; Sermesant, Maxime; Delingette, Herve; Barbosa, Daniel; De Craene, Mathieu; O'Donnell, Matthew; D'hooge, Jan

    2016-08-01

    A plethora of techniques for cardiac deformation imaging with 3D ultrasound, typically referred to as 3D speckle tracking techniques, are available from academia and industry. Although the benefits of individual methods over alternatives have been reported in separate publications, the intrinsic differences in the data and definitions used make it hard to compare the relative performance of different solutions. To address this issue, we have recently proposed a framework to simulate realistic 3D echocardiographic recordings and used it to generate a common set of ground-truth data for 3D speckle tracking algorithms, which was made available online. The aim of this study was therefore to use the newly developed database to contrast non-commercial speckle tracking solutions from research groups with leading expertise in the field. The five techniques involved cover the most representative families of existing approaches, namely block-matching, radio-frequency tracking, optical flow and elastic image registration. The techniques were contrasted in terms of tracking and strain accuracy. The feasibility of using the obtained strain measurements to diagnose pathology was also tested for ischemia and dyssynchrony.

  20. Vision-Based Long-Range 3D Tracking, applied to Underground Surveying Tasks

    NASA Astrophysics Data System (ADS)

    Mossel, Annette; Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes; Chmelina, Klaus

    2014-04-01

    To address the need of highly automated positioning systems in underground construction, we present a long-range 3D tracking system based on infrared optical markers. It provides continuous 3D position estimation of static or kinematic targets with low latency over a tracking volume of 12 m x 8 m x 70 m (width x height x depth). Over the entire volume, relative 3D point accuracy with a maximal deviation ≤ 22 mm is ensured, with possible target rotations of yaw and pitch from 0-45° and roll from 0-360°. No preliminary sighting of target(s) is necessary since the system automatically locks onto a target without user intervention and autonomously starts tracking as soon as a target is within the view of the system. The proposed system needs a minimal hardware setup, consisting of two machine vision cameras and a standard workstation for data processing. This allows for quick installation with minimal disturbance of construction work. The data processing pipeline ensures camera calibration and tracking during on-going underground activities. Tests in real underground scenarios prove the system's capabilities to act as a 3D position measurement platform for multiple underground tasks that require long range, low latency and high accuracy. Those tasks include the simultaneous tracking of personnel, machines, or robots.

  1. Note: Time-gated 3D single quantum dot tracking with simultaneous spinning disk imaging

    SciTech Connect

    DeVore, M. S.; Stich, D. G.; Keller, A. M.; Phipps, M. E.; Hollingsworth, J. A.; Goodwin, P. M.; Werner, J. H.; Cleyrat, C.; Lidke, D. S.; Wilson, B. S.

    2015-12-15

    We describe recent upgrades to a 3D tracking microscope to include simultaneous Nipkow spinning disk imaging and time-gated single-particle tracking (SPT). Simultaneous 3D molecular tracking and spinning disk imaging enable the visualization of cellular structures and proteins around a given fluorescently labeled target molecule. The addition of photon time-gating to the SPT hardware improves signal to noise by discriminating against Raman scattering and short-lived fluorescence. In contrast to camera-based SPT, single-photon arrival times are recorded, enabling time-resolved spectroscopy (e.g., measurement of fluorescence lifetimes and photon correlations) to be performed during single molecule/particle tracking experiments.

  2. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    NASA Astrophysics Data System (ADS)

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High-speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, no 4DMI technique can track organ motion in real time, and no camera has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement by combined use of the real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated for validation of its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of any mobile tumors. The technique can be extended for surface-guided stereotactic needle insertion in biopsy of small lung nodules.

  3. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. In these preliminary results, the MATLAB implementation of the algorithm runs at near real-time speed (2.5 s per pose estimate), which can be improved by implementation in C++. Error analysis produced an average distance error of 3 mm and an orientation error of 2.5 degrees. The sources of these errors come from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates feasibility of micro-camera 3D guidance of a robotic surgical tool.

  4. Real-time 3D surface-image-guided beam setup in radiotherapy of breast cancer

    SciTech Connect

    Djajaputra, David; Li Shidong

    2005-01-01

    We describe an approach for external beam radiotherapy of breast cancer that utilizes the three-dimensional (3D) surface information of the breast. The surface data of the breast are obtained from a 3D optical camera that is rigidly mounted on the ceiling of the treatment vault. This 3D camera utilizes light in the visible range; therefore, it introduces no ionizing radiation to the patient. In addition to the surface topographical information of the treated area, the camera also captures gray-scale information that is overlaid on the 3D surface image. This allows us to visualize the skin markers and automatically determine the isocenter position and the beam angles in the breast tangential fields. The field sizes and shapes of the tangential, supraclavicular, and internal mammary gland fields can all be determined according to the 3D surface image of the target. A least-squares method is first introduced for the tangential-field setup that is useful for compensating for target shape changes. The entire process of capturing the 3D surface data and subsequent calculation of beam parameters typically requires less than 1 min. Our tests on phantom experiments and patient images have achieved an accuracy of 1 mm in shift and 0.5° in rotation. Importantly, the target shape and position changes in each treatment session can both be corrected through this real-time image-guided system.

  5. Towards real-time change detection in videos based on existing 3D models

    NASA Astrophysics Data System (ADS)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3D objects, such as buildings, may lead to parallax artifacts in the case of inaccurate or missing 3D information, which may distort the results of the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detecting changes in the 3D structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which 3D change detection can be performed against an existing 3D model. Our approach is capable of change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps with an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3D model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
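
    The core comparison step described here, differencing an image-based depth map against a depth map rendered from the existing 3D model at the same pose, can be sketched minimally as below; the change threshold and the synthetic depth maps are assumed for illustration.

```python
"""Minimal sketch of depth-based change detection: compare a depth map estimated
from imagery with one rendered from an existing 3D model at the same camera pose.
The threshold and the synthetic inputs are illustrative assumptions."""
import numpy as np

def detect_changes(depth_estimated, depth_rendered, threshold_m=0.5):
    """Return a boolean mask of pixels whose depth disagrees with the model."""
    valid = np.isfinite(depth_estimated) & np.isfinite(depth_rendered)
    diff = np.abs(depth_estimated - depth_rendered)
    return valid & (diff > threshold_m)

# Toy usage: a new object appears 2 m in front of a wall the model predicts at 10 m.
rendered = np.full((240, 320), 10.0)          # model predicts the wall everywhere
estimated = rendered.copy()
estimated[100:140, 150:200] = 8.0             # the camera sees something closer
mask = detect_changes(estimated, rendered)
print("changed pixels:", int(mask.sum()))
```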

  6. Real-time 3D human capture system for mixed-reality art and entertainment.

    PubMed

    Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu

    2005-01-01

    A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine cameras surrounding her. Looking through a head-mounted display with a camera in front pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve image quality and speed up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaboration system, we also describe an application of the system in art and entertainment, named Magic Land, which is a mixed reality environment where captured avatars of humans and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human computer interaction: mixed reality, tangible interaction, and 3D communication. The result of the user study not only emphasizes the benefits, but also addresses some issues of these technologies.

  7. 3D real-time measurement system of seam with laser

    NASA Astrophysics Data System (ADS)

    Huang, Min-shuang; Huang, Jun-fen

    2014-02-01

    A 3-D real-time measurement system for seam outlines based on Moiré projection is proposed and designed. The system is composed of an LD, a grating, a CCD, a video A/D converter, an FPGA, a DSP, and an output interface. The principle and hardware makeup of the high-speed, real-time image-processing circuit, based on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), are introduced. The noise generation mechanism under poor welding field conditions is analyzed when Moiré stripes are projected onto a welding workpiece surface. A median filter is adopted to smooth the acquired laser image of the seam, and measurement results for a 3-D outline image of the weld groove are then provided.

  8. Integration of GPR and Laser Position Sensors for Real-Time 3D Data Fusion

    NASA Astrophysics Data System (ADS)

    Grasmueck, M.; Viggiano, D.

    2005-05-01

    Non-invasive 3D imaging visualizes anatomy and contents inside objects. Such tools are a commodity for medical doctors diagnosing a patient's health without a scalpel and for airport security staff inspecting the contents of baggage without opening it. For geologists, hydrologists, archeologists and engineers wanting to see inside the shallow subsurface, such 3D tools are still a rarity. Theory and practice show that full-resolution 3D Ground Penetrating Radar (GPR) imaging requires unaliased recording of dipping reflections and diffractions. For a heterogeneous subsurface, the grid spacing of GPR measurements should be a quarter wavelength or less in all directions. Consequently, positioning precision needs to be better than an eighth of a wavelength for correct grid point assignment. Until now, 3D GPR imaging has not been practical: data acquisition and processing took weeks to months, data analysis required geophysical training, and no versatile 3D systems were commercially available. We have integrated novel rotary laser positioning technology with GPR into a highly efficient and simple-to-use 3D imaging system. The laser positioning enables acquisition of centimeter-accurate x, y, and z coordinates from multiple small detectors attached to moving GPR antennae. Positions streaming with 20 updates/second from each detector are fused in real time with the GPR data. We developed software for automated data acquisition and real-time 3D GPR data quality control on slices at selected depths. Standard formatted (SEGY) data cubes and animations are generated within an hour after the last trace has been acquired. Examples can be seen at www.3dgpr.info. Such instant 3D GPR can be used as an on-site imaging tool supporting field work, hypothesis testing, and optimal sample collection. Rotary laser positioning has the flexibility to be integrated with multiple moving GPR antennae and other geophysical sensors enabling simple and efficient high resolution 3D data acquisition at

  9. Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking

    NASA Astrophysics Data System (ADS)

    Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.

    2017-03-01

    Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching, have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization in a separate manner. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal-adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method 1) builds a data-driven four-dimensional dictionary of Lagrangian displacements using sparse learning, 2) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and 3) performs sparse reconstruction of the outliers only. Our approach can be applied to dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block-matching speckle tracker and evaluate the performance of the proposed algorithm using tracking and strain accuracy analysis.
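
    A rough sketch of the three-step regularization idea, using scikit-learn's generic dictionary-learning tools rather than the authors' implementation; the trajectory dimensions, dictionary size, sparsity level and outlier threshold are assumptions for illustration.

```python
"""Sketch of dictionary-based trajectory regularization: learn a sparse dictionary
of displacement trajectories, flag poorly tracked ones by their sparse-reconstruction
error, and replace only those outliers with their reconstructions. All sizes and
thresholds are assumptions."""
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Toy Lagrangian motion data: 500 trajectories, 3 coordinates x 20 frames = 60 dims.
t = np.linspace(0, 2 * np.pi, 20)
clean = np.concatenate(
    [np.outer(rng.normal(size=500), f(t)) for f in (np.sin, np.cos, np.sin)], axis=1)
noisy = clean.copy()
outlier_idx = rng.choice(500, size=25, replace=False)
noisy[outlier_idx] += rng.normal(scale=2.0, size=(25, 60))   # badly tracked trajectories

# 1) Learn a sparse dictionary of displacement trajectories.
dico = DictionaryLearning(n_components=12, transform_algorithm="omp",
                          transform_n_nonzero_coefs=4, random_state=0)
codes = dico.fit_transform(noisy)
recon = codes @ dico.components_

# 2) Identify outliers from the sparse-reconstruction error.
err = np.linalg.norm(noisy - recon, axis=1)
flagged = err > err.mean() + 2 * err.std()

# 3) Replace only the flagged trajectories with their sparse reconstructions.
regularized = noisy.copy()
regularized[flagged] = recon[flagged]
print(f"flagged {flagged.sum()} of {len(noisy)} trajectories as poorly tracked")
```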

  10. Dynamic tracking of a deformable tissue based on 3D-2D MR-US image registration

    NASA Astrophysics Data System (ADS)

    Marami, Bahram; Sirouspour, Shahin; Fenster, Aaron; Capson, David W.

    2014-03-01

    Real-time registration of pre-operative magnetic resonance (MR) or computed tomography (CT) images with intra-operative Ultrasound (US) images can be a valuable tool in image-guided therapies and interventions. This paper presents an automatic method for dynamically tracking the deformation of a soft tissue based on registering pre-operative three-dimensional (3D) MR images to intra-operative two-dimensional (2D) US images. The registration algorithm is based on concepts in state estimation where a dynamic finite element (FE)-based linear elastic deformation model correlates the imaging data in the spatial and temporal domains. A Kalman-like filtering process estimates the unknown deformation states of the soft tissue using the deformation model and a measure of error between the predicted and the observed intra-operative imaging data. The error is computed based on an intensity-based distance metric, namely, modality independent neighborhood descriptor (MIND), and no segmentation or feature extraction from images is required. The performance of the proposed method is evaluated by dynamically deforming 3D pre-operative MR images of a breast phantom tissue based on real-time 2D images obtained from an US probe. Experimental results on different registration scenarios showed that deformation tracking converges in a few iterations. The average target registration error on the plane of 2D US images for manually selected fiducial points was between 0.3 and 1.5 mm depending on the size of deformation.
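
    The "Kalman-like" estimation idea, model-based prediction corrected by an image-derived measurement, can be illustrated with a generic linear Kalman predict/update loop; the FE deformation model and MIND metric of this work are not reproduced, and the matrices and noise levels below are placeholder assumptions.

```python
"""Generic linear Kalman predict/update loop, sketched to illustrate the
state-estimation idea (model-based prediction corrected by a measurement).
A, H and the noise covariances below are placeholder assumptions."""
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    # Predict the state with the dynamic model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Correct with the measurement (here, an observed displacement).
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1D example: state = [displacement, velocity], measurement = displacement.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
for k in range(50):
    true_disp = np.sin(0.2 * k)              # slowly deforming tissue point
    z = np.array([true_disp + 0.1 * np.random.randn()])
    x, P = kalman_step(x, P, z, A, H, Q, R)
print("final estimate:", x)
```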

  11. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    NASA Astrophysics Data System (ADS)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

    This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  12. Real-Time Modeling and 3D Visualization of Source Dynamics and Connectivity Using Wearable EEG

    PubMed Central

    Mullen, Tim; Kothe, Christian; Chi, Yu Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Cauwenberghs, Gert; Jung, Tzyy-Ping

    2014-01-01

    This report summarizes our recent efforts to deliver real-time data extraction, preprocessing, artifact rejection, source reconstruction, multivariate dynamical system analysis (including spectral Granger causality) and 3D visualization as well as classification within the open-source SIFT and BCILAB toolboxes. We report the application of such a pipeline to simulated data and real EEG data obtained from a novel wearable high-density (64-channel) dry EEG system. PMID:24110155

  13. Geometric-model-free tracking of extended targets using 3D lidar measurements

    NASA Astrophysics Data System (ADS)

    Steinemann, Philipp; Klappstein, Jens; Dickmann, Juergen; von Hundelshausen, Felix; Wünsche, Hans-Joachim

    2012-06-01

    Tracking of extended targets in high definition, 360-degree 3D-LIDAR (Light Detection and Ranging) measurements is a challenging task and a current research topic. It is a key component in robotic applications, and is relevant to path planning and collision avoidance. This paper proposes a new method without a geometric model to simultaneously track and accumulate 3D-LIDAR measurements of an object. The method itself is based on a particle filter and uses an object-related local 3D grid for each object. No geometric object hypothesis is needed. Accumulation allows coping with occlusions. The prediction step of the particle filter is governed by a motion model consisting of a deterministic and a probabilistic part. Since this paper is focused on tracking ground vehicles, a bicycle model is used for the deterministic part. The probabilistic part depends on the current state of each particle. A function for calculating the current probability density function for state transition is developed. It is derived in detail and based on a database consisting of vehicle dynamics measurements over several hundreds of kilometers. The adaptive probability density function narrows down the gating area for measurement data association. The second part of the proposed method addresses weighting the particles with a cost function. Different 3D-grid-dependent cost functions are presented and evaluated. Evaluations with real 3D-LIDAR measurements show the performance of the proposed method. The results are also compared to ground truth data.
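
    The prediction step described here, a deterministic bicycle model plus a probabilistic state-transition term, might look roughly like the following sketch; the wheelbase, noise levels and control inputs are illustrative assumptions, and the paper's adaptive, measurement-derived density is replaced by fixed Gaussian noise.

```python
"""Sketch of a particle-filter prediction step with a kinematic bicycle model as
the deterministic part and additive Gaussian noise standing in for the
probabilistic part. Wheelbase, noise levels and controls are assumptions."""
import numpy as np

def predict_particles(particles, v, steer, dt=0.1, wheelbase=2.7,
                      noise_std=(0.05, 0.05, 0.01), rng=np.random.default_rng()):
    """particles: (N, 3) array of [x, y, heading]; returns propagated particles."""
    x, y, theta = particles.T
    # Deterministic bicycle-model motion.
    x = x + v * np.cos(theta) * dt
    y = y + v * np.sin(theta) * dt
    theta = theta + (v / wheelbase) * np.tan(steer) * dt
    out = np.stack([x, y, theta], axis=1)
    # Probabilistic part: state-transition uncertainty (fixed here for simplicity).
    out += rng.normal(scale=noise_std, size=out.shape)
    return out

# Toy usage: 1000 particles, vehicle driving at 10 m/s with a slight left steer.
particles = np.zeros((1000, 3))
for _ in range(20):
    particles = predict_particles(particles, v=10.0, steer=0.05)
print("mean predicted pose:", particles.mean(axis=0))
```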

  14. Feasibility of low-dose single-view 3D fiducial tracking concurrent with external beam delivery.

    PubMed

    Speidel, Michael A; Wilfley, Brian P; Hsu, Annie; Hristov, Dimitre

    2012-04-01

    In external-beam radiation therapy, existing on-board x-ray imaging chains orthogonal to the delivery beam cannot recover 3D target trajectories from a single view in real-time. This limits their utility for real-time motion management concurrent with beam delivery. To address this limitation, the authors propose a novel concept for on-board imaging based on the inverse-geometry Scanning-Beam Digital X-ray (SBDX) system and evaluate its feasibility for single-view 3D intradelivery fiducial tracking. A chest phantom comprising a posterior wall, a central lung volume, and an anterior wall was constructed. Two fiducials were placed along the mediastinal ridge between the lung cavities: a 1.5 mm diameter steel sphere superiorly and a gold cylinder (2.6 mm length × 0.9 mm diameter) inferiorly. The phantom was placed on a linear motion stage that moved sinusoidally. Fiducial motion was along the source-detector (z) axis of the SBDX system with ±10 mm amplitude and a programmed period of either 3.5 s or 5 s. The SBDX system was operated at 15 frames per second, 100 kVp, providing good apparent conspicuity of the fiducials. With the stage moving, detector data were acquired and subsequently reconstructed into 15 planes with a 12 mm plane-to-plane spacing using digital tomosynthesis. A tracking algorithm was applied to the image planes for each temporal frame to determine the position of each fiducial in (x,y,z)-space versus time. A 3D time-sinusoidal motion model was fit to the measured 3D coordinates and root mean square (RMS) deviations about the fitted trajectory were calculated. Tracked motion was sinusoidal and primarily along the source-detector (z) axis. The RMS deviation of the tracked z-coordinate ranged from 0.53 to 0.71 mm. The motion amplitude derived from the model fit agreed with the programmed amplitude to within 0.28 mm for the steel sphere and within -0.77 mm for the gold seed. The model fit periods agreed with the programmed periods to within 7
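
    Fitting a time-sinusoidal motion model to tracked fiducial coordinates and reporting the RMS deviation about the fit can be sketched as below; the sampling rate, amplitude, period and noise level are illustrative, not the study's values.

```python
"""Sketch of fitting a time-sinusoidal motion model to a tracked fiducial
coordinate and reporting the RMS deviation about the fit. The sampling rate,
amplitude and period below are illustrative assumptions."""
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(t, amplitude, period, phase, offset):
    return amplitude * np.sin(2 * np.pi * t / period + phase) + offset

# Toy tracked z-coordinate: ~10 mm amplitude, 3.5 s period, 15 fps, small noise.
t = np.arange(0, 20, 1 / 15.0)
z_tracked = sinusoid(t, 10.0, 3.5, 0.3, 0.0) + 0.6 * np.random.randn(t.size)

popt, _ = curve_fit(sinusoid, t, z_tracked, p0=[8.0, 3.0, 0.0, 0.0])
residuals = z_tracked - sinusoid(t, *popt)
rms = np.sqrt(np.mean(residuals ** 2))
print(f"fitted amplitude {popt[0]:.2f} mm, period {popt[1]:.2f} s, RMS {rms:.2f} mm")
```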

  15. Exploring Drug Dosing Regimens In Vitro Using Real-Time 3D Spheroid Tumor Growth Assays.

    PubMed

    Lal-Nag, Madhu; McGee, Lauren; Titus, Steven A; Brimacombe, Kyle; Michael, Sam; Sittampalam, Gurusingham; Ferrer, Marc

    2017-03-01

    Two-dimensional monolayer cell proliferation assays for cancer drug discovery have made the implementation of large-scale screens feasible but only seem to reflect a simplified view that oncogenes or tumor suppressor genes are the genetic drivers of cancer cell proliferation. However, there is now increased evidence that the cellular and physiological context in which these oncogenic events occur play a key role in how they drive tumor growth in vivo and, therefore, in how tumors respond to drug treatments. In vitro 3D spheroid tumor models are being developed to better mimic the physiology of tumors in vivo, in an attempt to improve the predictability and efficiency of drug discovery for the treatment of cancer. Here we describe the establishment of a real-time 3D spheroid growth, 384-well screening assay. The cells used in this study constitutively expressed green fluorescent protein (GFP), which enabled the real-time monitoring of spheroid formation and the effect of chemotherapeutic agents on spheroid size at different time points of sphere growth and drug treatment. This real-time 3D spheroid assay platform represents a first step toward the replication in vitro of drug dosing regimens being investigated in vivo. We hope that further development of this assay platform will allow the investigation of drug dosing regimens, efficacy, and resistance before preclinical and clinical studies.

  16. The BaBar Level 1 Drift-Chamber Trigger Upgrade With 3D Tracking

    SciTech Connect

    Chai, X.D.; /Iowa U.

    2005-11-29

    At BABAR, the Level 1 Drift Chamber trigger is being upgraded to reduce increasing background rates while the PEP-II luminosity keeps improving. This upgrade uses the drift time information and stereo wires in the drift chamber to perform a 3D track reconstruction that effectively rejects background events spread out along the beam line.

  17. Real-Time 3D Reconstruction from Images Taken from an UAV

    NASA Astrophysics Data System (ADS)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from an UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. Given these characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
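
    The two-view geometry at the heart of such a reconstruction, triangulating matched pixels from two known camera poses, can be sketched with OpenCV as below; the intrinsics, baseline and scene point are assumptions, and dense matching and meshing are not covered.

```python
"""Sketch of the core two-view step behind reconstruction from an image pair:
matched pixels are triangulated to 3D from the two camera projection matrices.
Calibration values and the synthetic point are illustrative assumptions."""
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])             # first camera at the origin
R = cv2.Rodrigues(np.array([[0.0], [-0.1], [0.0]]))[0]        # second camera: slight yaw
t = np.array([[-2.0], [0.0], [0.0]])                          # assumed 2 m baseline
P2 = K @ np.hstack([R, t])

X_true = np.array([[1.0], [0.5], [10.0], [1.0]])              # a scene point, homogeneous

def project(P, X):
    x = P @ X
    return (x[:2] / x[2]).reshape(2, 1)

pts1, pts2 = project(P1, X_true), project(P2, X_true)
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)               # 4x1 homogeneous result
X = (X_h[:3] / X_h[3]).ravel()
print("triangulated point:", X)                               # approximately [1.0, 0.5, 10.0]
```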

  18. Moving Human Path Tracking Based on Video Surveillance in 3D Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic, and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose, we integrate video surveillance data with a 3D indoor model of the building and develop a method for tracking the moving path of a single person. We process the surveillance videos to detect single-person moving traces; then we match the depth information of the 3D scenes to the constructed 3D indoor network model and define the traces in 3D indoor space. Finally, the traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. Experiments with a single person have verified the effectiveness and robustness of the method.

  19. A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics.

    PubMed

    Yoshino, Masanori; Saito, Toki; Kin, Taichi; Nakagawa, Daichi; Nakatomi, Hirofumi; Oyama, Hiroshi; Saito, Nobuhito

    2015-01-01

    Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread because existing commercial operative navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG. This article presents the technical details of our microscopic optically tracking navigation system. Our navigation system consists of three components: the operative microscope, registration, and the image display system. An optical tracker was attached to the microscope to monitor the position and attitude of the microscope in real time; point-pair registration was used to register the operating room coordinate system and the image coordinate system; and the image display system showed the 3D CG image in the field of view of the microscope. Ten neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model. The accuracy of our system was compared with that of a commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. The target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operation position and the surrounding structures. Systems like this may reduce intraoperative complications.

  20. A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics

    PubMed Central

    YOSHINO, Masanori; SAITO, Toki; KIN, Taichi; NAKAGAWA, Daichi; NAKATOMI, Hirofumi; OYAMA, Hiroshi; SAITO, Nobuhito

    2015-01-01

    Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread because existing commercial operative navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG. This article presents the technical details of our microscopic optically tracking navigation system. Our navigation system consists of three components: the operative microscope, registration, and the image display system. An optical tracker was attached to the microscope to monitor the position and attitude of the microscope in real time; point-pair registration was used to register the operating room coordinate system and the image coordinate system; and the image display system showed the 3D CG image in the field of view of the microscope. Ten neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model. The accuracy of our system was compared with that of a commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. The target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operation position and the surrounding structures. Systems like this may reduce intraoperative complications. PMID:26226982
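
    The point-pair registration and target registration error (TRE) evaluation mentioned in these two records can be sketched with a least-squares rigid fit (Kabsch/SVD); the fiducial and target coordinates below are synthetic assumptions.

```python
"""Sketch of point-pair registration: estimate the rigid transform between
corresponding fiducial points in two coordinate systems via SVD (Kabsch),
then evaluate target registration error (TRE). All coordinates are synthetic."""
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t with dst approx R @ src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(1)
image_pts = rng.uniform(-50, 50, size=(6, 3))                 # fiducials in image space (mm)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
true_R = Q if np.linalg.det(Q) > 0 else -Q                    # a proper rotation
true_t = np.array([10.0, -5.0, 30.0])
room_pts = image_pts @ true_R.T + true_t + 0.5 * rng.normal(size=image_pts.shape)

R, t = rigid_register(image_pts, room_pts)
target_img = np.array([[5.0, 5.0, 5.0]])                      # a target away from the fiducials
tre = np.linalg.norm((target_img @ R.T + t) - (target_img @ true_R.T + true_t))
print(f"target registration error: {tre:.2f} mm")
```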

  1. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has now become a widely known and familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods of displaying 3D images; we focus on one that reproduces light rays. This method needs many viewpoint images to achieve full parallax, because a different viewpoint image is displayed depending on the viewpoint. We propose to reduce wasted rays by limiting the projector's rays to the area around the viewer using a spinning mirror, and thereby to increase the effectiveness of the display device, in order to achieve a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulations of the proposed method, we confirmed the scanning range and the locus of ray movement in the horizontal direction. In addition, we confirmed the switching of viewpoints and the convergence performance of rays in the vertical direction. These results confirm that a full-parallax display can be realized.

  2. Handling Motion-Blur in 3D Tracking and Rendering for Augmented Reality.

    PubMed

    Park, Youngmin; Lepetit, Vincent; Woo, Woontack

    2012-09-01

    The contribution of this paper is two-fold. First, we show how to extend the ESM algorithm to handle motion blur in 3D object tracking. ESM is a powerful algorithm for template-matching-based tracking, but it can fail under motion blur. We introduce an image formation model that explicitly considers the possibility of blur, which results in a generalization of the original ESM algorithm. This allows the tracker to converge faster, more accurately, and more robustly, even under large amounts of blur. Our second contribution is an efficient method for rendering virtual objects under the estimated motion blur. It renders two images of the object under 3D perspective and warps them to create many intermediate images. By fusing these images, we obtain a final image of the virtual objects blurred consistently with the captured image. Because warping is much faster than 3D rendering, we can create realistically blurred images at a very low computational cost.

  3. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography

    PubMed Central

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J.; French, Paul M. W.; McGinty, James

    2015-01-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound. PMID:25909009

  4. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography.

    PubMed

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J; French, Paul M W; McGinty, James

    2015-04-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound.

  5. Surveillance, detection, and 3D infrared tracking of bullets, rockets, mortars, and artillery

    NASA Astrophysics Data System (ADS)

    Leslie, Daniel H.; Hyman, Howard; Moore, Fritz; Squire, Mark D.

    2001-09-01

    We describe test results using the FIRST (Fast InfraRed Sniper Tracker) to detect, track, and range to bullets in flight for determining the location of the bullet launch point. The technology developed for the FIRST system can be used to provide detection and accurate 3D track data for other small threat objects including rockets, mortars, and artillery in addition to bullets. We discuss the radiometry and detection range for these objects, and discuss the trade-offs involved in design of the very fast optical system for acquisition, tracking, and ranging of these targets.

  6. Prospective motion correction for 3D pseudo-continuous arterial spin labeling using an external optical tracking system.

    PubMed

    Aksoy, Murat; Maclaren, Julian; Bammer, Roland

    2017-06-01

    Head motion is an unsolved problem in magnetic resonance imaging (MRI) studies of the brain. Real-time tracking using a camera has recently been proposed as a way to prevent head motion artifacts. As compared to navigator-based approaches that use MRI data to detect and correct motion, optical motion correction works independently of the MRI scanner, thus providing low-latency real-time motion updates without requiring any modifications to the pulse sequence. The purpose of this study was two-fold: 1) to demonstrate that prospective optical motion correction using an optical camera mitigates artifacts from head motion in three-dimensional pseudo-continuous arterial spin labeling (3D PCASL) acquisitions and 2) to assess the effect of latency differences between real-time optical motion tracking and navigator-style approaches (such as PROMO). An optical motion correction system comprising a single camera and a marker attached to the patient's forehead was used to track motion at a rate of 60 fps. In the presence of motion, continuous tracking data from the optical system was used to update the scan plane in real time during the 3D-PCASL acquisition. Navigator-style correction was simulated by using the tracking data from the optical system and performing updates only once per repetition time. Three normal volunteers and a patient were instructed to perform continuous and discrete head motion throughout the scan. Optical motion correction yielded superior image quality compared to uncorrected images or images using navigator-style correction. The standard deviations of pixel-wise CBF differences between reference and non-corrected, navigator-style-corrected and optical-corrected data were 14.28, 14.35 and 11.09 mL/100 g/min for continuous motion, and 12.42, 12.04 and 9.60 mL/100 g/min for discrete motion. Data obtained from the patient revealed that motion can obscure pathology and that application of optical prospective correction can successfully reveal the underlying

  7. Particle Filters and Occlusion Handling for Rigid 2D-3D Pose Tracking

    PubMed Central

    Lee, Jehoon; Sandhu, Romeil; Tannenbaum, Allen

    2013-01-01

    In this paper, we address the problem of 2D-3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and to estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and then extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting exact region properties of the object, which play a pivotal role in maintaining the track for region-based tracking methods. To this end, a dynamical choice of how to invoke the objective functional is performed online based on the degree of dependencies between predictions and measurements of the system in accordance with the degree of occlusion and the variation of the object's pose. This scheme provides the robustness to deal with occlusions of an obstacle with different statistical properties from that of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios. PMID:24058277

  8. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

    In order to fully navigate using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. First, we propose a target detection strategy over a sequence of several images from the 3D model to initialize tracking. The overall purpose of this approach is to robustly match each image with the model views of the target. To this end, we designed a line segment detection and matching method based on multi-scale space techniques. Experiments on real images showed that our method is highly robust under various image changes. Second, we propose a method based on a 3D particle filter (PF) coupled with M-estimation to track the target and estimate its pose efficiently. In the proposed approach, a similarity observation model was designed according to a new distance function of line segments. Then, based on the tracking results of the PF, the pose is optimized using M-estimation. Experiments indicated that the proposed method can effectively track, and accurately estimate the pose of, a freely moving target in unconstrained environments.

  9. Structured light 3D tracking system for measuring motions in PET brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Jørgensen, Morten R.; Paulsen, Rasmus R.; Højgaard, Liselotte; Roed, Bjarne; Larsen, Rasmus

    2010-02-01

    Patient motion during scanning deteriorates image quality, especially for high-resolution PET scanners. A new 3D head tracking system for motion correction in high-resolution PET brain imaging is proposed and demonstrated. A prototype tracking system based on structured light, with a DLP projector and a CCD camera, is set up on a model of the High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo vision procedure in which the projector is treated as a camera. Additionally, the surface reconstructions are corrected for the non-linear projector output prior to image capture. The results are convincing and represent a first step toward a fully automated tracking system for measuring head motion in PET imaging.
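
    The standard four-step phase-shifting relation underlying PSI-style reconstruction can be sketched as below, using synthetic fringe images; projector/camera calibration and point-cloud generation are not modeled.

```python
"""Sketch of the standard 4-step phase-shifting relation used in PSI-style
structured light: with fringe images I1..I4 shifted by 90 degrees each, the
wrapped phase is atan2(I4 - I2, I1 - I3). The synthetic fringes are illustrative."""
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic test: a known phase ramp observed through four shifted fringe images.
x = np.linspace(0, 4 * np.pi, 640)
phi_true = np.tile(x, (480, 1))
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [0.5 + 0.4 * np.cos(phi_true + s) for s in shifts]
phi = wrapped_phase(*frames)
# phi equals phi_true wrapped into (-pi, pi]; check in a wrap-consistent way.
err = np.angle(np.exp(1j * (phi - phi_true)))
print("max wrap-consistent error:", float(np.max(np.abs(err))))
```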

  10. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    SciTech Connect

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk

    2015-03-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second-generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system, with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of less than 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators.

  11. Real-Time Large Scale 3D Reconstruction by Fusing Kinect and IMU Data

    NASA Astrophysics Data System (ADS)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes, such as robot navigation and augmented reality. However, generating dense 3D maps of large-scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) Large-scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) The coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides an incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images into the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than state-of-the-art systems in large-scale reconstruction.
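
    The fallback logic described in this record, ICP first, then SIFT odometry, then IMU integration, can be sketched as a simple cascade; the estimator callables and the failure signal are hypothetical stand-ins, not the authors' implementation.

```python
"""Sketch of a motion-estimation fallback cascade: try ICP, fall back to feature
(SIFT) odometry, then to IMU integration. The estimator callables and the failure
criterion (returning None) are hypothetical stand-ins."""
from typing import Callable, Optional
import numpy as np

Pose = np.ndarray  # 4x4 homogeneous transform

def estimate_motion(icp: Callable[[], Optional[Pose]],
                    sift_odometry: Callable[[], Optional[Pose]],
                    imu_integration: Callable[[], Pose]) -> Pose:
    """Return the incremental pose from the first estimator that succeeds."""
    for estimator in (icp, sift_odometry):
        delta = estimator()
        if delta is not None:          # None signals failure (e.g. planar scene, blur)
            return delta
    return imu_integration()           # the IMU always produces an estimate

# Toy usage: ICP "fails" (returns None), so the SIFT estimate is used.
identity = np.eye(4)
delta = estimate_motion(icp=lambda: None,
                        sift_odometry=lambda: identity,
                        imu_integration=lambda: identity)
print(np.allclose(delta, identity))
```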

  12. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional (4D) image is 3D volume data that varies with time. It is used to represent deforming or moving objects in applications such as virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by exploiting the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If the brick passes the test, it is defined as a 3D texture by OpenGL functions. The texture slices of the brick are then mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
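
    The brick-coherence idea, re-uploading a brick as a 3D texture only when it differs from the cached copy, might be sketched as follows; the brick size, tolerance and the upload stand-in (which would be an OpenGL 3D-texture call) are illustrative assumptions.

```python
"""Sketch of the brick-coherence idea: divide each incoming volume into bricks
and re-upload a brick only if it differs from the cached copy beyond a tolerance.
Brick size, tolerance and the `upload` stand-in are illustrative assumptions."""
import numpy as np

BRICK = 32           # brick edge length in voxels (assumed)
TOLERANCE = 1e-3     # mean absolute difference allowed before re-upload (assumed)

def update_bricks(volume, cache, upload=lambda key, data: None):
    """Upload only changed bricks; returns the number of uploads performed."""
    uploads = 0
    nz, ny, nx = volume.shape
    for z in range(0, nz, BRICK):
        for y in range(0, ny, BRICK):
            for x in range(0, nx, BRICK):
                key = (z, y, x)
                brick = volume[z:z + BRICK, y:y + BRICK, x:x + BRICK]
                cached = cache.get(key)
                if cached is None or np.mean(np.abs(brick - cached)) > TOLERANCE:
                    upload(key, brick)          # a real system would upload a 3D texture here
                    cache[key] = brick.copy()
                    uploads += 1
    return uploads

# Toy usage: the second frame only changes one corner of the volume.
cache = {}
frame0 = np.zeros((64, 64, 64), dtype=np.float32)
frame1 = frame0.copy(); frame1[:32, :32, :32] += 0.5
print(update_bricks(frame0, cache), "uploads for frame 0")   # all 8 bricks
print(update_bricks(frame1, cache), "uploads for frame 1")   # only the changed brick
```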

  13. Eye Tracking to Explore the Impacts of Photorealistic 3D Representations in Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

    Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload compared to traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representation can facilitate map reading and navigation in digital environments using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective, less efficient, and required a higher cognitive workload than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. The empirical results can help improve the usability of pedestrian navigation maps in future designs.

  14. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.

  15. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
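    A minimal sketch of the kind of abstraction layer described above is given below; the class and method names (Pose, TrackingSource, MetaTrackerServer, poll) are hypothetical and only illustrate how named objects from several tracking systems could be merged behind a single interface, with the most recent report winning during hand-over between systems:

```python
from dataclasses import dataclass
from typing import Dict, List, Protocol, Tuple

@dataclass
class Pose:
    position: Tuple[float, float, float]
    orientation: Tuple[float, float, float, float]  # quaternion (w, x, y, z)
    timestamp: float

class TrackingSource(Protocol):
    """Interface each hardware-specific driver is assumed to expose."""
    def poll(self) -> Dict[str, Pose]: ...

class MetaTrackerServer:
    """Merges named objects reported by several tracking systems into one view.
    If the same object name is reported by more than one source, the most recent
    report wins, so an object can hand over seamlessly between systems."""
    def __init__(self, sources: List[TrackingSource]):
        self.sources = list(sources)
        self.state: Dict[str, Pose] = {}

    def update(self) -> Dict[str, Pose]:
        for src in self.sources:
            for name, pose in src.poll().items():
                current = self.state.get(name)
                if current is None or pose.timestamp >= current.timestamp:
                    self.state[name] = pose
        return dict(self.state)
```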

  16. Real-time microscopic 3D shape measurement based on optimized pulse-width-modulation binary fringe projection

    NASA Astrophysics Data System (ADS)

    Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao

    2017-07-01

    In recent years, tremendous progress has been made in 3D measurement techniques, contributing to the realization of faster and more accurate 3D measurement. As a representative of these techniques, fringe projection profilometry (FPP) has become a commonly used method for real-time 3D measurement, such as real-time quality control and online inspection. To date, most related research has been concerned with macroscopic 3D measurement, but microscopic 3D measurement, especially real-time microscopic 3D measurement, is rarely reported. However, microscopic 3D measurement plays an important role in 3D metrology and is indispensable in applications that measure micro-scale objects, such as the accurate metrology of MEMS components in final devices to ensure their proper performance. In this paper, we propose a method that effectively combines optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time microscopic 3D measurement. A slight defocusing of our optimized binary patterns considerably alleviates the measurement error of four-step phase-shifting FPP, giving the binary patterns a performance comparable to ideal sinusoidal patterns. The static measurement accuracy reaches 8 μm, and the experimental results of a vibrating earphone diaphragm show that our system can realize real-time 3D measurement at 120 frames per second (FPS) with a measurement range of 8 mm × 6 mm laterally and 8 mm in depth.
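    For reference, the four-step phase-shifting relation underlying such FPP systems recovers the wrapped phase from four fringe images with shifts of 0, π/2, π and 3π/2; the sketch below is a generic illustration, not the authors' implementation, which additionally applies number-theoretical phase unwrapping:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images with phase shifts 0, pi/2, pi, 3*pi/2.
    With I_n = A + B*cos(phi + delta_n), one gets I4 - I2 = 2B*sin(phi) and
    I1 - I3 = 2B*cos(phi), hence the arctangent below."""
    return np.arctan2(I4 - I2, I1 - I3)   # wrapped phase; unwrapping is a separate step
```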

  17. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.

    PubMed

    Erdem, Arif Tanju; Ercan, Ali Özer

    2015-02-01

    In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused at the EKF with the camera measurements in either the correction stage (as measurement inputs) or the prediction stage (as control inputs). In the literature, generally only one type of inertial sensor is employed in the EKF, or when both are employed they are fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs, using the same data set collected at different motion speeds. In particular, we compare the performances of the different approaches based on 3D pose errors, in addition to the camera reprojection errors commonly reported in the literature, which provides further insight into the strengths and weaknesses of each approach. We show using both simulated and real data that it is always better to fuse both sensors in the measurement stage and that, in particular, the accelerometer helps more with 3D position tracking accuracy, whereas the gyroscope helps more with 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
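    The recommended fusion scheme, feeding both inertial sensors into the correction stage, can be sketched with a minimal Kalman skeleton; this is a linear illustration under assumed placeholder matrices, not the paper's filter (a full EKF would use Jacobians of nonlinear motion and measurement models):

```python
import numpy as np

class KalmanTracker:
    """Minimal linear Kalman skeleton; a full EKF would linearize nonlinear
    motion/measurement models via Jacobians. F, Q, H, R are placeholders."""
    def __init__(self, x0, P0, F, Q):
        self.x, self.P, self.F, self.Q = x0, P0, F, Q

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, H, R):
        y = z - H @ self.x                       # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Per frame, following the paper's recommendation to fuse both inertial
# sensors in the correction stage (measurement inputs):
#   kf.predict()
#   kf.update(z_camera, H_camera, R_camera)
#   kf.update(z_gyroscope, H_gyro, R_gyro)
#   kf.update(z_accelerometer, H_acc, R_acc)
```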

  18. Alignment of 3D Building Models and TIR Video Sequences with Line Tracking

    NASA Astrophysics Data System (ADS)

    Iwaszczuk, D.; Stilla, U.

    2014-11-01

    Thermal infrared imagery of urban areas has become interesting for urban climate investigations and thermal building inspections. Using a flying platform such as a UAV or a helicopter for the acquisition, and combining the thermal data with 3D building models via texturing, delivers a valuable groundwork for large-area building inspections. However, such thermal textures are useful for further analysis only if they are geometrically correctly extracted. This requires a good coregistration between the 3D building models and the thermal images, which cannot be achieved by direct georeferencing alone. Hence, this paper presents a methodology for the alignment of 3D building models and oblique TIR image sequences taken from a flying platform. In a single image, line correspondences between model edges and image line segments are found using an accumulator approach, and based on these correspondences an optimal camera pose is calculated to ensure the best match between the projected model and the image structures. Across the sequence, the linear features are tracked based on visibility prediction. The results of the proposed methodology are presented using a TIR image sequence taken from a helicopter over a densely built-up urban area. The novelty of this work lies in employing the uncertainty of the 3D building models and in an innovative tracking strategy based on a priori knowledge from the 3D building model and on visibility checking.

  19. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    PubMed

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time.
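    The system described above runs a dynamic-programming strain estimator in CUDA; as a much simplified, CPU-only illustration of the underlying idea, the axial displacement between a pre- and post-compression RF line can be estimated by windowed cross-correlation and differentiated to obtain strain (window, step and search range below are assumed values, and this is not the authors' algorithm):

```python
import numpy as np

def axial_strain(pre, post, win=64, step=32, search=10):
    """Very simplified axial strain estimate from one pre-/post-compression RF line.
    The system in the abstract uses a GPU dynamic-programming estimator; this
    windowed cross-correlation version only illustrates the underlying idea."""
    disp = []
    for start in range(0, len(pre) - win - search, step):
        ref = pre[start:start + win]
        best_score, best_lag = -np.inf, 0
        for lag in range(-search, search + 1):
            i0 = start + lag
            if i0 < 0 or i0 + win > len(post):
                continue
            seg = post[i0:i0 + win]
            score = np.dot(ref - ref.mean(), seg - seg.mean())  # un-normalized correlation
            if score > best_score:
                best_score, best_lag = score, lag
        disp.append(best_lag)                                   # displacement in samples
    return np.gradient(np.asarray(disp, float)) / step          # strain ~ d(displacement)/d(depth)
```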

  20. Display of real-time 3D sensor data in a DVE system

    NASA Astrophysics Data System (ADS)

    Völschow, Philipp; Münsterer, Thomas; Strobel, Michael; Kuhn, Michael

    2016-05-01

    This paper describes the display of real-time processed LiDAR 3D data in a DVE pilot assistance system. The goal is to present the pilot with a comprehensive image of the surrounding world without misleading or cluttering information. 3D data which can be attributed, i.e. classified, to terrain or to predefined obstacle classes are depicted differently from data belonging to elevated objects which could not be classified. Display techniques may differ between head-down and head-up displays to avoid cluttering the outside view in the latter case. While terrain is shown as shaded surfaces with grid structures, or as grid structures alone, classified obstacles are typically displayed with obstacle symbols only. Data from objects elevated above ground are displayed as shaded 3D points in space. In addition, the displayed 3D points are accumulated over a certain time frame, which on the one hand allows a cohesive structure to be displayed and on the other hand allows moving objects to be displayed correctly. Finally, color coding or texturing can be applied based on known terrain features such as land use.

  1. Real-time 3D human pose recognition from reconstructed volume via voxel classifiers

    NASA Astrophysics Data System (ADS)

    Yoo, ByungIn; Choi, Changkyu; Han, Jae-Joon; Lee, Changkyo; Kim, Wonjun; Suh, Sungjoo; Park, Dusik; Kim, Junmo

    2014-03-01

    This paper presents a human pose recognition method which simultaneously reconstructs a human volume based on an ensemble of voxel classifiers from a single depth image in real-time. Human pose recognition is a difficult task since a single depth camera can capture only the visible surfaces of a human body. In order to recognize invisible (self-occluded) surfaces of a human body, the proposed algorithm employs voxel classifiers trained with multi-layered synthetic voxels. Specifically, ray-casting onto a volumetric human model generates synthetic voxels, where each voxel consists of a 3D position and an ID corresponding to the body part. The synthesized volumetric data, which contain both visible and invisible body voxels, are utilized to train the voxel classifiers. As a result, the voxel classifiers not only identify the visible voxels but also reconstruct the 3D positions and the IDs of the invisible voxels. The experimental results show improved performance on estimating human poses due to the capability of inferring the invisible human body voxels. It is expected that the proposed algorithm can be applied to many fields such as telepresence, gaming, virtual fitting, the wellness business, and 3D content control on real 3D displays.

  2. Holographic multi-focus 3D two-photon polymerization with real-time calculated holograms.

    PubMed

    Vizsnyiczai, Gaszton; Kelemen, Lóránd; Ormos, Pál

    2014-10-06

    Two-photon polymerization enables the fabrication of micron-sized structures with submicron resolution. Spatial light modulators (SLM) have already been used to create multiple polymerizing foci in the photoresist by holographic beam shaping, thus enabling the parallel fabrication of multiple microstructures. Here we demonstrate the parallel two-photon polymerization of single 3D microstructures by multiple holographically translated foci. Multiple foci were created by phase holograms, which were calculated in real time on an NVIDIA CUDA GPU and displayed on an electronically addressed SLM. A 3D demonstration structure was designed that is built up from a nested set of dodecahedron frames of decreasing size. Each individual microstructure was fabricated with the parallel and coordinated motion of 5 holographic foci. The reproducibility and the high uniformity of features of the microstructures were verified by scanning electron microscopy.
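    A common way to compute such multi-focus phase holograms is the "gratings and lenses" superposition; the sketch below only illustrates that idea under assumed sign conventions and parameters, and is not the authors' GPU-accelerated code:

```python
import numpy as np

def multi_focus_hologram(shape, foci, wavelength, pixel, f0):
    """Phase-only hologram placing several foci via the 'gratings and lenses'
    superposition (illustrative sketch; conventions and parameters are assumed).

    shape      : (ny, nx) SLM resolution
    foci       : list of (x, y, z) focal offsets in metres from the nominal focus
    wavelength : laser wavelength in metres
    pixel      : SLM pixel pitch in metres
    f0         : focal length of the Fourier/objective lens in metres
    """
    ny, nx = shape
    u = (np.arange(nx) - nx / 2) * pixel
    v = (np.arange(ny) - ny / 2) * pixel
    U, V = np.meshgrid(u, v)
    k = 2 * np.pi / wavelength
    field = np.zeros(shape, dtype=complex)
    for x, y, z in foci:
        tilt = k * (x * U + y * V) / f0               # lateral shift: blazed grating term
        lens = -k * z * (U**2 + V**2) / (2 * f0**2)   # axial shift: Fresnel lens term
        field += np.exp(1j * (tilt + lens))
    return np.angle(field)   # phase pattern to display on the SLM
```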

  3. 2D array transducers for real-time 3D ultrasound guidance of interventional devices

    NASA Astrophysics Data System (ADS)

    Light, Edward D.; Smith, Stephen W.

    2009-02-01

    We describe catheter ring arrays for real-time 3D ultrasound guidance of devices such as vascular grafts, heart valves and vena cava filters. We have constructed several prototypes operating at 5 MHz and consisting of 54 elements using the W.L. Gore & Associates, Inc. micro-miniature ribbon cables. We have recently constructed a new transducer using a braided wiring technology from Precision Interconnect. This transducer consists of 54 elements at 4.8 MHz with pitch of 0.20 mm and typical -6 dB bandwidth of 22%. In all cases, the transducer and wiring assembly were integrated with an 11 French catheter of a Cook Medical deployment device for vena cava filters. Preliminary in vivo and in vitro testing is ongoing including simultaneous 3D ultrasound and x-ray fluoroscopy.

  4. 3D orbital tracking in a modified two-photon microscope: an application to the tracking of intracellular vesicles.

    PubMed

    Anzalone, Andrea; Annibale, Paolo; Gratton, Enrico

    2014-10-01

    The objective of this video protocol is to discuss how to perform and analyze a three-dimensional fluorescent orbital particle tracking experiment using a modified two-photon microscope(1). As opposed to conventional approaches (raster scan or wide field based on a stack of frames), 3D orbital tracking allows one to localize and follow, with high spatial (10 nm accuracy) and temporal resolution (50 Hz frequency response), the 3D displacement of a moving fluorescent particle on length-scales of hundreds of microns(2). The method is based on a feedback algorithm that controls the hardware of a two-photon laser scanning microscope in order to perform a circular orbit around the object to be tracked: the feedback mechanism maintains the fluorescent object in the center by controlling the displacement of the scanning beam(3-5). To demonstrate the advantages of this technique, we followed a fast moving organelle, the lysosome, within a living cell(6,7). Cells were plated according to standard protocols and stained using a commercially available lysosome dye. We discuss briefly the hardware configuration and, in more detail, the control software needed to perform a 3D orbital tracking experiment inside living cells. We discuss in detail the parameters required to control the scanning microscope and enable the motion of the beam in a closed orbit around the particle. We conclude by demonstrating how this method can be effectively used to track the fast motion of a labeled lysosome along microtubules in 3D within a live cell. Lysosomes can move with speeds in the range of 0.4-0.5 µm/sec, typically displaying directed motion along the microtubule network(8).
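    The feedback step of orbital tracking can be illustrated by estimating the particle's lateral offset from the first Fourier harmonic of the intensity recorded along one orbit; the calibration constant k below is an assumed, PSF-dependent factor, and this is only a sketch of the idea, not the protocol's control software:

```python
import numpy as np

def orbit_offset(intensity, orbit_radius, k=1.0):
    """Estimate the particle's lateral offset from the orbit centre using the
    first Fourier harmonic of the fluorescence intensity sampled along one orbit.
    k is an assumed, PSF-dependent calibration factor converting relative
    modulation into displacement (illustration of the feedback idea only)."""
    intensity = np.asarray(intensity, dtype=float)
    n = len(intensity)
    angles = 2 * np.pi * np.arange(n) / n
    c = np.sum(intensity * np.exp(1j * angles)) / n   # first harmonic of I(theta)
    modulation = 2 * np.abs(c) / intensity.mean()     # relative modulation depth
    phase = np.angle(c)                               # direction of the offset
    dx = k * orbit_radius * modulation * np.cos(phase)
    dy = k * orbit_radius * modulation * np.sin(phase)
    return dx, dy   # shift the orbit centre by (dx, dy) to re-centre on the particle
```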

  5. 3D Orbital Tracking in a Modified Two-photon Microscope: An Application to the Tracking of Intracellular Vesicles

    PubMed Central

    Gratton, Enrico

    2014-01-01

    The objective of this video protocol is to discuss how to perform and analyze a three-dimensional fluorescent orbital particle tracking experiment using a modified two-photon microscope1. As opposed to conventional approaches (raster scan or wide field based on a stack of frames), 3D orbital tracking allows one to localize and follow, with high spatial (10 nm accuracy) and temporal resolution (50 Hz frequency response), the 3D displacement of a moving fluorescent particle on length-scales of hundreds of microns2. The method is based on a feedback algorithm that controls the hardware of a two-photon laser scanning microscope in order to perform a circular orbit around the object to be tracked: the feedback mechanism maintains the fluorescent object in the center by controlling the displacement of the scanning beam3-5. To demonstrate the advantages of this technique, we followed a fast moving organelle, the lysosome, within a living cell6,7. Cells were plated according to standard protocols and stained using a commercially available lysosome dye. We discuss briefly the hardware configuration and, in more detail, the control software needed to perform a 3D orbital tracking experiment inside living cells. We discuss in detail the parameters required to control the scanning microscope and enable the motion of the beam in a closed orbit around the particle. We conclude by demonstrating how this method can be effectively used to track the fast motion of a labeled lysosome along microtubules in 3D within a live cell. Lysosomes can move with speeds in the range of 0.4-0.5 µm/sec, typically displaying directed motion along the microtubule network8. PMID:25350070

  6. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction.

    PubMed

    Bao, Qian; Jiang, Chenglong; Lin, Yun; Tan, Weixian; Wang, Zhirui; Hong, Wen

    2016-08-22

    With a short linear array configured in the cross-track direction, downward looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) can obtain the 3-D image of an imaging scene. To improve the cross-track resolution, sparse recovery methods have been investigated in recent years. In the compressive sensing (CS) framework, the reconstruction performance depends on the properties of the measurement matrix. This paper concerns techniques to optimize the measurement matrix and to deal with the mismatch problem of the measurement matrix caused by off-grid scatterers. In the model of cross-track reconstruction, the measurement matrix is mainly affected by the configuration of the antenna phase centers (APC); thus, two mutual-coherence-based criteria are proposed to optimize the configuration of the APCs. On the other hand, to compensate for the mismatch of the measurement matrix, a sparse Bayesian inference based method is introduced into the cross-track reconstruction, jointly estimating the scatterers and the off-grid error. Experiments demonstrate the performance of the proposed APC configuration schemes and the proposed cross-track reconstruction method.
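    Since both proposed criteria are built on mutual coherence, the quantity itself is worth spelling out; a minimal, generic sketch for an arbitrary measurement matrix (not the paper's optimization procedure) is:

```python
import numpy as np

def mutual_coherence(A):
    """Mutual coherence of a measurement matrix A: the largest absolute
    normalized inner product between two distinct columns. Lower values
    generally favour sparse recovery."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.conj().T @ An)
    np.fill_diagonal(G, 0.0)
    return float(G.max())
```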

  7. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction

    PubMed Central

    Bao, Qian; Jiang, Chenglong; Lin, Yun; Tan, Weixian; Wang, Zhirui; Hong, Wen

    2016-01-01

    With a short linear array configured in the cross-track direction, downward looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) can obtain the 3-D image of an imaging scene. To improve the cross-track resolution, sparse recovery methods have been investigated in recent years. In the compressive sensing (CS) framework, the reconstruction performance depends on the properties of the measurement matrix. This paper concerns techniques to optimize the measurement matrix and to deal with the mismatch problem of the measurement matrix caused by off-grid scatterers. In the model of cross-track reconstruction, the measurement matrix is mainly affected by the configuration of the antenna phase centers (APC); thus, two mutual-coherence-based criteria are proposed to optimize the configuration of the APCs. On the other hand, to compensate for the mismatch of the measurement matrix, a sparse Bayesian inference based method is introduced into the cross-track reconstruction, jointly estimating the scatterers and the off-grid error. Experiments demonstrate the performance of the proposed APC configuration schemes and the proposed cross-track reconstruction method. PMID:27556471

  8. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking.

    PubMed

    Dettmer, Simon L; Keyser, Ulrich F; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
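    The local mean-squared-displacement-vs-time analysis can be sketched as follows (function and parameter names are illustrative, not the authors' code); the small-lag slope yields a local diffusion coefficient, and the intercept reflects the local tracking accuracy:

```python
import numpy as np

def local_msd(track, t, window, max_lag):
    """Local mean-squared-displacement-vs-lag curve around time index t.

    track   : (N, 3) array of particle positions (one trajectory)
    window  : half-width (in frames) of the local analysis window
    max_lag : largest lag (in frames) to evaluate
    """
    track = np.asarray(track, dtype=float)
    lo, hi = max(0, t - window), min(len(track), t + window + 1)
    seg = track[lo:hi]
    max_lag = min(max_lag, len(seg) - 1)
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((seg[lag:] - seg[:-lag]) ** 2, axis=1)) for lag in lags])
    return lags, msd
```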

  9. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  10. Demonstration of digital hologram recording and 3D-scenes reconstruction in real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Kulakov, Mikhail N.; Kurbatova, Ekaterina A.; Molodtsov, Dmitriy Y.; Rodin, Vladislav G.

    2016-04-01

    Digital holography is a technique that allows information about 2D objects and 3D scenes to be reconstructed. This is achieved by recording the interference pattern formed by two beams: the object beam and the reference beam. The pattern registered by the digital camera is processed, which yields the amplitude and phase of the object beam. The shape of 2D objects and 3D scenes can then be reconstructed numerically (using a computer) or optically (using spatial light modulators, SLMs). In this work, a MegaPlus II ES11000 camera was used for digital hologram recording. The camera has 4008 × 2672 pixels with sizes of 9 μm × 9 μm. For hologram recording, a 50 mW frequency-doubled Nd:YAG laser with a wavelength of 532 nm was used. A liquid crystal on silicon SLM, HoloEye PLUTO VIS, was used for the optical reconstruction of the digital holograms. The SLM has 1920 × 1080 pixels with sizes of 8 μm × 8 μm. For object reconstruction, a 10 mW He-Ne laser with a wavelength of 632.8 nm was used. The setups for digital hologram recording and their optical reconstruction with the SLM were combined as follows. The MegaPlus Central Control Software displays the frames registered by the camera on the computer monitor with a small delay. The SLM can act as an additional monitor, so the displayed frames can be shown on the SLM display in near real time. Thus, recording and reconstruction of the 3D scenes were obtained in real time. The resolution of the displayed frames was chosen to equal that of the SLM. The number of pixels was limited by the SLM resolution, and the frame rate by that of the camera. This holographic video setup was applied without additional program implementations that would increase the time delay between hologram recording and object reconstruction. The setup was demonstrated for the reconstruction of 3D scenes.

  11. A 3D front tracking method on a CPU/GPU system

    SciTech Connect

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe the method to port a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are performed on a GPU. Interface mesh adaptation is performed on a CPU. The convergence of the method is assessed from the test problems with given velocity fields. Performance results show overall speedups from 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.

  12. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.

  13. Real-Time Tracking of Implanted Markers During Radiation Treatment by Use of Simultaneous kV and MV Imaging

    DTIC Science & Technology

    2009-03-01

    to be on the order of less than 1 mm in all three spatial dimensions. This investigation has demonstrated the use of a real-time 3D fiducial tracking...images. In this proposal we report our implementation of such a real-time 3D tracking system and demonstrate that a spatial accuracy of < 1 mm is...other spatial dimension necessary for full 3D target localization. Compared to other fluoroscopic tracking systems, which require the use of two or

  14. Cooperative Wall-climbing Robots in 3D Environments for Surveillance and Target Tracking

    DTIC Science & Technology

    2009-02-08

    distribution of impeller vanes, volume of the chamber, and sealing effect, etc. Fig. 5 and 6 show some exemplary simulation results. In paper [11], we...multiple nonholonomic mobile robots using Cartesian coordinates. Based on the special feature...gamma-ray or x-ray cargo inspection system. Three-dimensional (3D) measurements of the objects inside a cargo can be obtained by effectively

  15. 3D imaging of semiconductor colloid nanocrystals: on the way to nanodiagnostics of track membranes

    NASA Astrophysics Data System (ADS)

    Kulyk, S. I.; Eremchev, I. Y.; Gorshelev, A. A.; Naumov, A. V.; Zagorsky, D. L.; Kotova, S. P.; Volostnikov, V. G.; Vorontsov, E. N.

    2016-12-01

    The work concerns the feasibility of 3D optical diagnostics of porous media with subdiffraction spatial resolution via epi-luminescence microscopy of single semiconductor colloidal nanocrystals (quantum dots, QD) CdSe/ZnS used as emitting labels/nanoprobes. Nanoprecise reconstruction of the axial coordinate is provided by the double-helix point-spread-function transformation (DH-PSF) technique. The results of QD localization in a polycarbonate track membrane (TM) are presented.

  16. 3D imaging of semiconductor colloid nanocrystals: on the way to nanodiagnostics of track membranes

    NASA Astrophysics Data System (ADS)

    Kulyk, S. I.; Eremchev, I. Y.; Gorshelev, A. A.; Naumov, A. V.; Zagorsky, D. L.; Kotova, S. P.; Volostnikov, V. G.; Vorontsov, E. N.

    2017-01-01

    The work concerns the feasibility of 3D optical diagnostics of porous media with subdiffraction spatial resolution via epi-luminescence microscopy of single semiconductor colloidal nanocrystals (quantum dots, QD) CdSe/ZnS used as emitting labels/nanoprobes. Nanoprecise reconstruction of the axial coordinate is provided by the double-helix point-spread-function transformation (DH-PSF) technique. The results of QD localization in a polycarbonate track membrane (TM) are presented.

  17. Mechanical left ventricular dyssynchrony detection by endocardium displacement analysis with 3D speckle tracking technology.

    PubMed

    Li, Chi Hion; Carreras, Francesc; Leta, Rubén; Carballeira, Lidia; Pujadas, Sandra; Pons-Lladó, Guillem

    2010-12-01

    Myocardium deformation and displacement analysis by echocardiography has proven useful for evaluating the synchrony of myocardial mechanics. The aim of our study was to evaluate the mean standard deviation of time to longitudinal peak displacement in 16 cardiac segments by 3D echo wall motion Speckle Tracking analysis. We studied 15 patients with ventricular dyssynchrony, defined by a QRS > 120 ms on the ECG. We obtained the differences between the time peaks of endocardial longitudinal displacement for 16 segments of the heart by 3D echo Speckle Tracking. We compared the temporal dispersion of these peaks with the results obtained in a control group of 13 healthy individuals without dyssynchrony. The results showed a significant difference (p < 0.001) between the dispersion of the standard deviation in the 13 patients in the control group (34 ms ± 19) and the 15 patients in the dyssynchrony group (117 ms ± 57). We describe a new parameter obtained by 3D echo wall motion Speckle Tracking analysis for the detection of dyssynchrony. It can be useful to identify dyssynchrony of left ventricular myocardial mechanics, to indicate resynchronization therapy, to optimize the parameters of the device and to achieve a less operator-dependent evaluation.

  18. 3D ocular ultrasound using gaze tracking on the contralateral eye: a feasibility study.

    PubMed

    Afsham, Narges; Najafi, Mohammad; Abolmaesumi, Purang; Rohling, Robert

    2011-01-01

    A gaze-deviated examination of the eye with a 2D ultrasound transducer is a common and informative ophthalmic test; however, the complex task of estimating the pose of the ultrasound images relative to the eye affects 3D interpretation. To tackle this challenge, a novel system for 3D image reconstruction based on gaze tracking of the contralateral eye has been proposed. The gaze fixates on several target points and, for each fixation, the pose of the examined eye is inferred from the gaze tracking. A single-camera system has been developed for pose estimation, combined with subject-specific parameter identification. The ultrasound images are then transformed to the coordinate system of the examined eye to create a 3D volume. The accuracy of the proposed gaze tracking system and of the pose estimation of the eye has been validated in a set of experiments. The overall system errors, including pose estimation and calibration, are 3.12 mm and 4.68 degrees.

  19. Registration of 2D cardiac images to real-time 3D ultrasound volumes for 3D stress echocardiography

    NASA Astrophysics Data System (ADS)

    Leung, K. Y. Esther; van Stralen, Marijn; Voormolen, Marco M.; van Burken, Gerard; Nemes, Attila; ten Cate, Folkert J.; Geleijnse, Marcel L.; de Jong, Nico; van der Steen, Antonius F. W.; Reiber, Johan H. C.; Bosch, Johan G.

    2006-03-01

    Three-dimensional (3D) stress echocardiography is a novel technique for diagnosing cardiac dysfunction by comparing wall motion of the left ventricle under different stages of stress. For quantitative comparison of this motion, it is essential to register the ultrasound data. We propose an intensity-based rigid registration method to retrieve two-dimensional (2D) four-chamber (4C), two-chamber, and short-axis planes from the 3D data set acquired in the stress stage, using manually selected 2D planes in the rest stage as reference. The algorithm uses Nelder-Mead simplex optimization to find the optimal transformation of one uniform scaling, three rotation, and three translation parameters. We compared registration using the SAD, SSD, and NCC metrics, performed on four resolution levels of a Gaussian pyramid. The registration's effectiveness was assessed by comparing the 3D positions of the registered apex and mitral valve midpoints and the 4C direction with the manually selected results. The registration was tested on data from 20 patients. Best results were found using the NCC metric on data downsampled by a factor of two: mean registration errors were 8.1 mm, 5.4 mm, and 8.0° in the apex position, mitral valve position, and 4C direction, respectively. The errors were close to the interobserver (7.1 mm, 3.8 mm, 7.4°) and intraobserver variability (5.2 mm, 3.3 mm, 7.0°), and better than the error before registration (9.4 mm, 9.0 mm, 9.9°). We demonstrated that the registration algorithm visually and quantitatively improves the alignment of rest and stress data sets, performing similarly to manual alignment. This will improve automated analysis in 3D stress echocardiography.
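    The optimization loop described above can be sketched as follows; extract_plane is a hypothetical resampling helper standing in for the plane extraction from the 3D data set, the parameter vector layout is assumed, and the Gaussian-pyramid levels are omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def register_plane(reference_2d, volume_3d, extract_plane, x0):
    """Search for the 7 parameters [scale, rx, ry, rz, tx, ty, tz] that maximize
    the NCC between a reference 2D plane and the plane resampled from the 3D
    stress volume. extract_plane(volume, params) -> 2D image is a hypothetical
    resampling helper; x0 is the initial guess."""
    cost = lambda p: -ncc(reference_2d, extract_plane(volume_3d, p))
    result = minimize(cost, x0, method="Nelder-Mead")
    return result.x, -result.fun   # optimal parameters and the achieved NCC
```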

  20. 3D Visualization of Monte-Carlo Simulation's of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy ion tracks include a very high energy deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to the biomolecules and the tissue matrix, considered the most important lesions for acute and long term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy ion biophysics.

  1. 3D Visualization of Monte-Carlo Simulation's of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy ion tracks include a very high energy deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to the biomolecules and the tissue matrix, considered the most important lesions for acute and long term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy ion biophysics.

  2. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    PubMed

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid-1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors.

  3. A Comprehensive Software System for Interactive, Real-time, Visual 3D Deterministic and Stochastic Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Li, S.

    2002-05-01

    Taking advantage of recent developments in groundwater modeling research and in computer, image and graphics processing, and object-oriented programming technologies, Dr. Li and his research group have recently developed a comprehensive software system for unified deterministic and stochastic groundwater modeling. Characterized by a new real-time modeling paradigm and improved computational algorithms, the software simulates 3D unsteady flow and reactive transport in general groundwater formations subject to both systematic and "randomly" varying stresses and to geological and chemical heterogeneity. The software system has the following distinct features and capabilities: interactive simulation and real-time visualization and animation of flow in response to deterministic as well as stochastic stresses; interactive, visual, and real-time particle tracking, random walk, and reactive plume modeling in both systematically and randomly fluctuating flow; interactive statistical inference, scattered data interpolation, regression, and ordinary and universal Kriging, conditional and unconditional simulation; real-time, visual and parallel conditional flow and transport simulations; interactive water and contaminant mass balance analysis and visual and real-time flux updates; interactive, visual, and real-time monitoring of head and flux hydrographs and concentration breakthroughs; real-time modeling and visualization of aquifer transition from confined to unconfined to partially de-saturated or completely dry and rewetting; simultaneous and embedded subscale models, with automatic and real-time regional-to-local data extraction; multiple subscale flow and transport models; real-time modeling of steady and transient vertical flow patterns on multiple arbitrarily-shaped cross-sections and simultaneous visualization of aquifer stratigraphy, properties, hydrological features (rivers, lakes, wetlands, wells, drains, surface seeps), and dynamically adjusted surface flooding area

  4. 3D shape tracking of minimally invasive medical instruments using optical frequency domain reflectometry

    NASA Astrophysics Data System (ADS)

    Parent, Francois; Kanti Mandal, Koushik; Loranger, Sebastien; Watanabe Fernandes, Eric Hideki; Kashyap, Raman; Kadoury, Samuel

    2016-03-01

    We propose here a new alternative for providing real-time device tracking during minimally invasive interventions, using a truly distributed strain sensor based on optical frequency domain reflectometry (OFDR) in optical fibers. The guidance of minimally invasive medical instruments such as needles or catheters (e.g., by adding a piezoelectric coating) has been the focus of extensive research in the past decades. Real-time tracking of instruments in medical interventions facilitates image guidance and helps the user reach a pre-localized target more precisely. Image-guided systems using ultrasound imaging and shape sensors based on fiber Bragg grating (FBG)-embedded optical fibers can provide retroactive feedback to the user in order to reach the targeted areas with even more precision. However, ultrasound imaging with electro-magnetic tracking cannot be used in the magnetic resonance imaging (MRI) suite, while shape sensors based on FBGs embedded in optical fibers provide only discrete values of the instrument position, which requires approximations in order to evaluate its global shape. This is why a truly distributed strain sensor based on OFDR could enhance the tracking accuracy. In both cases, since the strain is proportional to the local curvature of the fiber, a strain sensor can provide the three-dimensional shape of medical instruments by simply inserting fibers inside the devices. To faithfully follow the shape of the needle in the tracking frame, 3 fibers glued in a specific geometry are used, providing 3 degrees of freedom along the fiber. Near real-time tracking of medical instruments is thus obtained, offering clear advantages for clinical monitoring in remotely controlled catheter or needle guidance. We present results demonstrating the promising aspects of this approach as well as the limitations of the OFDR technique.
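    With three fibers arranged around the instrument, the bending curvature can be recovered from the three strain profiles and integrated to a small-deflection shape; the sketch below only illustrates this geometric step (the strain model, fiber angles at 0°, 120°, 240°, radial distance r, and the integration scheme are assumptions, not the authors' reconstruction):

```python
import numpy as np

def curvature_from_strains(e1, e2, e3, r):
    """Recover the bending curvature components (kx, ky) along the instrument.

    e1, e2, e3 : strain profiles of three fibers glued at 0, 120, 240 degrees
                 around the instrument at radial distance r (arrays over arc length).
    Assumed model: e_i = -r*(kx*cos(a_i) + ky*sin(a_i)) + e_axial, solved per
    arc-length sample by least squares."""
    angles = np.deg2rad([0.0, 120.0, 240.0])
    A = np.column_stack([-r * np.cos(angles), -r * np.sin(angles), np.ones(3)])
    E = np.vstack([e1, e2, e3])                   # shape (3, N)
    sol, *_ = np.linalg.lstsq(A, E, rcond=None)   # rows: kx(s), ky(s), axial strain
    return sol[0], sol[1]

def small_deflection_shape(kx, ky, ds):
    """Integrate the curvature twice to obtain the lateral shape x(s), y(s),
    valid for small deflections of a nearly straight needle."""
    def integrate_twice(k):
        slope = np.cumsum(k) * ds
        return np.cumsum(slope) * ds
    return integrate_twice(kx), integrate_twice(ky)
```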

  5. Ring array transducers for real-time 3-D imaging of an atrial septal occluder.

    PubMed

    Light, Edward D; Lindsey, Brooks D; Upchurch, Joseph A; Smith, Stephen W

    2012-08-01

    We developed new miniature ring array transducers integrated into interventional device catheters such as those used to deploy atrial septal occluders. Each ring array consisted of 55 elements operating near 5 MHz with an interelement spacing of 0.20 mm. It was constructed on a flat piece of copper-clad polyimide and then wrapped around an 11 French O.D. catheter. We used a braided cabling technology from Tyco Electronics Corporation to connect the elements to the Volumetric Medical Imaging (VMI) real-time 3-D ultrasound scanner. Transducer performance yielded a -6 dB fractional bandwidth of 20% centered at 4.7 MHz without a matching layer, versus an average bandwidth of 60% centered at 4.4 MHz with a matching layer. Real-time 3-D rendered images of an en face view of a Gore Helex septal occluder in a water tank showed a finer texture of the device surface from the ring array with the matching layer.

  6. In vivo real-time 3-D intracardiac echo using PMUT arrays.

    PubMed

    Dausch, David E; Gilchrist, Kristin H; Carlson, James B; Hall, Stephen D; Castellucci, John B; von Ramm, Olaf T

    2014-10-01

    Piezoelectric micromachined ultrasound transducer (PMUT) matrix arrays were fabricated containing novel through-silicon interconnects and integrated into intracardiac catheters for in vivo real-time 3-D imaging. PMUT arrays with rectangular apertures containing 256 and 512 active elements were fabricated and operated at 5 MHz. The arrays were bulk micromachined in silicon-on-insulator substrates, and contained flexural unimorph membranes comprising the device silicon, lead zirconate titanate (PZT), and electrode layers. Through-silicon interconnects were fabricated by depositing a thin-film conformal copper layer in the bulk micromachined via under each PMUT membrane and photolithographically patterning this copper layer on the back of the substrate to facilitate contact with the individually addressable matrix array elements. Cable assemblies containing insulated 45-AWG copper wires and a termination silicon substrate were thermocompression bonded to the PMUT substrate for signal wire interconnection to the PMUT array. Side-viewing 14-Fr catheters were fabricated and introduced through the femoral vein in an adult porcine model. Real-time 3-D images were acquired from the right atrium using a prototype ultrasound scanner. Full 60° × 60° volume sectors were obtained with penetration depth of 8 to 10 cm at frame rates of 26 to 31 volumes per second.

  7. Twin-beam real-time position estimation of micro-objects in 3D

    NASA Astrophysics Data System (ADS)

    Gurtner, Martin; Zemánek, Jiří

    2016-12-01

    Various optical methods for measuring positions of micro-objects in 3D have been reported in the literature. Nevertheless, the majority of them are not suitable for real-time operation, which is needed, for example, for feedback position control. In this paper, we present a method for real-time estimation of the position of micro-objects in 3D; the method is based on twin-beam illumination and requires only a very simple hardware setup whose essential part is a standard image sensor without any lens. The performance of the proposed method is tested during a micro-manipulation task in which the estimated position served as feedback for the controller. The experiments show that the estimate is accurate to within ∼3 μm in the lateral position and ∼7 μm in the axial distance with a refresh rate of 10 Hz. Although the experiments are done using spherical objects, the presented method could be modified to handle non-spherical objects as well.

  8. Potential benefits of dosimetric VMAT tracking verified with 3D film measurements.

    PubMed

    Crijns, Wouter; Defraene, Gilles; Van Herck, Hans; Depuydt, Tom; Haustermans, Karin; Maes, Frederik; Van den Heuvel, Frank

    2016-05-01

    To evaluate three different plan adaptation strategies using 3D film-stack dose measurements of both focal boost and hypofractionated prostate VMAT treatments. The adaptation strategies (a couch shift, geometric tracking, and dosimetric tracking) were applied for three realistic intrafraction prostate motions. A focal boost (35 × 2.2 and 35 × 2.7 Gy) and a hypofractionated (5 × 7.25 Gy) prostate VMAT plan were created for a heterogeneous phantom that allows for internal prostate motion. For these plans geometric tracking and dosimetric tracking were evaluated by ionization chamber (IC) point dose measurements (zero-D) and measurements using a stack of EBT3 films (3D). The geometric tracking applied translations, rotations, and scaling of the MLC aperture in response to realistic prostate motions. The dosimetric tracking additionally corrected the monitor units to resolve variations due to difference in depth, tissue heterogeneity, and MLC-aperture. The tracking was based on the positions of four fiducial points only. The film measurements were compared to the gold standard (i.e., IC measurements) and the planned dose distribution. Additionally, the 3D measurements were converted to dose volume histograms, tumor control probability, and normal tissue complication probability parameters (DVH/TCP/NTCP) as a direct estimate of clinical relevance of the proposed tracking. Compared to the planned dose distribution, measurements without prostate motion and tracking showed already a reduced homogeneity of the dose distribution. Adding prostate motion further blurs the DVHs for all treatment approaches. The clinical practice (no tracking) delivered the dose distribution inside the PTV but off target (CTV), resulting in boost dose errors up to 10%. The geometric and dosimetric tracking corrected the dose distribution's position. Moreover, the dosimetric tracking could achieve the planned boost DVH, but not the DVH of the more homogeneously irradiated prostate. A drawback

  9. Potential benefits of dosimetric VMAT tracking verified with 3D film measurements

    SciTech Connect

    Crijns, Wouter Depuydt, Tom; Haustermans, Karin; Defraene, Gilles; Van Herck, Hans; Maes, Frederik; Van den Heuvel, Frank

    2016-05-15

    Purpose: To evaluate three different plan adaptation strategies using 3D film-stack dose measurements of both focal boost and hypofractionated prostate VMAT treatments. The adaptation strategies (a couch shift, geometric tracking, and dosimetric tracking) were applied for three realistic intrafraction prostate motions. Methods: A focal boost (35 × 2.2 and 35 × 2.7 Gy) and a hypofractionated (5 × 7.25 Gy) prostate VMAT plan were created for a heterogeneous phantom that allows for internal prostate motion. For these plans geometric tracking and dosimetric tracking were evaluated by ionization chamber (IC) point dose measurements (zero-D) and measurements using a stack of EBT3 films (3D). The geometric tracking applied translations, rotations, and scaling of the MLC aperture in response to realistic prostate motions. The dosimetric tracking additionally corrected the monitor units to resolve variations due to difference in depth, tissue heterogeneity, and MLC-aperture. The tracking was based on the positions of four fiducial points only. The film measurements were compared to the gold standard (i.e., IC measurements) and the planned dose distribution. Additionally, the 3D measurements were converted to dose volume histograms, tumor control probability, and normal tissue complication probability parameters (DVH/TCP/NTCP) as a direct estimate of clinical relevance of the proposed tracking. Results: Compared to the planned dose distribution, measurements without prostate motion and tracking showed already a reduced homogeneity of the dose distribution. Adding prostate motion further blurs the DVHs for all treatment approaches. The clinical practice (no tracking) delivered the dose distribution inside the PTV but off target (CTV), resulting in boost dose errors up to 10%. The geometric and dosimetric tracking corrected the dose distribution’s position. Moreover, the dosimetric tracking could achieve the planned boost DVH, but not the DVH of the more homogeneously

  10. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive & easy to use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real-time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using Visionsense camera, as a step toward fully automated multi-modality 3D registration.

  11. Automated 3-D tracking of centrosomes in sequences of confocal image stacks.

    PubMed

    Kerekes, Ryan A; Gleason, Shaun S; Trivedi, Niraj; Solecki, David J

    2009-01-01

    In order to facilitate the study of neuron migration, we propose a method for 3-D detection and tracking of centrosomes in time-lapse confocal image stacks of live neuron cells. We combine Laplacian-based blob detection, adaptive thresholding, and the extraction of scale and roundness features to find centrosome-like objects in each frame. We link these detections using the joint probabilistic data association filter (JPDAF) tracking algorithm with a Newtonian state-space model tailored to the motion characteristics of centrosomes in live neurons. We apply our algorithm to image sequences containing multiple cells, some of which had been treated with motion-inhibiting drugs. We provide qualitative results and quantitative comparisons to manual segmentation and tracking results showing that our average motion estimates agree to within 13% of those computed manually by neurobiologists.
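    A much simplified stand-in for this pipeline, using a Laplacian-of-Gaussian detector and greedy nearest-neighbour linking in place of the JPDAF association, is sketched below (parameters are illustrative, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_blobs(stack, sigma=2.0, threshold=0.1):
    """Laplacian-of-Gaussian blob candidates in one 3D confocal stack.
    Bright blobs give negative LoG responses, hence the sign flip."""
    response = -gaussian_laplace(stack.astype(float), sigma)
    peaks = (response == maximum_filter(response, size=3)) & (response > threshold)
    return np.argwhere(peaks)            # (z, y, x) candidate centrosome positions

def link_nearest(prev_pts, new_pts, max_dist=10.0):
    """Greedy nearest-neighbour linking between consecutive frames
    (a stand-in for the JPDAF association used in the paper)."""
    new_pts = np.asarray(new_pts, dtype=float)
    links, used = [], set()
    if len(new_pts) == 0:
        return links
    for i, p in enumerate(np.asarray(prev_pts, dtype=float)):
        d = np.linalg.norm(new_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist and j not in used:
            links.append((i, j))
            used.add(j)
    return links
```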

  12. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  13. Automated 3-D Tracking of Centrosomes in Sequences of Confocal Image Stacks

    SciTech Connect

    Kerekes, Ryan A; Gleason, Shaun Scott; Trivedi, Dr. Niraj; Solecki, Dr. David

    2009-01-01

    In order to facilitate the study of neuron migration, we propose a method for 3-D detection and tracking of centrosomes in time-lapse confocal image stacks of live neuron cells. We combine Laplacian-based blob detection, adaptive thresholding, and the extraction of scale and roundness features to find centrosome-like objects in each frame. We link these detections using the joint probabilistic data association filter (JPDAF) tracking algorithm with a Newtonian state-space model tailored to the motion characteristics of centrosomes in live neurons. We apply our algorithm to image sequences containing multiple cells, some of which had been treated with motion-inhibiting drugs. We provide qualitative results and quantitative comparisons to manual segmentation and tracking results showing that our motion estimates closely agree with those generated by neurobiology experts.

  15. Meanie3D - a mean-shift based, multivariate, multi-scale clustering and tracking algorithm

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Malte, Diederich; Silke, Troemel

    2014-05-01

    Project OASE is one of five working groups at the HErZ (Hans Ertel Centre for Weather Research), an ongoing effort by the German weather service (DWD) to further weather-prediction research at universities. The goal of project OASE is to gain an object-based perspective on convective events by identifying them early in the onset of convective initiation and following them through the entire lifecycle. The ability to follow objects in this fashion requires new ways of object definition and tracking, which incorporate all the available data sets of interest, such as satellite imagery, weather radar or lightning counts. The Meanie3D algorithm provides the necessary tool for this purpose. Core features of this new approach to clustering (object identification) and tracking are the ability to identify objects using the mean-shift algorithm applied to a multitude of variables (multivariate), as well as the ability to detect objects on various scales (multi-scale) using elements of scale-space theory. The algorithm works in 2D as well as 3D without modifications. It is an extension of a method well known from the field of computer vision and image processing, which has been tailored to serve the needs of the meteorological community. Beyond the specific application demonstrated here (convective initiation), the algorithm is easily tailored to provide clustering and tracking for a wide class of data sets and problems. In this talk, the demonstration is carried out on two of the OASE group's own composite data sets: a 2D nationwide composite of Germany combining C-band radar and satellite information, and a 3D local composite of the Bonn/Jülich area containing a high-resolution 3D X-band radar composite.
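
    As a minimal illustration of the multivariate clustering idea (not the Meanie3D implementation), the sketch below runs scikit-learn's mean-shift on grid points described jointly by their spatial coordinates and observed variables; the scale-space (multi-scale) machinery and the tracking step are not reproduced, and the variable names and bandwidth settings are assumptions.

    ```python
    # Minimal sketch (not Meanie3D): multivariate mean-shift clustering of
    # gridded data, where each sample combines spatial coordinates with
    # observed variables (e.g. radar reflectivity and a satellite channel).
    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth

    def cluster_composite(x, y, reflectivity, brightness_temp, spatial_scale=1.0):
        """All inputs are 2D grids of equal shape; returns a label grid."""
        features = np.column_stack([
            x.ravel() * spatial_scale,      # weighting space vs. value is a choice
            y.ravel() * spatial_scale,
            reflectivity.ravel(),
            brightness_temp.ravel(),
        ])
        bandwidth = estimate_bandwidth(features, quantile=0.1, n_samples=2000)
        ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
        labels = ms.fit_predict(features)
        return labels.reshape(x.shape), ms.cluster_centers_
    ```

    Varying the bandwidth (or the spatial scale factor) loosely mimics detection at different scales, which the full algorithm handles with scale-space filtering.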

  16. Web GIS in practice V: 3-D interactive and real-time mapping in Second Life

    PubMed Central

    Boulos, Maged N Kamel; Burden, David

    2007-01-01

    This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds) in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g. their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy. PMID:18042275

  17. Design of a parallel VLSI engine for real-time visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Bentum, Mark J.; Smit, Jaap

    1994-05-01

    Three-dimensional medical scanners are widely available in today's hospitals to acquire a dataset of the human body without the need for surgery. The usefulness of this diagnostic information is limited by the lack of techniques to visualize the datasets. With the increasing computer power of today's workstations it is possible to make a transparent view of the 3D dataset. An interactive mode is necessary, however, to fully explore the 3D dataset. If both a high resolution and a high interactive speed are required, the necessary computational power is enormous. Therefore it is necessary to map the algorithms for volume visualization in a rather specific way onto (dedicated) chips to overcome the performance gap. This paper discusses a high-performance special-purpose low-power system, the Real-Time Volume Rendering Engine (RT-VRE), capable of rendering a 3D dataset of 256³ voxels onto a display of 750² pixels with an interaction rate of 25 images per second. The RT-VRE allows biomedical engineers to interactively visualize and investigate their data.

  18. Laser 3-D measuring system and real-time visual feedback for teaching and correcting breathing.

    PubMed

    Povšič, Klemen; Fležar, Matjaž; Možina, Janez; Jezeršek, Matija

    2012-03-01

    We present a novel method for real-time 3-D body-shape measurement during breathing based on the laser multiple-line triangulation principle. The laser projector illuminates the measured surface with a pattern of 33 equally inclined light planes. Simultaneously, the camera records the distorted light pattern from a different viewpoint. The acquired images are transferred to a personal computer, where the 3-D surface reconstruction, shape analysis, and display are performed in real time. The measured surface displacements are displayed with a color palette, which enables visual feedback to the patient while breathing is being taught. The measuring range is approximately 400×600×500 mm in width, height, and depth, respectively, and the accuracy of the calibrated apparatus is ±0.7 mm. The system was evaluated by means of its capability to distinguish between different breathing patterns. The accuracy of the measured volumes of chest-wall deformation during breathing was verified using standard methods of volume measurements. The results show that the presented 3-D measuring system with visual feedback has great potential as a diagnostic and training assistance tool when monitoring and evaluating the breathing pattern, because it offers a simple and effective method of graphical communication with the patient. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).

  19. A real-time misalignment correction algorithm for stereoscopic 3D cameras

    NASA Astrophysics Data System (ADS)

    Pekkucuksen, Ibrahim E.; Batur, Aziz Umit; Zhang, Buyue

    2012-03-01

    Camera calibration is an important problem for stereo 3-D cameras since the misalignment between the two views can lead to vertical disparities that significantly degrade 3-D viewing quality. Offline calibration during manufacturing is not always an option, especially for mass-produced cameras, due to cost. In addition, even if one-time calibration is performed during manufacturing, its accuracy cannot be maintained indefinitely because environmental factors can lead to changes in camera hardware. In this paper, we propose a real-time stereo calibration solution that runs inside a consumer camera and continuously estimates and corrects for the misalignment between the stereo cameras. Our algorithm works by processing images of natural scenes and does not require the use of special calibration charts. The algorithm first estimates the disparity in the horizontal and vertical directions between corresponding blocks from the stereo images. This initial estimate is then refined with a two-dimensional search using smaller sub-blocks. The displacement data and block coordinates are fed to a modified affine transformation model and outliers are discarded to keep the modeling error low. Finally, the estimated affine parameters are split in half and misalignment correction is applied to each view accordingly. The proposed algorithm significantly reduces the misalignment between stereo frames and enables a more comfortable 3-D viewing experience.
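
    The sketch below illustrates the affine-fit-with-outlier-rejection idea in simplified form; it is not the authors' production algorithm, and the iteration count and residual threshold are assumptions.

    ```python
    # Hedged sketch: fit a simplified affine model to per-block displacement
    # measurements and discard gross outliers, in the spirit of the approach
    # described above (not the authors' algorithm).
    import numpy as np

    def fit_affine(block_xy, disparity_xy, n_iters=3, thresh=2.0):
        """Estimate A (2x2) and t (2,) such that disparity ~= A @ [x, y] + t,
        given block centers block_xy (N, 2) and measured disparities (N, 2)."""
        keep = np.ones(len(block_xy), dtype=bool)
        A, t = np.eye(2), np.zeros(2)
        for _ in range(n_iters):
            X = np.hstack([block_xy[keep], np.ones((keep.sum(), 1))])  # [x, y, 1]
            # Least-squares solve for both disparity components at once.
            params, *_ = np.linalg.lstsq(X, disparity_xy[keep], rcond=None)
            A, t = params[:2].T, params[2]
            residual = disparity_xy - (block_xy @ A.T + t)
            keep = np.linalg.norm(residual, axis=1) < thresh  # outlier rejection
        return A, t

    # As in the abstract, the estimated correction would then be split in half
    # and applied to the left and right views with opposite signs.
    ```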

  20. 3D-wall motion tracking: a new tool for myocardial contractility analysis.

    PubMed

    Perez de Isla, Leopoldo; Montes, Cesar; Monzón, Tania; Herrero, José; Saltijeral, Adriana; Balcones, David Vivas; de Agustin, Alberto; Nuñez-Gil, Ivan; Fernández-Golfín, Covadonga; Almería, Carlos; Rodrigo, José Luis; Marcos-Alberca, Pedro; Macaya, Carlos; Zamorano, Jose

    2010-10-16

    BACKGROUND: Left-ventricular ejection fraction (LVEF), the most frequently used parameter to evaluate left ventricular (LV) systolic function, depends not only on LV contractility, but also on different variables such as pre-load and after-load. Three-dimensional wall motion tracking echocardiography (3D-WMT) is a new technique that provides information regarding several new parameters of LV systolic function. Our aim was to evaluate whether the new 3D-WMT-derived LV systolic function parameters are less dependent on load conditions than LVEF. METHODS: In order to modify the load conditions and thus study the dependence of the different LV systolic function parameters on them, a group of renal failure patients under chronic hemodialysis treatment was selected. The echocardiographic studies, including the 3D-WMT analysis, were performed immediately before and immediately after the hemodialysis session. RESULTS: Thirty-one consecutive patients were enrolled (mean age 65.5 ± 17.0 years; 74.2% men). There was a statistically significant change between predialysis and postdialysis in pre-load and after-load conditions (E/E′ ratio and systolic blood pressure) and in the LV end-diastolic volume and LVEF. Nevertheless, the findings did not show any significant change before and after dialysis in the 3D-WMT-derived parameters. CONCLUSIONS: LV 3D wall motion tracking-derived systolic function parameters are less dependent on load conditions than LVEF. They might measure myocardial contractility in a more direct way than LVEF. Thus, hypothetically, they might be useful to detect early and subtle contractility impairments in a wide range of cardiac patients and could help to optimize the clinical management of such patients.

  1. Real-time 3D MRI of contrast agents in whole living mice.

    PubMed

    Bled, Emilie; Hassen, Wadie Ben; Pourtau, Line; Mellet, Philippe; Lanz, Titus; Schüler, Dorothee; Voisin, Pierre; Franconi, Jean-Michel; Thiaudière, Eric; Miraux, Sylvain

    2011-01-01

    A specific mouse whole body coil and a dedicated gradient system at 4.7 T were coupled with an ultra-fast 3D gradient echo MRI and keyhole reconstruction technique to obtain 3D whole-body dynamic T(1)-weighted or T(2)*-weighted imaging. The technique was used to visualize the real-time distribution of non-targeting T(1) and T(2)* contrast agent (CA) in a glioma-bearing mouse model. T(1) dynamic contrast-enhancement imaging was performed with a fast imaging with steady-state precession sequence [echo time/repetition time (TE/TR), 1.32/3.7 ms] before and after CA injection (Gd-DOTA and BSA-Gd-DOTA) for 21 min. The temporal resolution was 1 image/6.5 s. T(2)* imaging (TE/TR, 4/8 ms) was performed before and after iron-based (small and ultra-small particles of iron oxide) CA injection for 45 min. The temporal resolution was 1 image/14 s. Signal-to-noise ratio curves were determined in various mouse organs. The whole-body coil and gradient systems made it possible to acquire data with sufficient and homogeneous signal-to-noise ratio on the whole animal. The spatial resolution allowed adequate depiction of the major organs, blood vessels and brain glioma. The distribution and the time-course of T(1) and T(2)* contrasts upon contrast agent injection were also assessed. 3D whole-body mouse MRI is feasible at high spatial resolution in movie mode and can be applied successfully to visualize real-time contrast agent distribution. This method should be effective in future preclinical molecular imaging studies. Copyright © 2011 John Wiley & Sons, Ltd.

  2. A brain-computer interface method combined with eye tracking for 3D interaction.

    PubMed

    Lee, Eui Chul; Woo, Jin Cheol; Kim, Jong Hwa; Whang, Mincheol; Park, Kang Ryoung

    2010-07-15

    With the recent increase in the number of three-dimensional (3D) applications, the need for interfaces to these applications has increased. Although the eye tracking method has been widely used as an interaction interface for hand-disabled persons, this approach cannot be used for depth directional navigation. To solve this problem, we propose a new brain computer interface (BCI) method in which the BCI and eye tracking are combined to analyze depth navigation, including selection and two-dimensional (2D) gaze direction, respectively. The proposed method is novel in the following five ways compared to previous works. First, a device to measure both the gaze direction and an electroencephalogram (EEG) pattern is proposed with the sensors needed to measure the EEG attached to a head-mounted eye tracking device. Second, the reliability of the BCI interface is verified by demonstrating that there is no difference between the real and the imaginary movements for the same work in terms of the EEG power spectrum. Third, depth control for the 3D interaction interface is implemented by an imaginary arm reaching movement. Fourth, a selection method is implemented by an imaginary hand grabbing movement. Finally, for the independent operation of gazing and the BCI, a mode selection method is proposed that measures a user's concentration by analyzing the pupil accommodation speed, which is not affected by the operation of gazing and the BCI. According to experimental results, we confirmed the feasibility of the proposed 3D interaction method using eye tracking and a BCI. Copyright 2010 Elsevier B.V. All rights reserved.

  3. Tracking-by-Detection of 3D Human Shapes: from Surfaces to Volumes.

    PubMed

    Huang, Chun-Hao; Allain, Benjamin; Boyer, Edmond; Franco, Jean-Sebastien; Tombari, Federico; Navab, Nassir; Ilic, Slobodan

    2017-08-15

    3D human shape tracking consists in fitting a template model to temporal sequences of visual observations. It usually comprises an association step, which finds correspondences between the model and the input data, and a deformation step, which fits the model to the observations given correspondences. Most current approaches follow the Iterative-Closest-Point (ICP) paradigm, where the association step is carried out by searching for the nearest neighbors. This fails when large deformations occur, and errors in the association tend to propagate over time. In this paper, we propose a discriminative alternative for the association that leverages random forests to infer correspondences in one shot. Regardless of the choice of shape parameterization, being surface or volumetric meshes, we convert 3D shapes to volumetric distance fields and thereby design features to train the forest. We investigate two ways to draw volumetric samples: voxels of regular grids and cells from Centroidal Voronoi Tessellation (CVT). While the former consumes considerable memory and in turn limits us to learning only subject-specific correspondences, the latter yields a much smaller memory footprint by compactly tessellating the interior space of a shape with an optimal discretization. This facilitates the use of larger cross-subject training databases, generalizes to different human subjects and hence results in less overfitting and better detection. The discriminative correspondences are successfully integrated into both surface and volumetric deformation frameworks that recover human shape poses, which we refer to as 'tracking-by-detection of 3D human shapes.' This allows for large deformations and prevents tracking errors from accumulating. When combined with ICP for refinement, it proves to yield better accuracy in registration and more stability when tracking over time. Evaluations on existing datasets demonstrate the benefits with respect to the state-of-the-art.

  4. Simultaneous real-time 3D photoacoustic tomography and EEG for neurovascular coupling study in an animal model of epilepsy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Xiao, Jiaying; Jiang, Huabei

    2014-08-01

    Objective. Neurovascular coupling in epilepsy is poorly understood; its study requires simultaneous monitoring of hemodynamic changes and neural activity in the brain. Approach. Here, for the first time, we present a combined real-time 3D photoacoustic tomography (PAT) and electrophysiology/electroencephalography (EEG) system for the study of neurovascular coupling in epilepsy, whose ability was demonstrated with a pentylenetetrazol (PTZ) induced generalized seizure model in rats. Two groups of experiments were carried out with different wavelengths to detect the changes of oxy-hemoglobin (HbO2) and deoxy-hemoglobin (HbR) signals in the rat brain. We extracted the average PAT signals of the superior sagittal sinus (SSS) and compared them with the EEG signal. Main results. Results showed that the seizure process can be divided into three stages. A ‘dip’ lasting for 1-2 min in the first stage and the following hyperperfusion in the second stage were observed. The HbO2 signal and the HbR signal were generally negatively correlated. The change of blood flow was also estimated. All the acquired results were in accordance with other published results. Significance. Compared to other existing functional neuroimaging tools, the method proposed here enables reliable tracking of hemodynamic signals with both high spatial and high temporal resolution in 3D, so it is more suitable for neurovascular coupling studies of epilepsy.

  5. Realistic 3D Terrain Roaming and Real-Time Flight Simulation

    NASA Astrophysics Data System (ADS)

    Que, Xiang; Liu, Gang; He, Zhenwen; Qi, Guang

    2014-12-01

    This paper presents an integrated method that provides access to the current flight status and dynamic visible scanning topography in order to enhance interactivity during terrain roaming and real-time flight simulation. An algorithm that integrates digital elevation model and digital orthophoto map data is proposed as the basis of our approach to building a realistic 3D terrain scene. A new technique based on render-to-texture and a head-up display is used to generate the navigation pane. In the flight simulation, in order to eliminate flying "jumps", we employ multidimensional linear interpolation to adjust the camera parameters dynamically and smoothly. Meanwhile, based on the principle of scanning laser imaging, we draw pseudo-color figures by scanning the topography in different directions according to the real-time flight status. Simulation results demonstrate that the proposed method is promising for applications and that it improves the visual effect and enhances dynamic interaction during real-time flight.
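
    The camera-smoothing step mentioned above amounts to interpolating a multidimensional camera state between sparse flight samples. The sketch below is a minimal sketch under an assumed parameterization (position plus Euler angles); it is not the authors' implementation, and angle wraparound is ignored for brevity.

    ```python
    # Minimal sketch (assumed camera parameterization, not the authors' code):
    # linearly interpolate multidimensional camera parameters between
    # flight-path samples to avoid visible "jumps" during simulation.
    import numpy as np

    def interpolate_camera(t, times, params):
        """params: (N, D) camera state samples (e.g. x, y, z, heading, pitch,
        roll) at strictly increasing timestamps `times` (N,); t may be an array.
        Note: angle wraparound is not handled here."""
        params = np.asarray(params, dtype=float)
        return np.column_stack([
            np.interp(t, times, params[:, d]) for d in range(params.shape[1])
        ])

    # Example: query a smooth 60 Hz camera trajectory between 1 Hz flight samples.
    times = np.arange(0.0, 10.0, 1.0)
    samples = np.random.rand(10, 6)          # hypothetical camera states
    query = np.arange(0.0, 9.0, 1.0 / 60.0)
    smooth = interpolate_camera(query, times, samples)
    ```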

  6. Coordination of gaze and hand movements for tracking and tracing in 3D.

    PubMed

    Gielen, Constantinus C A M; Dijkstra, Tjeerd M H; Roozen, Irene J; Welten, Joke

    2009-03-01

    In this study we have investigated movements in three-dimensional space. Since most studies have investigated planar movements (like ellipses, cloverleaf shapes and "figure eights"), we have compared two generalizations of the two-thirds power law to three dimensions. In particular we have tested whether the two-thirds power law could be best described by tangential velocity and curvature in a plane (compatible with the idea of planar segmentation) or whether tangential velocity and curvature should be calculated in three dimensions. We defined total curvature in three dimensions as the square root of the sum of curvature squared and torsion squared. The results demonstrate that most of the variance is explained by tangential velocity and total curvature. This indicates that all three orthogonal components of movements in 3D are equally important and that movements are truly 3D and do not reflect a concatenation of 2D planar movement segments. In addition, we have studied the coordination of eye and hand movements in 3D by measuring binocular eye movements while subjects move the finger along a curved path. The results show that the directional component and finger position almost superimpose when subjects track a target moving in 3D. However, the vergence component of gaze leads finger position by about 250 msec. For drawing (tracing) the path of a visible 3D shape, the directional component of gaze leads finger position by about 225 msec, and the vergence component leads finger position by about 400 msec. These results are compatible with the idea that gaze leads hand position during drawing movement to assist prediction and planning of hand position in 3D space.
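
    The quantities compared in the abstract can be written out explicitly: curvature κ = |r′ × r″| / |r′|³, torsion τ = (r′ × r″)·r‴ / |r′ × r″|², and total curvature K = √(κ² + τ²). The sketch below computes them from a sampled 3D trajectory with finite differences; it is purely illustrative of those definitions, not the authors' analysis pipeline.

    ```python
    # Worked sketch of the quantities in the abstract: curvature, torsion and
    # "total curvature" sqrt(kappa^2 + tau^2) of a sampled 3D trajectory, plus
    # the log-log slope one would fit to test a power law of the form v ~ K^(-1/3).
    import numpy as np

    def curvature_torsion(r, dt):
        """r: (N, 3) positions sampled at interval dt."""
        v = np.gradient(r, dt, axis=0)          # r'
        a = np.gradient(v, dt, axis=0)          # r''
        j = np.gradient(a, dt, axis=0)          # r'''
        cross = np.cross(v, a)
        speed = np.linalg.norm(v, axis=1)
        kappa = np.linalg.norm(cross, axis=1) / np.maximum(speed ** 3, 1e-12)
        tau = np.einsum("ij,ij->i", cross, j) / np.maximum(
            np.linalg.norm(cross, axis=1) ** 2, 1e-12)
        total = np.sqrt(kappa ** 2 + tau ** 2)
        return speed, kappa, tau, total

    # A power-law exponent can then be estimated as the slope of
    # log(speed) versus log(total curvature).
    ```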

  7. Analysis of thoracic aorta hemodynamics using 3D particle tracking velocimetry and computational fluid dynamics.

    PubMed

    Gallo, Diego; Gülan, Utku; Di Stefano, Antonietta; Ponzini, Raffaele; Lüthi, Beat; Holzner, Markus; Morbiducci, Umberto

    2014-09-22

    Parallel to the massive use of image-based computational hemodynamics to study the complex flow that establishes in the human aorta, the need for suitable experimental techniques and ad hoc cases for the validation and benchmarking of numerical codes has grown steadily. Here we present a study in which the 3D pulsatile flow in an anatomically realistic phantom of the human ascending aorta is investigated both experimentally and computationally. The experimental study uses 3D particle tracking velocimetry (PTV) to characterize the flow field in vitro, while a finite volume method is applied to numerically solve the governing equations of motion in the same domain, under the same conditions. Our findings show that there is an excellent agreement between computed and measured flow fields during the forward flow phase, while the agreement is poorer during the reverse flow phase. In conclusion, we demonstrate that 3D PTV is very suitable for a detailed study of complex unsteady flows such as those in the aorta and for validating computational models of aortic hemodynamics. In a future step, it will be possible to take advantage of the ability of 3D PTV to evaluate velocity fluctuations and, for this reason, to gain further knowledge on the process of transition to turbulence occurring in the thoracic aorta.

  8. 3-D Flow Field Diagnostics and Validation Studies using Stereoscopic Tracking Velocimetry

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung Stephen; Ramachandran, Narayanan; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    The measurement of 3-D three-component velocity fields is of great importance in both ground and space experiments for understanding materials processing and fluid physics. Here, we present the results of an investigation of stereoscopic tracking velocimetry (STV) for measuring 3-D velocity fields. The effort includes diagnostic technology development, experimental velocity measurement, and comparison with analytical and numerical computation. The advantages of STV stem from its system simplicity, which allows compact hardware to be built, and its software efficiency, which allows continual near-real-time process monitoring. It also offers illumination flexibility for observing volumetric flow fields from arbitrary directions. STV is based on stereoscopic CCD observations of particles seeded in a flow. Neural networks are used for data analysis. The developed diagnostic tool is tested with a simple directional solidification apparatus using succinonitrile. The 3-D velocity field in the liquid phase is measured and compared with results from detailed numerical computations. Our theoretical, numerical, and experimental effort has shown STV to be a viable candidate for reliably quantifying the 3-D flow field in materials processing and fluids experiments.

  9. A real-time cardiac surface tracking system using Subspace Clustering.

    PubMed

    Singh, Vimal; Tewfik, Ahmed H; Gowreesunker, B

    2010-01-01

    Catheter-based radio frequency ablation of atrial fibrillation requires real-time 3D tracking of cardiac surfaces with sub-millimeter accuracy. To the best of our knowledge, there are no commercial or non-commercial systems capable of doing so. In this paper, a system for high-accuracy 3D tracking of cardiac surfaces in real time is proposed and results from its application to a real patient dataset are presented. The proposed system uses a subspace clustering algorithm to identify the potential deformation subspaces for cardiac surfaces during the training phase from a pre-operative MRI scan based training set. In the tracking phase, using low-density outer cardiac surface samples, the active deformation subspace is identified and the complete inner and outer cardiac surfaces are reconstructed in real time under a least squares formulation.
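
    The least-squares reconstruction step can be illustrated as follows, assuming a deformation basis has already been learned offline; the sketch is not the clinical system, and the subspace-clustering training stage and subspace-selection logic are not shown.

    ```python
    # Minimal sketch: given a mean shape and a deformation basis U learned
    # offline (e.g. from training surfaces), recover subspace coefficients from
    # a few sampled surface points by least squares and reconstruct the full
    # surface.  Names and shapes are illustrative assumptions.
    import numpy as np

    def reconstruct_surface(mean_shape, basis, sample_idx, sample_points):
        """mean_shape: (3N,) stacked xyz; basis: (3N, K); sample_idx: indices of
        the M sampled vertices; sample_points: (M, 3) measured positions."""
        rows = np.concatenate([[3 * i, 3 * i + 1, 3 * i + 2] for i in sample_idx])
        A = basis[rows]                               # (3M, K) restricted basis
        b = sample_points.ravel() - mean_shape[rows]  # measured deviation
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        full = mean_shape + basis @ coeffs            # complete surface estimate
        return full.reshape(-1, 3), coeffs
    ```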

  10. Methods for using 3-D ultrasound speckle tracking in biaxial mechanical testing of biological tissue samples.

    PubMed

    Yap, Choon Hwai; Park, Dae Woo; Dutta, Debaditya; Simon, Marc; Kim, Kang

    2015-04-01

    Being multilayered and anisotropic, biological tissues such as cardiac and arterial walls are structurally complex, making the full assessment and understanding of their mechanical behavior challenging. Current standard mechanical testing uses surface markers to track tissue deformations and does not provide deformation data below the surface. In the study described here, we found that combining mechanical testing with 3-D ultrasound speckle tracking could overcome this limitation. Rat myocardium was tested with a biaxial tester and was concurrently scanned with high-frequency ultrasound in three dimensions. The strain energy function was computed from stresses and strains using an iterative non-linear curve-fitting algorithm. Because the strain energy function consists of terms for the base matrix and for embedded fibers, spatially varying fiber orientation was also computed by curve fitting. Using finite-element simulations, we first validated the accuracy of the non-linear curve-fitting algorithm. Next, we compared experimentally measured rat myocardium strain energy function values with those in the literature and found a matching order of magnitude. Finally, we retained samples after the experiments for fiber orientation quantification using histology and found that the results satisfactorily matched those computed in the experiments. We conclude that 3-D ultrasound speckle tracking can be a useful addition to traditional mechanical testing of biological tissues and may provide the benefit of enabling fiber orientation computation.
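
    The iterative non-linear fitting step described above can be sketched with a generic constitutive form; note that the paper's strain energy function has separate matrix and fiber terms, whereas the Fung-type exponential used below is only a stand-in to show the curve-fitting mechanics, with hypothetical parameter names.

    ```python
    # Hedged illustration (generic Fung-type form, not necessarily the model
    # used in the study): fit strain-energy parameters to biaxial stress/strain
    # data with an iterative non-linear least-squares solver.
    import numpy as np
    from scipy.optimize import least_squares

    def fung_stresses(params, E11, E22):
        c, a1, a2, a3 = params
        Q = a1 * E11**2 + a2 * E22**2 + 2.0 * a3 * E11 * E22
        S11 = c * np.exp(Q) * (a1 * E11 + a3 * E22)   # dW/dE11
        S22 = c * np.exp(Q) * (a2 * E22 + a3 * E11)   # dW/dE22
        return S11, S22

    def residuals(params, E11, E22, S11_meas, S22_meas):
        S11, S22 = fung_stresses(params, E11, E22)
        return np.concatenate([S11 - S11_meas, S22 - S22_meas])

    def fit_strain_energy(E11, E22, S11_meas, S22_meas):
        x0 = np.array([1.0, 1.0, 1.0, 0.1])           # rough initial guess
        sol = least_squares(residuals, x0, args=(E11, E22, S11_meas, S22_meas))
        return sol.x                                   # (c, a1, a2, a3)
    ```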

  11. METHODS FOR USING 3-D ULTRASOUND SPECKLE TRACKING IN BIAXIAL MECHANICAL TESTING OF BIOLOGICAL TISSUE SAMPLES

    PubMed Central

    Yap, Choon Hwai; Park, Dae Woo; Dutta, Debaditya; Simon, Marc; Kim, Kang

    2014-01-01

    Being multilayered and anisotropic, biological tissues such as cardiac and arterial walls are structurally complex, making full assessment and understanding of their mechanical behavior challenging. Current standard mechanical testing uses surface markers to track tissue deformations and does not provide deformation data below the surface. In the study described here, we found that combining mechanical testing with 3-D ultrasound speckle tracking could overcome this limitation. Rat myocardium was tested with a biaxial tester and was concurrently scanned with high-frequency ultrasound in three dimensions. The strain energy function was computed from stresses and strains using an iterative non-linear curve-fitting algorithm. Because the strain energy function consists of terms for the base matrix and for embedded fibers, spatially varying fiber orientation was also computed by curve fitting. Using finite-element simulations, we first validated the accuracy of the non-linear curve-fitting algorithm. Next, we compared experimentally measured rat myocardium strain energy function values with those in the literature and found a matching order of magnitude. Finally, we retained samples after the experiments for fiber orientation quantification using histology and found that the results satisfactorily matched those computed in the experiments. We conclude that 3-D ultrasound speckle tracking can be a useful addition to traditional mechanical testing of biological tissues and may provide the benefit of enabling fiber orientation computation. PMID:25616585

  12. Detection, 3-D positioning, and sizing of small pore defects using digital radiography and tracking

    NASA Astrophysics Data System (ADS)

    Lindgren, Erik

    2014-12-01

    This article presents an algorithm that handles the detection, positioning, and sizing of submillimeter-sized pores in welds using radiographic inspection and tracking. The possibility to detect, position, and size pores which have a low contrast-to-noise ratio increases the value of the nondestructive evaluation of welds by facilitating fatigue life predictions with lower uncertainty. In this article, a multiple hypothesis tracker with an extended Kalman filter is used to track an unknown number of pore indications in a sequence of radiographs as an object is rotated. Each pore is not required to be detected in all radiographs. In addition, in the tracking step, three-dimensional (3-D) positions of pore defects are calculated. To optimize, set up, and pre-evaluate the algorithm, the article explores a design-of-experiments approach in combination with synthetic radiographs of titanium laser welds containing pore defects. The pre-evaluation on synthetic radiographs at industrially reasonable contrast-to-noise ratios indicates less than 1% false detection rates at high detection rates and less than 0.1 mm positioning errors for more than 90% of the pores. A comparison between experimental results of the presented algorithm and a computerized tomography reference measurement shows qualitatively good agreement in the 3-D positions of approximately 0.1-mm diameter pores in 5-mm-thick Ti-6242.
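
    The article combines a multiple hypothesis tracker with an extended Kalman filter; the sketch below shows only a far simpler component, a linear constant-velocity Kalman filter for a single pore indication in image coordinates, to illustrate the predict/update cycle. Noise settings are assumptions.

    ```python
    # Minimal single-track sketch (the article uses an MHT with an extended
    # Kalman filter; this only illustrates the predict/update cycle).
    import numpy as np

    class ConstantVelocityKF:
        def __init__(self, z0, dt=1.0, q=1e-3, r=0.25):
            # State: [u, v, du, dv]; measurement: image position [u, v].
            self.x = np.array([z0[0], z0[1], 0.0, 0.0])
            self.P = np.eye(4)
            self.F = np.eye(4)
            self.F[0, 2] = self.F[1, 3] = dt
            self.H = np.zeros((2, 4))
            self.H[0, 0] = self.H[1, 1] = 1.0
            self.Q = q * np.eye(4)
            self.R = r * np.eye(2)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.H @ self.x                     # predicted measurement

        def update(self, z):
            y = z - self.H @ self.x                    # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
    ```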

  13. The CT-PPS tracking system with 3D pixel detectors

    NASA Astrophysics Data System (ADS)

    Ravera, F.

    2016-11-01

    The CMS-TOTEM Precision Proton Spectrometer (CT-PPS) detector will be installed in Roman pots (RP) positioned on either side of CMS, at about 210 m from the interaction point. This detector will measure leading protons, allowing detailed studies of diffractive physics and central exclusive production in standard LHC running conditions. An essential component of the CT-PPS apparatus is the tracking system, which consists of two detector stations per arm equipped with six 3D silicon pixel-sensor modules, each read out by six PSI46dig chips. The front-end electronics has been designed to fulfill the mechanical constraints of the RP and to be compatible as much as possible with the readout chain of the CMS pixel detector. The tracking system is currently under construction and will be installed by the end of 2016. In this contribution, the final design and the expected performance of the CT-PPS tracking system are presented. A summary of the studies performed, before and after irradiation, on the 3D detectors produced for CT-PPS is given.

  14. Experimental analysis of mechanical response of stabilized occipitocervical junction by 3D mark tracking technique

    NASA Astrophysics Data System (ADS)

    Germaneau, A.; Doumalin, P.; Dupré, J. C.; Brèque, C.; Brémand, F.; D'Houtaud, S.; Rigoard, P.

    2010-06-01

    This study presents a biomechanical comparison of several stabilization solutions for the occipitocervical junction. Four kinds of occipito-cervical fixation are analysed in this work: lateral plates fixed by two kinds of screws, lateral plates fixed by hooks, and a median plate. To study the mechanical rigidity of each one, tests were performed on human skulls by applying loads and studying the mechanical response of the fixations and the bone. For this experimental analysis, a specific setup was developed to impose a load corresponding to the flexion-extension physiological movements. The 3D mark tracking technique is employed to measure 3D displacement fields on the bone and on the fixations. Observations of the displacement evolution on the bone for each fixation show the different rigidities provided by each solution.

  15. Real-time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction.

    PubMed

    Povšič, K; Jezeršek, M; Možina, J

    2015-07-01

    Real-time 3D visualization of the breathing displacements can be a useful diagnostic tool in order to immediately observe the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of the attached passive markers, the torso movement and deformation is compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid model. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during the torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during the breathing exercise on an indoor bicycle or a treadmill.
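
    The rigid part of the motion compensation (estimating a rotation and translation from tracked passive markers) is a standard problem; the sketch below shows a Kabsch/SVD solution under that assumption. It is not the authors' implementation, and the non-rigid (torsion and bending) models from the paper are not reproduced.

    ```python
    # Minimal sketch of the rigid part only: estimate the rotation R and
    # translation t that best map marker positions from a reference frame to
    # the current frame (Kabsch/SVD).
    import numpy as np

    def rigid_transform(ref_markers, cur_markers):
        """Both inputs are (N, 3) arrays of corresponding marker positions."""
        ref_c = ref_markers - ref_markers.mean(axis=0)
        cur_c = cur_markers - cur_markers.mean(axis=0)
        U, _, Vt = np.linalg.svd(ref_c.T @ cur_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        t = cur_markers.mean(axis=0) - R @ ref_markers.mean(axis=0)
        return R, t

    # The breathing-only displacement field is then the residual left after
    # subtracting the rigid prediction R @ p + t from each measured surface point.
    ```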

  16. Management of three-dimensional intrafraction motion through real-time DMLC tracking.

    PubMed

    Sawant, Amit; Venkat, Raghu; Srivastava, Vikram; Carlson, David; Povzner, Sergey; Cattell, Herb; Keall, Paul

    2008-05-01

    Tumor tracking using a dynamic multileaf collimator (DMLC) represents a promising approach for intrafraction motion management in thoracic and abdominal cancer radiotherapy. In this work, we develop, empirically demonstrate, and characterize a novel 3D tracking algorithm for real-time, conformal, intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)-based radiation delivery to targets moving in three dimensions. The algorithm obtains real-time information of target location from an independent position monitoring system and dynamically calculates MLC leaf positions to account for changes in target position. Initial studies were performed to evaluate the geometric accuracy of DMLC tracking of 3D target motion. In addition, dosimetric studies were performed on a clinical linac to evaluate the impact of real-time DMLC tracking for conformal, step-and-shoot (S-IMRT), dynamic (D-IMRT), and VMAT deliveries to a moving target. The efficiency of conformal and IMRT delivery in the presence of tracking was determined. Results show that submillimeter geometric accuracy in all three dimensions is achievable with DMLC tracking. Significant dosimetric improvements were observed in the presence of tracking for conformal and IMRT deliveries to moving targets. A gamma index evaluation with a 3%-3 mm criterion showed that deliveries without DMLC tracking exhibit between 1.7 (S-IMRT) and 4.8 (D-IMRT) times more dose points that fail the evaluation compared to corresponding deliveries with tracking. The efficiency of IMRT delivery, as measured in the lab, was observed to be significantly lower in case of tracking target motion perpendicular to MLC leaf travel compared to motion parallel to leaf travel. Nevertheless, these early results indicate that accurate, real-time DMLC tracking of 3D tumor motion is feasible and can potentially result in significant geometric and dosimetric advantages leading to more effective management of intrafraction motion.
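
    As a schematic illustration only of the leaf-repositioning idea (much simplified relative to the published 3D algorithm), the sketch below shifts a planned MLC aperture by the in-plane target displacement; leaf-velocity limits, beam-axis motion, and delivery synchronization handled by the real system are ignored, and all names are hypothetical.

    ```python
    # Schematic sketch only: shift planned MLC leaf-tip positions by the target
    # displacement projected into the beam's-eye view.  dx is along leaf travel,
    # dy across leaves; leaf_centers_y must be strictly increasing.
    import numpy as np

    def track_leaves(left_bank, right_bank, leaf_centers_y, dx, dy):
        """left_bank/right_bank: planned leaf-tip positions (mm) per leaf pair."""
        shifted_left = left_bank + dx
        shifted_right = right_bank + dx
        # Across-leaf motion: re-sample the shifted aperture onto the fixed
        # leaf grid so that new_aperture(y) = old_aperture(y - dy).
        new_left = np.interp(leaf_centers_y, leaf_centers_y + dy, shifted_left)
        new_right = np.interp(leaf_centers_y, leaf_centers_y + dy, shifted_right)
        return new_left, new_right
    ```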

  17. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Symonds-Tayler, J. Richard N.; Evans, Philip M.

    2011-11-01

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogeneous tissue for motion in the azimuthal and elevational directions. For object motion prograde and retrograde to the sweep direction of the transducer, the spatial sampling frequency increases or decreases with object speed, respectively. We examined the effect of the direction of object motion relative to the sweep direction of the transducer on tracking accuracy. We imaged a homogeneous ultrasound speckle phantom whilst moving the probe with linear motion at a speed of 0-35 mm s-1. Tracking accuracy and precision were investigated as a function of speed, depth and direction of motion for fixed displacements of 2 and 4 mm. For the azimuthal direction, accuracy was better than 0.1 and 0.15 mm for displacements of 2 and 4 mm, respectively. For a 2 mm displacement in the elevational direction, accuracy was better than 0.5 mm for most speeds. For 4 mm elevational displacement with retrograde motion, accuracy and precision reduced with speed and tracking failure was observed at speeds of greater than 14 mm s-1. Tracking failure was attributed to speckle de-correlation as a result of decreasing spatial sampling frequency with increasing speed of retrograde motion. For prograde motion, tracking failure was not observed. For inter-volume displacements greater than 2 mm, only prograde motion should be tracked, which will decrease temporal resolution by a factor of 2. Tracking errors of the order of 0.5 mm for prograde motion in the elevational direction indicate that using the swept probe technology speckle tracking accuracy is currently too poor to track homogeneous tissue over

  18. Swimming Behavior of Pseudomonas aeruginosa Studied by Holographic 3D Tracking

    PubMed Central

    Vater, Svenja M.; Weiße, Sebastian; Maleschlijski, Stojan; Lotz, Carmen; Koschitzki, Florian; Schwartz, Thomas; Obst, Ursula; Rosenhahn, Axel

    2014-01-01

    Holographic 3D tracking was applied to record and analyze the swimming behavior of Pseudomonas aeruginosa. The obtained trajectories allow the free swimming behavior of the bacterium to be analyzed both qualitatively and quantitatively, and it can be classified into five distinct swimming patterns. In addition to the previously reported smooth and oscillatory swimming motions, three additional patterns are distinguished. We show that Pseudomonas aeruginosa performs helical movements which were so far only described for larger microorganisms. The occurrence of the swimming patterns was determined and transitions between the patterns were analyzed. PMID:24498187

  19. IPS - a System for Real-Time Navigation and 3d Modeling

    NASA Astrophysics Data System (ADS)

    Grießbach, D.; Baumbach, D.; Börner, A.; Buder, M.; Ernst, I.; Funk, E.; Wohlfeil, J.; Zuev, S.

    2012-07-01

    Reliable navigation and 3D modeling are necessary requirements for any autonomous system in real world scenarios. The German Aerospace Center (DLR) has developed a system providing precise information about the local position and orientation of a mobile platform as well as three-dimensional information about its environment in real time. This system, called the Integral Positioning System (IPS), can be applied in indoor and outdoor environments. To achieve high precision, reliability, integrity and availability, a multi-sensor approach was chosen. The important role of sensor data synchronization, system calibration and spatial referencing is emphasized because the data from several sensors have to be fused using a Kalman filter. A hardware operating system (HW-OS) is presented that facilitates the low-level integration of different interfaces. The benefit of this approach is an increased precision of synchronization at the expense of additional engineering costs. It will be shown that the additional effort is leveraged by the new design concept, since the HW-OS methodology allows a proven, flexible and fast design process, a high re-usability of common components and consequently a higher reliability within the low-level sensor fusion. Another main focus of the paper is on the IPS software. DLR developed, implemented and tested a flexible and extensible software concept for data grabbing, efficient data handling, and data preprocessing (e.g. image rectification), which is essential for thematic data processing. Standard outputs of IPS are the trajectory of the moving platform and a high-density 3D point cloud of the current environment. This information is provided in real time. Based on these results, information processing on more abstract levels can be executed.

  20. Longitudinal Measurement of Extracellular Matrix Rigidity in 3D Tumor Models Using Particle-tracking Microrheology

    PubMed Central

    El-Hamidi, Hamid; Celli, Jonathan P.

    2014-01-01

    The mechanical microenvironment has been shown to act as a crucial regulator of tumor growth behavior and signaling, which is itself remodeled and modified as part of a set of complex, two-way mechanosensitive interactions. While the development of biologically-relevant 3D tumor models have facilitated mechanistic studies on the impact of matrix rheology on tumor growth, the inverse problem of mapping changes in the mechanical environment induced by tumors remains challenging. Here, we describe the implementation of particle-tracking microrheology (PTM) in conjunction with 3D models of pancreatic cancer as part of a robust and viable approach for longitudinally monitoring physical changes in the tumor microenvironment, in situ. The methodology described here integrates a system of preparing in vitro 3D models embedded in a model extracellular matrix (ECM) scaffold of Type I collagen with fluorescently labeled probes uniformly distributed for position- and time-dependent microrheology measurements throughout the specimen. In vitro tumors are plated and probed in parallel conditions using multiwell imaging plates. Drawing on established methods, videos of tracer probe movements are transformed via the Generalized Stokes Einstein Relation (GSER) to report the complex frequency-dependent viscoelastic shear modulus, G*(ω). Because this approach is imaging-based, mechanical characterization is also mapped onto large transmitted-light spatial fields to simultaneously report qualitative changes in 3D tumor size and phenotype. Representative results showing contrasting mechanical response in sub-regions associated with localized invasion-induced matrix degradation as well as system calibration, validation data are presented. Undesirable outcomes from common experimental errors and troubleshooting of these issues are also presented. The 96-well 3D culture plating format implemented in this protocol is conducive to correlation of microrheology measurements with therapeutic
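
    The analysis chain described above (tracer trajectories → mean-squared displacement → GSER → G*(ω)) can be sketched as follows. This uses Mason's power-law approximation of the GSER rather than the full transform, dimensionality and prefactor conventions vary in the literature, and the published protocol's complete analysis is not reproduced, so treat this strictly as an illustration.

    ```python
    # Hedged sketch: ensemble MSD from tracer trajectories, then Mason's
    # power-law approximation of the Generalized Stokes-Einstein Relation to
    # estimate |G*(omega)|.  `dim` is the dimensionality of the tracked MSD.
    import numpy as np
    from scipy.special import gamma

    KB = 1.380649e-23  # Boltzmann constant, J/K

    def ensemble_msd(trajectories, max_lag, dt):
        """trajectories: list of (T_i, dim) position arrays in metres, sampled
        at interval dt; returns lag times (s) and the ensemble MSD (m^2)."""
        lags = np.arange(1, max_lag + 1)
        msd = np.zeros(max_lag)
        counts = np.zeros(max_lag)
        for traj in trajectories:
            for k in lags:
                disp = traj[k:] - traj[:-k]
                msd[k - 1] += np.sum(disp ** 2)
                counts[k - 1] += len(disp)
        return lags * dt, msd / np.maximum(counts, 1)

    def gser_modulus(lag_times, msd, radius, temperature=298.0, dim=2):
        """Mason's approximation: |G*(1/t)| from the local log-slope of the MSD."""
        alpha = np.gradient(np.log(msd), np.log(lag_times))  # local power-law slope
        omega = 1.0 / lag_times
        g_mag = dim * KB * temperature / (3.0 * np.pi * radius * msd *
                                          gamma(1.0 + alpha))
        return omega, g_mag, alpha
    ```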

  1. A portable instrument for 3-D dynamic robot measurements using triangulation and laser tracking

    SciTech Connect

    Mayer, J.R.R. . Mechanical Engineering Dept.); Parker, G.A. . Dept. of Mechanical Engineering)

    1994-08-01

    The paper describes the development and validation of a 3-D measurement instrument capable of determining the static and dynamic performance of industrial robots to ISO standards. Using two laser beams to track an optical target attached to the robot end-effector, the target position coordinates may be estimated, relative to the instrument coordinate frame, to a high accuracy using triangulation principles. The effect of variations in the instrument geometry from the nominal model is evaluated through a kinematic model of the tracking head. Significant improvements of the measurement accuracy are then obtained by a simple adjustment of the main parameters. Extensive experimental test results are included to demonstrate the instrument performance. Finally typical static and dynamic measurement results for an industrial robot are presented to illustrate the effectiveness and usefulness of the instrument.
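
    The triangulation principle mentioned above can be illustrated with the standard closest-point-between-two-rays construction; this is not the instrument's calibrated kinematic model, only the basic geometric step.

    ```python
    # Illustrative triangulation step: the target position is estimated as the
    # midpoint of the common perpendicular of the two tracked laser beams, each
    # given by an origin and a direction.
    import numpy as np

    def triangulate(p1, d1, p2, d2):
        """p1, p2: beam origins (3,); d1, d2: beam direction vectors (3,)."""
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w0 = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b                 # ~0 when the beams are parallel
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        closest1 = p1 + s * d1
        closest2 = p2 + t * d2
        # Return the midpoint and the miss distance as a crude quality check.
        return 0.5 * (closest1 + closest2), np.linalg.norm(closest1 - closest2)
    ```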

  2. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-07-01

    Recently, tremendous improvements have been achieved in the precision of localization of single fluorescent molecules, allowing localization and tracking of biomolecules at the nm level. Since the behaviour of proteins and biological molecules is tightly influenced by the cell's environment, a growing number of microscopy techniques are moving from in vitro to live cell experiments. Looking at both diffusion and active transportation processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images and high temporal resolution (of the order of milliseconds). To satisfy these requirements we developed an automated routine that allows 3D tracking of single fluorescent molecules in living cells with nanometer accuracy, by exploiting the properties of the point-spread function of out-of-focus quantum dots bound to the protein of interest.

  3. Quantifying the 3D Odorant Concentration Field Used by Actively Tracking Blue Crabs

    NASA Astrophysics Data System (ADS)

    Webster, D. R.; Dickman, B. D.; Jackson, J. L.; Weissburg, M. J.

    2007-11-01

    Blue crabs and other aquatic organisms locate food and mates by tracking turbulent odorant plumes. The odorant concentration fluctuates unpredictably due to turbulent transport, and many characteristics of the fluctuation pattern have been hypothesized as useful cues for orienting to the odorant source. To make a direct linkage between tracking behavior and the odorant concentration signal, we developed a measurement system based on the laser-induced fluorescence technique to quantify the instantaneous 3D concentration field surrounding actively tracking blue crabs. The data suggest a correlation between upstream walking speed and the concentration of the odorant signal arriving at the antennule chemosensors, which are located near the mouth region. More specifically, we note an increase in upstream walking speed when high-concentration bursts arrive at the antennule location. We also test hypotheses regarding the ability of blue crabs to steer relative to the plume centerline based on the signal contrast between the chemosensors located on their leg appendages. These chemosensors are located much closer to the substrate than the antennules and are separated by the width of the blue crab. In this case, it appears that blue crabs use the bilateral signal comparison to track along the edge of the plume.

  4. Adaptive Kalman snake for semi-autonomous 3D vessel tracking.

    PubMed

    Lee, Sang-Hoon; Lee, Sanghoon

    2015-10-01

    In this paper, we propose a robust semi-autonomous algorithm for 3D vessel segmentation and tracking based on an active contour model and a Kalman filter. For each computed tomography angiography (CTA) slice, we use the active contour model to segment the vessel boundary and the Kalman filter to track position and shape variations of the vessel boundary between slices. For successful segmentation via active contour, we select an adequate number of initial points from the contour of the first slice. The points are set manually by user input for the first slice. For the remaining slices, the initial contour position is estimated autonomously based on segmentation results of the previous slice. To obtain refined segmentation results, an adaptive control spacing algorithm is introduced into the active contour model. Moreover, a block search-based initial contour estimation procedure is proposed to ensure that the initial contour of each slice can be near the vessel boundary. Experiments were performed on synthetic and real chest CTA images. Compared with the well-known Chan-Vese (CV) model, the proposed algorithm exhibited better performance in segmentation and tracking. In particular, receiver operating characteristic analysis on the synthetic and real CTA images demonstrated the time efficiency and tracking robustness of the proposed model. In terms of computational time redundancy, processing time can be effectively reduced by approximately 20%.

  5. 3D Fluorescent and Reflective Imaging of Whole Stardust Tracks in Aerogel

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2011-11-07

    The NASA Stardust mission returned to earth in 2006 with the cometary collector having captured over 1,000 particles in an aerogel medium at a relative velocity of 6.1 km/s. Particles captured in aerogel were heated, disaggregated and dispersed along 'tracks' or cavities in aerogel, singular tracks representing a history of one capture event. It has been our focus to chemically and morphologically characterize whole tracks in 3-dimensions, utilizing solely non-destructive methods. To this end, we have used a variety of methods: 3D Laser Scanning Confocal Microscopy (LSCM), synchrotron X-ray fluorescence (SXRF), and synchrotron X-ray diffraction (SXRD). In the past months we have developed two new techniques to aid in data collection. (1) We have received a new confocal microscope which has enabled autofluorescent and spectral imaging of aerogel samples. (2) We have developed a stereo-SXRF technique to chemically identify large grains in SXRF maps in 3-space. The addition of both of these methods to our analytic abilities provides a greater understanding of the mechanisms and results of track formation.

  6. Designing a high accuracy 3D auto stereoscopic eye tracking display, using a common LCD monitor

    NASA Astrophysics Data System (ADS)

    Taherkhani, Reza; Kia, Mohammad

    2012-09-01

    This paper describes the design and construction of a low-cost, practical stereoscopic display that does not require the viewer to wear special glasses and that uses eye tracking to give a large degree of freedom to the viewer's (or viewers') movement while displaying the minimum amount of information. The parallax barrier technique is employed to turn an LCD into an auto-stereoscopic display. The stereo image pair is shown on an ordinary liquid crystal display simultaneously, but in different columns of pixels. Controlling the display at the level of red-green-blue subpixels increases the accuracy of the light-projection direction to less than 2 degrees without losing too much of the LCD's resolution. An eye-tracking system determines the correct angle at which to project the images toward the viewer's pupils, and an image-processing system places the 3D image data in the correct R-G-B subpixels. A light-direction control accuracy of 1.6 degrees was achieved in practice. The 3D monitor is built simply by applying simple optical materials to an ordinary LCD with normal resolution.

  7. 3D tracking the Brownian motion of colloidal particles using digital holographic microscopy and joint reconstruction.

    PubMed

    Verrier, Nicolas; Fournier, Corinne; Fournel, Thierry

    2015-06-01

    In-line digital holography is a valuable tool for sizing, locating, and tracking micro- or nano-objects in a volume. When a parametric imaging model is available, inverse problem approaches provide a straightforward estimate of the object parameters by fitting data with the model, thereby allowing accurate reconstruction. As recently proposed and demonstrated, combining pixel super-resolution techniques with inverse problem approaches improves the estimation of particle size and 3D position. Here, we demonstrate the accurate tracking of colloidal particles in Brownian motion. Particle size and 3D position are jointly optimized from video holograms acquired with a digital holographic microscopy setup based on a low-end microscope objective (×20, NA 0.5). Exploiting information redundancy makes it possible to characterize particles with a standard deviation of 15 nm in size and a theoretical resolution of 2×2×5 nm³ for position under an additive white Gaussian noise assumption.

  8. 3D Microfluidic model for evaluating immunotherapy efficacy by tracking dendritic cell behaviour toward tumor cells.

    PubMed

    Parlato, Stefania; De Ninno, Adele; Molfetta, Rosa; Toschi, Elena; Salerno, Debora; Mencattini, Arianna; Romagnoli, Giulia; Fragale, Alessandra; Roccazzello, Lorenzo; Buoncervello, Maria; Canini, Irene; Bentivegna, Enrico; Falchi, Mario; Bertani, Francesca Romana; Gerardino, Annamaria; Martinelli, Eugenio; Natale, Corrado; Paolini, Rossella; Businaro, Luca; Gabriele, Lucia

    2017-04-24

    Immunotherapy efficacy relies on the crosstalk within the tumor microenvironment between cancer and dendritic cells (DCs) resulting in the induction of a potent and effective antitumor response. DCs have the specific role of recognizing cancer cells, taking up tumor antigens (Ags) and then migrating to lymph nodes for Ag (cross)-presentation to naïve T cells. Interferon-α-conditioned DCs (IFN-DCs) exhibit marked phagocytic activity and the special ability of inducing Ag-specific T-cell response. Here, we have developed a novel microfluidic platform recreating tightly interconnected cancer and immune systems with specific 3D environmental properties, for tracking human DC behaviour toward tumor cells. By combining our microfluidic platform with advanced microscopy and a revised cell tracking analysis algorithm, it was possible to evaluate the guided efficient motion of IFN-DCs toward drug-treated cancer cells and the succeeding phagocytosis events. Overall, this platform allowed the dissection of IFN-DC-cancer cell interactions within 3D tumor spaces, with the discovery of major underlying factors such as CXCR4 involvement and underscored its potential as an innovative tool to assess the efficacy of immunotherapeutic approaches.

  9. Looking Beyond the Simple Scenarios: Combining Learners and Optimizers in 3D Temporal Tracking.

    PubMed

    Tan, David Joseph; Navab, Nassir; Tombari, Federico

    2017-11-01

    3D object temporal trackers estimate the 3D rotation and 3D translation of a rigid object by propagating the transformation from one frame to the next. To confront this task, algorithms either learn the transformation between two consecutive frames or optimize an energy function to align the object to the scene. The motivation behind our approach stems from a consideration on the nature of learners and optimizers. Throughout the evaluation of different types of objects and working conditions, we observe their complementary nature - on one hand, learners are more robust when undergoing challenging scenarios, while optimizers are prone to tracking failures due to the entrapment at local minima; on the other, optimizers can converge to a better accuracy and minimize jitter. Therefore, we propose to bridge the gap between learners and optimizers to attain a robust and accurate RGB-D temporal tracker that runs at approximately 2 ms per frame using one CPU core. Our work is highly suitable for Augmented Reality (AR), Mixed Reality (MR) and Virtual Reality (VR) applications due to its robustness, accuracy, efficiency and low latency. Aiming at stepping beyond the simple scenarios used by current systems, often constrained by having a single object in the absence of clutter, averting to touch the object to prevent close-range partial occlusion or selecting brightly colored objects to easily segment them individually, we demonstrate the capacity to handle challenging cases under clutter, partial occlusion and varying lighting conditions.

  10. The role of 3D and speckle tracking echocardiography in cardiac amyloidosis: a case report.

    PubMed

    Nucci, E M; Lisi, M; Cameli, M; Baldi, L; Puccetti, L; Mondillo, S; Favilli, R; Lunghetti, S

    2014-01-01

    Cardiac amyloidosis (CA) is a disorder characterized by amyloid fibril deposition in the cardiac interstitium; it results in a restrictive cardiomyopathy with heart failure (HF) and conduction abnormalities. The "gold standard" for the diagnosis of CA is myocardial biopsy, but possible sampling errors and procedural risks limit its use. Magnetic resonance imaging (MRI) offers more information than traditional echocardiography and allows diagnosis of CA, but it is often impossible to perform. We report the case of a man with HF and symptomatic bradyarrhythmia that required urgent pacemaker implantation. Echocardiography was strongly suggestive of CA, but MRI could not be performed to confirm this hypothesis because the patient had been implanted with a permanent pacemaker. Speckle tracking echocardiography (STE) and 3D echocardiography were therefore performed: STE can differentiate CA from other hypertrophic cardiomyopathies by a longitudinal strain value < 12%, and 3D echocardiography shows regional left ventricular dyssynchrony with a characteristic temporal pattern of dispersion of regional volume systolic change. On the basis of these results, an endomyocardial biopsy was finally performed and confirmed the diagnosis of CA. This case underlines the importance of new, noninvasive techniques such as 3D echocardiography and STE for the early diagnosis of CA, especially when MRI cannot be performed.

  11. 3D Visualization of near real-time remote-sensing observation for hurricanes field campaign using Google Earth API

    NASA Astrophysics Data System (ADS)

    Li, P.; Turk, J.; Vu, Q.; Knosp, B.; Hristova-Veleva, S. M.; Lambrigtsen, B.; Poulsen, W. L.; Licata, S.

    2009-12-01

    NASA is planning a new field experiment, the Genesis and Rapid Intensification Processes (GRIP), in the summer of 2010 to better understand how tropical storms form and develop into major hurricanes. The DC-8 aircraft and the Global Hawk Unmanned Airborne System (UAS) will be deployed loaded with instruments for measurements including lightning, temperature, 3D wind, precipitation, liquid and ice water contents, aerosol and cloud profiles. During the field campaign, both the spaceborne and the airborne observations will be collected in real-time and integrated with the hurricane forecast models. This observation-model integration will help the campaign achieve its science goals by allowing team members to effectively plan the mission with current forecasts. To support the GRIP experiment, JPL developed a website for interactive visualization of all related remote-sensing observations in the GRIP’s geographical domain using the new Google Earth API. All the observations are collected in near real-time (NRT) with 2 to 5 hour latency. The observations include a 1KM blended Sea Surface Temperature (SST) map from GHRSST L2P products; 6-hour composite images of GOES IR; stability indices, temperature and vapor profiles from AIRS and AMSU-B; microwave brightness temperature and rain index maps from AMSR-E, SSMI and TRMM-TMI; ocean surface wind vectors, vorticity and divergence of the wind from QuikSCAT; the 3D precipitation structure from TRMM-PR and vertical profiles of cloud and precipitation from CloudSAT. All the NRT observations are collected from the data centers and science facilities at NASA and NOAA, subsetted, re-projected, and composited into hourly or daily data products depending on the frequency of the observation. The data products are then displayed on the 3D Google Earth plug-in at the JPL Tropical Cyclone Information System (TCIS) website. The data products offered by the TCIS in the Google Earth display include image overlays, wind vectors, clickable

  12. On the holographic 3D tracking of in vitro cells characterized by a highly-morphological change.

    PubMed

    Memmolo, Pasquale; Iannone, Maria; Ventre, Maurizio; Netti, Paolo Antonio; Finizio, Andrea; Paturzo, Melania; Ferraro, Pietro

    2012-12-17

    Digital Holography (DH) in the microscopic configuration is a powerful tool for imaging micro-objects contained in a three-dimensional (3D) volume from a single-shot image acquisition. Many studies report on the ability of DH to track particles, microorganisms and cells in 3D. However, very few investigations have been performed with objects that severely change their morphology during the observation period. Here we study DH as a tool for 3D tracking of an osteosarcoma cell line in which extensive changes in cell morphology are associated with cell motion. Because of these large, unpredictable morphological changes, retrieving the cell's position in 3D can become a complicated issue. We investigate and discuss in this paper how the three-dimensional position can be affected by the continuous change of the cells' shape. Moreover, we propose and test some strategies to address these problems and compare them with other approaches. Finally, results on the 3D tracking are reported, illustrated and commented upon.

  13. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate capture of the spatial motion of an object is needed in a wide variety of applications, including industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition, which are based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems offer a set of advantages such as high acquisition speed and a potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical-vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The results of the algorithms' evaluation show high robustness and reliability for various motion analysis tasks in technical and biomechanics applications.
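
    A basic building block of such photogrammetric 3D measurement is linear triangulation of a tracked target from two calibrated, synchronized cameras. The sketch below is a generic direct-linear-transform (DLT) triangulation, not the Mosca system's code; the projection matrices are assumed to come from the camera calibration and external orientation step.

    ```python
    import numpy as np

    def triangulate_point(P1, P2, uv1, uv2):
        """Linear (DLT) triangulation of one 3D point from two calibrated views.

        P1, P2 : 3x4 camera projection matrices (from calibration/orientation).
        uv1, uv2 : pixel coordinates (u, v) of the same target in each view.
        Returns the 3D point in the world frame.
        """
        u1, v1 = uv1
        u2, v2 = uv2
        # Each view contributes two linear equations in the homogeneous point X.
        A = np.vstack([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        # Solve A X = 0 in the least-squares sense via SVD.
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]
    ```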

  14. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    SciTech Connect

    Brix, Lau; Ringgaard, Steffen; Sørensen, Thomas Sangild; Poulsen, Per Rugaard

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Results: Axial, sagittal
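
    As a rough illustration of step (3), the sketch below (not the authors' implementation) scans a library of resliced templates with a normalized cross-correlation search, here assuming scikit-image's match_template; the best-scoring template gives the through-plane offset, and the location of its correlation peak gives the in-plane position.

    ```python
    import numpy as np
    from skimage.feature import match_template

    def track_structure(image2d, template_library):
        """Pick the template (with known through-plane offset) and in-plane shift
        that maximize normalized cross-correlation with the live 2D image.

        template_library : list of (template_2d, through_plane_offset_mm) pairs,
        generated beforehand by reslicing the 3D MRI scan (step 2 of the method).
        Returns (row, col, through_plane_offset_mm, ncc_score).
        """
        best = None
        for template, offset in template_library:
            ncc = match_template(image2d, template)        # NCC map over in-plane shifts
            row, col = np.unravel_index(np.argmax(ncc), ncc.shape)
            score = ncc[row, col]
            if best is None or score > best[3]:
                best = (row, col, offset, score)
        return best
    ```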

  15. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacterial taxis and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% and only 1% false-detection. PMID:26329642

  16. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacterial taxis and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% and only 1% false-detection.
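
    For orientation only, a generic circle-Hough ring detector in the spirit of the approach described above can be sketched with scikit-image; this is an illustrative stand-in, not the published algorithm, and it omits the classification step that handles ring clusters. A separate calibration mapping ring radius to defocus distance is assumed for the axial coordinate.

    ```python
    import numpy as np
    from skimage.feature import canny
    from skimage.transform import hough_circle, hough_circle_peaks

    def detect_rings(frame, radii=np.arange(5, 40), num_particles=10):
        """Generic circle-Hough ring detector (illustrative only).

        Each out-of-focus particle appears as a ring; the ring radius maps to the
        axial (z) position through a separate calibration, and the ring centre
        gives the in-plane (x, y) position.
        """
        edges = canny(frame, sigma=2)                  # ring edges
        hspaces = hough_circle(edges, radii)           # accumulator per candidate radius
        accums, cx, cy, r = hough_circle_peaks(hspaces, radii,
                                               total_num_peaks=num_particles)
        return np.column_stack([cx, cy, r, accums])    # x, y, radius (~z), score
    ```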

  17. Does the mitral annulus shrink or enlarge during systole? A real-time 3D echocardiography study.

    PubMed

    Kwan, Jun; Jeon, Min-Jae; Kim, Dae-Hyeok; Park, Keum-Soo; Lee, Woo-Hyung

    2009-04-01

    This study was conducted to explore the geometrical changes of the mitral annulus (MA) during systole. The 3D shape of the MA was reconstructed in 13 normal subjects, each with a normal mitral apparatus, using real-time 3D echocardiography (RT3DE) and 3D computer software. The two orthogonal (antero-posterior and commissure-commissure) dimensions, the areas (2D projected and 3D surface) and the non-planarity of the MA were estimated during early, mid and late systole. We demonstrated that the MA had a "saddle shape" appearance and that it consistently enlarged, mainly in the antero-posterior direction, from early to late systole with lessening of its non-planarity, as determined by 3D reconstruction using RT3DE and 3D computer software.

  18. Prediction of 3D internal organ position from skin surface motion: results from electromagnetic tracking studies

    NASA Astrophysics Data System (ADS)

    Wong, Kenneth H.; Tang, Jonathan; Zhang, Hui J.; Varghese, Emmanuel; Cleary, Kevin R.

    2005-04-01

    An effective treatment method for organs that move with respiration (such as the lungs, pancreas, and liver) is a major goal of radiation medicine. In order to treat such tumors, we need (1) real-time knowledge of the current location of the tumor, and (2) the ability to adapt the radiation delivery system to follow this constantly changing location. In this study, we used electromagnetic tracking in a swine model to address the first challenge, and to determine if movement of a marker attached to the skin could accurately predict movement of an internal marker embedded in an organ. Under approved animal research protocols, an electromagnetically tracked needle was inserted into a swine liver and an electromagnetically tracked guidewire was taped to the abdominal skin of the animal. The Aurora (Northern Digital Inc., Waterloo, Canada) electromagnetic tracking system was then used to monitor the position of both of these sensors every 40 msec. Position readouts from the sensors were then tested to see if any of the movements showed correlation. The strongest correlations were observed between external anterior-posterior motion and internal inferior-superior motion, with many other axes exhibiting only weak correlation. We also used these data to build a predictive model of internal motion by taking segments from the data and using them to derive a general functional relationship between the internal needle and the external guidewire. For the axis with the strongest correlation, this model enabled us to predict internal organ motion to within 1 mm.
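
    A minimal version of the predictive model described above is a least-squares linear map from the external anterior-posterior signal to the internal inferior-superior signal, fitted on a training segment and applied to the rest of the recording; the sketch below is an assumption-laden simplification, not the authors' model.

    ```python
    import numpy as np

    def fit_linear_predictor(external_ap, internal_si):
        """Fit internal_si ~ a * external_ap + b on a training segment (all in mm)."""
        A = np.column_stack([external_ap, np.ones_like(external_ap)])
        coef, *_ = np.linalg.lstsq(A, internal_si, rcond=None)
        return coef                                    # (a, b)

    def predict_internal(external_ap, coef):
        """Predict internal motion from the external skin-marker signal."""
        return coef[0] * external_ap + coef[1]

    # Example use (train_ext, train_int, test_ext, test_int are hypothetical
    # 25 Hz recordings): fit on the first segment, then report the residual
    # RMS error on the held-out segment.
    # coef = fit_linear_predictor(train_ext, train_int)
    # rms = np.sqrt(np.mean((predict_internal(test_ext, coef) - test_int) ** 2))
    ```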

  19. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2009-03-19

    We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of Laser Confocal Scanning Microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 μm. It has been our goal to perform a total non-destructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-Ray Fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 μm/pixel, without the use of oil-based lenses. A full textural analysis on track No.82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No.128, No.129 and No.140). We present a method of removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information, while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analyses.

  20. Segmentation and Tracking of Adherens Junctions in 3D for the Analysis of Epithelial Tissue Morphogenesis

    PubMed Central

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-01-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT) PMID:25884654

  1. Segmentation and tracking of adherens junctions in 3D for the analysis of epithelial tissue morphogenesis.

    PubMed

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-04-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT).

  2. Simulations of Coalescence and Breakup of Interfaces Using a 3D Front-tracking Method

    NASA Astrophysics Data System (ADS)

    Lu, Jiacai; Tryggvason, Gretar

    2015-11-01

    Direct Numerical Simulations (DNS) of complex multiphase flows with coalescing and breaking-up interfaces are conducted using a 3D front-tracking method. The front-tracking method has been successfully used in DNS of turbulent channel bubbly flows and many other multiphase flows, but as the void fraction increases, changes in the interface topology through coalescence and breakup become more common and have to be accounted for. Topology changes have often been identified as a challenge for front tracking, where the interface is represented by a triangular mesh, but here we present an efficient algorithm for changing the topology of the triangular interface elements. In the current implementation we have not included any small-scale attractive forces, so thin films coalesce either at prescribed times or when their thickness reaches a given value. Simulations of the collision of two drops and comparisons with experimental results have been used to validate the algorithm, but the main applications have been to flow regime transitions in gas-liquid flows in pressure-driven channels. The evolution of the flow, including flow rate, wall shear, projected interface areas, pseudo-turbulence, and the average size of the various flow structures, is examined as the topology of the interface changes through coalescence and breakup. Research supported by DOE (CASL).

  3. Monitoring the effects of doxorubicin on 3D-spheroid tumor cells in real-time

    PubMed Central

    Baek, NamHuk; Seo, Ok Won; Kim, MinSung; Hulme, John; An, Seong Soo A

    2016-01-01

    Recently, increasing numbers of cell culture experiments with 3D spheroids have produced results that correlate better with in vivo findings than traditional 2D cell culture systems. 3D spheroids offer a simple and highly reproducible model that exhibits many characteristics of natural tissue, such as the production of extracellular matrix. In this paper numerous cell lines were screened and selected depending on their ability to form and maintain a spherical shape. The effects of increasing concentrations of doxorubicin (DXR) on the integrity and viability of the selected spheroids were then measured at regular intervals and in real-time. In total 12 cell lines, adenocarcinomic alveolar basal epithelial (A549), muscle (C2C12), prostate (DU145), testis (F9), pituitary epithelial-like (GH3), cervical cancer (HeLa), HeLa contaminant (HEp2), embryo (NIH3T3), embryo (PA317), neuroblastoma (SH-SY5Y), osteosarcoma (U2OS), and embryonic kidney cells (293T), were screened. Of the 12, 8 cell lines, NIH3T3, C2C12, 293T, SH-SY5Y, A549, HeLa, PA317, and U2OS, formed regular spheroids, and the effects of DXR on these structures were measured at regular intervals. Finally, 5 cell lines, A549, HeLa, SH-SY5Y, U2OS, and 293T, were selected for real-time monitoring, and the effects of DXR treatment on their behavior were continuously recorded for 5 days. A potential correlation regarding the effects of DXR on spheroid viability and ATP production was measured on days 1, 3, and 5. Cytotoxicity of DXR seemed to occur after endocytosis, since cellular activity and ATP production were still maintained 1 day after treatment in all spheroids except SH-SY5Y. Both cellular activity and ATP production were halted 3 and 5 days from the start of the treatment in all spheroids. All cell lines maintained their spheroid shape except SH-SY5Y, which behaved in an unpredictable manner when exposed to toxic concentrations of DXR. Cytotoxic effects of DXR towards SH-SY5Y seemed to cause degradation of

  4. Automatic 2D to 3D conversion implemented for real-time applications

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr; Ramos-Diaz, Eduardo; Gonzalez Huitron, Victor

    2014-05-01

    Different hardware implementations of an automatic 2D to 3D video color conversion employing a 2D video sequence are presented. The analyzed framework jointly processes neighboring frames using the following blocks: CIELa*b* color space conversion, wavelet transform, edge detection using the high-frequency wavelet sub-bands (HL, LH and HH), color segmentation via k-means on the a*b* color plane, up-sampling, disparity map (DM) estimation, adaptive postfiltering, and finally, anaglyph 3D scene generation. During edge detection, the Donoho threshold is computed, each sub-band is binarized according to the chosen threshold, and the thresholded image is formed. DM estimation is performed in the following manner: in the left stereo image (or frame), windows of varying size are used according to the information obtained from the binarized sub-band image, distinguishing different texture areas in the LL sub-band image. Stereo matching is performed between the two (left and right) LL sub-band images using processing with different window sizes. An up-sampling procedure is employed in order to obtain the enhanced DM. The adaptive post-processing procedure is based on a median filter and k-means segmentation in the a*b* color plane. The SSIM and QBP criteria are applied in order to compare the performance of the proposed framework against other disparity map computation techniques. The designed technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink module on a PC with Windows 7, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode.
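
    The Donoho (universal) threshold used in the edge-detection step has a simple closed form, T = σ√(2 ln N) with σ estimated as median(|d|)/0.6745 from the sub-band coefficients d; a minimal sketch of the per-sub-band binarization is given below (illustrative only).

    ```python
    import numpy as np

    def donoho_threshold(subband):
        """Universal (Donoho) threshold for one high-frequency wavelet sub-band."""
        sigma = np.median(np.abs(subband)) / 0.6745    # robust noise estimate
        return sigma * np.sqrt(2.0 * np.log(subband.size))

    def binarize_subband(subband):
        """Edge map: keep coefficients whose magnitude exceeds the threshold."""
        return (np.abs(subband) > donoho_threshold(subband)).astype(np.uint8)
    ```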

  5. Real-time 3D Fourier-domain optical coherence tomography guided microvascular anastomosis

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Ibrahim, Zuhaib; Lee, W. P. A.; Brandacher, Gerald; Kang, Jin U.

    2013-03-01

    Vascular and microvascular anastomosis is considered to be the foundation of plastic and reconstructive surgery, hand surgery, transplant surgery, vascular surgery and cardiac surgery. In the last two decades innovative techniques, such as vascular coupling devices, thermo-reversible poloxamers and suture-less cuffs, have been introduced. Intra-operative surgical guidance using an imaging modality that provides an in-depth view and 3D imaging can improve the outcome of both conventional and innovative anastomosis techniques. Optical coherence tomography (OCT) is a noninvasive, high-resolution (micron level), high-speed, 3D imaging modality that has been adopted widely in biomedical and clinical applications. In this work we performed a proof-of-concept evaluation study of OCT as an assistive intraoperative and post-operative imaging modality for microvascular anastomosis of rodent femoral vessels. The OCT imaging modality provided a lateral resolution of 12 μm, an axial resolution of 3.0 μm in air, and an imaging speed of 0.27 volume/s, which provided the surgeon with a clearly visualized vessel lumen wall and the suture needle position relative to the vessel during intraoperative imaging. Graphics processing unit (GPU) accelerated phase-resolved Doppler OCT (PRDOCT) imaging of the surgical site was performed as a post-operative evaluation of the anastomosed vessels and to visualize blood flow and thrombus formation. This information could help surgeons improve surgical precision in this highly challenging anastomosis of rodent vessels with diameters of less than 0.5 mm. Our imaging modality could not only detect accidental suturing through the back wall of the lumen but also promptly diagnose and predict thrombosis immediately after reperfusion. Hence, real-time OCT can assist in the decision-making process intra-operatively and help avoid post-operative complications.

  6. Clinical feasibility and validation of 3D principal strain analysis from cine MRI: comparison to 2D strain by MRI and 3D speckle tracking echocardiography.

    PubMed

    Satriano, Alessandro; Heydari, Bobak; Narous, Mariam; Exner, Derek V; Mikami, Yoko; Attwood, Monica M; Tyberg, John V; Lydell, Carmen P; Howarth, Andrew G; Fine, Nowell M; White, James A

    2017-07-06

    Two-dimensional (2D) strain analysis is constrained by geometry-dependent reference directions of deformation (i.e. radial, circumferential, and longitudinal) following the assumption of cylindrical chamber architecture. Three-dimensional (3D) principal strain analysis may overcome such limitations by referencing intrinsic (i.e. principal) directions of deformation. This study aimed to demonstrate clinical feasibility of 3D principal strain analysis from routine 2D cine MRI with validation to strain from 2D tagged cine analysis and 3D speckle tracking echocardiography. Thirty-one patients undergoing cardiac MRI were studied. 3D strain was measured from routine, multi-planar 2D cine SSFP images using custom software designed to apply 4D deformation fields to 3D cardiac models to derive principal strain. Comparisons of strain estimates versus those by 2D tagged cine, 2D non-tagged cine (feature tracking), and 3D speckle tracking echocardiography (STE) were performed. Mean age was 51 ± 14 (36% female). Mean LV ejection fraction was 66 ± 10% (range 37-80%). 3D principal strain analysis was feasible in all subjects and showed high inter- and intra-observer reproducibility (ICC range 0.83-0.97 and 0.83-0.98, respectively-p < 0.001 for all directions). Strong correlations of minimum and maximum principal strain were respectively observed versus the following: 3D STE estimates of longitudinal (r = 0.81 and r = -0.64), circumferential (r = 0.76 and r = -0.58) and radial (r = -0.80 and r = 0.63) strain (p < 0.001 for all); 2D tagged cine estimates of longitudinal (r = 0.81 and r = -0.81), circumferential (r = 0.87 and r = -0.85), and radial (r = -0.76 and r = 0.81) strain (p < 0.0001 for all); and 2D cine (feature tracking) estimates of longitudinal (r = 0.85 and -0.83), circumferential (r = 0.88 and r = -0.87), and radial strain (r = -0.79 and r = 0.84, p < 0.0001 for all). 3D
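
    The principal-strain idea can be summarized in a few lines: build the strain tensor from the local deformation gradient and diagonalize it, so the directions of deformation are eigenvectors rather than assumed radial/circumferential/longitudinal axes. The sketch below uses the Green-Lagrange definition and is a conceptual illustration, not the custom software described in the study.

    ```python
    import numpy as np

    def principal_strains(F):
        """Principal strains from a 3x3 deformation gradient F.

        Green-Lagrange strain: E = 0.5 * (F^T F - I). Its eigenvalues are the
        principal strains and its eigenvectors the principal directions, so no
        cylindrical (radial/circumferential/longitudinal) assumption is needed.
        """
        E = 0.5 * (F.T @ F - np.eye(3))
        values, vectors = np.linalg.eigh(E)            # eigenvalues in ascending order
        return values, vectors                         # values[0] = minimum, values[-1] = maximum principal strain

    # Example: 5% stretch along x with 2% shortening along y and z.
    # F = np.diag([1.05, 0.98, 0.98]); principal_strains(F)[0]
    ```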

  7. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    SciTech Connect

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S; Senneville, B Denis de

    2015-06-15

    Recent developments have made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about the changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data were sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of (2.5 mm)³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data were downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. At kidney-liver boundaries and in the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of the high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on these synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the

  8. A real-time emergency response workstation using a 3-D numerical model initialized with sodar

    SciTech Connect

    Lawver, B.S.; Sullivan, T.J.; Baskett, R.L.

    1993-01-28

    Many emergency response dispersion modeling systems provide simple Gaussian models driven by single meteorological tower inputs to estimate the downwind consequences from accidental spills or stack releases. Complex meteorological or terrain settings demand more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion. Mountain valleys and sea breeze flows are two common examples of such settings. To address these complexities, the authors have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on a workstation for use in real-time emergency response modeling. MATHEW/ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability (ARAC) project. The models are initialized using an array of surface wind measurements from meteorological towers coupled with vertical profiles from an acoustic sounder (sodar). The workstation automatically acquires the meteorological data every 15 minutes. A source term is generated using either defaults or a real-time stack monitor. Model outputs include contoured isopleths displayed on site geography or plume densities shown over 3-D color shaded terrain. The models are automatically updated every 15 minutes to provide the emergency response manager with a continuous display of potentially hazardous ground-level conditions if an actual release were to occur. Model run time is typically less than 2 minutes on 6 megaflop (~30 MIPS) workstations. Data acquisition, limited by dial-up modem communications, requires 3 to 5 minutes.

  9. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology have furthered its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement result, which is often overlooked in existing real-time 3-D shape measurement systems. To this end, we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes the camera lens distortion into account. A pixel mapping relation between the distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then derived from the solved height. Besides, a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
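
    Precomputing a distorted-to-corrected pixel map and applying it to every incoming fringe image is the same pattern provided by OpenCV's undistortion-map functions; the sketch below is a hedged illustration of that idea (it assumes calibration results camera_matrix and dist_coeffs are available) rather than the authors' implementation.

    ```python
    import cv2
    import numpy as np

    def build_undistortion_lut(camera_matrix, dist_coeffs, image_size):
        """Precompute the pixel mapping once (kept in memory for real-time use)."""
        w, h = image_size
        map1, map2 = cv2.initUndistortRectifyMap(
            camera_matrix, dist_coeffs, None, camera_matrix, (w, h), cv2.CV_32FC1)
        return map1, map2

    def correct_fringe_image(raw_frame, map1, map2):
        """Apply the precomputed LUT to each captured fringe image."""
        return cv2.remap(raw_frame, map1, map2, interpolation=cv2.INTER_LINEAR)
    ```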

  10. Application of 3D WebGIS and real-time technique in earthquake information publishing and visualization

    NASA Astrophysics Data System (ADS)

    Li, Boren; Wu, Jianping; Pan, Mao; Huang, Jing

    2015-06-01

    In hazard management, earthquake researchers have utilized GIS to ease the process of managing disasters, and WebGIS is used to assess hazards and seismic risk. Although such systems provide a visual analysis platform based on GIS technology, the extensibility of WebGIS for processing dynamic data, especially real-time data, has received little attention. In this paper, we propose a novel real-time 3D visual earthquake information publishing model based on WebGIS and a digital globe to improve the ability of WebGIS-based systems to process real-time data. On the basis of the model, we implement a real-time 3D earthquake information publishing system, EqMap3D. The system can not only publish real-time earthquake information but also display these data and their background geoscience information in a 3D scene. It provides a powerful tool for display, analysis, and decision-making for researchers and administrators. It also facilitates better communication between researchers engaged in the geosciences and the interested public.

  11. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate the 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.

  12. Application of 3D hydrodynamic and particle tracking models for better environmental management of finfish culture

    NASA Astrophysics Data System (ADS)

    Moreno Navas, Juan; Telfer, Trevor C.; Ross, Lindsay G.

    2011-04-01

    Hydrographic conditions, and particularly current speeds, have a strong influence on the management of fish cage culture. These hydrodynamic conditions can be used to predict particle movement within the water column, and the results can be used to optimise environmental conditions for effective site selection, the setting of environmental quality standards, waste dispersion, and potential disease transfer. To this end, a 3D hydrodynamic model, MOHID, has been coupled to a particle tracking model to study the effects of mean current speed, quiescent water periods and bulk water circulation in Mulroy Bay, Co. Donegal, Ireland, an Irish fjard (shallow fjordic system) important to the aquaculture industry. A Lagrangian method simulated the instantaneous release of "particles" emulating discharge from finfish cages to show the behaviour of waste in terms of water circulation and water exchange. The 3D spatial models were used to identify areas of mixed and stratified water using a version of the Simpson-Hunter criterion, and to use this in conjunction with models of current flow for appropriate site selection for salmon aquaculture. The modelled outcomes for stratification were in good agreement with direct measurements of water column stratification based on observed density profiles. Calculations of the Simpson-Hunter tidal parameter indicated that most of Mulroy Bay is potentially stratified, with a well-mixed region over the shallow channels where the water flows faster. The fjard is characterised by areas of both very low and high mean current speeds, with some areas having long periods of quiescent water. The residual current and the particle tracking animations created with the models revealed an anticlockwise eddy that may influence waste dispersion and the potential for disease transfer among salmon cages, and which extends the retention time of waste substances from the cages. The hydrodynamic model results were incorporated into the ArcView™ GIS
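
    The Simpson-Hunter criterion referred to above is commonly written as S = log10(h/u³), with water depth h and a measure of the tidal current speed u; cells with S above a critical value (around 2.7-3 in the tidal-front literature) are taken as stratified. A minimal sketch of this classification is given below; the critical value used here is an assumption, not taken from the study.

    ```python
    import numpy as np

    def simpson_hunter(depth_m, tidal_speed_ms):
        """Simpson-Hunter stratification parameter S = log10(h / u^3)."""
        return np.log10(depth_m / np.maximum(tidal_speed_ms, 1e-6) ** 3)

    def classify_mixing(depth_m, tidal_speed_ms, critical=3.0):
        """Label each grid cell: True = likely stratified, False = well mixed.

        The critical value is an assumption here; in practice it is tuned
        against observed density profiles, as done in the study above.
        """
        return simpson_hunter(depth_m, tidal_speed_ms) > critical
    ```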

  13. Real-Time 3D Microwave Monitoring of Interstitial Thermal Therapy.

    PubMed

    Chen, Guanbo; Stang, John; Haynes, Mark; Leuthardt, Eric; Moghaddam, Mahta

    2017-05-08

    We report a method for real-time 3D monitoring of thermal therapy through the use of non-contact microwave imaging. This method is predicated on using microwaves to image changes in the dielectric properties of tissue with changing temperature. Instead of the precomputed linear Born approximation that was used in prior work to speed up the frame-to-frame inversions, here we use the nonlinear Distorted Born Iterative Method (DBIM) to solve the electric volume integral equation to image the temperature change. This is made possible by using a recently developed GPU accelerated conformal finite difference time domain (CFDTD) method to solve the forward problem and update the electric field in the monitored region in each DBIM iteration. Compared to our previous work, this approach provides a far superior approximation of the electric field within the volume integral equation (VIE), and thus yields a more accurate reconstruction of tissue temperature change. The proposed method is validated using a realistic numerical model of interstitial thermal therapy for a deep-seated brain lesion. With the new DBIM, we reduced the average estimation error of the mean temperature within the region of interest from 2.5 to 1.0 degrees for the noise-free case, and from 2.9 to 1.7 degrees for the 2% background noise case.
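
    Conceptually, each DBIM iteration solves a regularized linear system for the contrast update given the current field estimate, then reruns the forward solver to refresh that estimate. The sketch below shows only a Tikhonov-regularized update step with a generic sensitivity matrix; it is a schematic illustration, not the GPU-accelerated CFDTD implementation described above.

    ```python
    import numpy as np

    def dbim_update(G, residual_field, alpha=1e-2):
        """One linearized (distorted-Born) update of the dielectric contrast.

        G              : M x N sensitivity matrix built from the current total
                         field and Green's function (assumed precomputed by the
                         forward solver).
        residual_field : measured minus simulated scattered field (length M).
        Solves min ||G dx - residual||^2 + alpha ||dx||^2 (Tikhonov).
        """
        GhG = G.conj().T @ G
        rhs = G.conj().T @ residual_field
        return np.linalg.solve(GhG + alpha * np.eye(GhG.shape[0]), rhs)

    # In a full DBIM loop, the contrast is updated, the forward solver is rerun
    # to refresh the fields entering G, and the process repeats until the
    # residual stops decreasing; the contrast change then maps to temperature.
    ```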

  14. Registration of Real-Time 3-D Ultrasound to Tomographic Images of the Abdominal Aorta.

    PubMed

    Brekken, Reidar; Iversen, Daniel Høyer; Tangen, Geir Arne; Dahl, Torbjørn

    2016-08-01

    The purpose of this study was to develop an image-based method for registration of real-time 3-D ultrasound to computed tomography (CT) of the abdominal aorta, targeting future use in ultrasound-guided endovascular intervention. We proposed a method in which a surface model of the aortic wall was segmented from CT, and the approximate initial location of this model relative to the ultrasound volume was manually indicated. The model was iteratively transformed to automatically optimize correspondence to the ultrasound data. Feasibility was studied using data from a silicon phantom and in vivo data from a volunteer with previously acquired CT. Through visual evaluation, the ultrasound and CT data were seen to correspond well after registration. Both aortic lumen and branching arteries were well aligned. The processing was done offline, and the registration took approximately 0.2 s per ultrasound volume. The results encourage further patient studies to investigate accuracy, robustness and clinical value of the approach.
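
    Iteratively transforming a segmented surface model to fit image data is closely related to the classic iterative closest point (ICP) scheme. The sketch below is a generic point-to-point rigid ICP with a closed-form (Kabsch) update, offered only as an illustration of this family of methods; the authors' similarity measure and optimization are not reproduced here.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_icp(model_pts, target_pts, iters=30):
        """Minimal point-to-point ICP: align CT surface points to ultrasound points."""
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(target_pts)
        src = model_pts.copy()
        for _ in range(iters):
            _, idx = tree.query(src)                   # nearest-neighbour correspondences
            dst = target_pts[idx]
            mu_s, mu_d = src.mean(0), dst.mean(0)
            H = (src - mu_s).T @ (dst - mu_d)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
            R_step = Vt.T @ D @ U.T                    # Kabsch rotation (no reflection)
            t_step = mu_d - R_step @ mu_s
            src = src @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step
        return R, t
    ```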

  15. Real-time 3D visualization of cellular rearrangements during cardiac valve formation.

    PubMed

    Pestel, Jenny; Ramadass, Radhan; Gauvrit, Sebastien; Helker, Christian; Herzog, Wiebke; Stainier, Didier Y R

    2016-06-15

    During cardiac valve development, the single-layered endocardial sheet at the atrioventricular canal (AVC) is remodeled into multilayered immature valve leaflets. Most of our knowledge about this process comes from examining fixed samples that do not allow a real-time appreciation of the intricacies of valve formation. Here, we exploit non-invasive in vivo imaging techniques to identify the dynamic cell behaviors that lead to the formation of the immature valve leaflets. We find that in zebrafish, the valve leaflets consist of two sets of endocardial cells at the luminal and abluminal side, which we refer to as luminal cells (LCs) and abluminal cells (ALCs), respectively. By analyzing cellular rearrangements during valve formation, we observed that the LCs and ALCs originate from the atrium and ventricle, respectively. Furthermore, we utilized Wnt/β-catenin and Notch signaling reporter lines to distinguish between the LCs and ALCs, and also found that cardiac contractility and/or blood flow is necessary for the endocardial expression of these signaling reporters. Thus, our 3D analyses of cardiac valve formation in zebrafish provide fundamental insights into the cellular rearrangements underlying this process. © 2016. Published by The Company of Biologists Ltd.

  16. Real-time 3D visualization of cellular rearrangements during cardiac valve formation

    PubMed Central

    Pestel, Jenny; Ramadass, Radhan; Gauvrit, Sebastien; Helker, Christian; Herzog, Wiebke

    2016-01-01

    During cardiac valve development, the single-layered endocardial sheet at the atrioventricular canal (AVC) is remodeled into multilayered immature valve leaflets. Most of our knowledge about this process comes from examining fixed samples that do not allow a real-time appreciation of the intricacies of valve formation. Here, we exploit non-invasive in vivo imaging techniques to identify the dynamic cell behaviors that lead to the formation of the immature valve leaflets. We find that in zebrafish, the valve leaflets consist of two sets of endocardial cells at the luminal and abluminal side, which we refer to as luminal cells (LCs) and abluminal cells (ALCs), respectively. By analyzing cellular rearrangements during valve formation, we observed that the LCs and ALCs originate from the atrium and ventricle, respectively. Furthermore, we utilized Wnt/β-catenin and Notch signaling reporter lines to distinguish between the LCs and ALCs, and also found that cardiac contractility and/or blood flow is necessary for the endocardial expression of these signaling reporters. Thus, our 3D analyses of cardiac valve formation in zebrafish provide fundamental insights into the cellular rearrangements underlying this process. PMID:27302398

  17. Development of Real-Time 3-D Photoacoustic Imaging System Employing Spherically Curved Array Transducer.

    PubMed

    Nagaoka, Ryo; Tabata, Takuya; Takagi, Ryo; Yoshizawa, Shin; Umemura, Shin-Ichiro; Saijo, Yoshifumi

    2017-08-01

    Photoacoustic (PA) imaging is a promising imaging modality for visualizing specific living tissues based on their light absorption coefficients, without dyeing. In this paper, a real-time PA imaging system with a tunable laser was newly developed around an originally designed spherically curved array transducer. Five different series of experiments were conducted to validate the PA measurement system. The peak frequency of the transducer response was 17.7 MHz, and the 3-D volume-imaging rate was 10-20 volumes per second. The spatial resolution of the imaging was 90 μm along both the axial and lateral directions. The developed imaging system could measure a difference in the absorption coefficient of gold nanorods. Additionally, the PA imaging could visualize the in vivo microvasculature of a human hand. This PA imaging system, with its high spatiotemporal resolution and tunable laser, should further enhance our understanding of not only the basic properties of photoacoustics but also its clinical applications.

  18. Using an automated 3D-tracking system to record individual and shoals of adult zebrafish.

    PubMed

    Maaswinkel, Hans; Zhu, Liqun; Weng, Wei

    2013-12-05

    Like many aquatic animals, the zebrafish (Danio rerio) moves in a 3D space, so it is preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single adult zebrafish and shoals. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. Step-by-step protocols for calibration and for using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about the vertical and horizontal distribution of the zebrafish and about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle), and it provides the data necessary to calculate parameters for social cohesion when testing shoals.
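
    The dominant error the calibration must correct comes from refraction at the water-air interface. For a camera looking roughly straight down at a flat interface, the paraxial correction is simple, as sketched below; the full calibration in the system above handles oblique mirror views and is considerably more involved.

    ```python
    import numpy as np

    N_AIR, N_WATER = 1.000, 1.333

    def true_depth(apparent_depth):
        """Paraxial (near-vertical viewing) correction: objects in water appear
        shallower by a factor n_air / n_water, so scale the apparent depth back."""
        return apparent_depth * (N_WATER / N_AIR)

    def refracted_angle(incidence_angle_air):
        """Snell's law: ray angle below the water surface, in radians."""
        return np.arcsin(np.sin(incidence_angle_air) * N_AIR / N_WATER)

    # Example: a fish whose image places it 10 cm below the surface is actually
    # at roughly 13.3 cm for a camera looking straight down.
    ```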

  19. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

    We studied defragmented image based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of PB angular orientation with respect to a display panel. This was critical for both image color balancing and minimizing image resolution mismatch between horizontal and vertical directions. For evaluating uniformity of image brightness, we applied optical ray tracing simulations. The simulations took effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity at around sweet spots in viewing zones. However this was contradicted by real experimental results. We offer quantitative treatment of illuminance uniformity of view images to estimate misalignment of PB orientation, which could account for brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of PB orientation due to practical restrictions of adjustment accuracy can induce substantial non-uniformity of view images' brightness. We find that image brightness non-uniformity critically depends on misalignment of PB angular orientation, for example, as slight as ≤ 0.01° in our system. This reveals that reducing misalignment of PB angular orientation from the order of 10⁻² to 10⁻³ degrees can greatly improve the brightness uniformity.

  20. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    PubMed

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they precede body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (or in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy. Despite this benefit, eye tracking is not widely used as a control interface for robotic interfaces in movement-impaired patients because of poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location in the workspace by simply looking at the target and winking once. This purely eye-tracking based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.

  1. Real-time motion- and B0-correction for LASER-localized spiral-accelerated 3D-MRSI of the brain at 3T.

    PubMed

    Bogner, Wolfgang; Hess, Aaron T; Gagoski, Borjan; Tisdall, M Dylan; van der Kouwe, Andre J W; Trattnig, Siegfried; Rosen, Bruce; Andronesi, Ovidiu C

    2014-03-01

    The full potential of magnetic resonance spectroscopic imaging (MRSI) is often limited by localization artifacts, motion-related artifacts, scanner instabilities, and long measurement times. Localized adiabatic selective refocusing (LASER) provides accurate B1-insensitive spatial excitation even at high magnetic fields. Spiral encoding accelerates MRSI acquisition, and thus, enables 3D-coverage without compromising spatial resolution. Real-time position- and shim/frequency-tracking using MR navigators correct motion- and scanner instability-related artifacts. Each of these three advanced MRI techniques provides superior MRSI data compared to commonly used methods. In this work, we integrated in a single pulse sequence these three promising approaches. Real-time correction of motion, shim, and frequency-drifts using volumetric dual-contrast echo planar imaging-based navigators were implemented in an MRSI sequence that uses low-power gradient modulated short-echo time LASER localization and time efficient spiral readouts, in order to provide fast and robust 3D-MRSI in the human brain at 3T. The proposed sequence was demonstrated to be insensitive to motion- and scanner drift-related degradations of MRSI data in both phantoms and volunteers. Motion and scanner drift artifacts were eliminated and excellent spectral quality was recovered in the presence of strong movement. Our results confirm the expected benefits of combining a spiral 3D-LASER-MRSI sequence with real-time correction. The new sequence provides accurate, fast, and robust 3D metabolic imaging of the human brain at 3T. This will further facilitate the use of 3D-MRSI for neuroscience and clinical applications.

  2. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models, and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in the x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations.

  3. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347

  4. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    SciTech Connect

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-15

    occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  5. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  6. Infrared tomographic PIV and 3D motion tracking system applied to aquatic predator-prey interaction

    NASA Astrophysics Data System (ADS)

    Adhikari, Deepak; Longmire, Ellen K.

    2013-02-01

    Infrared tomographic PIV and 3D motion tracking are combined to measure evolving volumetric velocity fields and organism trajectories during aquatic predator-prey interactions. The technique was used to study zebrafish foraging on both non-evasive and evasive prey species. Measurement volumes of 22.5 mm × 10.5 mm × 12 mm were reconstructed from images captured on a set of four high-speed cameras. To obtain accurate fluid velocity vectors within each volume, fish were first masked out using an automated visual hull method. Fish and prey locations were identified independently from the same image sets and tracked separately within the measurement volume. Experiments demonstrated that fish were not influenced by the infrared laser illumination or the tracer particles. Results showed that the zebrafish used different strategies, suction and ram feeding, for successful capture of non-evasive and evasive prey, respectively. The two strategies yielded different variations in fluid velocity between the fish mouth and the prey. In general, the results suggest that the local flow field, the direction of prey locomotion with respect to the predator and the relative accelerations and speeds of the predator and prey may all be significant in determining predation success.

  7. Eulerian and Lagrangian methods for vortex tracking in 2D and 3D flows

    NASA Astrophysics Data System (ADS)

    Huang, Yangzi; Green, Melissa

    2014-11-01

    Coherent structures are a key component of unsteady flows in shear layers. Improvements in experimental techniques have led to larger amounts of data and require automated procedures for vortex tracking. Many vortex criteria are Eulerian and identify the structures by an instantaneous local swirling motion in the field, indicated by closed or spiral streamlines or pathlines in a suitable reference frame. Alternatively, Lagrangian Coherent Structure (LCS) analysis is a Lagrangian method based on quantities calculated along fluid particle trajectories. In the current work, vortex detection is demonstrated on data from the simulation of two cases: a 2D flow with a flat plate undergoing a 45° pitch-up maneuver and a 3D wall-bounded turbulent channel flow. Vortices are visualized and tracked by their centers and boundaries using Γ1, the Q criterion, and LCS saddle points. In the 2D case, the saddle-point traces showed a rapid acceleration of the structure, which indicates shedding from the plate. For the channel flow, the saddle-point traces show that the average structure convection speed exhibits a similar trend as a function of wall-normal distance to the mean velocity profile, and they lead to statistical quantities of vortex dynamics. Dr. Jeff Eldredge and his research group at UCLA are gratefully acknowledged for sharing the simulation database used in the current research. This work was supported by the Air Force Office of Scientific Research under AFOSR Award No. FA9550-14-1-0210.
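
    For reference, the Q criterion mentioned above flags vortical regions where rotation dominates strain, Q = 0.5*(||Omega||^2 - ||S||^2) > 0, with S and Omega the symmetric and antisymmetric parts of the velocity gradient tensor. The sketch below evaluates Q on a gridded 2D velocity field; the field and grid-spacing names are assumptions, and the authors' actual implementation is not described in the abstract.

      import numpy as np

      def q_criterion_2d(u, v, dx, dy):
          """Q criterion on a 2D velocity field sampled on a regular grid.

          u, v : 2D arrays of velocity components, indexed [y, x].
          Q = 0.5 * (||Omega||^2 - ||S||^2), where S and Omega are the
          symmetric and antisymmetric parts of the velocity gradient tensor.
          Vortex cores are commonly identified where Q > 0.
          """
          dudy, dudx = np.gradient(u, dy, dx)
          dvdy, dvdx = np.gradient(v, dy, dx)
          # Frobenius norms of the strain-rate and rotation-rate tensors
          s_norm2 = dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx) ** 2
          o_norm2 = 0.5 * (dvdx - dudy) ** 2
          return 0.5 * (o_norm2 - s_norm2)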

  8. A multi-frequency electrical impedance tomography system for real-time 2D and 3D imaging

    NASA Astrophysics Data System (ADS)

    Yang, Yunjie; Jia, Jiabin

    2017-08-01

    This paper presents the design and evaluation of a configurable, fast multi-frequency Electrical Impedance Tomography (mfEIT) system for real-time 2D and 3D imaging, particularly for biomedical imaging. The system integrates 32 electrode interfaces, and the current frequency ranges from 10 kHz to 1 MHz. The system incorporates the following novel features. First, a fully adjustable multi-frequency current source with a current monitoring function is designed. Second, a flexible switching scheme is developed for arbitrary sensing configurations, and a semi-parallel data acquisition architecture is implemented for high-frame-rate data acquisition. Furthermore, multi-frequency digital quadrature demodulation is accomplished in a high-capacity Field Programmable Gate Array. Finally, 3D imaging software, visual tomography, is developed for real-time 2D and 3D image reconstruction, data analysis, and visualization. The mfEIT system is systematically tested and evaluated in terms of signal-to-noise ratio (SNR), frame rate, and 2D and 3D multi-frequency phantom imaging. The highest SNR is 82.82 dB on a 16-electrode sensor. The frame rate is up to 546 fps in serial mode and 1014 fps in semi-parallel mode. The evaluation results indicate that the presented mfEIT system is a powerful tool for real-time 2D and 3D imaging.
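
    The multi-frequency digital quadrature demodulation mentioned above can be illustrated in a few lines: the sampled electrode voltage is multiplied by in-phase and quadrature references at the excitation frequency and low-pass filtered (here by simple averaging over an integer number of periods) to recover amplitude and phase. This is a generic textbook sketch, not the FPGA implementation in the paper; the sampling rate and signal names are assumptions.

      import numpy as np

      def quadrature_demodulate(x, f_exc, fs):
          """Recover amplitude and phase of the f_exc component of x.

          x     : 1D array of samples covering an integer number of excitation
                  periods (so plain averaging acts as the low-pass filter).
          f_exc : excitation frequency in Hz;  fs : sampling rate in Hz.
          """
          t = np.arange(len(x)) / fs
          i_ref = np.cos(2 * np.pi * f_exc * t)
          q_ref = np.sin(2 * np.pi * f_exc * t)
          i_comp = 2.0 * np.mean(x * i_ref)    # in-phase component
          q_comp = 2.0 * np.mean(x * q_ref)    # quadrature component
          amplitude = np.hypot(i_comp, q_comp)
          # phase such that x ~= amplitude * cos(2*pi*f_exc*t + phase)
          phase = np.arctan2(-q_comp, i_comp)
          return amplitude, phase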

  9. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    NASA Astrophysics Data System (ADS)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  10. Model based 3D CS-catheter tracking from 2D X-ray projections: binary versus attenuation models.

    PubMed

    Haase, Christian; Schäfer, Dirk; Dössel, Olaf; Grass, Michael

    2014-04-01

    Tracking the location of medical devices in interventional X-ray data solves different problems. For example the motion information of the devices is used to determine cardiac or respiratory motion during X-ray guided procedures or device features are used as landmarks to register images. In this publication an approach using a 3D deformable catheter model is presented and used to track a coronary sinus (CS) catheter in 3D plus time through a complete rotational angiography sequence. The benefits of using voxel based models with attenuation information for 2D/3D registration are investigated in comparison to binary catheter models. The 2D/3D registration of the model allows to extract a 3D catheter shape from every individual 2D projection. The tracking accuracy is evaluated on simulated and clinical rotational angiography data of the contrast enhanced left atrium. The quantitative evaluation of the experiments delivers an average registration accuracy for all catheter electrodes of 0.23 mm in 2D and 0.95 mm in 3D when using an attenuation model of the catheter. The overall tracking accuracy is lower when using binary catheter models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. a Cache Design Method for Spatial Information Visualization in 3d Real-Time Rendering Engine

    NASA Astrophysics Data System (ADS)

    Dai, X.; Xiong, H.; Zheng, X.

    2012-07-01

    A well-designed cache system has a positive impact on a 3D real-time rendering engine, and as the amount of visualization data grows the effect becomes more pronounced. The caches are what allow the engine to browse smoothly through data that lives outside core memory or comes from the internet. In this article, a new kind of cache based on multiple threads and large files is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache, and the elimination cache. The rendering cache stores the data currently being rendered by the engine; data dispatched according to the position of the viewpoint in the horizontal and vertical directions is stored in the pre-rendering cache; data evicted from the previous caches is stored in the elimination cache before being written to the disk cache. Multiple large files are used in the disk cache. When a disk cache file reaches the size limit (128 MB was the maximum in the experiment), no items are eliminated from that file; instead a new large cache file is created. If the number of large files exceeds a pre-set maximum, the earliest file is deleted from the disk. In this way only one file is open for writing and reading while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multi-threading is used to update the cache data: threads load data into the rendering cache as soon as possible for rendering, load data into the pre-rendering cache for the next few frames, and move data that is not needed for the moment into the elimination cache. In our experiment, two threads are designed. The first thread organizes the memory cache according to the viewpoint and maintains two lists, the adding list and the deleting list; the adding list indexes the data that should be
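
    The three-tier memory cache and large-file disk cache described above can be sketched roughly as follows. This is a simplified, single-threaded illustration built on assumed details (tile keys, byte-string payloads, an eight-file cap); the engine's actual data structures, threading, and read-back indexing are not fully specified in the truncated abstract.

      import os
      from collections import OrderedDict

      MAX_FILE_BYTES = 128 * 1024 * 1024   # per the abstract: ~128 MB large files
      MAX_FILES = 8                        # assumed cap on on-disk large files

      class TieredTileCache:
          """Rendering / pre-rendering / elimination caches backed by large files."""

          def __init__(self, cache_dir, mem_items=1024):
              self.rendering = OrderedDict()      # tiles currently being rendered
              self.prerendering = OrderedDict()   # tiles predicted from the viewpoint
              self.elimination = OrderedDict()    # tiles waiting to be written to disk
              self.mem_items = mem_items
              self.cache_dir = cache_dir
              os.makedirs(cache_dir, exist_ok=True)
              self.file_index = 0
              self.current = open(self._path(self.file_index), "wb")

          def _path(self, i):
              return os.path.join(self.cache_dir, f"cache_{i:04d}.bin")

          def put_prerender(self, key, data):
              """Insert a tile fetched ahead of time; spill the oldest tiles if full."""
              self.prerendering[key] = data
              while len(self.prerendering) > self.mem_items:
                  old_key, old_data = self.prerendering.popitem(last=False)
                  self.elimination[old_key] = old_data
              self._flush_elimination()

          def promote(self, key):
              """Move a tile into the rendering cache when it becomes visible."""
              if key in self.prerendering:
                  self.rendering[key] = self.prerendering.pop(key)

          def _flush_elimination(self):
              """Append eliminated tiles to the current large file (write-once)."""
              for key, data in self.elimination.items():
                  if self.current.tell() + len(data) > MAX_FILE_BYTES:
                      self.current.close()
                      self.file_index += 1
                      if self.file_index >= MAX_FILES:   # delete the earliest file
                          os.remove(self._path(self.file_index - MAX_FILES))
                      self.current = open(self._path(self.file_index), "wb")
                  self.current.write(data)
              self.elimination.clear()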

  12. Real-time markerless tracking for augmented reality: the virtual visual servoing framework.

    PubMed

    Comport, Andrew I; Marchand, Eric; Pressigout, Muriel; Chaumette, François

    2006-01-01

    Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curves interaction matrices is given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
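
    The robustness mechanism described above, an M-estimator folded into the control law through iteratively reweighted least squares, can be illustrated generically: the sketch below solves a robust linear least-squares problem with Tukey biweight weights. The interaction matrices and visual-servoing control law of the paper are not reproduced, and the tuning constant and iteration count are assumptions.

      import numpy as np

      def tukey_weights(residuals, c=4.6851):
          """Tukey biweight: zero weight for |r| > c, so outliers are rejected."""
          scale = 1.4826 * np.median(np.abs(residuals)) + 1e-12  # robust MAD scale
          r = residuals / scale
          w = (1.0 - (r / c) ** 2) ** 2
          w[np.abs(r) > c] = 0.0
          return w

      def irls(A, b, n_iter=10):
          """Iteratively reweighted least squares for A x ~= b with an M-estimator."""
          x = np.linalg.lstsq(A, b, rcond=None)[0]
          for _ in range(n_iter):
              r = A @ x - b
              w = tukey_weights(r)
              sw = np.sqrt(w)
              x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
          return x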

  13. Discovery of a biofilm electrocline using real-time 3D metabolite analysis.

    PubMed

    Koley, Dipankar; Ramsey, Matthew M; Bard, Allen J; Whiteley, Marvin

    2011-12-13

    Bacteria are social organisms that possess multiple pathways for sensing and responding to small molecules produced by other microbes. Most bacteria in nature exist in sessile communities called biofilms, and the ability of biofilm bacteria to sense and respond to small molecule signals and cues produced by neighboring biofilm bacteria is particularly important. To understand microbial interactions between biofilms, it is necessary to perform rapid, real-time spatial quantification of small molecules in microenvironments immediately surrounding biofilms; however, such measurements have been elusive. In this study, scanning electrochemical microscopy was used to quantify small molecules surrounding a biofilm in 3D space. Measuring concentrations of the redox-active signaling molecule pyocyanin (PYO) produced by biofilms of the bacterium Pseudomonas aeruginosa revealed a high concentration of PYO that is actively maintained in the reduced state proximal to the biofilm. This gradient results in a reduced layer of PYO that we have termed the PYO "electrocline," a gradient of redox potential, which extends several hundred microns from the biofilm surface. We also demonstrate that the PYO electrocline is formed under electron acceptor-limiting conditions, and that growth conditions favoring formation of the PYO electrocline correlate to an increase in soluble iron. Additionally, we have taken a "reactive image" of a biofilm surface, demonstrating the rate of bacterial redox activity across a 2D surface. These studies establish methodology for spatially coordinated concentration and redox status measurements of microbe-produced small molecules and provide exciting insights into the roles these molecules play in microbial competition and nutrient acquisition.

  14. Discovery of a biofilm electrocline using real-time 3D metabolite analysis

    PubMed Central

    Koley, Dipankar; Ramsey, Matthew M.; Bard, Allen J.; Whiteley, Marvin

    2011-01-01

    Bacteria are social organisms that possess multiple pathways for sensing and responding to small molecules produced by other microbes. Most bacteria in nature exist in sessile communities called biofilms, and the ability of biofilm bacteria to sense and respond to small molecule signals and cues produced by neighboring biofilm bacteria is particularly important. To understand microbial interactions between biofilms, it is necessary to perform rapid, real-time spatial quantification of small molecules in microenvironments immediately surrounding biofilms; however, such measurements have been elusive. In this study, scanning electrochemical microscopy was used to quantify small molecules surrounding a biofilm in 3D space. Measuring concentrations of the redox-active signaling molecule pyocyanin (PYO) produced by biofilms of the bacterium Pseudomonas aeruginosa revealed a high concentration of PYO that is actively maintained in the reduced state proximal to the biofilm. This gradient results in a reduced layer of PYO that we have termed the PYO “electrocline,” a gradient of redox potential, which extends several hundred microns from the biofilm surface. We also demonstrate that the PYO electrocline is formed under electron acceptor-limiting conditions, and that growth conditions favoring formation of the PYO electrocline correlate to an increase in soluble iron. Additionally, we have taken a “reactive image” of a biofilm surface, demonstrating the rate of bacterial redox activity across a 2D surface. These studies establish methodology for spatially coordinated concentration and redox status measurements of microbe-produced small molecules and provide exciting insights into the roles these molecules play in microbial competition and nutrient acquisition. PMID:22123963

  15. A smart homecage system with 3D tracking for long-term behavioral experiments.

    PubMed

    Byunghun Lee; Kiani, Mehdi; Ghovanloo, Maysam

    2014-01-01

    A wirelessly-powered homecage system, called the EnerCage-HC, that is equipped with multi-coil wireless power transfer, closed-loop power control, optical behavioral tracking, and a graphic user interface (GUI) is presented for long-term electrophysiology experiments. The EnerCage-HC system can wirelessly power a mobile unit attached to a small animal subject and also track its behavior in real-time as it is housed inside a standard homecage. The EnerCage-HC system is equipped with one central and four overlapping slanted wire-wound coils (WWCs) with optimal geometries to form 3-and 4-coil power transmission links while operating at 13.56 MHz. Utilizing multi-coil links increases the power transfer efficiency (PTE) compared to conventional 2-coil links and also reduces the number of power amplifiers (PAs) to only one, which significantly reduces the system complexity, cost, and dissipated heat. A Microsoft Kinect installed 90 cm above the homecage localizes the animal position and orientation with 1.6 cm accuracy. An in vivo experiment was conducted on a freely behaving rat by continuously delivering 24 mW to the mobile unit for > 7 hours inside a standard homecage.

  16. Evaluation and comparison of current biopsy needle localization and tracking methods using 3D ultrasound.

    PubMed

    Zhao, Yue; Shen, Yi; Bernard, Adeline; Cachard, Christian; Liebgott, Hervé

    2017-01-01

    This article compares four different biopsy needle localization algorithms in both 3D and 4D situations to evaluate their accuracy and execution time. The localization algorithms were: principal component analysis (PCA), random Hough transform (RHT), parallel integral projection (PIP) and ROI-RK (ROI-based RANSAC and Kalman filter). To enhance the contrast between the biopsy needle and background tissue, a line filtering pre-processing step was implemented. To make the PCA, RHT and PIP algorithms comparable with the ROI-RK method, a region of interest (ROI) strategy was added. Simulated and ex vivo data were used to evaluate the performance of the different biopsy needle localization algorithms. The resolutions of the sectorial and cylindrical volumes were 0.3 mm × 0.4 mm × 0.6 mm and 0.1 mm × 0.1 mm × 0.2 mm (axial × lateral × azimuthal), respectively. As the simulation and experimental results show, the ROI-RK method successfully located and tracked the biopsy needle in both 3D and 4D situations. The tip localization error was within 1.5 mm and the axis accuracy was within 1.6 mm. To the best of our knowledge, considering both localization accuracy and execution time, the ROI-RK was the most stable and time-saving method. Normally, accuracy comes at the expense of time. However, the ROI-RK method was able to locate the biopsy needle with high accuracy in real time, which makes it a promising method for clinical applications.
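
    As a generic illustration of the RANSAC component underlying the ROI-RK approach, the sketch below fits a 3D line (a needle-axis candidate) to a point cloud by random sampling and inlier counting. The threshold, iteration count, and the ROI and Kalman-filter stages are not taken from the paper; they are assumptions for illustration only.

      import numpy as np

      def ransac_line_3d(points, n_iter=500, inlier_tol=0.5, rng=None):
          """Fit a 3D line to an (N, 3) point cloud with RANSAC.

          Returns (point_on_line, unit_direction, inlier_mask). inlier_tol is
          the maximum point-to-line distance in the units of the points (mm).
          """
          rng = np.random.default_rng(rng)
          points = np.asarray(points, dtype=float)
          best_inliers = np.zeros(len(points), dtype=bool)
          best_model = None
          for _ in range(n_iter):
              p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
              d = p2 - p1
              norm = np.linalg.norm(d)
              if norm < 1e-9:
                  continue
              d = d / norm
              # Distance of every point to the candidate line through p1 along d
              diff = points - p1
              dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
              inliers = dist < inlier_tol
              if inliers.sum() > best_inliers.sum():
                  best_inliers, best_model = inliers, (p1, d)
          return best_model[0], best_model[1], best_inliers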

  17. Automated 3D motion tracking using Gabor filter bank, robust point matching, and deformable models.

    PubMed

    Chen, Ting; Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2010-01-01

    Tagged magnetic resonance imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the robust point matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of 1) through-plane motion and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the moving least square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method
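
    The Gabor filter bank used in the first step can be sketched as a set of oriented, wavelength-tuned kernels whose responses peak along the tag lines and at their intersections. The parameter values below (wavelength matched to an assumed tag spacing, four orientations) are illustrative, not those used in the paper.

      import numpy as np

      def gabor_kernel(wavelength, theta, sigma, size):
          """Real-valued 2D Gabor kernel with orientation theta (radians)."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          x_t = x * np.cos(theta) + y * np.sin(theta)
          y_t = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(x_t ** 2 + y_t ** 2) / (2.0 * sigma ** 2))
          carrier = np.cos(2.0 * np.pi * x_t / wavelength)
          return envelope * carrier

      def gabor_bank(wavelength=6.0, n_orientations=4, sigma=3.0, size=21):
          """Bank of oriented kernels; for tagged MRI the wavelength would be
          matched to the tag spacing in pixels (6 px here is an assumed value)."""
          thetas = np.arange(n_orientations) * np.pi / n_orientations
          return [gabor_kernel(wavelength, t, sigma, size) for t in thetas]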

  18. Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models

    PubMed Central

    Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method

  19. Tracking Human Faces in Real-Time,

    DTIC Science & Technology

    1995-11-01

    human-computer interactive applications such as lip-reading and gaze tracking. The principle used in developing this system can be extended to other tracking problems such as tracking the human hand for gesture recognition.

  20. 3-D geometry calibration and markerless electromagnetic tracking with a mobile C-arm

    NASA Astrophysics Data System (ADS)

    Cheryauka, Arvi; Barrett, Johnny; Wang, Zhonghua; Litvin, Andrew; Hamadeh, Ali; Beaudet, Daniel

    2007-03-01

    The design of mobile X-ray C-arm equipment with image tomography and surgical guidance capabilities involves the retrieval of repeatable gantry positioning in three-dimensional space. Geometry misrepresentations can cause degradation of the reconstruction results with the appearance of blurred edges, image artifacts, and even false structures. It may also amplify surgical instrument tracking errors leading to improper implant placement. In our prior publications we have proposed a C-arm 3D positioner calibration method comprising separate intrinsic and extrinsic geometry calibration steps. Following this approach, in the present paper, we extend the intrinsic geometry calibration of C-gantry beyond angular positions in the orbital plane into angular positions on a unit sphere of isocentric rotation. Our method makes deployment of markerless interventional tool guidance with use of high-resolution fluoro images and electromagnetic tracking feasible at any angular position of the tube-detector assembly. Variations of the intrinsic parameters associated with C-arm motion are measured off-line as functions of orbital and lateral angles. The proposed calibration procedure provides better accuracy, and prevents unnecessary workflow steps for surgical navigation applications. With a slight modification, the Misalignment phantom, a tool for intrinsic geometry calibration, is also utilized to obtain an accurate 'image-to-sensor' mapping. We show simulation results, image quality and navigation accuracy estimates, and feasibility data acquired with the prototype system. The experimental results show the potential of high-resolution CT imaging (voxel size below 0.5 mm) and confident navigation in an interventional surgery setting with a mobile C-arm.

  1. Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor

    NASA Astrophysics Data System (ADS)

    Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.

    2017-05-01

    This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
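
    For reference, the Wiener process acceleration (WPA) model assumed for the image-plane motion has, per axis, the standard discrete-time transition and process-noise matrices shown below, with the state being position, velocity, and acceleration. This follows the usual textbook form; the sampling interval and noise spectral density are placeholders, and the multi-Bernoulli filter itself is not reproduced.

      import numpy as np

      def wpa_matrices(T, q):
          """Discrete-time Wiener-process-acceleration model for one axis.

          State x = [position, velocity, acceleration]; q is the spectral
          density of the continuous white-noise jerk driving the acceleration.
          """
          F = np.array([[1.0, T, 0.5 * T**2],
                        [0.0, 1.0, T],
                        [0.0, 0.0, 1.0]])
          Q = q * np.array([[T**5 / 20.0, T**4 / 8.0, T**3 / 6.0],
                            [T**4 / 8.0,  T**3 / 3.0, T**2 / 2.0],
                            [T**3 / 6.0,  T**2 / 2.0, T]])
          return F, Q

      # Example: assumed 100 Hz frame rate in the FPA and unit jerk spectral density
      F, Q = wpa_matrices(T=0.01, q=1.0)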

  2. Real-Time Climate Simulations in the Interactive 3D Game Universe Sandbox ²

    NASA Astrophysics Data System (ADS)

    Goldenson, N. L.

    2014-12-01

    Exploration in an open-ended computer game is an engaging way to explore climate and climate change. Everyone can explore physical models with real-time visualization in the educational simulator Universe Sandbox ² (universesandbox.com/2), which includes basic climate simulations on planets. I have implemented a time-dependent, one-dimensional meridional heat transport energy balance model to run and be adjustable in real time in the midst of a larger simulated system. Universe Sandbox ² is based on the original game - at its core a gravity simulator - with other new physically-based content for stellar evolution, and handling collisions between bodies. Existing users are mostly science enthusiasts in informal settings. We believe that this is the first climate simulation to be implemented in a professionally developed computer game with modern 3D graphical output in real time. The type of simple climate model we've adopted helps us depict the seasonal cycle and the more drastic changes that come from changing the orbit or other external forcings. Users can alter the climate as the simulation is running by altering the star(s) in the simulation, dragging to change orbits and obliquity, adjusting the climate simulation parameters directly or changing other properties like CO2 concentration that affect the model parameters in representative ways. Ongoing visuals of the expansion and contraction of sea ice and snow-cover respond to the temperature calculations, and make it accessible to explore a variety of scenarios and intuitive to understand the output. Variables like temperature can also be graphed in real time. We balance computational constraints with the ability to capture the physical phenomena we wish to visualize, giving everyone access to a simple open-ended meridional energy balance climate simulation to explore and experiment with. The software lends itself to labs at a variety of levels about climate concepts including seasons, the Greenhouse effect
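
    A minimal example of the kind of one-dimensional meridional energy-balance model described above is sketched below: a Budyko-Sellers-type model in x = sin(latitude) with temperature-dependent ice albedo and diffusive heat transport. The parameter values are conventional textbook choices, not those used in Universe Sandbox ², and the seasonal cycle and CO2 dependence mentioned in the abstract are omitted.

      import numpy as np

      def run_ebm(n=60, years=30.0, dt=86400.0,
                  Q0=340.0, A=203.3, B=2.09, D=0.6, C=4.0e8):
          """Time-dependent 1D energy balance model:
          C dT/dt = S(x)(1 - albedo) - (A + B*T) + D d/dx[(1 - x^2) dT/dx].

          n is the number of latitude bands (cell centres in x = sin(latitude));
          A, B are outgoing-longwave coefficients, D is the diffusivity, and C
          is an effective mixed-layer heat capacity in J m^-2 K^-1.
          Returns (latitude in degrees, temperature in deg C).
          """
          dx = 2.0 / n
          x = np.linspace(-1.0 + dx / 2, 1.0 - dx / 2, n)       # cell centres
          xe = np.linspace(-1.0, 1.0, n + 1)                    # cell edges
          S = Q0 * (1.0 - 0.482 * 0.5 * (3.0 * x**2 - 1.0))     # annual-mean insolation
          T = 15.0 - 30.0 * x**2                                # initial guess, deg C

          steps = int(years * 365 * 86400 / dt)
          for _ in range(steps):
              albedo = np.where(T > -10.0, 0.30, 0.62)          # ice-albedo switch
              # Diffusive transport with zero flux at the poles (1 - x^2 -> 0)
              flux = np.zeros(n + 1)
              flux[1:-1] = (1.0 - xe[1:-1] ** 2) * np.diff(T) / dx
              transport = D * np.diff(flux) / dx
              dTdt = (S * (1.0 - albedo) - (A + B * T) + transport) / C
              T = T + dt * dTdt
          return np.degrees(np.arcsin(x)), T

      lat, T = run_ebm()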

  3. Model Estimation and Selection towards Unconstrained Real-Time Tracking and Mapping.

    PubMed

    Gauglitz, Steffen; Sweeney, Chris; Ventura, Jonathan; Turk, Matthew; Höllerer, Tobias

    2014-06-01

    We present an approach and prototype implementation to initialization-free real-time tracking and mapping that supports any type of camera motion in 3D environments, that is, parallax-inducing as well as rotation-only motions. Our approach effectively behaves like a keyframe-based Simultaneous Localization and Mapping system or a panorama tracking and mapping system, depending on the camera movement. It seamlessly switches between the two modes and is thus able to track and map through arbitrary sequences of parallax-inducing and rotation-only camera movements. The system integrates both model-based and model-free tracking, automatically choosing between the two depending on the situation, and subsequently uses the "Geometric Robust Information Criterion" to decide whether the current camera motion can best be represented as a parallax-inducing motion or a rotation-only motion. It continues to collect and map data after tracking failure by creating separate tracks which are later merged if they are found to overlap. This is in contrast to most existing tracking and mapping systems, which suspend tracking and mapping and thus discard valuable data until relocalization with respect to the initial map is successful. We tested our prototype implementation on a variety of video sequences, successfully tracking through different camera motions and fully automatically building combinations of panoramas and 3D structure.

  4. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  5. Twin-beams digital holography for 3D tracking and quantitative phase-contrast microscopy in microfluidics.

    PubMed

    Memmolo, Pasquale; Finizio, Andrea; Paturzo, Melania; Miccio, Lisa; Ferraro, Pietro

    2011-12-05

    We report on a compact twin-beam interferometer that can be adopted as a flexible diagnostic tool in microfluidic platforms with twofold functionality. The novel configuration allows 3D tracking of micro-particles and can simultaneously furnish quantitative phase-contrast maps of the tracked micro-objects by interference microscopy, without changing the configuration. Experimental demonstration is given for in vitro cells in a microfluidic environment.

  6. Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events

    NASA Astrophysics Data System (ADS)

    Javidi, Bahram; Yeom, Seokwon; Moon, Inkyu; Daneshpanah, Mehdi

    2006-05-01

    In this paper, we present an overview of three-dimensional (3D) optical imaging techniques for real-time automated sensing, visualization, and recognition of dynamic biological microorganisms. Real-time sensing and 3D reconstruction of the dynamic biological microscopic objects can be performed by single-exposure on-line (SEOL) digital holographic microscopy. A coherent 3D microscope-based interferometer is constructed to record digital holograms of dynamic micro-biological events. Complex amplitude 3D images of the biological microorganisms are computationally reconstructed at different depths by digital signal processing. Bayesian segmentation algorithms are applied to identify regions of interest for further processing. A number of pattern recognition approaches are addressed to identify and recognize the microorganisms. One uses the 3D morphology of the microorganisms by analyzing 3D geometrical shapes composed of magnitude and phase. Segmentation, feature extraction, graph matching, feature selection, and training and decision rules are used to recognize the biological microorganisms. In a different approach, a 3D technique is used that is tolerant to the varying shapes of the non-rigid biological microorganisms. After segmentation, a number of sampling patches are arbitrarily extracted from the complex amplitudes of the reconstructed 3D biological microorganism. These patches are processed using a number of cost functions and statistical inference theory for the equality of means and equality of variances between the sampling segments. Also, we discuss the possibility of employing computational integral imaging for 3D sensing, visualization, and recognition of biological microorganisms illuminated under incoherent light. Experimental results with several biological microorganisms are presented to illustrate detection, segmentation, and identification of micro-biological events.

  7. Real-Time Feature Tracking Using Homography

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel S.; Cheng, Yang; Ansar, Adnan I.; Trotz, David C.; Padgett, Curtis W.

    2010-01-01

    This software finds feature point correspondences in sequences of images. It is designed for feature matching in aerial imagery. Feature matching is a fundamental step in a number of important image processing operations: calibrating the cameras in a camera array, stabilizing images in aerial movies, geo-registration of images, and generating high-fidelity surface maps from aerial movies. The method uses a Shi-Tomasi corner detector and normalized cross-correlation. This process is likely to result in the production of some mismatches. The feature set is cleaned up using the assumption that there is a large planar patch visible in both images. At high altitude, this assumption is often reasonable. A mathematical transformation, called an homography, is developed that allows us to predict the position in image 2 of any point on the plane in image 1. Any feature pair that is inconsistent with the homography is thrown out. The output of the process is a set of feature pairs, and the homography. The algorithms in this innovation are well known, but the new implementation improves the process in several ways. It runs in real-time at 2 Hz on 64-megapixel imagery. The new Shi-Tomasi corner detector tries to produce the requested number of features by automatically adjusting the minimum distance between found features. The homography-finding code now uses an implementation of the RANSAC algorithm that adjusts the number of iterations automatically to achieve a pre-set probability of missing a set of inliers. The new interface allows the caller to pass in a set of predetermined points in one of the images. This allows the ability to track the same set of points through multiple frames.
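
    The pipeline described above, Shi-Tomasi corners, normalized cross-correlation matching, and a RANSAC homography to reject mismatches, maps closely onto standard OpenCV calls. The sketch below is a generic illustration with assumed parameter values (patch and search sizes, correlation threshold), not the flight implementation.

      import cv2
      import numpy as np

      def match_features_homography(img1, img2, n_corners=400, patch=15, search=40):
          """Shi-Tomasi corners in img1, NCC matching in img2, RANSAC homography.

          img1, img2 : single-channel (grayscale) images of the same size.
          """
          corners = cv2.goodFeaturesToTrack(img1, maxCorners=n_corners,
                                            qualityLevel=0.01, minDistance=10)
          h, w = img1.shape
          src, dst = [], []
          r, s = patch // 2, search // 2
          for c in corners.reshape(-1, 2):
              x, y = int(c[0]), int(c[1])
              if x - r < 0 or y - r < 0 or x + r + 1 > w or y + r + 1 > h:
                  continue
              tmpl = img1[y - r:y + r + 1, x - r:x + r + 1]
              x0, y0 = max(x - s, 0), max(y - s, 0)
              window = img2[y0:y0 + search + patch, x0:x0 + search + patch]
              if window.shape[0] <= patch or window.shape[1] <= patch:
                  continue
              ncc = cv2.matchTemplate(window, tmpl, cv2.TM_CCOEFF_NORMED)
              _, score, _, loc = cv2.minMaxLoc(ncc)
              if score < 0.8:                     # assumed correlation threshold
                  continue
              src.append([x, y])
              dst.append([x0 + loc[0] + r, y0 + loc[1] + r])
          src, dst = np.float32(src), np.float32(dst)
          # RANSAC homography rejects pairs inconsistent with the dominant plane
          H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          return H, src[inliers.ravel() == 1], dst[inliers.ravel() == 1]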

  8. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    PubMed

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

    For a direct-detection 3D imaging lidar, the use of a Geiger-mode avalanche photodiode (Gm-APD) can greatly enhance the detection sensitivity of the system, since each range measurement requires only a single detected photon. Furthermore, Gm-APD offers significant advantages in reducing the size, mass, power, and complexity of the system. However, the inevitable noise, including background noise and dark-count noise, remains a significant challenge to obtaining a clear 3D image of the target of interest. This paper presents a smart strategy that filters out false alarms during acquisition of the raw time-of-flight (TOF) data and obtains a clear 3D image in real time. As a result, a clear 3D image was obtained with the experimental system despite the background noise of a sunny day.

  9. Mapping 3D Strains with Ultrasound Speckle Tracking: Method Validation and Initial Results in Porcine Scleral Inflation.

    PubMed

    Cruz Perez, Benjamin; Pavlatos, Elias; Morris, Hugh J; Chen, Hong; Pan, Xueliang; Hart, Richard T; Liu, Jun

    2016-07-01

    This study aimed to develop and validate a high frequency ultrasound method for measuring distributive, 3D strains in the sclera during elevations of intraocular pressure. A 3D cross-correlation based speckle-tracking algorithm was implemented to compute the 3D displacement vector and strain tensor at each tracking point. Simulated ultrasound radiofrequency data from a sclera-like structure at undeformed and deformed states with known strains were used to evaluate the accuracy and signal-to-noise ratio (SNR) of strain estimation. An experimental high frequency ultrasound (55 MHz) system was built to acquire 3D scans of porcine eyes inflated from 15 to 17 and then 19 mmHg. Simulations confirmed good strain estimation accuracy and SNR (e.g., the axial strains had less than 4.5% error with SNRs greater than 16.5 for strains from 0.005 to 0.05). Experimental data in porcine eyes showed increasing tensile, compressive, and shear strains in the posterior sclera during inflation, with a volume ratio close to one suggesting near-incompressibility. This study established the feasibility of using high frequency ultrasound speckle tracking for measuring 3D tissue strains and its potential to characterize physiological deformations in the posterior eye.
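
    The core of the speckle-tracking step, cross-correlating a small kernel from the reference volume against a search region in the deformed volume, can be illustrated as follows. The kernel and search sizes are assumptions, and the sub-voxel refinement and strain-tensor computation from the paper are omitted.

      import numpy as np

      def ncc_3d_displacement(ref, deformed, center, kernel=9, search=5):
          """Estimate the integer-voxel displacement of a kernel centred at `center`.

          ref, deformed : 3D arrays;  kernel : odd kernel edge length in voxels;
          search : maximum displacement tested along each axis. The centre must
          lie far enough from the volume borders for all candidate blocks to fit.
          """
          k = kernel // 2
          z, y, x = center
          block = ref[z - k:z + k + 1, y - k:y + k + 1, x - k:x + k + 1]
          block = (block - block.mean()) / (block.std() + 1e-12)
          best, best_disp = -np.inf, (0, 0, 0)
          for dz in range(-search, search + 1):
              for dy in range(-search, search + 1):
                  for dx in range(-search, search + 1):
                      cand = deformed[z + dz - k:z + dz + k + 1,
                                      y + dy - k:y + dy + k + 1,
                                      x + dx - k:x + dx + k + 1]
                      cand = (cand - cand.mean()) / (cand.std() + 1e-12)
                      ncc = np.mean(block * cand)   # normalized cross-correlation
                      if ncc > best:
                          best, best_disp = ncc, (dz, dy, dx)
          return best_disp, best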

  10. Tracking of cracks in bridges using GPR: a 3D approach

    NASA Astrophysics Data System (ADS)

    Benedetto, A.

    2012-04-01

    Corrosion of reinforcing bars is the most significant contributor to bridge deficiencies. The corrosion is usually caused by moisture and chloride ion exposure and forms FeO, Fe2O3, Fe3O4 and other oxides along the reinforcement bars. The reinforcing bars attacked by corrosion yield expansive corrosion products: these oxidation products occupy a larger volume than the original intact steel, and the internal expansive stresses lead to cracking and debonding. Conventional inspection methods for detecting reinforcing bar corrosion exist, but they can be invasive and destructive, are often laborious, require lane closures, and make any quantification of corrosion difficult or unreliable. For these reasons, bridge engineers increasingly prefer the Ground Penetrating Radar (GPR) technique. In this work a novel numerical approach for three-dimensional tracking and mapping of cracks in a bridge is proposed. The work starts from some interesting results based on the use of 3D imaging to improve the ability of GPR to detect voids, cracks or buried objects. The numerical approach has been tested on data acquired on several bridges using a pulse GPR system specifically designed for bridge deck and pavement inspection, called RIS Hi Bright. The equipment integrates two arrays of ultra-wideband ground-coupled antennas with a main working frequency of 2 GHz. The two arrays within the RIS Hi Bright use antennas with different polarizations: one array includes sensors polarized parallel to the scanning direction (VV array), the other has sensors in orthogonal polarization (HH array). Overall the system collects 16 profiles within a single scan (8 HH + 8 VV). The cracks, often associated with increased moisture and higher values of the dielectric constant, produce a non-negligible increase in signal amplitude. Following this, the algorithm

  11. Esophagogastric Junction pressure morphology: comparison between a station pull-through and real-time 3D-HRM representation

    PubMed Central

    Nicodème, Frédéric; Lin, Zhiyue; Pandolfino, John E.; Kahrilas, Peter J.

    2013-01-01

    BACKGROUND Esophagogastric junction (EGJ) competence is the fundamental defense against reflux making it of great clinical significance. However, characterizing EGJ competence with conventional manometric methodologies has been confounded by its anatomic and physiological complexity. Recent technological advances in miniaturization and electronics have led to the development of a novel device that may overcome these challenges. METHODS Nine volunteer subjects were studied with a novel 3D-HRM device providing 7.5 mm axial and 45° radial pressure resolution within the EGJ. Real-time measurements were made at rest and compared to simulations of a conventional pull-through made with the same device. Moreover, 3D-HRM recordings were analyzed to differentiate contributing pressure signals within the EGJ attributable to lower esophageal sphincter (LES), diaphragm, and vasculature. RESULTS 3D-HRM recordings suggested that sphincter length assessed by a pull-through method greatly exaggerated the estimate of LES length by failing to discriminate among circumferential contractile pressure and asymmetric extrinsic pressure signals attributable to diaphragmatic and vascular structures. Real-time 3D EGJ recordings found that the dominant constituents of EGJ pressure at rest were attributable to the diaphragm. CONCLUSIONS 3D-HRM permits real-time recording of EGJ pressure morphology facilitating analysis of the EGJ constituents responsible for its function as a reflux barrier making it a promising tool in the study of GERD pathophysiology. The enhanced axial and radial recording resolution of the device should facilitate further studies to explore perturbations in the physiological constituents of EGJ pressure in health and disease. PMID:23734788

  12. 3D surface real-time measurement using phase-shifted interference fringe technique for craniofacial identification

    NASA Astrophysics Data System (ADS)

    Levin, Gennady G.; Vishnyakov, Gennady N.; Naumov, Alexey V.; Abramov, Sergey

    1998-03-01

    We propose using real-time 3D surface profile measurement with a phase-shifted interference fringe projection technique for craniofacial identification. Our system measures the profile by projecting an interference fringe pattern onto the object surface and observing the deformed fringe pattern from a direction different from that of the projection. Fringes are formed by a Michelson interferometer with one mirror mounted on a piezoelectric translator. A four-step self-calibrating phase-shift method was used.
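
    For reference, the standard four-step phase-shifting calculation (phase steps of 0, pi/2, pi, 3pi/2 between successive fringe images) recovers the wrapped phase as shown below. Whether the authors' self-calibrating variant uses exactly these steps is not stated in the abstract, so treat this as the generic textbook form.

      import numpy as np

      def four_step_phase(i1, i2, i3, i4):
          """Wrapped phase from four fringe images with pi/2 phase steps.

          With I_k = A + B*cos(phi + (k - 1)*pi/2):
              I4 - I2 = 2*B*sin(phi),  I1 - I3 = 2*B*cos(phi)
          so phi = arctan2(I4 - I2, I1 - I3), wrapped to (-pi, pi]. The wrapped
          phase must still be unwrapped and converted to height through the
          triangulation geometry of the projector/camera setup.
          """
          return np.arctan2(i4.astype(float) - i2, i1.astype(float) - i3)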

  13. 3D real-time visualization of blood flow in cerebral aneurysms by light field particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart

    2016-04-01

    Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement has so far not been possible at a microscopic level. Such measurements would allow better individualized treatment planning and improve device design. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement routinely, to empirically assess a large number of anatomic shapes and the corresponding effects of different devices, would require a fast, reliable, low-cost method with high-throughput assessment. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of the structures has set the limit for conventional 3D-imaging camera set-ups. Online flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. from a light field camera capturing 3D information by plenoptic imaging of complex flow processes. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established using stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision in the submillimeter range, as required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences. Averaging across a sequence of

  14. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    SciTech Connect

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-07-15

    Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into account projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during the treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images from the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and can therefore be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at the 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not
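
    When the prior over the 3D tumor position is modeled as a Gaussian and the kV image fixes the two in-plane coordinates, maximizing the posterior over the unresolved depth reduces to the conditional Gaussian mean, as sketched below. This is a simplified stand-in for the authors' formulation, which builds the prior from setup projections and need not be Gaussian; the coordinate names and example numbers are assumptions.

      import numpy as np

      def map_depth(mu, cov, observed_xy):
          """MAP estimate of the unresolved coordinate z given measured (x, y).

          mu  : length-3 prior mean [x, y, z];  cov : 3x3 prior covariance.
          For a Gaussian prior and a likelihood that constrains only (x, y),
          the posterior over z is Gaussian with mean E[z | x, y].
          """
          mu = np.asarray(mu, float)
          cov = np.asarray(cov, float)
          cov_z_xy = cov[2, :2]             # cross-covariance of z with (x, y)
          cov_xy = cov[:2, :2]
          delta = np.asarray(observed_xy, float) - mu[:2]
          return mu[2] + cov_z_xy @ np.linalg.solve(cov_xy, delta)

      # Hypothetical example: correlated x-z motion (e.g. respiration-driven), mm
      mu = [0.0, 0.0, 0.0]
      cov = [[4.0, 0.0, 3.0],
             [0.0, 1.0, 0.0],
             [3.0, 0.0, 4.0]]
      print(map_depth(mu, cov, observed_xy=[2.0, 0.5]))   # -> 1.5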

  15. 3-D phantom and in vivo cardiac speckle tracking using a matrix array and raw echo data.

    PubMed

    Byram, Brett; Holley, Greg; Giannantonio, Doug; Trahey, Gregg

    2010-04-01

    Cardiac motion has been tracked using various methods, which vary in their invasiveness and dimensionality. One such noninvasive modality for cardiac motion tracking is ultrasound. Three-dimensional ultrasound motion tracking has been demonstrated using detected data at low volume rates. However, the effects of volume rate, kernel size, and data type (raw and detected) have not been sufficiently explored. First comparisons are made within the stated variables for 3-D speckle tracking. Volumetric data were obtained in a raw, baseband format using a matrix array attached to a high parallel receive beam count scanner. The scanner was used to acquire phantom and human in vivo cardiac volumetric data at 1000-Hz volume rates. Motion was tracked using phase-sensitive normalized cross-correlation. Subsample estimation in the lateral and elevational dimensions used the grid-slopes algorithm. The effects of frame rate, kernel size, and data type on 3-D tracking are shown. In general, the results show improvement of motion estimates at volume rates up to 200 Hz, above which they become stable. However, peak and pixel hopping continue to decrease at volume rates higher than 200 Hz. The tracking method and data show, qualitatively, good temporal and spatial stability (for independent kernels) at high volume rates.

  16. 3-D Phantom and In Vivo Cardiac Speckle Tracking Using a Matrix Array and Raw Echo Data

    PubMed Central

    Byram, Brett C.; Holley, Greg; Giannantonio, Doug M.; Trahey, Gregg E.

    2012-01-01

    Cardiac motion has been tracked using various methods, which vary in their invasiveness and dimensionality. One such noninvasive modality for cardiac motion tracking is ultrasound. Three-dimensional ultrasound motion tracking has been demonstrated using detected data at low volume rates. However, the effects of volume rate, kernel size, and data type (raw and detected) have not been sufficiently explored. First comparisons are made within the stated variables for 3-D speckle tracking. Volumetric data were obtained in a raw, baseband format using a matrix array attached to a high parallel receive beam count scanner. The scanner was used to acquire phantom and human in vivo cardiac volumetric data at 1000-Hz volume rates. Motion was tracked using phase-sensitive normalized cross-correlation. Subsample estimation in the lateral and elevational dimensions used the grid-slopes algorithm. The effects of frame rate, kernel size, and data type on 3-D tracking are shown. In general, the results show improvement of motion estimates at volume rates up to 200 Hz, above which they become stable. However, peak and pixel hopping continue to decrease at volume rates higher than 200 Hz. The tracking method and data show, qualitatively, good temporal and spatial stability (for independent kernels) at high volume rates. PMID:20378447

  17. Rapid, High-Throughput Tracking of Bacterial Motility in 3D via Phase-Contrast Holographic Video Microscopy

    PubMed Central

    Cheong, Fook Chiong; Wong, Chui Ching; Gao, YunFeng; Nai, Mui Hoon; Cui, Yidan; Park, Sungsu; Kenney, Linda J.; Lim, Chwee Teck

    2015-01-01

    Tracking fast-swimming bacteria in three dimensions can be extremely challenging with current optical techniques and a microscopic approach that can rapidly acquire volumetric information is required. Here, we introduce phase-contrast holographic video microscopy as a solution for the simultaneous tracking of multiple fast moving cells in three dimensions. This technique uses interference patterns formed between the scattered and the incident field to infer the three-dimensional (3D) position and size of bacteria. Using this optical approach, motility dynamics of multiple bacteria in three dimensions, such as speed and turn angles, can be obtained within minutes. We demonstrated the feasibility of this method by effectively tracking multiple bacteria species, including Escherichia coli, Agrobacterium tumefaciens, and Pseudomonas aeruginosa. In addition, we combined our fast 3D imaging technique with a microfluidic device to present an example of a drug/chemical assay to study effects on bacterial motility. PMID:25762336

  18. GPU-accelerated 3D mipmap for real-time visualization of ultrasound volume data.

    PubMed

    Kwon, Koojoo; Lee, Eun-Seok; Shin, Byeong-Seok

    2013-10-01

    Ultrasound volume rendering is an efficient method for visualizing the shape of fetuses in obstetrics and gynecology. However, in order to obtain high-quality ultrasound volume rendering, noise removal and coordinates conversion are essential prerequisites. Ultrasound data needs to undergo a noise filtering process; otherwise, artifacts and speckle noise cause quality degradation in the final images. Several two-dimensional (2D) noise filtering methods have been used to reduce this noise. However, these 2D filtering methods ignore relevant information in-between adjacent 2D-scanned images. Although three-dimensional (3D) noise filtering methods are used, they require more processing time than 2D-based methods. In addition, the sampling position in the ultrasonic volume rendering process has to be transformed between conical ultrasound coordinates and Cartesian coordinates. We propose a 3D-mipmap-based noise reduction method that uses graphics hardware, as a typical 3D mipmap requires less time to be generated and less storage capacity. In our method, we compare the density values of the corresponding points on consecutive mipmap levels and find the noise area using the difference in the density values. We also provide a noise detector for adaptively selecting the mipmap level using the difference of two mipmap levels. Our method can visualize 3D ultrasound data in real time with 3D noise filtering.
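
    The level-comparison idea described above can be illustrated with plain NumPy: build a coarser mipmap level by 2x average pooling, upsample it back to full resolution, and flag voxels whose density departs strongly from the local average as noise. The threshold and pooling scheme are assumptions; the GPU implementation and adaptive level selection of the paper are not reproduced.

      import numpy as np

      def downsample2(vol):
          """One mipmap level: 2x average pooling along each axis (dims must be even)."""
          z, y, x = vol.shape
          return vol.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

      def noise_mask(vol, threshold=40.0):
          """Flag voxels that differ strongly from the next-coarser mipmap level."""
          coarse = downsample2(vol.astype(float))
          upsampled = np.repeat(np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1),
                                2, axis=2)
          return np.abs(vol - upsampled) > threshold

      # Hypothetical use: suppress voxels flagged as speckle before rendering
      # filtered = np.where(noise_mask(volume), 0, volume)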

  19. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector, corner-eye center, and iso-center to improve pupil detection. The optical flow information is used for eye tracking. We develop a robust eye tracking system that integrates eye detection and optical-flow-based image tracking. In addition, we further incorporate the orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on public video sequences as well as videos acquired directly from a mobile phone.

  20. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often not able to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec.
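
    As a rough sanity check on the stated figures (assuming the 10 MVoxels/s value is the end-to-end filtering rate), the quoted volume corresponds to roughly 1.7 s of processing per volume:

        voxels_per_volume = 512 * 256 * 128               # 16,777,216 voxels (~16.8 MVoxels)
        rate = 10e6                                       # stated 10 MVoxels/s
        seconds_per_volume = voxels_per_volume / rate     # ~1.68 s per volume
        volumes_per_second = 1.0 / seconds_per_volume     # ~0.6 volumes/s
        print(f"{seconds_per_volume:.2f} s/volume, {volumes_per_second:.2f} volumes/s")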

  1. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 × 15 mm achieved diffraction-limited imaging over a lateral tracking range of ±2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
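
    A minimal sketch of the kind of pupil-centroid detection such a tracker relies on, assuming a cropped grayscale eye image and a dark-pupil threshold. Both the threshold value and the use of OpenCV 4 contour/moment calls are assumptions, not details from the paper.

        import cv2

        def pupil_centroid(gray_eye, dark_threshold=40):
            # Threshold the dark pupil, keep the largest blob, return its centroid
            # from image moments (OpenCV 4 return convention for findContours).
            _, mask = cv2.threshold(gray_eye, dark_threshold, 255, cv2.THRESH_BINARY_INV)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] == 0:
                return None
            return (m["m10"] / m["m00"], m["m01"] / m["m00"])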

  2. Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system.

    PubMed

    Tao, Tianyang; Chen, Qian; Da, Jian; Feng, Shijie; Hu, Yan; Zuo, Chao

    2016-09-05

    In recent years, fringe projection has become an established and essential method for dynamic three-dimensional (3-D) shape measurement in different fields such as online inspection and real-time quality control. Numerous high-speed 3-D shape measurement methods have been developed by either employing high-speed hardware, minimizing the number of pattern projections, or both. However, dynamic 3-D shape measurement of arbitrarily-shaped objects with full sensor resolution without the necessity of additional pattern projections is still a big challenge. In this work, we introduce a high-speed 3-D shape measurement technique based on composite phase-shifting fringes and a multi-view system. The geometry constraint is adopted to search the corresponding points independently without additional images. Meanwhile, by analysing the 3-D position and the main wrapped phase of the corresponding point, pairs with an incorrect 3-D position or a considerable phase difference are effectively rejected. All of the qualified corresponding points are then corrected, and the unique one as well as the related period order is selected through the embedded triangular wave. Finally, considering that some points can only be captured by one of the cameras due to occlusions, these points may have different fringe orders in the two views, so a left-right consistency check is employed to eliminate erroneous period orders in this case. Several experiments on both static and dynamic scenes are performed, verifying that our method can achieve a speed of 120 frames per second (fps) with 25-period fringe patterns for fast, dense, and accurate 3-D measurement.

  3. Real-time, high-accuracy 3D imaging and shape measurement.

    PubMed

    Nguyen, Hieu; Nguyen, Dung; Wang, Zhaoyang; Kieu, Hien; Le, Minh

    2015-01-01

    In spite of the recent advances in 3D shape measurement and geometry reconstruction, simultaneously achieving fast-speed and high-accuracy performance remains a big challenge in practice. In this paper, a 3D imaging and shape measurement system is presented to tackle such a challenge. The fringe-projection-profilometry-based system employs a number of advanced approaches, such as: composition of phase-shifted fringe patterns, externally triggered synchronization of system components, generalized system setup, ultrafast phase-unwrapping algorithm, flexible system calibration method, robust gamma correction scheme, multithread computation and processing, and graphics-processing-unit-based image display. Experiments have shown that the proposed system can acquire and display high-quality 3D reconstructed images and/or video stream at a speed of 45 frames per second with relative accuracy of 0.04% or at a reduced speed of 22.5 frames per second with enhanced accuracy of 0.01%. The 3D imaging and shape measurement system shows great promise of satisfying the ever-increasing demands of scientific and engineering applications.

  4. Touring Mars Online, Real-time, in 3D for Math and Science Educators and Students

    ERIC Educational Resources Information Center

    Jones, Greg; Kalinowski, Kevin

    2007-01-01

    This article discusses a project that placed over 97% of Mars' topography made available from NASA into an interactive 3D multi-user online learning environment beginning in 2003. In 2005 curriculum materials that were created to support middle school math and science education were developed. Research conducted at the University of North Texas…

  6. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction

    Treesearch

    Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with the UAV and computationally demanding post-flight image processing. Insufficient feature overlap across images is a common shortcoming in post-flight image...

  7. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose

    NASA Astrophysics Data System (ADS)

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-01

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that using our hybrid approach, a root-mean-square tracking error smaller than 1.2 mm or 1.5 mm can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required with the proposed method.
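
    The geometric core of the position estimate, the point on the MV projection line closest to the first-principal-component line segment, can be written down directly. The sketch below assumes 3-vectors in a common room coordinate system and is not the authors' code.

        import numpy as np

        def likely_position(source, direction, seg_a, seg_b):
            # Point on the line p(t) = source + t*direction that is closest to the
            # segment [seg_a, seg_b] (the principal-component segment).
            s = np.asarray(source, float)
            d = np.asarray(direction, float)
            p0 = np.asarray(seg_a, float)
            e = np.asarray(seg_b, float) - p0
            r = s - p0
            a, b, c = d @ d, d @ e, e @ e
            f, g = d @ r, e @ r
            denom = a * c - b * b
            u = 0.0 if abs(denom) < 1e-12 else (a * g - b * f) / denom
            u = min(max(u, 0.0), 1.0)          # clamp to the segment
            t = (b * u - f) / a                # closest parameter on the projection line
            return s + t * d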

  8. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose.

    PubMed

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-21

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that using our hybrid approach, a root-mean-square tracking error smaller than 1.2 mm or 1.5 mm can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required.

  9. Real-time 3D image reconstruction guidance in liver resection surgery.

    PubMed

    Soler, Luc; Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-04-01

    Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. From January 2009 to June 2013, 769 clinical cases have been modeled by the Visible Patient service. Moreover, three clinical validations have been realized, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures have been realized, illustrating the potential clinical benefit of such assistance to gain safety, but also current limits that automatic augmented reality will overcome. Virtual patient modeling should be mandatory for certain interventions that have now to be defined, such as liver surgery. Augmented reality is clearly the next step of the new surgical instrumentation but remains currently limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in the new generation of automated augmented reality should solve this issue thanks to the development of Hybrid…

  10. Real-time 3D image reconstruction guidance in liver resection surgery

    PubMed Central

    Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-01-01

    Background Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. Methods From a patient’s medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon’s intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. Results From January 2009 to June 2013, 769 clinical cases have been modeled by the Visible Patient service. Moreover, three clinical validations have been realized, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures have been realized, illustrating the potential clinical benefit of such assistance to gain safety, but also current limits that automatic augmented reality will overcome. Conclusions Virtual patient modeling should be mandatory for certain interventions that have now to be defined, such as liver surgery. Augmented reality is clearly the next step of the new surgical instrumentation but remains currently limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in the new generation of automated augmented reality should solve this issue…

  11. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    NASA Astrophysics Data System (ADS)

    Peng, Hanchuan; Long, Fuhui

    Everyone understands seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruitfly brain. V3D can be easily extended using a simple-to-use and comprehensive plugin interface.

  12. 3D Ultrasonic Needle Tracking with a 1.5D Transducer Array for Guidance of Fetal Interventions

    PubMed Central

    West, Simeon J.; Mari, Jean-Martial; Ourselin, Sebastien; David, Anna L.; Desjardins, Adrien E.

    2016-01-01

    Ultrasound image guidance is widely used in minimally invasive procedures, including fetal surgery. In this context, maintaining visibility of medical devices is a significant challenge. Needles and catheters can readily deviate from the ultrasound imaging plane as they are inserted. When the medical device tips are not visible, they can damage critical structures, with potentially profound consequences including loss of pregnancy. In this study, we performed 3D ultrasonic tracking of a needle using a novel probe with a 1.5D array of transducer elements that was driven by a commercial ultrasound system. A fiber-optic hydrophone integrated into the needle received transmissions from the probe, and data from this sensor was processed to estimate the position of the hydrophone tip in the coordinate space of the probe. Golay coding was used to increase the signal-to-noise ratio (SNR). The relative tracking accuracy was better than 0.4 mm in all dimensions, as evaluated using a water phantom. To obtain a preliminary indication of the clinical potential of 3D ultrasonic needle tracking, an intravascular needle insertion was performed in an in vivo pregnant sheep model. The SNR values ranged from 12 to 16 at depths of 20 to 31 mm and at an insertion angle of 49° relative to the probe surface normal. The results of this study demonstrate that 3D ultrasonic needle tracking with a fiber-optic hydrophone sensor and a 1.5D array is feasible in clinically realistic environments. PMID:28111644

  13. Model-based lasso catheter tracking in monoplane fluoroscopy for 3D breathing motion compensation during EP procedures

    NASA Astrophysics Data System (ADS)

    Liao, Rui

    2010-02-01

    Radio-frequency catheter ablation (RFCA) of the pulmonary veins (PVs) attached to the left atrium (LA) is usually carried out under fluoroscopy guidance. Overlay of detailed anatomical structures via 3-D CT and/or MR volumes onto the fluoroscopy helps visualization and navigation in electrophysiology (EP) procedures. Unfortunately, respiratory motion may impair the utility of static overlay of the volume with fluoroscopy for catheter navigation. In this paper, we propose a B-spline based method for tracking the circumferential catheter (lasso catheter) in monoplane fluoroscopy. The tracked motion can be used for the estimation of the 3-D trajectory of breathing motion and for subsequent motion compensation. A lasso catheter is typically used during EP procedures and is pushed against the ostia of the PVs to be ablated. Hence this method does not require additional instruments, and achieves motion estimation right at the site of ablation. The performance of the proposed tracking algorithm was evaluated on 340 monoplane frames with an average error of 0.68 ± 0.36 mm. Our contributions in this work are twofold. First and foremost, we show how to design an effective, practical, and workflow-friendly 3-D motion compensation scheme for EP procedures in a monoplane setup. In addition, we develop an efficient and accurate method for model-based tracking of the circumferential lasso catheter in low-dose EP fluoroscopy.

  14. Using 3D Glyph Visualization to Explore Real-time Seismic Data on Immersive and High-resolution Display Systems

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Lindquist, K.; Kilb, D.; Newman, R.; Vernon, F.; Leigh, J.; Johnson, A.; Renambot, L.

    2003-12-01

    The study of time-dependent, three-dimensional natural phenomena like earthquakes can be enhanced with innovative and pertinent 3D computer graphics. Here we display seismic data as 3D glyphs (graphics primitives or symbols with various geometric and color attributes), allowing us to visualize the measured, time-dependent, 3D wave field from an earthquake recorded by a certain seismic network. In addition to providing a powerful state-of-health diagnostic of the seismic network, the graphical result presents an intuitive understanding of the real-time wave field that is hard to achieve with traditional 2D visualization methods. We have named these 3D icons 'seismoglyphs' to suggest visual objects built from three components of ground motion data (north-south, east-west, vertical) recorded by a seismic sensor. A seismoglyph changes color with time, spanning the spectrum, to indicate when the seismic amplitude is largest. The spatial extent of the glyph indicates the polarization of the wave field as it arrives at the recording station. We compose seismoglyphs using the real time ANZA broadband data (http://www.eqinfo.ucsd.edu) to understand the 3D behavior of a seismic wave field in Southern California. Fifteen seismoglyphs are drawn simultaneously with a 3D topography map of Southern California, as real time data is piped into the graphics software using the Antelope system. At each station location, the seismoglyph evolves with time and this graphical display allows a scientist to observe patterns and anomalies in the data. The display also provides visual clues to indicate wave arrivals and near-real-time earthquake detection. Future work will involve adding phase detections, network triggers and near real-time 2D surface shaking estimates. The visuals can be displayed in an immersive environment using the passive stereoscopic Geowall (http://www.geowall.org). The stereographic projection allows for a better understanding of attenuation due to distance and earth…

  15. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for the purpose of extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.

  16. Real-time 3D ultrasound fetal image enhancement techniques using motion-compensated frame rate up-conversion

    NASA Astrophysics Data System (ADS)

    Lee, Gun-Ill; Park, Rae-Hong; Song, Young-Seuk; Kim, Cheol-An; Hwang, Jae-Sub

    2003-05-01

    In this paper, we present a motion compensated frame rate up-conversion method for real-time three-dimensional (3-D) ultrasound fetal image enhancement. The conventional mechanical scan method with one-dimensional (1-D) array converters used for 3-D volume data acquisition has a slow frame rate of multi-planar images. This drawback is not an issue for stationary objects; however, in ultrasound images showing a fetus of more than about 25 weeks, we perceive abrupt changes due to fast motions. To compensate for this defect, we propose the frame rate up-conversion method by which new interpolated frames are inserted between two input frames, giving smooth renditions to human eyes. More natural motions can be obtained by frame rate up-conversion. In the proposed algorithm, we employ forward motion estimation (ME), in which motion vectors (MVs) are estimated using a block matching algorithm (BMA). To smooth MVs over neighboring blocks, vector median filtering is performed. Using these smoothed MVs, interpolated frames are reconstructed by motion compensation (MC). The undesirable blocking artifacts due to blockwise processing are reduced by block boundary filtering using a Gaussian low pass filter (LPF). The proposed method can be used in computer aided diagnosis (CAD), where more natural 3-D ultrasound images are displayed in real-time. Simulation results with several real test sequences show the effectiveness of the proposed algorithm.
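
    The motion-estimation and interpolation steps can be illustrated with a deliberately simple exhaustive-search block matcher and a half-vector motion-compensated mid-frame. No vector median filtering or block-boundary smoothing is included here, and the block and search sizes are arbitrary, not the paper's settings.

        import numpy as np

        def block_match(prev, curr, block=16, search=8):
            # Exhaustive-search block matching: one (dy, dx) motion vector per block,
            # minimizing the sum of absolute differences (SAD) against `prev`.
            h, w = prev.shape
            mvs = np.zeros((h // block, w // block, 2), dtype=np.int32)
            for by in range(h // block):
                for bx in range(w // block):
                    y0, x0 = by * block, bx * block
                    ref = curr[y0:y0 + block, x0:x0 + block].astype(np.int32)
                    best = None
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            y1, x1 = y0 + dy, x0 + dx
                            if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                                continue
                            cand = prev[y1:y1 + block, x1:x1 + block].astype(np.int32)
                            sad = int(np.abs(ref - cand).sum())
                            if best is None or sad < best:
                                best, mvs[by, bx] = sad, (dy, dx)
            return mvs

        def midframe(prev, mvs, block=16):
            # Motion-compensated in-between frame: fetch each block half-way along
            # its estimated motion vector.
            out = prev.copy()
            h, w = prev.shape
            for by in range(mvs.shape[0]):
                for bx in range(mvs.shape[1]):
                    dy, dx = (mvs[by, bx] // 2)
                    y0, x0 = by * block, bx * block
                    y1 = int(np.clip(y0 + dy, 0, h - block))
                    x1 = int(np.clip(x0 + dx, 0, w - block))
                    out[y0:y0 + block, x0:x0 + block] = prev[y1:y1 + block, x1:x1 + block]
            return out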

  17. 3D-SURFER 2.0: web platform for real-time search and characterization of protein surfaces.

    PubMed

    Xiong, Yi; Esquivel-Rodriguez, Juan; Sael, Lee; Kihara, Daisuke

    2014-01-01

    The increasing number of uncharacterized protein structures necessitates the development of computational approaches for function annotation using the protein tertiary structures. Protein structure database search is the basis of any structure-based functional elucidation of proteins. 3D-SURFER is a web platform for real-time protein surface comparison of a given protein structure against the entire PDB using 3D Zernike descriptors. It can smoothly navigate the protein structure space in real-time from one query structure to another. A major new feature of Release 2.0 is the ability to compare the protein surface of a single chain, a single domain, or a single complex against databases of protein chains, domains, complexes, or a combination of all three in the latest PDB. Additionally, two types of protein structures can now be compared: all-atom-surface and backbone-atom-surface. The server can also accept a batch job for a large number of database searches. Pockets in protein surfaces can be identified by VisGrid and LIGSITE^csc. The server is available at http://kiharalab.org/3d-surfer/.

  18. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
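
    The LUT idea reduces per-pixel classification to a single table lookup. Below is a minimal sketch, assuming 8-bit RGB input and a 32-level quantization per channel; the paper's LUTs are built from linear color models or fruit histograms, which are not reproduced here.

        import numpy as np

        def build_lut(sample_rgbs, bins=32):
            # Mark the quantized RGB cells occupied by example fruit pixels.
            lut = np.zeros((bins, bins, bins), dtype=bool)
            q = (np.asarray(sample_rgbs, dtype=np.uint16) * bins) // 256
            lut[q[:, 0], q[:, 1], q[:, 2]] = True
            return lut

        def classify(rgb_image, lut):
            # Per-pixel fruit/background decision with a single table lookup.
            bins = lut.shape[0]
            q = (rgb_image.astype(np.uint16) * bins) // 256
            return lut[q[..., 0], q[..., 1], q[..., 2]]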

  19. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040

  20. The 3D Tele Motion Tracking for the Orthodontic Facial Analysis

    PubMed Central

    Nota, Alessandro; Marchetti, Enrico; Padricelli, Giuseppe; Marzo, Giuseppe

    2016-01-01

    Aim. This study aimed to evaluate the reliability of 3D-TMT, previously used only for dynamic testing, in a static cephalometric evaluation. Material and Method. A group of 40 patients (20 males and 20 females; mean age 14.2 ± 1.2 years; 12–18 years old) was included in the study. For each subject, the measurements obtained by the 3D-TMT cephalometric analysis were compared with those of a conventional frontal cephalometric analysis. Nine passive reflective markers were positioned on the facial skin to detect the profile of the patient. Through the acquisition of these points, the corresponding planes for three-dimensional posterior-anterior cephalometric analysis were derived. Results. The cephalometric results obtained with 3D-TMT and with traditional posterior-anterior cephalometric analysis showed that the 3D-TMT values are slightly, but statistically significantly, higher than the values measured on radiographs; nevertheless, their correlation is very high. Conclusion. The values obtained using the 3D-TMT analysis were correlated with the cephalometric analysis, with small but statistically significant differences. The Dahlberg errors were always lower than the mean difference between the 2D and 3D measurements. During the clinical monitoring of a patient, a clinician should always use the same method, to avoid comparing different millimetric magnitudes. PMID:28044130

  1. Argonaute 3D: a real-time cooperative medical planning software on DSL network.

    PubMed

    Le Mer, Pascal; Soler, Luc; Pavy, Dominique; Bernard, Alain; Moreau, Johan; Mutter, Didier; Marescaux, Jacques

    2004-01-01

    Today, cancer diagnosis and therapeutic choice involve many specialized practitioners. They are generally located at different sites and have to reach the best decision as promptly as possible, despite the difficulty of CT-scan or MRI interpretation. Argonaute 3D is a tool that easily overcomes these issues, thanks to a cooperative solution based on virtual reality. An experiment in which four practitioners located across France met virtually allowed the value of this solution to be assessed.

  2. Real-Time Tracking of Knee Adduction Moment in Patients with Knee Osteoarthritis

    PubMed Central

    Kang, Sang Hoon; Lee, Song Joo; Zhang, Li-Qun

    2014-01-01

    Background The external knee adduction moment (EKAM) is closely associated with the presence, progression, and severity of knee osteoarthritis (OA). However, there is a lack of convenient and practical methods to estimate and track in real time the EKAM of patients with knee OA for clinical evaluation and gait training, especially outside of gait laboratories. New Method A real-time EKAM estimation method was developed and applied to track and investigate the EKAM and other knee moments during stepping on an elliptical trainer in both healthy subjects and a patient with knee OA. Results Substantial changes were observed in the EKAM and other knee moments during stepping in the patient with knee OA. Comparison with Existing Method(s) This is the first study to develop and test the feasibility of a real-time EKAM tracking method for patients with knee OA using 3-D inverse dynamics. Conclusions The study provides an accurate and practical method to evaluate in real time the critical EKAM associated with knee OA, which is expected to help diagnose and evaluate patients with knee OA and provide them with real-time EKAM feedback during rehabilitation training. PMID:24361759
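
    A quasi-static sketch of the external-moment calculation that such a tracker performs: take the moment of the ground reaction force about the knee center and project it onto the knee's anterior-posterior axis. Segment inertial terms and the free moment are omitted, so this is not the authors' full 3-D inverse dynamics.

        import numpy as np

        def external_knee_adduction_moment(knee_center, cop, grf, ap_axis):
            # Moment of the ground reaction force (grf, in N) about the knee center,
            # using the lever arm from the knee center to the center of pressure (cop),
            # projected onto the knee's anterior-posterior axis. Units: N*m.
            r = np.asarray(cop, float) - np.asarray(knee_center, float)
            m = np.cross(r, np.asarray(grf, float))
            ap = np.asarray(ap_axis, float)
            return float(m @ (ap / np.linalg.norm(ap)))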

  3. Real-time forecasting of Hong Kong beach water quality by 3D deterministic model.

    PubMed

    Chan, S N; Thoe, W; Lee, J H W

    2013-03-15

    Bacterial level (e.g. Escherichia coli) is generally adopted as the key indicator of beach water quality due to its high correlation with swimming associated illnesses. A 3D deterministic hydrodynamic model is developed to provide daily water quality forecasting for eight marine beaches in Tsuen Wan, which are only about 8 km from the Harbour Area Treatment Scheme (HATS) outfall discharging 1.4 million m³/d of partially-treated sewage. The fate and transport of the HATS effluent and its impact on the E. coli level at nearby beaches are studied. The model features the seamless coupling of near field jet mixing and the far field transport and dispersion of wastewater discharge from submarine outfalls, and a spatial-temporal dependent E. coli decay rate formulation specifically developed for sub-tropical Hong Kong waters. The model prediction of beach water quality has been extensively validated against field data both before and after disinfection of the HATS effluent. Compared with daily beach E. coli data during August-November 2011, the model achieves an overall accuracy of 81-91% in forecasting compliance/exceedance of beach water quality standard. The 3D deterministic model has been most valuable in the interpretation of the complex variation of beach water quality which depends on tidal level, solar radiation and other hydro-meteorological factors. The model can also be used in optimization of disinfection dosage and in emergency response situations.
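
    The role of the decay-rate formulation can be illustrated with the simplest possible stand-in, first-order die-off parameterized by a T90 value; the paper's rate is spatially and temporally dependent, which this sketch does not capture.

        import numpy as np

        def e_coli_after(c0, hours, t90_hours):
            # First-order die-off: T90 is the time for a one-log (90%) reduction.
            k = np.log(10.0) / t90_hours      # decay rate, 1/h
            return c0 * np.exp(-k * hours)

        # e.g. 1000 cfu/100 mL with T90 = 12 h falls to ~100 cfu/100 mL after 12 h
        print(e_coli_after(1000.0, 12.0, 12.0))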

  4. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    SciTech Connect

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A Real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years ago. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The Real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for CT image reconstruction. In this paper the indigenous development of CT imaging based on the Real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  5. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    NASA Astrophysics Data System (ADS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-01

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A Real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years ago. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The Real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for CT image reconstruction. In this paper the indigenous development of CT imaging based on the Real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  6. An Active Particle-based Tracking Framework for 2D and 3D Time-lapse Microscopy Images

    PubMed Central

    Hossain, M. Julius; Whelan, Paul F.; Czirok, Andras; Ghita, Ovidiu

    2014-01-01

    The process required to track cellular structures is a key task in the study of cell migration. This allows the accurate estimation of motility indicators that help in the understanding of mechanisms behind various biological processes. This paper reports a particle-based fully automatic tracking framework that is able to quantify the motility of living cells in time-lapse images. Contrary to standard tracking methods based on predefined motion models, in this paper we reformulate the tracking mechanism as a data-driven optimization process to remove its reliance on a priori motion models. The proposed method has been evaluated using 2D and 3D deconvolved epifluorescent in-vivo image sequences that describe the development of the quail embryo. PMID:22255855

  7. Accurate and high-performance 3D position measurement of fiducial marks by stereoscopic system for railway track inspection

    NASA Astrophysics Data System (ADS)

    Gorbachev, Alexey A.; Serikova, Mariya G.; Pantyushina, Ekaterina N.; Volkova, Daria A.

    2016-04-01

    Modern demands for railway track measurement require high accuracy (about 2-5 mm) of rail placement along the track to ensure smooth, safe and fast transportation. As a means of railway geometry measurement, we propose a stereoscopic system that measures the 3D positions of fiducial marks arranged along the track using image processing algorithms. The system accuracy was verified during laboratory tests by comparison with precise laser tracker indications. An accuracy of ±1.5 mm within a measurement volume of 150×400×5000 mm was achieved during the tests. This confirms that the stereoscopic system demonstrates good measurement accuracy and can potentially be used as a fully automated means of railway track inspection.

  8. Accuracy of real-time 3D echocardiography in the evaluation of functional anatomy of mitral regurgitation.

    PubMed

    Agricola, Eustachio; Oppizzi, Michele; Pisani, Matteo; Maisano, Francesco; Margonato, Alberto

    2008-07-21

    To evaluate the feasibility of a mitral valve (MV) reconstruction protocol by real-time 3D echocardiography (RT3DE) in the assessment of mitral regurgitant (MR) lesions, and to determine the accuracy of RT3DE compared with transthoracic (TTE) and transesophageal (TEE) echocardiography using surgical findings as the gold standard. Sixty-three consecutive patients (mean age 61.7±12.5 years, 35 men and 28 women) with severe organic MR were enrolled. Data were acquired in zoom and in full-volume modes from apical and/or parasternal windows. A volume-rendered en-face view of the MV and five serial longitudinal cut planes were reconstructed to visualize all segments of both leaflets. The feasibility of RT3D reconstruction was 94%. Compared with the surgical diagnosis, the accuracy of RT3D was 91% for aetiology, 92% for mechanisms, 94% for prolapse, 88% for flail and 94% for defect location. Diagnostic accuracy was significantly higher for RT3D than for TTE for all end points except flail lesions, and similar to that of TEE, although inferior to it for flail lesions. The accuracy, sensitivity and specificity were higher in patients with good-to-excellent image quality than in those with poor image quality regarding aetiology, mechanisms and defect location (all p=0.0001). RT3D imaging of the MV is feasible and accurate in defining aetiology, mechanism and defect location in patients with MR, and has incremental diagnostic value if TTE is inconclusive and diagnostic value similar to that of TEE except for flail lesions. RT3D, at least in patients with a good acoustic window, may obviate the need for subsequent TEE and/or can be considered a complementary technique to study the MV in patients with MR.

  9. A spheroid toxicity assay using magnetic 3D bioprinting and real-time mobile device-based imaging

    PubMed Central

    Tseng, Hubert; Gage, Jacob A.; Shen, Tsaiwei; Haisler, William L.; Neeley, Shane K.; Shiao, Sue; Chen, Jianbo; Desai, Pujan K.; Liao, Angela; Hebel, Chris; Raphael, Robert M.; Becker, Jeanne L.; Souza, Glauco R.

    2015-01-01

    An ongoing challenge in biomedical research is the search for simple, yet robust assays using 3D cell cultures for toxicity screening. This study addresses that challenge with a novel spheroid assay, wherein spheroids, formed by magnetic 3D bioprinting, contract immediately as cells rearrange and compact the spheroid in relation to viability and cytoskeletal organization. Thus, spheroid size can be used as a simple metric for toxicity. The goal of this study was to validate spheroid contraction as a cytotoxic endpoint using 3T3 fibroblasts in response to 5 toxic compounds (all-trans retinoic acid, dexamethasone, doxorubicin, 5-fluorouracil, forskolin), sodium dodecyl sulfate (+control), and penicillin-G (−control). Real-time imaging was performed with a mobile device to increase throughput and efficiency. All compounds but penicillin-G significantly slowed contraction in a dose-dependent manner (Z’ = 0.88). Cells in 3D were more resistant to toxicity than cells in 2D, whose toxicity was measured by the MTT assay. Fluorescent staining and gene expression profiling of spheroids confirmed these findings. The results of this study validate spheroid contraction within this assay as an easy, biologically relevant endpoint for high-throughput compound screening in representative 3D environments. PMID:26365200

  10. Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I

    NASA Astrophysics Data System (ADS)

    Gonthier, David L.; Veron, Harry

    1998-04-01

    A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in stealth mode or as a player, which includes battlefield simulations such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high-quality and fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows95 and WindowsNT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare is used with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can take part in a real-time collaborative simulation with other platforms is thus achieved.

  11. Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor.

    PubMed

    You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen

    2017-03-31

    The 3D measuring range and accuracy in traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, this phase-shifting method must capture several fringe patterns with phase differences, thereby limiting real-time performance. This study introduces a smart active optical sensor in which a composite pattern is utilized. The composite pattern efficiently combines several phase-shifting fringes and carrier frequencies, so the method can remove the zero frequency using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of this method. Results show no distinct decrease in precision for the novel method compared with the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a natural-appearance 3D digital face.
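
    One common way to build such a composite pattern is to modulate each phase-shifted fringe set onto its own carrier frequency along the orthogonal direction and sum the results, so the sets can later be separated in the Fourier domain. The sketch below uses illustrative frequencies, not the paper's parameters.

        import numpy as np

        def composite_pattern(width, height, fringe_period=16, carriers=(0.30, 0.35, 0.40)):
            # Encode three phase-shifted fringe sets into one projected image by
            # modulating each set onto its own vertical carrier frequency.
            x = np.arange(width)[None, :]
            y = np.arange(height)[:, None]
            steps = len(carriers)
            pattern = np.zeros((height, width))
            for n, fc in enumerate(carriers):
                shift = 2 * np.pi * n / steps
                fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / fringe_period + shift)
                carrier = 0.5 + 0.5 * np.cos(2 * np.pi * fc * y)
                pattern += fringe * carrier
            return pattern / steps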

  12. Real-time registration by tracking for MR-guided cardiac interventions

    NASA Astrophysics Data System (ADS)

    Chung, Desmond; Satkunasingham, Janakan; Wright, Graham; Radau, Perry

    2006-03-01

    Cardiac interventional procedures such as myocardial stem cell delivery and radiofrequency ablation require a high degree of accuracy and efficiency. Real-time, 2-D MR technology is being developed to guide such procedures; the associated challenges include the relatively low resolution and image quality in real-time images. Real-time MR guidance can be enhanced by acquiring a 4-D (3-D + phase) volume prior to the procedure and aligning it to the 2-D real-time images, so that corresponding features in the prior volume can be integrated into the real-time image visualization. This technique provides spatial context with high resolution and SNR. A left ventricular (LV) myocardial wall contour tracking system was developed to maintain spatial alignment of prior volume images to real-time MR images. Over 9 test images sequences, each comprising 100 frames of simulated respiratory motion, the tracker maintained alignment with a mean displacement error of 1.61mm in a region of interest around the LV, as compared to a mean displacement error of 5.2mm without tracking.

  13. Application of 3D digital image correlation to track displacements and strains of canvas paintings exposed to relative humidity changes.

    PubMed

    Malowany, Krzysztof; Tymińska-Widmer, Ludmiła; Malesa, Marcin; Kujawińska, Małgorzata; Targowski, Piotr; Rouba, Bogumiła J

    2014-03-20

    This paper introduces a methodology for tracking displacements in canvas paintings exposed to relative humidity changes. Displacements are measured by means of the 3D digital image correlation method that is followed by a postprocessing of displacement data, which allows the separation of local displacements from global displacement maps. The applicability of this methodology is tested on measurements of a model painting on canvas with introduced defects causing local inhomogeneity. The method allows the evaluation of conservation methods used for repairing canvas supports.
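
    The separation of local from global displacements can be approximated by fitting a smooth low-order surface to a measured displacement component and keeping the residual. A minimal sketch follows; the polynomial order and the least-squares fit are assumptions, not the authors' postprocessing.

        import numpy as np

        def split_displacements(u, degree=2):
            # Fit a low-order 2-D polynomial (the smooth, global deformation) to the
            # measured displacement component `u` and return (local, global) parts.
            h, w = u.shape
            yy, xx = np.mgrid[0:h, 0:w]
            terms = [xx**i * yy**j for i in range(degree + 1) for j in range(degree + 1 - i)]
            A = np.stack([t.ravel().astype(float) for t in terms], axis=1)
            coeffs, *_ = np.linalg.lstsq(A, u.ravel(), rcond=None)
            global_part = (A @ coeffs).reshape(h, w)
            return u - global_part, global_part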

  14. Atmospheric Motion Vectors from INSAT-3D: Initial quality assessment and its impact on track forecast of cyclonic storm NANAUK

    NASA Astrophysics Data System (ADS)

    Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham

    2016-03-01

    The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder and is placed at 82°E over the Indian Ocean region. Advancements in the retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for better understanding of the different tropical atmospheric processes over this region. The retrieval technique and accuracy of one such parameter, Atmospheric Motion Vectors (AMV), have improved significantly with the availability of improved spatial resolution data along with more options of spectral channels in the INSAT-3D imager. The present work is mainly focused on providing brief descriptions of INSAT-3D data and the AMV derivation process using these data. It also discusses the initial quality assessment of INSAT-3D AMVs for a period of six months, from 01 February 2014 to 31 July 2014, against other independent observations: i) Meteosat-7 AMVs available over this region, ii) in-situ radiosonde wind measurements, iii) cloud-tracked winds from the Multi-angle Imaging Spectro-Radiometer (MISR) and iv) numerical model analysis. It is observed from this study that the quality of the newly derived INSAT-3D AMVs is comparable with the two existing versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, INSAT-3D AMVs are assimilated in the Weather Research and Forecasting (WRF) model, and it is found that the assimilation of the newly derived AMVs helped reduce the track forecast errors of the recent cyclonic storm NANAUK over the Arabian Sea. Although the present study is limited to one case, it provides guidance to operational agencies for implementation of this new AMV dataset in future Numerical Weather Prediction (NWP) applications over the south Asia region.

  15. Shape measurement by a multi-view methodology based on the remote tracking of a 3D optical scanner

    NASA Astrophysics Data System (ADS)

    Barone, Sandro; Paoli, Alessandro; Viviano Razionale, Armando

    2012-03-01

    Full field optical techniques can be reliably used for 3D measurements of complex shapes by multi-view processes, which require the computation of transformation parameters relating different views into a common reference system. Although several multi-view approaches have been proposed, the alignment process is still the crucial step of shape reconstruction. In this paper, a methodology to automatically align 3D views has been developed by integrating a stereo vision system and a full field optical scanner. In particular, the stereo vision system is used to remotely track the optical scanner within a working volume. The tracking system uses stereo images to detect the 3D coordinates of retro-reflective infrared markers rigidly connected to the scanner. Stereo correspondences are established by a robust methodology based on combining the epipolar geometry with an image spatial transformation constraint. The proposed methodology has been validated by experimental tests regarding both the evaluation of the measurement accuracy and the 3D reconstruction of an industrial shape.
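
    Once tracked marker positions are known in both frames, the view-to-reference transformation is a standard least-squares rigid fit. Below is a sketch using the SVD-based Kabsch method, a generic solution rather than necessarily the authors' formulation.

        import numpy as np

        def rigid_fit(points_view, points_ref):
            # Least-squares rotation R and translation t with points_ref ~ R @ p + t
            # for corresponding marker positions (Kabsch / SVD method).
            P = np.asarray(points_view, float)
            Q = np.asarray(points_ref, float)
            pm, qm = P.mean(axis=0), Q.mean(axis=0)
            H = (P - pm).T @ (Q - qm)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = qm - R @ pm
            return R, t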

  16. 3D printing and milling a real-time PCR device for infectious disease diagnostics.

    PubMed

    Mulberry, Geoffrey; White, Kevin A; Vaidya, Manjusha; Sugaya, Kiminobu; Kim, Brian N

    2017-01-01

    Diagnosing infectious diseases using quantitative polymerase chain reaction (qPCR) offers a conclusive result in determining the infection, the strain or type of pathogen, and the level of infection. However, due to the high-cost instrumentation involved and the complexity in maintenance, it is rarely used in the field to make a quick turnaround diagnosis. In order to provide a higher level of accessibility than current qPCR devices, a set of 3D manufacturing methods is explored as a possible option to fabricate a low-cost and portable qPCR device. The key advantage of this approach is the ability to upload the digital format of the design files on the internet for wide distribution so that people at any location can simply download and feed into their 3D printers for quick manufacturing. The material and design are carefully selected to minimize the number of custom parts that depend on advanced manufacturing processes which lower accessibility. The presented 3D manufactured qPCR device is tested with 20-μL samples that contain various concentrations of lentivirus, the same type as HIV. A reverse-transcription step is a part of the device's operation, which takes place prior to the qPCR step to reverse transcribe the target RNA from the lentivirus into complementary DNA (cDNA). This is immediately followed by qPCR which quantifies the target sequence molecules in the sample during the PCR amplification process. The entire process of thermal control and time-coordinated fluorescence reading is automated by closed-loop feedback and a microcontroller. The resulting device is portable and battery-operated, with a size of 12 × 7 × 6 cm3 and mass of only 214 g. By uploading and sharing the design files online, the presented low-cost qPCR device may provide easier access to a robust diagnosis protocol for various infectious diseases, such as HIV and malaria.
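
    The closed-loop thermal control and time-coordinated fluorescence reading can be sketched as a simple two-step cycling loop with on/off control. The hardware callbacks, temperatures, hold times and tolerance below are hypothetical placeholders, not the device's actual firmware.

        import time

        PROFILE = [("denaturation", 95.0, 15), ("anneal_extend", 60.0, 60)]  # name, deg C, s
        CYCLES = 40
        TOL = 0.5  # deg C

        def run_qpcr(read_temp, set_heater, read_fluorescence, log):
            # read_temp() -> deg C, set_heater(on: bool), read_fluorescence() -> float,
            # log(cycle, value): hypothetical hardware/IO callbacks.
            for cycle in range(1, CYCLES + 1):
                for _name, target, hold_s in PROFILE:
                    while abs(read_temp() - target) > TOL:      # drive to the set point
                        set_heater(read_temp() < target)
                        time.sleep(0.1)
                    t_end = time.time() + hold_s                # hold, still regulating
                    while time.time() < t_end:
                        set_heater(read_temp() < target - TOL)
                        time.sleep(0.1)
                log(cycle, read_fluorescence())                 # one read per cycle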

  17. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    PubMed Central

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-01-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient’s skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures. PMID:24027616
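
    The per-element beam-intersection and dose-accumulation step can be sketched in vectorized NumPy as a CPU stand-in for the GPU kernel. The cone-beam model and the omission of inverse-square and scatter corrections are simplifications for illustration only.

        import numpy as np

        def accumulate_skin_dose(vertices, dose_map, focal_spot, beam_axis,
                                 half_angle_rad, dose_rate, dt):
            # Mark the patient-graphic vertices inside the cone beam and add
            # dose_rate * dt to their entries in dose_map (modified in place).
            v = np.asarray(vertices, float) - np.asarray(focal_spot, float)
            axis = np.asarray(beam_axis, float)
            axis /= np.linalg.norm(axis)
            along = v @ axis
            cos_ang = along / (np.linalg.norm(v, axis=1) + 1e-12)
            inside = (along > 0) & (cos_ang >= np.cos(half_angle_rad))
            dose_map[inside] += dose_rate * dt
            return inside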

  18. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures.

    PubMed

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R

    2012-02-23

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  19. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-03-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  20. Correlation and 3D-tracking of objects by pointing sensors

    DOEpatents

    Griesmeyer, J. Michael

    2017-04-04

    A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
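
    The triangulation step summarized above can be illustrated with a short sketch. This is not the patented implementation; it is a generic closest-point-between-two-lines computation (NumPy, hypothetical inputs) that returns the midpoint of the shortest segment joining two lines of sight, with the segment length as a crude agreement measure:

      import numpy as np

      def triangulate(p1, d1, p2, d2):
          """Midpoint of the shortest segment between two lines of sight.

          p1, p2 : sensor positions (3-vectors); d1, d2 : pointing vectors.
          Returns (triangulation point, length of the connecting segment).
          """
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          w0 = p1 - p2
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          d, e = d1 @ w0, d2 @ w0
          denom = a * c - b * b              # ~0 when the lines are parallel
          if abs(denom) < 1e-12:
              s, t = 0.0, e / c
          else:
              s = (b * e - c * d) / denom
              t = (a * e - b * d) / denom
          q1, q2 = p1 + s * d1, p2 + t * d2  # closest points on each line
          return 0.5 * (q1 + q2), float(np.linalg.norm(q1 - q2))

      # hypothetical example: two sensors observing roughly the same object
      point, miss = triangulate(np.array([0., 0., 0.]), np.array([1., 1., 0.2]),
                                np.array([10., 0., 0.]), np.array([-1., 1., 0.2]))

    A small miss distance suggests the two sensors are observing the same object, which is the kind of check the correlation processing described in the abstract relies on.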

  1. 3D Near Infrared and Ultrasound Imaging of Peripheral Blood Vessels for Real-Time Localization and Needle Guidance

    PubMed Central

    Chen, Alvin I.; Balter, Max L.; Maguire, Timothy J.; Yarmush, Martin L.

    2016-01-01

    This paper presents a portable imaging device designed to detect peripheral blood vessels for cannula insertion that are otherwise difficult to visualize beneath the skin. The device combines near infrared stereo vision, ultrasound, and real-time image analysis to map the 3D structure of subcutaneous vessels. We show that the device can identify adult forearm vessels and be used to guide manual insertions in tissue phantoms with increased first-stick accuracy compared to unassisted cannulation. We also demonstrate that the system may be coupled with a robotic manipulator to perform automated, image-guided venipuncture. PMID:27981261

  2. Real-time 3D imaging of microstructure growth in battery cells using indirect MRI

    PubMed Central

    Ilott, Andrew J.; Mohammadi, Mohaddese; Chang, Hee Jung; Grey, Clare P.; Jerschow, Alexej

    2016-01-01

    Lithium metal is a promising anode material for Li-ion batteries due to its high theoretical specific capacity and low potential. The growth of dendrites is a major barrier to the development of high capacity, rechargeable Li batteries with lithium metal anodes, and hence, significant efforts have been undertaken to develop new electrolytes and separator materials that can prevent this process or promote smooth deposits at the anode. Central to these goals, and to the task of understanding the conditions that initiate and propagate dendrite growth, is the development of analytical and nondestructive techniques that can be applied in situ to functioning batteries. MRI has recently been demonstrated to provide noninvasive imaging methodology that can detect and localize microstructure buildup. However, until now, monitoring dendrite growth by MRI has been limited to observing the relatively insensitive metal nucleus directly, thus restricting the temporal and spatial resolution and requiring special hardware and acquisition modes. Here, we present an alternative approach to detect a broad class of metallic dendrite growth via the dendrites’ indirect effects on the surrounding electrolyte, allowing for the application of fast 3D 1H MRI experiments with high resolution. We use these experiments to reconstruct 3D images of growing Li dendrites from MRI, revealing details about the growth rate and fractal behavior. Radiofrequency and static magnetic field calculations are used alongside the images to quantify the amount of the growing structures. PMID:27621444

  3. 3D tracking and phase-contrast imaging by twin-beams digital holographic microscope in microfluidics

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Finizio, A.; Paturzo, M.; Merola, F.; Grilli, S.; Ferraro, P.

    2012-06-01

    A compact twin-beam interferometer that can be adopted as a flexible diagnostic tool in microfluidic platforms is presented. The device has two functionalities, explained below, and can be easily integrated into a microfluidic chip. The configuration allows 3D tracking of micro-particles and, at the same time, furnishes quantitative phase-contrast maps of the tracked micro-objects by interference microscopy. Experimental demonstration of its effectiveness and compatibility with biological samples is given for in vitro cells in a microfluidic environment. Nowadays, several microfluidic configurations exist and many of them are commercially available; their development is driven by the possibility of manipulating droplets, handling micro- and nano-objects, visualizing and quantifying processes occurring in small volumes and, clearly, by direct applications in lab-on-a-chip devices. In the microfluidic research field, optical/photonic approaches are the most suitable ones because they offer various advantages, such as being non-contact, full-field and non-invasive, and they can be packaged thanks to the development of integrable optics. Moreover, phase-contrast approaches, adapted to lab-on-a-chip configurations, make it possible to obtain quantitative information with remarkable lateral and vertical resolution directly in situ, without the need to dye and/or kill cells. Furthermore, numerical techniques for tracking micro-objects need to be developed for measuring velocity fields, trajectory patterns, the motility of cancer cells and so on. Here, we present a compact holographic microscope that ensures, with the same configuration and simultaneously, accurate 3D tracking and quantitative phase-contrast analysis. The system, simple and solid, is based on twin laser beams coming from a single laser source. Through a simple conceptual design, we show how these two different functionalities can be accomplished by the same optical setup. The working principle, the optical setup and the mathematical

  4. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interfacing devices such as mice, joysticks and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optic sensors, and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  5. Non-iterative double-frame 2D/3D particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Fuchs, Thomas; Hain, Rainer; Kähler, Christian J.

    2017-09-01

    In recent years, the detection of individual particle images and their tracking over time to determine the local flow velocity has become quite popular for planar and volumetric measurements. Particle tracking velocimetry has strong advantages compared to the statistical analysis of an ensemble of particle images by means of cross-correlation approaches, such as particle image velocimetry. Tracking individual particles does not suffer from spatial averaging and therefore bias errors can be avoided. Furthermore, the spatial resolution can be increased up to the sub-pixel level for mean fields. A maximization of the spatial resolution for instantaneous measurements requires high seeding concentrations. However, it is still challenging to track particles at high seeding concentrations if no time series is available. Tracking methods used under these conditions are typically very complex iterative algorithms, which require expert knowledge due to the large number of adjustable parameters. To overcome these drawbacks, a new non-iterative tracking approach is introduced in this letter, which automatically analyzes the motion of the neighboring particles without requiring the user to specify any parameters except for the displacement limits. This makes the algorithm very user friendly and also enables inexperienced users to apply and implement particle tracking. In addition, the algorithm enables measurements of high-speed flows using standard double-pulse equipment and estimates the flow velocity reliably even at large particle image densities.
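
    As a rough illustration of double-frame particle pairing under a displacement limit (a simplified stand-in, not the neighbour-motion analysis of the letter itself), a greedy nearest-neighbour match with SciPy's k-d tree could look like the following; frame_a, frame_b and max_disp are assumed inputs:

      import numpy as np
      from scipy.spatial import cKDTree

      def match_particles(frame_a, frame_b, max_disp):
          """Greedy nearest-neighbour pairing of particle positions between two
          frames, rejecting candidates beyond a displacement limit.

          frame_a, frame_b : (N, 2) or (N, 3) arrays of particle coordinates.
          Returns displacement vectors for the matched particles (not one-to-one;
          several frame_a particles may map to the same frame_b particle).
          """
          tree = cKDTree(frame_b)
          dist, idx = tree.query(frame_a, distance_upper_bound=max_disp)
          matched = np.isfinite(dist)        # unmatched queries return inf
          return frame_b[idx[matched]] - frame_a[matched]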

  6. The Value of 3D Printing Models of Left Atrial Appendage Using Real-Time 3D Transesophageal Echocardiographic Data in Left Atrial Appendage Occlusion: Applications toward an Era of Truly Personalized Medicine.

    PubMed

    Liu, Peng; Liu, Rijing; Zhang, Yan; Liu, Yingfeng; Tang, Xiaoming; Cheng, Yanzhen

    2016-01-01

    The objective of this study was to assess the clinical feasibility of generating 3D printed models of the left atrial appendage (LAA) using real-time 3D transesophageal echocardiogram (TEE) data for preoperative reference in LAA occlusion. Percutaneous LAA occlusion can effectively prevent stroke in patients with atrial fibrillation. However, the anatomical structure of the LAA is so complicated that adequate information about its structure is essential for successful LAA occlusion. Emerging 3D printing technology has the demonstrated potential to depict structures more accurately than conventional imaging modalities by creating tangible patient-specific models. Typically, 3D printing data sets are acquired from CT and MRI, which may involve intravenous contrast, sedation, and ionizing radiation. It has been reported that 3D models of the LAA were successfully created from data acquired by CT. However, 3D printing of the LAA using real-time 3D TEE data has not yet been explored. Acquisition of 3D transesophageal echocardiographic data from 8 patients with atrial fibrillation was performed using the Philips EPIQ7 ultrasound system. Raw echocardiographic image data were opened in Philips QLAB, converted to 'Cartesian DICOM' format and imported into Mimics® software to create 3D models of the LAA, which were printed using a rubber-like material. The printed 3D models were then used for preoperative reference and procedural simulation in LAA occlusion. We successfully printed LAAs of 8 patients. Each LAA costs approximately CNY 800-1,000 and the total process takes 16-17 h. Seven of the 8 Watchman devices predicted by preprocedural 2D TEE images were of the same sizes as those placed in the real operation. Interestingly, the 3D printed models were highly reflective of the shape and size of the LAAs, and all device sizes predicted by the 3D printed models were fully consistent with those placed in the real operation. Also, the 3D printed model could predict operating difficulty and the

  7. 3D GABA imaging with real-time motion correction, shim update and reacquisition of adiabatic spiral MRSI.

    PubMed

    Bogner, Wolfgang; Gagoski, Borjan; Hess, Aaron T; Bhat, Himanshu; Tisdall, M Dylan; van der Kouwe, Andre J W; Strasser, Bernhard; Marjańska, Małgorzata; Trattnig, Siegfried; Grant, Ellen; Rosen, Bruce; Andronesi, Ovidiu C

    2014-12-01

    Gamma-aminobutyric acid (GABA) and glutamate (Glu) are the major neurotransmitters in the brain. They are crucial for the functioning of healthy brain and their alteration is a major mechanism in the pathophysiology of many neuro-psychiatric disorders. Magnetic resonance spectroscopy (MRS) is the only way to measure GABA and Glu non-invasively in vivo. GABA detection is particularly challenging and requires special MRS techniques. The most popular is MEscher-GArwood (MEGA) difference editing with single-voxel Point RESolved Spectroscopy (PRESS) localization. This technique has three major limitations: a) MEGA editing is a subtraction technique, hence is very sensitive to scanner instabilities and motion artifacts. b) PRESS is prone to localization errors at high fields (≥3T) that compromise accurate quantification. c) Single-voxel spectroscopy can (similar to a biopsy) only probe average GABA and Glu levels in a single location at a time. To mitigate these problems, we implemented a 3D MEGA-editing MRS imaging sequence with the following three features: a) Real-time motion correction, dynamic shim updates, and selective reacquisition to eliminate subtraction artifacts due to scanner instabilities and subject motion. b) Localization by Adiabatic SElective Refocusing (LASER) to improve the localization accuracy and signal-to-noise ratio. c) K-space encoding via a weighted stack of spirals provides 3D metabolic mapping with flexible scan times. Simulations, phantom and in vivo experiments prove that our MEGA-LASER sequence enables 3D mapping of GABA+ and Glx (Glutamate + Glutamine), by providing 1.66 times larger signal for the 3.02 ppm multiplet of GABA+ compared to MEGA-PRESS, leading to clinically feasible scan times for 3D brain imaging. Hence, our sequence allows accurate and robust 3D-mapping of brain GABA+ and Glx levels to be performed at clinical 3T MR scanners for use in neuroscience and clinical applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. 3D GABA imaging with real-time motion correction, shim update and reacquisition of adiabatic spiral MRSI

    PubMed Central

    Bogner, Wolfgang; Gagoski, Borjan; Hess, Aaron T; Bhat, Himanshu; Tisdall, M. Dylan; van der Kouwe, Andre J.W.; Strasser, Bernhard; Marjańska, Małgorzata; Trattnig, Siegfried; Grant, Ellen; Rosen, Bruce; Andronesi, Ovidiu C

    2014-01-01

    Gamma-aminobutyric acid (GABA) and glutamate (Glu) are the major neurotransmitters in the brain. They are crucial for the functioning of healthy brain and their alteration is a major mechanism in the pathophysiology of many neuro-psychiatric disorders. Magnetic resonance spectroscopy (MRS) is the only way to measure GABA and Glu non-invasively in vivo. GABA detection is particularly challenging and requires special MRS techniques. The most popular is MEscher-GArwood (MEGA) difference editing with single-voxel Point RESolved Spectroscopy (PRESS) localization. This technique has three major limitations: a) MEGA editing is a subtraction technique, hence is very sensitive to scanner instabilities and motion artifacts. b) PRESS is prone to localization errors at high fields (≥3T) that compromise accurate quantification. c) Single-voxel spectroscopy can (similar to a biopsy) only probe average GABA and Glu levels in a single location at a time. To mitigate these problems, we implemented a 3D MEGA-editing MRS imaging sequence with the following three features: a) Real-time motion correction, dynamic shim updates, and selective reacquisition to eliminate subtraction artifacts due to scanner instabilities and subject motion. b) Localization by Adiabatic SElective Refocusing (LASER) to improve the localization accuracy and signal-to-noise ratio. c) K-space encoding via a weighted stack of spirals provides 3D metabolic mapping with flexible scan times. Simulations, phantom and in vivo experiments prove that our MEGA-LASER sequence enables 3D mapping of GABA+ and Glx (Glutamate + Glutamine), by providing 1.66 times larger signal for the 3.02 ppm multiplet of GABA+ compared to MEGA-PRESS, leading to clinically feasible scan times for 3D brain imaging. Hence, our sequence allows accurate and robust 3D-mapping of brain GABA+ and Glx levels to be performed at clinical 3T MR scanners for use in neuroscience and clinical applications. PMID:25255945

  9. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    DTIC Science & Technology

    2015-06-01

    [Only a fragmentary excerpt of this report is indexed: parts of its figure list (telemetry computer; screenshot of the telemetry software and the SSH terminals; view of the VICON cameras above the granite flat floor of the FSS) and body text noting that point-wise kinematic models are used and that the pose of the 3D structure is estimated with a dual quaternion method [19].]

  10. Computer Vision Tracking Using Particle Filters for 3D Position Estimation

    DTIC Science & Technology

    2014-03-27

    [Only a fragmentary excerpt of this thesis is indexed.] Photogrammetry is the process of determining 3-D coordinates through images. The mathematical underpinnings of photogrammetry are rooted in the 1480s with Leonardo da Vinci's study of perspectives [8, p. 1]. However, digital photogrammetry did not emerge

  11. 3D printing and milling a real-time PCR device for infectious disease diagnostics

    PubMed Central

    Mulberry, Geoffrey; White, Kevin A.; Vaidya, Manjusha; Sugaya, Kiminobu

    2017-01-01

    Diagnosing infectious diseases using quantitative polymerase chain reaction (qPCR) offers a conclusive result in determining the infection, the strain or type of pathogen, and the level of infection. However, due to the high-cost instrumentation involved and the complexity in maintenance, it is rarely used in the field to make a quick turnaround diagnosis. In order to provide a higher level of accessibility than current qPCR devices, a set of 3D manufacturing methods is explored as a possible option to fabricate a low-cost and portable qPCR device. The key advantage of this approach is the ability to upload the digital format of the design files on the internet for wide distribution so that people at any location can simply download and feed into their 3D printers for quick manufacturing. The material and design are carefully selected to minimize the number of custom parts that depend on advanced manufacturing processes which lower accessibility. The presented 3D manufactured qPCR device is tested with 20-μL samples that contain various concentrations of lentivirus, the same type as HIV. A reverse-transcription step is a part of the device’s operation, which takes place prior to the qPCR step to reverse transcribe the target RNA from the lentivirus into complementary DNA (cDNA). This is immediately followed by qPCR which quantifies the target sequence molecules in the sample during the PCR amplification process. The entire process of thermal control and time-coordinated fluorescence reading is automated by closed-loop feedback and a microcontroller. The resulting device is portable and battery-operated, with a size of 12 × 7 × 6 cm3 and mass of only 214 g. By uploading and sharing the design files online, the presented low-cost qPCR device may provide easier access to a robust diagnosis protocol for various infectious diseases, such as HIV and malaria. PMID:28586401

  12. The real-time interactive 3-D-DVA for robust coronary MRA.

    PubMed

    Sachs, T S; Meyer, C H; Pauly, J M; Hu, B S; Nishimura, D G; Macovski, A

    2000-02-01

    A graphical user interface (GUI) has been developed which enables interactive feedback and control to the real-time diminishing variance algorithm (DVA). This interactivity allows the user to set scan parameters, view scan statistics, and view image updates during the course of the scan. In addition, the DVA has been extended to simultaneously reduce motion artifacts in three dimensions using three orthogonal navigators. Preliminary in vivo studies indicate that these improvements to the standard DVA allow for significantly improved consistency and robustness in eliminating respiratory motion artifacts from MR images, particularly when imaging the coronary arteries.

  13. MRI - 3D Ultrasound - X-ray Image Fusion with Electromagnetic Tracking for Transendocardial Therapeutic Injections: In-vitro Validation and In-vivo Feasibility

    PubMed Central

    Hatt, Charles R.; Jain, Ameet K.; Parthasarathy, Vijay; Lang, Andrew; Raval, Amish N.

    2014-01-01

    Myocardial infarction (MI) is one of the leading causes of death in the world. Small animal studies have shown that stem-cell therapy offers dramatic functional improvement post-MI. An endomyocardial catheter injection approach to therapeutic agent delivery has been proposed to improve efficacy through increased cell retention. Accurate targeting is critical for reaching areas of greatest therapeutic potential while avoiding a life-threatening myocardial perforation. Multimodal image fusion has been proposed as a way to improve these procedures by augmenting traditional intra-operative imaging modalities with high resolution pre-procedural images. Previous approaches have suffered from a lack of real-time tissue imaging and dependence on X-ray imaging to track devices, leading to increased ionizing radiation dose. In this paper, we present a new image fusion system for catheter-based targeted delivery of therapeutic agents. The system registers real-time 3D echocardiography, magnetic resonance, X-ray, and electromagnetic sensor tracking within a single flexible framework. All system calibrations and registrations were validated and found to have target registration errors less than 5 mm in the worst case. Injection accuracy was validated in a motion enabled cardiac injection phantom, where targeting accuracy ranged from 0.57 to 3.81 mm. Clinical feasibility was demonstrated with in-vivo swine experiments, where injections were successfully made into targeted regions of the heart. PMID:23561056

  14. Real-time optical holographic tracking of multiple objects.

    PubMed

    Chao, T H; Liu, H K

    1989-01-15

    A coherent optical correlation technique for real-time simultaneous tracking of several different objects making independent movements is described, and experimental results are presented. An evaluation of this system compared with digital computing systems is made. The real-time processing capability is obtained through the use of a liquid crystal television spatial light modulator and a dichromated gelatin multifocus hololens. A coded reference beam is utilized in the separation of the output correlation plane associated with each input target so that independent tracking can be achieved.

  15. Real-time optical holographic tracking of multiple objects

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1989-01-01

    A coherent optical correlation technique for real-time simultaneous tracking of several different objects making independent movements is described, and experimental results are presented. An evaluation of this system compared with digital computing systems is made. The real-time processing capability is obtained through the use of a liquid crystal television spatial light modulator and a dichromated gelatin multifocus hololens. A coded reference beam is utilized in the separation of the output correlation plane associated with each input target so that independent tracking can be achieved.

  16. Handheld portable real-time tracking and communications device

    DOEpatents

    Wiseman, James M [Albuquerque, NM]; Riblett, Jr., Loren E.; Green, Karl L [Albuquerque, NM]; Hunter, John A [Albuquerque, NM]; Cook, III, Robert N.; Stevens, James R [Arlington, VA]

    2012-05-22

    Portable handheld real-time tracking and communications devices include: a controller module; a communications module including a global positioning and mesh network radio module; a data transfer and storage module; and a user interface module, all enclosed in a water-resistant enclosure. Real-time tracking and communications devices can be used by protective force, security and first responder personnel to provide situational awareness, allowing for enhanced coordination and effectiveness in rapid response situations. Such devices communicate with other authorized devices via mobile ad-hoc wireless networks, and do not require fixed infrastructure for their operation.

  17. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks.

    PubMed

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P

    2017-01-07

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to meet the two requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the end-effector of a six degree of freedom (DOF) robot performing a pick-and-place task. Two eye-to-hand cameras, together with image averaging filters, are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot

  18. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to meet the two requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the end-effector of a six degree of freedom (DOF) robot performing a pick-and-place task. Two eye-to-hand cameras, together with image averaging filters, are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot

  19. Real-time seam tracking for rocket thrust chamber manufacturing

    SciTech Connect

    Schmitt, D.J.; Novak, J.L.; Starr, G.P.; Maslakowski, J.E.

    1993-11-01

    A sensor-based control approach for real-time seam tracking of rocket thrust chamber assemblies has been developed to enable automation of a braze paste dispensing process. This approach utilizes a non-contact Multi-Axis Seam Tracking (MAST) sensor to track the seams. The MAST sensor measures capacitance variations between the sensor and the workpiece and produces four varying voltages which are read directly into the robot controller. A PID control algorithm which runs at the application program level has been designed based upon a simple dynamic model of the combined robot and sensor plant. The control algorithm acts on the incoming sensor signals in real-time to guide the robot motion along the seam path. Experiments demonstrate that seams can be tracked at 100 mm/sec within the accuracy required for braze paste dispensing.
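
    A minimal sketch of the kind of discrete PID correction loop described above, assuming the error signal is a lateral seam offset derived from the capacitive sensor voltages; the gains, time step and the commented-out sensor/robot calls are illustrative, not the published controller:

      class PID:
          """Discrete PID controller for a lateral seam-offset error signal."""

          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_error = 0.0

          def update(self, error):
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              # correction to apply along the robot's cross-seam axis
              return self.kp * error + self.ki * self.integral + self.kd * derivative

      pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.01)   # illustrative gains and period
      # offset = read_lateral_offset_from_mast()    # hypothetical sensor read
      # robot.jog_lateral(pid.update(offset))       # hypothetical robot command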

  20. Feasibility of modulation-encoded TOBE CMUTS for real-time 3-D imaging.

    PubMed

    Chee, Ryan K W; Zemp, Roger J

    2015-04-01

    Modulation-encoded top orthogonal to bottom electrode (TOBE) capacitive micromachined ultrasound transducers (CMUTs) are proposed 2-D ultrasound transducer arrays that could allow 3-D images to be acquired in a single acquisition using only N channels for an N × N array. In the proposed modulation-encoding scheme, columns are not only biased, but also modulated with a different frequency for each column. The modulation frequencies are higher than the passband of the CMUT membranes and mix nonlinearly in CMUT cells with acoustic signals to produce acoustic signal sidebands around the modulation carriers in the frequency domain. Thus, signals from elements along a row may be read out simultaneously via frequency-domain multiplexing. We present the theory and feasibility data behind modulation-encoded TOBE CMUTs. We also present experiments showing necessary modifications to the current TOBE design that would allow for crosstalk-mitigated modulation-encoding.
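
    The frequency-domain multiplexing idea can be illustrated in simulation: each column's in-band acoustic signal rides on a distinct carrier above the membrane passband, and mixing the summed row signal with a carrier followed by low-pass filtering recovers that column. The sample rate, carrier frequencies and Gaussian-windowed 5 MHz echoes below are assumptions for illustration only, not the device electronics:

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 200e6                          # sample rate (illustrative)
      t = np.arange(0, 20e-6, 1 / fs)
      carriers = [40e6, 60e6, 80e6]       # one assumed modulation frequency per column

      # simulated in-band (5 MHz) echoes, one per column, with different phases
      acoustic = [np.sin(2 * np.pi * 5e6 * t + ph) * np.exp(-((t - 10e-6) / 3e-6) ** 2)
                  for ph in (0.0, 1.0, 2.0)]

      # row readout: sum of the column signals mixed onto their carriers
      row = sum(a * np.cos(2 * np.pi * fc * t) for a, fc in zip(acoustic, carriers))

      def demodulate(signal, fc, t, fs, cutoff=10e6):
          """Recover one column's baseband signal by mixing and low-pass filtering."""
          b, a = butter(4, cutoff / (fs / 2))
          return 2 * filtfilt(b, a, signal * np.cos(2 * np.pi * fc * t))

      recovered = [demodulate(row, fc, t, fs) for fc in carriers]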

  1. Registration and real-time visualization of transcranial magnetic stimulation with 3-D MR images.

    PubMed

    Noirhomme, Quentin; Ferrant, Matthieu; Vandermeeren, Yves; Olivier, Etienne; Macq, Benoît; Cuisenaire, Olivier

    2004-11-01

    This paper describes a method for registering and visualizing in real-time the results of transcranial magnetic stimulations (TMS) in physical space on the corresponding anatomical locations in MR images of the brain. The method proceeds in three main steps. First, the patient scalp is digitized in physical space with a magnetic-field digitizer, following a specific digitization pattern. Second, a registration process minimizes the mean square distance between those points and a segmented scalp surface extracted from the magnetic resonance image. Following this registration, the physician can follow the change in coil position in real-time through the visualization interface and adjust the coil position to the desired anatomical location. Third, amplitude of motor evoked potentials can be projected onto the segmented brain in order to create functional brain maps. The registration has subpixel accuracy in a study with simulated data, while we obtain a point to surface root-mean-square error of 1.17+/-0.38 mm in a 24 subject study.
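
    The registration step, which minimizes the mean square distance between digitized scalp points and the segmented scalp surface, is commonly implemented as an iterative-closest-point loop with a Kabsch/SVD rigid fit. The sketch below is a generic version of that idea (the surface is approximated by a dense point sampling), not the authors' code:

      import numpy as np
      from scipy.spatial import cKDTree

      def rigid_fit(src, dst):
          """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
          src_c, dst_c = src.mean(0), dst.mean(0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          return R, dst_c - R @ src_c

      def register_points_to_surface(points, surface_samples, iters=50):
          """Iteratively align digitized scalp points to a densely sampled scalp surface."""
          tree = cKDTree(surface_samples)
          R_total, t_total = np.eye(3), np.zeros(3)
          p = points.copy()
          for _ in range(iters):
              _, idx = tree.query(p)                   # closest surface sample per point
              R, t = rigid_fit(p, surface_samples[idx])
              p = p @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total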

  2. Real-time Probabilistic Covariance Tracking with Efficient Model Update

    DTIC Science & Technology

    2012-05-01

    [Only a fragmentary excerpt of this report is indexed.] The covariance-based representation and ICTL are combined with a particle filter for motion parameter estimation, achieving low computational complexity and real-time performance. Index terms: visual tracking, particle filter, covariance descriptor, Riemannian manifolds, incremental learning, model update.

  3. Novel real-time 3D radiological mapping solution for ALARA maximization, D and D assessments and radiological management

    SciTech Connect

    Dubart, Philippe; Hautot, Felix; Morichi, Massimo; Abou-Khalil, Roger

    2015-07-01

    Good management of dismantling and decontamination (D and D) operations and activities requires safety, time savings and thorough radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the operational approach to nuclear measurement, requiring in such emergency situations fast deployment and intervention, quick analysis and fast scenario definition. AREVA, building on the experience gained from its activities at Fukushima and at D and D sites, has developed a novel multi-sensor solution as part of its D and D research approach and methods: a system providing real-time 3D photo-realistic spatial radiation distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile platform (e.g., robot or drone). In this paper, we present our current development based on SLAM (Simultaneous Localization And Mapping) technology and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisition. This enabling technology permits 3D gamma activity cartography in real time. (authors)

  4. Registration of clinical volumes to beams-eye-view images for real-time tracking

    SciTech Connect

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.; Mishra, Pankaj; Berbeco, Ross I.; Keall, Paul J.

    2014-12-15

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations.

  5. Registration of clinical volumes to beams-eye-view images for real-time tracking

    PubMed Central

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.; Mishra, Pankaj; Keall, Paul J.; Berbeco, Ross I.

    2014-01-01

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations. PMID:25471950
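
    The core of the 2D/3D registration described in the two records above is a normalized cross-correlation score between a DRR and an end-of-exhale BEV image. A brute-force sketch over integer in-plane shifts (ignoring rotation and edge wrap-around, both handled more carefully in the actual study) might look like this:

      import numpy as np

      def ncc(a, b):
          """Zero-normalized cross-correlation of two equally sized images."""
          a, b = a - a.mean(), b - b.mean()
          return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def best_inplane_shift(drr, bev, search=20):
          """Exhaustive search for the integer (dy, dx) shift of the DRR that best
          matches the BEV image; np.roll wrap-around at the edges is ignored here."""
          best_score, best_shift = -np.inf, (0, 0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  shifted = np.roll(np.roll(drr, dy, axis=0), dx, axis=1)
                  score = ncc(shifted, bev)
                  if score > best_score:
                      best_score, best_shift = score, (dy, dx)
          return best_shift, best_score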

  6. An FPGA-Based Real-Time Maximum Likelihood 3D Position Estimation for a Continuous Crystal PET Detector

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Xiao, Yong; Cheng, Xinyi; Li, Deng; Wang, Liwei

    2016-02-01

    For the continuous crystal-based positron emission tomography (PET) detector built in our lab, a maximum likelihood algorithm adapted for implementation on a field programmable gate array (FPGA) is proposed to estimate the three-dimensional (3D) coordinate of interaction position with the single-end detected scintillation light response. The row-sum and column-sum readout scheme organizes the 64 channels of photomultiplier (PMT) into eight row signals and eight column signals to be readout for X- and Y-coordinates estimation independently. By the reference events irradiated in a known oblique angle, the probability density function (PDF) for each depth-of-interaction (DOI) segment is generated, by which the reference events in perpendicular irradiation are assigned to DOI segments for generating the PDFs for X and Y estimation in each DOI layer. Evaluated by the experimental data, the algorithm achieves an average X resolution of 1.69 mm along the central X-axis, and DOI resolution of 3.70 mm over the whole thickness (0-10 mm) of crystal. The performance improvements from 2D estimation to the 3D algorithm are also presented. Benefiting from abundant resources of FPGA and a hierarchical storage arrangement, the whole algorithm can be implemented into a middle-scale FPGA. By a parallel structure in pipelines, the 3D position estimator on the FPGA can achieve a processing throughput of 15 M events/s, which is sufficient for the requirement of real-time PET imaging.
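
    The maximum-likelihood lookup can be sketched in software as follows: for each candidate position bin, sum the stored log-probabilities of the observed amplitude bin in every row/column channel and take the argmax. The array shapes and the amplitude binning are assumptions; the FPGA pipeline itself is not reproduced here:

      import numpy as np

      def ml_position(event_bins, log_pdfs):
          """Maximum-likelihood position lookup for one event.

          event_bins : measured signals, one integer amplitude-bin index per channel,
                       shape (n_channels,)
          log_pdfs   : table of log probabilities built from reference events,
                       shape (n_positions, n_channels, n_amplitude_bins)
          Returns the index of the most likely position bin.
          """
          n_positions, n_channels, _ = log_pdfs.shape
          # sum the log-likelihood of the observed amplitude bin in every channel
          ll = log_pdfs[:, np.arange(n_channels), event_bins].sum(axis=1)
          return int(np.argmax(ll))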

  7. Real-time 3D and 4D Fourier domain Doppler optical coherence tomography based on dual graphics processing units.

    PubMed

    Huang, Yong; Liu, Xuan; Kang, Jin U

    2012-09-01

    We present real-time 3D (2D cross-sectional image plus time) and 4D (3D volume plus time) phase-resolved Doppler OCT (PRDOCT) imaging based on configuration of dual graphics processing units (GPU). A GPU-accelerated phase-resolving processing algorithm was developed and implemented. We combined a structural image intensity-based thresholding mask and average window method to improve the signal-to-noise ratio of the Doppler phase image. A 2D simultaneous display of the structure and Doppler flow images was presented at a frame rate of 70 fps with an image size of 1000 × 1024 (X × Z) pixels. A 3D volume rendering of tissue structure and flow images-each with a size of 512 × 512 pixels-was presented 64.9 milliseconds after every volume scanning cycle with a volume size of 500 × 256 × 512 (X × Y × Z) voxels, with an acquisition time window of only 3.7 seconds. To the best of our knowledge, this is the first time that an online, simultaneous structure and Doppler flow volume visualization has been achieved. Maximum system processing speed was measured to be 249,000 A-scans per second with each A-scan size of 2048 pixels.

  8. Real-time 3D and 4D Fourier domain Doppler optical coherence tomography based on dual graphics processing units

    PubMed Central

    Huang, Yong; Liu, Xuan; Kang, Jin U.

    2012-01-01

    We present real-time 3D (2D cross-sectional image plus time) and 4D (3D volume plus time) phase-resolved Doppler OCT (PRDOCT) imaging based on configuration of dual graphics processing units (GPU). A GPU-accelerated phase-resolving processing algorithm was developed and implemented. We combined a structural image intensity-based thresholding mask and average window method to improve the signal-to-noise ratio of the Doppler phase image. A 2D simultaneous display of the structure and Doppler flow images was presented at a frame rate of 70 fps with an image size of 1000 × 1024 (X × Z) pixels. A 3D volume rendering of tissue structure and flow images—each with a size of 512 × 512 pixels—was presented 64.9 milliseconds after every volume scanning cycle with a volume size of 500 × 256 × 512 (X × Y × Z) voxels, with an acquisition time window of only 3.7 seconds. To the best of our knowledge, this is the first time that an online, simultaneous structure and Doppler flow volume visualization has been achieved. Maximum system processing speed was measured to be 249,000 A-scans per second with each A-scan size of 2048 pixels. PMID:23024910
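
    The phase-resolved step in the two records above amounts to taking the angle of the lag-1 autocorrelation between adjacent A-scans, with an intensity mask and an averaging window to suppress noise. A minimal CPU-side sketch (NumPy/SciPy, not the dual-GPU pipeline) is:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def doppler_phase(bscan, intensity_thresh=0.05, win=3):
          """Phase-resolved Doppler image from a complex B-scan (depth x A-scan index).

          The Doppler signal is the angle of the lag-1 autocorrelation along the
          A-scan axis, smoothed with an averaging window; low-intensity pixels
          are masked to zero.
          """
          ac = bscan[:, 1:] * np.conj(bscan[:, :-1])          # adjacent A-scan products
          ac = uniform_filter(ac.real, win) + 1j * uniform_filter(ac.imag, win)
          phase = np.angle(ac)
          mask = np.abs(bscan[:, 1:]) > intensity_thresh * np.abs(bscan).max()
          return np.where(mask, phase, 0.0)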

  9. Real-time 3D imaging of Haines jumps in porous media flow

    PubMed Central

    Berg, Steffen; Ott, Holger; Klapp, Stephan A.; Schwing, Alex; Neiteler, Rob; Brussee, Niels; Makurat, Axel; Leu, Leon; Enzmann, Frieder; Schwarz, Jens-Oliver; Kersten, Michael; Irvine, Sarah; Stampanoni, Marco

    2013-01-01

    Newly developed high-speed, synchrotron-based X-ray computed microtomography enabled us to directly image pore-scale displacement events in porous rock in real time. Common approaches to modeling macroscopic fluid behavior are phenomenological, have many shortcomings, and lack consistent links to elementary pore-scale displacement processes, such as Haines jumps and snap-off. Unlike the common singular pore jump paradigm based on observations of restricted artificial capillaries, we found that Haines jumps typically cascade through 10–20 geometrically defined pores per event, accounting for 64% of the energy dissipation. Real-time imaging provided a more detailed fundamental understanding of the elementary processes in porous media, such as hysteresis, snap-off, and nonwetting phase entrapment, and it opens the way for a rigorous process for upscaling based on thermodynamic models. PMID:23431151

  10. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

    We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.

  11. 3D Markov Process for Traffic Flow Prediction in Real-Time

    PubMed Central

    Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi

    2016-01-01

    Recently, the correct estimation of traffic flow has begun to be considered an essential component in intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially- and temporally-adjacent traffic states; and (2) the relationship between the adjacent roads on the spatiotemporal domain is represented by cliques in MRF and the clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using data from expressway traffic that are provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further. PMID:26821025

  12. Real-time sensing of mouth 3-D position and orientation

    NASA Astrophysics Data System (ADS)

    Burdea, Grigore C.; Dunn, Stanley M.; Mallik, Matsumita; Jun, Heesung

    1990-07-01

    A key problem in using digital subtraction radiography in dentistry is the ability to reposition the X-ray source and patient so as to reproduce an identical imaging geometry. In this paper we describe an approach to solving this problem based on real time sensing of the 3-D position and orientation of the patient's mouth. The research described here is part of a program which has a long term goal to develop an automated digital subtraction radiography system. This will allow the patient and X-ray source to be accurately repositioned without the mechanical fixtures that are presently used to preserve the imaging geometry. If we can measure the position and orientation of the mouth, then the desired position of the source can be computed as the product of the transformation matrices describing the desired imaging geometry and the position vector of the targeted tooth. Position and orientation of the mouth is measured by a real time sensing device using low-frequency magnetic field technology. We first present the problem of repositioning the patient and source and then outline our analytic solution. Then we describe an experimental setup to measure the accuracy, reproducibility and resolution of the sensor and present results of preliminary experiments.
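
    The repositioning computation described above chains homogeneous transforms: the measured mouth pose from the magnetic tracker multiplied by the stored desired imaging geometry yields the target source pose. A small sketch with illustrative 4x4 matrices (identity rotations and made-up offsets):

      import numpy as np

      def pose(R, t):
          """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
          T = np.eye(4)
          T[:3, :3], T[:3, 3] = R, t
          return T

      # mouth pose measured by the magnetic tracker (illustrative values, metres)
      T_world_mouth = pose(np.eye(3), np.array([0.10, 0.02, 0.30]))
      # desired imaging geometry recorded at the baseline exam (illustrative)
      T_mouth_source = pose(np.eye(3), np.array([0.00, -0.15, 0.05]))

      # desired X-ray source pose in world coordinates
      T_world_source = T_world_mouth @ T_mouth_source
      target_position = T_world_source[:3, 3]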

  13. Real-Time Estimation of 3-D Needle Shape and Deflection for MRI-Guided Interventions

    PubMed Central

    Park, Yong-Lae; Elayaperumal, Santhi; Daniel, Bruce; Ryu, Seok Chang; Shin, Mihye; Savall, Joan; Black, Richard J.; Moslehi, Behzad; Cutkosky, Mark R.

    2015-01-01

    We describe a MRI-compatible biopsy needle instrumented with optical fiber Bragg gratings for measuring bending deflections of the needle as it is inserted into tissues. During procedures, such as diagnostic biopsies and localized treatments, it is useful to track any tool deviation from the planned trajectory to minimize positioning errors and procedural complications. The goal is to display tool deflections in real time, with greater bandwidth and accuracy than when viewing the tool in MR images. A standard 18 ga × 15 cm inner needle is prepared using a fixture, and 350-μm-deep grooves are created along its length. Optical fibers are embedded in the grooves. Two sets of sensors, located at different points along the needle, provide an estimate of the bent profile, as well as temperature compensation. Tests of the needle in a water bath showed that it produced no adverse imaging artifacts when used with the MR scanner. PMID:26405428
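
    One common way to turn such FBG readings into a deflection estimate, not necessarily the calibration used in this work, is to convert the differential strain of opposing fibers into curvature and integrate twice along the needle, assuming a clamped base. A sketch with assumed inputs (strain arrays, fiber offset r, sensor positions s):

      import numpy as np

      def deflection_from_strain(strain_a, strain_b, r, s):
          """Estimate bending deflection along a needle from two opposing fibers.

          strain_a, strain_b : strain at each sensor location for the two fibers
          r : radial offset of the fibers from the neutral axis (m)
          s : arc-length positions of the sensors along the needle (m), increasing
          Returns (dense arc-length samples, deflection), assuming a clamped base.
          """
          curvature = (strain_a - strain_b) / (2.0 * r)    # kappa = (eps_a - eps_b) / 2r
          s_dense = np.linspace(s[0], s[-1], 200)
          kappa = np.interp(s_dense, s, curvature)
          ds = np.diff(s_dense)
          slope = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * ds)))
          defl = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * ds)))
          return s_dense, defl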

  14. Concept for an airborne real-time ISR system with multi-sensor 3D data acquisition

    NASA Astrophysics Data System (ADS)

    Haraké, Laura; Schilling, Hendrik; Blohm, Christian; Hillemann, Markus; Lenz, Andreas; Becker, Merlin; Keskin, Göksu; Middelmann, Wolfgang

    2016-10-01

    In modern aerial Intelligence, Surveillance and Reconnaissance operations, precise 3D information becomes inevitable for increased situation awareness. In particular, object geometries represented by texturized digital surface models constitute an alternative to a pure evaluation of radiometric measurements. Besides the 3D data's level of detail aspect, its availability is time-relevant in order to make quick decisions. Expanding the concept of our preceding remote sensing platform developed together with OHB System AG and Geosystems GmbH, in this paper we present an airborne multi-sensor system based on a motor glider equipped with two wing pods; one carries the sensors, whereas the second pod downlinks sensor data to a connected ground control station by using the Aerial Reconnaissance Data System of OHB. An uplink is created to receive remote commands from the manned mobile ground control station, which on its part processes and evaluates incoming sensor data. The system allows the integration of efficient image processing and machine learning algorithms. In this work, we introduce a near real-time approach for the acquisition of a texturized 3D data model with the help of an airborne laser scanner and four high-resolution multi-spectral (RGB, near-infrared) cameras. Image sequences from nadir and off-nadir cameras permit to generate dense point clouds and to texturize also facades of buildings. The ground control station distributes processed 3D data over a linked geoinformation system with web capabilities to off-site decision-makers. As the accurate acquisition of sensor data requires boresight calibrated sensors, we additionally examine the first steps of a camera calibration workflow.

  15. Intracellular nanomanipulation by a photonic-force microscope with real-time acquisition of a 3D stiffness matrix

    NASA Astrophysics Data System (ADS)

    Bertseva, E.; Singh, A. S. G.; Lekki, J.; Thévenaz, P.; Lekka, M.; Jeney, S.; Gremaud, G.; Puttini, S.; Nowak, W.; Dietler, G.; Forró, L.; Unser, M.; Kulik, A. J.

    2009-07-01

    A traditional photonic-force microscope (PFM) results in huge sets of data, which requires tedious numerical analysis. In this paper, we propose instead an analog signal processor to attain real-time capabilities while retaining the richness of the traditional PFM data. Our system is devoted to intracellular measurements and is fully interactive through the use of a haptic joystick. Using our specialized analog hardware along with a dedicated algorithm, we can extract the full 3D stiffness matrix of the optical trap in real time, including the off-diagonal cross-terms. Our system is also capable of simultaneously recording data for subsequent offline analysis. This allows us to check that a good correlation exists between the classical analysis of stiffness and our real-time measurements. We monitor the PFM beads using an optical microscope. The force-feedback mechanism of the haptic joystick helps us in interactively guiding the bead inside living cells and collecting information from its (possibly anisotropic) environment. The instantaneous stiffness measurements are also displayed in real time on a graphical user interface. The whole system has been built and is operational; here we present early results that confirm the consistency of the real-time measurements with offline computations.
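
    For the offline cross-check mentioned above, a standard route to the full 3D stiffness matrix, including the off-diagonal cross-terms, is the equipartition relation K = k_B T C^{-1} applied to the covariance C of the recorded bead positions. The sketch below is that classical offline computation, not the analog processor:

      import numpy as np

      k_B = 1.380649e-23          # Boltzmann constant, J/K
      T_kelvin = 298.0            # assumed bath temperature, K

      def stiffness_matrix(positions):
          """Full 3x3 trap stiffness (N/m) from recorded bead positions (N x 3, metres)
          via the equipartition relation K = k_B * T * C^{-1}."""
          C = np.cov(positions, rowvar=False)       # 3x3 position covariance
          return k_B * T_kelvin * np.linalg.inv(C)

      # example: simulated bead positions with ~10 nm isotropic fluctuations,
      # which should give roughly 4e-5 N/m on the diagonal
      rng = np.random.default_rng(0)
      K = stiffness_matrix(rng.normal(scale=10e-9, size=(100_000, 3)))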

  16. Defense Additive Manufacturing: DOD Needs to Systematically Track Department-wide 3D Printing Efforts

    DTIC Science & Technology

    2015-10-01

    [Only a fragmentary excerpt of this report is indexed.] ...shipping parts. According to Army officials, additive manufacturing offers customers the opportunity to enhance value when the lead time needed to...additive manufacturing for design and prototyping and for some production, such as parts for medical applications; and it is conducting research to...qualifying materials and certifying parts. However, DOD does not systematically track additive manufacturing efforts, to include (1) all activities

  17. 3D Real-Time Echocardiography Combined with Mini Pressure Wire Generate Reliable Pressure-Volume Loops in Small Hearts

    PubMed Central

    Linden, Katharina; Dewald, Oliver; Gatzweiler, Eva; Seehase, Matthias; Duerr, Georg Daniel; Dörner, Jonas; Kleppe, Stephanie

    2016-01-01

    Background Pressure-volume loops (PVL) provide vital information regarding ventricular performance and pathophysiology in cardiac disease. Unfortunately, acquisition of PVL by conductance technology is not feasible in neonates and small children due to the available human catheter size and resulting invasiveness. The aim of the study was to validate the accuracy of PVL in small hearts using volume data obtained by real-time three-dimensional echocardiography (3DE) and simultaneously acquired pressure data. Methods In 17 piglets (weight range: 3.6–8.0 kg) left ventricular PVL were generated by 3DE and simultaneous recordings of ventricular pressure using a mini pressure wire (PVL3D). PVL3D were compared to conductance catheter measurements (PVLCond) under various hemodynamic conditions (baseline, alpha-adrenergic stimulation with phenylephrine, beta-adrenoreceptor-blockage using esmolol). In order to validate the accuracy of 3D volumetric data, cardiac magnetic resonance imaging (CMR) was performed in another 8 piglets. Results Correlation between CMR- and 3DE-derived volumes was good (enddiastolic volume: mean bias -0.03ml ±1.34ml). Computation of PVL3D in small hearts was feasible and comparable to results obtained by conductance technology. Bland-Altman analysis showed a low bias between PVL3D and PVLCond. Systolic and diastolic parameters were closely associated (Intraclass-Correlation Coefficient for: systolic myocardial elastance 0.95, arterial elastance 0.93, diastolic relaxation constant tau 0.90, indexed end-diastolic volume 0.98). Hemodynamic changes under different conditions were well detected by both methods (ICC 0.82 to 0.98). Inter- and intra-observer coefficients of variation were below 5% for all parameters. Conclusions PVL3D generated from 3DE combined with mini pressure wire represent a novel, feasible and reliable method to assess different hemodynamic conditions of cardiac function in hearts comparable to neonate and infant size. This

  18. Real-time 3-D SAFT-UT system evaluation and validation

    SciTech Connect

    Doctor, S.R.; Schuster, G.J.; Reid, L.D.; Hall, T.E.

    1996-09-01

    SAFT-UT technology is shown to provide significant enhancements to the inspection of materials used in US nuclear power plants. This report provides guidelines for the implementation of SAFT-UT technology and shows the results from its application. An overview of the development of SAFT-UT is provided so that the reader may become familiar with the technology. Then the basic fundamentals are presented with an extensive list of references. A comprehensive operating procedure, which is used in conjunction with the SAFT-UT field system developed by Pacific Northwest Laboratory (PNL), provides the recipe for both SAFT data acquisition and analysis. The specification for the hardware implementation is provided for the SAFT-UT system along with a description of the subsequent developments and improvements. One development of technical interest is the SAFT real time processor. Performance of the real-time processor is impressive and comparison is made of this dedicated parallel processor to a conventional computer and to the newer high-speed computer architectures designed for image processing. Descriptions of other improvements, including a robotic scanner, are provided. Laboratory parametric and application studies, performed by PNL and not previously reported, are discussed followed by a section on field application work in which SAFT was used during inservice inspections of operating reactors.
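
    At its core, SAFT reconstruction is a delay-and-sum operation over pulse-echo A-scans collected at many transducer positions. The simplified 2-D NumPy sketch below (hypothetical array shapes and names; the report's real-time processor implements this on dedicated parallel hardware) shows the basic principle.

      import numpy as np

      def saft_2d(ascans, x_elems, fs, c, x_grid, z_grid):
          """Delay-and-sum SAFT image from pulse-echo A-scans.

          ascans : (n_elems, n_samples) pulse-echo records, one per transducer position
          x_elems: (n_elems,) lateral transducer positions in metres
          fs     : sampling frequency in Hz;  c : sound speed in m/s
          x_grid, z_grid : 1-D arrays of image pixel coordinates in metres
          """
          n_elems, n_samples = ascans.shape
          image = np.zeros((z_grid.size, x_grid.size))
          for iz, z in enumerate(z_grid):
              for ix, x in enumerate(x_grid):
                  # two-way travel time from each element to the pixel and back
                  dist = np.hypot(x_elems - x, z)
                  idx = np.round(2.0 * dist / c * fs).astype(int)
                  valid = idx < n_samples
                  image[iz, ix] = ascans[np.arange(n_elems)[valid], idx[valid]].sum()
          return image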

  19. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding with the boundary between LV cavity and LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged by 2D and reconstructed off-line and RT3DE data. Volume estimation obtained by this new semi-automatic method shows an excellent correlation with those obtained by manual tracing (r = 0.992). Dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows to navigate into the reconstructed volume and to display any section of the volume.
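
    Level set surface evolution of the kind used here advances a signed function phi under a speed F according to phi_t + F·|∇phi| = 0. The sketch below implements a single Godunov upwind time step on a 2-D grid; it is only a schematic of the generic equation, not the authors' modified PDE or their conservation-law solver.

      import numpy as np

      def level_set_step(phi, speed, dt=0.5, h=1.0):
          """One upwind time step of phi_t + F*|grad(phi)| = 0 on a 2-D grid."""
          # one-sided differences (periodic boundaries via np.roll keep the sketch short)
          dxm = (phi - np.roll(phi, 1, axis=1)) / h
          dxp = (np.roll(phi, -1, axis=1) - phi) / h
          dym = (phi - np.roll(phi, 1, axis=0)) / h
          dyp = (np.roll(phi, -1, axis=0) - phi) / h
          # Godunov upwind gradient magnitudes for F > 0 and F < 0
          grad_plus = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                              np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
          grad_minus = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                               np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
          return phi - dt * (np.maximum(speed, 0) * grad_plus +
                             np.minimum(speed, 0) * grad_minus)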

  20. Improvements to the ShipIR/NTCS adaptive track gate algorithm and 3D flare particle model

    NASA Astrophysics Data System (ADS)

    Ramaswamy, Srinivasan; Vaitekunas, David A.; Gunter, Willem H.; February, Faith J.

    2017-05-01

    A key component in any image-based tracking system is the adaptive tracking algorithm used to segment the image into potential targets, rank-and-select the best candidate target, and gate the selected target to further improve tracker performance. Similarly, a key component in any soft-kill response to an incoming guided missile is the flare/chaff decoy used to distract or seduce the seeker homing system away from the naval platform. This paper describes the recent improvements to the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR). Efforts to analyse and match the 3D flare particle model against actual IR measurements of the Chemring TALOS IR round resulted in further refinement of the 3D flare particle distribution. The changes in the flare model characteristics were significant enough to require an overhaul to the adaptive track gate (ATG) algorithm in the way it detects the presence of flare decoys and reacquires the target after flare separation. A series of test scenarios are used to demonstrate the impact of the new flare and ATG on IR tactics simulation.

  1. Close to real-time robust pedestrian detection and tracking

    NASA Astrophysics Data System (ADS)

    Lipetski, Y.; Loibner, G.; Sidla, O.

    2015-03-01

    Fully automated video-based pedestrian detection and tracking is a challenging task with many practical and important applications. We present our work aimed at robust and, at the same time, close to real-time tracking of pedestrians. The presented approach is robust to occlusions and varying lighting conditions and generalizes to arbitrary video data. The core tracking approach is built upon the tracking-by-detection principle. We describe our cascaded HOG detector with successive CNN verification in detail. For the tracking and re-identification task, we performed an extensive analysis of appearance-based features as well as their combinations. The tracker was tested on many hours of video data covering different scenarios; the results are presented and discussed.
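
    OpenCV ships a pretrained HOG + linear-SVM people detector that can stand in for the first stage of a detector cascade like the one described above; the CNN verification stage and the appearance-based re-identification are not included in this sketch, and the video path is hypothetical.

      import cv2

      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      cap = cv2.VideoCapture("pedestrians.mp4")   # hypothetical input video
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          # detection candidates; a verification stage (e.g. a CNN) would filter these
          rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
          for (x, y, w, h) in rects:
              cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
          cv2.imshow("detections", frame)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break
      cap.release()
      cv2.destroyAllWindows()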

  2. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in better performance of speckle tracking. However, those models do not consider common transformations applied to obtain the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  3. SU-C-201-04: Noise and Temporal Resolution in a Near Real-Time 3D Dosimeter

    SciTech Connect

    Rilling, M; Goulet, M; Beaulieu, L; Archambault, L; Thibault, S

    2016-06-15

    Purpose: To characterize the performance of a real-time three-dimensional scintillation dosimeter in terms of signal-to-noise ratio (SNR) and temporal resolution of 3D dose measurements. This study quantifies its efficiency in measuring low dose levels characteristic of EBRT dynamic treatments, and in reproducing field profiles for varying multileaf collimator (MLC) speeds. Methods: The dosimeter prototype uses a plenoptic camera to acquire continuous images of the light field emitted by a 10×10×10 cm³ plastic scintillator. Using EPID acquisitions, ray tracing-based iterative tomographic algorithms allow millimeter-sized reconstruction of relative 3D dose distributions. Measurements were taken at 6 MV, 400 MU/min with the scintillator centered at the isocenter, first receiving doses from 1.4 to 30.6 cGy. Dynamic measurements were then performed by closing half of the MLCs at speeds of 0.67 to 2.5 cm/s, at 0° and 90° collimator angles. A reference static half-field was obtained for measured profile comparison. Results: The SNR steadily increases as a function of dose and reaches a clinically adequate plateau of 80 at 10 cGy. Below this, the decrease in light collected and increase in pixel noise diminishes the SNR; nonetheless, the EPID acquisitions and the voxel correlation employed in the reconstruction algorithms result in suitable SNR values (>75) even at low doses. For dynamic measurements at varying MLC speeds, central relative dose profiles are characterized by gradients at %D50 of 8.48 to 22.7 %/mm. These values converge towards the 32.8 %/mm gradient measured for the static reference field profile, but are limited by the dosimeter’s current acquisition rate of 1 Hz. Conclusion: This study emphasizes the efficiency of the 3D dose distribution reconstructions, while identifying limits of the current prototype’s temporal resolution in terms of dynamic EBRT parameters. This work paves the way for providing an optimized, second

  4. Real-time Awake Animal Motion Tracking System for SPECT Imaging

    SciTech Connect

    Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon; Weisenberger, A G; Stolin, A; McKisson, J; Smith, M F

    2008-01-01

    Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur for each set of images. The system using the three cameras automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to the scanner operation. The system has undergone testing using both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.
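
    Once a marker has been matched across the calibrated cameras, its 3D position can be recovered by linear triangulation. The generic direct-linear-transform sketch below (assumed 3x4 projection matrices; not the authors' implementation) shows the computation for a single marker.

      import numpy as np

      def triangulate(proj_mats, pixels):
          """Linear (DLT) triangulation of one 3-D point.

          proj_mats : list of 3x4 camera projection matrices (calibrated cameras)
          pixels    : list of matching (u, v) image coordinates, one per camera
          """
          rows = []
          for P, (u, v) in zip(proj_mats, pixels):
              rows.append(u * P[2] - P[0])
              rows.append(v * P[2] - P[1])
          _, _, vt = np.linalg.svd(np.asarray(rows))
          X = vt[-1]
          return X[:3] / X[3]          # dehomogenise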

  5. Impact of area strain by 3D speckle tracking on clinical outcome in patients after acute myocardial infarction.

    PubMed

    Shin, Sung-Hee; Suh, Young Ju; Baek, Yong-Soo; Lee, Man-Jong; Park, Sang-Don; Kwon, Sung-Woo; Woo, Seong-Ill; Kim, Dae-Hyeok; Park, Keum-Soo; Kwan, Jun

    2016-12-01

    Three-dimensional (3D) speckle tracking echocardiography (STE) has been developed to overcome the limitations of two-dimensional (2D) STE and has been applied in several clinical settings. However, no data exist about the prognostic value of 3DSTE-based strain on clinical outcome after myocardial infarction (MI). This study was designed to investigate the prognostic value of area strain (AS) by 3D speckle tracking in predicting clinical outcome after acute MI. We assessed 96 patients (62±14 years, 72% male) with acute MI who had undergone coronary angiography. Clinical parameters and conventional echocardiographic measurements including the left atrial (LA) size and tissue Doppler measurements were evaluated. The global left ventricular (LV) AS was measured using 3D speckle tracking software. The relationship between the AS and the clinical outcome of death or hospitalization for heart failure (HF) was assessed. During a median follow-up of 33±10 months, the primary endpoint of death or HF occurred in 12 patients (12.5%). AS was predictive of death or HF after adjustment for age, gender, peak CK-MB, LA volume, LV end-systolic volume, LV mass, the ratio of early mitral inflow velocity to early mitral annular velocity, and LV ejection fraction in a multivariate Cox model (HR 1.23, 95% CI 1.02-1.47, P=.03). In addition, AS added incremental value in predicting death or heart failure over a model based on clinical and standard echocardiographic measures (P=.008). AS is independently associated with increased risk of death or HF after acute MI, suggesting that it can be a useful prognostic parameter in patients following MI. © 2016, Wiley Periodicals, Inc.

  6. Laetoli’s lost tracks: 3D generated mean shape and missing footprints

    PubMed Central

    Bennett, M. R.; Reynolds, S. C.; Morse, S. A.; Budka, M.

    2016-01-01

    The Laetoli site (Tanzania) contains the oldest known hominin footprints, and their interpretation remains open to debate, despite over 35 years of research. The two hominin trackways present are parallel to one another, one of which is a composite formed by at least two individuals walking in single file. Most researchers have focused on the single, clearly discernible G1 trackway while the G2/3 trackway has been largely dismissed due to its composite nature. Here we report the use of a new technique that allows us to decouple the G2 and G3 tracks for the first time. In so doing we are able to quantify the mean footprint topology of the G3 trackway and render it useable for subsequent data analyses. By restoring the effectively ‘lost’ G3 track, we have doubled the available data on some of the rarest traces directly associated with our Pliocene ancestors. PMID:26902912

  7. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscopes, barometers and, most importantly, cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high-performance platforms at low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on stereo vision and inertial navigation to determine its own movement as well as to create a model of the sensed environment.
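
    For the stereo-vision part, a dense disparity map from a rectified image pair can be computed with OpenCV's block matcher. This is only a generic sketch with hypothetical file names, not the platform's actual pipeline.

      import cv2

      # hypothetical rectified stereo pair from the platform's cameras
      left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
      right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

      stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
      disparity = stereo.compute(left, right)   # fixed-point disparities, scaled by 16

      # Depth follows from Z = f * B / d for focal length f and baseline B (both known
      # from calibration); feeding successive depth maps and IMU readings into a pose
      # estimator yields the platform motion and environment model described above.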

  8. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. Thus, this information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
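
    Pixel-level cloud detection in total sky images is commonly based on the red-to-blue channel ratio; the simple threshold rule below is only a stand-in for the supervised classifier trained by the authors.

      import numpy as np

      def cloud_mask(rgb, threshold=0.6):
          """Boolean cloud mask from a sky image via the red/blue ratio heuristic.

          rgb : (H, W, 3) float or uint8 image; clear sky scatters blue strongly,
          so a high red/blue ratio suggests cloud.
          """
          rgb = np.asarray(rgb, dtype=float)
          ratio = rgb[..., 0] / np.clip(rgb[..., 2], 1e-6, None)
          return ratio > threshold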

  9. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    SciTech Connect

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. Thus, this information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  10. Ice crystallization in porous building materials: assessing damage using real-time 3D monitoring

    NASA Astrophysics Data System (ADS)

    Deprez, Maxim; De Kock, Tim; De Schutter, Geert; Cnudde, Veerle

    2017-04-01

    Frost action is one of the main causes of deterioration of porous building materials in regions at middle to high latitudes. Damage will occur when the internal stresses due to ice formation become larger than the strength of the material. Hence, the sensitivity of the material to frost damage is partly defined by the structure of the solid body. On the other hand, the size, shape and interconnection of pores manage the water distribution in the building material, and therefore the characteristics of the pore space control the potential to form ice crystals (Ruedrich et al., 2011). In order to assess the damage to building materials by ice crystallization, much effort has been put into identifying the mechanisms behind the stress build-up. First of all, the volumetric expansion of 9% (Hirschwald, 1908) during the transition of water to ice should be mentioned. Under natural circumstances, however, water saturation degrees within natural rocks or concrete cannot reach a damaging value. Therefore, linear growth pressure (Scherer, 1999), as well as several mechanisms triggered by water redistribution during freezing (Powers and Helmuth, 1953; Everett, 1961), are more likely responsible for damage due to freezing. Nevertheless, these theories are based on indirect observations and models, and thus direct evidence that reveals the exact damage mechanism under certain conditions is still lacking. To obtain this proof, in-situ information needs to be acquired while a freezing process is performed. X-ray computed tomography has proven to be of great value in material research. Recent advances at the Ghent University Centre for Tomography (UGCT) have already allowed dynamic 3D imaging of crack growth in natural rock during freeze-thaw cycles (De Kock et al., 2015). Consequently, this imaging technique shows great potential for evaluating the different stress build-up mechanisms. It is required to cover a range of materials with different petrophysical properties to achieve

  11. An improved real-time visual tracking method for space non-cooperative target

    NASA Astrophysics Data System (ADS)

    Zhang, Limin; Zhu, Feng; Hao, Yingming

    2016-10-01

    In order to enable the non-cooperative rendezvous, capture, and removal of large space debris, robust and fast tracking of the non-cooperative target is needed. This paper proposes an improved algorithm for real-time visual tracking of a space non-cooperative target based on a three-dimensional model, and it does not require any artificial markers. The non-cooperative target is assumed to have a known 3D model and to remain constantly in the field of view of the camera mounted on the chaser. Space non-cooperative targets are regarded as weakly textured man-made objects for which the design documents of the 3D model are available. Space appears black, so we can assume the object is in empty space, only the object is visible, and the background of the image is dark. Because edge features offer good invariance to illumination changes and image noise, our method relies on monocular vision and uses 3D-2D correspondences between the 3D model and its corresponding 2D edges in the image. The paper proposes to remove the sample points that are susceptible to false matches, based on geometrical distance under perspective projection of the 3D model. To allow better robustness, we compare local region similarity to obtain better matches between sample points and edge points. Our algorithm is shown to be efficient and achieves improved accuracy without significant computational burden. The results show potential tracking performance with mean errors of < 3 degrees and < 1.5% of range.
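
    The 3D-2D correspondence step starts by projecting sample points of the known model into the image under the current pose estimate, after which matching image edges are searched for along the projected contours. The pinhole projection part is sketched below with generic names; the visibility test, the geometric-distance rejection and the local-region similarity check described above are omitted.

      import numpy as np

      def project_points(points_3d, K, R, t):
          """Project 3-D model points into the image with a pinhole camera.

          points_3d : (N, 3) sample points on the model edges (object frame)
          K         : 3x3 intrinsic matrix;  R, t : object-to-camera pose
          Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
          """
          cam = points_3d @ R.T + t            # transform into the camera frame
          in_front = cam[:, 2] > 0
          uvw = cam @ K.T
          uv = uvw[:, :2] / uvw[:, 2:3]
          return uv, in_front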

  12. Effects of limited resources in 3D real-time simulation of an extended ECHO complex adaptive system model

    NASA Astrophysics Data System (ADS)

    Dominiak, Dana M.; Rinaldo, Frank; Evans, Martha W.

    2001-08-01

    An evolutionary model of adaptive agents called `ECHO' was proposed by John Holland. ECHO is a first step toward mathematical theory in the field of complex adaptive systems. Researchers in numerous disciplines have used the existing ECHO simulation both to model and to explain complex system behaviors. This paper describes the effects of limited resources in a 3D simulation of an extended Holland ECHO model. In this simulation, adaptive agents move about the ECHO terrain and interact with other agents in real-time. Adaptive agents are bred using a genetic algorithm. The model's environment contains limited resources, represented as symbols. Elaborate relationships are developed by the agents to utilize resources through both competition and cooperation. Researchers have a better tool by which to identify and explain complex adaptive system behavior by observing the emergence of complexity first hand.

  13. Incorporation of 3-D Scanning Lidar Data into Google Earth for Real-time Air Pollution Observation

    NASA Astrophysics Data System (ADS)

    Chiang, C.; Nee, J.; Das, S.; Sun, S.; Hsu, Y.; Chiang, H.; Chen, S.; Lin, P.; Chu, J.; Su, C.; Lee, W.; Su, L.; Chen, C.

    2011-12-01

    The 3-D Differential Absorption Scanning Lidar (DIASL) system has been designed to be small and lightweight, suitable for installation in various vehicles and places for monitoring air pollutants, and it displays detailed real-time temporal and spatial variability of trace gases via Google Earth. The fast scanning techniques and visual information can rapidly identify the locations and sources of the polluting gases and assess the most affected areas. This helps the Environmental Protection Agency (EPA) protect public health and abate air pollution as quickly as possible. The distributions of the atmospheric pollutants and their relationship with local meteorological parameters measured with ground-based instruments will also be discussed. Details will be presented in the upcoming symposium.

  14. Tracking immune-related cell responses to drug delivery microparticles in 3D dense collagen matrix.

    PubMed

    Obarzanek-Fojt, Magdalena; Curdy, Catherine; Loggia, Nicoletta; Di Lena, Fabio; Grieder, Kathrin; Bitar, Malak; Wick, Peter

    2016-10-01

    Beyond the therapeutic purpose, the impact of drug delivery microparticles on local tissue and inflammatory responses remains to be further elucidated, specifically for reactions mediated by the host immune cells. Such immediate and prolonged reactions may adversely influence the release efficacy and intended therapeutic pathway. The lack of suitable in vitro platforms limits our ability to gain insight into the nature of immune responses at a single cell level. In order to establish an in vitro 3D system mimicking the connective host tissue counterpart, we utilized reproducible, compressed, rat-tail collagen polymerized matrices. THP1 cells (human acute monocytic leukaemia cells) differentiated into macrophage-like cells were chosen as the cell model, and their functionality was retained in the dense rat-tail collagen matrix. Placebo microparticles were later combined with the immune-cell-seeded system during collagen polymerization, and secreted pro-inflammatory factors TNFα and IL-8 were used as the immune response readout (ELISA). Our data showed elevated TNFα and IL-8 secretion by macrophage THP1 cells, indicating that Placebo microparticles trigger certain immune cell responses under 3D in vivo-like conditions. Furthermore, we have shown that the system is sensitive enough to measure differences in THP1 macrophage pro-inflammatory responses to Active Pharmaceutical Ingredient (API) microparticles with different API release kinetics. We have successfully developed a tissue-like, advanced, in vitro system enabling selective "readouts" of specific responses of immune-related cells. Such a system may provide the basis of an advanced toolbox enabling systemic evaluation and prediction of in vivo microparticle reactions on human immune-related cells.

  15. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying data being generated demand new visualization and quantification techniques. Visualization alone is not sufficient. Without proper measurement information/computations real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology which allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates over data in situ: where it is stored and when it was computed.

  16. SIMULTANEOUS BILATERAL REAL-TIME 3-D TRANSCRANIAL ULTRASOUND IMAGING AT 1 MHZ THROUGH POOR ACOUSTIC WINDOWS

    PubMed Central

    Lindsey, Brooks D.; Nicoletto, Heather A.; Bennett, Ellen R.; Laskowitz, Daniel T.; Smith, Stephen W.

    2013-01-01

    Ultrasound imaging has been proposed as a rapid, portable alternative imaging modality to examine stroke patients in pre-hospital or emergency room settings. However, in performing transcranial ultrasound examinations, 8%–29% of patients in a general population may present with window failure, in which case it is not possible to acquire clinically useful sonographic information through the temporal bone acoustic window. In this work, we describe the technical considerations, design and fabrication of low-frequency (1.2 MHz), large aperture (25.3 mm) sparse matrix array transducers for 3-D imaging in the event of window failure. These transducers are integrated into a system for real-time 3-D bilateral transcranial imaging—the ultrasound brain helmet—and color flow imaging capabilities at 1.2 MHz are directly compared with arrays operating at 1.8 MHz in a flow phantom with attenuation comparable to the in vivo case. Contrast-enhanced imaging allowed visualization of arteries of the Circle of Willis in 5 of 5 subjects and 8 of 10 sides of the head despite probe placement outside of the acoustic window. Results suggest that this type of transducer may allow acquisition of useful images either in individuals with poor windows or outside of the temporal acoustic window in the field. PMID:23415287

  17. Simultaneous bilateral real-time 3-d transcranial ultrasound imaging at 1 MHz through poor acoustic windows.

    PubMed

    Lindsey, Brooks D; Nicoletto, Heather A; Bennett, Ellen R; Laskowitz, Daniel T; Smith, Stephen W

    2013-04-01

    Ultrasound imaging has been proposed as a rapid, portable alternative imaging modality to examine stroke patients in pre-hospital or emergency room settings. However, in performing transcranial ultrasound examinations, 8%-29% of patients in a general population may present with window failure, in which case it is not possible to acquire clinically useful sonographic information through the temporal bone acoustic window. In this work, we describe the technical considerations, design and fabrication of low-frequency (1.2 MHz), large aperture (25.3 mm) sparse matrix array transducers for 3-D imaging in the event of window failure. These transducers are integrated into a system for real-time 3-D bilateral transcranial imaging-the ultrasound brain helmet-and color flow imaging capabilities at 1.2 MHz are directly compared with arrays operating at 1.8 MHz in a flow phantom with attenuation comparable to the in vivo case. Contrast-enhanced imaging allowed visualization of arteries of the Circle of Willis in 5 of 5 subjects and 8 of 10 sides of the head despite probe placement outside of the acoustic window. Results suggest that this type of transducer may allow acquisition of useful images either in individuals with poor windows or outside of the temporal acoustic window in the field.

  18. Dynamic shape modeling of the mitral valve from real-time 3D ultrasound images using continuous medial representation

    NASA Astrophysics Data System (ADS)

    Pouch, Alison M.; Yushkevich, Paul A.; Jackson, Benjamin M.; Gorman, Joseph H., III; Gorman, Robert C.; Sehgal, Chandra M.

    2012-03-01

    Purpose: Patient-specific shape analysis of the mitral valve from real-time 3D ultrasound (rt-3DUS) has broad application to the assessment and surgical treatment of mitral valve disease. Our goal is to demonstrate that continuous medial representation (cm-rep) is an accurate valve shape representation that can be used for statistical shape modeling over the cardiac cycle from rt-3DUS images. Methods: Transesophageal rt-3DUS data acquired from 15 subjects with a range of mitral valve pathology were analyzed. User-initialized segmentation with level sets and symmetric diffeomorphic normalization delineated the mitral leaflets at each time point in the rt-3DUS data series. A deformable cm-rep was fitted to each segmented image of the mitral leaflets in the time series, producing a 4D parametric representation of valve shape in a single cardiac cycle. Model fitting accuracy was evaluated by the Dice overlap, and shape interpolation and principal component analysis (PCA) of 4D valve shape were performed. Results: Of the 289 3D images analyzed, the average Dice overlap between each fitted cm-rep and its target segmentation was 0.880+/-0.018 (max=0.912, min=0.819). The results of PCA represented variability in valve morphology and localized leaflet thickness across subjects. Conclusion: Deformable medial modeling accurately captures valve geometry in rt-3DUS images over the entire cardiac cycle and enables statistical shape analysis of the mitral valve.
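
    Model-fitting accuracy here is summarized by the Dice overlap between the fitted cm-rep and the reference segmentation; the metric itself is a short computation on binary masks, sketched below.

      import numpy as np

      def dice(a, b):
          """Dice overlap between two binary 3-D masks (1.0 = perfect agreement)."""
          a, b = np.asarray(a, bool), np.asarray(b, bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0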

  19. Real-time 4D dose reconstruction for tracked dynamic MLC deliveries for lung SBRT.

    PubMed

    Kamerling, Cornelis Ph; Fast, Martin F; Ziegenhein, Peter; Menten, Martin J; Nill, Simeon; Oelfke, Uwe

    2016-11-01

    This study provides a proof of concept for real-time 4D dose reconstruction for lung stereotactic body radiation therapy (SBRT) with multileaf collimator (MLC) tracking and assesses the impact of tumor tracking on the size of target margins. The authors have implemented real-time 4D dose reconstruction by connecting their tracking and delivery software to an Agility MLC at an Elekta Synergy linac and to their in-house treatment planning software (TPS). Actual MLC apertures and (simulated) target positions are reported to the TPS every 40 ms. The dose is calculated in real-time from 4DCT data directly after each reported aperture by utilization of precalculated dose-influence data based on a Monte Carlo algorithm. The dose is accumulated onto the peak-exhale (reference) phase using energy-mass transfer mapping. To investigate the impact of a potentially reducible safety margin, the authors have created and delivered treatment plans designed for a conventional internal target volume (ITV) + 5 mm, a midventilation approach, and three tracking scenarios for four lung SBRT patients. For the tracking plans, a moving target volume (MTV) was established by delineating the gross target volume (GTV) on every 4DCT phase. These were rigidly aligned to the reference phase, resulting in a unified maximum GTV to which a 1, 3, or 5 mm isotropic margin was added. All scenarios were planned for 9-beam step-and-shoot IMRT to meet the criteria of RTOG 1021 (3 × 18 Gy). The GTV 3D center-of-volume shift varied from 6 to 14 mm. Real-time dose reconstruction at 25 Hz could be realized on a single workstation due to the highly efficient implementation of dose calculation and dose accumulation. Decreased PTV margins resulted in inadequate target coverage during untracked deliveries for patients with substantial tumor motion. MLC tracking could ensure the GTV target dose for these patients. Organ-at-risk (OAR) doses were consistently reduced by decreased PTV margins. The tracked MTV + 1 mm
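
    With precalculated dose-influence data, the per-aperture dose update reduces essentially to a sparse matrix-vector product followed by accumulation. The NumPy/SciPy sketch below is schematic only (random placeholder matrix, hypothetical shapes and callback name; the deformable dose accumulation onto the reference phase is not modelled).

      import numpy as np
      from scipy import sparse

      n_voxels, n_beamlets = 200_000, 5_000
      # precalculated dose-influence matrix: dose per voxel per unit beamlet fluence
      # (random sparse placeholder standing in for Monte Carlo data)
      D = sparse.random(n_voxels, n_beamlets, density=0.001, format="csr")

      accumulated = np.zeros(n_voxels)

      def on_aperture_update(open_beamlets, meterset_weight):
          """Called every 40 ms with the beamlets currently exposed by the MLC."""
          global accumulated
          fluence = np.zeros(n_beamlets)
          fluence[open_beamlets] = meterset_weight
          accumulated += D @ fluence          # instantaneous dose of this segment

      on_aperture_update(open_beamlets=np.arange(100, 300), meterset_weight=0.02)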

  20. Multisensor 3D tracking for counter small unmanned air vehicles (CSUAV)

    NASA Astrophysics Data System (ADS)

    Vasquez, Juan R.; Tarplee, Kyle M.; Case, Ellen E.; Zelnio, Anne M.; Rigling, Brian D.

    2008-04-01

    A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. Large UAVs are typically state owned, whereas small UAVs (SUAVs) may take the form of widely available remote-controlled aircraft. The potential threat of these SUAVs to both the military and the civilian populace has led to research efforts to counter these assets via track, ID, and attack. Difficulties arise from their small size and low radar cross section when attempting to detect and track these targets with a single sensor such as radar or video cameras. In addition, clutter objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an acoustic array, and the future inclusion of a radar. A sensor resource management concept is presented along with preliminary results from three of the sensors.
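
    Fusing position fixes from heterogeneous sensors such as EO/IR cameras, an acoustic array and a radar is commonly handled with a Kalman filter on a constant-velocity target model. The generic 3-D sketch below is not tied to the paper's architecture; sensor-specific measurement variances stand in for the differing sensor accuracies.

      import numpy as np

      class CVKalman3D:
          """Constant-velocity Kalman filter; state = [x, y, z, vx, vy, vz]."""

          def __init__(self, dt=0.1, q=1.0):
              self.x = np.zeros(6)
              self.P = np.eye(6) * 100.0
              self.F = np.eye(6)
              self.F[:3, 3:] = np.eye(3) * dt
              self.Q = np.eye(6) * q
              self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # position-only measurement

          def predict(self):
              self.x = self.F @ self.x
              self.P = self.F @ self.P @ self.F.T + self.Q

          def update(self, z, r):
              """Fuse a 3-D position measurement z with (sensor-specific) variance r."""
              R = np.eye(3) * r
              S = self.H @ self.P @ self.H.T + R
              K = self.P @ self.H.T @ np.linalg.inv(S)
              self.x += K @ (np.asarray(z) - self.H @ self.x)
              self.P = (np.eye(6) - K @ self.H) @ self.P

      kf = CVKalman3D()
      kf.predict(); kf.update([10.0, 5.0, 2.0], r=4.0)   # e.g. EO/IR fix, 2 m sigma
      kf.predict(); kf.update([10.5, 5.2, 2.1], r=25.0)  # e.g. acoustic fix, 5 m sigma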