Science.gov

Sample records for 3d motion tracking

  1. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept existed long before, but the technology to build a viable virtual reality system did not. Scientists had theories and ideas, and they knew the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and supplied the missing technology, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer, giving the user the feeling of operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking is the cursor on a computer screen moving in correspondence with the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly the training of astronauts. Actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvers in realistic situations before a mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research and benefited from the results.

  2. Tracking 3-D body motion for docking and robot control

    NASA Technical Reports Server (NTRS)

    Donath, M.; Sorensen, B.; Yang, G. B.; Starr, R.

    1987-01-01

    An advanced method of tracking the three-dimensional motion of bodies has been developed. This system has the potential to dynamically characterize machine and other structural motion, even in the presence of structural flexibility, thus facilitating closed-loop structural motion control. The system's operation is based on the concept that the intersection of three planes defines a point. Three rotating planes of laser light, fixed and moving photovoltaic diode targets, and a pipelined architecture of analog and digital electronics are used to locate multiple targets, whose number is limited only by available computer memory. Data collection rates are a function of the laser scan rotation speed and are currently selectable up to 480 Hz. The tested performance on a preliminary prototype designed for 0.1 in accuracy (for tracking human motion) at a 480 Hz data rate includes a worst-case resolution of 0.8 mm (0.03 in), a repeatability of plus or minus 0.635 mm (plus or minus 0.025 in), and an absolute accuracy of plus or minus 2.0 mm (plus or minus 0.08 in) within an eight-cubic-meter volume, with all results applicable at the 95 percent confidence level along each coordinate direction. The full six degrees of freedom of a body can be computed by attaching three or more target detectors to the body of interest.
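
    The "three planes define a point" principle above amounts to solving a 3x3 linear system. A minimal sketch follows (an illustrative reconstruction in Python, not the authors' implementation; the function names are hypothetical):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def intersect_three_planes(planes):
    """Intersect three planes given as (normal, offset) pairs with n . x = d.

    Returns the unique intersection point via Cramer's rule, assuming the
    plane normals are linearly independent (non-parallel scan planes).
    """
    normals = [list(n) for n, _ in planes]
    offsets = [d for _, d in planes]
    D = det3(normals)
    if abs(D) < 1e-12:
        raise ValueError("planes do not intersect in a single point")
    point = []
    for col in range(3):
        M = [row[:] for row in normals]
        for r in range(3):
            M[r][col] = offsets[r]   # replace one column with the offsets
        point.append(det3(M) / D)
    return tuple(point)
```

    For example, the planes x = 1, y = 2, z = 3 intersect at (1, 2, 3); in a real system each plane's normal and offset would be recovered from the laser scan angle at the instant a photodiode target fires.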

  3. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed under fluoroscopic image guidance, is becoming increasingly important. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 mm +/- 0.4 mm and an average 3-D tracking error of 0.8 mm +/- 0.5 mm. These results demonstrate that model-based motion compensation based on 2-D/3-D registration is both feasible and accurate.

  4. Towards robust 3D visual tracking for motion compensation in beating heart surgery.

    PubMed

    Richa, Rogério; Bó, Antônio P L; Poignet, Philippe

    2011-06-01

    In the context of minimally invasive cardiac surgery, active vision-based motion compensation schemes have been proposed for mitigating problems related to physiological motion. However, robust and accurate visual tracking remains a difficult task. The purpose of this paper is to present a robust visual tracking method that estimates the 3D temporal and spatial deformation of the heart surface using stereo endoscopic images. The novelty is the combination of a visual tracking method based on a Thin-Plate Spline (TPS) model for representing the heart surface deformations with a temporal heart motion model based on a time-varying dual Fourier series for overcoming tracking disturbances or failures. Considerable improvements in tracking robustness in the presence of specular reflections and occlusions are demonstrated through experiments using images of in vivo porcine and human beating hearts. PMID:21277821

  6. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, in which we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, all of which can deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to vessel centerline and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to the respiratory phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred from the respiratory phase. Testing our method on real patient data, we found that the 3D position error is within 3.79 mm and the processing time during tracking is 5.4 ms.

  7. Tracking left ventricular borders in 3D echocardiographic sequences using motion-guided optical flow

    NASA Astrophysics Data System (ADS)

    Leung, K. Y. Esther; Danilouchkine, Mikhail G.; van Stralen, Marijn; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.

    2009-02-01

    For obtaining quantitative and objective functional parameters from three-dimensional (3D) echocardiographic sequences, automated segmentation methods may be preferable to cumbersome manual delineation of 3D borders. In this study, a novel optical-flow based tracking method is proposed for propagating 3D endocardial contours of the left ventricle throughout the cardiac cycle. To take full advantage of the time-continuous nature of cardiac motion, a statistical motion model was explicitly embedded in the optical flow solution. The cardiac motion was modeled as frame-to-frame affine transforms, which were extracted using Procrustes analysis on a set of training contours. Principal component analysis was applied to obtain a compact model of cardiac motion throughout the whole cardiac cycle. The parameters of this model were resolved in an optical flow manner, via spatial and temporal gradients in image intensity. The algorithm was tested on 36 noncontrast and 28 contrast enhanced 3D echocardiographic sequences in a leave-one-out manner. Good results were obtained using a combination of the proposed motion-guided method and a purely data-driven optical flow approach. The improvement was particularly noticeable in areas where the LV wall was obscured by image artifacts. In conclusion, the results show the applicability of the proposed method in clinical quality echocardiograms.
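
    The optical-flow step above resolves motion parameters from spatial and temporal intensity gradients. A minimal one-dimensional sketch of that idea (hypothetical, not the authors' model-constrained solver; it solves the brightness-constancy constraint Ix*v + It = 0 in least squares for a single global shift):

```python
def estimate_shift_1d(frame0, frame1):
    """Least-squares solve of the brightness-constancy constraint
    Ix * v + It = 0 for one global sub-pixel shift v (positive v means
    frame1 is frame0 shifted to the right)."""
    num = 0.0
    den = 0.0
    for i in range(1, len(frame0) - 1):
        ix = (frame0[i + 1] - frame0[i - 1]) / 2.0   # spatial gradient
        it = frame1[i] - frame0[i]                   # temporal gradient
        num += ix * it
        den += ix * ix
    return -num / den
```

    For a linear intensity ramp shifted by 0.5 samples, the estimate is exact; real echocardiographic images need the statistical motion model described above to constrain this solution in noisy, artifact-prone regions.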

  8. Ultrasonic diaphragm tracking for cardiac interventional navigation on 3D motion compensated static roadmaps

    NASA Astrophysics Data System (ADS)

    Timinger, Holger; Kruger, Sascha; Dietmayer, Klaus; Borgert, Joern

    2005-04-01

    In this paper, a novel approach to cardiac interventional navigation on 3D motion-compensated static roadmaps is presented. Current coronary interventions, e.g. percutaneous transluminal coronary angioplasties, are performed using 2D X-ray fluoroscopy. This entails well-known drawbacks such as radiation exposure, use of contrast agent, and limited visualization, e.g. overlap and foreshortening, due to projection imaging. In the presented approach, the interventional device, i.e. the catheter, is tracked using a magnetic tracking system (MTS). The catheter's position is mapped into a static 3D image of the volume of interest (VOI) by means of an affine registration. In order to compensate for respiratory motion of the catheter with respect to the static image, a parameterized affine motion model is used, driven by a respiratory sensor signal derived from ultrasonic diaphragm tracking. Motion compensation for the heartbeat is done using ECG-gating. The methods are validated using a heart- and diaphragm-phantom. The mean displacement of the catheter due to the simulated organ motion decreases from approximately 9 mm to 1.3 mm. This result indicates that the proposed method is able to reconstruct the catheter position within the VOI accurately and that it can help to overcome drawbacks of current interventional procedures.

  9. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated, accurate capture of an object's spatial motion is needed in a wide variety of applications spanning industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the acquired data, along with convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition, which are based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical vision cameras for capturing video sequences of object motion. Original camera calibration and exterior orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying, and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability of the algorithms for various motion analysis tasks in technical and biomechanical applications.

  10. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article, we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. We discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems than global measurements. Finite-size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three spatial directions as well as the diffusivity parallel to the channel axis, in the absence of significant flow, i.e., for purely Brownian motion. Finally, the presented algorithm is also suitable for tracking fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
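
    The MSD-vs-time analysis above reduces, per spatial region, to averaging squared displacements over lag times and reading a diffusion coefficient off the initial slope. A minimal 1D sketch (illustrative only; function names are not from the article, and drift would be estimated and removed separately in practice):

```python
def msd_curve(track, max_lag):
    """Mean squared displacement vs. lag for a 1D trajectory sampled
    at equal time intervals."""
    curve = []
    for lag in range(1, max_lag + 1):
        squares = [(track[i + lag] - track[i]) ** 2
                   for i in range(len(track) - lag)]
        curve.append(sum(squares) / len(squares))
    return curve

def diffusion_coefficient(msd_at_dt, dt):
    """Estimate D from the first MSD point via MSD(dt) = 2 * D * dt (1D)."""
    return msd_at_dt / (2.0 * dt)
```

    Restricting `track` to positions inside one channel or reservoir region gives the "local" curves the article introduces for heterogeneous systems.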

  11. 3D motion tracking of the heart using Harmonic Phase (HARP) isosurfaces

    NASA Astrophysics Data System (ADS)

    Soliman, Abraam S.; Osman, Nael F.

    2010-03-01

    Tags are non-invasive features induced in the heart muscle that enable the tracking of heart motion. Each tag line, in fact, corresponds to a 3D tag surface that deforms with the heart muscle during the cardiac cycle. Tracking tag surface deformation is useful for the analysis of left ventricular motion. Cardiac material markers (Kerwin et al, MIA, 1997) can be obtained from the intersections of orthogonal surfaces, which can be reconstructed from short- and long-axis tagged images. The proposed method uses the Harmonic Phase (HARP) method for tracking tag lines corresponding to a specific harmonic phase value; the reconstruction of grid tag surfaces is then achieved by a Delaunay triangulation-based interpolation of the sparse tag points. Using three different tag orientations from short- and long-axis images, the proposed method showed the deformation of 3D tag surfaces during the cardiac cycle. Previous work on tag surface reconstruction was restricted to the "dark" tag lines; the use of HARP as proposed, however, enables the reconstruction of isosurfaces based on their harmonic phase values. The use of HARP also provides a fast and accurate way to identify and track tag lines, and hence to generate the surfaces.

  12. Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models

    PubMed Central

    Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. 
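
    The first step above detects tag intersections by convolving the image with oriented, wavelength-tuned Gabor kernels. A minimal sketch of constructing one such kernel (illustrative only; the paper's filter bank spans multiple orientations and spacings matched to the tag pattern):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor kernel: a cosine grating with the given
    wavelength and orientation theta, windowed by an isotropic
    Gaussian of width sigma. Returned as nested lists, size x size."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotated coordinate
            env = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
            row.append(env * math.cos(2.0 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```

    Convolving a tagged image with two such kernels at orthogonal orientations and taking the local phase of the responses is the kind of local phase analysis the abstract refers to.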

  13. Neural network techniques for invariant recognition and motion tracking of 3-D objects

    SciTech Connect

    Hwang, J.N.; Tseng, Y.H.

    1995-12-31

    Invariant recognition and motion tracking of 3-D objects under partial object viewing are difficult tasks. In this paper, we introduce a new neural network solution that is robust to noise corruption and partial viewing of objects. This method directly utilizes the acquired range data and requires no feature extraction. In the proposed approach, the object is first parametrically represented by a continuous distance transformation neural network (CDTNN) which is trained by the surface points of the exemplar object. When later presented with the surface points of an unknown object, this parametric representation allows the mismatch information to back-propagate through the CDTNN to gradually determine the best similarity transformation (translation and rotation) of the unknown object. The mismatch can be directly measured in the reconstructed representation domain between the model and the unknown object.

  14. Infrared tomographic PIV and 3D motion tracking system applied to aquatic predator-prey interaction

    NASA Astrophysics Data System (ADS)

    Adhikari, Deepak; Longmire, Ellen K.

    2013-02-01

    Infrared tomographic PIV and 3D motion tracking are combined to measure evolving volumetric velocity fields and organism trajectories during aquatic predator-prey interactions. The technique was used to study zebrafish foraging on both non-evasive and evasive prey species. Measurement volumes of 22.5 mm × 10.5 mm × 12 mm were reconstructed from images captured on a set of four high-speed cameras. To obtain accurate fluid velocity vectors within each volume, fish were first masked out using an automated visual hull method. Fish and prey locations were identified independently from the same image sets and tracked separately within the measurement volume. Experiments demonstrated that fish were not influenced by the infrared laser illumination or the tracer particles. Results showed that the zebrafish used different strategies, suction and ram feeding, for successful capture of non-evasive and evasive prey, respectively. The two strategies yielded different variations in fluid velocity between the fish mouth and the prey. In general, the results suggest that the local flow field, the direction of prey locomotion with respect to the predator and the relative accelerations and speeds of the predator and prey may all be significant in determining predation success.

  15. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations. PMID:23218511
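
    The reported 0.55 mm figure is a mean Euclidean distance over paired landmark sets, which is straightforward to compute. A minimal sketch (function name is hypothetical; assumes Python 3.8+ for math.dist):

```python
import math

def mean_landmark_distance(manual, tracked):
    """Mean 3D Euclidean distance between paired landmark sets,
    each given as a list of (x, y, z) tuples in mm."""
    dists = [math.dist(a, b) for a, b in zip(manual, tracked)]
    return sum(dists) / len(dists)
```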

  17. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector, eye-corner and iso-center information to improve pupil detection, and we develop a robust eye tracking system that combines eye detection with optical-flow-based image tracking. In addition, we incorporate the orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on public video sequences as well as videos acquired directly from a mobile phone.

  18. Prospective motion correction of 3D echo-planar imaging data for functional MRI using optical tracking.

    PubMed

    Todd, Nick; Josephs, Oliver; Callaghan, Martina F; Lutti, Antoine; Weiskopf, Nikolaus

    2015-06-01

    We evaluated the performance of an optical camera based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field-of-view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2 × 2 × 3 factorial design with the following factors: PMC on or off; 3.0 mm or 1.5 mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5 mm resolution comparing PMC on vs PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value for each image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no-motion conditions and improved the time-series temporal signal-to-noise ratio by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p < 0.001, uncorrected) in both task-based experiments were comparable for the no-motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow-motion cases. The PMC system is a robust solution to decrease the motion sensitivity of multi-shot 3D EPI sequences and thereby overcome one of the main roadblocks to their widespread use in fMRI studies. PMID:25783205
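
    The idea of collapsing rich camera tracking data into one scalar per image volume can be sketched as a simple RMS deviation score (an illustrative stand-in, not the paper's acquisition-weighted metric; the function name is hypothetical):

```python
import math

def motion_score(samples):
    """Collapse the (x, y, z) marker samples recorded during one image
    volume into a single scalar: the RMS deviation of the tracked
    position from its per-volume mean."""
    n = len(samples)
    mean = [sum(p[k] for p in samples) / n for k in range(3)]
    sq = sum((p[k] - mean[k]) ** 2 for p in samples for k in range(3))
    return math.sqrt(sq / n)
```

    A perfectly still head scores 0; larger scores flag volumes whose k-space data were acquired during movement.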

  20. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose

    NASA Astrophysics Data System (ADS)

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-01

A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except at times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data with the motion correlations between directional components of marker motion established during the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that, using our hybrid approach, root-mean-square tracking errors smaller than 1.2 mm and 1.5 mm can be achieved at system latencies of 310 ms and 460 ms, respectively. Because kV imaging is only used for a short period of time in our method, the extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained.
Furthermore, no additional hardware is required with the
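The core geometric step described in this abstract — taking as the most likely 3D position the point on the MV projection line nearest the first-principal-component line fitted to previous trajectory points — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the power-iteration PCA and the closed-form ray/line nearest-point formula are standard:

```python
import math

def principal_direction(points, iters=100):
    """First principal component of a set of 3D points via power iteration.
    Returns (centroid, unit direction)."""
    n = len(points)
    c = [sum(p[i] for p in points) / n for i in range(3)]
    cov = [[sum((p[i] - c[i]) * (p[j] - c[j]) for p in points) / n
            for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return c, v

def closest_point_on_ray_to_line(o, u, c, d):
    """Point on the projection line o + s*u nearest to the infinite line
    c + t*d (here, the principal-component line of the trajectory)."""
    w = [o[i] - c[i] for i in range(3)]
    a = sum(x * x for x in u)
    b = sum(u[i] * d[i] for i in range(3))
    e = sum(x * x for x in d)
    du = sum(w[i] * u[i] for i in range(3))
    dd = sum(w[i] * d[i] for i in range(3))
    denom = a * e - b * b
    # parallel lines: fall back to projecting c onto the ray
    s = -du / a if abs(denom) < 1e-12 else (b * dd - e * du) / denom
    return [o[i] + s * u[i] for i in range(3)]
```

The real system adds the auto-regressive prediction step on top of this geometry to compensate for the 310-460 ms latency.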

  1. From 1D to 2D via 3D: dynamics of surface motion segmentation for ocular tracking in primates.

    PubMed

    Masson, Guillaume S

    2004-01-01

In primates, tracking eye movements help vision by stabilising the images of a moving object of interest onto the retinas. This sensorimotor transformation involves several stages of motion processing, from the local measurement of one-dimensional luminance changes up to the integration of first- and higher-order local motion cues into a global two-dimensional motion immune to antagonistic motions arising from the surround. The dynamics of this surface motion segmentation is reflected in the various components of the tracking responses, and its underlying neural mechanisms can be correlated with behaviour at both single-cell and population levels. I review a series of behavioural studies which demonstrate that the neural representation driving eye movements evolves over time from a fast vector average of the outputs of linear and non-linear spatio-temporal filtering to a slower, progressively more accurate solution for global motion. Because of the sensitivity of the earliest ocular following to binocular disparity, antagonistic visual motion from surfaces located at different depths is filtered out. Thus, global motion integration is restricted to the depth plane of the object to be tracked. Similar dynamics were found at the level of monkey extra-striate areas MT and MST, and I suggest that several parallel pathways along the motion stream, albeit with different latencies, are involved in building up this accurate surface motion representation. After 200-300 ms, most of the computational problems of early motion processing (aperture problem, motion integration, motion segmentation) are solved, and the eye velocity matches the global object velocity to maintain a clear and steady retinal image. PMID:15477021

  2. Creation of 3D digital anthropomorphic phantoms which model actual patient non-rigid body motion as determined from MRI and position tracking studies of volunteers

    NASA Astrophysics Data System (ADS)

    Connolly, C. M.; Konik, A.; Dasari, P. K. R.; Segars, P.; Zheng, S.; Johnson, K. L.; Dey, J.; King, M. A.

    2011-03-01

Patient motion can cause artifacts, which can lead to difficulty in interpretation. The purpose of this study is to create 3D digital anthropomorphic phantoms which model the location of the structures of the chest and upper abdomen of human volunteers undergoing a series of clinically relevant motions. The 3D anatomy is modeled using the XCAT phantom and based on MRI studies. The NURBS surfaces of the XCAT are interactively adapted to fit the MRI studies. A detailed XCAT phantom is first developed from an EKG triggered Navigator acquisition composed of sagittal slices with a 3 x 3 x 3 mm voxel dimension. Rigid body motion states are then acquired at breath-hold as sagittal slices partially covering the thorax, centered on the heart, with 9 mm gaps between them. For non-rigid body motion requiring greater sampling, modified Navigator sequences covering the entire thorax with 3 mm gaps between slices are obtained. The structures of the initial XCAT are then adapted to fit these different motion states. Simultaneously with MRI imaging, the positions of multiple reflective markers on stretchy bands about the volunteer's chest and abdomen are optically tracked in 3D via stereo imaging. These phantoms with combined position tracking will be used to investigate both imaging-data-driven and motion-tracking strategies to estimate and correct for patient motion. Our initial application will be to cardiac perfusion SPECT imaging, where the XCAT phantoms will be used to create patient activity and attenuation distributions for each volunteer with corresponding motion tracking data from the markers on the body surface. Monte Carlo methods will then be used to simulate SPECT acquisitions, which will be used to evaluate various motion estimation and correction strategies.

  3. The birth of a dinosaur footprint: Subsurface 3D motion reconstruction and discrete element simulation reveal track ontogeny

    PubMed Central

    2014-01-01

    Locomotion over deformable substrates is a common occurrence in nature. Footprints represent sedimentary distortions that provide anatomical, functional, and behavioral insights into trackmaker biology. The interpretation of such evidence can be challenging, however, particularly for fossil tracks recovered at bedding planes below the originally exposed surface. Even in living animals, the complex dynamics that give rise to footprint morphology are obscured by both foot and sediment opacity, which conceals animal–substrate and substrate–substrate interactions. We used X-ray reconstruction of moving morphology (XROMM) to image and animate the hind limb skeleton of a chicken-like bird traversing a dry, granular material. Foot movement differed significantly from walking on solid ground; the longest toe penetrated to a depth of ∼5 cm, reaching an angle of 30° below horizontal before slipping backward on withdrawal. The 3D kinematic data were integrated into a validated substrate simulation using the discrete element method (DEM) to create a quantitative model of limb-induced substrate deformation. Simulation revealed that despite sediment collapse yielding poor quality tracks at the air–substrate interface, subsurface displacements maintain a high level of organization owing to grain–grain support. Splitting the substrate volume along “virtual bedding planes” exposed prints that more closely resembled the foot and could easily be mistaken for shallow tracks. DEM data elucidate how highly localized deformations associated with foot entry and exit generate specific features in the final tracks, a temporal sequence that we term “track ontogeny.” This combination of methodologies fosters a synthesis between the surface/layer-based perspective prevalent in paleontology and the particle/volume-based perspective essential for a mechanistic understanding of sediment redistribution during track formation. PMID:25489092

  4. The birth of a dinosaur footprint: subsurface 3D motion reconstruction and discrete element simulation reveal track ontogeny.

    PubMed

    Falkingham, Peter L; Gatesy, Stephen M

    2014-12-23

    Locomotion over deformable substrates is a common occurrence in nature. Footprints represent sedimentary distortions that provide anatomical, functional, and behavioral insights into trackmaker biology. The interpretation of such evidence can be challenging, however, particularly for fossil tracks recovered at bedding planes below the originally exposed surface. Even in living animals, the complex dynamics that give rise to footprint morphology are obscured by both foot and sediment opacity, which conceals animal-substrate and substrate-substrate interactions. We used X-ray reconstruction of moving morphology (XROMM) to image and animate the hind limb skeleton of a chicken-like bird traversing a dry, granular material. Foot movement differed significantly from walking on solid ground; the longest toe penetrated to a depth of ∼5 cm, reaching an angle of 30° below horizontal before slipping backward on withdrawal. The 3D kinematic data were integrated into a validated substrate simulation using the discrete element method (DEM) to create a quantitative model of limb-induced substrate deformation. Simulation revealed that despite sediment collapse yielding poor quality tracks at the air-substrate interface, subsurface displacements maintain a high level of organization owing to grain-grain support. Splitting the substrate volume along "virtual bedding planes" exposed prints that more closely resembled the foot and could easily be mistaken for shallow tracks. DEM data elucidate how highly localized deformations associated with foot entry and exit generate specific features in the final tracks, a temporal sequence that we term "track ontogeny." This combination of methodologies fosters a synthesis between the surface/layer-based perspective prevalent in paleontology and the particle/volume-based perspective essential for a mechanistic understanding of sediment redistribution during track formation.

  5. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
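The point-to-line-segment distance underlying the proposed similarity metric reduces to a clamped projection onto the segment. A minimal sketch of the per-point primitive (illustrative only, not the paper's implementation):

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from 2D point p to the segment from a to b."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    apx, apy = p[0] - a[0], p[1] - a[1]
    denom = abx * abx + aby * aby
    if denom == 0.0:                      # degenerate segment: a == b
        return math.hypot(apx, apy)
    # projection parameter, clamped so the foot stays on the segment
    t = max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
    cx, cy = a[0] + t * abx, a[1] + t * aby
    return math.hypot(p[0] - cx, p[1] - cy)
```

In the paper's setting, distances like this would be accumulated between sampled image edge points and the projected edges of the 3-D vehicle model under a candidate pose.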

  6. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter. PMID:16238061

  7. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method: We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results: Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion: Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
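The accuracy (RMSE) and precision (SD) figures quoted in this record follow directly from paired measurements against a reference; a small sketch with hypothetical variable names:

```python
import math
import statistics

def tracking_accuracy_precision(measured, reference):
    """Accuracy = RMSE of the tracking errors; precision = their SD
    (population SD over the recorded samples)."""
    errors = [m - r for m, r in zip(measured, reference)]
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    sd = statistics.pstdev(errors)
    return rmse, sd
```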

  8. Intrinsic Feature Motion Tracking

    SciTech Connect

    Goddard, Jr., James S.

    2013-03-19

    Subject motion during 3D medical scanning can cause blurring and artifacts in the 3D images resulting in either rescans or poor diagnosis. Anesthesia or physical restraints may be used to eliminate motion but are undesirable and can affect results. This software measures the six degree of freedom 3D motion of the subject during the scan under a rigidity assumption using only the intrinsic features present on the subject area being monitored. This movement over time can then be used to correct the scan data removing the blur and artifacts. The software acquires images from external cameras or images stored on disk for processing. The images are from two or three calibrated cameras in a stereo arrangement. Algorithms extract and track the features over time and calculate position and orientation changes relative to an initial position. Output is the 3D position and orientation change measured at each image.

  9. Intrinsic Feature Motion Tracking

    2013-03-19

Subject motion during 3D medical scanning can cause blurring and artifacts in the 3D images resulting in either rescans or poor diagnosis. Anesthesia or physical restraints may be used to eliminate motion but are undesirable and can affect results. This software measures the six degree of freedom 3D motion of the subject during the scan under a rigidity assumption using only the intrinsic features present on the subject area being monitored. This movement over time can then be used to correct the scan data removing the blur and artifacts. The software acquires images from external cameras or images stored on disk for processing. The images are from two or three calibrated cameras in a stereo arrangement. Algorithms extract and track the features over time and calculate position and orientation changes relative to an initial position. Output is the 3D position and orientation change measured at each image.

  10. Temporal tracking of 3D coronary arteries in projection angiograms

    NASA Astrophysics Data System (ADS)

    Shechter, Guy; Devernay, Frederic; Coste-Maniere, Eve; McVeigh, Elliot R.

    2002-05-01

    A method for 3D temporal tracking of a 3D coronary tree model through a sequence of biplane cineangiography images has been developed. A registration framework is formulated in which the coronary tree centerline model deforms in an external potential field defined by a multiscale analysis response map computed from the angiogram images. To constrain the procedure and to improve convergence, a set of three motion models is hierarchically used: a 3D rigid-body transformation, a 3D affine transformation, and a 3D B-spline deformation field. This 3D motion tracking approach has significant advantages over 2D methods: (1) coherent deformation of a single 3D coronary reconstruction preserves the topology of the arterial tree; (2) constraints on arterial length and regularity, which lack meaning in 2D projection space, are directly applicable in 3D; and (3) tracking arterial segments through occlusions and crossings in the projection images is simplified with knowledge of the 3D relationship of the arteries. The method has been applied to patient data and results are presented.
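The first, rigid stage of the hierarchical motion model above has a closed-form solution once centerline correspondences are given. A 2D illustration (the paper works in 3D and drives the fit from a response map rather than known correspondences, so this is only a sketch of the rigid-fit idea):

```python
import math

def rigid_register_2d(src, dst):
    """Closed-form 2D rigid fit (rotation + translation) mapping matched
    points src onto dst, via centroids and an atan2 of cross/dot sums."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy      # centered source point
        bx, by = dx - cdx, dy - cdy      # centered target point
        num += ax * by - ay * bx         # cross terms -> sin(theta)
        den += ax * bx + ay * by         # dot terms  -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)
```

The affine and B-spline stages then refine the residual deformation that a rigid transform cannot capture.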

  11. On the Inverse Problem of Binocular 3D Motion Perception

    PubMed Central

    Lages, Martin; Heron, Suzanne

    2010-01-01

    It is shown that existing processing schemes of 3D motion perception such as interocular velocity difference, changing disparity over time, as well as joint encoding of motion and disparity, do not offer a general solution to the inverse optics problem of local binocular 3D motion. Instead we suggest that local velocity constraints in combination with binocular disparity and other depth cues provide a more flexible framework for the solution of the inverse problem. In the context of the aperture problem we derive predictions from two plausible default strategies: (1) the vector normal prefers slow motion in 3D whereas (2) the cyclopean average is based on slow motion in 2D. Predicting perceived motion directions for ambiguous line motion provides an opportunity to distinguish between these strategies of 3D motion processing. Our theoretical results suggest that velocity constraints and disparity from feature tracking are needed to solve the inverse problem of 3D motion perception. It seems plausible that motion and disparity input is processed in parallel and integrated late in the visual processing hierarchy. PMID:21124957

  12. Automatic respiration tracking for radiotherapy using optical 3D camera

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact or radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight change, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within any treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of marker or positioning device, etc. Presently, a viable solution is lacking for in-vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas in the 3D surface to track surface motion. The configuration of the markers or areas may change with time, which makes it inconsistent in quantifying and interpreting the respiration patterns. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality reduction technique, PCA (principal component analysis). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigen-space spanned by eigen-vectors).
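The PCA decomposition described in this record can be illustrated on toy data: power iteration extracts the dominant motion pattern from a sequence of surface samples, and projecting each frame onto it yields a one-dimensional respiratory signal. A sketch, not the authors' implementation:

```python
import math

def first_pc(frames, iters=200):
    """Power iteration for the first principal component of a set of
    D-dimensional frame vectors. Returns (mean frame, unit eigenvector)."""
    n, d = len(frames), len(frames[0])
    mean = [sum(f[i] for f in frames) / n for i in range(d)]
    X = [[f[i] - mean[i] for i in range(d)] for f in frames]
    v = [1.0] * d
    for _ in range(iters):
        # w = (X^T X) v, without forming the covariance matrix explicitly
        proj = [sum(row[i] * v[i] for i in range(d)) for row in X]
        w = [sum(proj[k] * X[k][i] for k in range(n)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return mean, v

def respiratory_signal(frames):
    """Project each surface frame onto the dominant motion pattern."""
    mean, v = first_pc(frames)
    return [sum((f[i] - mean[i]) * v[i] for i in range(len(v)))
            for f in frames]
```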

  13. 3D visual presentation of shoulder joint motion.

    PubMed

    Totterman, S; Tamez-Pena, J; Kwok, E; Strang, J; Smith, J; Rubens, D; Parker, K

    1998-01-01

The 3D visual presentation of biodynamic events of human joints is a challenging task. Although the 3D reconstruction of high-contrast structures from CT data has been widely explored, there is much less experience in reconstructing small, low-contrast soft tissue structures from inhomogeneous and sometimes noisy MR data. Further, there are no algorithms for tracking the motion of moving anatomic structures through MR data. We present a comprehensive approach to 3D musculoskeletal imagery that addresses these challenges. Specific imaging protocols, segmentation algorithms and rendering techniques are developed and applied to render complex 3D musculoskeletal systems for their 4D visual presentation. Applications of our approach include analysis of rotational motion of the shoulder, knee flexion, and other complex musculoskeletal motions, and the development of interactive virtual human joints.

  14. Quantitative Evaluation of 3D Mouse Behaviors and Motor Function in the Open-Field after Spinal Cord Injury Using Markerless Motion Tracking

    PubMed Central

    Sheets, Alison L.; Lai, Po-Lun; Fisher, Lesley C.; Basso, D. Michele

    2013-01-01

    Thousands of scientists strive to identify cellular mechanisms that could lead to breakthroughs in developing ameliorative treatments for debilitating neural and muscular conditions such as spinal cord injury (SCI). Most studies use rodent models to test hypotheses, and these are all limited by the methods available to evaluate animal motor function. This study’s goal was to develop a behavioral and locomotor assessment system in a murine model of SCI that enables quantitative kinematic measurements to be made automatically in the open-field by applying markerless motion tracking approaches. Three-dimensional movements of eight naïve, five mild, five moderate, and four severe SCI mice were recorded using 10 cameras (100 Hz). Background subtraction was used in each video frame to identify the animal’s silhouette, and the 3D shape at each time was reconstructed using shape-from-silhouette. The reconstructed volume was divided into front and back halves using k-means clustering. The animal’s front Center of Volume (CoV) height and whole-body CoV speed were calculated and used to automatically classify animal behaviors including directed locomotion, exploratory locomotion, meandering, standing, and rearing. More detailed analyses of CoV height, speed, and lateral deviation during directed locomotion revealed behavioral differences and functional impairments in animals with mild, moderate, and severe SCI when compared with naïve animals. Naïve animals displayed the widest variety of behaviors including rearing and crossing the center of the open-field, the fastest speeds, and tallest rear CoV heights. SCI reduced the range of behaviors, and decreased speed (r = .70 p<.005) and rear CoV height (r = .65 p<.01) were significantly correlated with greater lesion size. This markerless tracking approach is a first step toward fundamentally changing how rodent movement studies are conducted. By providing scientists with sensitive, quantitative measurement
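Two of the processing steps named above — the k-means split of the reconstructed volume into halves and the center-of-volume (CoV) computation — are simple enough to sketch. The toy code below uses a 1-D two-cluster k-means along the body axis (an assumption for illustration; the study clusters the full 3D reconstruction):

```python
def kmeans_1d(xs, iters=50):
    """Two-cluster 1-D k-means; returns a boolean 'front' label per sample."""
    c0, c1 = min(xs), max(xs)
    labels = [False] * len(xs)
    for _ in range(iters):
        labels = [abs(x - c1) < abs(x - c0) for x in xs]
        g0 = [x for x, lab in zip(xs, labels) if not lab]
        g1 = [x for x, lab in zip(xs, labels) if lab]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return labels

def center_of_volume(voxels):
    """Mean position of a list of (x, y, z) voxel centers."""
    n = len(voxels)
    return tuple(sum(v[i] for v in voxels) / n for i in range(3))
```

Behavior classes (locomotion, standing, rearing, ...) are then derived from time series of quantities like the front CoV height and whole-body CoV speed.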

  15. Accuracy and Precision of a Custom Camera-Based System for 2-D and 3-D Motion Tracking during Speech and Nonspeech Motor Tasks

    ERIC Educational Resources Information Center

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable…

  16. Robust patella motion tracking using intensity-based 2D-3D registration on dynamic bi-plane fluoroscopy: towards quantitative assessment in MPFL reconstruction surgery

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu

    2016-03-01

The determination of in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to register three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual bone approach registering one bone at a time, each with optimization of a six degrees of freedom (6DOF) parameter set, 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation, 3) a simultaneous approach registering all the bones together (18DOF) and 4) a combination of the sequential and the simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach, with less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation with 45 trials of randomly perturbed initializations showed that the sequential approach improved robustness significantly (74% success rate) compared to the individual bone approach (34% success rate) for patella registration (femur and tibia-fibula registration had a 100% success rate with each approach).

  17. Speeding up 3D speckle tracking using PatchMatch

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; O'Donnell, Matthew

    2016-03-01

Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) Inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand. 2) For typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.

  18. 3D Human Motion Editing and Synthesis: A Survey

    PubMed Central

    Wang, Xin; Chen, Qiudi; Wang, Wanliang

    2014-01-01

    The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395

  19. Characterization of 3-D coronary tree motion from MSCT angiography

    PubMed Central

    Yang, Guanyu; Zhou, Jian; Boulmier, Dominique; Garcia, Marie-Paule; Luo, Limin; Toumoulin, Christine

    2010-01-01

    This paper describes a method for the characterization of coronary artery motion using Multi-slice Computed Tomography (MSCT) volume sequences. Coronary trees are first extracted by a spatial vessel tracking method in each volume of MSCT sequence. A point-based matching algorithm, with feature landmarks constraint, is then applied to match the 3D extracted centerlines between two consecutive instants over a complete cardiac cycle. The transformation functions and correspondence matrices are estimated simultaneously and allow deformable fitting of the vessels over the volume series. Either point-based or branch-based motion features can be derived. Experiments have been conducted in order to evaluate the performance of the method with a matching error analysis. PMID:19783508

  20. Motion estimation in the 3-D Gabor domain.

    PubMed

    Feng, Mu; Reed, Todd R

    2007-08-01

    Motion estimation methods can be broadly classified as being spatiotemporal or frequency domain in nature. The Gabor representation is an analysis framework providing localized frequency information. When applied to image sequences, the 3-D Gabor representation displays spatiotemporal/spatiotemporal-frequency (st/stf) information, enabling the application of robust frequency domain methods with adjustable spatiotemporal resolution. In this work, the 3-D Gabor representation is applied to motion analysis. We demonstrate that piecewise uniform translational motion can be estimated by using a uniform translation motion model in the st/stf domain. The resulting motion estimation method exhibits both good spatiotemporal resolution and substantial noise resistance compared to existing spatiotemporal methods. To form the basis of this model, we derive the signature of the translational motion in the 3-D Gabor domain. Finally, to obtain higher spatiotemporal resolution for more complex motions, a dense motion field estimation method is developed to find a motion estimate for every pixel in the sequence.

  1. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate

    PubMed Central

    Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-01-01

Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of the 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2D speckle tracking of triplane datasets showed results similar to those of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values. PMID:26693303

  2. Electrically tunable lens speeds up 3D orbital tracking

    PubMed Central

    Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico

    2015-01-01

3D orbital particle tracking is a versatile and effective microscopy technique that allows following fast-moving fluorescent objects within living cells and reconstructing complex 3D shapes using laser scanning microscopes. We demonstrated notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing the commonly used piezoelectric stages with an Electrically Tunable Lens (ETL) that eliminates mechanical movement of objective lenses. This allowed tracking and reconstructing the shape of structures extending 500 microns in the axial direction. Using the ETL, we tracked fluorescently labeled genomic loci within the nucleus of living cells at high speed, with an unprecedented temporal resolution of 8 ms, using a 1.42-NA oil-immersion objective. The presented technology is cost-effective and allows easy upgrade of scanning microscopes for fast 3D orbital tracking. PMID:26114037
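Orbital tracking infers a particle's position from the modulation of fluorescence sampled along a circular scan: the phase of the first Fourier harmonic of the orbit intensity points toward the particle. A toy sketch with a simulated Gaussian spot (the functions, parameters, and the spot model are illustrative assumptions, not this paper's implementation):

```python
import math

def orbit_offset_angle(intensity):
    """Polar angle of the particle relative to the orbit centre: the phase
    of the first Fourier harmonic of the sampled intensity modulation."""
    n = len(intensity)
    re = sum(v * math.cos(2 * math.pi * k / n) for k, v in enumerate(intensity))
    im = sum(v * math.sin(2 * math.pi * k / n) for k, v in enumerate(intensity))
    return math.atan2(im, re)

def sample_orbit(px, py, r=1.0, waist=0.8, n=64):
    """Simulated intensities on an orbit of radius r around a Gaussian spot
    displaced to (px, py)."""
    out = []
    for k in range(n):
        th = 2 * math.pi * k / n
        d2 = (r * math.cos(th) - px) ** 2 + (r * math.sin(th) - py) ** 2
        out.append(math.exp(-d2 / (waist ** 2)))
    return out
```

In a real tracker this angular estimate (plus a radial one from the modulation depth, and an axial one from two orbit planes) drives the feedback loop that recenters the orbit on the particle.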

  3. 3D harmonic phase tracking with anatomical regularization.

    PubMed

    Zhou, Yitian; Bernard, Olivier; Saloux, Eric; Manrique, Alain; Allain, Pascal; Makram-Ebeid, Sherif; De Craene, Mathieu

    2015-12-01

    This paper presents a novel algorithm that extends HARP to handle 3D tagged MRI images. HARP results were regularized by an original regularization framework defined in an anatomical space of coordinates. In the meantime, myocardium incompressibility was integrated in order to correct the radial strain which is reported to be more challenging to recover. Both the tracking and regularization of LV displacements were done on a volumetric mesh to be computationally efficient. Also, a window-weighted regression method was extended to cardiac motion tracking which helps maintain a low complexity even at finer scales. On healthy volunteers, the tracking accuracy was found to be as accurate as the best candidates of a recent benchmark. Strain accuracy was evaluated on synthetic data, showing low bias and strain errors under 5% (excluding outliers) for longitudinal and circumferential strains, while the second and third quartiles of the radial strain errors are in the (-5%,5%) range. In clinical data, strain dispersion was shown to correlate with the extent of transmural fibrosis. Also, reduced deformation values were found inside infarcted segments. PMID:26363844
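
The core HARP idea is that tissue motion shifts the phase of the tag pattern, so displacement can be read off the wrapped harmonic-phase difference. A deliberately simplified 1-D sketch (the paper works on 3-D tagged volumes with anatomical regularization, none of which is shown here):

```python
import numpy as np

def harmonic_phase(sig):
    """Phase of the analytic signal (positive-frequency half-spectrum)."""
    S = np.fft.fft(sig)
    S[len(S) // 2:] = 0                 # keep positive frequencies only
    return np.angle(np.fft.ifft(2 * S))

def harp_displacement_1d(ref, mov, k):
    """Toy 1-D HARP: displacement from the wrapped harmonic-phase
    difference of a tag pattern with spatial frequency k."""
    dphi = harmonic_phase(mov) - harmonic_phase(ref)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi     # wrap to (-pi, pi]
    return -dphi / k                    # tags carry phase k*(x - u)

x = np.arange(256, dtype=float)
k = 2 * np.pi * 8 / 256                 # 8 tag lines across the field of view
u_true = 3.0                            # uniform rightward tissue shift (pixels)
ref = np.cos(k * x)
mov = np.cos(k * (x - u_true))          # tags move with the tissue
u_est = harp_displacement_1d(ref, mov, k)
```

The wrap step is why HARP can only resolve displacements smaller than half a tag period per frame; regularization, as in this paper, helps beyond that.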

  5. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
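
Per-grid Kalman filtering of the kind described can be illustrated with a constant-velocity filter for a single cell's 1-D position; the real system runs one such state per moving polar-grid cell in 2-D and smooths across neighbours. All noise parameters below are invented for the sketch:

```python
import numpy as np

class GridCellKF:
    """Constant-velocity Kalman filter for one grid cell's 1-D position,
    a minimal sketch of per-cell motion-state estimation."""
    def __init__(self, x0, dt=0.1, q=0.5, r=0.2):
        self.x = np.array([x0, 0.0])                  # [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model
        self.Q = q * np.eye(2)                        # process noise (assumed)
        self.R = r                                    # measurement noise (assumed)
        self.H = np.array([[1.0, 0.0]])               # we observe position only

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the cell's newly associated position z
        S = self.H @ self.P @ self.H.T + self.R
        K = (self.P @ self.H.T) / S
        self.x = self.x + (K * (z - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x

kf = GridCellKF(x0=0.0)
for t in range(1, 30):
    state = kf.step(z=0.1 * t)   # cell advancing 0.1 m per 0.1 s step
vel = state[1]                   # converges toward 1.0 m/s
```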

  7. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydın

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimating optical flow from the intensity and shape index map data of a 3-D point cloud, significantly increases both 2-D and 3-D tracking performance. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes. PMID:23955795
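
The shape index map mentioned above is a standard per-point descriptor computed from principal surface curvatures (Koenderink's shape index); a small sketch of the formula, independent of this paper's specific pipeline:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index from principal curvatures (k1 >= k2).
    Maps local surface type into [-1, 1]: cup = -1, rut = -0.5,
    saddle = 0, ridge = +0.5, cap = +1."""
    k1 = np.asarray(k1, float)
    k2 = np.asarray(k2, float)
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

# cap, ridge, saddle, rut, cup
si = shape_index([1.0, 1.0, 1.0, 0.0, -1.0],
                 [1.0, 0.0, -1.0, -1.0, -1.0])
```

Because the shape index is invariant to rigid motion, flow computed on it complements intensity-based flow, which is presumably why the paper pairs the two.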

  9. 3D guide wire tracking for navigation in endovascular interventions

    NASA Astrophysics Data System (ADS)

    Baert, Shirley A.; van Walsum, Theo; Niessen, Wiro J.

    2004-05-01

    A method is presented to track the guide wire during endovascular interventions and to visualize it in 3D, together with the vasculature of the patient. The guide wire is represented by a 3D spline whose position is optimized using internal and external forces. For the external forces, the 3D spline is projected onto the biplane projection images that are routinely acquired. Feature images are constructed based on the enhancement of line-like structures in the projection images. A threshold is applied such that where the probability of a pixel being part of the guide wire is sufficiently high the feature image is used directly, whereas outside this region a distance transform is computed to improve the capture range of the method. In preliminary experiments, it is shown that some of the problems of the 2D tracking that were presented in previous work can successfully be circumvented using the 3D tracking method.
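
The two-zone external-force image described above (feature response inside the thresholded region, distance-to-wire outside it) can be sketched as follows; the threshold value and the brute-force distance computation are illustrative choices, not the paper's implementation:

```python
import numpy as np

def external_energy(feature, thresh):
    """Sketch of a two-zone external-energy image: inside the thresholded
    (likely guide wire) region use the negated feature response, outside it
    use the distance to the nearest wire pixel so a spline is attracted
    even from far away (extended capture range)."""
    wire = feature > thresh
    wire_pts = np.stack(np.nonzero(wire), axis=1)     # (N, 2) wire coordinates
    energy = np.empty(feature.shape, dtype=float)
    for y in range(feature.shape[0]):
        for x in range(feature.shape[1]):
            if wire[y, x]:
                energy[y, x] = -feature[y, x]         # strong attraction on the wire
            else:                                     # brute-force distance transform
                d = np.hypot(wire_pts[:, 0] - y, wire_pts[:, 1] - x)
                energy[y, x] = d.min()
    return energy

feat = np.zeros((5, 7))
feat[2, 1:6] = 1.0                 # a bright horizontal "guide wire" ridge
E = external_energy(feat, 0.5)
```

Minimizing this energy pulls a projected spline toward the wire; in production code the distance map would come from a proper distance-transform routine rather than the O(N·pixels) loop above.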

  10. Low-level motion analysis of color and luminance for perception of 2D and 3D motion.

    PubMed

    Shioiri, Satoshi; Yoshizawa, Masanori; Ogiya, Mitsuharu; Matsumiya, Kazumichi; Yaguchi, Hirohisa

    2012-01-01

    We investigated the low-level motion mechanisms for color and luminance and their integration process using 2D and 3D motion aftereffects (MAEs). The 2D and 3D MAEs obtained in equiluminant color gratings showed that the visual system has a low-level motion mechanism for color motion as well as for luminance motion. The 3D MAE is an MAE for motion in depth after monocular motion adaptation. Apparent 3D motion can be perceived after prolonged exposure of one eye to lateral motion because the difference in motion signal between the adapted and unadapted eyes generates interocular velocity differences (IOVDs). Since IOVDs cannot be analyzed by the high-level motion mechanism of feature tracking, we conclude that a low-level motion mechanism is responsible for the 3D MAE. Since we found different temporal frequency characteristics between the color and luminance stimuli, MAEs in the equiluminant color stimuli cannot be attributed to a residual luminance component in the color stimulus. Although a similar MAE was found with a luminance and a color test both for 2D and 3D motion judgments after adapting to either color or luminance motion, temporal frequency characteristics were different between the color and luminance adaptation. The visual system must have a low-level motion mechanism for color signals as well as for luminance ones. We also found that color and luminance motion signals are integrated monocularly before IOVD analysis, showing a cross adaptation effect between color and luminance stimuli. This was supported by an experiment with dichoptic presentations of color and luminance tests. In the experiment, color and luminance tests were presented in the different eyes dichoptically with four different combinations of test and adaptation: color or luminance test in the adapted eye after color or luminance adaptation. Findings of little or no influence of the adaptation/test combinations indicate the integration of color and luminance motion signals prior to the IOVD analysis.

  11. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations in camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a-priorly trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439
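
Factorizing 2D observations into camera and pose factors, as described, is a rank-constrained matrix factorization; the classic building block is an SVD truncated to the model rank. A minimal sketch with synthetic noiseless data (the paper's method additionally uses trained base poses and periodicity priors, not shown):

```python
import numpy as np

def factorize(W, rank=3):
    """Rank-constrained factorization of a measurement matrix W into a
    motion (camera) factor M and a shape/pose factor S via truncated SVD,
    so that W ≈ M @ S with the given rank."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :rank] * np.sqrt(s[:rank])               # motion factor
    S = np.sqrt(s[:rank])[:, None] * Vt[:rank]        # shape factor
    return M, S

rng = np.random.default_rng(0)
M_true = rng.standard_normal((8, 3))     # 4 frames x (x, y) observation rows
S_true = rng.standard_normal((3, 20))    # 20 tracked 2D points' basis coordinates
W = M_true @ S_true                      # noiseless rank-3 measurement matrix
M, S = factorize(W)
```

The factorization is only unique up to an invertible 3x3 mixing matrix, which is exactly why methods like this one need extra constraints (trained base poses, bone-length constancy) to pin down a metric reconstruction.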

  13. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to around 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been the concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D contents as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. Time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology.

  14. Preference for motion and depth in 3D film

    NASA Astrophysics Data System (ADS)

    Hartle, Brittney; Lugtigheid, Arthur; Kazimi, Ali; Allison, Robert S.; Wilcox, Laurie M.

    2015-03-01

    While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear these always apply well to stereoscopic 3D (S3D) film. Further, while there has been considerable recent research on viewer comfort in S3D media, little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate viewers' preferences for moving S3D film content in a theatre setting. Specifically we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (interaxial separation, IA). The amount of IA had no impact on clip preferences regardless of the direction or speed of camera movement. However, preferences were influenced by camera speed, but only in the in-depth condition, where viewers preferred faster motion. Given that previous research shows that slower speeds are more comfortable for viewing S3D content, our results show that viewing preferences cannot be predicted simply from measures of comfort. Instead, it is clear that viewer response to S3D film is complex and that film parameters selected to enhance comfort may in some instances produce less appealing content.

  15. Integrated bronchoscopic video tracking and 3D CT registration for virtual bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Helferty, James P.; Padfield, Dirk R.

    2003-05-01

    Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. In order to assist physicians in performing biopsies of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance for lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. This procedure only performs the registration at manually selected sites; it does not draw upon the motion information inherent in the bronchoscopic video. Further, the registration procedure is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus, speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.
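
Registering a live video frame against rendered endoluminal views requires an image-similarity measure; normalized cross-correlation is a common choice for this kind of matching (the abstract does not name the actual measure used, so treat this as an illustrative stand-in):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size grayscale images.
    Invariant to gain and offset, which suits video-to-rendering matching
    where lighting differs between modalities."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(1)
frame = rng.random((32, 32))             # stand-in bronchoscopic video frame
render_good = frame * 2.0 + 0.1          # same structure, different gain/offset
render_bad = rng.random((32, 32))        # unrelated rendered viewpoint
```

Maximizing such a score over candidate virtual camera poses, seeded by the tracked bronchoscope trajectory, is the registration step the paper accelerates.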

  16. A new 3D tracking method exploiting the capabilities of digital holography in microscopy

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Merola, F.; Fusco, S.; Embrione, V.; Netti, P. A.; Ferraro, P.

    2013-04-01

    A method for 3D tracking has been developed exploiting Digital Holographic Microscopy (DHM) features. In the framework of a self-consistent platform for manipulation and measurement of biological specimens, we use DHM for quantitative and completely label-free analysis of specimens with low amplitude contrast. Tracking capability extends the potential of DHM by allowing the motion of appropriate probes to be monitored and correlated with sample properties. Complete 3D tracking has been obtained for the probes, avoiding the amplitude-refocusing issue of traditional tracking processing. Our technique belongs to the video tracking methods that, unlike the quadrant photodiode method, open the possibility of tracking multiple probes. Commonly used video tracking algorithms are based on numerical analysis of amplitude images in the focus plane, where the shift of the maxima in the image plane is measured after applying an appropriate threshold. Our approach for video tracking uses a different theoretical basis: a set of interferograms is recorded and the complex wavefields are processed numerically to obtain the three-dimensional displacements of the probes. The procedure works properly on a higher number of probes and independently of their size. This method overcomes traditional video tracking issues such as the inability to measure axial movement and the choice of a suitable threshold mask. The novel configuration allows 3D tracking of micro-particles and can simultaneously furnish quantitative phase-contrast maps of tracked micro-objects by interference microscopy, without changing the configuration. In this paper, we show a new concept for a compact interferometric microscope that ensures this multifunctionality, accomplishing accurate 3D tracking and quantitative phase-contrast analysis through a very simple and compact optical arrangement. Experimental results are presented and discussed for in vitro cells.

  17. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

    Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework successfully tracks 3-D human walking poses in a 3-D environment while exploring only a 4-D state space. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for the monocular sequences with a high perspective effect from the CAVIAR dataset. PMID:23955796
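
Projecting each regressed silhouette through a homography, as described, is a standard homogeneous-coordinate operation; a minimal sketch using a pure-translation homography as the example transform:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography: lift to homogeneous
    coordinates, multiply, then divide by the third coordinate."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                    # back to Euclidean

# A pure translation expressed as a homography (illustrative choice)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
moved = apply_homography(H, np.array([[0.0, 0.0], [1.0, 3.0]]))
```

In the paper the homography instead encodes the ground-plane location, camera view and scene directions, but the point-mapping step is the same.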

  18. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

    As the clinical application grows, there is a rapid technical development of 3-D ultrasound imaging. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we proposed a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability and reduced degrees of freedom and cost. We designed a sliding track with a linear position sensor attached, and it transmitted positional data via a wireless communication module based on Bluetooth, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired when moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs. PMID:23757592
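
With a linear track, volume reconstruction reduces to placing each B-scan at the slice index given by its 1-D sensor reading. A nearest-neighbour placement sketch (real systems interpolate between slices; the spacing and positions here are made up):

```python
import numpy as np

def stack_bscans(frames, positions, spacing):
    """Place 2-D B-scans into a 3-D volume at the slice index given by each
    frame's linear-sensor position (nearest-neighbour, no interpolation)."""
    n_slices = int(round(max(positions) / spacing)) + 1
    h, w = frames[0].shape
    vol = np.zeros((n_slices, h, w))
    for frame, pos in zip(frames, positions):
        vol[int(round(pos / spacing))] = frame
    return vol

frames = [np.full((4, 4), v, dtype=float) for v in (1.0, 2.0, 3.0)]
positions = [0.0, 1.1, 2.05]        # mm, as read from the linear position sensor
vol = stack_bscans(frames, positions, spacing=1.0)
```

This is precisely where the single-axis sensor pays off: a full six-degree-of-freedom tracker would require resampling arbitrarily oriented planes instead of simple stacking.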

  19. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. 
The range of track diameter observed was between 4

  20. Geometric uncertainty of 2D projection imaging in monitoring 3D tumor motion

    NASA Astrophysics Data System (ADS)

    Suh, Yelin; Dieterich, Sonja; Keall, Paul J.

    2007-07-01

    The purpose of this study was to investigate the accuracy of two-dimensional (2D) projection imaging methods in three-dimensional (3D) tumor motion monitoring. Many commercial linear accelerator types have projection imaging capabilities, and tumor motion monitoring is useful for motion inclusive, respiratory gated or tumor tracking strategies. Since 2D projection imaging is limited in its ability to resolve the motion along the imaging beam axis, there is unresolved motion when monitoring 3D tumor motion. From the 3D tumor motion data of 160 treatment fractions for 46 thoracic and abdominal cancer patients, the unresolved motion due to the geometric limitation of 2D projection imaging was calculated as displacement in the imaging beam axis for different beam angles and time intervals. The geometric uncertainty to monitor 3D motion caused by the unresolved motion of 2D imaging was quantified using the root-mean-square (rms) metric. Geometric uncertainty showed interfractional and intrafractional variation. Patient-to-patient variation was much more significant than variation for different time intervals. For the patient cohort studied, as the time intervals increase, the rms, minimum and maximum values of the rms uncertainty show decreasing tendencies for the lung patients but increasing for the liver and retroperitoneal patients, which could be attributed to patient relaxation. Geometric uncertainty was smaller for coplanar treatments than non-coplanar treatments, as superior-inferior (SI) tumor motion, the predominant motion from patient respiration, could always be resolved for coplanar treatments. Overall rms of the rms uncertainty was 0.13 cm for all treatment fractions and 0.18 cm for the treatment fractions whose average breathing peak-trough ranges were more than 0.5 cm. The geometric uncertainty for 2D imaging varies depending on the tumor site, tumor motion range, time interval and beam angle as well as between patients, between fractions and within a fraction.
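
The rms metric described is the root-mean-square of the motion component projected onto the imaging beam axis, since that component is invisible to a single 2D projection. A sketch with a hypothetical SI-dominant trajectory (the study's patient data are not reproduced):

```python
import numpy as np

def unresolved_rms(displacements, beam_dir):
    """RMS of the motion component along the imaging beam axis -- the part
    of 3-D tumour motion a single 2-D projection cannot resolve."""
    b = np.asarray(beam_dir, float)
    b = b / np.linalg.norm(b)
    along = np.asarray(displacements, float) @ b      # per-sample unresolved motion
    return float(np.sqrt(np.mean(along ** 2)))

# Hypothetical tumour displacements (cm): columns are LR, SI, AP
traj = np.array([[0.0, 0.4, 0.1],
                 [0.0, -0.4, -0.1],
                 [0.0, 0.2, 0.05]])
rms_lateral = unresolved_rms(traj, [1.0, 0.0, 0.0])   # beam along LR: SI stays resolved
rms_ap = unresolved_rms(traj, [0.0, 0.0, 1.0])        # beam along AP: loses AP motion
```

This mirrors the paper's coplanar finding: a beam axis perpendicular to the dominant (SI) motion leaves that motion fully resolved in the projection.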

  1. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Bachmair, F.; Bäni, L.; Bergonzo, P.; Caylar, B.; Forcolin, G.; Haughton, I.; Hits, D.; Kagan, H.; Kass, R.; Li, L.; Oh, A.; Phan, S.; Pomorski, M.; Smith, D. S.; Tyzhnevyi, V.; Wallny, R.; Whitehead, D.

    2015-06-01

    A novel device using single-crystal chemical vapour deposited diamond and resistive electrodes in the bulk forming a 3D diamond detector is presented. The electrodes of the device were fabricated with laser assisted phase change of diamond into a combination of diamond-like carbon, amorphous carbon and graphite. The connections to the electrodes of the device were made using a photo-lithographic process. The electrical and particle detection properties of the device were investigated. A prototype detector system consisting of the 3D device connected to a multi-channel readout was successfully tested with 120 GeV protons proving the feasibility of the 3D diamond detector concept for particle tracking applications for the first time.

  2. Light driven micro-robotics with holographic 3D tracking

    NASA Astrophysics Data System (ADS)

    Glückstad, Jesper

    2016-04-01

    We recently pioneered the concept of light-driven micro-robotics, including the new and disruptive 3D-printed micro-tools coined Wave-guided Optical Waveguides, which can be optically trapped in real time and "remote-controlled" in a volume with six degrees of freedom. Exploring the full potential of this new drone-like 3D light robotics approach in challenging microscopic geometries requires a versatile and real-time reconfigurable light coupling that can dynamically track a plurality of "light robots" in 3D to ensure continuous optimal light coupling on the fly. Our latest developments in this new and exciting area are reviewed in this invited paper.

  3. Learning Projectile Motion with the Computer Game "Scorched 3D"

    NASA Astrophysics Data System (ADS)

    Jurcevic, John S.

    2008-01-01

    For most of our students, video games are a normal part of their lives. We should take advantage of this medium to teach physics in a manner that is engrossing for our students. In particular, modern video games incorporate accurate physics in their game engines, and they allow us to visualize the physics through flashy and captivating graphics. I recently used the game "Scorched 3D" to help my students understand projectile motion.
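
Class discussion can compare in-game shots against the drag-free textbook result. A small helper students might use (illustrative only, not part of Scorched 3D, which also models wind and terrain):

```python
import math

def projectile_range(v0, angle_deg, g=9.81):
    """Horizontal range of a projectile launched from ground level,
    ignoring air resistance: R = v0**2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return v0 * v0 * math.sin(2 * theta) / g
```

Students can verify that 45 degrees maximises range in the drag-free case, then investigate why the in-game optimum shifts when wind is enabled.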

  4. Preparation and 3D Tracking of Catalytic Swimming Devices

    PubMed Central

    Campbell, Andrew; Archer, Richard; Ebbens, Stephen

    2016-01-01

    We report a method to prepare catalytically active Janus colloids that "swim" in fluids and describe how to determine their 3D motion using fluorescence microscopy. One commonly deployed method for catalytically active colloids to produce enhanced motion is via an asymmetrical distribution of catalyst. Here this is achieved by spin coating a dispersed layer of fluorescent polymeric colloids onto a flat planar substrate, and then using directional platinum vapor deposition to half coat the exposed colloid surface, making a two-faced "Janus" structure. The Janus colloids are then re-suspended from the planar substrate into an aqueous solution containing hydrogen peroxide. Hydrogen peroxide serves as a fuel that the platinum catalyst decomposes into water and oxygen, but only on one side of the colloid. This asymmetry results in gradients that produce enhanced motion, or "swimming". A fluorescence microscope, together with a video camera, is used to record the motion of individual colloids. The center of the fluorescent emission is found using image analysis to provide an x and y coordinate for each frame of the video. While keeping the microscope focal position fixed, the fluorescence emission from the colloid produces a characteristic concentric ring pattern which is subject to image analysis to determine the particle's relative z position. In this way 3D trajectories for the swimming colloid are obtained, allowing swimming velocity to be accurately measured, and physical phenomena, such as gravitaxis, that may bias the colloid's motion to be detected. PMID:27404327
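
The x-y step of the image analysis, locating the centre of the fluorescent emission, can be sketched as an intensity-weighted centroid; the z step from the concentric ring pattern needs a defocus calibration not reproduced here. A hedged sketch with illustrative names:

```python
def weighted_centroid(frame):
    """Intensity-weighted centroid of a fluorescence image, giving the
    (x, y) coordinates of the emission centre for one video frame.
    frame: 2D list of pixel intensities, rows indexed by y."""
    total = sx = sy = 0.0
    for row_idx, row in enumerate(frame):
        for col_idx, value in enumerate(row):
            total += value
            sx += col_idx * value
            sy += row_idx * value
    return sx / total, sy / total
```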

  5. Sketch on dynamic gesture tracking and analysis exploiting vision-based 3D interface

    NASA Astrophysics Data System (ADS)

    Woo, Woontack; Kim, Namgyu; Wong, Karen; Tadenuma, Makoto

    2000-12-01

    In this paper, we propose a vision-based 3D interface exploiting invisible 3D boxes, arranged in the personal space (i.e. the space reachable by the body without traveling), which allows robust yet simple dynamic gesture tracking and analysis without exploiting complicated sensor-based motion tracking systems. Vision-based gesture tracking and analysis is still a challenging problem, even though we have witnessed rapid advances in computer vision over the last few decades. The proposed framework consists of three main parts: (1) object segmentation without bluescreen and 3D box initialization with depth information, (2) movement tracking by observing how the body passes through the 3D boxes in the personal space and (3) movement feature extraction based on Laban's Effort theory and movement analysis by mapping features to meaningful symbols using time-delay neural networks. Obviously, exploiting depth information using multiview images improves the performance of gesture analysis by reducing the errors introduced by simple 2D interfaces. In addition, the proposed box-based 3D interface lessens the difficulties both in tracking movement in 3D space and in extracting low-level features of the movement. Furthermore, the time-delay neural networks lessen the difficulties in movement analysis through training. Due to its simplicity and robustness, the framework will provide interactive systems, such as the ATR I-cubed Tangible Music System or the ATR Interactive Dance system, with an improved-quality 3D interface. The proposed simple framework can also be extended to other applications requiring dynamic gesture tracking and analysis on the fly.
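
The box-based tracking idea reduces to testing which invisible 3D box the tracked body point currently occupies and recording transitions. A hedged sketch of that core step (the segmentation and neural-network stages are omitted; names are illustrative):

```python
def box_index(point, boxes):
    """Index of the first axis-aligned box containing point, else None.
    boxes: list of (lo_corner, hi_corner) tuples of 3D coordinates."""
    for i, (lo, hi) in enumerate(boxes):
        if all(lo[k] <= point[k] <= hi[k] for k in range(3)):
            return i
    return None

def gesture_sequence(trajectory, boxes):
    """Sequence of distinct boxes a tracked body point passes through,
    usable as the symbol stream fed to a downstream classifier."""
    seq = []
    for p in trajectory:
        i = box_index(p, boxes)
        if i is not None and (not seq or seq[-1] != i):
            seq.append(i)
    return seq
```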

  6. From canonical poses to 3D motion capture using a single camera.

    PubMed

    Fossati, Andrea; Dimitrijevic, Miodrag; Lepetit, Vincent; Fua, Pascal

    2010-07-01

    We combine detection and tracking techniques to achieve robust 3D motion recovery of people seen from arbitrary viewpoints by a single and potentially moving camera. We rely on detecting key postures, which can be done reliably, using a motion model to infer 3D poses between consecutive detections, and finally refining them over the whole sequence using a generative model. We demonstrate our approach in the cases of golf motions filmed using a static camera and walking motions acquired using a potentially moving one. We show that our approach, although monocular, is both metrically accurate, because it integrates information over many frames, and robust, because it can recover from a few misdetections.

  7. Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI.

    PubMed

    Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon

    2015-01-01

    Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Due to the through-plane motion, the tracking of 3D trajectories of the material points and the computation of 3D strain field call for the necessity of building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as point cloud inside the object boundary and the coordinate of each point can be written in parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming its neighboring points are making the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperformed the finite element method and the spline method with a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardium motion in three dimensions.
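
The final strain step can be illustrated in isolation: given a locally fitted deformation gradient F (the moving-least-squares fitting from the displacement field is omitted here), the Green-Lagrange strain is E = 0.5 * (F^T F - I). A sketch under that standard definition, not the authors' code:

```python
def green_lagrange(F):
    """Green-Lagrange strain E = 0.5 * (F^T F - I) for a 3x3
    deformation gradient F, returned as a 3x3 nested list."""
    E = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            ftf = sum(F[k][i] * F[k][j] for k in range(3))  # (F^T F)[i][j]
            E[i][j] = 0.5 * (ftf - (1.0 if i == j else 0.0))
    return E
```

An undeformed region (F = identity) gives zero strain; a 10% uniaxial stretch gives E[0][0] = 0.5 * (1.1**2 - 1) = 0.105.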

  10. Automated 3-D Tracking of Centrosomes in Sequences of Confocal Image Stacks

    SciTech Connect

    Kerekes, Ryan A; Gleason, Shaun Scott; Trivedi, Dr. Niraj; Solecki, Dr. David

    2009-01-01

    In order to facilitate the study of neuron migration, we propose a method for 3-D detection and tracking of centrosomes in time-lapse confocal image stacks of live neuron cells. We combine Laplacian-based blob detection, adaptive thresholding, and the extraction of scale and roundness features to find centrosome-like objects in each frame. We link these detections using the joint probabilistic data association filter (JPDAF) tracking algorithm with a Newtonian state-space model tailored to the motion characteristics of centrosomes in live neurons. We apply our algorithm to image sequences containing multiple cells, some of which had been treated with motion-inhibiting drugs. We provide qualitative results and quantitative comparisons to manual segmentation and tracking results showing that our motion estimates closely agree with those generated by neurobiology experts.
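
The detection stage can be illustrated with a discrete Laplacian response thresholded for bright blob-like peaks; the adaptive thresholding, feature extraction, and JPDAF linking stages are too involved for a short example. A 2D sketch with an assumed fixed threshold:

```python
def laplacian_blobs(img, threshold):
    """Detect bright blob centres as strong positive responses of the
    discrete Laplacian (4-neighbour stencil) on a 2D intensity image.
    Returns (row, col) positions where the response exceeds threshold."""
    h, w = len(img), len(img[0])
    blobs = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            resp = (4 * img[r][c] - img[r - 1][c] - img[r + 1][c]
                    - img[r][c - 1] - img[r][c + 1])
            if resp > threshold:
                blobs.append((r, c))
    return blobs
```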

  11. 3D Guided Wave Motion Analysis on Laminated Composites

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Ultrasonic guided waves have proved useful for structural health monitoring (SHM) and nondestructive evaluation (NDE) due to their ability to propagate long distances with less energy loss compared to bulk waves and due to their sensitivity to small defects in the structure. Analysis of actively transmitted ultrasonic signals has long been used to detect and assess damage. However, there remain many challenging tasks for guided wave based SHM due to the complexity involved with propagating guided waves, especially in the case of composite materials. The multimodal nature of the ultrasonic guided waves complicates the related damage analysis. This paper presents results from parallel 3D elastodynamic finite integration technique (EFIT) simulations used to acquire 3D wave motion in the subject laminated carbon fiber reinforced polymer composites. The acquired 3D wave motion is then analyzed by frequency-wavenumber analysis to study the wave propagation and interaction in the composite laminate. The frequency-wavenumber analysis enables the study of individual modes and visualization of mode conversion. Delamination damage has been incorporated into the EFIT model to generate "damaged" data. The potential for damage detection in laminated composites is discussed at the end.

  12. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed, in which video streams are synchronized and displayed in a 3D model, by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model, according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we found that the 3D model created more situational awareness, making it easier to track people across multiple video streams. Based on all experiences from the experimental setup and the case, recommendations are formulated for use in practice.

  13. Coverage Assessment and Target Tracking in 3D Domains

    PubMed Central

    Boudriga, Noureddine; Hamdi, Mohamed; Iyengar, Sitharama

    2011-01-01

    Recent advances in integrated electronic devices motivated the use of Wireless Sensor Networks (WSNs) in many applications including domain surveillance and mobile target tracking, where a number of sensors are scattered within a sensitive region to detect the presence of intruders and forward related events to some analysis center(s). Obviously, sensor deployment should guarantee an optimal event detection rate and should reduce coverage holes. Most of the coverage control approaches proposed in the literature deal with two-dimensional zones and do not develop strategies to handle coverage in three-dimensional domains, which is becoming a requirement for many applications including water monitoring, indoor surveillance, and projectile tracking. This paper proposes efficient techniques to detect coverage holes in a 3D domain using a finite set of sensors, repair the holes, and track hostile targets. To this end, we use the concepts of Voronoi tessellation, Vietoris complex, and retract by deformation. We show in particular that, through a set of iterative transformations of the Vietoris complex corresponding to the deployed sensors, the number of coverage holes can be computed with a low complexity. Mobility strategies are also proposed to repair holes by appropriately moving sensors towards the uncovered zones. The tracking objective is to set a non-uniform WSN coverage within the monitored domain to allow detecting the target(s) by the set of sensors. We show, in particular, how the proposed algorithms adapt to cope with obstacles. Simulation experiments are carried out to analyze the efficiency of the proposed models. To our knowledge, repairing and tracking are addressed for the first time in 3D spaces with different sensor coverage schemes. PMID:22163733
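
The coverage-assessment idea can be illustrated far more simply than the Vietoris-complex machinery the paper develops: sample the 3D domain and count points within sensing range of at least one sensor. A Monte-Carlo-style sketch, not the authors' algorithm:

```python
def coverage_fraction(sensors, radius, samples):
    """Fraction of sampled 3D points within sensing range of at least
    one sensor; a value below 1.0 indicates coverage holes among the
    sampled points. sensors and samples are lists of (x, y, z) tuples."""
    covered = sum(
        1 for p in samples
        if any(sum((p[i] - s[i]) ** 2 for i in range(3)) <= radius * radius
               for s in sensors)
    )
    return covered / len(samples)
```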

  14. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approaches lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple hypotheses tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem where GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.

  16. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    NASA Astrophysics Data System (ADS)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle that allows for motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process, which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  17. A comparison of 3D scapular kinematics between dominant and nondominant shoulders during multiplanar arm motion

    PubMed Central

    Lee, Sang Ki; Yang, Dae Suk; Kim, Ha Yong; Choy, Won Sik

    2013-01-01

    Background: Generally, the scapular motions of pathologic and contralateral normal shoulders are compared to characterize shoulder disorders. However, the symmetry of scapular motion of normal shoulders remains undetermined. Therefore, the aim of this study was to compare 3-dimensional (3D) scapular motion between dominant and nondominant shoulders during three different planes of arm motion by using an optical tracking system. Materials and Methods: Twenty healthy subjects completed five repetitions of elevation and lowering in sagittal plane flexion, scapular plane abduction, and coronal plane abduction. The 3D scapular motion was measured using an optical tracking system, after minimizing reflective marker skin slippage using ultrasonography. The dynamic 3D motion of the scapula of dominant and nondominant shoulders, and the scapulohumeral rhythm (SHR) were analyzed at each 10° increment during the three planes of arm motion. Results: There was no significant difference in upward rotation or internal rotation (P > 0.05) of the scapula between dominant and nondominant shoulders during the three planes of arm motion. However, there was a significant difference in posterior tilting (P = 0.018) during coronal plane abduction. The SHR was a large positive or negative number in the initial phase of sagittal plane flexion and scapular plane abduction. However, the SHR was a small positive or negative number in the initial phase of coronal plane abduction. Conclusions: Only posterior tilting of the scapula during coronal plane abduction was asymmetrical in our healthy subjects, and depending on the plane of arm motion, the pattern of the SHR differed as well. These differences should be considered in the clinical assessment of shoulder pathology. PMID:23682174
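
The SHR is conventionally the ratio of glenohumeral to scapulothoracic (upward-rotation) contributions between two arm positions, with humerothoracic elevation equal to their sum; tiny scapular increments early in the motion explain the large initial values reported above. A sketch under that conventional definition (not necessarily the exact computation used in this study):

```python
def scapulohumeral_rhythm(d_humerothoracic, d_scapular_upward):
    """Scapulohumeral rhythm between two sampled arm positions, using the
    common definition: glenohumeral increment divided by scapular
    upward-rotation increment, where humerothoracic elevation is the sum
    of the glenohumeral and scapulothoracic contributions."""
    glenohumeral = d_humerothoracic - d_scapular_upward
    return glenohumeral / d_scapular_upward
```

The textbook 2:1 rhythm corresponds to 30° of humerothoracic elevation accompanied by 10° of scapular upward rotation.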

  18. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.

    PubMed

    Erdem, Arif Tanju; Ercan, Ali Özer

    2015-02-01

    In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well-known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused at the EKF with the camera measurements in either the correction stage (as measurement inputs) or the prediction stage (as control inputs). In general, only one type of inertial sensor is employed in the EKF in the literature, or when both are employed they are both fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs using the same data set collected at different motion speeds. In particular, we compare the performances of different approaches based on 3D pose errors, in addition to camera reprojection errors commonly found in the literature, which provides further insight into the strengths and weaknesses of different approaches. We show using both simulated and real data that it is always better to fuse both sensors in the measurement stage and that, in particular, the accelerometer helps more with 3D position tracking accuracy, whereas the gyroscope helps more with 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
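
The control-input versus measurement-input distinction can be illustrated with a plain 1D linear Kalman filter: an accelerometer sample drives the prediction stage, while a camera position fix drives the correction stage. This is a deliberately simplified sketch (the paper's filter is an EKF over full 3D pose; function names are illustrative):

```python
def kf_predict(state, P, accel, dt, q=0.0):
    """Prediction stage: the accelerometer sample enters as a control
    input. state = (position, velocity); P = (p00, p01, p10, p11)."""
    x, v = state
    new_state = (x + v * dt + 0.5 * accel * dt * dt, v + accel * dt)
    p00, p01, p10, p11 = P
    # P <- F P F^T + Q, with F = [[1, dt], [0, 1]] and Q = q * I
    new_P = (p00 + dt * (p01 + p10) + dt * dt * p11 + q,
             p01 + dt * p11,
             p10 + dt * p11,
             p11 + q)
    return new_state, new_P

def kf_update(state, P, z, r=1.0):
    """Correction stage: a camera position measurement z with noise
    variance r enters as a measurement input (H = [1, 0])."""
    x, v = state
    p00, p01, p10, p11 = P
    s = p00 + r                  # innovation covariance
    k0, k1 = p00 / s, p10 / s    # Kalman gain
    y = z - x                    # innovation
    new_state = (x + k0 * y, v + k1 * y)
    new_P = ((1 - k0) * p00, (1 - k0) * p01,
             p10 - k1 * p00, p11 - k1 * p01)
    return new_state, new_P
```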

  19. Two-photon single particle tracking in 3D

    NASA Astrophysics Data System (ADS)

    So, Peter T. C.; Ragan, Timothy; Gratton, Enrico; Carerro, Jenny; Voss, Edward

    1997-05-01

    Transport processes are important in biology and medicine. Examples include virus docking and infection, endocytosis of extracellular protein and phagocytosis of antigenic material. Trafficking driven by molecular motors inside a complex 3D environment is a common theme. The complex sequence of these events is difficult to resolve with conventional techniques, where the actions of many cells are asynchronously averaged. Single particle tracking (SPT) was developed by Ghosh and Webb to address this problem and has proven to be a powerful technique in understanding membrane-protein interaction. Since the traditional SPT method uses wide field illumination and area detectors, it is limited to the study of 2D systems. In this presentation, we report the development of a 3D single particle tracking technique using two-photon excitation. Using a real-time feedback system, we can dynamically position the sub-femtoliter two-photon excitation volume to follow the fluorescent particle under transport by maximizing the detected fluorescent intensity. Further, fluorescence spectroscopy can be performed in real time along the particle trajectory to monitor the underlying biochemical signals driving this transport process. The first application of this instrument will focus on the study of the antigen endocytosis process of macrophages.

  20. High resolution 3D insider detection and tracking.

    SciTech Connect

    Nelson, Cynthia Lee

    2003-09-01

    Vulnerability analysis studies show that one of the worst threats against a facility is that of an active insider during an emergency evacuation. When a criticality or other emergency alarm occurs, employees immediately proceed along evacuation routes to designated areas. Procedures are then implemented to account for all material, classified parts, etc. The 3-Dimensional Video Motion Detection (3DVMD) technology could be used to detect and track possible insider activities during alarm situations, as just described, as well as during normal operating conditions. The 3DVMD technology uses multiple cameras to create 3-dimensional detection volumes or zones. Movement throughout detection zones is tracked and high-level information, such as the number of people and their direction of motion, is extracted. In the described alarm scenario, deviances of evacuation procedures taken by an individual could be immediately detected and relayed to a central alarm station. The insider could be tracked and any protected items removed from the area could be flagged. The 3DVMD technology could also be used to monitor such items as machines that are used to build classified parts. During an alarm, detections could be made if items were removed from the machine. Overall, the use of 3DVMD technology during emergency evacuations would help to prevent the loss of classified items and would speed recovery from emergency situations. Further security could also be added by analyzing tracked behavior (motion) as it corresponds to predicted behavior, e.g., behavior corresponding with the execution of required procedures. This information would be valuable for detecting a possible insider not only during emergency situations, but also during times of normal operation.

  1. Processing 3D form and 3D motion: respective contributions of attention-based and stimulus-driven activity.

    PubMed

    Paradis, A-L; Droulez, J; Cornilleau-Pérès, V; Poline, J-B

    2008-12-01

    This study aims at segregating the neural substrate for the 3D-form and 3D-motion attributes in structure-from-motion perception, and at disentangling the stimulus-driven and endogenous-attention-driven processing of these attributes. Attention and stimulus were manipulated independently: participants had to detect the transitions of one attribute--form, 3D motion or colour--while the visual stimulus underwent successive transitions of all attributes. We compared the BOLD activity related to form and 3D motion in three conditions: stimulus-driven processing (unattended transitions), endogenous attentional selection (task) or both stimulus-driven processing and attentional selection (attended transitions). In all conditions, the form versus 3D-motion contrasts revealed a clear dorsal/ventral segregation. However, while the form-related activity is consistent with previously described shape-selective areas, the activity related to 3D motion does not encompass the usual "visual motion" areas, but rather corresponds to a high-level motion system, including IPL and STS areas. Second, we found a dissociation between the neural processing of unattended attributes and that involved in endogenous attentional selection. Areas selective for 3D-motion and form showed either increased activity at transitions of these respective attributes or decreased activity when subjects' attention was directed to a competing attribute. We propose that both facilitatory and suppressive mechanisms of attribute selection are involved depending on the conditions driving this selection. Therefore, attentional selection is not limited to an increased activity in areas processing stimulus properties, and may unveil different functional localization from stimulus modulation.

  2. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy using an US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate an US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm) but simple regularisation methods could be used to improve RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating with displacement thresholds of 2 mm and 5 mm exhibiting a RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach which also accounts for potential prostate rotations could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.
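
The core matching step, normalised cross correlation evaluated over candidate offsets, can be sketched in 1D; the study applies the same idea to 2D and 3D speckle blocks. An illustrative sketch, not the study's implementation:

```python
import math

def ncc(a, b):
    """Normalised cross correlation of two equal-length patches,
    in [-1, 1]; 1.0 means a perfect (shape) match."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def best_match(template, signal):
    """Offset at which the template best matches the signal (highest
    NCC), i.e. the estimated displacement of the speckle pattern."""
    best_score, best_off = -2.0, 0
    for off in range(len(signal) - len(template) + 1):
        score = ncc(template, signal[off:off + len(template)])
        if score > best_score:
            best_score, best_off = score, off
    return best_off
```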

  3. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and a robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, a vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work which is currently in progress, is described along with the preliminary results and encountered problems.

  4. 3D visualisation and analysis of single and coalescing tracks in Solid state Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, David; Gillmore, Gavin; Brown, Louise; Petford, Nick

    2010-05-01

    Exposure to radon gas (222Rn) and associated ionising decay products can cause lung cancer in humans (1). Solid state Nuclear Track Detectors (SSNTDs) can be used to monitor radon concentrations (2). Alpha particles from radon decay form tracks in the detectors, and these tracks can be etched in order to enable 2D surface image analysis. We have previously shown that confocal microscopy can be used for 3D visualisation of etched SSNTDs (3). The aim of this study was to further investigate track angles and patterns in SSNTDs. A 'LEXT' confocal laser scanning microscope (Olympus Corporation, Japan) was used to acquire 3D image datasets of five CR-39 plastic SSNTDs. The resultant 3D visualisations were analysed by eye and inclination angles assessed on selected tracks. From visual assessment, single isolated tracks as well as coalescing tracks were observed on the etched detectors. In addition, varying track inclination angles were observed. Several different patterns of track formation were seen, such as single isolated and double coalescing tracks. The observed track inclination angles may help to assess the angle at which alpha particles hit the detector. (1) Darby, S. et al. Radon in homes and risk of lung cancer: collaborative analysis of individual data from 13 European case-control studies. British Medical Journal 2005; 330: 223-226. (2) Phillips, P.S., Denman, A.R., Crockett, R.G.M., Gillmore, G., Groves-Kirkby, C.J., Woolridge, A. Comparative analysis of weekly vs. three-monthly radon measurements in dwellings. DEFRA Report No. DEFRA/RAS/03.006 (2004). (3) Wertheim, D., Gillmore, G., Brown, L., and Petford, N. A new method of imaging particle tracks in Solid State Nuclear Track Detectors. Journal of Microscopy 2010; 237: 1-6.

  5. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.
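To see how camera set-up and calibration precision propagate into 3D reconstruction accuracy, consider an idealized rectified (parallel-axis) stereo pair. The function below is a hedged sketch of pinhole triangulation, not the authors' error model:

```python
import numpy as np

def triangulate_parallel(xl, xr, y, f, baseline):
    """Triangulate a point from a rectified (parallel-axis) stereo pair.
    xl, xr: horizontal image coordinates (pixels) in the left/right camera,
    y: vertical image coordinate (pixels), f: focal length in pixels,
    baseline: camera separation (metres). Returns the 3D point (X, Y, Z)."""
    d = xl - xr               # disparity (pixels)
    Z = f * baseline / d      # depth from similar triangles
    X = xl * Z / f
    Y = y * Z / f
    return np.array([X, Y, Z])
```

For a fixed disparity error Δd, the depth error grows quadratically with distance, ΔZ ≈ Z²Δd/(fB), which is why the baseline and the precision of the calibration parameters dominate the accuracy of the retrieved trajectories.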

  6. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single-beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but this leads to the motion-structure paradox: most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  7. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    PubMed

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level set based pose estimation but completely avoid the, typically required, explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor which helps us resolve the tracking ambiguities inherent to region based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.

  8. Teleoperation of a robot manipulator from 3D human hand-arm motion

    NASA Astrophysics Data System (ADS)

    Kofman, Jonathan; Verma, Siddharth; Wu, Xianghai; Luu, Timothy

    2003-10-01

    The control of a robot manipulator by a human operator is often necessary in unstructured dynamic environments with unfamiliar objects. Remote teleoperation is required when human presence at the robot site is undesirable or difficult, such as in handling hazardous materials and operating in dangerous or inaccessible environments. Previous approaches have employed mechanical or other contacting interfaces which require unnatural motions for object manipulation tasks or hinder dexterous human motion. This paper presents a non-contacting method of teleoperating a robot manipulator by having the human operator perform the 3D human hand-arm motion that would naturally be used to complete an object manipulation task, and tracking the motion with a stereo-camera system at a local site. The 3D human hand-arm motion is reconstructed at the remote robot site and is used to control the position and orientation of the robot manipulator end-effector in real-time. Images captured of the robot interacting with objects at the remote site provide visual feedback to the human operator. Tests in teleoperation of the robot manipulator have demonstrated the ability of the human operator to carry out object manipulation tasks remotely, and of the teleoperated robot manipulator system to copy human-arm motions in real-time.

  9. Holographic microscopy for 3D tracking of bacteria

    NASA Astrophysics Data System (ADS)

    Nadeau, Jay; Cho, Yong Bin; El-Kholy, Marwan; Bedrossian, Manuel; Rider, Stephanie; Lindensmith, Christian; Wallace, J. Kent

    2016-03-01

    Understanding when, how, and if bacteria swim is key to understanding critical ecological and biological processes, from carbon cycling to infection. Imaging motility by traditional light microscopy is limited by focus depth, requiring cells to be constrained in z. Holographic microscopy offers an instantaneous 3D snapshot of a large sample volume, and is therefore ideal in principle for quantifying unconstrained bacterial motility. However, resolving and tracking individual cells is difficult due to the low amplitude and phase contrast of the cells; the index of refraction of typical bacteria differs from that of water only at the second decimal place. In this work we present a combination of optical and sample-handling approaches to facilitate bacterial tracking by holographic phase imaging. The first is the design of the microscope: an off-axis design with the optics along a common path, which minimizes alignment issues while providing all of the advantages of off-axis holography. Second, we use anti-reflective coated etalon glass in the design of sample chambers, which reduces internal reflections. The improvement from the anti-reflective coating is seen primarily in phase imaging, and its quantification is presented here. Finally, dyes may be used to increase phase contrast according to the Kramers-Kronig relations. Results using three test strains are presented, illustrating the different types of bacterial motility characterized by an enteric organism (Escherichia coli), an environmental organism (Bacillus subtilis), and a marine organism (Vibrio alginolyticus). Data processing steps to increase the quality of the phase images and facilitate tracking are also discussed.

  10. 3-D imaging of particle tracks in solid state nuclear track detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2010-05-01

    It has been suggested that 3 to 5% of total lung cancer deaths in the UK may be associated with elevated radon concentration. Radon gas levels can be assessed using CR-39 plastic detectors which are often assessed by 2-D image analysis of surface images. 3-D analysis has the potential to provide information relating to the angle at which alpha particles impinge on the detector. In this study we used a "LEXT" OLS3100 confocal laser scanning microscope (Olympus Corporation, Tokyo, Japan) to image tracks on five CR-39 detectors. We were able to identify several patterns of single and coalescing tracks from 3-D visualisation. Thus this method may provide a means of detailed 3-D analysis of Solid State Nuclear Track Detectors.

  11. Drogue tracking using 3D flash lidar for autonomous aerial refueling

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Stettner, Roger

    2011-06-01

    Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic, vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, in the day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
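The RANSAC stage described above can be sketched as follows. As a simplification, the drogue is modelled here as a sphere fitted to the segmented 3D points; the sphere model, iteration count, inlier tolerance, and function names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit: |p - c|^2 = r^2 is linear in
    (c, r^2 - |c|^2). Returns (centre, radius)."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, k = sol[:3], sol[3]
    return centre, np.sqrt(k + centre @ centre)

def ransac_sphere(pts, n_iter=200, tol=0.01, rng=None):
    """RANSAC: repeatedly fit a sphere to 4 random points, keep the model
    with the most inliers, then refit on all inliers of the best model."""
    rng = rng or np.random.default_rng(0)
    best_inliers = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 4, replace=False)]
        try:
            c, r = fit_sphere(sample)
        except np.linalg.LinAlgError:
            continue  # degenerate sample
        resid = np.abs(np.linalg.norm(pts - c, axis=1) - r)
        inliers = resid < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_sphere(pts[best_inliers])
```

The refit on the consensus set plays the role of the statistical noise-reduction step described in the abstract; the recovered centre would then seed the next frame's search.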

  12. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiduciary markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phones' displays, as if the display was a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.

  13. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    NASA Astrophysics Data System (ADS)

    Kanphet, J.; Suriyapee, S.; Dumrongkijudom, N.; Sanghangthum, T.; Kumkhwao, J.; Wisetrintong, M.

    2016-03-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. The interfraction motion was more pronounced than the intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, which has less impact due to the stability of organ movement under DIBH. The systematic reproducibility is also half of the random error, because the high efficiency of a modern linac can reduce systematic uncertainty effectively, while the random error is uncontrollable.
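The abstract does not give its formulas, but population systematic and random setup errors are conventionally computed as the standard deviation of the per-patient mean errors and the root-mean-square of the per-patient standard deviations (the van Herk convention). A sketch under that assumption, per direction:

```python
import numpy as np

def setup_errors(errors_by_patient):
    """Population systematic and random setup errors from per-fraction
    displacement measurements (one array per patient, in mm).

    Systematic error (Sigma): SD of the per-patient mean displacements.
    Random error (sigma): root-mean-square of the per-patient SDs."""
    means = np.array([np.mean(e) for e in errors_by_patient])
    sds = np.array([np.std(e, ddof=1) for e in errors_by_patient])
    systematic = np.std(means, ddof=1)
    random_ = np.sqrt(np.mean(sds ** 2))
    return systematic, random_
```

Applied separately to the vertical, longitudinal, and lateral displacement series, this yields the Σ and σ values of the kind reported above.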

  14. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with those estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to those of the reference methods, the relationship between the different patient groups was similar.

  15. On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.

    PubMed

    Shao, Zhanpeng; Li, Youfu

    2016-02-01

    Motion trajectories tracked from the motions of human, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segment of noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure the trajectory similarity to find similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they can capture the motion cues in trajectory matching and sign recognition satisfactorily.
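As an illustration of the idea, a discrete distance integral invariant can be computed with a rectangular kernel. The paper's kernels and blurred-segment estimation are more elaborate; this sketch only demonstrates the key property, invariance of the descriptor under rigid motion of the trajectory:

```python
import numpy as np

def distance_integral_invariant(traj, scale):
    """Distance integral invariant of a discrete 3-D trajectory: at each
    sample, the average distance to all samples within +/- `scale`
    indices (a rectangular kernel; varying `scale` gives the multiscale
    coarse-to-fine view described in the paper)."""
    n = len(traj)
    inv = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - scale), min(n, i + scale + 1)
        d = np.linalg.norm(traj[lo:hi] - traj[i], axis=1)
        inv[i] = d.mean()
    return inv
```

Because it is built purely from inter-point distances, the signature is unchanged by any rotation and translation of the trajectory, which is what makes it usable for matching trajectories observed in different coordinate frames.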

  16. Artificial neural networks for 3-D motion analysis-Part II: Nonrigid motion.

    PubMed

    Chen, T; Lin, W C; Chen, C T

    1995-01-01

    For pt. I see ibid., p. 1386-93 (1995). An approach applying artificial neural net techniques to 3D nonrigid motion analysis is proposed. The 3D nonrigid motion of the left ventricle of a human heart is examined using biplanar cineangiography data, consisting of the 3D coordinates of 30 coronary artery bifurcation points of the left ventricle and the correspondences of these points taken over 10 time instants during the cardiac cycle. The motion is decomposed into global rigid motion and a set of local nonrigid deformations which are coupled with the global motion. The global rigid motion can be estimated precisely as a translation vector and a rotation matrix. Local nonrigid deformation estimation is discussed. A set of neural nets similar in structure and dynamics but different in physical size is proposed to tackle the problem of nonrigidity. These neural networks are interconnected through feedbacks. The activation function of the output layer is selected so that a feedback is involved in the output updating. The constraints are specified to ensure stable and globally consistent estimation. The objective is to find the optimal deformation matrices that satisfy the constraints for all coronary artery bifurcation points of the left ventricle. The proposed neural networks differ from other existing neural network models in their unique structure and dynamics.

  17. 3D imaging of particle tracks in Solid State Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2009-04-01

    Inhalation of radon gas (222Rn) and associated ionizing decay products is known to cause lung cancer in humans. In the U.K., it has been suggested that 3 to 5% of total lung cancer deaths can be linked to elevated radon concentrations in the home and/or workplace. Radon monitoring in buildings is therefore routinely undertaken in areas of known risk. Indeed, some organisations, such as the Radon Council in the UK and the Environmental Protection Agency in the USA, advocate a 'to test is best' policy. Radon gas occurs naturally, emanating from the decay of 238U in rocks and soils. Its concentration can be measured using CR-39 plastic detectors, which conventionally are assessed by 2D image analysis of the surface; however, there can be some variation in outcomes/readings even in closely spaced detectors. A number of radon measurement methods are currently in use (for example, activated carbon and electrets) but the most widely used are CR-39 solid state nuclear track-etch detectors (SSNTDs). In this technique, heavily ionizing alpha particles leave tracks in the form of radiation damage (via interaction between alpha particles and the atoms making up the CR-39 polymer). 3D imaging of the tracks has the potential to provide information relating to the angle and energy of alpha particles, but this could be time consuming. Here we describe a new method for rapid high-resolution 3D imaging of SSNTDs. A 'LEXT' OLS3100 confocal laser scanning microscope was used in confocal mode to successfully obtain 3D image data on four CR-39 plastic detectors. 3D visualisation and image analysis enabled characterisation of track features. This method may provide a means of rapid and detailed 3D analysis of SSNTDs. Keywords: Radon; SSNTDs; confocal laser scanning microscope; 3D imaging; LEXT

  18. Angle-independent measure of motion for image-based gating in 3D coronary angiography

    SciTech Connect

    Lehmann, Glen C.; Holdsworth, David W.; Drangova, Maria

    2006-05-15

    The role of three-dimensional (3D) image guidance for interventional procedures and minimally invasive surgeries is increasing for the treatment of vascular disease. Currently, most interventional procedures are guided by two-dimensional x-ray angiography, but computed rotational angiography has the potential to provide 3D geometric information about the coronary arteries. The creation of 3D angiographic images of the coronary arteries requires synchronization of data acquisition with respect to the cardiac cycle, in order to minimize motion artifacts. This can be achieved by inferring the extent of motion from a patient's electrocardiogram (ECG) signal. However, a direct measurement of motion (from the 2D angiograms) has the potential to improve the 3D angiographic images by ensuring that only projections acquired during periods of minimal motion are included in the reconstruction. This paper presents an image-based metric for measuring the extent of motion in 2D x-ray angiographic images. Adaptive histogram equalization was applied to projection images to increase the sharpness of coronary arteries, and the superior-inferior component of the weighted centroid (SIC) was measured. The SIC constitutes an image-based metric that can be used to track vessel motion, independent of apparent motion induced by the rotational acquisition. To evaluate the technique, six consecutive patients scheduled for routine coronary angiography procedures were studied. We compared the end of the SIC rest period (ρ) to R-waves (R) detected in the patient's ECG and found a mean difference of 14±80 ms. Two simultaneous angular positions were acquired and ρ was detected for each position. There was no statistically significant difference (P=0.79) between ρ in the two simultaneously acquired angular positions. Thus we have shown the SIC to be independent of view angle, which is critical for rotational angiography. A preliminary image-based gating strategy that employed the SIC
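A minimal sketch of the SIC metric: an intensity-weighted centroid along the superior-inferior (row) axis, differenced between frames to expose rest periods. Inverting the intensity so that contrast-filled (dark) vessels dominate the weighting is an assumption here, as is omitting the adaptive histogram equalization step:

```python
import numpy as np

def si_weighted_centroid(frame):
    """Superior-inferior component of the intensity-weighted centroid of
    an angiographic frame (rows assumed to run superior -> inferior).
    Weights use inverted intensity so dark, contrast-filled vessels
    dominate (an assumption for this sketch)."""
    w = frame.max() - frame
    rows = np.arange(frame.shape[0])
    return (w.sum(axis=1) * rows).sum() / w.sum()

def motion_metric(frames):
    """Frame-to-frame SIC displacement; minima indicate rest periods
    suitable for gated reconstruction."""
    sic = np.array([si_weighted_centroid(f) for f in frames])
    return np.abs(np.diff(sic))
```

Because the centroid is projected onto a single axis, apparent left-right motion from the rotational acquisition largely cancels, consistent with the view-angle independence reported above.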

  19. Ion track reconstruction in 3D using alumina-based fluorescent nuclear track detectors.

    PubMed

    Niklas, M; Bartz, J A; Akselrod, M S; Abollahi, A; Jäkel, O; Greilich, S

    2013-09-21

    Fluorescent nuclear track detectors (FNTDs) based on Al2O3: C, Mg single crystal combined with confocal microscopy provide 3D information on ion tracks with a resolution limited only by light diffraction. FNTDs are also ideal substrates to be coated with cells to engineer cell-fluorescent ion track hybrid detectors (Cell-Fit-HD). This radiobiological tool enables a novel platform linking cell responses to physical dose deposition on a sub-cellular level in proton and heavy ion therapies. To achieve spatial correlation between single ion hits in the cell coating and its biological response, the ion traversals have to be reconstructed in 3D using the depth information gained by the FNTD read-out. FNTDs were coated with a confluent human lung adenocarcinoma epithelial (A549) cell layer. Carbon ion irradiation of the hybrid detector was performed perpendicular and angular to the detector surface. In situ imaging of the fluorescently labeled cell layer and the FNTD was performed in a sequential read-out. Making use of the trajectory information provided by the FNTD, the accuracy of 3D track reconstruction of single particles traversing the hybrid detector was studied. The accuracy is strongly influenced by the irradiation angle and therefore by the complexity of the FNTD signal. Perpendicular irradiation results in the highest accuracy, with an error smaller than 0.10°. The ability of FNTD technology to provide accurate 3D ion track reconstruction makes it a powerful tool for radiobiological investigations in clinical ion beams, either being used as a substrate to be coated with living tissue or being implanted in vivo. PMID:23965401

  20. Ion track reconstruction in 3D using alumina-based fluorescent nuclear track detectors

    NASA Astrophysics Data System (ADS)

    Niklas, M.; Bartz, J. A.; Akselrod, M. S.; Abollahi, A.; Jäkel, O.; Greilich, S.

    2013-09-01

    Fluorescent nuclear track detectors (FNTDs) based on Al2O3: C, Mg single crystal combined with confocal microscopy provide 3D information on ion tracks with a resolution limited only by light diffraction. FNTDs are also ideal substrates to be coated with cells to engineer cell-fluorescent ion track hybrid detectors (Cell-Fit-HD). This radiobiological tool enables a novel platform linking cell responses to physical dose deposition on a sub-cellular level in proton and heavy ion therapies. To achieve spatial correlation between single ion hits in the cell coating and its biological response, the ion traversals have to be reconstructed in 3D using the depth information gained by the FNTD read-out. FNTDs were coated with a confluent human lung adenocarcinoma epithelial (A549) cell layer. Carbon ion irradiation of the hybrid detector was performed perpendicular and angular to the detector surface. In situ imaging of the fluorescently labeled cell layer and the FNTD was performed in a sequential read-out. Making use of the trajectory information provided by the FNTD, the accuracy of 3D track reconstruction of single particles traversing the hybrid detector was studied. The accuracy is strongly influenced by the irradiation angle and therefore by the complexity of the FNTD signal. Perpendicular irradiation results in the highest accuracy, with an error smaller than 0.10°. The ability of FNTD technology to provide accurate 3D ion track reconstruction makes it a powerful tool for radiobiological investigations in clinical ion beams, either being used as a substrate to be coated with living tissue or being implanted in vivo.

  1. Faceless identification: a model for person identification using the 3D shape and 3D motion as cues

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Li, Haibo

    1999-02-01

    Person identification by using biometric methods based on image sequences, or still images, often requires a controllable and cooperative environment during the image capturing stage. In the forensic case the situation is more likely to be the opposite. In this work we propose a method that makes use of the anthropometry of the human body and human actions as cues for identification. Image sequences from surveillance systems are used, which can be seen as monocular image sequences. A 3D deformable wireframe body model is used as a platform to handle the non-rigid information of the 3D shape and 3D motion of the human body from the image sequence. A recursive method for estimating global motion and local shape variations is presented, using two recursive feedback systems.

  2. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697
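The random walker step of the pipeline can be illustrated with a minimal dense two-label solver of the combinatorial Dirichlet problem (Grady's formulation). This is a toy version for small 2D images, not the authors' implementation; a practical version would use sparse solvers (e.g. scikit-image's `random_walker`), and the seed propagation by deformable registration is omitted:

```python
import numpy as np

def random_walker_2d(img, seeds, beta=100.0):
    """Minimal dense random-walker segmentation (two labels) on a 2-D
    image. `seeds` holds 1 and 2 at seed pixels, 0 elsewhere. Solves for
    the probability that a random walker started at each pixel reaches a
    label-1 seed first, then thresholds at 0.5."""
    h, w = img.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    L = np.zeros((n, n))  # graph Laplacian, 4-neighbour grid

    def add_edge(a, b, wt):
        L[a, b] -= wt; L[b, a] -= wt
        L[a, a] += wt; L[b, b] += wt

    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # Gaussian edge weights on intensity difference
                add_edge(idx[y, x], idx[y, x + 1],
                         np.exp(-beta * (img[y, x] - img[y, x + 1]) ** 2))
            if y + 1 < h:
                add_edge(idx[y, x], idx[y + 1, x],
                         np.exp(-beta * (img[y, x] - img[y + 1, x]) ** 2))

    s = seeds.ravel()
    unlabeled = s == 0
    xb = (s == 1).astype(float)  # Dirichlet values: 1 at label-1 seeds
    A = L[np.ix_(unlabeled, unlabeled)]
    rhs = -L[np.ix_(unlabeled, ~unlabeled)] @ xb[~unlabeled]
    prob = xb.copy()
    prob[unlabeled] = np.linalg.solve(A, rhs)
    return np.where(prob.reshape(h, w) >= 0.5, 1, 2)
```

In the paper's pipeline the seed positions fed to this step come from a few manually seeded slices, carried to the other time frames by the deformable registration.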

  3. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations.

  4. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI

    PubMed Central

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2014-01-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697

  5. Analysis of thoracic aorta hemodynamics using 3D particle tracking velocimetry and computational fluid dynamics.

    PubMed

    Gallo, Diego; Gülan, Utku; Di Stefano, Antonietta; Ponzini, Raffaele; Lüthi, Beat; Holzner, Markus; Morbiducci, Umberto

    2014-09-22

    Parallel to the massive use of image-based computational hemodynamics to study the complex flow establishing in the human aorta, the need for suitable experimental techniques and ad hoc cases for the validation and benchmarking of numerical codes has steadily grown. Here we present a study in which the 3D pulsatile flow in an anatomically realistic phantom of the human ascending aorta is investigated both experimentally and computationally. The experimental study uses 3D particle tracking velocimetry (PTV) to characterize the flow field in vitro, while the finite volume method is applied to numerically solve the governing equations of motion in the same domain, under the same conditions. Our findings show excellent agreement between the computed and measured flow fields during the forward flow phase, while the agreement is poorer during the reverse flow phase. In conclusion, we demonstrate that 3D PTV is well suited to the detailed study of complex unsteady flows such as those in the aorta and to the validation of computational models of aortic hemodynamics. In a future step, it will be possible to take advantage of the ability of 3D PTV to evaluate velocity fluctuations and thereby gain further knowledge of the process of transition to turbulence occurring in the thoracic aorta.

  6. Computational optical-sectioning microscopy for 3D quantization of cell motion: results and challenges

    NASA Astrophysics Data System (ADS)

    McNally, James G.

    1994-09-01

    How cells move and navigate within a 3D tissue mass is of central importance in such diverse problems as embryonic development, wound healing and metastasis. This locomotion can now be visualized and quantified by using computational optical-sectioning microscopy. In this approach, a series of 2D images at different depths in a specimen are stacked to construct a 3D image, and then, with knowledge of the microscope's point-spread function, the actual distribution of fluorescent intensity in the specimen is estimated via computation. When coupled with wide-field optics and a cooled CCD camera, this approach permits non-destructive 3D imaging of living specimens over long time periods. With these techniques, we have observed a complex diversity of motile behaviors in a model embryonic system, the cellular slime mold Dictyostelium. To understand the mechanisms which control these various behaviors, we are examining motion in various Dictyostelium mutants with known defects in proteins thought to be essential for signal reception, cell-cell adhesion or locomotion. This application of computational techniques to analyze 3D cell locomotion raises several technical challenges. Image restoration techniques must be fast enough to process numerous 1 Gbyte time-lapse data sets (16 Mbytes per 3D image × 60 time points). Because some cells are weakly labeled and background intensity is often high due to unincorporated dye, the SNR in some of these images is poor. Currently, the images are processed by a regularized linear least-squares restoration method, and occasionally by a maximum-likelihood method. Also required for these studies are accurate automated tracking procedures to generate both 3D trajectories for individual cells and 3D flows for a group of cells. Tracking is currently done independently for each cell, using a cell's image as a template to search for a similar image at the next time point. Finally, sophisticated visualization techniques are needed to view the

  7. Motion tracking in narrow spaces: a structured light approach.

    PubMed

    Olesen, Oline Vinter; Paulsen, Rasmus R; Højgaard, Liselotte; Roed, Bjarne; Larsen, Rasmus

    2010-01-01

    We present a novel tracking system for patient head motion inside 3D medical scanners. Currently, the system is targeted at the Siemens High Resolution Research Tomograph (HRRT) PET scanner. Partial face surfaces are reconstructed using a miniaturized structured light system. The reconstructed 3D point clouds are matched to a reference surface using a robust iterative closest point algorithm. A main challenge is the narrow geometry, requiring a compact structured light system and an oblique angle of observation. The system is validated using a mannequin head mounted on a rotary stage, and we compare it to a standard optical motion tracker based on a rigid tracking tool. Our system achieves an angular RMSE of 0.11 degrees, demonstrating its relevance for motion-compensated 3D scan image reconstruction as well as its competitiveness against the standard optical system, which has an RMSE of 0.08 degrees. Finally, we demonstrate qualitative results on real face motion estimation.
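
    The inner step of an iterative closest point loop like the one above is a closed-form rigid alignment between corresponded point sets. A minimal sketch of that step (the Kabsch solution; the paper's robust ICP variant adds correspondence search and outlier rejection, not shown):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q
    (rows are corresponding 3D points): the closed-form core of an
    iterative-closest-point iteration."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known 10-degree rotation about z plus a small translation.
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
P = np.random.default_rng(0).normal(size=(50, 3))
Q = P @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = kabsch(P, Q)
print(np.allclose(R, R_true))  # True
```

    Comparing the recovered rotation angle against a rotary-stage ground truth, as the authors do, yields the angular RMSE quoted above.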

  8. LayTracks3D: A new approach for meshing general solids using medial axis transform

    SciTech Connect

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  9. A 3D feature point tracking method for ion radiation

    NASA Astrophysics Data System (ADS)

    Kouwenberg, Jasper J. M.; Ulrich, Leonie; Jäkel, Oliver; Greilich, Steffen

    2016-06-01

    A robust and computationally efficient algorithm for automated tracking of high densities of particles travelling in (semi-)straight lines is presented. It extends the implementation of Sbalzarini and Koumoutsakos (2005) and is intended for use in the analysis of single ion track detectors. By including information from existing tracks in the exclusion criteria and a recursive cost minimization function, the algorithm is robust to variations in the measured particle tracks. A trajectory relinking algorithm was included to resolve the crossing of tracks in high particle density images. Validation of the algorithm was performed using fluorescent nuclear track detectors (FNTD) irradiated with high and low fluences of (heavy) ions and showed fewer than 1% faulty trajectories in the latter case.
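
    The frame-to-frame linking at the heart of such a tracker can be sketched as a global assignment problem. This is a simplified stand-in: it minimises only summed squared displacement via the Hungarian algorithm, without the track-history terms and recursive cost minimisation the paper adds.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_pts, next_pts, max_disp=5.0):
    """Link detections in consecutive frames by globally minimising the
    summed squared displacement, then drop links longer than max_disp."""
    cost = ((prev_pts[:, None, :] - next_pts[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols)
            if cost[r, c] <= max_disp ** 2]

prev_pts = np.array([[0.0, 0.0], [10.0, 0.0]])
next_pts = np.array([[10.5, 0.2], [0.3, -0.1]])
print(link_frames(prev_pts, next_pts))  # [(0, 1), (1, 0)]
```

    Unmatched detections (links exceeding the displacement cutoff) would start new candidate tracks, which a relinking pass could later stitch across crossings.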

  10. Real-time visual sensing system achieving high-speed 3D particle tracking with nanometer resolution.

    PubMed

    Cheng, Peng; Jhiang, Sissy M; Menq, Chia-Hsiang

    2013-11-01

    This paper presents a real-time visual sensing system created to achieve high-speed three-dimensional (3D) motion tracking of microscopic spherical particles in aqueous solutions with nanometer resolution. The system comprises a complementary metal-oxide-semiconductor (CMOS) camera, a field programmable gate array (FPGA), and real-time image processing programs. The CMOS camera has high photosensitivity and a superior SNR. It acquires images of 128×120 pixels at a frame rate of up to 10,000 frames per second (fps) under white-light illumination from a standard 100 W halogen lamp. The real-time image stream is downloaded from the camera directly to the FPGA, wherein a 3D particle-tracking algorithm calculates the 3D position of the target particle in real time. Two important objectives are achieved: real-time estimation of the 3D position at the maximum frame rate of the camera, and precise control of the timing of the system's output data stream. Two sets of experiments were conducted to demonstrate the performance of the system. First, the visual sensing system was used to track the motion of a 2 μm polystyrene bead whose motion was controlled by a three-axis piezo motion stage. The ability to track long-range motion with nanometer resolution in all three axes is demonstrated. Second, it was used to measure the Brownian motion of the 2 μm polystyrene bead, which was stabilized in aqueous solution by a laser trapping system. PMID:24216655
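
    The paper does not spell out its FPGA algorithm, but the standard way to localise a spherical bead with sub-pixel lateral precision is an intensity-weighted centre of mass; a minimal sketch (axial z would come from a defocus model, not shown):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centre of mass of a particle image: a simple
    sub-pixel x/y estimator suitable for a streaming pipeline."""
    img = img - img.min()               # crude background removal
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (ys * img).sum() / total, (xs * img).sum() / total

# Synthetic Gaussian spot centred at (12.3, 20.7).
ys, xs = np.indices((40, 40))
spot = np.exp(-((ys - 12.3) ** 2 + (xs - 20.7) ** 2) / (2 * 2.0 ** 2))
cy, cx = centroid(spot)
print(round(cy, 1), round(cx, 1))  # 12.3 20.7
```

    Because each frame's estimate reduces to a few multiply-accumulate passes over the image, such an estimator can keep up with multi-kHz frame rates in hardware.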

  11. Determining 3-D motion and structure from image sequences

    NASA Technical Reports Server (NTRS)

    Huang, T. S.

    1982-01-01

    A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which motion and structure parameters are determined by solving a set of eight linear equations and a singular value decomposition of a 3×3 matrix. It is shown that the solution thus obtained is unique.
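
    The linear eight-point computation can be sketched as follows. This is a common modern variant that solves the stacked epipolar constraints by SVD; the practical normalisation and rank-2 enforcement steps are omitted, and the synthetic check only verifies the constraint residual.

```python
import numpy as np

def essential_from_points(x1, x2):
    """Linear eight-point estimate of the essential matrix E from >= 8
    normalised correspondences satisfying x2^T E x1 = 0 (rows of x1/x2
    are (x, y, 1)): the smallest singular vector of the stacked system."""
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)

# Synthetic check: camera 2 translated by t relative to camera 1 (R = I).
rng = np.random.default_rng(1)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(8, 3))   # 3D points
t = np.array([0.2, 0.0, 0.0])
x1 = np.column_stack([X[:, :2] / X[:, 2:3], np.ones(8)])
Xc2 = X - t
x2 = np.column_stack([Xc2[:, :2] / Xc2[:, 2:3], np.ones(8)])
E = essential_from_points(x1, x2)
print(np.max(np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))) < 1e-9)  # True
```

    Decomposing the recovered E (via SVD of the 3×3 matrix, as in the paper) then yields the rotation and the translation direction.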

  12. Robot motion tracking system with multiple views

    NASA Astrophysics Data System (ADS)

    Yamano, Hiroshi; Saito, Hideo

    2001-10-01

    In workspaces where human workers and industrial robots operate together, it has become necessary to monitor robot motion for safety. For such robot surveillance, we propose a robot tracking system based on multiple view images. In this system, we treat the problem of tracking robot movement as one of estimating the pose parameters in every frame. The tracking algorithm consists of four stages: an image generation stage, an estimation stage, a parameter searching stage, and a prediction stage. In the first stage, the robot area of the real image is extracted by background subtraction; the YUV color space is used to reduce sensitivity to changes in lighting conditions. By calibrating the extrinsic and intrinsic parameters of all cameras with Tsai's method, we can project a 3D model of the robot onto each camera view. In the next stage, the correlation between the input image and the projected model image is calculated, defined by the overlap of the robot areas in the real and 3D images. In the third stage, the pose parameters of the robot are estimated by maximizing this correlation. For computational efficiency, the high-dimensional pose parameter space is divided into many low-dimensional sub-spaces in accordance with the pose parameters predicted from the previous frame. We apply the proposed system to pose estimation of a 5-axis robot manipulator. The estimated pose parameters successfully match the actual poses of the robot.

  13. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to three-dimensional (3D) motion analysis for the quantitative study of sport gestures and the evaluation of athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since both land and underwater cameras are required. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846
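
    Once the camera parameters are known, reconstructing a marker from two views reduces to triangulation; the reconstructed positions can then be checked against the known bar length, as in the accuracy test above. A standard linear (DLT) two-view triangulation sketch (not necessarily the authors' exact reconstruction code):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two calibrated
    views; P1, P2 are 3x4 projection matrices, uv1/uv2 pixel coordinates."""
    A = np.stack([uv1[0] * P1[2] - P1[0],
                  uv1[1] * P1[2] - P1[1],
                  uv2[0] * P2[2] - P2[0],
                  uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras a metre apart, both looking down +z.
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true))  # True
```

    The inter-marker distance error is then simply the difference between the norm of two such reconstructed points and the known bar length.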

  14. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to three-dimensional (3D) motion analysis for the quantitative study of sport gestures and the evaluation of athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since both land and underwater cameras are required. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  15. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to three-dimensional (3D) motion analysis for the quantitative study of sport gestures and the evaluation of athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since both land and underwater cameras are required. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  16. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display offers users the enjoyment of stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be displayed simultaneously without degrading resolution or increasing display cost unacceptably. An alternative to presenting multiple views is to measure the observer's position with a viewer-tracking sensor; this viewer-tracking component is essential for fluently rendering and accurately projecting the stereo video. In order to render stereo content with respect to the user's viewpoint and to optically project the content onto the left and right eyes of the user accurately, this study develops a real-time viewer-tracking technique that allows the user to move around freely when watching the autostereoscopic display. It comprises face detection using multiple eigenspaces trained under various lighting conditions and fast block matching that tracks four motion parameters of the user's face region. Edge Orientation Histograms (EOH) with Real AdaBoost are also applied to improve the performance of the original AdaBoost algorithm. The AdaBoost algorithm with Haar features from Intel's OpenCV library is used to detect the human face, with image rotation applied to enhance accuracy. The frame rate of the viewer-tracking process reaches up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still influenced by varying environmental conditions, the accuracy, robustness and efficiency of the viewer-tracking system are also evaluated in this study.
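
    The fast block-matching step mentioned above can be sketched as an exhaustive sum-of-absolute-differences (SAD) search over a small window. This toy version tracks only integer translation of a face-region block; the paper's tracker covers four motion parameters.

```python
import numpy as np

def block_match(prev_img, next_img, top, left, size=8, search=4):
    """Find the integer (dy, dx) inside a small search window that
    minimises the SAD between a block in the previous frame and the
    corresponding candidate blocks in the next frame."""
    block = prev_img[top:top + size, left:left + size]
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            cand = next_img[y:y + size, x:x + size]
            if cand.shape != block.shape:
                continue                      # window fell off the image
            cost = np.abs(block - cand).sum()
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

rng = np.random.default_rng(2)
frame = rng.random((32, 32))
shifted = np.roll(np.roll(frame, 2, axis=0), -1, axis=1)  # down 2, left 1
print(block_match(frame, shifted, top=10, left=10))  # (2, -1)
```

    The exhaustive search is O(size² · search²) per block, which is why real-time trackers keep the search window small and seed it from the previous frame's motion.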

  17. Full 3-D transverse oscillations: a method for tissue motion estimation.

    PubMed

    Salles, Sebastien; Liebgott, Hervé; Garcia, Damien; Vray, Didier

    2015-08-01

    We present a new method to estimate 4-D (3-D + time) tissue motion. The method combines 3-D phase-based motion estimation with an unconventional beamforming strategy. The beamforming technique allows us to obtain full 3-D RF volumes with axial, lateral, and elevation modulations. Based on these volumes, we propose a method to estimate 3-D motion that uses phase images instead of amplitude images. First, volumes featuring 3-D oscillations are created using only a single apodization function, and the three components of the displacement between two consecutive volumes are then estimated simultaneously. The validity of the method is investigated through simulations and phantom experiments. The results are compared with those obtained with two other conventional estimation methods: block matching and optical flow. The results show that the proposed method outperforms the conventional methods, especially in the transverse directions.
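
    The principle behind phase-based displacement estimation can be shown in one dimension: for a signal oscillating at a known spatial frequency, a sub-sample shift appears as a phase offset of the analytic signal. This is only the 1D analogue of the 3D estimator described above, with illustrative parameter values.

```python
import numpy as np
from scipy.signal import hilbert

def phase_shift_displacement(s1, s2, freq):
    """Estimate the displacement between two copies of an oscillating
    (RF-like) signal from the mean phase difference of their analytic
    signals; freq is the modulation frequency in cycles per unit length."""
    a1, a2 = hilbert(s1), hilbert(s2)
    dphi = np.angle(np.mean(a1 * np.conj(a2)))   # mean phase difference
    return dphi / (2 * np.pi * freq)             # radians -> distance

# A 5 cycles/mm oscillation sampled every 0.01 mm, shifted by 0.018 mm.
dx, freq, shift = 0.01, 5.0, 0.018
x = np.arange(0, 4, dx)
s1 = np.cos(2 * np.pi * freq * x)
s2 = np.cos(2 * np.pi * freq * (x - shift))
print(round(phase_shift_displacement(s1, s2, freq), 4))  # 0.018
```

    In the 3D method, oscillations along the lateral and elevation directions play the same role for the transverse displacement components, which is why phase estimation works there without amplitude interpolation.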

  18. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization than other 3D display technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were evaluated in a user study. The results revealed a high user preference for freehand interaction with the light field display as well as the relatively low cognitive demand of this technique. Our results also revealed some limitations of the proposed setup and adjustments to be addressed in future work. PMID:25875189

  19. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground-truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In a therapy guidance scenario, MRI imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.

  20. On-line 3D motion estimation using low resolution MRI.

    PubMed

    Glitzner, M; de Senneville, B Denis; Lagendijk, J J W; Raaymakers, B W; Crijns, S P M

    2015-08-21

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground-truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In a therapy guidance scenario, MRI imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.
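
    The intuition behind the resolution experiment can be mimicked in miniature: coarsen a smooth synthetic deformation component by a factor of 2 and measure how much error the coarsening reintroduces on the original grid. The field and thresholds below are illustrative, not the paper's data.

```python
import numpy as np

def downsample2(f):
    """2x block-average coarsening of a 2D field, standing in for
    acquiring the image at half the spatial resolution."""
    return f.reshape(f.shape[0] // 2, 2, f.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample2(f):
    """Nearest-neighbour upsampling back to the original grid."""
    return np.repeat(np.repeat(f, 2, axis=0), 2, axis=1)

# A smooth synthetic deformation component (mm) on a 64x64 grid.
y, x = np.mgrid[0:64, 0:64] / 64.0
field = 3.0 * np.sin(np.pi * x) * np.cos(np.pi * y)
rmse = np.sqrt(np.mean((upsample2(downsample2(field)) - field) ** 2))
print(rmse < 0.1)  # True: smooth motion survives factor-2 coarsening
```

    Because physiological deformation fields are spatially smooth, they contain little energy at the scales removed by moderate downsampling, which is the effect the paper quantifies on real 4D data.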

  1. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Artuso, M.; Bachmair, F.; Bäni, L.; Bartosik, M.; Beacham, J.; Bellini, V.; Belyaev, V.; Bentele, B.; Berdermann, E.; Bergonzo, P.; Bes, A.; Brom, J.-M.; Bruzzi, M.; Cerv, M.; Chau, C.; Chiodini, G.; Chren, D.; Cindro, V.; Claus, G.; Collot, J.; Costa, S.; Cumalat, J.; Dabrowski, A.; D`Alessandro, R.; de Boer, W.; Dehning, B.; Dobos, D.; Dünser, M.; Eremin, V.; Eusebi, R.; Forcolin, G.; Forneris, J.; Frais-Kölbl, H.; Gan, K. K.; Gastal, M.; Goffe, M.; Goldstein, J.; Golubev, A.; Gonella, L.; Gorišek, A.; Graber, L.; Grigoriev, E.; Grosse-Knetter, J.; Gui, B.; Guthoff, M.; Haughton, I.; Hidas, D.; Hits, D.; Hoeferkamp, M.; Hofmann, T.; Hosslet, J.; Hostachy, J.-Y.; Hügging, F.; Jansen, H.; Janssen, J.; Kagan, H.; Kanxheri, K.; Kasieczka, G.; Kass, R.; Kassel, F.; Kis, M.; Kramberger, G.; Kuleshov, S.; Lacoste, A.; Lagomarsino, S.; Lo Giudice, A.; Maazouzi, C.; Mandic, I.; Mathieu, C.; McFadden, N.; McGoldrick, G.; Menichelli, M.; Mikuž, M.; Morozzi, A.; Moss, J.; Mountain, R.; Murphy, S.; Oh, A.; Olivero, P.; Parrini, G.; Passeri, D.; Pauluzzi, M.; Pernegger, H.; Perrino, R.; Picollo, F.; Pomorski, M.; Potenza, R.; Quadt, A.; Re, A.; Riley, G.; Roe, S.; Sapinski, M.; Scaringella, M.; Schnetzer, S.; Schreiner, T.; Sciortino, S.; Scorzoni, A.; Seidel, S.; Servoli, L.; Sfyrla, A.; Shimchuk, G.; Smith, D. S.; Sopko, B.; Sopko, V.; Spagnolo, S.; Spanier, S.; Stenson, K.; Stone, R.; Sutera, C.; Taylor, A.; Traeger, M.; Tromson, D.; Trischuk, W.; Tuve, C.; Uplegger, L.; Velthuis, J.; Venturi, N.; Vittone, E.; Wagner, S.; Wallny, R.; Wang, J. C.; Weilhammer, P.; Weingarten, J.; Weiss, C.; Wengler, T.; Wermes, N.; Yamouni, M.; Zavrtanik, M.

    2016-07-01

    In the present study, results towards the development of a 3D diamond sensor are presented. Conductive channels are produced inside the sensor bulk using a femtosecond laser. This electrode geometry allows full charge collection even for low quality diamond sensors. Results from testbeam show that charge is collected by these electrodes. In order to understand the channel growth parameters, with the goal of producing low resistivity channels, the conductive channels produced with a different laser setup are evaluated by Raman spectroscopy.

  2. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714
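
    An extended Kalman filter like the one above repeatedly linearises and runs the standard linear predict/update cycle. A minimal sketch of that linear core on a single tracked coordinate (constant-velocity model; the matrices and noise levels are illustrative, not the paper's):

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One linear Kalman predict/update cycle: state x with covariance P,
    measurement z; an EKF runs this same cycle on linearised models."""
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ (z - H @ x)                            # update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model tracking one noisy coordinate with true slope 2.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: position, velocity
H = np.array([[1.0, 0.0]])               # we only observe position
Q, R = 1e-4 * np.eye(2), np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(3)
for k in range(1, 60):
    z = np.array([2.0 * k + rng.normal(0, 0.5)])
    x, P = kf_step(x, P, z, F, Q, H, R)
print(x[1])  # velocity estimate, close to the true slope of 2
```

    In the paper's setting, the state instead holds pose and animation parameters, and H is the linearised projection of the 3D face model into the image.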

  3. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  4. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  5. Rigid Body Motion in Stereo 3D Simulation

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2010-01-01

    This paper addresses the difficulties experienced by first-grade students studying rigid body motion at Sofia University. Most quantities describing the rigid body are in relations that the students find hard to visualize and understand. They also lose the notion of cause-result relations between vector quantities, such as the relation between…

  6. Automatic alignment of standard views in 3D echocardiograms using real-time tracking

    NASA Astrophysics Data System (ADS)

    Orderud, Fredrik; Torp, Hans; Rabben, Stein Inge

    2009-02-01

    In this paper, we present an automatic approach for alignment of standard apical and short-axis slices, and for correcting them for out-of-plane motion, in 3D echocardiography. This is enabled by using real-time Kalman tracking to perform automatic left ventricle segmentation with a coupled deformable model, consisting of a left ventricle model as well as structures for the right ventricle and the left ventricle outflow tract. Landmark points from the segmented model are then used to generate standard apical and short-axis slices. The slices are automatically updated after tracking in each frame to correct for out-of-plane motion caused by longitudinal shortening of the left ventricle. Results from a dataset of 35 recordings demonstrate the potential for automating apical slice initialization and dynamic short-axis slices. Apical 4-chamber, 2-chamber and long-axis slices are generated based on an assumption of fixed angles between the slices, and short-axis slices are generated so that they follow the same myocardial tissue over the entire cardiac cycle. The error compared to manual annotation was 8.4 ± 3.5 mm for the apex, 3.6 ± 1.8 mm for the mitral valve, and 8.4 ± 7.4 for the apical 4-chamber view. The high computational efficiency and automatic behavior of the method enable it to operate in real time, potentially during image acquisition.

  7. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869
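The geometric core of multi-camera photogrammetric reconstruction — locating a feature seen from two calibrated viewpoints — can be sketched with midpoint triangulation (a toy illustration; the paper's open-source pipeline additionally handles calibration and many synchronized views):

```python
# Two-view midpoint triangulation: given two camera centres and unit-free
# viewing rays toward the same feature, recover its 3D position as the
# midpoint of the closest points on the two rays. This is only the core
# geometry; real photogrammetry solves a calibrated multi-view problem.

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1+s*d1 and c2+t*d2."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def mul(a, k): return [x * k for x in a]
    r = sub(c2, c1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero only for parallel rays
    s = (e * c - b * f) / denom
    t = (b * e - a * f) / denom
    p1 = add(c1, mul(d1, s))
    p2 = add(c2, mul(d2, t))
    return mul(add(p1, p2), 0.5)

# Two cameras on the x-axis, both looking at the point (0, 0, 5):
print(triangulate_midpoint([-1, 0, 0], [1, 0, 5], [1, 0, 0], [-1, 0, 5]))
```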

  8. 3-D tracking in a miniature time projection chamber

    NASA Astrophysics Data System (ADS)

    Vahsen, S. E.; Hedges, M. T.; Jaegle, I.; Ross, S. J.; Seong, I. S.; Thorpe, T. N.; Yamaoka, J.; Kadyk, J. A.; Garcia-Sciveres, M.

    2015-07-01

    The three-dimensional (3-D) detection of millimeter-scale ionization trails is of interest for detecting nuclear recoils in directional fast neutron detectors and in direction-sensitive searches for weakly interacting massive particles (WIMPs), which may constitute the Dark Matter of the universe. We report on performance characterization of a miniature gas target Time Projection Chamber (TPC) where the drift charge is avalanche-multiplied with Gas Electron Multipliers (GEMs) and detected with the ATLAS FE-I3 Pixel Application Specific Integrated Circuit (ASIC). We report on measurements of gain, gain resolution, point resolution, diffusion, angular resolution, and energy resolution with low-energy X-rays, cosmic rays, and alpha particles, using the gases Ar:CO2 (70:30) and He:CO2 (70:30) at atmospheric pressure. We discuss the implications for future, larger directional neutron and Dark Matter detectors. With an eye to designing and selecting components for these, we generalize our results into analytical expressions for detector performance whenever possible. We conclude by demonstrating the 3-D directional detection of a fast neutron source.

  9. THE THOMSON SURFACE. III. TRACKING FEATURES IN 3D

    SciTech Connect

    Howard, T. A.; DeForest, C. E.; Tappin, S. J.; Odstrcil, D.

    2013-03-01

    In this, the final installment in a three-part series on the Thomson surface, we present simulated observations of coronal mass ejections (CMEs) observed by a hypothetical polarizing white light heliospheric imager. Thomson scattering yields a polarization signal that can be exploited to locate observed features in three dimensions relative to the Thomson surface. We consider how the appearance of the CME changes with the direction of trajectory, using simulations of a simple geometrical shape and also of a more realistic CME generated using the ENLIL model. We compare the appearance in both unpolarized B and polarized pB light, and show that there is a quantifiable difference in the measured brightness of a CME between unpolarized and polarized observations. We demonstrate a technique for using this difference to extract the three-dimensional (3D) trajectory of large objects such as CMEs. We conclude with a discussion on how a polarizing heliospheric imager could be used to extract 3D trajectory information about CMEs or other observed features.

  10. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

We present a flash trajectory imaging technique which can directly obtain target trajectories and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and enables silhouette detection, which can directly extract targets from complex backgrounds and reduce the complexity of moving-target image processing. Time delay integration increases the information in a single image frame so that the moving trajectory can be obtained directly. In this paper, we study the algorithm behind flash trajectory imaging and report initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can provide the motion parameters of moving targets.
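The time-delay-integration idea — accumulating range-gated silhouette frames so the whole trajectory appears in a single image — can be sketched as follows (toy binary frames; the actual system gates and integrates in hardware):

```python
# Time-delay-integration sketch: range-gated silhouette frames of a
# moving target are accumulated into one image, so the trajectory is
# visible in a single frame.

def integrate_frames(frames):
    """Pixel-wise accumulation (max) of a list of equally sized frames."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for f in frames:
        for y in range(h):
            for x in range(w):
                out[y][x] = max(out[y][x], f[y][x])
    return out

# A one-pixel target falling diagonally across three 4x4 gated frames:
frames = []
for step in range(3):
    f = [[0] * 4 for _ in range(4)]
    f[step][step] = 1
    frames.append(f)

trajectory = integrate_frames(frames)
print([(y, x) for y in range(4) for x in range(4) if trajectory[y][x]])
```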

  11. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis.

    PubMed

    Pfister, Alexandra; West, Alexandre M; Bronner, Shaw; Noah, Jack Adam

    2014-07-01

    Biomechanical analysis is a powerful tool in the evaluation of movement dysfunction in orthopaedic and neurologic populations. Three-dimensional (3D) motion capture systems are widely used, accurate systems, but are costly and not available in many clinical settings. The Microsoft Kinect™ has the potential to be used as an alternative low-cost motion analysis tool. The purpose of this study was to assess concurrent validity of the Kinect™ with Brekel Kinect software in comparison to Vicon Nexus during sagittal plane gait kinematics. Twenty healthy adults (nine male, 11 female) were tracked while walking and jogging at three velocities on a treadmill. Concurrent hip and knee peak flexion and extension and stride timing measurements were compared between Vicon and Kinect™. Although Kinect measurements were representative of normal gait, the Kinect™ generally under-estimated joint flexion and over-estimated extension. Kinect™ and Vicon hip angular displacement correlation was very low and error was large. Kinect™ knee measurements were somewhat better than hip, but were not consistent enough for clinical assessment. Correlation between Kinect™ and Vicon stride timing was high and error was fairly small. Variability in Kinect™ measurements was smallest at the slowest velocity. The Kinect™ has basic motion capture capabilities and with some minor adjustments will be an acceptable tool to measure stride timing, but sophisticated advances in software and hardware are necessary to improve Kinect™ sensitivity before it can be implemented for clinical use.
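Concurrent-validity comparisons of this kind typically reduce to correlation and error statistics between paired time series; a minimal sketch with hypothetical joint-angle numbers (not data from the study):

```python
import math

# Concurrent-validity sketch: Pearson correlation and RMS error between
# two joint-angle time series (e.g. Kinect vs. Vicon). The numbers below
# are illustrative, not measurements from the study.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

vicon  = [10.0, 25.0, 40.0, 30.0, 15.0]   # hypothetical knee flexion (deg)
kinect = [12.0, 22.0, 36.0, 33.0, 14.0]
print(round(pearson(vicon, kinect), 3), round(rmse(vicon, kinect), 2))
```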

  12. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Fenster, Aaron; Bax, Jeffrey; Gardi, Lori; Romagnoli, Cesare; Samarabandu, Jagath; Ward, Aaron D.

    2012-02-01

    Prostate biopsy is the clinical standard for prostate cancer diagnosis. To improve the accuracy of targeting suspicious locations, systems have been developed that can plan and record biopsy locations in a 3D TRUS image acquired at the beginning of the procedure. Some systems are designed for maximum compatibility with existing ultrasound equipment and are thus designed around the use of a conventional 2D TRUS probe, using controlled axial rotation of this probe to acquire a 3D TRUS reference image at the start of the biopsy procedure. Prostate motion during the biopsy procedure causes misalignments between the prostate in the live 2D TRUS images and the pre-acquired 3D TRUS image. We present an image-based rigid registration technique that aligns live 2D TRUS images, acquired immediately prior to biopsy needle insertion, with the pre-acquired 3D TRUS image to compensate for this motion. Our method was validated using 33 manually identified intrinsic fiducials in eight subjects and the target registration error was found to be 1.89 mm. We analysed the suitability of two image similarity metrics (normalized cross correlation and mutual information) for this task by plotting these metrics as a function of varying parameters in the six degree-of-freedom transformation space, with the ground truth plane obtained from registration as the starting point for the parameter exploration. We observed a generally convex behaviour of the similarity metrics. This encourages their use for this registration problem, and could assist in the design of a tool for the detection of misalignment, which could trigger the execution of a non-real-time registration, when needed during the procedure.
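The underlying idea of intensity-based rigid registration — searching over transformation parameters for the best image-similarity score — can be sketched in one dimension (toy profiles and integer shifts; the paper searches a six-degree-of-freedom rigid transform using NCC and mutual information):

```python
import math

# Intensity-based registration sketch: exhaustive search over integer
# translations maximizing normalized cross-correlation (NCC) between a
# "live" profile and a reference. Toy 1D version of the 6-DOF problem.

def ncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

reference = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0]   # fixed baseline profile
live      = [0, 0, 0, 0, 1, 3, 7, 3, 1, 0]   # same profile, shifted by +2

def register(live, ref, max_shift=4):
    """Return the circular shift of `live` best matching `ref`."""
    n = len(live)
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: ncc([live[(i + s) % n] for i in range(n)], ref))

print(register(live, reference))   # → 2
```

The generally convex shape of NCC around the optimum, noted in the abstract, is what makes such local searches practical.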

  13. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

Motion tracking has become an essential part of entertainment, medical, sports, education, and industrial applications with the development of 3D virtual reality. Virtual human characters in digital animation and game applications have been controlled through interface devices such as mice, joysticks, and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end commercial human motion capture systems are expensive and complicated. In this paper, we propose a practical and fast motion-capture system consisting of optical sensors, and link the captured data to a 3D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  14. Simple 3-D stimulus for motion parallax and its simulation.

    PubMed

    Ono, Hiroshi; Chornenkyy, Yevgen; D'Amour, Sarah

    2013-01-01

    Simulation of a given stimulus situation should produce the same perception as the original. Rogers et al (2009 Perception 38 907-911) simulated Wheeler's (1982, PhD thesis, Rutgers University, NJ) motion parallax stimulus and obtained quite different perceptions. Wheeler's observers were unable to reliably report the correct direction of depth, whereas Rogers's were. With three experiments we explored the possible reasons for the discrepancy. Our results suggest that Rogers was able to see depth from the simulation partly due to his experience seeing depth with random dot surfaces. PMID:23964382

  15. Track of Right-Wheel Drag (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    This 360-degree stereo panorama combines several frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the rover's 313th martian day (Nov. 19, 2004). The site, labeled Spirit site 93, is in the 'Columbia Hills' inside Gusev Crater. The rover tracks point westward. Spirit had driven eastward, in reverse and dragging its right front wheel, for about 30 meters (100 feet) on the day the picture was taken. Driving backwards while dragging that wheel is a precautionary strategy to extend the usefulness of the wheel for when it is most needed, because it has developed more friction than the other wheels. The right-hand track in this look backwards shows how the dragging disturbed the soil. This view is presented in a cylindrical-perspective projection with geometric seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  16. Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.
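Target registration error of the kind reported here is simply the mean 3D distance between homologous fiducial pairs after the recovered transform has been applied; a minimal sketch with made-up coordinates:

```python
import math

# Target registration error (TRE) sketch: mean 3D distance between
# homologous fiducial pairs after registration. Coordinates below are
# illustrative; the study used micro-calcifications as fiducials.

def tre(fixed, registered):
    dists = [math.dist(p, q) for p, q in zip(fixed, registered)]
    return sum(dists) / len(dists)

fixed      = [(10.0, 20.0, 30.0), (15.0, 22.0, 28.0)]   # baseline 3D TRUS
registered = [(10.5, 20.0, 30.0), (15.0, 21.0, 28.0)]   # after registration
print(round(tre(fixed, registered), 3))
```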

  17. Ultra-Wideband Time-Difference-of-Arrival High Resolution 3D Proximity Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dekome, Kent; Dusl, John

    2010-01-01

This paper describes a research and development effort for a prototype ultra-wideband (UWB) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being studied for use in tracking of lunar/Mars rovers and astronauts during early exploration missions when satellite navigation systems are not available. UWB impulse radio (UWB-IR) technology is exploited in the design and implementation of the prototype location and tracking system. A three-dimensional (3D) proximity tracking prototype design using commercially available UWB products is proposed to implement the Time-Difference-Of-Arrival (TDOA) tracking methodology in this research effort. The TDOA tracking algorithm is utilized for location estimation in the prototype system, not only to exploit the precise time resolution possible with UWB signals, but also to eliminate the need for synchronization between the transmitter and the receiver. Simulations show that the TDOA algorithm can achieve fine tracking resolution with low-noise TDOA estimates for close-in tracking. Field tests demonstrated that this prototype UWB TDOA High Resolution 3D Proximity Tracking System is feasible for providing positioning-awareness information in a 3D space to a robotic control system. This 3D tracking system was developed for a robotic control system in a facility called "Moonyard" at Honeywell Defense & System in Arizona under a Space Act Agreement.
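The TDOA principle can be sketched with a toy 2D grid search (real UWB systems work in 3D and solve the hyperbolic equations in closed form or iteratively; the anchors, units, and grid here are illustrative):

```python
import math

# Time-difference-of-arrival (TDOA) sketch: locate a 2D emitter by grid
# search over candidate positions, minimizing the mismatch with measured
# TDOAs relative to a reference anchor.

C = 0.299792458                    # propagation speed, m/ns

def tdoas(p, anchors):
    """TDOAs (ns) of anchors 1..n relative to anchor 0 for emitter p."""
    d = [math.dist(p, a) for a in anchors]
    return [(di - d[0]) / C for di in d[1:]]

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
measured = tdoas((3.0, 7.0), anchors)   # simulated "measurement"

best = min(((x * 0.5, y * 0.5) for x in range(21) for y in range(21)),
           key=lambda p: sum((m - t) ** 2
                             for m, t in zip(measured, tdoas(p, anchors))))
print(best)
```

Note that, as in the paper, only arrival-time differences are used, so the emitter needs no clock synchronization with the receivers.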

  18. Nonrigid Autofocus Motion Correction for Coronary MR Angiography with a 3D Cones Trajectory

    PubMed Central

    Ingle, R. Reeve; Wu, Holden H.; Addy, Nii Okai; Cheng, Joseph Y.; Yang, Phillip C.; Hu, Bob S.; Nishimura, Dwight G.

    2014-01-01

    Purpose: To implement a nonrigid autofocus motion correction technique to improve respiratory motion correction of free-breathing whole-heart coronary magnetic resonance angiography (CMRA) acquisitions using an image-navigated 3D cones sequence. Methods: 2D image navigators acquired every heartbeat are used to measure superior-inferior, anterior-posterior, and right-left translation of the heart during a free-breathing CMRA scan using a 3D cones readout trajectory. Various tidal respiratory motion patterns are modeled by independently scaling the three measured displacement trajectories. These scaled motion trajectories are used for 3D translational compensation of the acquired data, and a bank of motion-compensated images is reconstructed. From this bank, a gradient entropy focusing metric is used to generate a nonrigid motion-corrected image on a pixel-by-pixel basis. The performance of the autofocus motion correction technique is compared with rigid-body translational correction and no correction in phantom, volunteer, and patient studies. Results: Nonrigid autofocus motion correction yields improved image quality compared to rigid-body-corrected images and uncorrected images. Quantitative vessel sharpness measurements indicate superiority of the proposed technique in 14 out of 15 coronary segments from three patient and two volunteer studies. Conclusion: The proposed technique corrects nonrigid motion artifacts in free-breathing 3D cones acquisitions, improving image quality compared to rigid-body motion correction. PMID:24006292
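The gradient-entropy focusing metric can be illustrated in one dimension: a motion-blurred feature spreads its gradient energy over many small steps and so scores higher entropy than a sharp one (toy signals; the paper applies the metric per pixel region across a bank of 3D reconstructions):

```python
import math

# Gradient-entropy focusing metric sketch: -sum(h * log h) over the
# normalized gradient magnitudes of a signal. Autofocus picks, from a
# bank of candidate reconstructions, the one minimizing this metric.

def gradient_entropy(signal):
    grads = [abs(b - a) for a, b in zip(signal, signal[1:])]
    total = sum(grads)
    if total == 0:
        return 0.0
    h = [g / total for g in grads]
    return -sum(p * math.log(p) for p in h if p > 0)

sharp   = [0, 0, 0, 10, 10, 10, 0, 0]   # one crisp edge up, one down
blurred = [0, 2, 5, 8, 8, 5, 2, 0]      # same feature, motion-smeared
print(gradient_entropy(sharp) < gradient_entropy(blurred))
```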

  19. Motion-Corrected 3D Sonic Anemometer for Tethersondes and Other Moving Platforms

    NASA Technical Reports Server (NTRS)

    Bognar, John

    2012-01-01

To date, it has not been possible to apply 3D sonic anemometers on tethersondes or similar atmospheric research platforms due to the motion of the supporting platform. A tethersonde module including both a 3D sonic anemometer and associated motion-correction sensors has been developed, enabling motion-corrected 3D winds to be measured from a moving platform such as a tethersonde. Blimps and other similar lifting systems are used to support tethersondes, meteorological devices that fly on the tether of a blimp or similar platform. To date, tethersondes have been limited to making basic meteorological measurements (pressure, temperature, humidity, and wind speed and direction). The motion of the tethersonde has precluded the addition of 3D sonic anemometers, which can be used for high-speed flux measurements, thereby limiting what has been achieved to date with tethersondes. The tethersonde modules fly on a tether that can be constantly moving and swaying. This would introduce enormous error into the output of an uncorrected 3D sonic anemometer. The motion correction that is required must be implemented in a low-weight, low-cost manner to be suitable for this application. Until now, flux measurements using 3D sonic anemometers could only be made if the anemometer was located on a rigid, fixed platform such as a tower. This limited the areas in which they could be set up and used. The purpose of the innovation was to enable precise 3D wind and flux measurements to be made using tethersondes. In brief, a 3D accelerometer and a 3D gyroscope were added to a tethersonde module along with a 3D sonic anemometer. This combination allowed the necessary package motions to be measured, which were then mathematically combined with the measured winds to yield motion-corrected 3D winds. At the time of this reporting, no tethersonde has been able to make any wind measurement other than a basic wind speed and direction measurement. The addition of a 3D sonic
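The correction amounts to rotating the body-frame wind into the earth frame using the gyro-derived attitude and accounting for the platform's own velocity; a 2D sketch (axes, signs, and values are illustrative, not the module's actual algorithm):

```python
import math

# Motion-correction sketch (2D): rotate the anemometer's body-frame wind
# into the earth frame using the gyro-derived heading, then add back the
# platform velocity that contaminated the relative-wind measurement.

def correct_wind(wind_body, heading_rad, platform_vel):
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    we = (c * wind_body[0] - s * wind_body[1],    # body -> earth rotation
          s * wind_body[0] + c * wind_body[1])
    return (we[0] + platform_vel[0], we[1] + platform_vel[1])

# Platform yawed 90 degrees and drifting at (1, 0) m/s; the sonic reads
# (3, 1) m/s in its own frame; the true earth-frame wind is (0, 3) m/s.
u, v = correct_wind((3.0, 1.0), math.pi / 2, (1.0, 0.0))
print(round(u, 6), round(v, 6))
```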

  20. Recording High Resolution 3D Lagrangian Motions In Marine Dinoflagellates using Digital Holographic Microscopic Cinematography

    NASA Astrophysics Data System (ADS)

    Sheng, J.; Malkiel, E.; Katz, J.; Place, A. R.; Belas, R.

    2006-11-01

Detailed data on swimming behavior and locomotion for dense populations of dinoflagellates constitute a key component to understanding cell migration, cell-cell interactions and predator-prey dynamics, all of which affect algae bloom dynamics. Due to the multi-dimensional nature of flagellated cell motions, spatial-temporal Lagrangian measurements of multiple cells at high concentration are very limited. Here we present detailed data on 3D Lagrangian motions for three marine dinoflagellates: Oxyrrhis marina, Karlodinium veneficum, and Pfiesteria piscicida, using digital holographic microscopic cinematography. The measurements are performed in a 5x5x25 mm cuvette with cell densities varying from 50,000 ˜ 90,000 cells/ml. Approximately 200-500 cells are tracked simultaneously for 12 s at 60 fps in a sample volume of 1x1x5 mm at a spatial resolution of 0.4x0.4x2 μm. We fully resolve the longitudinal flagella (˜200 nm) along with the Lagrangian trajectory of each organism. Species-dependent swimming behaviors are identified and categorized quantitatively by velocities, radii of curvature, and rotations of pitch. Statistics on locomotion, temporal & spatial scales, and diffusion rate show substantial differences between species. The scaling between turning radius and cell dimension can be explained by a distributed stokeslet model for a self-propelled body.

  1. LayTracks3D: A new approach for meshing general solids using medial axis transform

    DOE PAGES

    Quadros, William Roshan

    2015-08-22

This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  2. Blind watermark algorithm on 3D motion model based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Qi, Hu; Zhai, Lang

    2013-12-01

With the continuous development of 3D vision technology, digital watermarking, the best choice for copyright protection, has gradually fused with it. This paper proposes a blind watermarking scheme for 3D motion models based on the wavelet transform, loaded into the Vega real-time visual simulation system. First, the 3D model is put through an affine transform, and the distance from the center of gravity to each vertex of the 3D object is taken to generate a one-dimensional discrete signal; a wavelet transform is then applied to this signal and its frequency coefficients are modified to embed the watermark; finally, the watermarked 3D motion model is generated. In fixed affine space, the scheme is robust to translation, rotation, and scaling transforms. The results show that this approach performs well not only in robustness but also in watermark invisibility.
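The embedding pipeline described above can be sketched with a one-level Haar transform (illustrative only; the paper uses affine normalization and its own wavelet decomposition and embedding rule):

```python
import math

# Watermark-embedding sketch: vertex distances from the centroid form a
# 1D signal, a one-level Haar transform exposes coefficients, a small
# perturbation of a detail coefficient carries the watermark bit, and the
# inverse transform yields the watermarked distances.

def haar(signal):
    avg = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    det = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avg, det

def ihaar(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

vertices = [(1.0, 0.2), (0.1, 1.1), (-0.9, 0.0), (0.0, -1.2)]
cx = sum(v[0] for v in vertices) / len(vertices)
cy = sum(v[1] for v in vertices) / len(vertices)
dists = [math.hypot(x - cx, y - cy) for x, y in vertices]

avg, det = haar(dists)
det[0] += 0.01                     # embed one watermark bit
marked = ihaar(avg, det)
print(all(abs(m - d) < 0.02 for m, d in zip(marked, dists)))
```

The perturbation stays small in the reconstructed distances, which is the invisibility property the abstract refers to.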

  3. Model-based risk assessment for motion effects in 3D radiotherapy of lung tumors

    NASA Astrophysics Data System (ADS)

    Werner, René; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Handels, Heinz

    2012-02-01

Although 4D CT imaging is becoming available in an increasing number of radiotherapy facilities, 3D imaging and planning is still standard in current clinical practice. In particular for lung tumors, respiratory motion is a known source of uncertainty and should be accounted for during radiotherapy planning - which is difficult using only a 3D planning CT. In this contribution, we propose applying a statistical lung motion model to predict patients' motion patterns and to estimate dosimetric motion effects in lung tumor radiotherapy when only 3D images are available. Being generated from 4D CT images of patients with unimpaired lung motion, the model tends to overestimate lung tumor motion. It therefore promises conservative risk assessment regarding tumor dose coverage. This is exemplarily evaluated using treatment plans of lung tumor patients with different tumor motion patterns and for two treatment modalities (conventional 3D conformal radiotherapy and step-&-shoot intensity modulated radiotherapy). For the test cases, 4D CT images are available. Thus, a standard registration-based 4D dose calculation is also performed, which serves as a reference to judge the plausibility of the model-based 4D dose calculation. It will be shown that, if combined with an additional simple patient-specific breathing surrogate measurement (here: spirometry), the model-based dose calculation provides reasonable risk assessment of respiratory motion effects.

  4. Structural response to 3D simulated earthquake motions in San Bernardino Valley

    USGS Publications Warehouse

    Safak, E.; Frankel, A.

    1994-01-01

Structural response to one- and three-dimensional (3D) simulated motions in San Bernardino Valley from a hypothetical earthquake along the San Andreas fault with moment magnitude 6.5 and a rupture length of 30 km is investigated. The results show that the ground motions and the structural response vary dramatically with the type of simulation and the location. -from Authors

  5. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models.

    PubMed

    Dhou, S; Hurwitz, M; Mishra, P; Cai, W; Rottmann, J; Li, R; Williams, C; Wagar, M; Berbeco, R; Ionascu, D; Lewis, J H

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.
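The model-based estimation step — choosing motion-model weights so a simulated projection matches the measured 2D kV projection — can be sketched in miniature (scalar "projections" of 1D "volumes"; the actual models are built from 4DCBCT deformation fields):

```python
# Motion-model sketch: anatomy is modeled as mean + w * mode, and the
# weight w is chosen so the model's simulated projection best matches
# the measured 2D projection captured during treatment.

mean = [5.0, 3.0, 1.0]             # mean "volume"
mode = [1.0, -1.0, 0.5]            # one principal motion component

def project(vol):
    return sum(vol)                # toy line-integral projection

# Simulate a measured projection from ground-truth weight w = 2.0:
measured = project([v + 2.0 * m for v, m in zip(mean, mode)])

best_w = min((w * 0.1 for w in range(-50, 51)),
             key=lambda w: abs(project([v + w * m
                                        for v, m in zip(mean, mode)])
                               - measured))
print(round(best_w, 1))
```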

  6. Extraction and tracking of MRI tagging sheets using a 3D Gabor filter bank.

    PubMed

    Qian, Zhen; Metaxas, Dimitris N; Axel, Leon

    2006-01-01

    In this paper, we present a novel method for automatically extracting the tagging sheets in tagged cardiac MR images, and tracking their displacement during the heart cycle, using a tunable 3D Gabor filter bank. Tagged MRI is a non-invasive technique for the study of myocardial deformation. We design the 3D Gabor filter bank based on the geometric characteristics of the tagging sheets. The tunable parameters of the Gabor filter bank are used to adapt to the myocardium deformation. The whole 3D image dataset is convolved with each Gabor filter in the filter bank, in the Fourier domain. Then we impose a set of deformable meshes onto the extracted tagging sheets and track them over time. Dynamic estimation of the filter parameters and the mesh internal smoothness are used to help the tracking. Some very encouraging results are shown.
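A Gabor filter is a sinusoid windowed by a Gaussian; a 1D kernel sketch (the paper builds a bank of tunable 3D kernels matched to tag-sheet spacing and orientation, applied in the Fourier domain):

```python
import math

# 1D Gabor kernel sketch: cosine carrier of the given wavelength,
# windowed by a Gaussian of the given sigma. Only the kernel
# construction is shown, not the 3D filter bank or the convolution.

def gabor_kernel(size, wavelength, sigma):
    half = size // 2
    return [math.exp(-(x * x) / (2 * sigma * sigma)) *
            math.cos(2 * math.pi * x / wavelength)
            for x in range(-half, half + 1)]

k = gabor_kernel(9, wavelength=4.0, sigma=2.0)
print(len(k), round(k[4], 3))     # centre tap: exp(0) * cos(0) = 1.0
```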

  7. The effect of motion on IMRT - looking at interplay with 3D measurements

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Yan, H.; Oldham, M.; Juang, T.; Adamovics, J.; Yin, F. F.

    2013-06-01

Clinical recommendations to address tumor motion management have been derived from studies dealing with simulations and 2D measurements. 3D measurements may provide more insight and possibly alter the current motion management guidelines. This study provides an initial look at true 3D measurements involving leaf-motion deliveries by use of a motion phantom and the PRESAGE/DLOS dosimetry system. An IMRT and a VMAT plan were delivered to the phantom and analyzed by means of DVHs to determine whether the expansion of treatment volumes based on known imaging motion adequately covers the target. DVHs confirmed that for these deliveries the expansion volumes were adequate to treat the intended target, although further studies should be conducted to allow for differences in parameters that could alter the results, such as delivery dose and breathing rate.

  8. Confocal fluorometer for diffusion tracking in 3D engineered tissue constructs

    NASA Astrophysics Data System (ADS)

    Daly, D.; Zilioli, A.; Tan, N.; Buttenschoen, K.; Chikkanna, B.; Reynolds, J.; Marsden, B.; Hughes, C.

    2016-03-01

We present results of the development of a non-contacting instrument, called fScan, based on scanning confocal fluorometry for assessing the diffusion of materials through a tissue matrix. There are many areas in healthcare diagnostics and screening where it is now widely accepted that the need for new quantitative monitoring technologies is a major pinch point in patient diagnostics and in vitro testing. With the increasing need to interpret 3D responses, this commonly involves tracking the diffusion of compounds, pharma-active species, and cells through a 3D matrix of tissue. Methods are available but, to support the advances that are currently only promised, this monitoring needs to be real-time, non-invasive, and economical. At the moment, commercial meters tend to be invasive and usually require a sample of the medium to be removed and processed prior to testing. This methodology clearly has a number of significant disadvantages. fScan combines a fiber-based optical arrangement with a compact, free-space optical front end that has been integrated so that the sample's diffusion can be measured without interference. This architecture is particularly important due to the "wet" nature of the samples. fScan is designed to measure constructs located within standard well plates, and a 2D motion stage locates the required sample with respect to the measurement system. Results are presented that show how the meter has been used to evaluate movements of samples through collagen constructs in situ without disturbing their kinetic characteristics. These kinetics were little understood prior to these measurements.

  9. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as the remote controllers with motion-sensing technology on the Nintendo Wii [1]. In particular, video-based human computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from cumbersome game controllers. Moreover, video-based HCI is crucial for communication between humans and computers, since it is intuitive, easy to use, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the achievable accuracy is highly dependent on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports games, 'Angelina Jolie' in the movie Beowulf) and for analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a sub-part of the human body and each row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike low-level feature values of video human motion, the entries of a 3D human motion-capture data matrix are not pixel values, but are closer to the human level of semantics.

  10. High-throughput 3D tracking of bacteria on a standard phase contrast microscope

    PubMed Central

    Taute, K.M.; Gude, S.; Tans, S.J.; Shimizu, T.S.

    2015-01-01

    Bacteria employ diverse motility patterns in traversing complex three-dimensional (3D) natural habitats. 2D microscopy misses crucial features of 3D behaviour, but the applicability of existing 3D tracking techniques is constrained by their performance or ease of use. Here we present a simple, broadly applicable, high-throughput 3D bacterial tracking method for use in standard phase contrast microscopy. Bacteria are localized at micron-scale resolution over a range of 350 × 300 × 200 μm by maximizing image cross-correlations between their observed diffraction patterns and a reference library. We demonstrate the applicability of our technique to a range of bacterial species and exploit its high throughput to expose hidden contributions of bacterial individuality to population-level variability in motile behaviour. The simplicity of this powerful new tool for bacterial motility research renders 3D tracking accessible to a wider community and paves the way for investigations of bacterial motility in complex 3D environments. PMID:26522289
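
    The library-matching step described above can be sketched in a few lines: the observed diffraction pattern is scored against reference images recorded at known depths, and the depth whose reference gives the best normalized cross-correlation wins. This is a toy numpy illustration of the idea, not the authors' implementation; the Gaussian pattern model, image sizes and depths are invented.

```python
import numpy as np

def localize_z(observed, library, z_values):
    """Estimate a cell's z-position by matching its observed diffraction
    pattern against a library of reference patterns at known depths,
    using normalized cross-correlation as the similarity score."""
    obs = (observed - observed.mean()) / (observed.std() + 1e-12)
    scores = []
    for ref in library:
        r = (ref - ref.mean()) / (ref.std() + 1e-12)
        scores.append(np.mean(obs * r))  # normalized cross-correlation
    return z_values[int(np.argmax(scores))]

# Toy "diffraction patterns": Gaussian blobs whose width encodes depth
def pattern(width, shape=(32, 32)):
    y, x = np.indices(shape)
    r2 = (x - 16) ** 2 + (y - 16) ** 2
    return np.exp(-r2 / (2 * width ** 2))

z_refs = [1.0, 2.0, 3.0, 4.0]
lib = [pattern(2 + z) for z in z_refs]
obs = pattern(2 + 3.0) + 0.01 * np.random.default_rng(0).normal(size=(32, 32))
print(localize_z(obs, lib, z_refs))  # → 3.0
```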

  11. Improving segmentation of 3D touching cell nuclei using flow tracking on surface meshes.

    PubMed

    Li, Gang; Guo, Lei

    2012-01-01

    Automatic segmentation of touching cell nuclei in 3D microscopy images is of great importance in bioimage informatics and computational biology. This paper presents a novel method for improving 3D touching cell nuclei segmentation. Given binary touching nuclei by the method in Li et al. (2007), our method herein consists of several steps: surface mesh reconstruction and curvature information estimation; direction field diffusion on surface meshes; flow tracking on surface meshes; and projection of surface mesh segmentation to volumetric images. The method is validated on both synthesised and real 3D touching cell nuclei images, demonstrating its validity and effectiveness.

  12. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to control intuitively the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.

  13. Vision-Based Long-Range 3D Tracking, applied to Underground Surveying Tasks

    NASA Astrophysics Data System (ADS)

    Mossel, Annette; Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes; Chmelina, Klaus

    2014-04-01

    To address the need for highly automated positioning systems in underground construction, we present a long-range 3D tracking system based on infrared optical markers. It provides continuous 3D position estimation of static or kinematic targets with low latency over a tracking volume of 12 m x 8 m x 70 m (width x height x depth). Over the entire volume, relative 3D point accuracy with a maximal deviation ≤ 22 mm is ensured for target rotations of yaw and pitch from 0-45° and roll from 0-360°. No preliminary sighting of targets is necessary: the system automatically locks onto a target without user intervention and autonomously starts tracking as soon as a target enters its view. The proposed system needs a minimal hardware setup, consisting of two machine vision cameras and a standard workstation for data processing. This allows for quick installation with minimal disturbance of construction work. The data processing pipeline ensures camera calibration and tracking during ongoing underground activities. Tests in real underground scenarios prove the system's capability to act as a 3D position measurement platform for multiple underground tasks that require long range, low latency and high accuracy, including the simultaneous tracking of personnel, machines or robots.
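
    The core geometric operation behind such a two-camera marker tracker is triangulation: given calibrated projection matrices and the marker's pixel coordinates in both views, its 3D position follows from a linear least-squares (DLT) solve. The sketch below uses an invented camera rig (the intrinsics and 2 m baseline are illustrative, not the paper's calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel coordinates of the marker in each view"""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy rig: two cameras 2 m apart, both looking down the tunnel (+z)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])
P2 = K @ np.hstack([np.eye(3), [[-2.0], [0.0], [0.0]]])  # 2 m baseline

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([1.0, 0.5, 40.0])     # a marker 40 m down the tunnel
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_est, 6))  # recovers [1.0, 0.5, 40.0]
```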

  14. 3D target tracking in infrared imagery by SIFT-based distance histograms

    NASA Astrophysics Data System (ADS)

    Yan, Ruicheng; Cao, Zhiguo

    2011-11-01

    The SIFT tracking algorithm is an excellent point-based tracker, offering high performance and accuracy owing to its robustness against rotation, scale change and occlusion. However, when tracking a large 3D target in complicated real scenarios in a forward-looking infrared (FLIR) image sequence taken from an airborne moving platform, a tracked point located on a vertical surface usually drifts away from the correct position. In this paper, we propose a novel algorithm for 3D target tracking in FLIR image sequences. Our approach uses SIFT keypoints detected in consecutive frames for point correspondence. The candidate position of the tracked point is first estimated by computing the affine transformation from local corresponding SIFT keypoints. The correct position is then located via an optimization step: Euclidean distances between a candidate point and the SIFT keypoints nearby are calculated and formed into a SIFT-based distance histogram, which defines a cost of associating each candidate point with the correct tracked point based on the topology of the candidate point and its surrounding SIFT keypoints. Minimization of this cost is formulated as a combinatorial optimization problem. Experiments demonstrate that the proposed algorithm efficiently improves tracking performance and accuracy.
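
    The distance-histogram idea can be illustrated with a toy sketch: each candidate position is described by the histogram of its Euclidean distances to surrounding keypoints, and the association cost is the difference between that histogram and the one computed around the previously tracked point. All coordinates, bin counts and radii below are invented for illustration, not taken from the paper:

```python
import numpy as np

def distance_histogram(point, keypoints, bins=8, rmax=50.0):
    """Normalized histogram of Euclidean distances from `point`
    to the keypoints within radius rmax."""
    d = np.linalg.norm(keypoints - point, axis=1)
    h, _ = np.histogram(d[d < rmax], bins=bins, range=(0, rmax))
    return h / max(h.sum(), 1)

def association_cost(cand, keypoints, ref_hist):
    """L1 distance between a candidate's distance histogram and the
    reference histogram of the tracked point."""
    return np.abs(distance_histogram(cand, keypoints) - ref_hist).sum()

rng = np.random.default_rng(1)
kps = rng.uniform(0, 100, size=(40, 2))      # SIFT keypoints in the frame
true_pt = np.array([50.0, 50.0])
ref = distance_histogram(true_pt, kps)       # topology around the target

# A nearby candidate should cost less than a distant impostor
candidates = [true_pt + [0.5, -0.5], np.array([10.0, 90.0])]
costs = [association_cost(c, kps, ref) for c in candidates]
best = candidates[int(np.argmin(costs))]
```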

  15. 3-D geometry calibration and markerless electromagnetic tracking with a mobile C-arm

    NASA Astrophysics Data System (ADS)

    Cheryauka, Arvi; Barrett, Johnny; Wang, Zhonghua; Litvin, Andrew; Hamadeh, Ali; Beaudet, Daniel

    2007-03-01

    The design of mobile X-ray C-arm equipment with image tomography and surgical guidance capabilities involves the retrieval of repeatable gantry positioning in three-dimensional space. Geometry misrepresentations can cause degradation of the reconstruction results with the appearance of blurred edges, image artifacts, and even false structures. It may also amplify surgical instrument tracking errors leading to improper implant placement. In our prior publications we have proposed a C-arm 3D positioner calibration method comprising separate intrinsic and extrinsic geometry calibration steps. Following this approach, in the present paper, we extend the intrinsic geometry calibration of C-gantry beyond angular positions in the orbital plane into angular positions on a unit sphere of isocentric rotation. Our method makes deployment of markerless interventional tool guidance with use of high-resolution fluoro images and electromagnetic tracking feasible at any angular position of the tube-detector assembly. Variations of the intrinsic parameters associated with C-arm motion are measured off-line as functions of orbital and lateral angles. The proposed calibration procedure provides better accuracy, and prevents unnecessary workflow steps for surgical navigation applications. With a slight modification, the Misalignment phantom, a tool for intrinsic geometry calibration, is also utilized to obtain an accurate 'image-to-sensor' mapping. We show simulation results, image quality and navigation accuracy estimates, and feasibility data acquired with the prototype system. The experimental results show the potential of high-resolution CT imaging (voxel size below 0.5 mm) and confident navigation in an interventional surgery setting with a mobile C-arm.

  16. Note: Time-gated 3D single quantum dot tracking with simultaneous spinning disk imaging

    SciTech Connect

    DeVore, M. S.; Stich, D. G.; Keller, A. M.; Phipps, M. E.; Hollingsworth, J. A.; Goodwin, P. M.; Werner, J. H.; Cleyrat, C.; Lidke, D. S.; Wilson, B. S.

    2015-12-15

    We describe recent upgrades to a 3D tracking microscope to include simultaneous Nipkow spinning disk imaging and time-gated single-particle tracking (SPT). Simultaneous 3D molecular tracking and spinning disk imaging enable the visualization of cellular structures and proteins around a given fluorescently labeled target molecule. The addition of photon time-gating to the SPT hardware improves signal to noise by discriminating against Raman scattering and short-lived fluorescence. In contrast to camera-based SPT, single-photon arrival times are recorded, enabling time-resolved spectroscopy (e.g., measurement of fluorescence lifetimes and photon correlations) to be performed during single molecule/particle tracking experiments.

  17. Note: Time-gated 3D single quantum dot tracking with simultaneous spinning disk imaging.

    PubMed

    DeVore, M S; Stich, D G; Keller, A M; Cleyrat, C; Phipps, M E; Hollingsworth, J A; Lidke, D S; Wilson, B S; Goodwin, P M; Werner, J H

    2015-12-01

    We describe recent upgrades to a 3D tracking microscope to include simultaneous Nipkow spinning disk imaging and time-gated single-particle tracking (SPT). Simultaneous 3D molecular tracking and spinning disk imaging enable the visualization of cellular structures and proteins around a given fluorescently labeled target molecule. The addition of photon time-gating to the SPT hardware improves signal to noise by discriminating against Raman scattering and short-lived fluorescence. In contrast to camera-based SPT, single-photon arrival times are recorded, enabling time-resolved spectroscopy (e.g., measurement of fluorescence lifetimes and photon correlations) to be performed during single molecule/particle tracking experiments.

  18. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas; Zhang, Geoffrey; Huang, Tzung-Chi; Lin, Kang-Ping

    2004-09-01

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel, which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight into the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm, with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension, or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm; this was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with the greatest motion occurring at the gastro-oesophageal junction. Presented at The IASTED Second International Conference on Biomedical Engineering (BioMED 2004), Innsbruck, Austria, 16-18 February 2004.
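
    Once a 3D displacement vector field is available (here invented rather than computed by optical flow), summarizing tumour motion as in the study reduces to masking the field to the tumour voxels and histogramming the voxelwise displacement magnitudes. A minimal numpy sketch with toy dimensions:

```python
import numpy as np

def displacement_stats(dvf, mask):
    """Summarize voxelwise displacement magnitudes inside a tumour mask.
    dvf  : (nz, ny, nx, 3) displacement vectors in mm
    mask : (nz, ny, nx) boolean tumour mask on the expiration image"""
    mags = np.linalg.norm(dvf[mask], axis=-1)
    hist, edges = np.histogram(mags, bins=10)
    return mags.max(), hist, edges

# Toy field: a uniform 12 mm superior shift inside an 8-voxel "tumour"
dvf = np.zeros((10, 10, 10, 3))
mask = np.zeros((10, 10, 10), bool)
mask[4:6, 4:6, 4:6] = True
dvf[mask] = [12.0, 0.0, 0.0]    # z-displacement of 12 mm

max_disp, hist, _ = displacement_stats(dvf, mask)
print(max_disp)  # → 12.0
```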

  19. Design and Performance Evaluation on Ultra-Wideband Time-Of-Arrival 3D Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Dusl, John

    2012-01-01

    A three-dimensional (3D) Ultra-Wideband (UWB) Time-of-Arrival (TOA) tracking system has been studied at NASA Johnson Space Center (JSC) to provide tracking capability inside the International Space Station (ISS) modules for various applications. One such application is to locate and report the places where crew members experienced possibly elevated carbon-dioxide levels and felt unwell. Accurately locating those places in a multipath-intensive environment like the ISS modules requires a robust real-time location system (RTLS) that can provide the required accuracy and update rate. A 3D UWB TOA tracking system with two-way ranging has been proposed and studied. The designed system will be tested in the Wireless Habitat Testbed, which simulates the ISS module environment. In this presentation, we discuss the 3D TOA tracking algorithm and the performance evaluation based on different tracking baseline configurations. The simulation results show that two configurations of the tracking baseline are feasible. With a 100 picosecond standard deviation (STD) of the TOA estimates, an average tracking error of 0.2392 feet (about 7 centimeters) can be achieved for the configuration "Twisted Rectangle", and an average tracking error of 0.9183 feet (about 28 centimeters) for the configuration "Slightly-Twisted Top Rectangle". The tracking accuracy can be further improved by improving the STD of the TOA estimates: with a 10 picosecond STD, an average tracking error of 0.0239 feet (less than 1 centimeter) can be achieved for the configuration "Twisted Rectangle".
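
    The position fix behind a TOA tracking system of this kind is a nonlinear least-squares problem: given ranges (TOA times the speed of light) to four or more receivers, solve for the 3D position, e.g. by Gauss-Newton iteration. The receiver baseline below is invented for illustration, not NASA's actual "Twisted Rectangle" geometry:

```python
import numpy as np

def solve_toa(anchors, ranges, x0, iters=20):
    """Gauss-Newton least-squares 3D position fix from anchor ranges.
    anchors : (N, 3) receiver positions, N >= 4
    ranges  : (N,) measured target-anchor distances (TOA * c)
    x0      : initial position guess (not coincident with an anchor)"""
    x = np.array(x0, float)
    for _ in range(iters):
        diff = x - anchors                 # (N, 3)
        d = np.linalg.norm(diff, axis=1)   # predicted ranges
        J = diff / d[:, None]              # Jacobian of d w.r.t. x
        r = d - ranges                     # residuals
        x -= np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Toy four-receiver baseline (positions in meters)
anchors = np.array([[0, 0, 0], [6, 0, 1], [6, 4, 0], [0, 4, 1]], float)
target = np.array([2.5, 1.5, 0.8])
ranges = np.linalg.norm(anchors - target, axis=1)  # noiseless TOA ranges

x = solve_toa(anchors, ranges, x0=[3, 2, 0.5])
print(np.round(x, 6))  # converges to [2.5, 1.5, 0.8]
```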

  20. Moving Human Path Tracking Based on Video Surveillance in 3D Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose we integrate video surveillance data with a 3D indoor model of the building and develop a method for tracking the moving path of a single person. We process the surveillance videos to detect single human moving traces; we then match the depth information of the 3D scenes to the constructed 3D indoor network model and define the human traces in the 3D indoor space. Finally, the single human traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. Experiments with a single person have verified the effectiveness and robustness of the method.

  1. 3D single molecule tracking in thick cellular specimens using multifocal plane microscopy

    NASA Astrophysics Data System (ADS)

    Ram, Sripad; Ward, E. Sally; Ober, Raimund J.

    2011-03-01

    One of the major challenges in single molecule microscopy concerns 3D tracking of single molecules in cellular specimens. This has been a major impediment to studying many fundamental cellular processes, such as protein transport across thick cellular specimens (e.g. a cell monolayer). Here we show that multifocal plane microscopy (MUM), an imaging modality developed by our group, provides the much needed solution to this longstanding problem. While MUM was previously used for 3D single molecule tracking at shallow depths (~1 micron) in live cells, the question arises whether MUM can also live up to the significant challenge of tracking single molecules in thick samples. Here, by substantially expanding the capabilities of MUM, we demonstrate 3D tracking of quantum-dot labeled molecules in a ~10 micron thick cell monolayer. In this way we have reconstructed the complete 3D intracellular trafficking itinerary of single molecules at high spatial and temporal precision in a thick cell sample. Funding support: NIH and the National MS Society.

  2. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography

    PubMed Central

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J.; French, Paul M. W.; McGinty, James

    2015-01-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound. PMID:25909009

  3. Surveillance, detection, and 3D infrared tracking of bullets, rockets, mortars, and artillery

    NASA Astrophysics Data System (ADS)

    Leslie, Daniel H.; Hyman, Howard; Moore, Fritz; Squire, Mark D.

    2001-09-01

    We describe test results using the FIRST (Fast InfraRed Sniper Tracker) to detect, track, and range to bullets in flight for determining the location of the bullet launch point. The technology developed for the FIRST system can be used to provide detection and accurate 3D track data for other small threat objects including rockets, mortars, and artillery in addition to bullets. We discuss the radiometry and detection range for these objects, and discuss the trade-offs involved in design of the very fast optical system for acquisition, tracking, and ranging of these targets.

  4. Motion corrected LV quantification based on 3D modelling for improved functional assessment in cardiac MRI

    NASA Astrophysics Data System (ADS)

    Liew, Y. M.; McLaughlin, R. A.; Chan, B. T.; Aziz, Y. F. Abdul; Chee, K. H.; Ung, N. M.; Tan, L. K.; Lai, K. W.; Ng, S.; Lim, E.

    2015-04-01

    Cine MRI is a clinical reference standard for the quantitative assessment of cardiac function, but reproducibility is confounded by motion artefacts. We explore the feasibility of a motion-corrected 3D left ventricle (LV) quantification method, incorporating multislice image registration into the 3D model reconstruction, to improve the reproducibility of 3D LV functional quantification. Multi-breath-hold short-axis and radial long-axis images were acquired from 10 patients and 10 healthy subjects. The proposed framework reduced misalignment between slices to subpixel accuracy (from 2.88 to 1.21 mm) and improved interstudy reproducibility for 5 important clinical functional measures (end-diastolic volume, end-systolic volume, ejection fraction, myocardial mass and 3D-sphericity index), as reflected in a 21-66% reduction in the sample size required to detect statistically significant cardiac changes. Our investigation of the optimum registration parameters, including both cardiac time frames and the number of long-axis (LA) slices, suggested that a single time frame is adequate for motion correction, whereas integrating more LA slices can improve registration and model reconstruction accuracy, and thereby functional quantification, especially on datasets with severe motion artefacts.

  5. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.

  7. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy.

    PubMed

    Stemkens, Bjorn; Tijssen, Rob H N; de Senneville, Baudouin Denis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-07-21

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
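
    The subject-specific motion model can be illustrated with a toy numpy sketch: DVFs from the sorted 4D-MRI phases are reduced by PCA, and during "treatment" the principal-component weights are fitted from only the voxels covered by a 2D cine slice, then used to predict the full-volume DVF. The synthetic two-mode motion and all sizes below stand in for real respiratory data; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-beam phase: each row is one respiratory phase's flattened DVF
n_phases, n_dvf = 10, 300
t = np.linspace(0, 2 * np.pi, n_phases, endpoint=False)
mode1, mode2 = rng.normal(size=n_dvf), rng.normal(size=n_dvf)
dvfs = np.outer(np.sin(t), mode1) + 0.3 * np.outer(np.cos(t), mode2)

# Motion model: mean field + first two principal components
mean = dvfs.mean(axis=0)
_, _, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
pcs = Vt[:2]                           # (2, n_dvf)

# During treatment: only the voxels of a 2D slice are observed
slice_idx = np.arange(0, n_dvf, 5)     # voxels covered by the cine slice
observed = dvfs[3][slice_idx]          # "measured" motion on that slice

# Fit PC weights on the slice, then predict the full-volume DVF
w, *_ = np.linalg.lstsq(pcs[:, slice_idx].T,
                        observed - mean[slice_idx], rcond=None)
dvf_full = mean + w @ pcs
err = np.abs(dvf_full - dvfs[3]).max()  # near zero for this rank-2 toy
```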

  8. Incremental learning of 3D-DCT compact representations for robust visual tracking.

    PubMed

    Li, Xi; Dick, Anthony; Shen, Chunhua; van den Hengel, Anton; Wang, Hanzi

    2013-04-01

    Visual tracking usually requires an object appearance model that is robust to changing illumination, pose, and other factors encountered in video. Many recent trackers utilize appearance samples in previous frames to form the bases upon which the object appearance model is built. This approach has the following limitations: 1) The bases are data driven, so they can be easily corrupted, and 2) it is difficult to robustly update the bases in challenging situations. In this paper, we construct an appearance model using the 3D discrete cosine transform (3D-DCT). The 3D-DCT is based on a set of cosine basis functions which are determined by the dimensions of the 3D signal and thus independent of the input video data. In addition, the 3D-DCT can generate a compact energy spectrum whose high-frequency coefficients are sparse if the appearance samples are similar. By discarding these high-frequency coefficients, we simultaneously obtain a compact 3D-DCT-based object representation and a signal reconstruction-based similarity measure (reflecting the information loss from signal reconstruction). To efficiently update the object representation, we propose an incremental 3D-DCT algorithm which decomposes the 3D-DCT into successive operations of the 2D discrete cosine transform (2D-DCT) and 1D discrete cosine transform (1D-DCT) on the input video data. As a result, the incremental 3D-DCT algorithm only needs to compute the 2D-DCT for newly added frames as well as the 1D-DCT along the third dimension, which significantly reduces the computational complexity. Based on this incremental 3D-DCT algorithm, we design a discriminative criterion to evaluate the likelihood of a test sample belonging to the foreground object. We then embed the discriminative criterion into a particle filtering framework for object state inference over time. Experimental results demonstrate the effectiveness and robustness of the proposed tracker.
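
    The incremental decomposition described above rests on the separability of the DCT: a 3D-DCT equals a 2D-DCT applied to every frame followed by a 1D-DCT along the time axis, so appending a frame only costs one new 2D-DCT plus the cheap temporal transform. A scipy sketch with toy cube sizes (the compact-representation cutoffs are illustrative):

```python
import numpy as np
from scipy.fft import dct, dctn

rng = np.random.default_rng(0)
cube = rng.normal(size=(8, 16, 16))   # (frames, height, width)

# Full 3D-DCT in one shot
full = dctn(cube, type=2, norm='ortho')

# Separable computation: 2D-DCT per frame, then 1D-DCT along time
per_frame = dct(dct(cube, axis=2, norm='ortho'), axis=1, norm='ortho')
separable = dct(per_frame, axis=0, norm='ortho')
assert np.allclose(full, separable)

# Incremental update for a new frame: only its 2D-DCT plus the
# 1D-DCT along the time axis must be recomputed.
new_frame = rng.normal(size=(16, 16))
per_frame = np.concatenate(
    [per_frame, dct(dct(new_frame, axis=1, norm='ortho'),
                    axis=0, norm='ortho')[None]], axis=0)
updated = dct(per_frame, axis=0, norm='ortho')

# Compact representation: keep only low-frequency coefficients
compact = updated[:4, :8, :8]
```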

  9. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

    To enable full navigation using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. First, we propose a target detection strategy over a sequence of several images to initialize tracking from the 3D model. The overall purpose of this approach is to robustly match each image with the model views of the target; to this end we designed a line segment detection and matching method based on multi-scale space technology. Experiments on real images showed that our method is highly robust under various image changes. Second, we propose a method based on a 3D particle filter (PF) coupled with M-estimation to track the target and estimate its pose efficiently. In the proposed approach, a similarity observation model is designed using a new distance function between line segments. Then, based on the PF tracking results, the pose is optimized using M-estimation. Experiments indicate that the proposed method can effectively track a freely moving target in an unconstrained environment and accurately estimate its pose.

  10. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.

  11. Eye Tracking to Explore the Impacts of Photorealistic 3D Representations in Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

    Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation: whether it can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload than traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representation can facilitate map reading and navigation in digital environments, using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective and less efficient, and experienced a higher cognitive workload, than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. These empirical results can help improve the usability of pedestrian navigation maps in future designs.

  12. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.

  13. Real-time 3D visualization of volumetric video motion sensor data

    SciTech Connect

    Carlson, J.; Stansfield, S.; Shawver, D.; Flachs, G.M.; Jordan, J.B.; Bao, Z.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  14. PEPT: An invaluable tool for 3-D particle tracking and CFD simulation verification in hydrocyclone studies

    NASA Astrophysics Data System (ADS)

    Chang, Yu-Fen; Adamsen, Tom C. H.; Pisarev, Gleb I.; Hoffmann, Alex C.

    2013-05-01

    Particle tracks in a hydrocyclone generated both experimentally by positron emission particle tracking (PEPT) and numerically with Eulerian-Lagrangian CFD have been studied and compared. A hydrocyclone with a cylinder-on-cone design was used in this study, the geometries used in the CFD simulations and in the experiments being identical. It is shown that it is possible to track a fast-moving particle in a hydrocyclone using PEPT with high temporal and spatial resolutions. The numerical 3-D particle trajectories were generated using the Large Eddy Simulation (LES) turbulence model for the fluid and Lagrangian particle tracking for the particles. The behaviors of the particles were analyzed in detail and were found to be consistent between experiments and CFD simulations. The tracks of the particles are discussed and related to the fluid flow field visualized in the CFD simulations using the cross-sectional static pressure distribution.
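    The Eulerian-Lagrangian particle tracking used on the CFD side of this comparison integrates an equation of motion for each particle through the resolved flow field. A minimal sketch, assuming Stokes drag only and a hypothetical solid-body swirl in place of the LES cyclone field:

```python
import numpy as np

def track_particle(u_field, x0, v0, tau=0.01, dt=1e-4, steps=1000):
    """Sketch: Lagrangian particle tracking with Stokes drag only,
    dv/dt = (u(x) - v) / tau, integrated with forward Euler."""
    x, v = np.array(x0, dtype=float), np.array(v0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        v += (u_field(x) - v) / tau * dt   # drag toward local fluid velocity
        x += v * dt
        path.append(x.copy())
    return np.array(path)

# hypothetical solid-body swirl as a stand-in for the LES cyclone flow
swirl = lambda p: 10.0 * np.array([-p[1], p[0], 0.0])
path = track_particle(swirl, x0=[0.05, 0.0, 0.0], v0=[0.0, 0.5, 0.0])
print(path.shape)   # → (1001, 3)
```

    A production tracker would interpolate the LES velocity field at the particle position and add gravity and lift forces; the structure of the time loop is the same.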

  15. Effects of 3D random correlated velocity perturbations on predicted ground motions

    USGS Publications Warehouse

    Hartzell, S.; Harmsen, S.; Frankel, A.

    2010-01-01

    Three-dimensional, finite-difference simulations of a realistic finite-fault rupture on the southern Hayward fault are used to evaluate the effects of random, correlated velocity perturbations on predicted ground motions. Velocity perturbations are added to a three-dimensional (3D) regional seismic velocity model of the San Francisco Bay Area using a 3D von Karman random medium. Velocity correlation lengths of 5 and 10 km and standard deviations in the velocity of 5% and 10% are considered. The results show that significant deviations in predicted ground velocities are seen in the calculated frequency range (≤1 Hz) for standard deviations in velocity of 5% to 10%. These results have implications for the practical limits on the accuracy of scenario ground-motion calculations and on retrieval of source parameters using higher-frequency, strong-motion data.
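    The 3D von Karman random medium used above can be generated by spectral filtering of white noise. A minimal 2D sketch (parameters hypothetical; the spectrum is written only up to a normalization constant, with the field rescaled afterwards to the target standard deviation):

```python
import numpy as np

def von_karman_field(n=256, dx=0.1, a=5.0, sigma=0.05, hurst=0.5, seed=0):
    """Sketch: 2D von Karman random medium via spectral filtering.
    a: correlation length (same units as dx); sigma: target std (e.g. 5%)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=dx) * 2 * np.pi          # angular wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k2 = KX**2 + KY**2
    # von Karman power spectrum in 2D, up to a normalization constant
    psd = (1.0 + k2 * a**2) ** (-(hurst + 1.0))
    noise = rng.standard_normal((n, n))
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real
    field *= sigma / field.std()                     # rescale to target std
    return field

dv = von_karman_field()
print(round(dv.std(), 3))   # → 0.05
```

    The resulting perturbation field would be added to the background 3D velocity model; the study's cases correspond to a of 5 or 10 km and sigma of 5% or 10%.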

  16. Method for dose-reduced 3D catheter tracking on a scanning-beam digital x-ray system using dynamic electronic collimation

    NASA Astrophysics Data System (ADS)

    Dunkerley, David A. P.; Funk, Tobias; Speidel, Michael A.

    2016-03-01

    Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3D catheter tracking. This work proposes a method of dose-reduced 3D tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. Positions in the 2D focal spot array are selectively activated to create a region-of-interest (ROI) x-ray field around the tracked catheter. The ROI position is updated for each frame based on a motion vector calculated from the two most recent 3D tracking results. The technique was evaluated with SBDX data acquired as a catheter tip inside a chest phantom was pulled along a 3D trajectory. DEC scans were retrospectively generated from the detector images stored for each focal spot position. DEC imaging of a catheter tip in a volume measuring 11.4 cm across at isocenter required 340 active focal spots per frame, versus 4473 spots in full field-of-view (FOV) mode. The dose-area-product (DAP) and peak skin dose (PSD) for DEC versus full-FOV scanning were calculated using an SBDX Monte Carlo simulation code. DAP was reduced to 7.4% to 8.4% of the full-FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full-FOV value. The root-mean-squared deviation between DEC-based and full-FOV 3D tracking coordinates was less than 0.1 mm. The 3D distance between the tracked tip and the sheath centerline averaged 0.75 mm. Dynamic electronic collimation can reduce dose with minimal change in tracking performance.
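    As a quick consistency check of the quoted figures, the dose-area-product reduction is expected to track the fraction of active focal spots:

```python
active_spots_roi = 340    # DEC region-of-interest mode (quoted above)
active_spots_full = 4473  # full field-of-view mode (quoted above)
ratio = active_spots_roi / active_spots_full
print(f"{ratio:.1%}")     # → 7.6%
```

    This matches the 7.6% figure cited in the abstract, bracketing the measured 7.4%-8.4% DAP reduction.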

  17. Alignment of 3D Building Models and TIR Video Sequences with Line Tracking

    NASA Astrophysics Data System (ADS)

    Iwaszczuk, D.; Stilla, U.

    2014-11-01

    Thermal infrared imagery of urban areas has become interesting for urban climate investigations and thermal building inspections. Using a flying platform such as a UAV or a helicopter for the acquisition, and combining the thermal data with 3D building models via texturing, delivers a valuable groundwork for large-area building inspections. However, such thermal textures are useful for further analysis only if they are geometrically correctly extracted. This requires a good coregistration between the 3D building models and the thermal images, which cannot be achieved by direct georeferencing. Hence, this paper presents a methodology for the alignment of 3D building models and oblique TIR image sequences taken from a flying platform. In a single image, line correspondences between model edges and image line segments are found using an accumulator approach, and based on these correspondences an optimal camera pose is calculated to ensure the best match between the projected model and the image structures. Across the sequence, the linear features are tracked based on visibility prediction. The results of the proposed methodology are presented using a TIR image sequence taken from a helicopter over a densely built-up urban area. The novelty of this work lies in employing the uncertainty of the 3D building models and in an innovative tracking strategy based on a priori knowledge from the 3D building model and on visibility checking.

  18. Mobile Biplane X-Ray Imaging System for Measuring 3D Dynamic Joint Motion During Overground Gait.

    PubMed

    Guan, Shanyuanye; Gray, Hans A; Keynejad, Farzad; Pandy, Marcus G

    2016-01-01

    Most X-ray fluoroscopy systems are stationary and impose restrictions on the measurement of dynamic joint motion; for example, knee-joint kinematics during gait is usually measured with the subject ambulating on a treadmill. We developed a computer-controlled, mobile, biplane, X-ray fluoroscopy system to track human body movement for high-speed imaging of 3D joint motion during overground gait. A robotic gantry mechanism translates the two X-ray units alongside the subject, tracking and imaging the joint of interest as the subject moves. The main aim of the present study was to determine the accuracy with which the mobile imaging system measures 3D knee-joint kinematics during walking. In vitro experiments were performed to measure the relative positions of the tibia and femur in an intact human cadaver knee and of the tibial and femoral components of a total knee arthroplasty (TKA) implant during simulated overground gait. Accuracy was determined by calculating mean, standard deviation and root-mean-squared errors from differences between kinematic measurements obtained using volumetric models of the bones and TKA components and reference measurements obtained from metal beads embedded in the bones. Measurement accuracy was enhanced by the ability to track and image the joint concurrently. Maximum root-mean-squared errors were 0.33 mm and 0.65° for translations and rotations of the TKA knee and 0.78 mm and 0.77° for translations and rotations of the intact knee, which are comparable to results reported for treadmill walking using stationary biplane systems. System capability for in vivo joint motion measurement was also demonstrated for overground gait.
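    The accuracy statistics used above (mean, standard deviation and root-mean-squared error of the differences between model-based and bead-based kinematics) are straightforward to compute; a sketch with hypothetical per-frame errors:

```python
import numpy as np

def accuracy_stats(measured, reference):
    """Mean, standard deviation and RMS of measurement differences."""
    d = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    return d.mean(), d.std(ddof=1), np.sqrt((d**2).mean())

# hypothetical per-frame translation errors in mm (not the study's data)
mean_e, sd_e, rms_e = accuracy_stats([0.1, -0.2, 0.3, 0.0], [0, 0, 0, 0])
print(round(rms_e, 3))   # → 0.187
```

    In the study these statistics were evaluated per degree of freedom, with the embedded metal beads providing the reference kinematics.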

  19. Quantitative 3-d diagnostic ultrasound imaging using a modified transducer array and an automated image tracking technique.

    PubMed

    Hossack, John A; Sumanaweera, Thilaka S; Napel, Sandy; Ha, Jun S

    2002-08-01

    An approach for acquiring dimensionally accurate three-dimensional (3-D) ultrasound data from multiple 2-D image planes is presented. This is based on the use of a modified linear-phased array comprising a central imaging array that acquires multiple, essentially parallel, 2-D slices as the transducer is translated over the tissue of interest. Small, perpendicularly oriented, tracking arrays are integrally mounted on each end of the imaging transducer. As the transducer is translated in an elevational direction with respect to the central imaging array, the images obtained by the tracking arrays remain largely coplanar. The motion between successive tracking images is determined using a minimum sum of absolute difference (MSAD) image matching technique with subpixel matching resolution. An initial phantom scanning-based test of a prototype 8 MHz array indicates that linear dimensional accuracy of 4.6% (2 sigma) is achievable. This result compares favorably with those obtained using an assumed average velocity [31.5% (2 sigma) accuracy] and using an approach based on measuring image-to-image decorrelation [8.4% (2 sigma) accuracy]. The prototype array and imaging system were also tested in a clinical environment, and early results suggest that the approach has the potential to enable a low cost, rapid, screening method for detecting carotid artery stenosis. The average time for performing a screening test for carotid stenosis was reduced from an average of 45 minutes using 2-D duplex Doppler to 12 minutes using the new 3-D scanning approach.
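    The minimum sum of absolute difference (MSAD) matching with subpixel resolution can be sketched in 1D as an integer shift search followed by a parabolic fit around the SAD minimum (signal and shift here are hypothetical, not from the paper):

```python
import numpy as np

def msad_shift(ref, cur, max_shift=5):
    """Minimum sum-of-absolute-differences shift with parabolic
    sub-pixel refinement (1D sketch of the matching idea)."""
    shifts = list(range(-max_shift, max_shift + 1))
    sad = [np.abs(np.roll(ref, s) - cur)[max_shift:-max_shift].mean()
           for s in shifts]
    i = int(np.argmin(sad))
    frac = 0.0
    if 0 < i < len(sad) - 1:                  # parabolic interpolation
        l, c, r = sad[i - 1], sad[i], sad[i + 1]
        denom = l - 2 * c + r
        if denom != 0:
            frac = 0.5 * (l - r) / denom
    return shifts[i] + frac

x = np.linspace(-3, 3, 101)
ref = np.exp(-x**2)           # hypothetical tracking-image profile
cur = np.roll(ref, 2)         # true shift: 2 samples
print(round(msad_shift(ref, cur), 2))   # → 2.0
```

    In the device, this matching is performed in 2D on successive tracking-array images, and the accumulated shifts give the elevational position of each imaging slice.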

  20. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a method for measuring the three-dimensional (3D) movement of the forearm and upper arm during the pitching motion of baseball using inertial sensors, without requiring precise sensor placement. Although high-accuracy measurement of sports motion is currently achieved using optical motion capture systems, these have disadvantages such as the need for camera calibration and restrictions on the measurement place. The proposed method for 3D measurement of pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm coincides with that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. Experimental measurements of pitching motion show that the trajectories of the shoulder, elbow and wrist estimated by the proposed method are highly correlated with those from a motion capture system, within an estimation error of about 10%.
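    The endpoint correction described above, forcing the integrated motion to agree with the known final state, can be sketched for a single axis by removing the accumulated drift linearly in time (all signals hypothetical, assuming the limb is at rest after the throw):

```python
import numpy as np

def integrate_with_endpoint_fix(acc, dt, v_final=0.0):
    """Trapezoidal integration of acceleration to velocity, with the
    accumulated drift removed linearly in time so the final velocity
    matches the known value (e.g. the arm at rest after the throw)."""
    v = np.concatenate([[0.0], np.cumsum((acc[1:] + acc[:-1]) / 2.0) * dt])
    drift = v[-1] - v_final
    t = np.linspace(0.0, 1.0, len(v))   # normalized time
    return v - drift * t

acc = np.ones(101)   # hypothetical signal: a constant 1 m/s^2 sensor bias
v = integrate_with_endpoint_fix(acc, dt=0.01, v_final=0.0)
print(round(abs(v[-1]), 6))   # → 0.0
```

    The same idea applies to the second integration (velocity to position) and to the orientation estimate; the paper constrains all three at the end of the motion.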

  1. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns

    NASA Astrophysics Data System (ADS)

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-09-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.
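    The brightness-compensation idea, allowing intensity changes that are not due to motion, can be illustrated by a single Lucas-Kanade-style least-squares step with an extra additive brightness unknown. This is a toy 2D sketch, not the authors' full 3D algorithm:

```python
import numpy as np

def flow_with_brightness(I1, I2):
    """One least-squares step solving I2 - I1 = -Ix*dx - Iy*dy + b,
    where b absorbs uniform brightness change (e.g. photobleaching)."""
    Iy, Ix = np.gradient(I1)
    c = (slice(1, -1), slice(1, -1))   # drop less-accurate boundary pixels
    A = np.column_stack([-Ix[c].ravel(), -Iy[c].ravel(), np.ones(Ix[c].size)])
    rhs = (I2 - I1)[c].ravel()
    (dx, dy, b), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return dx, dy, b

y, x = np.mgrid[0:16, 0:16].astype(float)
I1 = 0.5 * (x**2 + y**2)                         # hypothetical smooth frame
I2 = 0.5 * ((x - 0.3)**2 + (y - 0.2)**2) + 0.1   # shifted plus offset
dx, dy, b = flow_with_brightness(I1, I2)
print(round(dx, 3), round(dy, 3), round(b, 3))   # → 0.3 0.2 0.165
```

    Without the brightness unknown b, the uniform intensity offset would be misattributed to motion; adding it is the essence of the compensation described above.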

  2. 3D imaging of particle-scale rotational motion in cyclically driven granular flows

    NASA Astrophysics Data System (ADS)

    Harrington, Matt; Powers, Dylan; Cooper, Eric; Losert, Wolfgang

    Recent experimental advances have enabled three-dimensional (3D) imaging of motion, structure, and failure within granular systems. 3D imaging allows researchers to directly characterize bulk behaviors that arise from particle- and meso-scale features. For instance, segregation of a bidisperse system of spheres under cyclic shear can originate from microscopic irreversibilities and the development of convective secondary flows. Rotational motion and frictional rotational coupling, meanwhile, have been less explored in such experimental 3D systems, especially under cyclic forcing. In particular, the relative amounts of sliding and/or rolling between pairs of contacting grains could influence the reversibility of grain trajectories, in terms of both position and orientation. In this work, we apply the Refractive Index Matched Scanning technique to a cyclically driven granular system and measure both the translational and rotational motion of individual grains. We relate the measured rotational motion to the resulting shear bands and convective flows, further indicating the degree to which pairs and neighborhoods of grains collectively rotate.

  3. Quantitative underwater 3D motion analysis using submerged video cameras: accuracy analysis and trajectory reconstruction.

    PubMed

    Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L

    2013-01-01

    In this study we investigate the applicability of underwater 3D motion capture based on submerged video cameras, in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment, and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed hand motion trajectories in different swimming styles and qualitatively compared them with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle-fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm; 2D plate: 0.73 mm) was comparable to out-of-water results and far superior to the classical DLT results (9.74 mm). Among all the swimmers, the hand trajectories of the expert swimmer were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in the motion patterns, and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both support quantitative 3D underwater motion analysis.
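    For a single point seen by two calibrated cameras, DLT-style reconstruction reduces to a linear triangulation. A minimal sketch with hypothetical camera matrices (not the calibration procedures compared in the study):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear triangulation: each camera contributes two rows of a
    homogeneous system, solved via SVD (smallest singular vector)."""
    def rows(P, uv):
        u, v = uv
        return [u * P[2] - P[0], v * P[2] - P[1]]
    A = np.array(rows(P1, uv1) + rows(P2, uv2))
    X = np.linalg.svd(A)[2][-1]          # null vector of A
    return X[:3] / X[3]                  # dehomogenize

K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-0.5], [0.0], [0.0]]])  # 2nd camera shifted
Xw = np.array([0.1, -0.2, 2.0, 1.0])                     # hypothetical point
uv1 = (P1 @ Xw)[:2] / (P1 @ Xw)[2]
uv2 = (P2 @ Xw)[:2] / (P2 @ Xw)[2]
print(np.allclose(triangulate(P1, P2, uv1, uv2), Xw[:3]))   # → True
```

    Calibration (DLT, bundle adjustment or Zhang's method) determines the projection matrices P1 and P2; in water, refraction makes that step the dominant source of the accuracy differences reported above.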

  4. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-01-01

    Remote monitoring of Parkinson's Disease (PD) patients with inertial sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar, giving the neurologist the impression of having the patient live in front of them. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recordings of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the video-based ratings and (b) the automatically classified UPDRS is 0.48, and with (c) the 3D avatar it is 0.47. The 3D avatar is thus similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400

  5. Kinetic depth effect and optic flow--I. 3D shape from Fourier motion.

    PubMed

    Dosher, B A; Landy, M S; Sperling, G

    1989-01-01

    Fifty-three different 3D shapes were defined by sequences of 2D views (frames) of dots on a rotating 3D surface. (1) Subjects' accuracy of shape identifications dropped from over 90% to less than 10% when either the polarity of the stimulus dots was alternated from light-on-gray to dark-on-gray on successive frames or when neutral gray interframe intervals were interposed. Both manipulations interfere with motion extraction by spatio-temporal (Fourier) and gradient first-order detectors. Second-order (non-Fourier) detectors that use full-wave rectification are unaffected by alternating-polarity but disrupted by interposed gray frames. (2) To equate the accuracy of two-alternative forced-choice (2AFC) planar direction-of-motion discrimination in standard and polarity-alternated stimuli, standard contrast was reduced. 3D shape discrimination survived contrast reduction in standard stimuli whereas it failed completely with polarity-alternation even at full contrast. (3) When individual dots were permitted to remain in the image sequence for only two frames, performance showed little loss compared to standard displays where individual dots had an expected lifetime of 20 frames, showing that 3D shape identification does not require continuity of stimulus tokens. (4) Performance in all discrimination tasks is predicted (up to a monotone transformation) by considering the quality of first-order information (as given by a simple computation on Fourier power) and the number of locations at which motion information is required. Perceptual first-order analysis of optic flow is the primary substrate for structure-from-motion computations in random dot displays because only it offers sufficient quality of perceptual motion at a sufficient number of locations.

  6. PRIMAS: a real-time 3D motion-analysis system

    NASA Astrophysics Data System (ADS)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.

  7. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction

    PubMed Central

    Bao, Qian; Jiang, Chenglong; Lin, Yun; Tan, Weixian; Wang, Zhirui; Hong, Wen

    2016-01-01

    With a short linear array configured in the cross-track direction, downward looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) can obtain the 3-D image of an imaging scene. To improve the cross-track resolution, sparse recovery methods have been investigated in recent years. In the compressive sensing (CS) framework, the reconstruction performance depends on the property of the measurement matrix. This paper concerns the technique to optimize the measurement matrix and deal with the mismatch problem of the measurement matrix caused by off-grid scatterers. In the model of cross-track reconstruction, the measurement matrix is mainly affected by the configuration of antenna phase centers (APC); thus, two mutual coherence based criteria are proposed to optimize the configuration of APCs. On the other hand, to compensate for the mismatch problem of the measurement matrix, the sparse Bayesian inference based method is introduced into the cross-track reconstruction by jointly estimating the scatterers and the off-grid error. Experiments demonstrate the performance of the proposed APCs' configuration schemes and the proposed cross-track reconstruction method. PMID:27556471
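    The mutual coherence criterion invoked above is a standard measure on the columns of the measurement matrix: the largest absolute normalized inner product between distinct columns. A minimal sketch:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.conj().T @ An)   # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)       # ignore self-products
    return G.max()

print(mutual_coherence(np.eye(4)))           # → 0.0
A = np.array([[1.0, 0.0], [1.0, 1.0]])
print(round(mutual_coherence(A), 3))         # → 0.707
```

    Lower coherence generally means better sparse-recovery guarantees, which is why the APC configuration is optimized against this measure.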

  8. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction.

    PubMed

    Bao, Qian; Jiang, Chenglong; Lin, Yun; Tan, Weixian; Wang, Zhirui; Hong, Wen

    2016-01-01

    With a short linear array configured in the cross-track direction, downward looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) can obtain the 3-D image of an imaging scene. To improve the cross-track resolution, sparse recovery methods have been investigated in recent years. In the compressive sensing (CS) framework, the reconstruction performance depends on the property of the measurement matrix. This paper concerns the technique to optimize the measurement matrix and deal with the mismatch problem of the measurement matrix caused by off-grid scatterers. In the model of cross-track reconstruction, the measurement matrix is mainly affected by the configuration of antenna phase centers (APC); thus, two mutual coherence based criteria are proposed to optimize the configuration of APCs. On the other hand, to compensate for the mismatch problem of the measurement matrix, the sparse Bayesian inference based method is introduced into the cross-track reconstruction by jointly estimating the scatterers and the off-grid error. Experiments demonstrate the performance of the proposed APCs' configuration schemes and the proposed cross-track reconstruction method. PMID:27556471

  9. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction.

    PubMed

    Bao, Qian; Jiang, Chenglong; Lin, Yun; Tan, Weixian; Wang, Zhirui; Hong, Wen

    2016-08-22

    With a short linear array configured in the cross-track direction, downward looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) can obtain the 3-D image of an imaging scene. To improve the cross-track resolution, sparse recovery methods have been investigated in recent years. In the compressive sensing (CS) framework, the reconstruction performance depends on the property of the measurement matrix. This paper concerns the technique to optimize the measurement matrix and deal with the mismatch problem of the measurement matrix caused by off-grid scatterers. In the model of cross-track reconstruction, the measurement matrix is mainly affected by the configuration of antenna phase centers (APC); thus, two mutual coherence based criteria are proposed to optimize the configuration of APCs. On the other hand, to compensate for the mismatch problem of the measurement matrix, the sparse Bayesian inference based method is introduced into the cross-track reconstruction by jointly estimating the scatterers and the off-grid error. Experiments demonstrate the performance of the proposed APCs' configuration schemes and the proposed cross-track reconstruction method.

  10. Fast parallel interferometric 3D tracking of numerous optically trapped particles and their hydrodynamic interaction.

    PubMed

    Ruh, Dominic; Tränkle, Benjamin; Rohrbach, Alexander

    2011-10-24

    Multi-dimensional, correlated particle tracking is a key technology to reveal dynamic processes in living and synthetic soft matter systems. In this paper we present a new method for tracking micron-sized beads in parallel and in all three dimensions - faster and more precise than existing techniques. Using an acousto-optic deflector and two quadrant-photo-diodes, we can track numerous optically trapped beads at up to tens of kHz with a precision of a few nanometers by back-focal plane interferometry. By time-multiplexing the laser focus, we can calibrate individually all traps and all tracking signals in a few seconds and in 3D. We show 3D histograms and calibration constants for nine beads in a quadratic arrangement, although trapping and tracking is easily possible for more beads also in arbitrary 2D arrangements. As an application, we investigate the hydrodynamic coupling and diffusion anomalies of spheres trapped in a 3 × 3 arrangement. PMID:22109012

  11. A 3D front tracking method on a CPU/GPU system

    SciTech Connect

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe a method for porting a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are computed on the GPU; interface mesh adaptation is performed on the CPU. The convergence of the method is assessed on test problems with prescribed velocity fields. Performance results show overall speedups of 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.

  12. 3D silicon sensors with variable electrode depth for radiation hard high resolution particle tracking

    NASA Astrophysics Data System (ADS)

    Da Vià, C.; Borri, M.; Dalla Betta, G.; Haughton, I.; Hasi, J.; Kenney, C.; Povoli, M.; Mendicino, R.

    2015-04-01

    3D sensors, with electrodes micro-processed inside the silicon bulk using Micro-Electro-Mechanical System (MEMS) technology, were industrialized in 2012 and were installed in the first detector upgrade at the LHC, the ATLAS IBL in 2014. They are the radiation hardest sensors ever made. A new idea is now being explored to enhance the three-dimensional nature of 3D sensors by processing collecting electrodes at different depths inside the silicon bulk. This technique uses the electric field strength to suppress the charge collection effectiveness of the regions outside the p-n electrodes' overlap. Evidence of this property is supported by test beam data of irradiated and non-irradiated devices bump-bonded with pixel readout electronics and simulations. Applications include High-Luminosity Tracking in the high multiplicity LHC forward regions. This paper will describe the technical advantages of this idea and the tracking application rationale.

  13. Using a wireless motion controller for 3D medical image catheter interactions

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    State-of-the-art morphological imaging techniques usually provide high-resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation maps, fly-through cameras or virtual interaction approaches. There has been ongoing development and improvement of controllers that help to interact with 3D environments in the domain of computer games. These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed by Nintendo, can be used for catheter navigation and planning purposes. By default, the Wiimote controller measures only rough acceleration over a range of +/- 3g with 10% sensitivity, plus orientation. Therefore, a pose estimation algorithm was developed to compute accurate position and orientation in 3D space with respect to four infrared LEDs. Current results show that for the translation it is possible to obtain a mean error of (0.38 cm, 0.41 cm, 4.94 cm) and for the rotation (0.16, 0.28), respectively. Within this paper we introduce a clinical prototype that allows steering of a virtual fly-through camera attached to the catheter tip by the Wii controller, on the basis of a segmented vessel tree.

  14. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, the non-cooperative targets may have strong maneuverability, which tends to cause time-variant Doppler modulation and imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimations of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target of uniformly accelerated rotation, the rotational modulation in echo is formulated as chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat the multi-channel ISAR image formation as a joint-sparsity constraint optimization. Then, a modified orthogonal matching pursuit (OMP) algorithm is employed to solve the optimization problem to produce high-resolution range-Doppler (RD) images and chirp parameter estimation. The 3D target geometry and the motion estimations are followed by using the acquired RD images and chirp parameters. Herein, a joint estimation approach of 3D geometry and rotation motion is presented to realize outlier removing and error reduction. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data are performed to confirm the effectiveness of the proposed algorithm. PMID:26930684
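The paper uses a modified OMP tailored to joint multi-channel, chirp-dictionary recovery; that variant is not reproducible from the abstract. A minimal textbook OMP sketch (dictionary and sizes are illustrative) shows the greedy sparse-recovery step it builds on:

```python
import numpy as np

def omp(A, y, k):
    """Plain orthogonal matching pursuit: greedily pick the dictionary
    column most correlated with the residual, then re-fit the selected
    columns by least squares and update the residual."""
    residual, support = y.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Noiseless recovery of a 2-sparse vector from 48 random projections.
rng = np.random.default_rng(0)
A = rng.standard_normal((48, 64))
x_true = np.zeros(64)
x_true[[3, 17]] = [3.0, -2.5]
x_hat = omp(A, A @ x_true, k=2)
print(np.nonzero(x_hat)[0])  # indices of the recovered support
```

The paper's version additionally enforces joint sparsity across the interferometric channels, so the support is selected once for all channels rather than per channel.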

  16. 3D Visualization of Monte-Carlo Simulation's of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy-ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy-ion tracks include a very high energy deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to biomolecules and the tissue matrix, considered the most important lesions for acute and long-term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy-ion biophysics.
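RITRACKS itself is a full Monte-Carlo transport code; the voxelization step described above, however, is simple to sketch. The fragment below (voxel size and density chosen to match the abstract's 20 nm water-like scale; the event format is an assumption) bins point energy depositions into cubic voxels and converts each voxel's energy to absorbed dose in gray:

```python
import math

def voxel_dose(events, voxel_nm=20.0, density_g_cm3=1.0):
    """Bin point energy depositions (x, y, z in nm; E in eV) into cubic
    voxels and convert each voxel's total energy to absorbed dose (Gy)."""
    EV_TO_J = 1.602e-19
    # Mass of one voxel in kg (density in g/cm^3 -> kg/m^3, edge nm -> m).
    mass_kg = density_g_cm3 * 1e3 * (voxel_nm * 1e-9) ** 3
    dose = {}
    for x, y, z, e_ev in events:
        key = (math.floor(x / voxel_nm),
               math.floor(y / voxel_nm),
               math.floor(z / voxel_nm))
        dose[key] = dose.get(key, 0.0) + e_ev * EV_TO_J / mass_kg
    return dose

# 1 keV deposited inside a single 20 nm water voxel.
d = voxel_dose([(5.0, 5.0, 5.0, 1000.0)])
print(round(d[(0, 0, 0)]))  # 20025 Gy
```

The huge dose values for tiny deposits illustrate why nanometer-scale voxels in the penumbra can exceed 1000 Gy even far from the track core.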

  17. Ground motion simulations in Marmara (Turkey) region from 3D finite difference method

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Douglas, John

    2016-04-01

    In the framework of the European project MARSite (2012-2016), one of the main contributions from our research team was to provide ground-motion simulations for the Marmara region from various earthquake source scenarios. We adopted a 3D finite difference code, taking into account the 3D structure around the Sea of Marmara (including the bathymetry) and the sea layer. We simulated two moderate earthquakes (about Mw4.5) and found that the 3D structure significantly improves the waveforms compared to the 1D layer model. Simulations were carried out for different earthquakes (moderate point sources and large finite sources) in order to provide shake maps (Aochi and Ulrich, BSSA, 2015), to study the variability of ground-motion parameters (Douglas & Aochi, BSSA, 2016), as well as to provide synthetic seismograms for the blind inversion tests (Diao et al., GJI, 2016). The results are also planned to be integrated into broadband ground-motion simulations, tsunami generation, and simulations of triggered landslides (in progress by different partners). The simulations are freely shared among the partners via the internet and visualizations of the results are published on the project's homepage. All these simulations should be seen as a reference for this region, as they are based on the latest knowledge obtained during the MARSite project, although refinement and validation of the model parameters and simulations remain a continuing research task relying on ongoing observations. The numerical code used, the models, and the simulations are available on demand.

  18. Estimation of 3D myocardial motion from tagged MRI using LDDMM

    NASA Astrophysics Data System (ADS)

    Kotamraju, Vinay; McVeigh, Elliot; Beg, Mirza Faisal

    2007-03-01

    Non-invasive estimation of regional cardiac function is important for assessment of myocardial contractility. The use of the MR tagging technique enables acquisition of intra-myocardial tissue motion by placing a spatially modulated pattern of magnetization whose deformation with the myocardium over the cardiac cycle can be imaged. Quantitative computation of parameters such as wall thickening, shearing, rotation, torsion and strain within the myocardium is traditionally achieved by processing the tag-marked MR image frames to 1) segment the tag lines and 2) detect the correspondence between points across the time-indexed frames. In this paper, we describe our approach to solving this problem using the Large Deformation Diffeomorphic Metric Mapping (LDDMM) algorithm in which tag-line segmentation and motion reconstruction occur simultaneously. Our method differs from earlier proposed nonrigid-registration-based cardiac motion estimation methods in that our matching cost incorporates image intensity overlap via the L2 norm and the estimated transformations are diffeomorphic. We also present a novel method of generating synthetic tag line images with known ground truth and motion characteristics that closely follow those in the original data; these can be used for validation of motion estimation algorithms. Initial validation shows that our method is able to accurately segment tag-lines and estimate a dense 3D motion field describing the motion of the myocardium in both the left and the right ventricle.

  19. Meanie3D - a mean-shift based, multivariate, multi-scale clustering and tracking algorithm

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Malte, Diederich; Silke, Troemel

    2014-05-01

    Project OASE is one of five working groups at HErZ (Hans Ertel Centre for Weather Research), an ongoing effort by the German weather service (DWD) to further research at universities concerning weather prediction. The goal of project OASE is to gain an object-based perspective on convective events by identifying them early in the onset of convective initiation and following them through the entire lifecycle. The ability to follow objects in this fashion requires new ways of object definition and tracking, which incorporate all the available data sets of interest, such as satellite imagery, weather radar or lightning counts. The Meanie3D algorithm provides the necessary tool for this purpose. Core features of this new approach to clustering (object identification) and tracking are the ability to identify objects using the mean-shift algorithm applied to a multitude of variables (multivariate), as well as the ability to detect objects on various scales (multi-scale) using elements of scale-space theory. The algorithm works in 2D as well as 3D without modifications. It is an extension of a method well known from the fields of computer vision and image processing, which has been tailored to serve the needs of the meteorological community. Although demonstrated here for a specific application (convective initiation), the algorithm is easily tailored to provide clustering and tracking for a wide class of data sets and problems. In this talk, the demonstration is carried out on two of the OASE group's own composite data sets: a 2D nationwide composite of Germany including C-band radar (2D) and satellite information, and a 3D local composite of the Bonn/Jülich area containing a high-resolution 3D X-band radar composite.
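Meanie3D's multivariate, multi-scale machinery is not reproduced here, but the core mean-shift iteration it names is simple. A minimal 1-D sketch (flat kernel, hypothetical data) shows how each starting point climbs to the mode of its local density:

```python
def mean_shift(points, start, bandwidth=1.0, iters=50):
    """One mean-shift trajectory: repeatedly move a point to the mean of
    its neighbours within `bandwidth`, converging on a density mode."""
    x = start
    for _ in range(iters):
        neigh = [p for p in points if abs(p - x) <= bandwidth]
        x = sum(neigh) / len(neigh)
    return x

# Two 1-D clusters; each start converges to its own cluster's mode.
pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(round(mean_shift(pts, 0.0), 2), round(mean_shift(pts, 5.2), 2))  # 0.1 5.1
```

In Meanie3D the same idea runs over vectors of several meteorological variables at once, and the bandwidth plays the role of the scale parameter from scale-space theory.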

  20. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.
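The abstract's objective measure, normalized RMS tracking error, can be sketched directly; the normalization below (by the RMS excursion of the target signal) is one common convention and is an assumption, since the abstract does not state which was used:

```python
import math

def normalized_rms_error(target, actual):
    """RMS tracking error divided by the RMS excursion of the target,
    making scores comparable across tasks of different amplitude."""
    n = len(target)
    rms_err = math.sqrt(sum((t - a) ** 2 for t, a in zip(target, actual)) / n)
    rms_target = math.sqrt(sum(t ** 2 for t in target) / n)
    return rms_err / rms_target

# Perfect tracking scores 0; missing the target entirely scores 1.
print(normalized_rms_error([1.0, -1.0, 1.0, -1.0], [1.0, -1.0, 1.0, -1.0]))  # 0.0
print(normalized_rms_error([2.0, 2.0], [0.0, 0.0]))  # 1.0
```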

  1. Heart wall motion analysis by dynamic 3D strain rate imaging from tissue Doppler echocardiography

    NASA Astrophysics Data System (ADS)

    Hastenteufel, Mark; Wolf, Ivo; de Simone, Raffaele; Mottl-Link, Sibylle; Meinzer, Hans-Peter

    2002-04-01

    Knowledge of the complex three-dimensional (3D) heart wall motion pattern, particularly in the left ventricle, provides valuable information about potential malfunctions, e.g., myocardial ischemia. Nowadays, echocardiography (cardiac ultrasound) is the predominant technique for evaluation of cardiac function. Besides morphology, tissue velocities can be obtained by Doppler techniques (tissue Doppler imaging, TDI). Strain rate imaging (SRI) is a new technique to assess heart vitality. It provides information about the contraction ability of the myocardium. Two-dimensional color Doppler echocardiography is still the most important clinical method for estimation of morphology and function. Two-dimensional methods, however, lead to a loss of information because of the inherently three-dimensional nature of heart movement. Owing to this complex three-dimensional motion pattern of the heart, knowledge of the velocity and strain rate distribution over the whole ventricle can provide more valuable diagnostic information about motion disorders. For the assessment of intracardiac blood flow, three-dimensional color Doppler has already shown its clinical utility. We have developed methods to produce strain rate images by means of 3D tissue Doppler echocardiography. The tissue Doppler and strain rate images can be visualized and quantified by different methods. The methods are integrated into an interactively usable software environment, making them available in clinical everyday life. Our software provides the physician with a valuable tool for diagnosis of heart wall motion.
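The basic quantity behind strain rate imaging is the spatial gradient of tissue velocity along the ultrasound beam. A minimal sketch (sample spacing and velocities are hypothetical) using finite differences:

```python
def strain_rate(velocities, dz):
    """Axial strain rate (1/s) from tissue Doppler velocities sampled
    along the beam: SR = dv/dz, via central/one-sided differences."""
    n = len(velocities)
    sr = []
    for i in range(n):
        if i == 0:
            sr.append((velocities[1] - velocities[0]) / dz)
        elif i == n - 1:
            sr.append((velocities[-1] - velocities[-2]) / dz)
        else:
            sr.append((velocities[i + 1] - velocities[i - 1]) / (2 * dz))
    return sr

# Linearly varying velocity over unit spacing gives a uniform strain rate.
print(strain_rate([0.0, 2.0, 4.0, 6.0], dz=1.0))  # [2.0, 2.0, 2.0, 2.0]
```

Extending this gradient from one beam direction to the full ventricle is exactly what the 3D tissue Doppler acquisition above makes possible.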

  2. Tracking motion, deformation, and texture using conditionally gaussian processes.

    PubMed

    Marks, Tim K; Hershey, John R; Movellan, Javier R

    2010-02-01

    We present a generative model and inference algorithm for 3D nonrigid object tracking. The model, which we call G-flow, enables the joint inference of 3D position, orientation, and nonrigid deformations, as well as object texture and background texture. Optimal inference under G-flow reduces to a conditionally Gaussian stochastic filtering problem. The optimal solution to this problem reveals a new space of computer vision algorithms, of which classic approaches such as optic flow and template matching are special cases that are optimal only under special circumstances. We evaluate G-flow on the problem of tracking facial expressions and head motion in 3D from single-camera video. Previously, the lack of realistic video data with ground truth nonrigid position information has hampered the rigorous evaluation of nonrigid tracking. We introduce a practical method of obtaining such ground truth data and present a new face video data set that was created using this technique. Results on this data set show that G-flow is much more robust and accurate than current deterministic optic-flow-based approaches. PMID:20075463
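G-flow's full inference is beyond a short sketch, but the conditionally Gaussian filtering family it reduces to has the scalar Kalman filter as its simplest member. The toy below (random-walk state model; all parameters illustrative, not from the paper) shows the predict/update structure:

```python
def kalman_1d(zs, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in noise:
    the simplest instance of conditionally Gaussian filtering."""
    x, p, out = x0, p0, []
    for z in zs:
        p += q                  # predict: state uncertainty grows by q
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update toward the measurement z
        p *= (1 - k)            # posterior variance shrinks
        out.append(x)
    return out

est = kalman_1d([1.0] * 20)
print(0.9 < est[-1] < 1.0)  # True: estimate converges toward the constant signal
```

In G-flow the Gaussian conditioning is over texture given pose and deformation, which is what lets optic flow and template matching fall out as special cases.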

  3. Spatiotemporal non-rigid image registration for 3D ultrasound cardiac motion estimation

    NASA Astrophysics Data System (ADS)

    Loeckx, D.; Ector, J.; Maes, F.; D'hooge, J.; Vandermeulen, D.; Voigt, J.-U.; Heidbüchel, H.; Suetens, P.

    2007-03-01

    We present a new method to evaluate 4D (3D + time) cardiac ultrasound data sets by nonrigid spatiotemporal image registration. First, a frame-to-frame registration is performed that yields a dense deformation field. The deformation field is used to calculate local spatiotemporal properties of the myocardium, such as the velocity, strain and strain rate. The field is also used to propagate particular points and surfaces, representing e.g. the endocardial surface, over the different frames. As such, the 4D paths of these points are obtained, which can be used to calculate the velocity by which the wall moves and the evolution of the local surface area over time. The wall velocity is not angle-dependent as in classical Doppler imaging, since the 4D data allow calculating the true 3D motion. Similarly, all 3D myocardium strain components can be estimated. Combined, they result in local surface area or volume changes, which can be color-coded as a measure of local contractility. A diagnostic method that strongly benefits from this technique is cardiac motion and deformation analysis, which is an important aid to quantify the mechanical properties of the myocardium.
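Computing strain components from a dense deformation field comes down to forming the local deformation gradient F and evaluating a strain tensor from it. A minimal sketch using the standard Green-Lagrange strain (the specific strain measure used by the authors is not stated in the abstract):

```python
import numpy as np

def green_lagrange_strain(F):
    """Green-Lagrange strain tensor E = (F^T F - I) / 2 from a local
    deformation gradient F, e.g. recovered from a dense deformation field."""
    return 0.5 * (F.T @ F - np.eye(F.shape[0]))

# 10% uniaxial stretch along x: E_xx = (1.1^2 - 1) / 2 = 0.105.
F = np.diag([1.1, 1.0, 1.0])
print(float(np.round(green_lagrange_strain(F), 3)[0, 0]))  # 0.105
```

In practice F is obtained per voxel by differentiating the registration's displacement field, and the strain rate follows by differencing strains between frames.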

  4. Integration of 3D structure from disparity into biological motion perception independent of depth awareness.

    PubMed

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers' depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception.

  5. 3-D Flow Field Diagnostics and Validation Studies using Stereoscopic Tracking Velocimetry

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung Stephen; Ramachandran, Narayanan; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    The measurement of 3-D three-component velocity fields is of great importance in both ground and space experiments for understanding materials processing and fluid physics. Here, we present the investigation results of stereoscopic tracking velocimetry (STV) for measuring 3-D velocity fields. The effort includes diagnostic technology development, experimental velocity measurement, and comparison with analytical and numerical computation. The advantages of STV stems from the system simplicity for building compact hardware and in software efficiency for continual near-real-time process monitoring. It also has illumination flexibility for observing volumetric flow fields from arbitrary directions. STV is based on stereoscopic CCD observations of particles seeded in a flow. Neural networks are used for data analysis. The developed diagnostic tool is tested with a simple directional solidification apparatus using Succinonitrile. The 3-D velocity field in the liquid phase is measured and compared with results from detailed numerical computations. Our theoretical, numerical, and experimental effort has shown STV to be a viable candidate for reliably quantifying the 3-D flow field in materials processing and fluids experiments.
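The neural-network data analysis used in STV is not detailed in the abstract, but the geometric core of any stereoscopic velocimeter is back-projecting matched particle images to 3-D. A sketch for an idealized rectified camera pair (focal length, baseline, and pixel values below are hypothetical):

```python
def triangulate(xl, xr, y, focal, baseline):
    """Back-project a matched particle image from a rectified stereo pair:
    Z = f*B / (xl - xr), then X = xl*Z/f and Y = y*Z/f."""
    disparity = xl - xr
    Z = focal * baseline / disparity
    return (xl * Z / focal, y * Z / focal, Z)

# f = 500 px, baseline = 0.1 m, disparity = 25 px  ->  Z = 2 m.
print(triangulate(xl=50.0, xr=25.0, y=0.0, focal=500.0, baseline=0.1))
# (0.2, 0.0, 2.0)
```

Velocities then follow by differencing triangulated particle positions between frames, which is where the neural-network correspondence matching earns its keep.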

  6. METHODS FOR USING 3-D ULTRASOUND SPECKLE TRACKING IN BIAXIAL MECHANICAL TESTING OF BIOLOGICAL TISSUE SAMPLES

    PubMed Central

    Yap, Choon Hwai; Park, Dae Woo; Dutta, Debaditya; Simon, Marc; Kim, Kang

    2014-01-01

    Being multilayered and anisotropic, biological tissues such as cardiac and arterial walls are structurally complex, making full assessment and understanding of their mechanical behavior challenging. Current standard mechanical testing uses surface markers to track tissue deformations and does not provide deformation data below the surface. In the study described here, we found that combining mechanical testing with 3-D ultrasound speckle tracking could overcome this limitation. Rat myocardium was tested with a biaxial tester and was concurrently scanned with high-frequency ultrasound in three dimensions. The strain energy function was computed from stresses and strains using an iterative non-linear curve-fitting algorithm. Because the strain energy function consists of terms for the base matrix and for embedded fibers, spatially varying fiber orientation was also computed by curve fitting. Using finite-element simulations, we first validated the accuracy of the non-linear curve-fitting algorithm. Next, we compared experimentally measured rat myocardium strain energy function values with those in the literature and found a matching order of magnitude. Finally, we retained samples after the experiments for fiber orientation quantification using histology and found that the results satisfactorily matched those computed in the experiments. We conclude that 3-D ultrasound speckle tracking can be a useful addition to traditional mechanical testing of biological tissues and may provide the benefit of enabling fiber orientation computation. PMID:25616585

  7. A Little Knowledge of Ground Motion: Explaining 3-D Physics-Based Modeling to Engineers

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2014-12-01

    Users of earthquake planning scenarios require the ground-motion map to be credible enough to justify costly planning efforts, but not all ground-motion maps are right for all uses. There are two common ways to create a map of ground motion for a hypothetical earthquake. One approach is to map the median shaking estimated by empirical attenuation relationships. The other uses 3-D physics-based modeling, in which one analyzes a mathematical model of the earth's crust near the fault rupture and calculates the generation and propagation of seismic waves from source to ground surface by first principles. The two approaches produce different-looking maps. The more-familiar median maps smooth out variability and correlation. Using them in a planning scenario can lead to a systematic underestimation of damage and loss, and could leave a community underprepared for realistic shaking. The 3-D maps show variability, including some very high values that can disconcert non-scientists. So when the USGS Science Application for Risk Reduction's (SAFRR) Haywired scenario project selected 3-D maps, it was necessary to explain to scenario users—especially engineers who often use median maps—the differences, advantages, and disadvantages of the two approaches. We used authority, empirical evidence, and theory to support our choice. We prefaced our explanation with SAFRR's policy of using the best available earth science, and cited the credentials of the maps' developers and the reputation of the journal in which they published the maps. We cited recorded examples from past earthquakes of extreme ground motions that are like those in the scenario map. We explained the maps on theoretical grounds as well, explaining well established causes of variability: directivity, basin effects, and source parameters. The largest mapped motions relate to potentially unfamiliar extreme-value theory, so we used analogies to human longevity and the average age of the oldest person in samples of

  8. 3D measurement of the position of gold particles via evanescent digital holographic particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Satake, Shin-ichi; Unno, Noriyuki; Nakata, Shuichiro; Taniguchi, Jun

    2016-08-01

    A new technique based on digital holography and evanescent waves was developed for 3D measurements of the position of gold nanoparticles in water. In this technique, an intensity profile is taken from a holographic image of a gold particle. To detect the position of the gold particle with high accuracy, its holographic image is recorded on a nanosized step made of MEXFLON, which has a refractive index close to that of water, and the position of the particle is reconstructed by means of digital holography. The height of the nanosized step was measured by using a profilometer and the digitally reconstructed height of the glass substrate had good agreement with the measured value. Furthermore, this method can be used to accurately track the 3D position of a gold particle in water.
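The evanescent field that makes this height measurement possible decays exponentially with distance from the interface, so particle height can in principle be recovered by inverting that decay. A sketch (penetration depth and intensities are hypothetical; the paper's actual height retrieval goes through the holographic reconstruction):

```python
import math

def height_from_intensity(I, I0, penetration_depth_nm):
    """Invert the evanescent-field decay I(z) = I0 * exp(-z/d) to recover
    a particle's height z above the interface, in nm."""
    return -penetration_depth_nm * math.log(I / I0)

d = 100.0  # nm; set by wavelength, incidence angle, and refractive indices
print(round(height_from_intensity(I=math.exp(-1), I0=1.0,
                                  penetration_depth_nm=d), 6))  # 100.0
```

The MEXFLON step in the experiment serves as a known-height reference against which such retrievals can be calibrated.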

  9. 3D delivered dose assessment using a 4DCT-based motion model

    SciTech Connect

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Mishra, Pankaj E-mail: jhlewis@lroc.harvard.edu; Lewis, John H. E-mail: jhlewis@lroc.harvard.edu; Seco, Joao

    2015-06-15

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images
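Step (2) above amounts to finding the weights of the basis DVFs whose combined projection best matches the measured projection image. A least-squares sketch of that fitting step (the real method optimizes through a forward projector; here the "projections" are arbitrary small arrays standing in for them):

```python
import numpy as np

def fit_dvf_weights(basis_projections, observed_projection):
    """Least-squares weights w such that sum_i w_i * P(basis DVF_i)
    best matches the observed 2D projection (all images flattened)."""
    A = np.stack([b.ravel() for b in basis_projections], axis=1)
    w, *_ = np.linalg.lstsq(A, observed_projection.ravel(), rcond=None)
    return w

rng = np.random.default_rng(1)
basis = [rng.standard_normal((8, 8)) for _ in range(3)]
observed = 0.5 * basis[0] - 1.2 * basis[2]
w = fit_dvf_weights(basis, observed)
print(np.round(w, 6))  # recovers the mixing weights used to build `observed`
```

With the weights in hand, the same linear combination of basis DVFs deforms the planning CT into the "fluoroscopic" 3D volume for that timepoint, on which dose is then computed.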

  11. Oblique needle segmentation and tracking for 3D TRUS guided prostate brachytherapy

    SciTech Connect

    Wei Zhouping; Gardi, Lori; Downey, Donal B.; Fenster, Aaron

    2005-09-15

    An algorithm was developed in order to segment and track brachytherapy needles inserted along oblique trajectories. Three-dimensional (3D) transrectal ultrasound (TRUS) images of a rigid rod simulating the needle, inserted into tissue-mimicking agar and chicken breast phantoms, were obtained to test the accuracy of the algorithm under ideal conditions. Because the robot possesses high positioning and angulation accuracies, we used the robot as a "gold standard," and compared the results of algorithm segmentation to the values measured by the robot. Our testing results showed that the accuracy of the needle segmentation algorithm depends on the needle insertion distance into the 3D TRUS image and the angulation with respect to the TRUS transducer; e.g., at a 10 deg. insertion angulation in agar phantoms, the error of the algorithm in determining the needle tip position was less than 1 mm when the insertion distance was greater than 15 mm. Near real-time needle tracking was achieved by scanning a small volume containing the needle. Our tests also showed that the segmentation time was less than 60 ms, and the scanning time was less than 1.2 s, when the insertion distance into the 3D TRUS image was less than 55 mm. In our needle tracking tests in chicken breast phantoms, the errors in determining the needle orientation were less than 2 deg. in robot yaw and 0.7 deg. in robot pitch orientations, for up to 20 deg. needle insertion angles with the TRUS transducer in the horizontal plane when the needle insertion distance was greater than 15 mm.
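The paper's segmentation algorithm is not reproduced in the abstract; one standard way to recover a needle's 3-D orientation from already-segmented needle voxels is a principal-component line fit, sketched here with synthetic points (the point cloud is hypothetical):

```python
import numpy as np

def needle_axis(points):
    """Fit a 3-D line to segmented needle voxels: the axis direction is
    the dominant right-singular vector of the centred point cloud."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    return centroid, direction / np.linalg.norm(direction)

# Synthetic oblique needle: points along direction (1, 0.1, 0).
pts = [(t, 0.1 * t, 0.0) for t in np.linspace(0, 50, 20)]
c, d = needle_axis(pts)
print(np.round(np.abs(d), 3))  # unit vector along (1, 0.1, 0), up to sign
```

The farthest segmented voxel along the fitted axis then gives a tip estimate, whose accuracy improves with insertion distance, consistent with the trend reported above.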

  12. 3D motion of DNA-Au nanoconjugates in graphene liquid cell electron microscopy.

    PubMed

    Chen, Qian; Smith, Jessica M; Park, Jungwon; Kim, Kwanpyo; Ho, Davy; Rasool, Haider I; Zettl, Alex; Alivisatos, A Paul

    2013-09-11

    Liquid-phase transmission electron microscopy (TEM) can probe and visualize dynamic events with structural or functional details at the nanoscale in a liquid medium. Earlier efforts have focused on the growth and transformation kinetics of hard material systems, relying on their stability under the electron beam. Our recently developed graphene liquid cell technique pushed the spatial resolution of such imaging to the atomic scale, but still focused on growth trajectories of metallic nanocrystals. Here, we adopt this technique instead to image three-dimensional (3D) dynamics of soft materials, double-stranded DNA (dsDNA) connecting Au nanocrystals as one example, at nanometer resolution. We demonstrate first that a graphene liquid cell can seal an aqueous sample solution, of a lower vapor pressure than previously investigated, well against the high vacuum in TEM. Then, from quantitative analysis of real-time nanocrystal trajectories, we show that the status and configuration of the dsDNA dictate the motions of the linked nanocrystals throughout the imaging time of minutes. This sustained connecting ability of dsDNA enables this unprecedented continuous imaging of its dynamics via TEM. Furthermore, the inert graphene surface minimizes sample-substrate interaction and allows the whole nanostructure to rotate freely in the liquid environment; we thus develop and implement the reconstruction of the 3D configuration and motions of the nanostructure from the series of 2D projected TEM images captured while it rotates. In addition to further proving the nanoconjugate's structural stability, this reconstruction demonstrates 3D dynamic imaging by TEM beyond its conventional use in seeing a flattened and dry sample. Altogether, we foresee the new and exciting use of graphene liquid cell TEM in imaging 3D biomolecular transformations or interaction dynamics at nanometer resolution. PMID:23944844

  13. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-03-01

    Since the behaviour of proteins and biological molecules is tightly related to the cell's environment, more and more microscopy techniques are moving from in vitro to live-cell experiments. Looking at both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images and high temporal resolution. Since protein dynamics inside a cell involve all three dimensions, we developed an automated routine for 3D tracking of single fluorescent molecules inside living cells with nanometer accuracy, by exploiting the properties of the point-spread function of out-of-focus Quantum Dots bound to the protein of interest.

  14. Longitudinal Measurement of Extracellular Matrix Rigidity in 3D Tumor Models Using Particle-tracking Microrheology

    PubMed Central

    El-Hamidi, Hamid; Celli, Jonathan P.

    2014-01-01

    The mechanical microenvironment has been shown to act as a crucial regulator of tumor growth behavior and signaling, which is itself remodeled and modified as part of a set of complex, two-way mechanosensitive interactions. While the development of biologically relevant 3D tumor models has facilitated mechanistic studies on the impact of matrix rheology on tumor growth, the inverse problem of mapping changes in the mechanical environment induced by tumors remains challenging. Here, we describe the implementation of particle-tracking microrheology (PTM) in conjunction with 3D models of pancreatic cancer as part of a robust and viable approach for longitudinally monitoring physical changes in the tumor microenvironment, in situ. The methodology described here integrates a system of preparing in vitro 3D models embedded in a model extracellular matrix (ECM) scaffold of Type I collagen with fluorescently labeled probes uniformly distributed for position- and time-dependent microrheology measurements throughout the specimen. In vitro tumors are plated and probed in parallel conditions using multiwell imaging plates. Drawing on established methods, videos of tracer probe movements are transformed via the Generalized Stokes Einstein Relation (GSER) to report the complex frequency-dependent viscoelastic shear modulus, G*(ω). Because this approach is imaging-based, mechanical characterization is also mapped onto large transmitted-light spatial fields to simultaneously report qualitative changes in 3D tumor size and phenotype. Representative results showing contrasting mechanical response in sub-regions associated with localized invasion-induced matrix degradation, as well as system calibration and validation data, are presented. Undesirable outcomes from common experimental errors and troubleshooting of these issues are also presented. The 96-well 3D culture plating format implemented in this protocol is conducive to correlation of microrheology measurements with therapeutic
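    The GSER step described above can be sketched with Mason's algebraic approximation, |G*(1/τ)| ≈ k_BT / (π a ⟨Δr²(τ)⟩ Γ[1+α(τ)]), where α is the local logarithmic slope of the MSD. The Brownian trajectory, probe radius and temperature below are illustrative values, not data from the protocol:

```python
import math
import random

kB = 1.380649e-23  # Boltzmann constant [J/K]

def msd(track, lag):
    """Time-averaged mean squared displacement of a 2D track at an integer lag."""
    terms = [(track[i + lag][0] - track[i][0]) ** 2 +
             (track[i + lag][1] - track[i][1]) ** 2
             for i in range(len(track) - lag)]
    return sum(terms) / len(terms)

def gser_modulus(msd_val, alpha, radius, temperature=298.0):
    """Mason's algebraic GSER estimate of |G*| at omega = 1/tau."""
    return kB * temperature / (math.pi * radius * msd_val * math.gamma(1.0 + alpha))

# Synthetic Brownian track (purely viscous medium, so alpha should be near 1).
random.seed(0)
sigma = math.sqrt(2.0 * 1e-14)   # per-axis step for D = 1e-14 m^2/s, dt = 1 s (assumed)
track = [(0.0, 0.0)]
for _ in range(5000):
    x, y = track[-1]
    track.append((x + random.gauss(0.0, sigma), y + random.gauss(0.0, sigma)))

m1, m10 = msd(track, 1), msd(track, 10)
alpha = math.log(m10 / m1) / math.log(10.0)    # local logarithmic slope of the MSD
G = gser_modulus(m1, alpha, radius=0.5e-6)     # |G*| at omega = 1 rad/s [Pa]
```

    For a purely viscous specimen the recovered α is close to 1 and |G*| reduces to the loss modulus ωη; deviations of α below 1 indicate elastic contributions from the collagen network.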

  15. Aref's chaotic orbits tracked by a general ellipsoid using 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Shui, Pei; Popinet, Stéphane; Govindarajan, Rama; Valluri, Prashant

    2015-11-01

    The motion of an ellipsoidal solid in an ideal fluid has been shown to be chaotic (Aref, 1993) in the limit of non-integrability of Kirchhoff's equations (Kozlov & Oniscenko, 1982). On the other hand, the particle can stop moving when the damping viscous force is strong enough. We present numerical evidence, using our in-house immersed solid solver, for 3D chaotic motion of a general ellipsoidal solid and suggest criteria for triggering such motion. Our immersed solid solver functions within the framework of the Gerris flow package of Popinet et al. (2003). This solver, the Gerris Immersed Solid Solver (GISS), resolves the 6-degree-of-freedom motion of immersed solids of arbitrary geometry and number. We validate our results against the solution of Kirchhoff's equations. The study also shows that the translational/rotational energy ratio plays the key role in the motion pattern, while the particle geometry and the density ratio between solid and fluid also have some influence on the chaotic behaviour. Along with several other benchmark cases for viscous flows, we propose the prediction of chaotic Aref orbits as a key benchmark test case for immersed boundary/solid solvers.
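    The translational/rotational energy ratio identified above as the key parameter follows directly from the rigid-body state; the principal moments of a solid ellipsoid are m(b²+c²)/5 and cyclic permutations. All numerical values below are illustrative, not simulation data:

```python
def ellipsoid_inertia(mass, a, b, c):
    """Principal moments of inertia of a solid ellipsoid with semi-axes a, b, c."""
    return (mass / 5.0 * (b * b + c * c),
            mass / 5.0 * (a * a + c * c),
            mass / 5.0 * (a * a + b * b))

def energy_ratio(mass, velocity, inertia, omega):
    """Translational-to-rotational kinetic energy ratio of a rigid body."""
    e_trans = 0.5 * mass * sum(v * v for v in velocity)
    e_rot = 0.5 * sum(I * w * w for I, w in zip(inertia, omega))
    return e_trans / e_rot

# Illustrative state for a 2:1:1 ellipsoid (SI units, made-up values).
I = ellipsoid_inertia(5.0, 2.0, 1.0, 1.0)
ratio = energy_ratio(5.0, (1.0, 0.0, 0.0), I, (0.0, 0.0, 1.0))
```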

  16. Validation of INSAT-3D atmospheric motion vectors for monsoon 2015

    NASA Astrophysics Data System (ADS)

    Sharma, Priti; Rani, S. Indira; Das Gupta, M.

    2016-05-01

    Atmospheric Motion Vectors (AMVs) over the Indian Ocean and surrounding region are one of the most important sources of tropospheric wind information assimilated in numerical weather prediction (NWP) systems. Earlier studies showed that the quality of AMVs from the Indian geostationary satellite Kalpana-1 was not comparable to that of other geostationary satellites over this region, and hence they were not used in NWP systems. The Indian satellite INSAT-3D was successfully launched on July 26, 2013 with an upgraded imaging system compared to that of the previous Indian satellite Kalpana-1. INSAT-3D has a middle-infrared band (3.80 - 4.00 μm) capable of capturing night-time pictures of low clouds and fog. Three consecutive images at 30-minute intervals are used to derive the AMVs. A new height assignment scheme (using the NWP first guess and replacing the old empirical GA method), along with a modified quality control scheme, was implemented for deriving INSAT-3D AMVs. In this paper an attempt has been made to validate these AMVs against in-situ observations as well as against NCMRWF's NWP first guess for monsoon 2015. The AMVs are subdivided into three pressure layers in the vertical, viz. low (1000 - 700 hPa), middle (700 - 400 hPa) and high (400 - 100 hPa), for validation purposes. Several statistics, viz. normalized root-mean-square vector difference, bias, etc., have been computed over different latitudinal belts. Results show that the general mean monsoon circulation along with all the transient monsoon systems is well captured by INSAT-3D AMVs, and that the error statistics (e.g., RMSE) of INSAT-3D AMVs are now comparable to those of other geostationary satellites.
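    The validation statistics named above can be sketched on hypothetical collocated (u, v) wind pairs: a speed bias and a root-mean-square vector difference, the latter also normalized by the mean reference speed. The exact formulas used operationally may differ in detail:

```python
import math

def amv_statistics(amv, ref):
    """Speed bias and RMS vector difference of AMV winds against reference winds
    (in-situ or NWP first guess), plus the RMSVD normalized by mean reference speed."""
    n = len(amv)
    bias = sum(math.hypot(u1, v1) - math.hypot(u2, v2)
               for (u1, v1), (u2, v2) in zip(amv, ref)) / n
    rmsvd = math.sqrt(sum((u1 - u2) ** 2 + (v1 - v2) ** 2
                          for (u1, v1), (u2, v2) in zip(amv, ref)) / n)
    mean_speed = sum(math.hypot(u, v) for u, v in ref) / n
    return bias, rmsvd, rmsvd / mean_speed

# Made-up collocated (u, v) pairs in m/s: AMV vs. radiosonde/first guess.
bias, rmsvd, nrmsvd = amv_statistics([(10.0, 0.0), (0.0, 5.0)],
                                     [(8.0, 0.0), (0.0, 5.0)])
```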

  17. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-07-01

    Recently, tremendous improvements have been achieved in the precision of localization of single fluorescent molecules, allowing localization and tracking of biomolecules at the nm level. Since the behaviour of proteins and biological molecules is tightly influenced by the cell's environment, a growing number of microscopy techniques are moving from in vitro to live-cell experiments. Looking at both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images and high temporal resolution (of the order of ms). To satisfy these requirements we developed an automated routine that allows 3D tracking of single fluorescent molecules in living cells with nanometer accuracy, by exploiting the properties of the point-spread function of out-of-focus Quantum Dots bound to the protein of interest.
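    The out-of-focus localization idea can be sketched as a calibration problem: a defocused Quantum Dot produces a ring pattern whose radius grows roughly linearly with axial offset, so a measured ring radius can be inverted for z. The calibration numbers below are hypothetical, not the authors' data:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration: outer ring radius (pixels) of a defocused QD
# versus axial offset z (nm), measured with a piezo stage.
z_cal = [0.0, 200.0, 400.0, 600.0, 800.0]
r_cal = [5.0, 6.1, 7.0, 8.1, 9.0]
slope, intercept = fit_line(z_cal, r_cal)

def z_from_radius(r):
    """Invert the calibration to recover axial position from a measured radius."""
    return (r - intercept) / slope

z_est = z_from_radius(7.0)  # axial position of a QD whose ring radius is 7 px
```

    Combined with standard centroid or Gaussian fitting for the lateral (x, y) position, this yields a full 3D localization from a single defocused image.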

  18. Passive Markers for Tracking Surgical Instruments in Real-Time 3-D Ultrasound Imaging

    PubMed Central

    Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E.

    2013-01-01

    A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts. PMID:22042148

  19. Spatial light modulation for improved microscope stereo vision and 3D tracking

    NASA Astrophysics Data System (ADS)

    Lee, Michael P.; Gibson, Graham; Tassieri, Manlio; Phillips, Dave; Bernet, Stefan; Ritsch-Marte, Monika; Padgett, Miles J.

    2013-09-01

    We present a new type of stereo microscopy that can be used for 3D tracking over an extended depth. The use of Spatial Light Modulators (SLMs) in the Fourier plane of a microscope sample is a common technique in Holographic Optical Tweezers (HOT). This setup is readily transferable from a tweezer system to an imaging system, where the tweezing laser is replaced with a camera. Just as a HOT system can diffract many traps of different types, in the imaging system many different imaging modes can be diffracted with the SLM. The type of imaging we have developed is stereo imaging combined with lens correction. This approach has similarities with human vision, where each eye has a lens, and it also extends the depth over which we can accurately track particles.

  20. 3D Fluorescent and Reflective Imaging of Whole Stardust Tracks in Aerogel

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2011-11-07

    The NASA Stardust mission returned to Earth in 2006 with the cometary collector having captured over 1,000 particles in an aerogel medium at a relative velocity of 6.1 km/s. Particles captured in aerogel were heated, disaggregated and dispersed along 'tracks', or cavities, in the aerogel, each track representing the history of one capture event. It has been our focus to characterize whole tracks chemically and morphologically in 3 dimensions, using solely non-destructive methods. To this end, we have used a variety of methods: 3D Laser Scanning Confocal Microscopy (LSCM), synchrotron X-ray fluorescence (SXRF), and synchrotron X-ray diffraction (SXRD). In recent months we have developed two new techniques to aid in data collection. (1) We have received a new confocal microscope which has enabled autofluorescent and spectral imaging of aerogel samples. (2) We have developed a stereo-SXRF technique to chemically identify large grains in SXRF maps in 3-space. The addition of both of these methods to our analytic abilities provides a greater understanding of the mechanisms and results of track formation.

  1. Quantifying the 3D Odorant Concentration Field Used by Actively Tracking Blue Crabs

    NASA Astrophysics Data System (ADS)

    Webster, D. R.; Dickman, B. D.; Jackson, J. L.; Weissburg, M. J.

    2007-11-01

    Blue crabs and other aquatic organisms locate food and mates by tracking turbulent odorant plumes. The odorant concentration fluctuates unpredictably due to turbulent transport, and many characteristics of the fluctuation pattern have been hypothesized as useful cues for orienting to the odorant source. To make a direct linkage between tracking behavior and the odorant concentration signal, we developed a measurement system based on the laser-induced fluorescence technique to quantify the instantaneous 3D concentration field surrounding actively tracking blue crabs. The data suggest a correlation between upstream walking speed and the concentration of the odorant signal arriving at the antennule chemosensors, which are located near the mouth region. More specifically, we note an increase in upstream walking speed when high-concentration bursts arrive at the antennules' location. We also test hypotheses regarding the ability of blue crabs to steer relative to the plume centerline based on the signal contrast between the chemosensors located on their leg appendages. These chemosensors are located much closer to the substrate than the antennules and are separated by the width of the blue crab. In this case, it appears that blue crabs use the bilateral signal comparison to track along the edge of the plume.

  2. Adaptive Kalman snake for semi-autonomous 3D vessel tracking.

    PubMed

    Lee, Sang-Hoon; Lee, Sanghoon

    2015-10-01

    In this paper, we propose a robust semi-autonomous algorithm for 3D vessel segmentation and tracking based on an active contour model and a Kalman filter. For each computed tomography angiography (CTA) slice, we use the active contour model to segment the vessel boundary and the Kalman filter to track position and shape variations of the vessel boundary between slices. For successful segmentation via active contour, we select an adequate number of initial points from the contour of the first slice. The points are set manually by user input for the first slice. For the remaining slices, the initial contour position is estimated autonomously based on the segmentation results of the previous slice. To obtain refined segmentation results, an adaptive control-spacing algorithm is introduced into the active contour model. Moreover, a block-search-based initial contour estimation procedure is proposed to ensure that the initial contour of each slice is near the vessel boundary. Experiments were performed on synthetic and real chest CTA images. Compared with the well-known Chan-Vese (CV) model, the proposed algorithm exhibited better performance in segmentation and tracking. In particular, receiver operating characteristic analysis on the synthetic and real CTA images demonstrated the time efficiency and tracking robustness of the proposed model. In terms of computational time, processing time was effectively reduced by approximately 20%.
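    The slice-to-slice tracking idea can be sketched for a single contour coordinate: a constant-velocity Kalman filter predicts where the boundary will be in the next slice and corrects the prediction with the segmented position. This 1D reduction, with made-up noise parameters and measurements, is illustrative only; the paper's filter tracks the full contour state:

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one contour coordinate across CTA slices."""
    def __init__(self, pos, q=1e-3, r=1.0):
        self.x = [pos, 0.0]                 # state: position and slice-to-slice velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r               # process / measurement noise (made-up values)

    def predict(self):
        # x <- F x,  P <- F P F^T + Q,  with F = [[1, 1], [0, 1]]
        self.x = [self.x[0] + self.x[1], self.x[1]]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [[p00 + p01 + p10 + p11 + self.q, p01 + p11],
                  [p10 + p11, p11 + self.q]]
        return self.x[0]

    def update(self, z):
        # Standard measurement update with H = [1, 0].
        y = z - self.x[0]
        s = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [[(1.0 - k0) * p00, (1.0 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p11 if False else p11 - k1 * p01]]
        return self.x[0]

# A contour point drifting by 2 px per slice; the filter locks on within a few slices.
kf = Kalman1D(10.0)
estimate = 10.0
for z in [10.0 + 2.0 * i for i in range(1, 25)]:
    kf.predict()                 # predicted position seeds the next slice's contour
    estimate = kf.update(z)      # corrected with the segmented boundary position
```

    The `predict` output is what would seed the active contour on the next slice; `update` then fuses the segmentation result back into the state.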

  3. Broadband Near-Field Ground Motion Simulations in 3D Scattering Media

    NASA Astrophysics Data System (ADS)

    Imperatori, Walter; Mai, Martin

    2013-04-01

    The heterogeneous nature of Earth's crust is manifested in the scattering of propagating seismic waves. In recent years, different techniques have been developed to include this phenomenon in broadband ground-motion calculations, considering scattering either as a semi-stochastic or a purely stochastic process. In this study, we simulate broadband (0-10 Hz) ground motions with a 3D finite-difference wave-propagation solver, using several 3D media characterized by Von Karman correlation functions with different correlation lengths and standard deviation values. Our goal is to investigate scattering characteristics and their influence on the seismic wave-field at short and intermediate distances from the source in terms of ground-motion parameters. We also examine other relevant scattering-related phenomena, such as the loss of radiation pattern and the breakdown of directivity. We first simulate broadband ground motions for a point source characterized by a classic omega-squared spectrum model. Fault finiteness is then introduced by means of a Haskell-type source model presenting both sub-shear and super-shear rupture speed. Results indicate that scattering plays an important role in ground motion even at short distances from the source, where source effects are thought to be dominant. In particular, peak ground-motion parameters can be affected even at relatively low frequencies, implying that earthquake ground-motion simulations should include scattering also for PGV calculations. At the same time, we find a gradual loss of the source signature in the 2-5 Hz frequency range, together with a distortion of the Mach cones in the case of super-shear rupture. For more complex source models and the truly heterogeneous Earth, these effects may occur even at lower frequencies. Our simulations suggest that Von Karman correlation functions with correlation length between several hundred meters and a few kilometers, Hurst exponent around 0.3 and standard deviation in the 5-10% range

  4. The role of 3D and speckle tracking echocardiography in cardiac amyloidosis: a case report.

    PubMed

    Nucci, E M; Lisi, M; Cameli, M; Baldi, L; Puccetti, L; Mondillo, S; Favilli, R; Lunghetti, S

    2014-01-01

    Cardiac amyloidosis (CA) is a disorder characterized by amyloid fibril deposition in the cardiac interstitium; it results in a restrictive cardiomyopathy with heart failure (HF) and conduction abnormalities. The "gold standard" for diagnosis of CA is myocardial biopsy, but possible sampling errors and procedural risks limit its use. Magnetic resonance imaging (MRI) offers more information than traditional echocardiography and allows diagnosis of CA, but it is often impossible to perform. We report the case of a man with HF and symptomatic bradyarrhythmia that required an urgent pacemaker implant. Echocardiography was strongly suggestive of CA, but it was impossible to perform MRI to confirm this hypothesis because the patient had been implanted with a definitive pacemaker. Speckle Tracking Echocardiography (STE) and 3D echocardiography were therefore performed: STE can differentiate CA from other hypertrophic cardiomyopathies by a longitudinal strain value < 12%, and 3D echocardiography shows regional left ventricular dyssynchrony with a characteristic temporal pattern of dispersion of regional systolic volume change. On the basis of these results, an endomyocardial biopsy was finally performed, which confirmed the diagnosis of CA. This case underlines the importance of new, noninvasive techniques such as 3D echocardiography and STE for early diagnosis of CA, especially when MRI cannot be performed.

  5. 3D digital holographic interferometry as a tool to measure the tympanic membrane motion

    NASA Astrophysics Data System (ADS)

    del Socorro Hernández-Montes, M.; Muñoz Solis, S.; Mendoza Santoyo, F.

    2012-10-01

    Most of the current optical non-invasive methodologies used to characterize tympanic membrane (TM) motion generate data in the z direction only, i.e., they employ an out-of-plane sensitive configuration. In this paper, 3-D digital holographic interferometry (3-D DHI) is used to measure micrometer displacements of the TM surface. The proposed optical configuration provides information from three sensitivity vectors that separate the contributions of the x, y and z displacement components. In order to achieve high accuracy of the sensitivity vectors and to obtain a complete determination of the 3-D TM displacements, the surface contour is obtained by moving only two object illumination sources, chosen from any pair within the DHI optical setup. Results are presented from measurements corresponding to individual displacement maps for the three orthogonal displacement components x, y and z, combined with the TM shape, from an ex-vivo cat. These results will no doubt contribute to enhancing the understanding and determining the mechanical properties of this complex tissue.
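    The three-sensitivity-vector measurement reduces to a 3x3 linear solve: each measured phase change constrains the displacement d through Δφ_i = (2π/λ) s_i · d. A sketch with an invented sensitivity matrix, wavelength and displacement (not the paper's values):

```python
import math

WAVELENGTH = 532e-9  # assumed laser wavelength [m]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_displacement(S, dphi):
    """Solve (2*pi/lambda) S d = dphi for the displacement d, by Cramer's rule."""
    k = 2.0 * math.pi / WAVELENGTH
    A = [[k * s for s in row] for row in S]
    D = det3(A)
    d = []
    for col in range(3):
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][col] = dphi[r]
        d.append(det3(Ac) / D)
    return d

# Illustrative sensitivity matrix (rows = sensitivity vectors) and a known displacement.
S = [[0.2, 0.0, 1.8],
     [0.0, 0.2, 1.8],
     [-0.2, -0.2, 1.9]]
d_true = [1.0e-7, 2.0e-7, 5.0e-8]
k = 2.0 * math.pi / WAVELENGTH
dphi = [k * sum(S[i][j] * d_true[j] for j in range(3)) for i in range(3)]
d = solve_displacement(S, dphi)  # recovers d_true from the three phase maps
```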

  6. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  7. Ultrasound image-based respiratory motion tracking

    NASA Astrophysics Data System (ADS)

    Hwang, Youngkyoo; Kim, Jung-Bae; Kim, Yong Sun; Bang, Won-Chul; Kim, James D. K.; Kim, ChangYeong

    2012-03-01

    Respiratory motion tracking has been an issue for MR/CT imaging and for noninvasive surgery such as HIFU and radiotherapy treatment when these imaging or therapy technologies are applied to moving organs such as the liver, kidney or pancreas. Currently, bulky and burdensome devices are placed externally on the skin to estimate the respiratory motion of an organ. This estimates organ motion indirectly from skin motion, not directly from the organ itself. In this paper, we propose a system that directly measures the motion of the organ itself using only ultrasound images. Our system automatically selects a window in the image sequence, called the feature window, which can measure respiratory motion robustly even in noisy ultrasound images. The organ's displacement in each ultrasound image is calculated directly through the feature window. The method is very convenient to use since it exploits a conventional ultrasound probe. In this paper, we show that our proposed method can robustly extract the respiratory motion signal regardless of the reference frame. It is superior to other image-based methods such as Mutual Information (MI) or Correlation Coefficient (CC), which are sensitive to the choice of reference frame. Furthermore, our proposed method gives clear information on the phase of the respiratory cycle, such as whether the organ is in inspiration or expiration, since it calculates the organ's actual displacement rather than a similarity measure like MI or CC.
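    One simple way to turn a feature window into a displacement signal is template matching: slide the window along the new frame and take the position of peak normalized cross-correlation. This 1D sketch is an illustrative stand-in, not the paper's algorithm, and the signal values are made up:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da > 0 and db > 0 else 0.0

def track_displacement(template, line):
    """Position of the feature window along a scanline, found by peak NCC."""
    scores = [ncc(template, line[i:i + len(template)])
              for i in range(len(line) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

template = [0.0, 1.0, 3.0, 1.0, 0.0]        # feature window from the reference frame
frame = [0.0] * 7 + template + [0.0] * 5    # same pattern shifted by 7 samples
shift = track_displacement(template, frame)
```

    Repeating this per frame yields the displacement-versus-time curve whose sign directly distinguishes inspiration from expiration.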

  8. System matrix modelling of externally tracked motion

    PubMed Central

    Rahmim, Arman; Cheng, Ju-Chieh; Dinelle, Katie; Shilov, Mikhail; Segars, W. Paul; Rousset, Olivier G.; Tsui, Benjamin M.W.; Wong, Dean F.; Sossi, Vesna

    2010-01-01

    Background and aim In high-resolution emission tomography imaging, even small patient movements can considerably degrade image quality. The aim of this work was to develop a general approach to motion-corrected reconstruction of motion-contaminated data in the case of rigid motion (particularly brain imaging) which would be applicable to any PET scanner in the field, without specialized data-acquisition requirements. Methods Assuming the ability to externally track subject motion during scanning (e.g., using the Polaris camera), we proposed to incorporate the measured rigid motion information into the system matrix of the expectation maximization reconstruction algorithm. Furthermore, we noted and developed a framework to incorporate the additional effect of motion on modifying the attenuation factors. A new mathematical brain phantom was developed and used along with elaborate combined Simset/GATE simulations to compare the proposed framework with the cases of no motion correction. Results and conclusion Clear qualitative and quantitative improvements were observed when incorporating the proposed framework. The method is very practical to implement for any scanner in the field, not requiring any hardware modifications or access to the list-mode acquisition capability. PMID:18458606
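    The core operation described, remapping coordinates through the externally tracked rigid transform (and its inverse) when building the system matrix, can be sketched as follows; the rotation, translation and test point are illustrative values, not tracker data:

```python
import math

def rotation_z(theta):
    """3x3 rotation matrix about the scanner z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply_rigid(R, t, p):
    """Map a point p through the rigid transform x -> R x + t."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def invert_rigid(R, t):
    """Inverse transform: x -> R^T (x - t)."""
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return Rt, ti

R = rotation_z(math.pi / 2.0)   # tracked head rotation at this time frame (illustrative)
t = [1.0, 0.0, 0.0]             # tracked head translation (illustrative)
p_moved = apply_rigid(R, t, [1.0, 0.0, 0.0])   # where a voxel centre has moved to
Ri, ti = invert_rigid(R, t)
p_back = apply_rigid(Ri, ti, p_moved)          # mapped back to reference space
```

    In a motion-incorporated EM reconstruction, each time frame's forward projection would sample the image at coordinates transformed this way, which also shifts the attenuation factors seen along each line of response.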

  9. New method for detection of complex 3D fracture motion - Verification of an optical motion analysis system for biomechanical studies

    PubMed Central

    2012-01-01

    Background Fracture-healing depends on interfragmentary motion. For improved osteosynthesis and fracture-healing, the micromotion between fracture fragments is undergoing intensive research. The detection of 3D micromotions at the fracture gap still presents a challenge for conventional tactile measurement systems. Optical measurement systems may be easier to use than conventional systems but, as yet, cannot guarantee accuracy. The purpose of this study was to validate the optical measurement system PONTOS 5M for use in biomechanical research, including measurement of micromotion. Methods A standardized transverse fracture model was created to detect interfragmentary motions under axial loadings of up to 200 N. Measurements were performed using the optical measurement system and compared with a conventional high-accuracy tactile system consisting of 3 standard digital dial indicators (1 μm resolution; 5 μm error limit). Results We found that the deviation in mean motion detection between the systems was at most 5.3 μm, indicating that detection of micromotion was possible with the optical measurement system. Furthermore, the optical measurement system showed two considerable advantages: only with the optical system could interfragmentary motion be analyzed directly at the fracture gap, and the calibration of the optical system could be performed faster, more safely and more easily than that of the tactile system. Conclusion The PONTOS 5M optical measurement system appears to be a favorable alternative to previously used tactile measurement systems for biomechanical applications. Easy handling, combined with high accuracy for 3D detection of micromotions (≤ 5 μm), suggests the likelihood of high user acceptance. This study was performed in the context of the deployment of a new implant (dynamic locking screw; Synthes, Oberdorf, Switzerland). PMID:22405047

  10. Contribution of Visuospatial and Motion-Tracking to Invisible Motion.

    PubMed

    Battaglini, Luca; Casco, Clara

    2016-01-01

    People experience an object's motion even when it is occluded. We investigate the processing of invisible motion in three experiments. Observers saw a moving circle passing behind an invisible, irregular hendecagonal polygon and had to respond as quickly as possible when the target had "just reappeared" from behind the occluder. Without explicit cues allowing the end of each of the eight hidden trajectories to be predicted (length ranging between 4.7 and 5 deg), we found as expected, if visuospatial attention was involved, anticipation errors, providing that information on pre-occluder motion was available. This indicates that the observers, rather than simply responding when they saw the target, tended to anticipate its reappearance (Experiment 1). The new finding is that, with a fixation mark indicating the center of the invisible trajectory, a linear relationship between the physical and judged occlusion duration is found, but not without it (Experiment 2) or with a fixation mark varying in position from trial to trial (Experiment 3). We interpret the role of central fixation in the differences in distinguishing trajectories smaller than 0.3 deg, by suggesting that it reflects spatiotemporal computation and motion-tracking. These two mechanisms allow visual imagery to form of the point symmetrical to that of the disappearance, with respect to fixation, and then for the occluded moving target to be tracked up to this point. PMID:27683566

  13. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2009-03-19

    We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of Laser Confocal Scanning Microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 μm. It has been our goal to perform a total non-destructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-Ray Fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 μm/pixel, without the use of oil-based lenses. A full textural analysis on track No.82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No.128, No.129 and No.140). We present a method of removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information, while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analysis.
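    The deconvolution step can be illustrated with the classic Richardson-Lucy iteration, a standard choice for confocal stacks; the abstract does not name the specific algorithm used, so this is a generic 1D sketch with a made-up PSF and signal:

```python
def convolve(signal, kernel):
    """'Same'-size convolution with zero padding at the boundaries."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                s += signal[idx] * kv
        out.append(s)
    return out

def richardson_lucy(observed, psf, iterations=50):
    """Richardson-Lucy deconvolution: multiplicative updates that sharpen the
    estimate while keeping it non-negative."""
    est = [sum(observed) / len(observed)] * len(observed)  # flat initial guess
    psf_m = psf[::-1]                                      # mirrored PSF for correction
    for _ in range(iterations):
        blurred = convolve(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        corr = convolve(ratio, psf_m)
        est = [e * c for e, c in zip(est, corr)]
    return est

# A bright point on a faint background, blurred by a made-up 3-tap PSF.
true = [0.1, 0.1, 0.1, 4.0, 0.1, 0.1, 0.1]
psf = [0.25, 0.5, 0.25]
observed = convolve(true, psf)
est = richardson_lucy(observed, psf)
```

    In practice the 3D PSF is measured or computed (as the abstract's "computed point spread functions"), and the same iteration runs over image volumes rather than a 1D signal.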

  14. Feasibility Study for Ballet E-Learning: Automatic Composition System for Ballet "Enchainement" with Online 3D Motion Data Archive

    ERIC Educational Resources Information Center

    Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako

    2009-01-01

    This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…

  15. A soft biomimetic tongue: model reconstruction and motion tracking

    NASA Astrophysics Data System (ADS)

    Lu, Xuanming; Xu, Weiliang; Li, Xiaoning

    2016-04-01

A bioinspired robotic tongue actuated by a network of compressed air is proposed for the purpose of mimicking the movements of the human tongue. It can be applied in fields such as medical science and food engineering. The robotic tongue is made of two kinds of silicone rubber, Ecoflex 0030 and PDMS, with a shape simplified from the real human tongue. In order to characterize the robotic tongue, a series of experiments was carried out. Laser scanning was applied to reconstruct the static model of the robotic tongue under pressurization. After each scan, the surface of the robotic tongue was represented as dense points in the same 3D coordinate system, and the coordinates of each point were recorded. A motion tracking system (OptiTrack) was used to track and record the whole process of deformation dynamically during the loading and unloading phases. In the experiments, five types of deformation were achieved: roll-up, roll-down, elongation, groove and twist. Utilizing the discrete points generated by laser scanning, an accurate parameterized outline of the robotic tongue under different pressures was obtained, which helps demonstrate the static characteristics of the robotic tongue. The precise deformation process under a given pressure was acquired through the OptiTrack system, which contains a series of digital cameras, markers on the robotic tongue, and a set of hardware and software for data processing. By tracking and recording the deformation process under different pressures, the dynamic characteristics of the robotic tongue could be obtained.

  16. Tumor-tracking radiotherapy of moving targets; verification using 3D polymer gel, 2D ion-chamber array and biplanar diode array

    NASA Astrophysics Data System (ADS)

    Ceberg, Sofie; Falk, Marianne; Rosenschöld, Per Munck Af; Cattell, Herbert; Gustafsson, Helen; Keall, Paul; Korreman, Stine S.; Medin, Joakim; Nordström, Fredrik; Persson, Gitte; Sawant, Amit; Svatos, Michelle; Zimmerman, Jens; Bäck, Sven ÅJ

    2010-11-01

The aim of this study was to carry out a dosimetric verification of a dynamic multileaf collimator (DMLC)-based tumor-tracking delivery during respiratory-like motion. The advantage of tumor-tracking radiation delivery is the ability to allow a tighter margin around the target by continuously following and adapting the dose delivery to its motion. However, there are geometric and dosimetric uncertainties associated with beam delivery system constraints and output variations, and several investigations have to be completed before clinical integration of this tracking technique. Two types of delivery were investigated in this study: (I) a single beam perpendicular to a target with a one-dimensional motion parallel to the MLC moving direction, and (II) an intensity modulated arc delivery (RapidArc®) with a target motion diagonal to the MLC moving direction. The feasibility study (I) was made using a 2D ionisation chamber array and a true 3D polymer gel. The arc delivery (II) was verified using polymer gel and a biplanar diode array. Good agreement in absorbed dose was found between delivery to a static target and to a moving target with DMLC tracking using all three detector systems. However, due to the limited spatial resolution of the 2D array, a detailed comparison was not possible. The RapidArc® plan delivery was successfully verified using the biplanar diode array and true 3D polymer gel, and both detector systems could verify that the DMLC-based tumor-tracking delivery system has a very good ability to account for respiratory target motion.

  17. 3D cardiac motion reconstruction from CT data and tagged MRI.

    PubMed

    Wang, Xiaoxu; Mihalef, Viorel; Qian, Zhen; Voros, Szilard; Metaxas, Dimitris

    2012-01-01

In this paper we present a novel method for left ventricle (LV) endocardium motion reconstruction using high resolution CT data and tagged MRI. High resolution CT data provide anatomic details of the LV endocardial surface, such as the papillary muscles and trabeculae carneae. Tagged MRI provides better time resolution. The combination of these two imaging techniques gives us a better understanding of left ventricle motion. The high resolution CT images are segmented with the mean shift method to generate the LV endocardium mesh. A meshless deformable model built with the high resolution endocardium surface from CT data is fitted to the tagged MRI of the same phase. 3D deformation of the myocardium is computed with Lagrangian dynamics and local Laplacian deformation. The segmented inner surface of the left ventricle is compared with images of the heart's inner surface and shows high agreement. The papillary muscles are attached to the inner surface at their roots. The free wall of the left ventricle inner surface is covered with trabeculae carneae. The deformation of the heart wall and the papillary muscles in the first half of the cardiac cycle is presented. The motion reconstruction results closely match video of the live heart. PMID:23366825

  19. 3D hand motion trajectory prediction from EEG mu and beta bandpower.

    PubMed

    Korik, A; Sosnik, R; Siddique, N; Coyle, D

    2016-01-01

A motion trajectory prediction (MTP)-based brain-computer interface (BCI) aims to reconstruct the three-dimensional (3D) trajectory of upper limb movement using electroencephalography (EEG). The most common MTP BCI employs a time series of bandpass-filtered EEG potentials (referred to here as the potential time-series, PTS, model) for reconstructing the trajectory of a 3D limb movement using multiple linear regression. These studies report the best accuracy when a 0.5-2 Hz bandpass filter is applied to the EEG. In the present study, we show that the spatiotemporal power distributions of the theta (4-8 Hz), mu (8-12 Hz), and beta (12-28 Hz) bands are more robust for movement trajectory decoding when the standard PTS approach is replaced with time-varying bandpower values of a specified EEG band, i.e., with a bandpower time-series (BTS) model. A comprehensive analysis comprising three subjects performing pointing movements with the dominant right arm toward six targets is presented. Our results show that the BTS model produces significantly higher MTP accuracy (R~0.45) compared to the standard PTS model (R~0.2). In the case of the BTS model, the highest accuracy was achieved across the three subjects typically in the mu (8-12 Hz) and low-beta (12-18 Hz) bands. Additionally, we highlight a limitation of the commonly used PTS model and illustrate how this model may be suboptimal for decoding motion-trajectory-relevant information. Although our results, showing that the mu and beta bands are prominent for MTP, are not in line with other MTP studies, they are consistent with the extensive literature on classical multiclass sensorimotor rhythm-based BCI studies (classification of limbs as opposed to motion trajectory prediction), which report the best accuracy of imagined limb movement classification using power values of the mu and beta frequency bands. The methods proposed here provide a positive step toward noninvasive decoding of imagined 3D hand movements for movement-free BCIs.
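The bandpower time-series (BTS) idea above can be made concrete with a toy sketch. The snippet below is illustrative only: it builds a synthetic EEG channel whose mu-band amplitude tracks a slow "trajectory", extracts a sliding-window bandpower time-series, and fits it with linear regression; all signal parameters (sampling rate, frequencies, noise level) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                          # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)

# Synthetic stand-ins: one EEG channel whose mu-band (8-12 Hz)
# amplitude is modulated by a slow "hand position" signal.
position = np.sin(2 * np.pi * 0.3 * t)            # target trajectory
mu_amp = 1.0 + 0.5 * position
eeg = mu_amp * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

def bandpower_series(x, fs, lo, hi, win=0.5):
    """Sliding-window bandpower of x in [lo, hi] Hz (the 'BTS' feature)."""
    n = int(win * fs)
    hops = range(0, len(x) - n, n // 2)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    bp = np.array([(np.abs(np.fft.rfft(x[i:i + n]))[band] ** 2).mean()
                   for i in hops])
    centers = np.array([i + n // 2 for i in hops])
    return bp, centers

bp, centers = bandpower_series(eeg, fs, 8, 12)
target = position[centers]

# Linear regression (a single feature here) as in MTP decoding.
X = np.column_stack([np.ones_like(bp), bp])
w, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ w
r = np.corrcoef(pred, target)[0, 1]
```

Because the mu-band power follows the trajectory by construction, the fitted correlation r comes out high; the real study fits many channels, lags, and bands.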

  1. Breakup of Finite-Size Colloidal Aggregates in Turbulent Flow Investigated by Three-Dimensional (3D) Particle Tracking Velocimetry.

    PubMed

    Saha, Debashish; Babler, Matthaus U; Holzner, Markus; Soos, Miroslav; Lüthi, Beat; Liberzon, Alex; Kinzelbach, Wolfgang

    2016-01-12

Aggregates grown in mild shear flow are released, one at a time, into homogeneous isotropic turbulence, where their motion and intermittent breakup are recorded by three-dimensional particle tracking velocimetry (3D-PTV). The aggregates have an open structure with a fractal dimension of ∼2.2, and their size is 1.4 ± 0.4 mm, which is large compared to the Kolmogorov length scale (η = 0.15 mm). 3D-PTV of flow tracers allows for the simultaneous measurement of aggregate trajectories and the full velocity gradient tensor along their pathlines, which enables us to access the Lagrangian stress history of individual breakup events. From these data, we found no consistent pattern that relates breakup to the local flow properties at the point of breakup. Also, the correlation between the aggregate size and both shear stress and normal stress at the location of breakage is found to be weaker than the correlation between size and drag stress. The analysis suggests that the aggregates are mostly broken due to the accumulation of the drag stress over a time lag on the order of the Kolmogorov time scale. This finding is explained by the fact that the aggregates are large, which gives their motion inertia and increases the time for stress propagation inside the aggregate. Furthermore, it is found that the scaling of the largest fragment and the accumulated stress at breakup follows an earlier established power law, i.e., dfrag ∼ σ^(−0.6), obtained from laminar nozzle experiments. This indicates that, despite the large size and the different type of hydrodynamic stress, the microscopic mechanism causing breakup is consistent over a wide range of aggregate size and stress magnitude. PMID:26646289
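The reported scaling dfrag ∼ σ^(−0.6) is the kind of power law one recovers from (stress, fragment-size) pairs by least squares in log-log space. A hedged sketch on synthetic data (the prefactor and noise level are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (stress, fragment-size) pairs following d ~ sigma^-0.6
sigma = np.logspace(-1, 2, 50)                # accumulated stress (a.u.)
d = 2.0 * sigma ** -0.6 * np.exp(0.05 * rng.standard_normal(sigma.size))

# Power-law exponent via least squares in log-log space:
#   log d = log a + b * log sigma
A = np.column_stack([np.ones_like(sigma), np.log(sigma)])
(log_a, b), *_ = np.linalg.lstsq(A, np.log(d), rcond=None)
# b estimates the exponent, here close to -0.6
```

Fitting in log space weights all decades of stress equally, which is why it is the standard way to estimate such exponents.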

  3. Analysis of a vibrating interventional device to improve 3-D colormark tracking.

    PubMed

    Fronheiser, Matthew P; Smith, Stephen W

    2007-08-01

    Ultrasound guidance of interventional devices during minimally invasive surgical procedures has been investigated by many researchers. Previously, we extended the methods used by the Colormark tracking system to several interventional devices using a real-time, three-dimensional (3-D) ultrasound system. These results showed that we needed to improve the efficiency and reliability of the tracking. In this paper, we describe an analytical model to predict the transverse vibrations along the length of an atrial septal puncture needle to enable design improvements of the tracking system. We assume the needle can be modeled as a hollow bar with a circular cross section with a fixed proximal end and a free distal end that is suspended vertically to ignore gravity effects. The initial results show an ability to predict the natural nodes and antinodes along the needle using the characteristic equation for free vibrations. Simulations show that applying a forcing function to the device at a natural antinode yields an order of magnitude larger vibration than when driving the device at a node. Pulsed wave spectral Doppler data was acquired along the distal portion of the needle in a water tank using a 2-D matrix array transesophageal echocardiography probe. This data was compared to simulations of forced vibrations from the model. These initial results suggest that the model is a good first order approximation of the vibrating device in a water tank. It is our belief that knowing the location of the natural nodes and antinodes will improve our ability to drive the device to ensure the vibrations at the proximal end will reach the tip of the device, which in turn should improve our ability to track the device in vivo. PMID:17703675
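For the clamped-free (cantilever) boundary conditions assumed above, the standard characteristic equation for free transverse beam vibration is cos(βL)·cosh(βL) = −1; its roots βnL set the natural frequencies and hence the node/antinode layout along the needle. A minimal numerical sketch of that textbook result (not the authors' model code):

```python
import math

def char_eq(x):
    # Clamped-free (cantilever) beam: cos(x)*cosh(x) + 1 = 0
    return math.cos(x) * math.cosh(x) + 1.0

def bisect(f, a, b, tol=1e-10):
    """Simple bisection root finder on a sign-changing bracket [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# First three roots beta_n * L; brackets chosen near (2n - 1) * pi / 2.
roots = [bisect(char_eq, lo, hi)
         for lo, hi in [(1.5, 2.5), (4.0, 5.0), (7.0, 8.5)]]
# Tabulated values: ~1.8751, ~4.6941, ~7.8548; the natural frequency
# scales as omega_n ~ (beta_n * L)^2 for a given beam stiffness and mass.
```

The mode shapes built from these roots give the node and antinode positions at which, per the abstract, driving the device is respectively least and most effective.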

  5. A 3D Vector/Scalar Visualization and Particle Tracking Package

    SciTech Connect

    Freitag, Lori; Disz, Terry; Papka, Mike; Heath, Daniel; Diachin, Darin; Herzog, Jim; Ryan, and Bob

    1999-08-19

    BOILERMAKER is an interactive visualization system consisting of three components: a visualization component, a particle tracking component, and a communication layer. The software, to date, has been used primarily in the visualization of vector and scalar fields associated with computational fluid dynamics (CFD) models of flue gas flows in industrial boilers and incinerators. Users can interactively request and toggle static vector fields, dynamic streamlines, and flowing vector fields. In addition, the user can interactively place injector nozzles on boiler walls and visualize massed, evaporating sprays emanating from them. Some characteristics of the spray can be adjusted from within the visualization environment including spray shape and particle size. Also included with this release is software that supports 3D menu capabilities, scrollbars, communication and navigation.
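The particle-tracking component of such a system reduces to integrating dx/dt = v(x) through the flow field. As a hedged sketch (an analytic stand-in field, not BOILERMAKER's interpolated CFD data), a fourth-order Runge-Kutta tracer looks like:

```python
import numpy as np

def velocity(p):
    """Analytic stand-in for an interpolated CFD field: rigid rotation."""
    x, y = p
    return np.array([-y, x])

def rk4_step(p, dt):
    """One fourth-order Runge-Kutta step of dx/dt = v(x)."""
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * dt * k1)
    k3 = velocity(p + 0.5 * dt * k2)
    k4 = velocity(p + dt * k3)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Trace one particle; for this field the path is a circle of radius 1.
p = np.array([1.0, 0.0])
path = [p]
for _ in range(628):                  # ~one revolution with dt = 0.01
    p = rk4_step(p, 0.01)
    path.append(p)
```

Streamlines and injected-spray particles differ mainly in the field sampled (instantaneous vs. time-varying) and in added models for droplet size and evaporation.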

  6. Application of 3D hydrodynamic and particle tracking models for better environmental management of finfish culture

    NASA Astrophysics Data System (ADS)

    Moreno Navas, Juan; Telfer, Trevor C.; Ross, Lindsay G.

    2011-04-01

Hydrographic conditions, and particularly current speeds, have a strong influence on the management of fish cage culture. These hydrodynamic conditions can be used to predict particle movement within the water column, and the results used to optimise environmental conditions for effective site selection, setting of environmental quality standards, waste dispersion, and potential disease transfer. To this end, a 3D hydrodynamic model, MOHID, has been coupled to a particle tracking model to study the effects of mean current speed, quiescent water periods and bulk water circulation in Mulroy Bay, Co. Donegal, Ireland, an Irish fjard (shallow fjordic system) important to the aquaculture industry. A Lagrangian method simulated the instantaneous release of "particles" emulating discharge from finfish cages to show the behaviour of waste in terms of water circulation and water exchange. The 3D spatial models were used to identify areas of mixed and stratified water using a version of the Simpson-Hunter criteria, and to use this in conjunction with models of current flow for appropriate site selection for salmon aquaculture. The modelled outcomes for stratification were in good agreement with direct measurements of water column stratification based on observed density profiles. Calculations of the Simpson-Hunter tidal parameter indicated that most of Mulroy Bay was potentially stratified, with a well mixed region over the shallow channels where the water flows faster. The fjard was characterised by areas of both very low and high mean current speeds, with some areas having long periods of quiescent water. The residual current and the particle tracking animations created through the models revealed an anticlockwise eddy that may influence waste dispersion and the potential for disease transfer among salmon cages, and which ensures that the retention time of waste substances from cages is extended. The hydrodynamic model results were incorporated into the ArcView™ GIS
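The Simpson-Hunter criterion mentioned above classifies a water column by the parameter log10(h/u³), with low values indicating tidal mixing and high values stratification. A minimal sketch; the threshold below is illustrative only, since published transition values depend on the units and drag coefficient used:

```python
import math

def simpson_hunter(depth_m, tidal_speed_ms):
    """log10(h / u^3): low values -> tidally mixed, high -> stratified."""
    return math.log10(depth_m / tidal_speed_ms ** 3)

# Illustrative threshold (an assumption for this sketch); studies place
# the mixed/stratified transition at different values.
THRESHOLD = 2.7

def classify(depth_m, tidal_speed_ms):
    s = simpson_hunter(depth_m, tidal_speed_ms)
    return "stratified" if s > THRESHOLD else "mixed"

# Shallow, fast-flowing channel vs. deep, quiescent basin:
channel = classify(10.0, 1.0)     # log10(10) = 1     -> mixed
basin = classify(30.0, 0.05)      # log10(240000) ~ 5.4 -> stratified
```

The u³ term reflects tidal stirring power, which is why fast shallow channels come out mixed even at modest depths.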

  7. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and in conjunction with other modalities like CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found to be useful in real clinical settings, it is difficult to augment their functionality and integrate them into versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (i.e. dual view visualization, registration, real-time tracking, segmentation, etc.) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.

  8. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

We studied defragmented image based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of the PB angular orientation with respect to the display panel. This was critical both for image color balancing and for minimizing image resolution mismatch between the horizontal and vertical directions. For evaluating the uniformity of image brightness, we applied optical ray tracing simulations. The simulations took the effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity around the sweet spots in the viewing zones. However, this was contradicted by the experimental results. We offer a quantitative treatment of the illuminance uniformity of view images to estimate the misalignment of PB orientation, which could account for the brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of PB orientation, due to practical restrictions on adjustment accuracy, can induce substantial non-uniformity in the brightness of view images. We find that image brightness non-uniformity depends critically on the misalignment of the PB angular orientation, for example, by as little as ≤ 0.01° in our system. This reveals that reducing the misalignment of the PB angular orientation from the order of 10⁻² to 10⁻³ degrees can greatly improve the brightness uniformity.

  9. Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish

    PubMed Central

    Maaswinkel, Hans; Zhu, Liqun; Weng, Wei

    2013-01-01

    Like many aquatic animals, zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about vertical and horizontal distribution of the zebrafish, about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle) and it provides the data necessary to calculate parameters for social cohesions when testing shoals. PMID:24336189
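The calibration step above corrects for refraction at the water-air interface, which makes submerged fish appear shallower than they are. A hedged sketch of the underlying optics (the paraxial approximation and Snell's law, not the system's actual calibration procedure):

```python
import math

N_WATER = 1.33   # refractive index of water relative to air

def true_depth(apparent_depth):
    """Paraxial (near-vertical viewing) correction: objects in water
    appear shallower by a factor of n when viewed from air."""
    return apparent_depth * N_WATER

def refract_angle(theta_air_deg):
    """Snell's law: ray angle inside water (degrees from the surface
    normal) for a ray arriving from air at theta_air_deg."""
    s = math.sin(math.radians(theta_air_deg)) / N_WATER
    return math.degrees(math.asin(s))

# A fish that appears 15 cm below the surface is actually ~20 cm deep,
# and a ray entering at 30 degrees bends toward the normal in water.
d = true_depth(15.0)
a = refract_angle(30.0)
```

For oblique camera and mirror views the correction is angle-dependent, which is why a full calibration procedure is needed rather than a single scale factor.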

  10. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Lan, Tian; Ni, Guoqiang

    2013-07-01

A motion error model of 3D coordinates is established and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence on the positioning errors derived from laser point cloud spacing is small. In the model, the positioning errors obey simple harmonic vibration, with an amplitude envelope that gradually decreases as the vibration frequency increases. When the number of vibration periods is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the plane error, and within the plane the error in the scanning direction is less than the error in the flight direction. These conclusions are verified through analysis of flight test data.

  11. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

Unmanned systems have been cited as one of the future enablers of all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated: from providing increased and potentially stealthy surveillance and removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (the original data format) and then constructs a 3D model (the new data medium) using structure-from-motion. The generated 3D model provides warfighters additional situational awareness, and tactical and strategic advantages, that the original video stream lacks. We present our results using simulated unmanned vehicle data, with Google Earth™ providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.
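At the core of structure-from-motion is triangulating 3D points from matched 2D observations in multiple views. As a hedged illustration of that single step (with toy calibrated cameras, not the pipeline used above), linear (DLT) triangulation can be sketched as:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel
    observations of the same point. Returns the estimated 3D point.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the point
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity view and one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, -0.2, 4.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

A full SfM pipeline also estimates the camera poses themselves from feature matches and refines everything with bundle adjustment; triangulation is the step that turns video into a point cloud.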

  12. Velocity and Density Models Incorporating the Cascadia Subduction Zone for 3D Earthquake Ground Motion Simulations

    USGS Publications Warehouse

    Stephenson, William J.

    2007-01-01

INTRODUCTION In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2°N to 50°N latitude, and from about -122°W to -129°W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper-mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.

  13. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    SciTech Connect

    Brix, Lau; Ringgaard, Steffen; Sørensen, Thomas Sangild; Poulsen, Per Rugaard

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. 
Results: Axial, sagittal
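Step (3) of the method above selects, for each real-time 2D frame, the template and in-plane position that maximize the normalized cross-correlation coefficient. A minimal sketch of that matching step in Python (an illustration with hypothetical array inputs, not the authors' implementation, which additionally searches across a whole template library):

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation coefficient between two equal-shape arrays."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_template(image, template):
    """Exhaustive search: return (row, col, score) of the best-matching position."""
    th, tw = template.shape
    best = (0, 0, -1.0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best[2]:
                best = (r, c, s)
    return best
```

Because each template corresponds to a known through-plane offset in the 3D scan, repeating this search over the template library turns the winning (template, row, col) triple into a full 3D position estimate.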

  14. The use of an MEG device as 3D digitizer and motion monitoring system.

    PubMed

    de Munck, J C; Verbunt, J P; Van't Ent, D; Van Dijk, B W

    2001-08-01

An algorithm is described that localizes a set of simultaneously activated coils using MEG detectors. These coil positions are used for continuous or intermittent head position registration during long MEG sessions, to coregister MR and MEG data, and to localize EEG electrodes attached to the scalp when EEG and MEG are recorded simultaneously. The algorithm is based on a mathematical model in which the coils are described as stationary magnetic dipoles with known source time functions. This knowledge makes it possible to detect and remove bad channels automatically. It is also assumed that the source time functions are orthogonal; therefore, the localization problem splits into independent localization problems for each coil. The method is validated in a phantom experiment, where the relative coil positions were known. From this experiment it is found that the average error is 0.25 cm. An error of 0.23 cm was found in an experiment where 64 electrode positions were measured four times independently. Examples of applications of the method are presented. Our method eliminates the use of an external 3D digitizer and maps the MEG directly onto other modalities. This is not only a practical advantage, but it also reduces the gross registration error. Furthermore, head motions can be monitored and MEG data can be corrected for these motions.

  15. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    SciTech Connect

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. Next, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can (1) robustly detect clouds and track layers, and (2) extract significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistence model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  16. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. Next, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can (1) robustly detect clouds and track layers, and (2) extract significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistence model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  17. Numerical Benchmark of 3D Ground Motion Simulation in the Alpine valley of Grenoble, France.

    NASA Astrophysics Data System (ADS)

    Tsuno, S.; Chaljub, E.; Cornou, C.; Bard, P.

    2006-12-01

Thanks to the use of sophisticated numerical methods and access to increasing computational resources, predictions of strong ground motion are becoming more and more realistic and need to be carefully compared. We report our effort to benchmark numerical methods of ground motion simulation in the case of the valley of Grenoble in the French Alps. The Grenoble valley is typical of a moderate-seismicity area where strong site effects occur. The benchmark consisted of computing the seismic response of the 'Y'-shaped Grenoble valley to (i) two local earthquakes (Ml<=3) for which recordings were available; and (ii) two local hypothetical events (Mw=6) occurring on the so-called Belledonne Border Fault (BBF) [1]. A free-style prediction was also proposed, in which participants were allowed to vary the source and/or the model parameters and were asked to provide the resulting uncertainty in their estimation of ground motion. We received a total of 18 contributions from 14 different groups; 7 of these use 3D methods, among which 3 could handle surface topography; the other half comprises predictions based upon 1D (2 contributions), 2D (4 contributions), and empirical Green's function (EGF) (3 contributions) methods. The maximum frequency analysed ranged from 2.5 Hz for 3D calculations to 40 Hz for EGF predictions. We present a detailed comparison of the different predictions using raw indicators (e.g., peak values of ground velocity and acceleration, Fourier spectra, site-over-reference spectral ratios, ...) as well as sophisticated misfit criteria based upon previous works [2,3]. We further discuss the variability in estimating the importance of particular effects such as non-linear rheology or surface topography. References: [1] Thouvenot F. et al., The Belledonne Border Fault: identification of an active seismic strike-slip fault in the western Alps, Geophys. J. Int., 155 (1), p. 174-192, 2003. [2] Anderson J., Quantitative measure of the goodness-of-fit of

  18. Tracking Gravity Probe B gyroscope polhode motion

    NASA Technical Reports Server (NTRS)

    Keiser, George M.; Parkinson, Bradford W.; Cohen, Clark E.

    1990-01-01

The superconducting Gravity Probe B spacecraft is being developed to measure two untested predictions of Einstein's theory of general relativity by using orbiting gyroscopes; each gyroscope rotor possesses an intrinsic magnetic field which rotates with the rotor and is fixed with respect to the rotor body frame. In this paper, the path of the rotor spin axis is tracked using this trapped magnetic flux as a reference. Both the rotor motion and the magnetic field shape are estimated simultaneously, employing the higher-order components of the magnetic field shape.

  19. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

Background: The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings: A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie viewing) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score>15) comprised 54.8% of the sample after the 3D movie, compared to 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie and of a history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie, and show time. Conclusions: Seeing 3D movies can increase ratings of nausea, oculomotor, and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies that include examination of clinical signs in viewers are needed to reach conclusive evidence on the effects of 3D viewing on spectators. PMID:23418530

  20. Tracking Paths of Ocean Source Ambient Seismic Noise into, and through, the 3D Earth

    NASA Astrophysics Data System (ADS)

    Reading, A. M.; Gal, M.; Morse, P. E.; Koper, K. D.; Hemer, M. A.; Rawlinson, N.; Salmon, M.; De Kool, M.; Kennett, B. L. N.

    2014-12-01

Array measurements of seismic noise (microseisms) are emerging as independent observables that inform our knowledge of ocean storms. Using an improved implementation of IAS Capon analysis, we can infer the location and amplitude of multiple sources of seismic noise over multiple decades. For the Southern Ocean, we can use seismic records to assist in identifying shifting patterns of ocean storms. We can thus investigate topics such as the disparity between wave-height trends identified using calibrated satellite records, which appear to be increasing over multiple decades, and wave heights measured directly using a wave-rider buoy, which do not show a significant change over the same time frame. The passage of wave energy from the water column to the solid Earth, and through the 3D Earth to the seismic array, must be tracked effectively. In this contribution, we focus on understanding the passage of seismic noise through the 3D Earth. In particular, we investigate path deviations from 1D Earth models for body-wave sources at a variety of locations in the Southern Ocean recorded at Australian seismic arrays. We also investigate path deviations of surface waves travelling across the Australian continent, using the AusREM Earth model. We further appraise other factors affecting the interpretation of slowness, backazimuth and amplitude from seismic array records. These include the effect of the bathymetry-related transfer function controlling energy entering the solid Earth from the water column, and the impact of local geology at the site of the seismic array. For a season of storms in the southern hemisphere winter, we simulate the path of energy from a representative range of locations to Australian seismic arrays. We employ a wavefront tracking technique, fast marching, that can support heterogeneous structure and the consideration of multiple arrivals.
We find that storms in some locations are subject to a much larger deviation from the expected path of energy

  1. 3D Modelling of Inaccessible Areas using UAV-based Aerial Photography and Structure from Motion

    NASA Astrophysics Data System (ADS)

    Obanawa, Hiroyuki; Hayakawa, Yuichi; Gomez, Christopher

    2014-05-01

In hardly accessible areas, the collection of 3D point clouds using TLS (Terrestrial Laser Scanner) can be very challenging, while an airborne equivalent would not give a correct account of subvertical features and concave geometries like caves. To solve this problem, the authors have experimented with an aerial-photography-based SfM (Structure from Motion) technique on a 'peninsular rock' surrounded on three sides by the sea on the Pacific coast of eastern Japan. The research was carried out using a UAS (Unmanned Aerial System) combining a commercial small UAV (Unmanned Aerial Vehicle) with a compact camera. The UAV is a DJI PHANTOM: it has four rotors (quadcopter), a weight of 1000 g, a payload of 400 g, and a maximum flight time of 15 minutes. The camera is a GoPro 'HERO3 Black Edition': resolution 12 million pixels; weight 74 g; and 0.5 s interval shooting. The 3D model has been constructed by digital photogrammetry using commercial SfM software, Agisoft PhotoScan Professional®, which can generate sparse and dense point clouds, from which polygonal models and orthophotographs can be calculated. Using the flight log and/or GCPs (Ground Control Points), the software can generate a digital surface model. As a result, high-resolution aerial orthophotographs and a 3D model were obtained. The results have shown that it was possible to survey the sea cliff and the wave-cut bench, which are unobservable from the land side. In detail, we could observe the complexity of the sea cliff, which is nearly vertical as a whole while slightly overhanging its thinner base. The wave-cut bench is nearly flat and develops extensively at the base of the cliff. Although there is some evidence of small rockfalls in the upper part of the cliff, there is no evidence of very recent activity, because no fallen rock lies on the wave-cut bench. This system has several merits: firstly, lower cost than existing measuring methods such as manned-flight survey and aerial laser

  2. Methods for abdominal respiratory motion tracking.

    PubMed

    Spinczyk, Dominik; Karwan, Adam; Copik, Marcin

    2014-01-01

Non-invasive surface registration methods have been developed to register and track breathing motions in a patient's abdomen and thorax. We evaluated several different registration methods, including marker tracking using a stereo camera, chessboard image projection, and abdominal point clouds. Our point cloud approach was based on a time-of-flight (ToF) sensor that tracked the abdominal surface. We tested different respiratory phases using additional markers as landmarks for an extension of the non-rigid Iterative Closest Point (ICP) algorithm that improves the matching of irregular meshes. Four variants for retrieving the correspondence data were implemented and compared. Our evaluation involved 9 healthy individuals (3 females and 6 males) with point clouds captured in opposite breathing phases (i.e., inhalation and exhalation). We measured three factors: surface distance, correspondence distance, and marker error. To evaluate different methods for computing the correspondence measurements, we defined the number of correspondences for every target point and the average correspondence assignment error of the points nearest the markers.
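The abstract does not detail the ICP extension itself, but the rigid core of one ICP iteration, brute-force nearest-neighbor correspondence followed by a least-squares rigid alignment (the standard Kabsch SVD solution), can be sketched as follows; the paper's non-rigid extension and marker landmarks are not modeled here:

```python
import numpy as np

def nearest_correspondences(src, tgt):
    """For each source point, index of the nearest target point (brute force)."""
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def best_rigid_transform(src, tgt):
    """Least-squares rotation R and translation t mapping src onto tgt (Kabsch)."""
    cs, ct = src.mean(0), tgt.mean(0)
    H = (src - cs).T @ (tgt - ct)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, ct - R @ cs
```

A full ICP loop alternates these two steps, re-pairing points and re-solving the transform until the alignment stops improving.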

  3. Methods for abdominal respiratory motion tracking

    PubMed Central

    Karwan, Adam; Copik, Marcin

    2014-01-01

    Non-invasive surface registration methods have been developed to register and track breathing motions in a patient’s abdomen and thorax. We evaluated several different registration methods, including marker tracking using a stereo camera, chessboard image projection, and abdominal point clouds. Our point cloud approach was based on a time-of-flight (ToF) sensor that tracked the abdominal surface. We tested different respiratory phases using additional markers as landmarks for the extension of the non-rigid Iterative Closest Point (ICP) algorithm to improve the matching of irregular meshes. Four variants for retrieving the correspondence data were implemented and compared. Our evaluation involved 9 healthy individuals (3 females and 6 males) with point clouds captured in opposite breathing phases (i.e., inhalation and exhalation). We measured three factors: surface distance, correspondence distance, and marker error. To evaluate different methods for computing the correspondence measurements, we defined the number of correspondences for every target point and the average correspondence assignment error of the points nearest the markers. PMID:24720494

  4. GPU based, real-time tracking of perturbed, 3D plasma equilibria

    NASA Astrophysics Data System (ADS)

    Rath, N.; Bialek, J.; Byrne, P. J.; Debono, B.; Levesque, J. P.; Li, B.; Mauel, M. E.; Maurer, D. A.; Navratil, G. A.; Shiraki, D.

    2011-10-01

The new high-resolution magnetic diagnostics and actuators of the HBT-EP tokamak are used to evaluate a novel approach to long-wavelength MHD mode control: instead of controlling the amplitude of specific preselected perturbations from axisymmetry, the control system will attempt to control the 3D shape of the plasma. This approach frees the experimenter from having to know the approximate shape of the expected instabilities ahead of time, and lifts the restriction that the control reference be the perfectly axisymmetric state. Instead, the plasma can be maintained in an arbitrary perturbed equilibrium, which may be selected for beneficial plasma properties. The increased computational demands on the control system are handled by a graphics processing unit (GPU) with 448 computing cores that interfaces directly to digitizers and analog output boards. The control system is designed to handle 96 inputs and 64 outputs with cycle times below 5 microseconds and I/O latencies below 10 microseconds. We report on the technical and theoretical design of the control system and give experimental results from testing the system's observer module, which tracks the perturbed plasma equilibrium in real time. This work was supported by US-DOE grant DE-FG02-86ER53222.

  5. Three-dimensional motion tracking for high-resolution optical microscopy, in vivo.

    PubMed

    Bakalar, M; Schroeder, J L; Pursley, R; Pohida, T J; Glancy, B; Taylor, J; Chess, D; Kellman, P; Xue, H; Balaban, R S

    2012-06-01

When conducting optical imaging experiments in vivo, the signal-to-noise ratio and the effective spatial and temporal resolution are fundamentally limited by physiological motion of the tissue. A three-dimensional (3D) motion tracking scheme, using a multiphoton excitation microscope with a resonant galvanometer (512 × 512 pixels at 33 frames s⁻¹), is described to overcome physiological motion in vivo. The use of commercially available graphics processing units permitted the rapid 3D cross-correlation of sequential volumes to detect displacements and adjust tissue position to track motions in near real time. Motion phantom tests maintained micron resolution with displacement velocities of up to 200 μm min⁻¹, well within the drift observed in many biological tissues under physiologically relevant conditions. In vivo experiments on mouse skeletal muscle, using the capillary vasculature with luminal dye as a displacement reference, revealed an effective and robust method of tracking tissue motion to enable (1) signal averaging over time without compromising resolution, and (2) tracking of cellular regions during a physiological perturbation.
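The displacement detection between sequential volumes described above rests on 3D cross-correlation, which is typically computed via FFTs for speed. A minimal sketch of that idea in Python (an illustration of the generic FFT cross-correlation peak search, not the authors' GPU implementation):

```python
import numpy as np

def volume_shift(ref, moved):
    """Estimate the integer 3D displacement that maps `ref` onto `moved`
    from the peak of their circular cross-correlation, computed via FFT."""
    cc = np.fft.ifftn(np.conj(np.fft.fftn(ref)) * np.fft.fftn(moved)).real
    peak = np.unravel_index(int(cc.argmax()), cc.shape)
    # peaks past the midpoint of an axis correspond to negative shifts
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, cc.shape))
```

In a tracking loop, the recovered shift would be fed back to the stage or scan offsets so the imaged volume follows the tissue.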

  7. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions

    NASA Astrophysics Data System (ADS)

    Wiersma, R. D.; Riaz, N.; Dieterich, Sonja; Suh, Yelin; Xing, L.

    2009-01-01

The integration of onboard kV imaging together with a MV electronic portal imaging device (EPID) on linear accelerators (LINAC) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated with simultaneous imaging by both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of the target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data from five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have ≤1 mm root mean squared error (RMSE) in all three spatial directions.
In addition to increasing the robustness of
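The abstract describes the moving correlation model only at a high level. One simple way such a model could work, offered here purely as a hedged sketch (the function and its linear form are this note's assumptions, not the paper's specification), is a sliding-window linear fit between the coordinate still being imaged and the coordinate on the interrupted imager:

```python
import numpy as np

def predict_interrupted(seen, hidden, new_seen, window=20):
    """Fit hidden ~ a*seen + b over the most recent `window` paired samples,
    then predict the marker coordinate on the interrupted imager from the
    newest reading on the imager that is still active."""
    s = np.asarray(seen[-window:], dtype=float)
    h = np.asarray(hidden[-window:], dtype=float)
    a, b = np.polyfit(s, h, 1)      # least-squares line through the window
    return a * new_seen + b
```

Refitting the line on every frame keeps the model "moving", so it adapts as the breathing pattern drifts.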

  8. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions.

    PubMed

    Wiersma, R D; Riaz, N; Dieterich, Sonja; Suh, Yelin; Xing, L

    2009-01-01

The integration of onboard kV imaging together with a MV electronic portal imaging device (EPID) on linear accelerators (LINAC) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated with simultaneous imaging by both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of the target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data from five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have ≤1 mm root mean squared error (RMSE) in all three spatial directions.
In addition to increasing the robustness

  9. A Motion Tracking and Sensor Fusion Module for Medical Simulation.

    PubMed

    Shen, Yunhe; Wu, Fan; Tseng, Kuo-Shih; Ye, Ding; Raymond, John; Konety, Badrinath; Sweet, Robert

    2016-01-01

    Here we introduce a motion tracking or navigation module for medical simulation systems. Our main contribution is a sensor fusion method for proximity or distance sensors integrated with inertial measurement unit (IMU). Since IMU rotation tracking has been widely studied, we focus on the position or trajectory tracking of the instrument moving freely within a given boundary. In our experiments, we have found that this module reliably tracks instrument motion. PMID:27046606
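The abstract does not give the fusion algorithm. As one common way to fuse an integrated IMU signal with intermittent distance readings, offered as a sketch under that assumption (the paper's actual filter may differ), a minimal 1D Kalman filter looks like this:

```python
import numpy as np

def fuse(accel, meas, dt, q=1e-3, r=1e-2):
    """Minimal 1D Kalman filter: predict position/velocity by integrating an
    IMU acceleration input, correct with distance-sensor readings.
    `meas[k]` may be None when no range reading is available at step k."""
    x = np.zeros(2)                        # state: [position, velocity]
    P = np.eye(2)                          # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics
    B = np.array([0.5 * dt ** 2, dt])      # acceleration input matrix
    H = np.array([[1.0, 0.0]])             # we observe position only
    positions = []
    for a, z in zip(accel, meas):
        x = F @ x + B * a                  # predict with the IMU input
        P = F @ P @ F.T + q * np.eye(2)
        if z is not None:                  # correct with the range sensor
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        positions.append(float(x[0]))
    return positions
```

The same structure extends to 3D by stacking one position/velocity pair per axis, with the IMU rotation estimate used to resolve the acceleration into the fixed frame.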

  10. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    SciTech Connect

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk

    2015-03-15

Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second-generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of under 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators.

  11. Proton spin tracking with symplectic integration of orbit motion

    SciTech Connect

    Luo, Y.; Dutheil, Y.; Huang, H.; Meot, F.; Ranjbar, V.

    2015-05-03

Symplectic integration has been adopted for orbital motion tracking in the code SimTrack. SimTrack has been used extensively for dynamic aperture calculation with beam-beam interaction for the Relativistic Heavy Ion Collider (RHIC). Recently, proton spin tracking has been implemented on top of the symplectic orbital motion in this code. In this article, we explain the implementation of spin motion based on the Thomas-BMT equation, and the benchmarking against other spin tracking codes currently used for RHIC. Examples of calculating the spin closed orbit and spin tunes are also presented.
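For reference, one common magnetic-field-only form of the Thomas-BMT equation (sign and decomposition conventions vary between references, so this is a representative statement rather than the exact form used in SimTrack) is:

```latex
\frac{d\vec{S}}{dt} \;=\; \frac{e}{\gamma m}\,\vec{S}\times
\Big[\,(1+G\gamma)\,\vec{B}_{\perp} \;+\; (1+G)\,\vec{B}_{\parallel}\Big]
```

Here \(\vec{S}\) is the spin in the particle rest frame, \(G\) is the anomalous magnetic moment of the proton, \(\gamma\) is the Lorentz factor, and \(\vec{B}_{\perp}\), \(\vec{B}_{\parallel}\) are the laboratory field components perpendicular and parallel to the particle velocity.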

  12. Tracking magnetogram proper motions by multiscale regularization

    NASA Technical Reports Server (NTRS)

    Jones, Harrison P.

    1995-01-01

Long uninterrupted sequences of solar magnetograms from the Global Oscillation Network Group (GONG) network and from the Solar and Heliospheric Observatory (SOHO) satellite will provide the opportunity to study the proper motions of magnetic features. The possible use of multiscale regularization, a scale-recursive estimation technique which begins with a prior model of how state variables and their statistical properties propagate over scale, is examined. Short magnetogram sequences are analyzed with the multiscale regularization algorithm as applied to optical flow. This algorithm is found to be efficient; it provides results for all the spatial scales spanned by the data and provides error estimates for the solutions. It is found that the algorithm is less sensitive to evolutionary changes than correlation tracking.

  13. Mapping 3D Strains with Ultrasound Speckle Tracking: Method Validation and Initial Results in Porcine Scleral Inflation.

    PubMed

    Cruz Perez, Benjamin; Pavlatos, Elias; Morris, Hugh J; Chen, Hong; Pan, Xueliang; Hart, Richard T; Liu, Jun

    2016-07-01

    This study aimed to develop and validate a high frequency ultrasound method for measuring distributive, 3D strains in the sclera during elevations of intraocular pressure. A 3D cross-correlation based speckle-tracking algorithm was implemented to compute the 3D displacement vector and strain tensor at each tracking point. Simulated ultrasound radiofrequency data from a sclera-like structure at undeformed and deformed states with known strains were used to evaluate the accuracy and signal-to-noise ratio (SNR) of strain estimation. An experimental high frequency ultrasound (55 MHz) system was built to acquire 3D scans of porcine eyes inflated from 15 to 17 and then 19 mmHg. Simulations confirmed good strain estimation accuracy and SNR (e.g., the axial strains had less than 4.5% error with SNRs greater than 16.5 for strains from 0.005 to 0.05). Experimental data in porcine eyes showed increasing tensile, compressive, and shear strains in the posterior sclera during inflation, with a volume ratio close to one suggesting near-incompressibility. This study established the feasibility of using high frequency ultrasound speckle tracking for measuring 3D tissue strains and its potential to characterize physiological deformations in the posterior eye. PMID:26563101
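Once the speckle-tracking step has produced a displacement vector at each grid point, the strain tensor follows from spatial derivatives of the displacement field. A minimal sketch of that final step (the standard small-strain tensor computed with finite differences; the paper's correlation-based displacement estimation itself is not reproduced here):

```python
import numpy as np

def strain_tensor(u, spacing=1.0):
    """Small-strain tensor eps_ij = (du_i/dx_j + du_j/dx_i) / 2 from a
    sampled 3D displacement field u of shape (3, nx, ny, nz)."""
    grads = [np.gradient(u[i], spacing) for i in range(3)]  # grads[i][j] = du_i/dx_j
    eps = np.empty((3, 3) + u.shape[1:])
    for i in range(3):
        for j in range(3):
            eps[i, j] = 0.5 * (grads[i][j] + grads[j][i])
    return eps
```

The diagonal entries give tensile/compressive strains, the off-diagonal entries shear strains, and the trace approximates the local volume change used to check near-incompressibility.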

  14. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 × 15 mm achieved diffraction-limited imaging over a lateral tracking range of ±2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
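Pupil centroid detection of the kind described above is often implemented as a simple threshold-and-centroid operation on the oculography frame. A minimal sketch in Python (a generic illustration with an assumed dark-pupil image; the authors' hardware pipeline is not specified in the abstract):

```python
import numpy as np

def pupil_centroid(frame, threshold=50):
    """Segment the dark pupil by thresholding, then return the mean
    (row, col) of the selected pixels, or None if nothing is below
    the threshold."""
    rows, cols = np.nonzero(frame < threshold)
    if rows.size == 0:
        return None
    return float(rows.mean()), float(cols.mean())
```

The frame-to-frame change in this centroid gives the lateral eye displacement that the OCT scanner offsets would be updated to follow.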

  16. Tracking of cracks in bridges using GPR: a 3D approach

    NASA Astrophysics Data System (ADS)

    Benedetto, A.

    2012-04-01

    Corrosion associated with reinforcing bars is the most significant contributor to bridge deficiencies. The corrosion is usually caused by moisture and chloride ion exposure. In particular, corrosion produces FeO, Fe2O3, Fe3O4, and other oxides along the reinforcement bars. The reinforcing bars are attacked by corrosion and yield expansive corrosion products. These oxidation products occupy a larger volume than the original intact steel, and the internal expansive stresses lead to cracking and debonding. Conventional inspection methods exist for detecting reinforcing bar corrosion, but they can be invasive and destructive, are often laborious, require lane closures, and make any quantification of corrosion difficult or unreliable. For these reasons, bridge engineers increasingly prefer the Ground Penetrating Radar (GPR) technique. In this work a novel numerical approach for three-dimensional tracking and mapping of cracks in bridges is proposed. The work starts from some interesting results based on the use of the 3D imaging technique to improve the potential of GPR to detect voids, cracks, or buried objects. The numerical approach has been tested on data acquired on several bridges using a pulse GPR system specifically designed for bridge deck and pavement inspection, called RIS Hi Bright. The equipment integrates two arrays of Ultra Wide Band ground-coupled antennas with a main working frequency of 2 GHz. The two arrays within the RIS Hi Bright use antennas arranged with different polarizations: one array includes sensors with parallel polarization with respect to the scanning direction (VV array), the other has sensors in orthogonal polarization (HH array). Overall the system collects 16 profiles within a single scan (8 HH + 8 VV). The cracks, often associated with increased moisture and higher values of the dielectric constant, produce a non-negligible increase in signal amplitude. Following this, the algorithm

  17. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion models derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1) the tumor localization errors and the intensity differences in the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2) the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
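    The modeling step the abstract describes (PCA on deformable-registration displacement vector fields) can be sketched as follows; the sizes and the simulated breathing-like mode are hypothetical stand-ins for real 4DCBCT-derived DVFs:

```python
import numpy as np

rng = np.random.default_rng(1)
n_phases, n_voxels = 10, 500          # toy sizes, not clinical
# Simulate per-phase DVFs dominated by one breathing-like mode plus noise
mode = rng.standard_normal(3 * n_voxels)          # (dx, dy, dz) per voxel, flattened
amplitude = np.sin(np.linspace(0, 2 * np.pi, n_phases, endpoint=False))
dvfs = np.outer(amplitude, mode) + 0.01 * rng.standard_normal((n_phases, 3 * n_voxels))

mean_dvf = dvfs.mean(axis=0)
# PCA via SVD of the mean-centered DVF matrix
U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
explained = S**2 / np.sum(S**2)
# Any motion state is approximated as mean_dvf + sum_k w_k * Vt[k];
# 3D fluoroscopic images follow by optimizing the weights w_k against
# the measured cone-beam projections, as the abstract describes.
print(f"first mode explains {explained[0]:.3f} of variance")
```

For smooth respiratory-like motion, one or two principal components typically capture nearly all the variance, which is what makes the iterative coefficient optimization tractable.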

  18. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and open source project OpenSfM, to assist in this process, however, it is well-known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAV) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.

  19. Rapid, High-Throughput Tracking of Bacterial Motility in 3D via Phase-Contrast Holographic Video Microscopy

    PubMed Central

    Cheong, Fook Chiong; Wong, Chui Ching; Gao, YunFeng; Nai, Mui Hoon; Cui, Yidan; Park, Sungsu; Kenney, Linda J.; Lim, Chwee Teck

    2015-01-01

    Tracking fast-swimming bacteria in three dimensions can be extremely challenging with current optical techniques, and a microscopic approach that can rapidly acquire volumetric information is required. Here, we introduce phase-contrast holographic video microscopy as a solution for the simultaneous tracking of multiple fast-moving cells in three dimensions. This technique uses interference patterns formed between the scattered and the incident field to infer the three-dimensional (3D) position and size of bacteria. Using this optical approach, motility dynamics of multiple bacteria in three dimensions, such as speed and turn angles, can be obtained within minutes. We demonstrated the feasibility of this method by effectively tracking multiple bacterial species, including Escherichia coli, Agrobacterium tumefaciens, and Pseudomonas aeruginosa. In addition, we combined our fast 3D imaging technique with a microfluidic device to present an example of a drug/chemical assay to study effects on bacterial motility. PMID:25762336
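    Once 3D positions are recovered from the holograms, per-cell motility statistics such as speed and turn angle follow directly from the trajectory. A minimal sketch, assuming a uniformly sampled trajectory:

```python
import numpy as np

def speeds_and_turn_angles(positions, dt):
    """Per-step speed and turn angle (radians) from a time-ordered
    (N, 3) array of 3D positions sampled every dt seconds."""
    steps = np.diff(positions, axis=0)                # displacement per frame
    speed = np.linalg.norm(steps, axis=1) / dt
    u = steps[:-1] / np.linalg.norm(steps[:-1], axis=1, keepdims=True)
    v = steps[1:] / np.linalg.norm(steps[1:], axis=1, keepdims=True)
    cosang = np.clip(np.sum(u * v, axis=1), -1.0, 1.0)
    return speed, np.arccos(cosang)

# Toy trajectory: a straight run followed by a 90-degree turn
traj = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [2, 1, 0]], float)
speed, angles = speeds_and_turn_angles(traj, dt=0.05)
```

With unit steps every 0.05 s this gives a constant speed of 20 units/s, a 0 rad angle for the straight segment, and pi/2 for the turn.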

  20. A 3D MR-acquisition scheme for nonrigid bulk motion correction in simultaneous PET-MR

    SciTech Connect

    Kolbitsch, Christoph Prieto, Claudia; Schaeffter, Tobias; Tsoumpas, Charalampos

    2014-08-15

    Purpose: Positron emission tomography (PET) is a highly sensitive medical imaging technique commonly used to detect and assess tumor lesions. Magnetic resonance imaging (MRI) provides high resolution anatomical images with different contrasts and a range of additional information important for cancer diagnosis. Recently, simultaneous PET-MR systems have been released with the promise to provide complementary information from both modalities in a single examination. Due to long scan times, subject nonrigid bulk motion, i.e., changes of the patient's position on the scanner table leading to nonrigid changes of the patient's anatomy, during data acquisition can impair image quality and tracer uptake quantification. A 3D MR-acquisition scheme is proposed to detect and correct for nonrigid bulk motion in simultaneously acquired PET-MR data. Methods: A respiratory navigated three dimensional (3D) MR-acquisition with Radial Phase Encoding (RPE) is used to obtain T1- and T2-weighted data with an isotropic resolution of 1.5 mm. Healthy volunteers are asked to move the abdomen two to three times during data acquisition, resulting in 19 movements overall at arbitrary time points. The acquisition scheme is used to retrospectively reconstruct dynamic 3D MR images with different temporal resolutions. Nonrigid bulk motion is detected and corrected in this image data. A simultaneous PET acquisition is simulated and the effect of motion correction is assessed on image quality and standardized uptake values (SUV) for lesions with different diameters. Results: Six respiratory gated 3D data sets with T1- and T2-weighted contrast have been obtained in healthy volunteers. All bulk motion shifts have successfully been detected, and motion fields describing the transformation between the different motion states could be obtained with an accuracy of 1.71 ± 0.29 mm. The PET simulation showed errors of up to 67% in measured SUV due to bulk motion, which could be reduced to less than

  1. The effect of motion on IMRT – looking at interplay with 3D measurements

    PubMed Central

    Thomas, A; Yan, H; Oldham, M; Juang, T; Adamovics, J; Yin, FF

    2013-01-01

    Six base-of-skull IMRT treatment plans were delivered to 3D dosimeters within the RPC Head and Neck Phantom for QA verification. Isotropic 2 mm 3D data were obtained using the DLOS-PRESAGE system and compared to an Eclipse (Varian) treatment plan. Normalized Dose Distribution (NDD) pass rates were obtained for a number of criteria. High quality 3D dosimetry data was observed from the DLOS system, illustrated here through colormaps, isodose lines, profiles, and 3D NDD maps. Excellent agreement with the planned dose distributions was also observed, with NDD analysis revealing > 90% pass rates [3%, 2 mm] and noise < 0.5%. This paper focuses on a detailed exploration of the quality and use of 3D dosimetry data obtained with the DLOS-PRESAGE system. PMID:26877756

  2. Motion prediction using dual Kalman filter for robust beating heart tracking.

    PubMed

    Yang, Bo; Liu, Chao; Poignet, Philippe; Zheng, Wenfeng; Liu, Shan

    2015-08-01

    A novel prediction method for robust beating heart tracking is proposed. The dual time-varying Fourier series is used to model the heart motion. The frequency parameters and Fourier coefficients in the model are estimated respectively by using a dual Kalman filter scheme. The instantaneous frequencies of breathing and heartbeat motion are measured online from the 3D trajectory of the point of interest using an orthogonal decomposition algorithm. The proposed method is evaluated based on both the simulated signals and the real motion signals, which are measured from the videos recorded using the da Vinci surgical system.
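    The signal model behind the method is a truncated Fourier series at the measured breathing and heartbeat frequencies. The sketch below fits such a model by batch least squares purely for illustration; the paper itself estimates the frequencies and coefficients online with a dual Kalman filter, and the frequencies and amplitudes used here are assumed values:

```python
import numpy as np

f_resp, f_card = 0.25, 1.2          # Hz, assumed known from the decomposition step
t = np.arange(0, 10, 0.02)          # 10 s of 3D-trajectory samples at 50 Hz
truth = 3.0 * np.sin(2 * np.pi * f_resp * t) + 1.0 * np.cos(2 * np.pi * f_card * t)
meas = truth + 0.05 * np.random.default_rng(2).standard_normal(t.size)

def fourier_design(t, freqs, order=2):
    """Design matrix of a truncated Fourier series at the given frequencies."""
    cols = [np.ones_like(t)]
    for f in freqs:
        for k in range(1, order + 1):
            cols += [np.sin(2 * np.pi * k * f * t), np.cos(2 * np.pi * k * f * t)]
    return np.column_stack(cols)

A = fourier_design(t, [f_resp, f_card])
coef, *_ = np.linalg.lstsq(A, meas, rcond=None)   # Fourier coefficients
rms_err = np.sqrt(np.mean((A @ coef - meas) ** 2))
```

The recovered coefficients match the simulated breathing and heartbeat amplitudes; prediction for robust tracking then amounts to evaluating the fitted series one step ahead.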

  3. On the Significance of Motion Degradation in High-Resolution 3D μMRI of Trabecular Bone

    PubMed Central

    Bhagat, Yusuf A.; Rajapakse, Chamith S.; Magland, Jeremy F.; Wald, Michael J.; Song, Hee Kwon; Leonard, Mary B.; Wehrli, Felix W.

    2011-01-01

    Rationale and Objectives Subtle subject movement during high-resolution 3D μMR imaging of trabecular bone (TB) causes blurring, thereby rendering the data unreliable for quantitative analysis. In this work, the effects of translational and rotational motion displacements have been evaluated qualitatively and quantitatively. Materials and Methods In Experiment I, motion was induced by applying various simulated and previously observed in vivo trajectories as phase shifts to k-space or rotation angles to k-space segments of a virtually motion-free data set. In Experiment II, images that were visually free of motion artifacts from two groups of 10 healthy individuals, differing in age, were selected for probing the effects of motion on TB parameters. In both experiments, images were rated for motion severity and the scores were compared to a focus criterion, the normalized gradient squared (NGS). Results Strong correlations were observed between the motion quality scores and the corresponding NGS values (R2= 0.52–0.64; p<0.01). The results from Experiment I demonstrated consistently lower image quality and alterations in structural parameters of 9–45% with increased amplitude of displacements. In Experiment II, the significant differences in structural parameter group means of the motion-free images were lost upon motion degradation. Autofocusing, a post-processing correction method, partially recovered the sharpness of the original motion-free images in 13/20 subjects. Conclusion Quantitative TB structural measures are highly sensitive to subtle motion-induced degradation which adversely affects precision and statistical power. The results underscore the influence of subject movement in high-resolution 3D μMRI and its correction for TB structure analysis. PMID:21816638
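    The focus criterion can be reproduced in a few lines. The normalization below (gradient energy over image energy) is one common variant of such a sharpness metric and may differ in detail from the paper's definition:

```python
import numpy as np

def normalized_gradient_squared(img):
    """Focus metric for scoring motion blur: sum of squared intensity
    gradients normalized by image energy. Sharper images score higher."""
    img = img.astype(float)
    gx, gy = np.gradient(img)
    return (gx**2 + gy**2).sum() / (img**2).sum()

# A crisp edge versus its smeared (motion-degraded) counterpart
cols = np.arange(64, dtype=float)
sharp = np.tile((cols >= 32).astype(float), (64, 1))
blurred = np.tile(np.clip((cols - 24) / 16.0, 0.0, 1.0), (64, 1))
```

The blurred image spreads the same intensity change over many pixels, so its squared-gradient sum drops sharply, mirroring the correlation the study reports between NGS and visual motion scores.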

  4. Dynamics and cortical distribution of neural responses to 2D and 3D motion in human.

    PubMed

    Cottereau, Benoit R; McKee, Suzanne P; Norcia, Anthony M

    2014-02-01

    The perception of motion-in-depth is important for avoiding collisions and for the control of vergence eye-movements and other motor actions. Previous psychophysical studies have suggested that sensitivity to motion-in-depth has a lower temporal processing limit than the perception of lateral motion. The present study used functional MRI-informed EEG source-imaging to study the spatiotemporal properties of the responses to lateral motion and motion-in-depth in human visual cortex. Lateral motion and motion-in-depth displays comprised stimuli whose only difference was interocular phase: monocular oscillatory motion was either in-phase in the two eyes (lateral motion) or in antiphase (motion-in-depth). Spectral analysis was used to break the steady-state visually evoked potentials responses down into even and odd harmonic components within five functionally defined regions of interest: V1, V4, lateral occipital complex, V3A, and hMT+. We also characterized the responses within two anatomically defined regions: the inferior and superior parietal cortex. Even harmonic components dominated the evoked responses and were a factor of approximately two larger for lateral motion than motion-in-depth. These responses were slower for motion-in-depth and were largely independent of absolute disparity. In each of our regions of interest, responses at odd-harmonics were relatively small, but were larger for motion-in-depth than lateral motion, especially in parietal cortex, and depended on absolute disparity. Taken together, our results suggest a plausible neural basis for reduced psychophysical sensitivity to rapid motion-in-depth. PMID:24198326
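    The spectral decomposition step (separating even and odd harmonics of the stimulus frequency) can be sketched with an FFT. The stimulus frequency, sampling rate, and synthetic response below are assumptions for illustration, not the study's parameters:

```python
import numpy as np

fs, f_stim, dur = 500.0, 2.0, 10.0          # Hz, Hz, seconds (assumed)
t = np.arange(0, dur, 1 / fs)
# Synthetic steady-state response: strong 2nd (even) and weak 3rd (odd) harmonic
sig = 2.0 * np.sin(2 * np.pi * 2 * f_stim * t) + 0.5 * np.sin(2 * np.pi * 3 * f_stim * t)

spec = np.abs(np.fft.rfft(sig)) / (len(sig) / 2)    # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

def harmonic_amp(n):
    """Amplitude at the n-th harmonic of the stimulus frequency."""
    return spec[np.argmin(np.abs(freqs - n * f_stim))]

even = sum(harmonic_amp(n) for n in (2, 4, 6))
odd = sum(harmonic_amp(n) for n in (1, 3, 5))
```

In the study the even components dominate (here by construction), matching the finding that even harmonics of the evoked response were roughly twice as large for lateral motion as for motion-in-depth.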

  6. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson’s Disease

    PubMed Central

    Piro, Neltje E.; Piro, Lennart K.; Kassubek, Jan; Blechschmidt-Trapp, Ronald A.

    2016-01-01

    Remote monitoring of Parkinson’s Disease (PD) patients with inertia sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: the automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of him. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recording of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the ratings based on video and (b) the automatically classified UPDRS is 0.48 and with (c) the 3D avatar it is 0.47. The 3D avatar is similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400
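    As an illustration of the preprocessing stage, features such as the dominant pronation-supination frequency and the peak angular rate can be extracted from a gyroscope trace before classification; the feature choice and the simulated signal here are hypothetical, not the study's:

```python
import numpy as np

fs = 100.0                                    # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
gyro = 180.0 * np.sin(2 * np.pi * 1.5 * t)    # deg/s, simulated 1.5 Hz supination

# Dominant movement frequency from the amplitude spectrum
spec = np.abs(np.fft.rfft(gyro - gyro.mean()))
dom_freq = np.fft.rfftfreq(len(gyro), 1 / fs)[np.argmax(spec)]
peak_rate = np.max(np.abs(gyro))              # peak angular rate, deg/s

features = np.array([dom_freq, peak_rate])    # inputs for a classifier such as J48
```

Slowing and amplitude decrement of the movement, which the UPDRS item rates, show up directly as a lower dominant frequency and peak rate in such features.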

  7. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    PubMed

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies gradually mature, ubiquitous healthcare services can prevent accidents instantly and effectively, as well as provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework integrating cloud and wireless body sensor networks, mainly applied to fall detection and 3-D motion reconstruction. The main focuses include distributed computing and resource allocation for processing sensing data over the computing architecture, network conditions, and performance evaluation. Through this framework, the transmission and computing time of sensing data are reduced to enhance overall performance for fall event detection and 3-D motion reconstruction services.

  8. Robust 2D/3D registration for fast-flexion motion of the knee joint using hybrid optimization.

    PubMed

    Ohnishi, Takashi; Suzuki, Masahiko; Kobayashi, Tatsuya; Naomoto, Shinji; Sukegawa, Tomoyuki; Nawata, Atsushi; Haneishi, Hideaki

    2013-01-01

    Previously, we proposed a 2D/3D registration method that uses Powell's algorithm to obtain the 3D motion of a knee joint from 3D computed tomography and bi-plane fluoroscopic images. The 2D/3D registration is performed consecutively and automatically for each frame of the fluoroscopic images. This method starts from the optimum parameters of the previous frame for each frame except the first, and it searches for the next set of optimum parameters using Powell's algorithm. However, if the flexion motion of the knee joint is fast, it is likely that Powell's algorithm will produce a mismatch because the initial parameters are far from the correct ones. In this study, we applied a hybrid optimization algorithm (HPS) combining Powell's algorithm with the Nelder-Mead simplex (NM-simplex) algorithm to overcome this problem. The performance of the HPS was compared with the separate performances of Powell's algorithm and the NM-simplex algorithm, the Quasi-Newton algorithm, and a hybrid of the Quasi-Newton and NM-simplex algorithms on five patient data sets in terms of root-mean-square error (RMSE), target registration error (TRE), success rate, and processing time. The RMSE, TRE, and success rate of the HPS were better than those of the other optimization algorithms, and its processing time was similar to that of Powell's algorithm alone.
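    The hybrid idea can be sketched with standard optimizers: run the derivative-free simplex first to recover from a poor initialization, then refine with Powell's method. This is one plausible arrangement; the paper's HPS coupling may differ, and the objective and starting point below are stand-ins for the actual 2D/3D image-similarity cost:

```python
import numpy as np
from scipy.optimize import minimize

def cost(p):
    # Rosenbrock function stands in for the registration cost: a narrow
    # curved valley that is hard to descend from a distant start point.
    return (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2

x0 = np.array([-1.5, 2.0])      # "fast flexion": far from the previous frame's optimum

# Stage 1: Nelder-Mead simplex escapes the poor initialization (loose tolerances)
coarse = minimize(cost, x0, method="Nelder-Mead",
                  options={"xatol": 1e-2, "fatol": 1e-2})
# Stage 2: Powell's algorithm refines from the simplex result
refined = minimize(cost, coarse.x, method="Powell")
```

The simplex stage tolerates a bad start at the price of precision; Powell's line searches then deliver the sub-millimeter-scale refinement the registration needs.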

  9. 3D nanometer images of biological fibers by directed motion of gold nanoparticles.

    PubMed

    Estrada, Laura C; Gratton, Enrico

    2011-11-01

    Using near-infrared femtosecond pulses, we move single gold nanoparticles (AuNPs) along biological fibers, such as collagen and actin filaments. While the AuNP is sliding on the fiber, its trajectory is measured in three dimensions (3D) with nanometer resolution, providing a high-resolution image of the fiber. Here, we systematically moved a single AuNP along nanometer-size collagen fibers and actin filaments inside living Chinese hamster ovary K1 cells, mapping their 3D topography with high fidelity.

  10. 3D tracking and phase-contrast imaging by twin-beams digital holographic microscope in microfluidics

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Finizio, A.; Paturzo, M.; Merola, F.; Grilli, S.; Ferraro, P.

    2012-06-01

    A compact twin-beam interferometer that can be adopted as a flexible diagnostic tool in microfluidic platforms is presented. The device has two functionalities, explained below, and can be easily integrated into a microfluidic chip. The configuration allows 3D tracking of micro-particles and, at the same time, furnishes quantitative phase-contrast maps of the tracked micro-objects by interference microscopy. Experimental demonstration of its effectiveness and compatibility with biological samples is given for in vitro cells in a microfluidic environment. Nowadays, several microfluidic configurations exist and many of them are commercially available; their development is driven by the possibility of manipulating droplets, handling micro- and nano-objects, and visualizing and quantifying processes occurring in small volumes and, clearly, by direct applications on lab-on-a-chip devices. In microfluidic research, optical/photonic approaches are the most suitable because they have various advantages, such as being non-contact, full-field, and non-invasive, and they can be packaged thanks to the development of integrable optics. Moreover, phase-contrast approaches, adapted to lab-on-a-chip configurations, make it possible to obtain quantitative information with remarkable lateral and vertical resolution directly in situ, without the need to dye and/or kill cells. Furthermore, numerical techniques for tracking micro-objects need to be developed for measuring velocity fields, trajectory patterns, motility of cancer cells, and so on. Here, we present a compact holographic microscope that can ensure, with the same configuration and simultaneously, accurate 3D tracking and quantitative phase-contrast analysis. The system, simple and solid, is based on twin laser beams coming from a single laser source. Through a simple conceptual design, we show how these two different functionalities can be accomplished by the same optical setup. The working principle, the optical setup and the mathematical

  11. Catheter tracking via online learning for dynamic motion compensation in transcatheter aortic valve implantation.

    PubMed

    Wang, Peng; Zheng, Yefeng; John, Matthias; Comaniciu, Dorin

    2012-01-01

    Dynamic overlay of 3D models onto 2D X-ray images has important applications in image-guided interventions. In this paper, we present a novel catheter tracking method for motion compensation in Transcatheter Aortic Valve Implantation (TAVI). To address challenges such as catheter shape and appearance changes, occlusions, and distractions from cluttered backgrounds, we present an adaptive linear discriminant learning method that builds a measurement model online to distinguish catheters from background. An analytic solution is developed to effectively and efficiently update the discriminant model and to minimize the classification errors between the tracked object and the background. The online learned discriminant model is further combined with an offline learned detector and robust template matching in a Bayesian tracking framework. Quantitative evaluations demonstrate the advantages of this method over current state-of-the-art tracking methods in tracking catheters for clinical applications. PMID:23286027
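    A minimal sketch of the underlying idea, an online-updated two-class linear discriminant separating catheter from background, is given below; the update rule here is a generic running-moments scheme, not the paper's analytic solution:

```python
import numpy as np

class OnlineLDA:
    """Two-class linear discriminant with incrementally updated class
    means and a shared scatter estimate (Fisher direction w = S^-1 dmu)."""
    def __init__(self, dim):
        self.n = np.zeros(2)
        self.mean = np.zeros((2, dim))
        self.scatter = np.eye(dim) * 1e-3       # regularized shared scatter

    def update(self, x, label):
        # Welford-style running update of class mean and pooled scatter
        self.n[label] += 1
        delta = x - self.mean[label]
        self.mean[label] += delta / self.n[label]
        self.scatter += np.outer(delta, x - self.mean[label])

    def score(self, x):
        # Signed distance from the midpoint along the Fisher direction
        w = np.linalg.solve(self.scatter / max(self.n.sum(), 1),
                            self.mean[1] - self.mean[0])
        return x @ w - 0.5 * (self.mean[0] + self.mean[1]) @ w

rng = np.random.default_rng(3)
lda = OnlineLDA(2)
for _ in range(200):
    lda.update(rng.normal([0.0, 0.0], 0.5), 0)   # background feature samples
    lda.update(rng.normal([2.0, 2.0], 0.5), 1)   # catheter feature samples
```

Because each frame's samples feed back into the model, the discriminant adapts to appearance changes and occlusions, which is the point of learning it online rather than offline.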

  12. Motion-compensated MR valve imaging with COMB tag tracking and super-resolution enhancement.

    PubMed

    Dowsey, Andrew W; Keegan, Jennifer; Lerotic, Mirna; Thom, Simon; Firmin, David; Yang, Guang-Zhong

    2007-10-01

    Understanding the morphology and function of heart valves is important to the study of underlying causes of heart failure. Existing techniques such as those based on echocardiography are limited by the relatively low signal-to-noise ratio (SNR), attenuation artefacts, and restricted access. The alternative of cardiovascular MR imaging offers versatility and accuracy in 3D localisation, but is hampered by large movements of the valves throughout the cardiac cycle. This paper presents a motion-compensated adaptive imaging approach for MR valve imaging. To illustrate its clinical potential, 3D motion of the aortic valve plane is first captured through a single breath-hold COMB tag pre-scan and then tracked in real-time with an automatic method based on multi-resolution image registration. Motion-compensated coverage of the aortic valve is then acquired prospectively, thus allowing its clear 3D reconstruction and visualisation. To provide isotropic voxel coverage of the imaging volume, retrospective projection onto convex sets (POCS) super-resolution enhancement is applied to the slice-select direction. In vivo results demonstrate the effectiveness of the proposed motion-compensation and super-resolution schemes for depicting the structure of the valve leaflets throughout the cardiac cycle. The proposed method fundamentally changes the way MR imaging is performed by transforming it from a spatially to materially localised imaging method. This also has important implications for quantifying blood flow and myocardial perfusion, as well as tracking anatomy and function of the heart.

  13. Real-time circumferential mapping catheter tracking for motion compensation in atrial fibrillation ablation procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Bourier, Felix; Wimmer, Andreas; Koch, Martin; Kiraly, Atilla; Liao, Rui; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert

    2012-02-01

    Atrial fibrillation (AFib) has been identified as a major cause of stroke. Radiofrequency catheter ablation has become an increasingly important treatment option, especially when drug therapy fails. Navigation under X-ray can be enhanced by using augmented fluoroscopy. It renders overlay images from pre-operative 3-D data sets which are then fused with X-ray images to provide more details about the underlying soft-tissue anatomy. Unfortunately, these fluoroscopic overlay images are compromised by respiratory and cardiac motion. Various methods to deal with motion have been proposed. To meet clinical demands, they have to be fast. Methods providing a processing frame rate of 3 frames-per-second (fps) are considered suitable for interventional electrophysiology catheter procedures if an acquisition frame rate of 2 fps is used. Unfortunately, when working at a processing rate of 3 fps, the delay until the actual motion compensated image can be displayed is about 300 ms. More recent algorithms can achieve frame rates of up to 20 fps, which reduces the lag to 50 ms. By using a novel approach involving a 3-D catheter model, catheter segmentation and a distance transform, we can speed up motion compensation to 25 fps which results in a display delay of only 40 ms on a standard workstation for medical applications. Our method uses a constrained 2-D/3-D registration to perform catheter tracking, and it obtained a 2-D tracking error of 0.61 mm.
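    The distance-transform trick that helps make such a registration fast can be illustrated directly: precompute one distance map from the segmented catheter pixels, after which each candidate model pose is scored by a table lookup per point. The image, model, and sizes below are toy assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary segmentation of catheter pixels in a (toy) fluoroscopic frame
seg = np.zeros((64, 64), bool)
seg[32, 10:50] = True                          # a horizontal catheter segment
# Distance from every pixel to the nearest segmented catheter pixel
dist_map = distance_transform_edt(~seg)

def model_cost(points):
    """Registration cost of a projected catheter model: mean distance of
    its 2D points to the segmentation, read off the precomputed map."""
    r = np.clip(points[:, 0].round().astype(int), 0, 63)
    c = np.clip(points[:, 1].round().astype(int), 0, 63)
    return dist_map[r, c].mean()

aligned = np.stack([np.full(40, 32), np.arange(10, 50)], axis=1)
offset = aligned + [5, 0]                      # model displaced by 5 pixels
```

Minimizing this cost over the 3-D model's pose parameters is what the constrained 2-D/3-D registration does; because the distance map is computed once per frame, each optimizer iteration is cheap enough for the reported 25 fps.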

  14. Automatic 3D motion estimation of left ventricle from C-arm rotational angiocardiography using a prior motion model and learning based boundary detector.

    PubMed

    Chen, Mingqing; Zheng, Yefeng; Wang, Yang; Mueller, Kerstin; Lauritsch, Guenter

    2013-01-01

    Compared to pre-operative imaging modalities, it is more convenient to estimate the current cardiac physiological status from C-arm angiocardiography since C-arm is a widely used intra-operative imaging modality to guide many cardiac interventions. The 3D shape and motion of the left ventricle (LV) estimated from rotational angiocardiography provide important cardiac function measurements, e.g., ejection fraction and myocardium motion dyssynchrony. However, automatic estimation of the 3D LV motion is difficult since all anatomical structures overlap on the 2D X-ray projections and the nearby confounding strong image boundaries (e.g., pericardium) often cause ambiguities to LV endocardium boundary detection. In this paper, a new framework is proposed to overcome the aforementioned difficulties: (1) A new learning-based boundary detector is developed by training a boosting boundary classifier combined with the principal component analysis of a local image patch; (2) The prior LV motion model is learned from a set of dynamic cardiac computed tomography (CT) sequences to provide a good initial estimate of the 3D LV shape of different cardiac phases; (3) The 3D motion trajectory is learned for each mesh point; (4) All these components are integrated into a multi-surface graph optimization method to extract the globally coherent motion. The method is tested on seven patient scans, showing significant improvement on the ambiguous boundary cases with a detection accuracy of 2.87 +/- 1.00 mm on LV endocardium boundary delineation in the 2D projections. PMID:24505748

  16. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    In time-division 3D displays, the time difference between the left image and the right image makes a viewer perceive alternating vertical parallax when an object moves vertically on a fixed depth plane; the perceived left and right images no longer match, making viewers more prone to visual fatigue. This mismatch cannot be eliminated simply by precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and the characteristics of the human visual system, this paper establishes a model relating the true vertical motion velocity in reality to the vertical motion velocity on the screen, calculates the amount of vertical parallax caused by vertical motion, and then puts forward a motion compensation method to eliminate it. Finally, subjective experiments are carried out to analyze how the time difference affects stereoscopic visual comfort by comparing the comfort scores of stereo image sequences before and after compensation with the proposed method. The theoretical analysis and experimental results show that the proposed method is reasonable and effective.
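
    The arithmetic behind the model is simple: during the delay between the left and right fields, an object moving vertically on screen travels v·Δt, which the viewer perceives as vertical disparity, and compensation shifts the delayed field back by that amount. A hedged sketch (the function names, units, and the assumption that the on-screen velocity is already known are illustrative, not the paper's):

```python
def vertical_parallax_mm(v_screen_mm_per_s, field_delay_s):
    """Vertical parallax induced by time-division stereo: during the delay
    between the left and right fields, a vertically moving object travels
    v * dt on screen, perceived as vertical disparity between the eyes."""
    return v_screen_mm_per_s * field_delay_s

def compensate(y_right_mm, v_screen_mm_per_s, field_delay_s):
    """Motion compensation: shift the delayed (right) field back by the
    predicted parallax so both eyes see the object at the same height."""
    return y_right_mm - vertical_parallax_mm(v_screen_mm_per_s, field_delay_s)
```
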

  17. Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

    PubMed

    Rauschecker, Andreas M; Solomon, Samuel G; Glennerster, Andrew

    2006-01-01

    In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues. PMID:17209749

  18. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach.

    PubMed

    de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José

    2015-01-01

    This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above and 88 underwater control points--with 8 common points at the water surface--and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1600000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers (resp.). Root Mean Square (RMS) error with homography of control and validation points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction should include homography to improve swimming movement analysis accuracy.
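
    The accuracy figures above are RMS reconstruction errors over matched marker pairs. A minimal sketch of that metric (illustrative only; units follow the input, here mm):

```python
import math

def rms_error(reconstructed, reference):
    """Root Mean Square reconstruction error over matched 3D marker pairs:
    the square root of the mean squared Euclidean distance between each
    reconstructed point and its reference position."""
    assert len(reconstructed) == len(reference)
    sq = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(reconstructed, reference):
        sq += (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
    return math.sqrt(sq / len(reconstructed))
```
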

  19. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
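
    Observer-motion estimates obtained from frame-to-frame image registration are noisy, which is why the abstract pairs registration with Kalman filtering. A minimal scalar Kalman filter for one drift component, shown as a sketch rather than the paper's implementation (the random-walk model and the noise variances q and r are illustrative assumptions):

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a slowly varying quantity, e.g. one component
    of observer drift estimated from image registration.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: random-walk model, state unchanged, uncertainty grows.
        p += q
        # Update: blend the prediction with the new registration measurement.
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

    With the smoothed observer motion subtracted from the image stream, the remaining inter-frame displacement can be attributed to the target.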

  20. 3D shape tracking of minimally invasive medical instruments using optical frequency domain reflectometry

    NASA Astrophysics Data System (ADS)

    Parent, Francois; Kanti Mandal, Koushik; Loranger, Sebastien; Watanabe Fernandes, Eric Hideki; Kashyap, Raman; Kadoury, Samuel

    2016-03-01

    We propose here a new alternative to provide real-time device tracking during minimally invasive interventions using a truly-distributed strain sensor based on optical frequency domain reflectometry (OFDR) in optical fibers. The guidance of minimally invasive medical instruments such as needles or catheters (e.g., by adding a piezoelectric coating) has been the focus of extensive research in the past decades. Real-time tracking of instruments in medical interventions facilitates image guidance and helps the user to reach a pre-localized target more precisely. Image-guided systems using ultrasound imaging and shape sensors based on fiber Bragg gratings (FBG)-embedded optical fibers can provide retroactive feedback to the user in order to reach the targeted areas with even more precision. However, ultrasound imaging with electro-magnetic tracking cannot be used in the magnetic resonance imaging (MRI) suite, while shape sensors based on FBGs embedded in optical fibers provide only discrete values of the instrument position, which requires approximations to be made to evaluate its global shape. This is why a truly-distributed strain sensor based on OFDR could enhance the tracking accuracy. In both cases, since the strain is proportional to the radius of curvature of the fiber, a strain sensor can provide the three-dimensional shape of medical instruments by simply inserting fibers inside the devices. To faithfully follow the shape of the needle in the tracking frame, 3 fibers glued in a specific geometry are used, providing 3 degrees of freedom along the fiber. Near real-time tracking of medical instruments is thus obtained, offering clear advantages for clinical monitoring in remotely controlled catheter or needle guidance. We present results demonstrating the promising aspects of this approach as well as the limitations of using the OFDR technique.
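
    Because the measured strain is proportional to local curvature, the instrument shape can be recovered by integrating bend angles along the fiber. A simplified 2D sketch of that integration (the 3D, three-fiber case adds a geometric combination of the strains that is not shown here):

```python
import math

def reconstruct_2d(bend_angles_rad, step_mm):
    """Integrate per-segment bend angles (derived from distributed strain,
    which is proportional to curvature) into a 2D trajectory.
    Each step advances by step_mm along the current heading."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for a in bend_angles_rad:
        heading += a                     # accumulate the bend at this segment
        x += step_mm * math.cos(heading) # advance along the new heading
        y += step_mm * math.sin(heading)
        points.append((x, y))
    return points
```
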

  1. Automatic shape-based level set segmentation for needle tracking in 3-D TRUS-guided prostate brachytherapy.

    PubMed

    Yan, Ping; Cheeseborough, John C; Chao, K S Clifford

    2012-09-01

    Prostate brachytherapy is an effective treatment for early prostate cancer. Its success depends critically on correct needle implant positions. We have devised an automatic shape-based level set segmentation tool for needle tracking in 3-D transrectal ultrasound (TRUS) images, which uses shape information and the level set technique to localize the needle position and estimate the endpoint of the needle in real time. The 3-D TRUS images used in the evaluation of our tools were obtained using a 2-D TRUS transducer from Ultrasonix (Richmond, BC, Canada) and a computer-controlled stepper motor system from Thorlabs (Newton, NJ, USA). The accuracy and the feedback mechanism were validated using prostate phantoms and compared with the 3-D positions of these needles derived from experts' readings. The experts' segmentation of needles from 3-D computed tomography images served as the ground truth in this study. The difference between automatic and expert segmentations is within 0.1 mm for 17 of the 19 implanted needles. The mean errors of the automatic segmentations relative to the ground truth are within 0.25 mm. Our automated method allows real-time TRUS-based needle placement that differs by less than one pixel from manual expert segmentation.

  2. 3-d brownian motion simulator for high-sensitivity nanobiotechnological applications.

    PubMed

    Toth, Arpád; Banky, Dániel; Grolmusz, Vince

    2011-12-01

    A wide variety of nanobiotechnologic applications are being developed for nanoparticle-based in vitro diagnostic and imaging systems. Some of these systems make possible highly sensitive detection of molecular biomarkers. Frequently, the very low concentration of the biomarkers makes the classical, partial differential equation-based mathematical simulation of the motion of the nanoparticles involved impossible. We present a three-dimensional Brownian motion simulation tool for the prediction of the movement of nanoparticles in various thermal, viscosity, and geometric settings in a rectangular cuvette. For nonprofit users the server is freely available at the site http://brownian.pitgroup.org.
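
    A Brownian-dynamics simulator of this kind draws, for each time step and each axis, a Gaussian displacement with variance 2DΔt, where D comes from the Stokes-Einstein relation D = kT/(6πηr). A minimal sketch under those standard assumptions (room temperature and water-like viscosity as illustrative defaults; this is not the pitgroup server's code):

```python
import math, random

def diffusion_coefficient(radius_m, temp_k=298.0, viscosity_pa_s=1e-3):
    """Stokes-Einstein diffusion coefficient D = kT / (6 * pi * eta * r),
    in m^2/s, for a sphere of the given radius."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temp_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

def brownian_step(pos, d_coeff, dt):
    """One 3D Brownian step: each axis gets an independent Gaussian
    displacement with standard deviation sqrt(2 * D * dt)."""
    s = math.sqrt(2.0 * d_coeff * dt)
    return tuple(p + random.gauss(0.0, s) for p in pos)
```

    The expected mean squared displacement after time t is 6·D·t, which is a convenient sanity check on any such simulator.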

  3. Self optical motion-tracking for endoscopic optical coherence tomography probe using micro-beamsplitter probe

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Zhang, Jun; Chou, Lidek; Wang, Alex; Jing, Joseph; Chen, Zhongping

    2014-03-01

    Long range optical coherence tomography (OCT), with its high speed, high resolution, non-ionized properties and cross-sectional imaging capability, is suitable for upper airway lumen imaging. To render 2D OCT datasets to true 3D anatomy, additional tools are usually applied, such as X-ray guidance or a magnetic sensor. X-ray increases ionizing radiation. A magnetic sensor either increases probe size or requires an additional pull-back of the tracking sensor through the body cavity. In order to overcome these limitations, we present a novel tracking method using a 1.5 mm×1.5mm, 90/10-ratio micro-beamsplitter: 10% light through the beam-splitter is used for motion tracking and 90% light is used for regular OCT imaging and motion tracking. Two signals corresponding to these two split-beams that pass through different optical path length delays are obtained by the detector simultaneously. Using the two split beams' returned signals from the same marker line, the 2D inclination angle of each step is computed. By calculating the 2D inclination angle of each step and then connecting the translational displacements of each step, we can obtain the 2D motion trajectory of the probe. With two marker lines on the probe sheath, 3D inclination angles can be determined and then used for 3D trajectory reconstruction. We tested the accuracy of trajectory reconstruction using the probe and demonstrated the feasibility of the design for structure reconstruction of a biological sample using a porcine trachea specimen. This optical-tracking probe has the potential to be made as small as an outer diameter of 1.0mm, which is ideal for upper airway imaging.

  4. Sedimentary basin effects in Seattle, Washington: Ground-motion observations and 3D simulations

    USGS Publications Warehouse

    Frankel, Arthur; Stephenson, William; Carver, David

    2009-01-01

    Seismograms of local earthquakes recorded in Seattle exhibit surface waves in the Seattle basin and basin-edge focusing of S waves. Spectral ratios of S waves and later arrivals at 1 Hz for stiff-soil sites in the Seattle basin show a dependence on the direction to the earthquake, with earthquakes to the south and southwest producing higher average amplification. Earthquakes to the southwest typically produce larger basin surface waves relative to S waves than earthquakes to the north and northwest, probably because of the velocity contrast across the Seattle fault along the southern margin of the Seattle basin. S to P conversions are observed for some events and are likely converted at the bottom of the Seattle basin. We model five earthquakes, including the M 6.8 Nisqually earthquake, using 3D finite-difference simulations accurate up to 1 Hz. The simulations reproduce the observed dependence of amplification on the direction to the earthquake. The simulations generally match the timing and character of basin surface waves observed for many events. The 3D simulation for the Nisqually earthquake produces focusing of S waves along the southern margin of the Seattle basin near the area in west Seattle that experienced increased chimney damage from the earthquake, similar to the results of the higher-frequency 2D simulation reported by Stephenson et al. (2006). Waveforms from the 3D simulations show reasonable agreement with the data at low frequencies (0.2-0.4 Hz) for the Nisqually earthquake and an M 4.8 deep earthquake west of Seattle.

  5. The intrafraction motion induced dosimetric impacts in breast 3D radiation treatment: A 4DCT based study

    SciTech Connect

    Yue, Ning J.; Li Xiang; Beriwal, Sushil; Heron, Dwight E.; Sontag, Marc R.; Huq, M. Saiful

    2007-07-15

    The question remains regarding the dosimetric impact of intrafraction motion in 3D breast treatment. This study was conducted to investigate this issue utilizing 4DCT scans. The 4D and helical CT scan sets were acquired for 12 breast cancer patients. For each of these patients, based on the helical CT scan, a conventional 3D conformal plan was generated. The breast treatment was then simulated based on the 4DCT scan. In each phase of the 4DCT scan, the dose distribution was generated with the same beam parameters as the conventional plan. A software package was developed to compute the cumulative dose distribution from all the phases. Since the intrafraction organ motion is reflected by the 4DCT images, the cumulative dose computed from them should be closer to what the patient received during treatment. Various dosimetric parameters were obtained from the plan and the 4D cumulative dose distribution for the target volume and heart, and were compared to deduce the motion-induced impacts. The studies were performed for both whole breast and partial breast treatment. In the whole breast treatment, the average intrafraction motion induced changes in D95, D90, V100, V95, and V90 of the target volume were -5.4%, -3.1%, -13.4%, -5.1%, and -3.2%, respectively, with the largest values at -26.2%, -14.1%, -91.0%, -15.1%, and -9.0%, respectively. Motion had little impact on the Dmax of the target volume, but its impact on the Dmin of the target volume was significant. For left breast treatment, the motion-induced Dmax change to the heart could be negative or positive, with the largest increase at about 6 Gy. In partial breast treatment, the only significant impact was on the Dmin of the CTV (ranging from -15.2% to 11.7%). The results showed that the intrafraction motion may compromise target dose coverage in breast treatments and the degree of that compromise was correlated with motion magnitude. However
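
    The dose-volume parameters reported above follow the standard definitions: D95 is the minimum dose received by the best-covered 95% of the target voxels, and V100 is the percentage of voxels receiving at least the reference dose. A sketch of those definitions over a flat list of voxel doses (illustrative, not the study's planning-system code):

```python
def d_metric(doses_gy, percent_volume):
    """D_x: the minimum dose received by the hottest x% of the volume,
    e.g. D95 is the dose covering 95% of the target voxels."""
    ranked = sorted(doses_gy, reverse=True)
    n = max(1, round(len(ranked) * percent_volume / 100.0))
    return ranked[n - 1]

def v_metric(doses_gy, threshold_gy):
    """V_x: percentage of the volume receiving at least the threshold dose."""
    hit = sum(1 for d in doses_gy if d >= threshold_gy)
    return 100.0 * hit / len(doses_gy)
```
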

  6. Analysis of 3-D Tongue Motion from Tagged and Cine Magnetic Resonance Images

    ERIC Educational Resources Information Center

    Xing, Fangxu; Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2016-01-01

    Purpose: Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during…

  7. Motion Controllers for Learners to Manipulate and Interact with 3D Objects for Mental Rotation Training

    ERIC Educational Resources Information Center

    Yeh, Shih-Ching; Wang, Jin-Liang; Wang, Chin-Yeh; Lin, Po-Han; Chen, Gwo-Dong; Rizzo, Albert

    2014-01-01

    Mental rotation is an important spatial processing ability and an important element in intelligence tests. However, the majority of past attempts at training mental rotation have used paper-and-pencil tests or digital images. This study proposes an innovative mental rotation training approach using magnetic motion controllers to allow learners to…

  8. Combining marker-less patient setup and respiratory motion monitoring using low cost 3D camera technology

    NASA Astrophysics Data System (ADS)

    Tahavori, F.; Adams, E.; Dabbs, M.; Aldridge, L.; Liversidge, N.; Donovan, E.; Jordan, T.; Evans, PM.; Wells, K.

    2015-03-01

    Patient set-up misalignment/motion can be a significant source of error within external beam radiotherapy, leading to unwanted dose to healthy tissues and sub-optimal dose to the target tissue. Such inadvertent displacement or motion of the target volume may be caused by treatment set-up error, respiratory motion or an involuntary movement, potentially decreasing therapeutic benefit. The conventional approach to managing abdominal-thoracic patient set-up is via skin markers (tattoos) and laser-based alignment. Alignment of the internal target volume with its position in the treatment plan can be achieved using Deep Inspiration Breath Hold (DIBH) in conjunction with marker-based respiratory motion monitoring. We propose a marker-less single-system solution for patient set-up and respiratory motion management based on low cost 3D depth camera technology (such as the Microsoft Kinect). In this new work we assess this approach in a study group of six volunteer subjects. Separate simulated mimic-treatment "fractions", or set-ups, are compared for each subject, one undertaken using conventional laser-based alignment and one using the intrinsic depth images produced by Kinect. Microsoft Kinect is also compared with the well-known RPM system for respiratory motion management in terms of monitoring free-breathing and DIBH. Preliminary results suggest that Kinect is able to produce mm-level surface alignment and comparable DIBH respiratory motion management when compared to the popular RPM system. Such an approach may also yield significant benefits in terms of patient throughput, as marker alignment and respiratory motion monitoring can be automated in a single system.

  9. Tracking the interframe deformation of structures in 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Syn, M.; Gosling, J. P.; Prager, Richard W.; Berman, Laurence; Crowley, J.

    1994-09-01

    Three dimensional ultrasound imaging with a freehand probe allows a flexible approach to medical visualization and diagnosis. Given the imperfect accuracy of proprioceptive devices used to log the position and tilt of the probe, it is important to utilize the position constraints provided by image evidence. This is also important if we wish to consider the visualization of structures which move significantly during acquisition, such as the heart of a fetus. We present here an initial approach to more robust segmentation and shape recovery in a particularly noisy modality. We consider 2D segmentation based on edge evidence, first using an active contour and then finding an optimal segmentation using simulated annealing. Correspondence between contours in adjacent frames can only be solved in general cases by use of a 3D prior model. Dynamic physics-based mesh models, as used by Pentland [20] and Nastar [17], allow for shape modelling; over-constrained 3D shape recovery can then be performed using the intrinsic vibration modes of the model.

  10. Landmark detection and coupled patch registration for cardiac motion tracking

    NASA Astrophysics Data System (ADS)

    Wang, Haiyan; Shi, Wenzhe; Zhuang, Xiahai; Wu, Xianliang; Tung, Kai-Pin; Ourselin, Sebastien; Edwards, Philip; Rueckert, Daniel

    2013-03-01

    Increasing attention has been focused on the estimation of the deformation of the endocardium to aid the diagnosis of cardiac malfunction. Landmark tracking can provide sparse, anatomically relevant constraints to help establish correspondences between images being tracked or registered. However, landmarks on the endocardium are often characterized by ambiguous appearance in cardiac MR images which makes the extraction and tracking of these landmarks problematic. In this paper we propose an automatic framework to select and track a sparse set of distinctive landmarks in the presence of relatively large deformations in order to capture the endocardial motion in cardiac MR sequences. To achieve this, a sparse set of the landmarks is identified using an entropy-based approach. In particular we use singular value decomposition (SVD) to reduce the search space and localize the landmarks with relatively large deformation across the cardiac cycle. The tracking of the sparse set of landmarks is performed simultaneously by optimizing a two-stage Markov Random Field (MRF) model. The tracking result is further used to initialize registration based dense motion tracking. We have applied this framework to extract a set of landmarks at the endocardial border of the left ventricle in MR image sequences from 51 subjects. Although the left ventricle undergoes a number of different deformations, we show how the radial, longitudinal motion and twisting of the endocardial surface can be captured by the proposed approach. Our experiments demonstrate that motion tracking using sparse landmarks can outperform conventional motion tracking by a substantial amount, with improvements in terms of tracking accuracy of 20.8% and 19.4% respectively.

  11. Laetoli’s lost tracks: 3D generated mean shape and missing footprints

    PubMed Central

    Bennett, M. R.; Reynolds, S. C.; Morse, S. A.; Budka, M.

    2016-01-01

    The Laetoli site (Tanzania) contains the oldest known hominin footprints, and their interpretation remains open to debate, despite over 35 years of research. The two hominin trackways present are parallel to one another, one of which is a composite formed by at least two individuals walking in single file. Most researchers have focused on the single, clearly discernible G1 trackway while the G2/3 trackway has been largely dismissed due to its composite nature. Here we report the use of a new technique that allows us to decouple the G2 and G3 tracks for the first time. In so doing we are able to quantify the mean footprint topology of the G3 trackway and render it useable for subsequent data analyses. By restoring the effectively ‘lost’ G3 track, we have doubled the available data on some of the rarest traces directly associated with our Pliocene ancestors. PMID:26902912

  12. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscope, barometers and most importantly cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high performance platforms for low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on tracking using stereo vision and inertial navigation to determine movement of the system as well as create a model of the environment sensed by the system.

  13. Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions

    NASA Astrophysics Data System (ADS)

    Khoury, Mehdi; Liu, Honghai

    This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.

  14. Free-breathing 3D cardiac MRI using iterative image-based respiratory motion correction.

    PubMed

    Moghari, Mehdi H; Roujol, Sébastien; Chan, Raymond H; Hong, Susie N; Bello, Natalie; Henningsson, Markus; Ngo, Long H; Goddu, Beth; Goepfert, Lois; Kissinger, Kraig V; Manning, Warren J; Nezafat, Reza

    2013-10-01

    Respiratory motion compensation using diaphragmatic navigator gating with a 5 mm gating window is conventionally used for free-breathing cardiac MRI. Because of the narrow gating window, scan efficiency is low, resulting in long scan times, especially for patients with irregular breathing patterns. In this work, a new retrospective motion compensation algorithm is presented that reduces the scan time for free-breathing cardiac MRI by increasing the gating window to 15 mm without compromising image quality. The proposed algorithm iteratively corrects for respiratory-induced cardiac motion by optimizing the sharpness of the heart. To evaluate this technique, two coronary MRI datasets with 1.3 mm³ resolution were acquired from 11 healthy subjects (seven females, 25 ± 9 years); one using a navigator with a 5 mm gating window acquired in 12.0 ± 2.0 min and one with a 15 mm gating window acquired in 7.1 ± 1.0 min. The images acquired with a 15 mm gating window were corrected using the proposed algorithm and compared to the uncorrected images acquired with the 5 and 15 mm gating windows. The image quality score, sharpness, and length of the three major coronary arteries were equivalent between the corrected images and the images acquired with a 5 mm gating window (P-value > 0.05), while the scan time was reduced by a factor of 1.7. PMID:23132549
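
    The algorithm's objective, sharpness of the heart, can be scored with a simple gradient-energy measure: respiratory motion blurs edges and lowers the score, so the correction loop keeps the candidate motion estimate that maximizes it. A toy version of such a metric (the authors' actual sharpness measure may differ):

```python
def gradient_sharpness(image):
    """Gradient-energy sharpness of a 2D image (list of rows): the sum of
    squared finite differences along rows and columns. Motion blur spreads
    edges over more pixels, which lowers this score."""
    s = 0.0
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                s += (image[r][c + 1] - image[r][c]) ** 2
            if r + 1 < rows:
                s += (image[r + 1][c] - image[r][c]) ** 2
    return s
```
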

  15. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by using deformation analysis exploiting directly the original 3D point clouds assuming some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the
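
    The C2C comparison referred to above reduces two point clouds to nearest-neighbour distances. A brute-force sketch of that idea (fine for illustration; real TLS clouds need a spatial index such as a k-d tree):

```python
import math

def c2c_mean_distance(cloud_a, cloud_b):
    """Cloud-to-cloud (C2C) comparison: for each point in cloud_a, the
    distance to its nearest neighbour in cloud_b, averaged over cloud_a.
    Brute force, so only suitable for small illustrative clouds."""
    total = 0.0
    for a in cloud_a:
        total += min(math.dist(a, b) for b in cloud_b)
    return total / len(cloud_a)
```

    Note that C2C measures closeness of surfaces, not true displacement; a point sliding along the slope surface can score near zero, which is one reason feature tracking in the raw clouds is explored as a complement.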

  16. Tracking immune-related cell responses to drug delivery microparticles in 3D dense collagen matrix.

    PubMed

    Obarzanek-Fojt, Magdalena; Curdy, Catherine; Loggia, Nicoletta; Di Lena, Fabio; Grieder, Kathrin; Bitar, Malak; Wick, Peter

    2016-10-01

    Beyond the therapeutic purpose, the impact of drug delivery microparticles on local tissue and inflammatory responses remains to be further elucidated, specifically for reactions mediated by the host immune cells. Such immediate and prolonged reactions may adversely influence the release efficacy and intended therapeutic pathway. The lack of suitable in vitro platforms limits our ability to gain insight into the nature of immune responses at a single cell level. In order to establish an in vitro 3D system mimicking the connective host tissue counterpart, we utilized reproducible, compressed, rat-tail collagen polymerized matrices. THP1 cells (human acute monocytic leukaemia cells) differentiated into macrophage-like cells were chosen as the cell model and their functionality was retained in the dense rat-tail collagen matrix. Placebo microparticles were later combined in the immune-cell-seeded system during collagen polymerization, and secreted pro-inflammatory factors TNFα and IL-8 were used as the immune response readout (ELISA). Our data showed elevated TNFα and IL-8 secretion by macrophage THP1 cells, indicating that Placebo microparticles trigger certain immune cell responses under 3D, in vivo-like conditions. Furthermore, we have shown that the system is sensitive enough to measure the differences in THP1 macrophage pro-inflammatory responses to Active Pharmaceutical Ingredient (API) microparticles with different API release kinetics. We have successfully developed a tissue-like, advanced, in vitro system enabling selective "readouts" of specific responses of immune-related cells. Such a system may provide the basis of an advanced toolbox enabling systemic evaluation and prediction of in vivo microparticle reactions on human immune-related cells.

  17. Motion object tracking algorithm using multi-cameras

    NASA Astrophysics Data System (ADS)

    Kong, Xiaofang; Chen, Qian; Gu, Guohua

    2015-09-01

    Motion object tracking is one of the most important research directions in computer vision. Challenges in designing a robust tracking method usually arise from partial or complete occlusion of targets. A multi-camera tracking algorithm that exploits the homography relations among three views can deal with this issue effectively, since combining information from cameras in different views makes the reconstructed target more complete and accurate. In this paper, a robust visual tracking algorithm based on the homography relations of three cameras in different views is presented to cope with occlusion. First, as the main contribution of this paper, a tracking algorithm based on low-rank matrix representation within a particle filter framework is applied to track the same target in the public region of each view. The target model and the occlusion model are established, and an alternating optimization algorithm is used to solve the proposed optimization formulation during tracking. Then, we select the plane in which the target has the largest occlusion weight as the principal plane and calculate the homography to obtain the mapping relations between views. Finally, the images of the other two views are projected into the principal plane. By making use of the homography relations between views, complete information about the occluded target can be recovered. The proposed algorithm has been evaluated on several challenging image sequences, and experiments show that it avoids tracking failure, particularly under occlusion, and improves tracking accuracy compared with other state-of-the-art algorithms.
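The projection step described above, mapping points from an auxiliary view into the principal plane through a homography, can be sketched with NumPy. The matrix `H` and the point set here are illustrative, not taken from the paper:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points into another view via a 3x3 homography."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # dehomogenize

# Illustrative homography: a pure translation by (5, -2)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[10.0, 20.0], [0.0, 0.0]])
mapped = apply_homography(H, pts)
```

In the actual three-view setting the homographies would be estimated from corresponding points on the common ground plane rather than written down directly.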

  18. 3D tracking of surgical instruments using a single camera for laparoscopic surgery simulation.

    PubMed

    Shin, Sangkyun; Kim, Youngjun; Kwak, Hyunsoo; Lee, Deukhee; Park, Sehyung

    2011-01-01

    Most laparoscopic surgery simulation systems are expensive and complex. To overcome these problems, this study presents a novel three-dimensional tracking method for laparoscopic surgical instruments that uses only a single camera and fiducial markers. The proposed method does not require any mechanical parts to measure the three-dimensional positions/orientations of surgical instruments and the opening angle of graspers. We implemented simple and cost-effective hardware using the proposed method and successfully combined it with virtual simulation software for laparoscopic surgery.
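The geometry behind single-camera 3D tracking of a fiducial marker can be illustrated with a pinhole model: a marker of known physical size gives its depth from its apparent pixel size, and the pixel coordinates then back-project to a 3D position. The focal length, marker size, and pixel values below are invented for illustration and are not from the paper:

```python
import numpy as np

def marker_depth(focal_px, marker_size_m, apparent_size_px):
    """Pinhole-camera depth estimate from a marker's known physical size."""
    return focal_px * marker_size_m / apparent_size_px

def backproject(u, v, z, focal_px, cx, cy):
    """Recover the 3D camera-frame position of a pixel at a known depth."""
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.array([x, y, z])

# 2 cm marker imaged 40 px wide by an 800 px focal-length camera -> 0.4 m away
z = marker_depth(focal_px=800.0, marker_size_m=0.02, apparent_size_px=40.0)
p = backproject(u=720.0, v=360.0, z=z, focal_px=800.0, cx=640.0, cy=360.0)
```

A full implementation would additionally recover orientation (and the grasper opening angle) from several markers, e.g. via a PnP-style pose solver.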

  19. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying datasets being generated demand new visualization and quantification techniques. Visualization alone is not sufficient: without proper measurement information and computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology which allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates on data in situ: where it is stored and as it is computed.
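One common way to follow extracted features over time, which may differ in detail from the authors' method, is to match labeled regions between consecutive time steps by maximum spatial overlap:

```python
import numpy as np

def match_features(labels_t0, labels_t1):
    """Match labeled features across two time steps by maximum spatial overlap."""
    matches = {}
    for f in np.unique(labels_t0):
        if f == 0:                       # 0 = background
            continue
        overlap = labels_t1[labels_t0 == f]
        overlap = overlap[overlap != 0]
        if overlap.size:
            # the successor is the feature occupying most of the same cells
            matches[int(f)] = int(np.bincount(overlap).argmax())
    return matches

# Two toy labeled frames: feature 1 evolves into 3, feature 2 into 4
t0 = np.array([[1, 1, 0], [0, 0, 2], [0, 0, 2]])
t1 = np.array([[0, 3, 3], [0, 0, 4], [0, 0, 4]])
m = match_features(t0, t1)
```

Splits and merges would show up as several features sharing a best-overlap partner; handling them requires bookkeeping beyond this sketch.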

  20. Pulmonary CT image registration and warping for tracking tissue deformation during the respiratory cycle through 3D consistent image registration

    PubMed Central

    Li, Baojun; Christensen, Gary E.; Hoffman, Eric A.; McLennan, Geoffrey; Reinhardt, Joseph M.

    2008-01-01

    Tracking lung tissues during the respiratory cycle has been a challenging task for diagnostic CT and CT-guided radiotherapy. We propose an intensity- and landmark-based image registration algorithm to perform image registration and warping of 3D pulmonary CT image data sets, based on consistency constraints and matching corresponding airway branchpoints. In this paper, we demonstrate the effectiveness and accuracy of this algorithm in tracking lung tissues using both animal and human data sets. In the animal study, the results showed a tracking accuracy of 1.9 mm between 50% functional residual capacity (FRC) and 85% total lung capacity (TLC) for 12 metal seeds implanted in the lungs of a breathing sheep under precise volume control using a pulmonary ventilator. Visual inspection of the human subject results revealed the algorithm's potential not only in matching the global shapes, but also in registering the internal structures (e.g., oblique lobe fissures, pulmonary artery branches, etc.). These results suggest that our algorithm has significant potential for warping and tracking lung tissue deformation with applications in diagnostic CT, CT-guided radiotherapy treatment planning, and therapeutic effect evaluation. PMID:19175115

  1. Pulmonary CT image registration and warping for tracking tissue deformation during the respiratory cycle through 3D consistent image registration.

    PubMed

    Li, Baojun; Christensen, Gary E; Hoffman, Eric A; McLennan, Geoffrey; Reinhardt, Joseph M

    2008-12-01

    Tracking lung tissues during the respiratory cycle has been a challenging task for diagnostic CT and CT-guided radiotherapy. We propose an intensity- and landmark-based image registration algorithm to perform image registration and warping of 3D pulmonary CT image data sets, based on consistency constraints and matching corresponding airway branchpoints. In this paper, we demonstrate the effectiveness and accuracy of this algorithm in tracking lung tissues using both animal and human data sets. In the animal study, the results showed a tracking accuracy of 1.9 mm between 50% functional residual capacity (FRC) and 85% total lung capacity (TLC) for 12 metal seeds implanted in the lungs of a breathing sheep under precise volume control using a pulmonary ventilator. Visual inspection of the human subject results revealed the algorithm's potential not only in matching the global shapes, but also in registering the internal structures (e.g., oblique lobe fissures, pulmonary artery branches, etc.). These results suggest that our algorithm has significant potential for warping and tracking lung tissue deformation with applications in diagnostic CT, CT-guided radiotherapy treatment planning, and therapeutic effect evaluation.
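The consistency constraint mentioned above penalizes forward and reverse transforms that are not inverses of each other. A minimal sketch of the corresponding error measure, with a toy affine pair standing in for the actual registration transforms:

```python
import numpy as np

def inverse_consistency_error(T_ab, T_ba, pts):
    """Mean ||T_ab(T_ba(x)) - x||: zero when the two transforms are exact inverses."""
    round_trip = np.array([T_ab(T_ba(p)) for p in pts])
    return float(np.mean(np.linalg.norm(round_trip - pts, axis=1)))

# Illustrative transform pair: scale by 2 and its exact inverse
T_ab = lambda p: 2.0 * p
T_ba = lambda p: 0.5 * p
pts = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
err = inverse_consistency_error(T_ab, T_ba, pts)
```

In consistent registration this quantity (evaluated over the whole image domain) is added to the intensity and landmark terms of the cost function being minimized.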

  2. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV kV imaging

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.

    2008-12-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real

  3. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    PubMed

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general
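The MV-kV triangulation step, recovering a 3D marker position from two calibrated viewing rays, can be sketched as a least-squares ray intersection. The geometry below is illustrative, not the paper's calibration:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares 3D point closest to a set of rays (origin + t * direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two illustrative rays intersecting at (1, 1, 0)
origins = [np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0])]
directions = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
p = triangulate(origins, directions)
```

With noisy or asynchronous MV and kV detections the rays do not intersect exactly, which is why a least-squares formulation (rather than an exact intersection) is the natural choice.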

  4. Lagrangian 3D particle tracking in high-speed flows: Shake-The-Box for multi-pulse systems

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Schanz, Daniel; Reuther, Nico; Kähler, Christian J.; Schröder, Andreas

    2016-08-01

    The Shake-The-Box (STB) particle tracking technique, recently introduced for time-resolved 3D particle image velocimetry (PIV) images, is applied here to data from a multi-pulse investigation of a turbulent boundary layer flow with adverse pressure gradient in air at 36 m/s (Re_τ = 10,650). The multi-pulse acquisition strategy allows for the recording of four-pulse long time-resolved sequences with a time separation of a few microseconds. The experimental setup consists of a dual-imaging system and a dual-double-cavity laser emitting orthogonal polarization directions to separate the four pulses. The STB particle triangulation and tracking strategy is adapted here to cope with the limited amount of realizations available along the time sequence and to take advantage of the ghost track reduction offered by the use of two independent imaging systems. Furthermore, a correction scheme to compensate for camera vibrations is discussed, together with a method to accurately identify the position of the wall within the measurement domain. Results show that approximately 80,000 tracks can be instantaneously reconstructed within the measurement volume, enabling the evaluation of both dense velocity fields, suitable for spatial gradients evaluation, and highly spatially resolved boundary layer profiles. Turbulent boundary layer profiles obtained from ensemble averaging of the STB tracks are compared to results from 2D-PIV and long-range micro particle tracking velocimetry; the comparison shows the capability of the STB approach in delivering accurate results across a wide range of scales.
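The track-extension idea behind STB, predicting each particle forward in time and searching for a detection near the prediction, can be sketched as follows. This is a simplified constant-velocity stand-in, not the actual Shake-The-Box scheme with its iterative image-matching ("shaking") step:

```python
import numpy as np

def extend_tracks(tracks, frame, max_dist=1.0):
    """Extend each track by predicting the next position and taking the nearest candidate."""
    for tr in tracks:
        pred = 2 * tr[-1] - tr[-2]               # constant-velocity prediction
        d = np.linalg.norm(frame - pred, axis=1)
        j = int(np.argmin(d))
        if d[j] < max_dist:                      # accept only plausible continuations
            tr.append(frame[j])
    return tracks

# One illustrative particle moving +1 in x per pulse, plus a distractor detection
tracks = [[np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]]
frame3 = np.array([[2.1, 0.0, 0.0], [5.0, 5.0, 5.0]])
tracks = extend_tracks(tracks, frame3)
```

The short four-pulse sequences of the multi-pulse setup are exactly the regime where such predictions must work with very few prior time steps.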

  5. An efficient quasi-3D particle tracking-based approach for transport through fractures with application to dynamic dispersion calculation.

    PubMed

    Wang, Lichun; Cardenas, M Bayani

    2015-08-01

    The quantitative study of transport through fractured media has continued for many decades, but has often been constrained by observational and computational challenges. Here, we developed an efficient quasi-3D random walk particle tracking (RWPT) algorithm to simulate solute transport through natural fractures based on a 2D flow field generated from the modified local cubic law (MLCL). As a reference, we also modeled the actual breakthrough curves (BTCs) through direct simulations with the 3D advection-diffusion equation (ADE) and Navier-Stokes equations. The RWPT algorithm along with the MLCL accurately reproduced the actual BTCs calculated with the 3D ADE. The BTCs exhibited non-Fickian behavior, including early arrival and long tails. Using the spatial information of particle trajectories, we further analyzed the dynamic dispersion process through moment analysis. From this, asymptotic time scales were determined for solute dispersion to distinguish non-Fickian from Fickian regimes. This analysis illustrates the advantage and benefit of using an efficient combination of flow modeling and RWPT. PMID:26042625
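A minimal random walk particle tracking step advects each particle with the local velocity and adds a Gaussian diffusive jump of variance 2·D·dt; ensemble moments of the particle cloud then give the mean displacement and dispersion. The uniform 1D velocity below is a simplification; the paper drives the walk with a 2D MLCL flow field:

```python
import numpy as np

def rwpt(x0, velocity, D, dt, steps, rng):
    """Random-walk particle tracking: advection plus Gaussian diffusive jumps."""
    x = x0.copy()
    for _ in range(steps):
        x = x + velocity * dt + np.sqrt(2 * D * dt) * rng.standard_normal(len(x))
    return x

rng = np.random.default_rng(0)
x = rwpt(x0=np.zeros(20000), velocity=1.0, D=0.5, dt=0.01, steps=100, rng=rng)
mean_disp = x.mean()     # expected v * t = 1.0
variance = x.var()       # expected 2 * D * t = 1.0
```

The moment analysis in the paper tracks how this variance grows with time; linear growth signals the Fickian regime, deviations from it the non-Fickian one.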

  6. A smart homecage system with 3D tracking for long-term behavioral experiments.

    PubMed

    Lee, Byunghun; Kiani, Mehdi; Ghovanloo, Maysam

    2014-01-01

    A wirelessly-powered homecage system, called the EnerCage-HC, that is equipped with multi-coil wireless power transfer, closed-loop power control, optical behavioral tracking, and a graphic user interface (GUI) is presented for long-term electrophysiology experiments. The EnerCage-HC system can wirelessly power a mobile unit attached to a small animal subject and also track its behavior in real-time as it is housed inside a standard homecage. The EnerCage-HC system is equipped with one central and four overlapping slanted wire-wound coils (WWCs) with optimal geometries to form 3-and 4-coil power transmission links while operating at 13.56 MHz. Utilizing multi-coil links increases the power transfer efficiency (PTE) compared to conventional 2-coil links and also reduces the number of power amplifiers (PAs) to only one, which significantly reduces the system complexity, cost, and dissipated heat. A Microsoft Kinect installed 90 cm above the homecage localizes the animal position and orientation with 1.6 cm accuracy. An in vivo experiment was conducted on a freely behaving rat by continuously delivering 24 mW to the mobile unit for > 7 hours inside a standard homecage. PMID:25570379
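For context, the efficiency gain from stronger coupling can be seen in the textbook maximum-efficiency expression for an inductive power link, which depends only on the figure of merit k·sqrt(Q1·Q2). This general two-coil formula is not taken from the paper and ignores the 3- and 4-coil specifics of the EnerCage-HC:

```python
import math

def max_link_efficiency(k, q1, q2):
    """Maximum PTE of an inductive link as a function of coupling k and coil Q-factors."""
    fom2 = (k ** 2) * q1 * q2                       # squared figure of merit
    return fom2 / (1.0 + math.sqrt(1.0 + fom2)) ** 2

eta_weak = max_link_efficiency(k=0.05, q1=100, q2=100)    # loosely coupled
eta_strong = max_link_efficiency(k=0.2, q1=100, q2=100)   # tightly coupled
```

Intermediate resonators in multi-coil links effectively raise this figure of merit, which is the qualitative reason they outperform plain 2-coil links.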

  7. Multisensor 3D tracking for counter small unmanned air vehicles (CSUAV)

    NASA Astrophysics Data System (ADS)

    Vasquez, Juan R.; Tarplee, Kyle M.; Case, Ellen E.; Zelnio, Anne M.; Rigling, Brian D.

    2008-04-01

    A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. Large UAVs are typically state owned, whereas small UAVs (SUAVs) may be in the form of widely available remote-controlled aircraft. The potential threat of these SUAVs to both the military and the civilian populace has led to research efforts to counter them via tracking, identification, and attack. The small size and low radar cross section of these targets make detection and tracking difficult with a single sensor such as a radar or video camera. In addition, clutter objects make accurate identification difficult without very high resolution data, motivating the use of an acoustic array to support this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an acoustic array, and, in the future, a radar. A sensor resource management concept is presented along with preliminary results from three of the sensors.

  8. A Smart Homecage System with 3D Tracking for Long-Term Behavioral Experiments

    PubMed Central

    Lee, Byunghun; Kiani, Mehdi; Ghovanloo, Maysam

    2015-01-01

    A wirelessly-powered homecage system, called the EnerCage-HC, that is equipped with multi-coil wireless power transfer, closed-loop power control, optical behavioral tracking, and a graphic user interface (GUI) is presented for long-term electrophysiology experiments. The EnerCage-HC system can wirelessly power a mobile unit attached to a small animal subject and also track its behavior in real-time as it is housed inside a standard homecage. The EnerCage-HC system is equipped with one central and four overlapping slanted wire-wound coils (WWCs) with optimal geometries to form 3- and 4-coil power transmission links while operating at 13.56 MHz. Utilizing multi-coil links increases the power transfer efficiency (PTE) compared to conventional 2-coil links and also reduces the number of power amplifiers (PAs) to only one, which significantly reduces the system complexity, cost, and dissipated heat. A Microsoft Kinect installed 90 cm above the homecage localizes the animal position and orientation with 1.6 cm accuracy. An in vivo experiment was conducted on a freely behaving rat by continuously delivering 24 mW to the mobile unit for > 7 hours inside a standard homecage. PMID:25570379

  9. Catheter tracking in asynchronous biplane fluoroscopy images by 3D B-snakes

    NASA Astrophysics Data System (ADS)

    Schenderlein, Marcel; Stierlin, Susanne; Manzke, Robert; Rasche, Volker; Dietmayer, Klaus

    2010-02-01

    Minimally invasive catheter ablation procedures are guided by biplane fluoroscopy images visualising the interventional scene from two different orientations. However, these images do not provide direct access to their inherent spatial information. A three-dimensional reconstruction and visualisation of the catheters from such projections has the potential to support quick and precise catheter navigation. It enhances the perception of the interventional situation and provides means of three-dimensional catheter pose documentation. In this contribution we develop an algorithm for tracking the three-dimensional pose of electro-physiological catheters in biplane fluoroscopy images. It is based on the B-Snake algorithm, which had to be adapted to the biplane and, in particular, the asynchronous image acquisition situation. A three-dimensional B-spline curve is transformed so that its projections are consistent with catheter-path-enhancing feature images, while the information from the missing image caused by the asynchronous acquisition is interpolated from its sequence neighbours. In order to analyse the three-dimensional precision, virtual images were created from patient data sets and three-dimensional ground truth catheter paths. The evaluation of the three-dimensional catheter pose reconstruction by means of our algorithm on 33 such virtual image sets indicated a mean catheter pose error of 1.26 mm and a mean tip deviation of 3.28 mm. The tracking capability of the algorithm was evaluated on 10 patient data sets. In 94% of all images our algorithm followed the catheter projections.
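The core idea, a 3D curve whose 2D projections are matched against both views, can be sketched with a Bézier curve (evaluated by de Casteljau's algorithm) standing in for the paper's B-spline, and orthographic projections standing in for the fluoroscopic ones. Control points and view axes are invented for illustration:

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a 3D Bezier curve (a simple stand-in for a B-spline) by de Casteljau."""
    pts = np.array(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def project(p, axis):
    """Orthographic projection dropping one axis: a toy model of one fluoroscopy view."""
    return np.delete(p, axis)

ctrl = [[0, 0, 0], [1, 2, 0], [2, 2, 1], [3, 0, 1]]
p = bezier(ctrl, 0.5)          # a 3D point on the catheter curve
view_a = project(p, axis=2)    # as seen by plane A
view_b = project(p, axis=0)    # as seen by plane B
```

Fitting then amounts to adjusting the 3D control points so that both projected curves lie on bright catheter ridges in their respective feature images.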

  10. A Detailed Study of FDIRC Prototype with Waveform Digitizing Electronics in Cosmic Ray Telescope Using 3D Tracks.

    SciTech Connect

    Nishimura, K

    2012-07-01

    We present a detailed study of a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC) with waveform digitizing electronics. In this test study, the FDIRC prototype has been instrumented with seven Hamamatsu H-8500 MaPMTs. Waveforms from ~450 pixels are digitized with waveform sampling electronics based on the BLAB2 ASIC, operating at a sampling speed of ~2.5 GSa/s. The FDIRC prototype was tested in a large cosmic ray telescope (CRT) providing 3D muon tracks with ~1.5 mrad angular resolution and muon energy E_muon > 1.6 GeV. In this study we provide a detailed analysis of the tails in the Cherenkov angle distribution as a function of various variables, compare experimental results with simulation, and identify the major contributions to the tails. We demonstrate that to see the full impact of these tails on the Cherenkov angle resolution, it is crucial to use 3D tracks, and have a full understanding of the role of ambiguities. These issues could not be fully explored in previous FDIRC studies where the beam was perpendicular to the quartz radiator bars. This work is relevant for the final FDIRC prototype of the PID detector at SuperB, which will be tested this year in the CRT setup.

  11. A Detailed Study of FDIRC Prototype with Waveform Digitizing Electronics in Cosmic Ray Telescope Using 3D Tracks

    SciTech Connect

    Nishimura, K.; Dey, B.; Aston, D.; Leith, D.W.G.S.; Ratcliff, B.; Roberts, D.; Ruckman, L.; Shtol, D.; Varner, G.S.; Va'vra, J.; Vavra, Jerry; /SLAC

    2012-07-30

    We present a detailed study of a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC) with waveform digitizing electronics. In this test study, the FDIRC prototype has been instrumented with seven Hamamatsu H-8500 MaPMTs. Waveforms from ~450 pixels are digitized with waveform sampling electronics based on the BLAB2 ASIC, operating at a sampling speed of ~2.5 GSa/s. The FDIRC prototype was tested in a large cosmic ray telescope (CRT) providing 3D muon tracks with ~1.5 mrad angular resolution and muon energy E_muon > 1.6 GeV. In this study we provide a detailed analysis of the tails in the Cherenkov angle distribution as a function of various variables, compare experimental results with simulation, and identify the major contributions to the tails. We demonstrate that to see the full impact of these tails on the Cherenkov angle resolution, it is crucial to use 3D tracks, and have a full understanding of the role of ambiguities. These issues could not be fully explored in previous FDIRC studies where the beam was perpendicular to the quartz radiator bars. This work is relevant for the final FDIRC prototype of the PID detector at SuperB, which will be tested this year in the CRT setup.
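The quantity whose distribution tails are analyzed here is the Cherenkov angle, fixed by cos θ = 1/(nβ). For a relativistic muon in a fused-silica radiator (n ≈ 1.47, an illustrative value, not quoted from the paper) this gives roughly 0.82 rad:

```python
import math

def cherenkov_angle(n, beta=1.0):
    """Cherenkov emission angle from cos(theta) = 1 / (n * beta)."""
    return math.acos(1.0 / (n * beta))

theta = cherenkov_angle(n=1.47)   # relativistic muon, beta ~ 1
```

The ~1.5 mrad track angular resolution of the CRT matters because any error in the reconstructed track direction propagates directly into the measured Cherenkov angle.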

  12. Dynamic tracking of a deformable tissue based on 3D-2D MR-US image registration

    NASA Astrophysics Data System (ADS)

    Marami, Bahram; Sirouspour, Shahin; Fenster, Aaron; Capson, David W.

    2014-03-01

    Real-time registration of pre-operative magnetic resonance (MR) or computed tomography (CT) images with intra-operative ultrasound (US) images can be a valuable tool in image-guided therapies and interventions. This paper presents an automatic method for dynamically tracking the deformation of a soft tissue based on registering pre-operative three-dimensional (3D) MR images to intra-operative two-dimensional (2D) US images. The registration algorithm is based on concepts in state estimation where a dynamic finite element (FE)-based linear elastic deformation model correlates the imaging data in the spatial and temporal domains. A Kalman-like filtering process estimates the unknown deformation states of the soft tissue using the deformation model and a measure of error between the predicted and the observed intra-operative imaging data. The error is computed based on an intensity-based distance metric, namely, modality independent neighborhood descriptor (MIND), and no segmentation or feature extraction from images is required. The performance of the proposed method is evaluated by dynamically deforming 3D pre-operative MR images of a breast phantom tissue based on real-time 2D images obtained from an US probe. Experimental results on different registration scenarios showed that deformation tracking converges in a few iterations. The average target registration error on the plane of 2D US images for manually selected fiducial points was between 0.3 and 1.5 mm depending on the size of deformation.
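The Kalman-like estimation loop described above alternates a model-based prediction with a measurement-error correction. A generic linear version, using neither the paper's FE deformation model nor the MIND metric, looks like this:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # predict with the dynamics (here: deformation) model
    x = F @ x
    P = F @ P @ F.T + Q
    # correct with the error between prediction and observation
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 2-state system observed directly, fed a repeated constant measurement
F = np.eye(2); H = np.eye(2)
Q = 0.01 * np.eye(2); R = 0.1 * np.eye(2)
x = np.zeros(2); P = np.eye(2)
for z in [np.array([1.0, 0.5])] * 20:
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

In the paper the state x holds FE nodal displacements and the "measurement error" y is an image-similarity residual rather than a direct state observation.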

  13. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
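The aggregation step described above, formulating motion sequences as transitions between static poses, can be sketched by counting pose-to-pose transitions across all sequences (the pose labels are invented; MotionFlow additionally arranges these into a tree diagram):

```python
from collections import Counter

def aggregate_transitions(sequences):
    """Count pose-to-pose transitions across all motion sequences."""
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    return counts

seqs = [["stand", "reach", "grab"],
        ["stand", "reach", "wave"],
        ["stand", "reach", "grab"]]
trans = aggregate_transitions(seqs)
```

Edge weights like these are what a flow visualization renders as band widths, making frequent motion patterns visually dominant.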

  14. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances ranging up to 90% with variations in conditions in graphical complexity having limited effect on mVEP performance; thus, demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. PMID:27590974

  15. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances ranging up to 90% with variations in conditions in graphical complexity having limited effect on mVEP performance; thus, demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming.

  16. Ultra-high-speed 3D astigmatic particle tracking velocimetry: application to particle-laden supersonic impinging jets

    NASA Astrophysics Data System (ADS)

    Buchmann, N. A.; Cierpka, C.; Kähler, C. J.; Soria, J.

    2014-11-01

    The paper demonstrates ultra-high-speed three-component, three-dimensional (3C3D) velocity measurements of micron-sized particles suspended in a supersonic impinging jet flow. Understanding the dynamics of individual particles in such flows is important for the design of particle impactors for drug delivery or cold gas dynamic spray processing. The underexpanded jet flow is produced via a converging nozzle, and micron-sized particles (d_p = 110 μm) are introduced into the gas flow. The supersonic jet impinges onto a flat surface, and the particle impact velocity and particle impact angle are studied for a range of flow conditions and impingement distances. The imaging system consists of an ultra-high-speed digital camera (Shimadzu HPV-1) capable of recording rates of up to 1 Mfps. Astigmatism particle tracking velocimetry (APTV) is used to measure the 3D particle position (Cierpka et al., Meas Sci Technol 21(045401):13, 2010) by coding the particle depth location in the 2D images by adding a cylindrical lens to the high-speed imaging system. Based on the reconstructed 3D particle positions, the particle trajectories are obtained via a higher-order tracking scheme that takes advantage of the high temporal resolution to increase robustness and accuracy of the measurement. It is shown that the particle velocity and impingement angle are affected by the gas flow in a manner depending on the nozzle pressure ratio and stand-off distance, where higher pressure ratios and stand-off distances lead to higher impact velocities and larger impact angles.
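The depth coding in APTV works because the cylindrical lens makes particle images elliptical, with the ratio of the image axes varying monotonically with depth; a calibration curve then maps the ratio back to z. The linear mapping and all numbers below are invented for illustration:

```python
import numpy as np

def depth_from_astigmatism(ax, ay, coeffs):
    """Map a particle image's axis ratio to depth via a polynomial calibration curve."""
    ratio = ax / ay
    return np.polyval(coeffs, ratio)

# Toy linear calibration: axis ratio 1.0 -> z = 0, slope 500 um per unit ratio
coeffs = [500.0, -500.0]                                     # z = 500*ratio - 500 (um)
z = depth_from_astigmatism(ax=12.0, ay=10.0, coeffs=coeffs)  # ratio 1.2
```

In practice the calibration is measured by scanning particles through known depths, and the fitted curve is usually higher than first order.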

  17. Readily Accessible Multiplane Microscopy: 3D Tracking the HIV-1 Genome in Living Cells.

    PubMed

    Itano, Michelle S; Bleck, Marina; Johnson, Daniel S; Simon, Sanford M

    2016-02-01

    Human immunodeficiency virus (HIV)-1 infection and the associated disease AIDS are a major cause of human death worldwide with no vaccine or cure available. The trafficking of HIV-1 RNAs from sites of synthesis in the nucleus, through the cytoplasm, to sites of assembly at the plasma membrane are critical steps in HIV-1 viral replication, but are not well characterized. Here we present a broadly accessible microscopy method that captures multiple focal planes simultaneously, which allows us to image the trafficking of HIV-1 genomic RNAs with high precision. This method utilizes a customization of a commercial multichannel emission splitter that enables high-resolution 3D imaging with single-macromolecule sensitivity. We show with high temporal and spatial resolution that HIV-1 genomic RNAs are most mobile in the cytosol, and undergo confined mobility at sites along the nuclear envelope and in the nucleus and nucleolus. These provide important insights regarding the mechanism by which the HIV-1 RNA genome is transported to the sites of assembly of nascent virions. PMID:26567131

  19. Spatial synchronization of an insole pressure distribution system with a 3D motion analysis system for center of pressure measurements.

    PubMed

    Fradet, Laetitia; Siegel, Johannes; Dahl, Marieke; Alimusaj, Merkur; Wolf, Sebastian I

    2009-01-01

    Insole pressure systems are often more appropriate than force platforms for analysing the center of pressure (CoP), as they are more flexible in use and indicate the position of the CoP that characterizes the foot/shoe contact during shod gait. However, these systems are typically not synchronized with 3D motion analysis systems. The present paper proposes a direct method, requiring no force platform, for synchronizing an insole pressure system with a 3D motion analysis system. The distance separating 24 different CoPs measured optically and their equivalents measured by the insoles and transformed into the global coordinate system did not exceed 2 mm, confirming the suitability of the proposed method. Additionally, during static single limb stance, distances smaller than 7 mm and correlations higher than 0.94 were found between CoP trajectories measured with insoles and force platforms. Similar measurements were performed during gait to illustrate the characteristics of the CoP measured with each system. The distance separating the two CoPs was below 19 mm and the coefficient of correlation above 0.86. The proposed method opens the possibility of new experiments, such as the investigation of proprioception when climbing stairs or negotiating obstacles.
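    The core of such a comparison — mapping the insole's local CoP into the lab frame and checking distance and correlation against a reference trajectory — can be sketched as follows. The planar rotation, translation, and noise level are assumptions, not the paper's calibration:

```python
import numpy as np

theta = np.deg2rad(30.0)               # assumed insole orientation in the lab
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([120.0, 80.0])            # assumed insole origin in lab frame, mm

# synthetic heel-to-toe CoP path in local insole coordinates (mm)
cop_local = np.column_stack([np.linspace(0, 200, 50),
                             10 * np.sin(np.linspace(0, np.pi, 50))])
cop_lab = cop_local @ R.T + t          # insole CoP expressed in the lab frame

# reference CoP (e.g. from a force platform), here the same path plus noise
cop_ref = cop_lab + np.random.default_rng(1).normal(0, 1.0, cop_lab.shape)

dist = np.linalg.norm(cop_lab - cop_ref, axis=1)     # per-sample separation
r = np.corrcoef(cop_lab[:, 0], cop_ref[:, 0])[0, 1]  # correlation, one axis

max_dist = float(dist.max())
```

    In the paper the transform comes from optically tracked markers on the shoe; here it is hard-coded purely for illustration.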

  20. A 3D analysis of fore- and hindlimb motion during overground and ladder walking: comparison of control and unloaded rats.

    PubMed

    Canu, Marie-Hélène; Garnier, Cyril

    2009-07-01

    During locomotion, muscles are controlled by a network of neurones located in the spinal cord and by supraspinal structures. Alterations in that neuromuscular system have a functional impact, in particular on locomotion. The hindlimb unloading (HU) model in rat has been commonly used to generate disuse since it suppresses the hindlimb loading and limits movements. In consequence, it induces plastic mechanisms in the muscle, the spinal cord and the sensorimotor cortex. The aim of this study was to assess the locomotion in HU rats in two conditions: (1) on a runway and (2) in a challenging situation involving the participation of supraspinal structures (ladder walking). For that purpose, the motor pattern has been investigated by means of 3D motion analysis of the right fore- and hindlimbs as well as electromyographic recording of the soleus and tibialis anterior muscles. The 3D motion results show that HU induces a support-dependent alteration of the kinematics: increased duration of step, stance and swing; increased ankle flexion during stance and hyperextension at toe-off; lower protraction during swing. The electromyographic results show that whatever the support, the flexor and extensor burst duration was longer in HU rats. In addition, results show that ladder exacerbates some effects of HU. As ladder walking is a situation which requires precision, it is suggested that the control of hindlimb movement by supraspinal structures is affected in HU rats. PMID:19393236

  1. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    SciTech Connect

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; Gable, Carl W.; Karra, Satish

    2015-09-16

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.
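    The reconstruct-then-interpolate-then-advect idea can be illustrated in miniature. This toy 2D sketch is not the authors' implementation: inverse-distance weighting stands in for their control-volume velocity reconstruction, and the centroid positions and velocities are invented.

```python
import numpy as np

# four control-volume centroids with reconstructed Darcy velocities
centroids = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
cell_vel = np.array([[1.0, 0.0], [1.0, 0.2], [1.0, -0.2], [1.0, 0.0]])

def velocity(p):
    """Continuous velocity at point p via inverse-distance interpolation."""
    d = np.linalg.norm(centroids - p, axis=1)
    if d.min() < 1e-12:                      # exactly on a centroid
        return cell_vel[d.argmin()]
    w = 1.0 / d**2
    return (w[:, None] * cell_vel).sum(axis=0) / w.sum()

def advect(p0, dt, n_steps):
    """Explicit Euler particle tracking through the interpolated field."""
    path = [np.asarray(p0, float)]
    for _ in range(n_steps):
        path.append(path[-1] + dt * velocity(path[-1]))
    return np.array(path)

path = advect([0.0, 0.5], dt=0.05, n_steps=20)
```

    In the paper the velocities come from locally mass-conserving face fluxes, and control volumes at fracture intersections carry multiple velocities; both refinements are omitted here.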

  3. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking

    SciTech Connect

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-15

    Purpose: In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. Methods: The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. Results: The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation.
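    The moving-average idea itself is simple and illustrates the efficiency/accuracy trade-off described above: smoothing the perpendicular position reduces the high-frequency excursions that trigger beam holds, at the cost of geometric error. The respiratory trace, sample rate, and 5 s causal window below are assumptions, not the study's parameters.

```python
import numpy as np

fs = 25.0                                  # samples per second (assumed)
t = np.arange(0, 30, 1 / fs)
# synthetic respiratory motion with a slow baseline drift, in mm
trace = 8.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * t

window = int(5 * fs)                       # 5 s causal averaging window
kernel = np.ones(window) / window
# pad the start with the first sample so the causal mean is defined from t=0
padded = np.concatenate([np.full(window - 1, trace[0]), trace])
smoothed = np.convolve(padded, kernel, mode="valid")   # causal moving mean

# geometric cost of tracking the smoothed trace instead of the raw one
rms_error = float(np.sqrt(np.mean((smoothed - trace) ** 2)))
```

    The smoothed trace has far smaller excursions (fewer beam holds) but lags the raw motion, which is exactly the geometric-error penalty the paper quantifies.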

  4. 3D PET image reconstruction including both motion correction and registration directly into an MR or stereotaxic spatial atlas

    NASA Astrophysics Data System (ADS)

    Gravel, Paul; Verhaeghe, Jeroen; Reader, Andrew J.

    2013-01-01

    This work explores the feasibility and impact of including both the motion correction and the image registration transformation parameters from positron emission tomography (PET) image space to magnetic resonance (MR), or stereotaxic, image space within the system matrix of PET image reconstruction. This approach is motivated by the fields of neuroscience and psychiatry, where PET is used to investigate differences in activation patterns between different groups of participants, requiring all images to be registered to a common spatial atlas. Currently, image registration is performed after image reconstruction which introduces interpolation effects into the final image. Furthermore, motion correction (also requiring registration) introduces a further level of interpolation, and the overall result of these operations can lead to resolution degradation and possibly artifacts. It is important to note that performing such operations on a post-reconstruction basis means, strictly speaking, that the final images are not ones which maximize the desired objective function (e.g. maximum likelihood (ML), or maximum a posteriori reconstruction (MAP)). To correctly seek parameter estimates in the desired spatial atlas which are in accordance with the chosen reconstruction objective function, it is necessary to include the transformation parameters for both motion correction and registration within the system modeling stage of image reconstruction. Such an approach not only respects the statistically chosen objective function (e.g. ML or MAP), but furthermore should serve to reduce the interpolation effects. To evaluate the proposed method, this work investigates registration (including motion correction) using 2D and 3D simulations based on the high resolution research tomograph (HRRT) PET scanner geometry, with and without resolution modeling, using the ML expectation maximization (MLEM) reconstruction algorithm. The quality of reconstruction was assessed using bias
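    The difference between resampling after reconstruction and folding the transform into the system model can be illustrated with a toy 1D MLEM. This is purely schematic, not the HRRT model: an integer circular shift stands in for the rigid registration, and a 3-tap blur stands in for the detector response.

```python
import numpy as np

n = 32
shift = 5                                     # "registration" offset, voxels

# transform operator T: atlas space -> scanner space (circular shift)
T = np.roll(np.eye(n), shift, axis=0)
# detector blur B: simple 3-tap response
B = np.zeros((n, n))
for i in range(n):
    for k, w in ((-1, 0.25), (0, 0.5), (1, 0.25)):
        B[i, (i + k) % n] += w
A = B @ T                 # system matrix includes the transform

truth = np.zeros(n)
truth[10:14] = 1.0        # object defined directly in atlas space
data = A @ truth          # noiseless measured data

x = np.ones(n)            # MLEM: estimate lives in atlas space
sens = A.sum(axis=0)
for _ in range(200):
    x *= (A.T @ (data / np.maximum(A @ x, 1e-12))) / sens

peak = int(np.argmax(x))  # should fall inside the atlas-space support
```

    Because A maps atlas space to scanner space, each MLEM update maximizes the likelihood directly in the target space, with no post-reconstruction interpolation.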

  5. Diaphragm motion characterization using chest motion data for biomechanics-based lung tumor tracking during EBRT

    NASA Astrophysics Data System (ADS)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2016-03-01

    Despite recent advances in image-guided interventions, lung cancer External Beam Radiation Therapy (EBRT) is still very challenging due to respiration induced tumor motion. Among various proposed methods of tumor motion compensation, real-time tumor tracking is known to be one of the most effective solutions as it allows for maximum normal tissue sparing, less overall radiation exposure and a shorter treatment session. As such, we propose a biomechanics-based real-time tumor tracking method for effective lung cancer radiotherapy. In the proposed algorithm, the required boundary conditions for the lung Finite Element model, including diaphragm motion, are obtained using the chest surface motion as a surrogate signal. The primary objective of this paper is to demonstrate the feasibility of developing a function which is capable of inputting the chest surface motion data and outputting the diaphragm motion in real-time. For this purpose, after quantifying the diaphragm motion with a Principal Component Analysis (PCA) model, correlation coefficient between the model parameters of diaphragm motion and chest motion data was obtained through Partial Least Squares Regression (PLSR). Preliminary results obtained in this study indicate that the PCA coefficients representing the diaphragm motion can be obtained through chest surface motion tracking with high accuracy.
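    The surrogate-modeling pipeline described above can be sketched in a few lines. In this hedged toy version, ordinary least squares stands in for PLSR and all signals are synthetic: diaphragm motion is compressed to a PCA coefficient, which is then regressed on chest-surface features.

```python
import numpy as np

rng = np.random.default_rng(42)
n_frames, n_points = 200, 50

breath = np.sin(np.linspace(0, 12 * np.pi, n_frames))     # shared phase
mode = rng.normal(size=n_points)                          # spatial mode
diaphragm = np.outer(breath, mode) + 0.01 * rng.normal(size=(n_frames, n_points))
chest = np.column_stack([breath, breath**2]) + 0.01 * rng.normal(size=(n_frames, 2))

# PCA of diaphragm motion via SVD of the mean-centred data matrix
D = diaphragm - diaphragm.mean(axis=0)
U, S, Vt = np.linalg.svd(D, full_matrices=False)
coeffs = U[:, :1] * S[:1]                # first PCA coefficient per frame

# linear map chest features -> PCA coefficient (OLS stands in for PLSR)
X = np.column_stack([chest, np.ones(n_frames)])
beta, *_ = np.linalg.lstsq(X, coeffs, rcond=None)
pred = X @ beta                          # diaphragm coefficient from chest

r = np.corrcoef(pred[:, 0], coeffs[:, 0])[0, 1]
```

    Online, only the `X @ beta` step would run per frame, which is what makes the surrogate approach attractive for real-time boundary conditions.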

  6. Modulated Magnetic Nanowires for Controlling Domain Wall Motion: Toward 3D Magnetic Memories.

    PubMed

    Ivanov, Yurii P; Chuvilin, Andrey; Lopatin, Sergei; Kosel, Jurgen

    2016-05-24

    Cylindrical magnetic nanowires are attractive materials for next generation data storage devices owing to the theoretically achievable high domain wall velocity and their efficient fabrication in highly dense arrays. In order to obtain control over domain wall motion, reliable and well-defined pinning sites are required. Here, we show that modulated nanowires consisting of alternating nickel and cobalt sections facilitate efficient domain wall pinning at the interfaces of those sections. By combining electron holography with micromagnetic simulations, the pinning effect can be explained by the interaction of the stray fields generated at the interface and the domain wall. Utilizing modified differential phase contrast imaging, we visualized the pinned domain wall at high resolution, revealing its three-dimensional vortex structure with the previously predicted Bloch point at its center. These findings suggest the potential of modulated nanowires for the development of high-density, three-dimensional data storage devices. PMID:27138460

  7. Dynamic force measurements for a high bar using 3D motion capturing.

    PubMed

    Cagran, C; Huber, P; Müller, W

    2010-03-01

    The displacement of a calibrated horizontal bar is used as a measure for forces acting on the bar itself during dynamic performances in artistic gymnastics. The high bar is loaded with known forces and the displacement is monitored by means of a Vicon motion capturing system. The calibration results are fitted according to the Euler-Bernoulli beam theory. After calibration, forces can straightforwardly be measured by multiplication of the bar displacement with the determined fit parameter. This approach is also able to account for non-central force application (two hands on the bar) and the effect of the bar's inertia. Uncertainties in measured forces are assessed to be +/-25 N plus an additional 1% for the unknown weight distribution between the two hands. PMID:19906379
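    The calibration step amounts to a linear force-displacement fit (Euler-Bernoulli beam theory predicts linearity for central loading) followed by a multiplication. A sketch with invented calibration numbers, not the study's data:

```python
import numpy as np

# static calibration: known loads vs. measured bar displacement
known_force = np.array([0.0, 200.0, 400.0, 600.0, 800.0])   # N
displacement = np.array([0.0, 3.1, 6.0, 9.1, 11.9])         # mm (invented)

stiffness, offset = np.polyfit(displacement, known_force, 1)  # N per mm

def force_from_displacement(d_mm):
    """Convert a motion-captured bar displacement (mm) to force (N)."""
    return stiffness * d_mm + offset

f = force_from_displacement(7.5)   # force during a dynamic performance
```

    Non-central loading (two hands) changes the effective stiffness, which is why the paper treats the weight distribution between hands as an extra uncertainty term.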

  9. Quantification of Ground Motion Reductions by Fault Zone Plasticity with 3D Spontaneous Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Roten, D.; Olsen, K. B.; Cui, Y.; Day, S. M.

    2015-12-01

    We explore the effects of fault zone nonlinearity on peak ground velocities (PGVs) by simulating a suite of surface rupturing earthquakes in a visco-plastic medium. Our simulations, performed with the AWP-ODC 3D finite difference code, cover magnitudes from 6.5 to 8.0, with several realizations of the stochastic stress drop for a given magnitude. We test three different models of rock strength, with friction angles and cohesions based on criteria which are frequently applied to fractured rock masses in civil engineering and mining. We use a minimum shear-wave velocity of 500 m/s and a maximum frequency of 1 Hz. In rupture scenarios with average stress drop (~3.5 MPa), plastic yielding reduces near-fault PGVs by 15 to 30% in pre-fractured, low-strength rock, but less than 1% in massive, high quality rock. These reductions are almost insensitive to the scenario earthquake magnitude. In the case of high stress drop (~7 MPa), however, plasticity reduces near-fault PGVs by 38 to 45% in rocks of low strength and by 5 to 15% in rocks of high strength. Because plasticity reduces slip rates and static slip near the surface, these effects can partially be captured by defining a shallow velocity-strengthening layer. We also perform a dynamic nonlinear simulation of a high stress drop M 7.8 earthquake rupturing the southern San Andreas fault along 250 km from Indio to Lake Hughes. With respect to the viscoelastic solution, nonlinearity in the fault damage zone and in near-surface deposits would reduce long-period (> 1 s) peak ground velocities in the Los Angeles basin by 15-50%, depending on the strength of crustal rocks and shallow sediments. These simulation results suggest that nonlinear effects may be relevant even at long periods, especially for earthquakes with high stress drop.

  10. Hybrid markerless tracking of complex articulated motion in golf swings.

    PubMed

    Fung, Sim Kwoh; Sundaraj, Kenneth; Ahamed, Nizam Uddin; Kiang, Lam Chee; Nadarajah, Sivadev; Sahayadhas, Arun; Ali, Md Asraf; Islam, Md Anamul; Palaniappan, Rajkumar

    2014-04-01

    Sports video tracking is a research topic that has attracted increasing attention due to its high commercial potential. A number of sports, including tennis, soccer, gymnastics, running, golf, badminton and cricket, have been utilised to demonstrate novel ideas in sports motion tracking. The main challenge in this research is the extraction of a highly complex articulated motion from a video scene. Our research focuses on the development of a markerless human motion tracking system that tracks the major body parts of an athlete directly from a sports broadcast video. We propose a hybrid tracking method that combines three algorithms (pyramidal Lucas-Kanade optical flow (LK), normalised correlation-based template matching and background subtraction) to track the golfer's head, body, hands, shoulders, knees and feet during a full swing. We then match, track and map the results onto a 2D articulated human stick model to represent the pose of the golfer over time. Our work was tested using two video broadcasts of a golfer, and we obtained satisfactory results. The outcomes of this research can play an important role in enhancing the performance of a golfer, can provide vital information to sports medicine practitioners by offering technically sound guidance on movements, and should help to diminish the risk of golfing injuries. PMID:24725790
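    One ingredient of the hybrid tracker, normalised correlation-based template matching, can be sketched in plain numpy (a brute-force illustration on synthetic image data, not the authors' implementation; OpenCV's matchTemplate does the same job far faster):

```python
import numpy as np

rng = np.random.default_rng(7)
frame = rng.random((60, 80))
template = frame[20:30, 45:55].copy()       # ground-truth location (20, 45)

def match_template(frame, tpl):
    """Return ((row, col), score) of the best normalised-correlation match."""
    th, tw = tpl.shape
    tz = (tpl - tpl.mean()) / tpl.std()     # zero-mean, unit-variance template
    best, best_rc = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            win = frame[r:r + th, c:c + tw]
            wz = (win - win.mean()) / (win.std() + 1e-12)
            score = float((tz * wz).mean()) # normalised cross-correlation
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

(row, col), score = match_template(frame, template)
```

    In the hybrid scheme this localises a body part (e.g. the head) when optical flow drifts, with background subtraction constraining the search region.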

  12. Tracking Motions Of Manually Controlled Welding Torches

    NASA Technical Reports Server (NTRS)

    Russell, Carolyn; Gangl, Ken

    1996-01-01

    Techniques for measuring the motions of manually controlled welding torches are undergoing development. Positions, orientations, and velocities are determined in real time during manual arc welding. This makes it possible to treat manual welding processes more systematically, so that manual welds become more predictable, especially in cases in which the mechanical strengths and other properties of the welded parts are highly sensitive to heat inputs and thus to the velocities and orientations of the welding torches.

  13. 3D optical imagery for motion compensation in a limb ultrasound system

    NASA Astrophysics Data System (ADS)

    Ranger, Bryan J.; Feigin, Micha; Zhang, Xiang; Mireault, Al; Raskar, Ramesh; Herr, Hugh M.; Anthony, Brian W.

    2016-04-01

    Conventional processes for prosthetic socket fabrication are heavily subjective, often resulting in an interface to the human body that is neither comfortable nor completely functional. With nearly 100% of amputees reporting that they experience discomfort with the wearing of their prosthetic limb, designing an effective interface to the body can significantly affect quality of life and future health outcomes. Active research in medical imaging and biomechanical tissue modeling of residual limbs has led to significant advances in computer aided prosthetic socket design, demonstrating an interest in moving toward more quantifiable processes that are still patient-specific. In our work, medical ultrasonography is being pursued to acquire data that may quantify and improve the design process and fabrication of prosthetic sockets while greatly reducing cost compared to an MRI-based framework. This paper presents a prototype limb imaging system that uses a medical ultrasound probe, mounted to a mechanical positioning system and submerged in a water bath. The limb imaging is combined with three-dimensional optical imaging for motion compensation. Images are collected circumferentially around the limb and combined into cross-sectional axial image slices, resulting in a compound image that shows tissue distributions and anatomical boundaries similar to magnetic resonance imaging. In this paper we provide a progress update on our system development, along with preliminary results as we move toward full volumetric imaging of residual limbs for prosthetic socket design. This demonstrates a novel multi-modal approach to residual limb imaging.

  14. Applying IR Tomo PIV and 3D Organism Tracking to Study Turbulence Effects on Oceanic Predator-Prey Interactions

    NASA Astrophysics Data System (ADS)

    Adhikari, Deepak; Hallberg, Michael; Gemmell, Brad; Longmire, Ellen; Buskey, Edward

    2012-11-01

    The behavioral response of aquatic predators and prey depends strongly on the surrounding fluid motion. We present a facility and non-intrusive instrumentation system designed to quantify the motions associated with interactions between small coral reef fish (blennies) and evasive zooplankton prey (copepods) subject to various flow disturbances. A recirculating water channel facility is driven by a paddlewheel to avoid damaging the zooplankton located throughout the channel. Fluid velocity vectors surrounding both species are determined by time-resolved infrared tomographic PIV, while a circular Hough transform and a PTV technique are used to track the fish eye in three-dimensional space. Simultaneously, zooplankton motions are detected and tracked using two additional high-speed cameras with IR filters. For capturing larger scales, a measurement volume of 80 × 40 × 18 mm is used with a spatial resolution of 3.5 mm. For capturing smaller scales, particularly for observing flow near the mouth of the fish during feeding, the measurement volume is reduced to 20 × 18 × 18 mm with a spatial resolution of 1.5 mm. Results will be presented for both freshwater and seawater species. Supported by NSF IDBR grant #0852875.
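    The circular Hough transform used for eye localisation can be sketched at a single known radius: edge pixels vote for candidate circle centres, and the accumulator peak gives the centre. The binary edge image below is synthetic; real use would sweep a radius range and use a proper edge detector.

```python
import numpy as np

size, radius = 64, 10
yy, xx = np.mgrid[0:size, 0:size]
true_c = (30, 25)                       # (row, col) of the circle centre
ring = np.abs(np.hypot(yy - true_c[0], xx - true_c[1]) - radius) < 0.6
edges = ring | (np.random.default_rng(9).random((size, size)) < 0.01)

acc = np.zeros((size, size))            # accumulator over centre hypotheses
thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
for r, c in zip(*np.nonzero(edges)):
    # each edge pixel votes for all centres at distance `radius` from it
    a = np.round(r - radius * np.sin(thetas)).astype(int)
    b = np.round(c - radius * np.cos(thetas)).astype(int)
    ok = (a >= 0) & (a < size) & (b >= 0) & (b < size)
    np.add.at(acc, (a[ok], b[ok]), 1)

centre = np.unravel_index(np.argmax(acc), acc.shape)
```

    Running this per camera and triangulating the detected centres is one way the 3D eye position could be recovered; the triangulation step is omitted here.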

  15. Study of human body: Kinematics and kinetics of a martial arts (Silat) performers using 3D-motion capture

    NASA Astrophysics Data System (ADS)

    Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi

    2015-04-01

    Interest in the study of human kinematics goes back very far in human history, driven by curiosity or by the need to understand the complexity of human body motion. Advances in computing technology have made it possible to obtain new and accurate information about human movement. Martial arts (silat) were chosen, and multiple types of movement were studied. This project used cutting-edge 3D motion capture technology to characterize and measure the motions performed by martial arts (silat) performers. The cameras detect infrared reflections from markers placed around the performer's body (24 markers in total), which appear as dots in the computer software. The detected markers were analyzed using a kinematic-kinetic approach with time as the reference. Graphs of velocity, acceleration and position at time t (seconds) were plotted for each marker. From the information obtained, further parameters were determined mathematically, such as work done, momentum and the center of mass of the body. These data can be used to develop more effective movements in martial arts, as a contribution to practitioners of the art. More future work can follow from this project, such as the analysis of a martial arts competition.
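    The post-processing described above reduces to finite-difference kinematics and a mass-weighted centre of mass. A hedged sketch on made-up marker data (the 120 Hz rate, segment masses, and positions are all assumptions):

```python
import numpy as np

fps = 120.0                                   # capture rate (assumed)
t = np.arange(0, 1, 1 / fps)
# one marker drifting forward while rising and decelerating, metres
marker = np.column_stack([t, np.zeros_like(t), 1.0 + 0.5 * t - 0.3 * t**2])

vel = np.gradient(marker, 1 / fps, axis=0)    # m/s, central differences
acc = np.gradient(vel, 1 / fps, axis=0)       # m/s^2

speed = np.linalg.norm(vel, axis=1)
kinetic_energy = 0.5 * 70.0 * speed**2        # assuming 70 kg at the marker

# centre of mass of three segment markers with assumed segment masses
markers = np.array([[0.0, 0.0, 1.0], [0.2, 0.0, 1.2], [0.4, 0.0, 0.9]])
masses = np.array([40.0, 20.0, 10.0])
com = (masses[:, None] * markers).sum(axis=0) / masses.sum()
```

    Work done over an interval follows from the change in kinetic energy, and momentum from mass times the centre-of-mass velocity, exactly the derived quantities the abstract lists.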

  16. MRI-3D ultrasound-X-ray image fusion with electromagnetic tracking for transendocardial therapeutic injections: in-vitro validation and in-vivo feasibility.

    PubMed

    Hatt, Charles R; Jain, Ameet K; Parthasarathy, Vijay; Lang, Andrew; Raval, Amish N

    2013-03-01

    Myocardial infarction (MI) is one of the leading causes of death in the world. Small animal studies have shown that stem-cell therapy offers dramatic functional improvement post-MI. An endomyocardial catheter injection approach to therapeutic agent delivery has been proposed to improve efficacy through increased cell retention. Accurate targeting is critical for reaching areas of greatest therapeutic potential while avoiding a life-threatening myocardial perforation. Multimodal image fusion has been proposed as a way to improve these procedures by augmenting traditional intra-operative imaging modalities with high resolution pre-procedural images. Previous approaches have suffered from a lack of real-time tissue imaging and dependence on X-ray imaging to track devices, leading to increased ionizing radiation dose. In this paper, we present a new image fusion system for catheter-based targeted delivery of therapeutic agents. The system registers real-time 3D echocardiography, magnetic resonance, X-ray, and electromagnetic sensor tracking within a single flexible framework. All system calibrations and registrations were validated and found to have target registration errors less than 5 mm in the worst case. Injection accuracy was validated in a motion enabled cardiac injection phantom, where targeting accuracy ranged from 0.57 to 3.81 mm. Clinical feasibility was demonstrated with in-vivo swine experiments, where injections were successfully made into targeted regions of the heart.
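    The target registration error (TRE) validation mentioned above can be sketched with the standard Kabsch algorithm: estimate the rigid transform from corresponding fiducials, apply it to held-out targets, and report the residual distance. Point sets, noise level, and the ground-truth pose below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

def kabsch(P, Q):
    """Rigid transform (R, t) minimising ||(P @ R.T + t) - Q||."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# ground-truth pose relating the two coordinate systems
angle = np.deg2rad(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -5.0, 2.0])

fiducials = rng.random((6, 3)) * 100.0            # mm, registration points
observed = fiducials @ R_true.T + t_true + rng.normal(0, 0.2, (6, 3))

R, t = kabsch(fiducials, observed)

targets = rng.random((4, 3)) * 100.0              # held-out target points
tre = np.linalg.norm(targets @ R.T + t - (targets @ R_true.T + t_true), axis=1)
max_tre = float(tre.max())
```

    Chaining several such pairwise registrations (echo-to-tracker, tracker-to-X-ray, X-ray-to-MR) is what produces the worst-case sub-5 mm figure the paper reports.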

  17. Evaluation of suitability of a micro-processing unit of motion analysis for upper limb tracking.

    PubMed

    Barraza Madrigal, José Antonio; Cardiel, Eladio; Rogeli, Pablo; Leija Salas, Lorenzo; Muñoz Guerrero, Roberto

    2016-08-01

    The aim of this study is to assess the suitability of a micro-processing unit of motion analysis (MPUMA) for monitoring, reproducing, and tracking upper limb movements. The MPUMA is based on an inertial measurement unit, a 16-bit digital signal controller and a customized algorithm. To validate the performance of the system, simultaneous recordings of the angular trajectory were performed with a video-based motion analysis system. A test of the flexo-extension of the shoulder joint during active elevation of the upper limb over a complete range of 120° was carried out in 10 healthy volunteers. Additional tests were carried out to assess MPUMA performance during upper limb tracking: first, a 3D motion reconstruction of three movements of the shoulder joint (flexo-extension, abduction-adduction, horizontal internal-external rotation), and second, online upper limb tracking during the execution of three movements of the shoulder joint followed by a continuous random movement without any restrictions, using a virtual model and a mechatronic device of the shoulder joint. Experimental results demonstrated that the MPUMA measured joint angles close to those from a motion-capture system, with orientation RMS errors of less than 3°. PMID:27185034
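    A common building block for this kind of inertial angle estimation is a complementary filter fusing the gyroscope rate with an accelerometer-derived inclination. The sketch below is a simplified stand-in for the MPUMA's customized algorithm, run on synthetic IMU samples generated from a known motion:

```python
import numpy as np

fs = 100.0                                    # IMU sample rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
true_angle = 60.0 * np.sin(2 * np.pi * 0.2 * t)      # degrees

rng = np.random.default_rng(3)
# gyroscope: true rate plus noise plus a constant bias (deg/s)
gyro = np.gradient(true_angle, 1 / fs) + rng.normal(0, 0.5, t.size) + 1.0
# accelerometer inclination: noisy but unbiased (degrees)
accel_angle = true_angle + rng.normal(0, 2.0, t.size)

alpha = 0.98                                  # trust the gyro short-term
est = np.zeros_like(t)
for i in range(1, t.size):
    gyro_pred = est[i - 1] + gyro[i] / fs     # integrate the rate
    est[i] = alpha * gyro_pred + (1 - alpha) * accel_angle[i]

rms = float(np.sqrt(np.mean((est - true_angle) ** 2)))
```

    The fused estimate suppresses both the accelerometer noise and the slow drift that raw gyro integration would accumulate, which is how sub-3° orientation errors become achievable on a small microcontroller.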

  18. Eye-tracking and EMG supported 3D Virtual Reality - an integrated tool for perceptual and motor development of children with severe physical disabilities: a research concept.

    PubMed

    Pulay, Márk Ágoston

    2015-01-01

    Letting children with severe physical disabilities (such as Tetraparesis spastica) gain relevant motional experiences of appropriate quality and quantity is now the greatest challenge in the field of neurorehabilitation. These motional experiences may underpin many cognitive processes, and their lack may cause additional secondary cognitive dysfunctions such as disorders in body image, figure invariance, visual perception, auditory differentiation, concentration, analytic and synthetic ways of thinking, and visual memory. Virtual Reality is a technology that provides a sense of presence in a realistic environment with the help of 3D pictures and animations formed in a computer environment, and enables the person to interact with the objects in that environment. One of our biggest challenges is to find a well-suited input device (hardware) that lets children with severe physical disabilities interact with the computer. Based on our own experiences and a thorough literature review, we have concluded that an effective combination of eye-tracking and EMG devices should work well.

  19. Crossed beam roof target for motion tracking

    NASA Technical Reports Server (NTRS)

    Olczak, Eugene (Inventor)

    2009-01-01

    A system for detecting motion between a first body and a second body includes first and second detector-emitter pairs, disposed on the first body, and configured to transmit and receive first and second optical beams, respectively. At least a first optical rotator is disposed on the second body and configured to receive and reflect at least one of the first and second optical beams. First and second detectors of the detector-emitter pairs are configured to detect the first and second optical beams, respectively. Each of the first and second detectors is configured to detect motion between the first and second bodies in multiple degrees of freedom (DOFs). The first optical rotator includes a V-notch oriented to form an apex of an isosceles triangle with respect to a base of the isosceles triangle formed by the first and second detector-emitter pairs. The V-notch is configured to receive the first optical beam and reflect the first optical beam to both the first and second detectors. The V-notch is also configured to receive the second optical beam and reflect the second optical beam to both the first and second detectors.

  20. Local intensity feature tracking and motion modeling for respiratory signal extraction in cone beam CT projections.

    PubMed

    Dhou, Salam; Motai, Yuichi; Hugo, Geoffrey D

    2013-02-01

    Accounting for respiration motion during imaging can help improve targeting precision in radiation therapy. We propose local intensity feature tracking (LIFT), a novel markerless breath phase sorting method in cone beam computed tomography (CBCT) scan images. The contributions of this study are twofold. First, LIFT extracts the respiratory signal from the CBCT projections of the thorax depending only on tissue feature points that exhibit respiration. Second, the extracted respiratory signal is shown to correlate with standard respiration signals. LIFT extracts feature points in the first CBCT projection of a sequence and tracks those points in consecutive projections forming trajectories. Clustering is applied to select trajectories showing an oscillating behavior similar to the breath motion. Those "breathing" trajectories are used in a 3-D reconstruction approach to recover the 3-D motion of the lung which represents the respiratory signal. Experiments were conducted on datasets exhibiting regular and irregular breathing patterns. Results showed that LIFT-based respiratory signal correlates with the diaphragm position-based signal with an average phase shift of 1.68 projections as well as with the internal marker-based signal with an average phase shift of 1.78 projections. LIFT was able to detect the respiratory signal in all projections of all datasets.
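LIFT's clustering step keeps only trajectories that oscillate like breathing. One crude way to flag such a trajectory is a dominant-frequency test on its vertical coordinate, sketched here with a plain FFT (the projection rate, band limits, and signals are invented for illustration; the paper's actual clustering criterion is different):

```python
import numpy as np

def is_breathing_trajectory(y, fs, band=(0.1, 0.5)):
    """Heuristic: a trajectory 'breathes' if its dominant nonzero frequency
    falls inside the respiratory band (Hz). Illustrative only."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return bool(band[0] <= dominant <= band[1])

fs = 5.0                                    # hypothetical projection rate (frames/s)
t = np.arange(0, 60, 1 / fs)
breathing = np.sin(2 * np.pi * 0.25 * t)    # ~15 breaths/min oscillation
drift = 0.01 * t                            # slow non-respiratory motion
print(is_breathing_trajectory(breathing, fs))   # True
print(is_breathing_trajectory(drift, fs))       # False
```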

  1. Aging affects postural tracking of complex visual motion cues.

    PubMed

    Sotirakis, H; Kyvelidou, A; Mademli, L; Stergiou, N; Hatzitaki, V

    2016-09-01

    Postural tracking of visual motion cues improves perception-action coupling in aging, yet the nature of the visual cues to be tracked is critical for the efficacy of such a paradigm. We investigated how well healthy older (72.45 ± 4.72 years) and young (22.98 ± 2.9 years) adults can follow with their gaze and posture horizontally moving visual target cues of different degree of complexity. Participants tracked continuously for 120 s the motion of a visual target (dot) that oscillated in three different patterns: a simple periodic (simulated by a sine), a more complex (simulated by the Lorenz attractor that is deterministic displaying mathematical chaos) and an ultra-complex random (simulated by surrogating the Lorenz attractor) pattern. The degree of coupling between performance (posture and gaze) and the target motion was quantified in the spectral coherence, gain, phase and cross-approximate entropy (cross-ApEn) between signals. Sway-target coherence decreased as a function of target complexity and was lower for the older compared to the young participants when tracking the chaotic target. On the other hand, gaze-target coherence was not affected by either target complexity or age. Yet, a lower cross-ApEn value when tracking the chaotic stimulus motion revealed a more synchronous gaze-target relationship for both age groups. Results suggest limitations in online visuo-motor processing of complex motion cues and a less efficient exploitation of the body sway dynamics with age. Complex visual motion cues may provide a suitable training stimulus to improve visuo-motor integration and restore sway variability in older adults.
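Sway-target coupling in the study is quantified by spectral coherence. A numpy-only sketch of magnitude-squared coherence via Welch-style segment averaging, applied to synthetic target and sway signals (scipy.signal.coherence would be the off-the-shelf equivalent):

```python
import numpy as np

def msc(x, y, fs, nperseg=1024):
    """Magnitude-squared coherence by averaging windowed FFT segments.
    Returns (frequencies, coherence); illustrative, numpy only."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    Sxx = Syy = Sxy = 0
    for i in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(win * x[i:i + nperseg])
        Y = np.fft.rfft(win * y[i:i + nperseg])
        Sxx = Sxx + np.abs(X) ** 2
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)
    f = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return f, np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(0)
fs = 50.0                                  # hypothetical sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)              # a 120-s trial, as in the study
target = np.sin(2 * np.pi * 0.2 * t)       # simple periodic target
sway = 0.8 * np.sin(2 * np.pi * 0.2 * t - 0.3) + 0.3 * rng.standard_normal(t.size)
f, C = msc(target, sway, fs)
print(C[np.argmin(np.abs(f - 0.2))] > 0.8)   # strong coupling at the driving frequency
```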

  2. Multilevel and motion model-based ultrasonic speckle tracking algorithms.

    PubMed

    Yeung, F; Levinson, S F; Parker, K J

    1998-03-01

    A multilevel motion model-based approach to ultrasonic speckle tracking has been developed that addresses the inherent trade-offs associated with traditional single-level block matching (SLBM) methods. The multilevel block matching (MLBM) algorithm uses variable matching block and search window sizes in a coarse-to-fine scheme, preserving the relative immunity to noise associated with the use of a large matching block while preserving the motion field detail associated with the use of a small matching block. To decrease further the sensitivity of the multilevel approach to noise, speckle decorrelation and false matches, a smooth motion model-based block matching (SMBM) algorithm has been implemented that takes into account the spatial inertia of soft tissue elements. The new algorithms were compared to SLBM through a series of experiments involving manual translation of soft tissue phantoms, motion field computer simulations of rotation, compression and shear deformation, and an experiment involving contraction of human forearm muscles. Measures of tracking accuracy included mean squared tracking error, peak signal-to-noise ratio (PSNR) and blinded observations of optical flow. Measures of tracking efficiency included the number of sum squared difference calculations and the computation time. In the phantom translation experiments, the SMBM algorithm successfully matched the accuracy of SLBM using both large and small matching blocks while significantly reducing the number of computations and computation time when a large matching block was used. For the computer simulations, SMBM yielded better tracking accuracies and spatial resolution when compared with SLBM using a large matching block. For the muscle experiment, SMBM outperformed SLBM both in terms of PSNR and observations of optical flow. We believe that the smooth motion model-based MLBM approach represents a meaningful development in ultrasonic soft tissue motion measurement. PMID:9587997
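The single-level block matching (SLBM) baseline the paper improves on can be sketched in a few lines: exhaustively search a window for the displacement that minimizes the sum of squared differences between blocks. The frames and parameters below are synthetic:

```python
import numpy as np

def block_match_ssd(prev, curr, top, left, block=8, search=4):
    """Find the displacement of one block between frames by minimizing the
    sum of squared differences over a (2*search+1)^2 window. Illustrative
    single-level matcher; MLBM repeats this coarse-to-fine."""
    ref = prev[top:top + block, left:left + block].astype(float)
    best, best_dxdy = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + block > curr.shape[0] or c + block > curr.shape[1]:
                continue  # candidate block falls outside the frame
            ssd = np.sum((ref - curr[r:r + block, c:c + block].astype(float)) ** 2)
            if ssd < best:
                best, best_dxdy = ssd, (dy, dx)
    return best_dxdy

rng = np.random.default_rng(1)
prev = rng.random((32, 32))
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))    # simulate a (2, 3) translation
print(block_match_ssd(prev, curr, top=8, left=8))  # (2, 3)
```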

  4. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    NASA Astrophysics Data System (ADS)

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient, and practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
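Shortest-path ray tracers of this family are built on Dijkstra's algorithm over a graph whose edge weights are traveltimes. A toy sketch (the graph and weights are invented; the paper's multistage scheme runs this kind of search on a 3-D spherical grid, restarting it at interfaces to obtain later phases):

```python
import heapq

def shortest_traveltimes(graph, source):
    """Dijkstra's algorithm: minimum traveltime from a source node to every
    reachable node. graph maps node -> list of (neighbor, edge_traveltime)."""
    times = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > times.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, dt in graph[node]:
            nt = t + dt
            if nt < times.get(nbr, float("inf")):
                times[nbr] = nt
                heapq.heappush(heap, (nt, nbr))
    return times

# Toy network: edge weights are traveltimes (s) between grid nodes
graph = {
    "src": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 2.0), ("rcv", 6.0)],
    "b": [("rcv", 1.0)],
    "rcv": [],
}
print(shortest_traveltimes(graph, "src")["rcv"])   # 4.0, via src-a-b-rcv
```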

  5. Motion-based prediction explains the role of tracking in motion extrapolation.

    PubMed

    Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U

    2013-11-01

    During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we have recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior where the dot is tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, where motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated.

  6. Quaternion correlation for tracking crystal motions

    NASA Astrophysics Data System (ADS)

    Shi, Qiwei; Latourte, Félix; Hild, François; Roux, Stéphane

    2016-09-01

    During in situ mechanical tests performed on polycrystalline materials in a scanning electron microscope, crystal orientation maps may be recorded at different stages of deformation from electron backscattered diffraction (EBSD). The present study introduces a novel correlation technique that exploits the crystallographic orientation field as a surface pattern to measure crystal motions. A quaternion-based formalism proves very convenient for handling crystal symmetry and extracting orientations. Spatial regularization is provided by penalizing deviations of the displacement field from the solution of a homogeneous linear elastic problem. This procedure allows the large scale features of the displacement field to be captured, mostly from grain boundaries, and a fair interpolation of the displacement to be obtained within the grains. From these data, crystal rotations can be estimated very accurately. Both synthetic and real experimental cases are considered to illustrate the method.
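The misorientation between two crystal orientations is conveniently computed in the quaternion formalism the paper adopts. A minimal sketch, ignoring the crystal symmetry operators that a real EBSD pipeline must fold in:

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def misorientation_angle(qa, qb):
    """Rotation angle (radians) taking orientation qa to qb (unit quaternions)."""
    qa_conj = qa * np.array([1.0, -1.0, -1.0, -1.0])  # inverse of a unit quaternion
    dq = quat_mul(qb, qa_conj)
    return 2.0 * np.arccos(np.clip(abs(dq[0]), -1.0, 1.0))

# A 10-degree rotation about z, expressed as unit quaternions
theta = np.radians(10.0)
qa = np.array([1.0, 0.0, 0.0, 0.0])
qb = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
print(round(float(np.degrees(misorientation_angle(qa, qb))), 6))   # 10.0
```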

  7. How Plates Pull Transforms Apart: 3-D Numerical Models of Oceanic Transform Fault Response to Changes in Plate Motion Direction

    NASA Astrophysics Data System (ADS)

    Morrow, T. A.; Mittelstaedt, E. L.; Olive, J. A. L.

    2015-12-01

    Observations along oceanic fracture zones suggest that some mid-ocean ridge transform faults (TFs) previously split into multiple strike-slip segments separated by short (<~50 km) intra-transform spreading centers and then reunited to a single TF trace. This history of segmentation appears to correspond with changes in plate motion direction. Despite the clear evidence of TF segmentation, the processes governing its development and evolution are not well characterized. Here we use a 3-D, finite-difference / marker-in-cell technique to model the evolution of localized strain at a TF subjected to a sudden change in plate motion direction. We simulate the oceanic lithosphere and underlying asthenosphere at a ridge-transform-ridge setting using a visco-elastic-plastic rheology with a history-dependent plastic weakening law and a temperature- and stress-dependent mantle viscosity. To simulate the development of topography, a low density, low viscosity 'sticky air' layer is present above the oceanic lithosphere. The initial thermal gradient follows a half-space cooling solution with an offset across the TF. We impose an enhanced thermal diffusivity in the uppermost 6 km of lithosphere to simulate the effects of hydrothermal circulation. An initial weak seed in the lithosphere helps localize shear deformation between the two offset ridge axes to form a TF. For each model case, the simulation is run initially with TF-parallel plate motion until the thermal structure reaches a steady state. The direction of plate motion is then rotated either instantaneously or over a specified time period, placing the TF in a state of trans-tension. Model runs continue until the system reaches a new steady state. Parameters varied here include: initial TF length, spreading rate, and the rotation rate and magnitude of spreading obliquity. We compare our model predictions to structural observations at existing TFs and records of TF segmentation preserved in oceanic fracture zones.

  8. Effectiveness of an automatic tracking software in underwater motion analysis.

    PubMed

    Magalhaes, Fabrício A; Sawacha, Zimi; Di Michele, Rocco; Cortesi, Matteo; Gatta, Giorgio; Fantozzi, Silvia

    2013-01-01

    Tracking of markers placed on anatomical landmarks is a common practice in sports science to perform the kinematic analysis that interests both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of a software developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 markers' positions) were manually tracked to determine the markers' center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and to correct the position of the cursor when the distance between the calculated marker's coordinate and the reference one was higher than 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, manual interventions were 10.4% lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6% to 29.3% lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9% less manual interventions for DVP than for COM. In conclusion, based on these results, the developed automatic tracking software presented can be used as a valid and useful tool for underwater motion analysis. 
    Key Points: The availability of effective software for automatic tracking would represent a significant advance for the practical use of kinematic analysis in swimming and other aquatic sports. An important feature of automatic tracking software is to require limited human interventions and

  9. Polar motion from laser tracking of artificial satellites

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Kolenkiewicz, R.; Dunn, P. J.; Plotkin, H. H.; Johnson, T. S.

    1972-01-01

    Laser ranges to the Beacon Explorer C spacecraft from a single Goddard Space Flight Center tracking system were used to determine the change in latitude of the station arising from polar motion. A precision of 0.03 arcsecs rms was obtained for the latitude during a five-month period in 1970.

  10. Polar motion from laser tracking of artificial satellites.

    PubMed

    Smith, D E; Kolenkiewicz, R; Dunn, P J; Plotkin, H H; Johnson, T S

    1972-10-27

    Measurements of the range to the Beacon Explorer C spacecraft from a single laser tracking system at Goddard Space Flight Center have been used to determine the change in latitude of the station arising from polar motion. A precision of 0.03 arc second was obtained for the latitude during a 5-month period in 1970.

  11. Polar motion from laser tracking of artificial satellites.

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Kolenkiewicz, R.; Plotkin, H. H.; Johnson, T. S.; Dunn, P. J.

    1972-01-01

    Measurements of the range to the Beacon Explorer C spacecraft from a single laser tracking system at Goddard Space Flight Center have been used to determine the change in latitude of the station arising from polar motion. A precision of 0.03 arc second was obtained for the latitude during a 5-month period in 1970.

  12. 3-D or median map? Earthquake scenario ground-motion maps from physics-based models versus maps from ground-motion prediction equations

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2015-12-01

    There are two common ways to create a ground-motion map for a hypothetical earthquake: using ground motion prediction equations (by far the more common of the two) and using 3-D physics-based modeling. The former is very familiar to engineers, the latter much less so, and the difference can present a problem because engineers tend to trust the familiar and distrust novelty. Maps for essentially the same hypothetical earthquake using the two different methods can look very different, while appearing to present the same information. Using one or the other can lead an engineer or disaster planner to very different estimates of damage and risk. The reasons have to do with depiction of variability, spatial correlation of shaking, the skewed distribution of real-world shaking, and the upward-curving relationship between shaking and damage. The scientists who develop the two kinds of map tend to specialize in one or the other and seem to defend their turf, which can aggravate the problem of clearly communicating with engineers. The USGS Science Application for Risk Reduction's (SAFRR) HayWired scenario has addressed the challenge of explaining to engineers the differences between the two maps, and why, in a disaster planning scenario, one might want to use the less-familiar 3-D map.

  13. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacterial taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are near each other. It is capable of real time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with only 1% false detections. PMID:26329642
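The detector descends from the circle Hough transform: each edge point votes for every centre a fixed radius away, and true ring centres accumulate the most votes. A bare-bones accumulator sketch on a synthetic ring of known radius (the paper's algorithm is considerably more robust and handles unknown radii):

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape, n_angles=90):
    """Vote for circle centres at a fixed radius from each edge point;
    a minimal accumulator in the spirit of the circle Hough transform."""
    acc = np.zeros(shape, dtype=int)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # one vote per candidate centre
    return acc

# Synthetic ring: pixels on a circle of radius 10 centred at (32, 40)
ring_angles = np.linspace(0, 2 * np.pi, 120, endpoint=False)
ring = [(int(round(32 + 10 * np.sin(a))), int(round(40 + 10 * np.cos(a))))
        for a in ring_angles]
acc = hough_circle_centers(ring, radius=10, shape=(64, 64))
cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
print(int(cy), int(cx))   # at or adjacent to the true centre (32, 40)
```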

  14. A quantitative study of 3D-scanning frequency and Δd of tracking points on the tooth surface

    PubMed Central

    Li, Hong; Lyu, Peijun; Sun, Yuchun; Wang, Yong; Liang, Xiaoyue

    2015-01-01

    Micro-movement of human jaws in the resting state might influence the accuracy of direct three-dimensional (3D) measurement. Providing a reference for sampling-frequency settings of intraoral scanning systems to overcome this influence is important. In this study, we measured micro-movement, or change in distance (Δd), as the change in position of a single tracking point from one sampling time point to another in five human subjects. Δd of tracking points on incisors at 7 sampling frequencies was judged against the clinical accuracy requirement to select proper sampling frequency settings. A curve relating the median Δd to the sampling frequency was then fitted to predict the trend of Δd with increasing frequency. The difference in Δd among the subjects and the difference between upper and lower incisor feature points of the same subject were analyzed by a non-parametric test (α = 0.05). Significant differences of incisor feature points were noted among different subjects and between upper and lower jaws of the same subject (P < 0.01). Overall, Δd decreased with increasing frequency. When the frequency was 60 Hz, Δd nearly reached the clinical accuracy requirement. Frequencies higher than 60 Hz did not significantly decrease Δd further. PMID:26400112

  15. The feasibility of head motion tracking in helical CT: A step toward motion correction

    SciTech Connect

    Kim, Jung-Ha; Nuyts, Johan; Kuncic, Zdenka; Fulton, Roger

    2013-04-15

    Purpose: To establish a practical and accurate motion tracking method for the development of rigid motion correction methods in helical x-ray computed tomography (CT). Methods: A commercially available optical motion tracking system provided 6 degrees of freedom pose measurements at 60 Hz. A 4 × 4 calibration matrix was determined to convert raw pose data acquired in tracker coordinates to a fixed CT coordinate system with origin at the isocenter of the scanner. Two calibration methods, absolute orientation (AO), and a new method based on image registration (IR), were compared by means of landmark analysis and correlation coefficient in phantom images coregistered using the derived motion transformations. Results: Transformations calculated using the IR-derived calibration matrix were found to be more accurate, with positional errors less than 0.5 mm (mean RMS), and highly correlated image voxel intensities. The AO-derived calibration matrix yielded larger mean RMS positional errors (≈1.0 mm), and poorer correlation coefficients. Conclusions: The authors have demonstrated the feasibility of accurate motion tracking for retrospective motion correction in helical CT. Their new IR-based calibration method based on image registration and function minimization was simpler to perform and delivered more accurate calibration matrices. This technique is a useful tool for future work on rigid motion correction in helical CT and potentially also other imaging modalities.
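Once the 4 × 4 calibration matrix is known, converting tracker-frame points to CT coordinates is a homogeneous-coordinates matrix product. A sketch with an invented calibration (a 90° rotation about z plus a translation, not the paper's matrix):

```python
import numpy as np

def to_ct_coords(calibration, points_tracker):
    """Map 3-D points from tracker coordinates into CT coordinates using a
    4x4 homogeneous calibration matrix (rotation + translation)."""
    pts = np.asarray(points_tracker, dtype=float)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])   # append w = 1
    return (calibration @ homo.T).T[:, :3]

# Hypothetical calibration: rotate 90 degrees about z, then translate
c, s = 0.0, 1.0   # cos(90), sin(90)
calib = np.array([
    [c, -s, 0.0, 10.0],
    [s,  c, 0.0, -5.0],
    [0.0, 0.0, 1.0, 2.0],
    [0.0, 0.0, 0.0, 1.0],
])
print(to_ct_coords(calib, [[1.0, 0.0, 0.0]])[0].tolist())   # [10.0, -4.0, 2.0]
```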

  16. A kinematic model for Bayesian tracking of cyclic human motion

    NASA Astrophysics Data System (ADS)

    Greif, Thomas; Lienhart, Rainer

    2010-01-01

    We introduce a two-dimensional kinematic model for cyclic motions of humans, which is suitable for the use as temporal prior in any Bayesian tracking framework. This human motion model is solely based on simple kinematic properties: the joint accelerations. Distributions of joint accelerations subject to the cycle progress are learned from training data. We present results obtained by applying the introduced model to the cyclic motion of backstroke swimming in a Kalman filter framework that represents the posterior distribution by a Gaussian. We experimentally evaluate the sensitivity of the motion model with respect to the frequency and noise level of assumed appearance-based pose measurements by simulating various fidelities of the pose measurements using ground truth data.
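To make the Kalman-filter setting concrete, here is a minimal 1-D constant-velocity filter tracking a synthetic cyclic angle from noisy pose measurements. It is a simplified stand-in for the paper's learned joint-acceleration prior; all rates and noise levels are invented:

```python
import numpy as np

dt = 1.0 / 30.0                          # hypothetical pose-measurement rate
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition: angle, angular velocity
H = np.array([[1.0, 0.0]])               # only the angle is measured
q = 10.0 ** 2                            # white-noise-acceleration intensity
Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])  # process noise
R = np.array([[0.1 ** 2]])               # measurement noise variance

rng = np.random.default_rng(3)
t = np.arange(0.0, 4.0, dt)
truth = np.sin(2 * np.pi * 0.5 * t)      # one cyclic degree of freedom
x, P = np.zeros((2, 1)), np.eye(2)
errors = []
for k, z in enumerate(truth + 0.1 * rng.standard_normal(t.size)):
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)            # measurement update
    P = (np.eye(2) - K @ H) @ P
    errors.append(float(x[0, 0]) - truth[k])

rms = float(np.sqrt(np.mean(np.array(errors[len(errors) // 2:]) ** 2)))
print(rms < 0.2)   # settles to roughly measurement-noise accuracy
```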

  17. Tracking small targets in wide area motion imagery data

    NASA Astrophysics Data System (ADS)

    Mathew, Alex; Asari, Vijayan K.

    2013-03-01

    Object tracking in aerial imagery is of immense interest to the wide area surveillance community. In this paper, we propose a method to track very small targets such as pedestrians in AFRL Columbus Large Image Format (CLIF) Wide Area Motion Imagery (WAMI) data. Extremely small target sizes, combined with low frame rates and significant view changes, make tracking a very challenging task in WAMI data. Two problems must be tackled for object tracking: frame registration and feature extraction. We employ SURF for frame registration. Although there are several feature extraction methods that work reasonably well when the scene is of high resolution, most methods fail when the resolution is very low. In our approach, we represent the target as a collection of intensity histograms and use a robust statistical distance to distinguish between the target and the background. We divide the object into m × n regions and compute the normalized intensity histogram in each region to build a histogram matrix. The features can be compared using histogram comparison techniques. For tracking, we use a combination of a bearing-only Kalman filter and the proposed feature extraction technique. The problem of template drift is solved by further localizing the target with a blob detection algorithm; the new template is taken as the detected blob. We show the robustness of the algorithm by comparing the feature extraction part of our method with other feature extraction methods such as SURF, SIFT and HoG, and the tracking part with mean-shift tracking.
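The histogram-matrix feature can be sketched directly: split the target patch into a grid, stack the normalized per-region histograms, and compare matrices with a histogram distance (Bhattacharyya here; the paper's exact statistical distance may differ):

```python
import numpy as np

def histogram_matrix(patch, m=2, n=2, bins=8):
    """Describe a patch as an m x n grid of normalized intensity histograms,
    an illustrative version of the histogram-matrix feature."""
    feats = []
    for band in np.array_split(patch, m, axis=0):
        for cell in np.array_split(band, n, axis=1):
            h, _ = np.histogram(cell, bins=bins, range=(0.0, 1.0))
            feats.append(h / max(h.sum(), 1))   # normalize each region
    return np.array(feats)

def bhattacharyya(h1, h2):
    """Mean Bhattacharyya coefficient across regions (1.0 = identical)."""
    return float(np.sum(np.sqrt(h1 * h2)) / h1.shape[0])

rng = np.random.default_rng(2)
target = rng.random((16, 16))
background = rng.random((16, 16))
self_sim = bhattacharyya(histogram_matrix(target), histogram_matrix(target))
cross_sim = bhattacharyya(histogram_matrix(target), histogram_matrix(background))
print(round(self_sim, 6))     # 1.0: a patch matches itself exactly
print(cross_sim < self_sim)   # True: the background scores lower
```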

  18. Motion tracking in undergraduate physics laboratories with the Wii remote

    NASA Astrophysics Data System (ADS)

    Tomarken, Spencer L.; Simons, Dallas R.; Helms, Richard W.; Johns, Will E.; Schriver, Kenneth E.; Webster, Medford S.

    2012-04-01

    We report the incorporation of the Wiimote, a light-tracking remote control device, into two undergraduate-level experiments. We provide an overview of the Wiimote's basic functions and a systematic analysis of its motion tracking capabilities. We describe the Wiimote's use in measuring conservation of linear and angular momentum on an air table, and measuring the gravitational constant with the classic Cavendish torsion pendulum. Our results show that the Wiimote is a simple and affordable way to streamline the data acquisition process and produce results that are generally superior to those obtained with conventional techniques.

  19. Does fluid infiltration affect the motion of sediment grains? - A 3-D numerical modelling approach using SPH

    NASA Astrophysics Data System (ADS)

    Bartzke, Gerhard; Rogers, Benedict D.; Fourtakas, Georgios; Mokos, Athanasios; Huhn, Katrin

    2016-04-01

    The processes that cause the creation of a variety of sediment morphological features, e.g. laminated beds, ripples, or dunes, are based on the initial motion of individual sediment grains. However, with experimental techniques it is difficult to measure the flow characteristics, i.e., the velocity of the pore water flow in sediments, at a sufficient resolution and in a non-intrusive way. As a result, the role of fluid infiltration at the surface and in the interior affecting the initiation of motion of a sediment bed is not yet fully understood. Consequently, there is a strong need for numerical models, since these are capable of quantifying fluid driven sediment transport processes of complex sediment beds composed of irregular shapes. The numerical method Smoothed Particle Hydrodynamics (SPH) satisfies this need. As a meshless and Lagrangian technique, SPH is ideally suited to simulating flows in sediment beds composed of various grain shapes, but also flow around single grains at a high temporal and spatial resolution. The solver chosen is DualSPHysics (www.dual.sphysics.org) since this is validated for a range of flow conditions. For the present investigation a 3-D numerical flume model was generated using SPH with a length of 4.0 cm, a width of 0.05 cm and a height of 0.2 cm where mobile sediment particles were deposited in a recess. An experimental setup was designed to test sediment configurations composed of irregular grain shapes (grain diameter, D50=1000 μm). Each bed consisted of 3500 mobile objects. After the bed generation process, the entire domain was flooded with 18 million fluid particles. To drive the flow, an oscillating motion perpendicular to the bed was applied to the fluid, reaching a peak value of 0.3 cm/s, simulating 4 seconds of real time. The model results showed that flow speeds decreased logarithmically from the top of the domain towards the surface of the beds, indicating a fully developed boundary layer. Analysis of the fluid

  20. Object motion tracking in the NDE laboratory by random sample iterative closest point

    NASA Astrophysics Data System (ADS)

    Radkowski, Rafael; Wehr, David; Gregory, Elizabeth; Holland, Stephen D.

    2016-02-01

    We present a computationally efficient technique for real-time motion tracking in the NDE laboratory. Our goal is to track object shapes in a flash thermography test stand to determine the position and orientation of the specimen, which facilitates registering thermography data to a 3D part model. Object shapes can be different specimens and fixtures. Specimens can be manually aligned at any test stand; the position and orientation of every a-priori known shape can be computed and forwarded to the data management software. Our technique relies on a random sample consensus (RANSAC) approach to the iterative closest point (ICP) problem for identifying object shapes and is therefore robust across different situations. The paper introduces the computational techniques and experiments along with the results.
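
The RANSAC-ICP combination above rests on the classic ICP inner loop: alternately match points to their nearest neighbors and solve for the rigid transform. As a rough illustration only (not the authors' implementation), a minimal point-to-point ICP with the SVD-based closed-form alignment step can be sketched in Python with NumPy:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Closed-form least-squares rigid transform (R, t) taking the
    point set src onto dst (equal length, matched order), via SVD."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=20):
    """Basic point-to-point ICP: match each source point to its nearest
    destination point, solve for the rigid transform, repeat."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_fit_transform(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur
```

A RANSAC wrapper, as in the paper, would repeatedly estimate the pose from randomly sampled correspondences and keep the consensus-maximizing result, which is what makes the identification robust to outliers and clutter.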

  1. Video object motion tracking: a structured versus unstructured mesh topology

    NASA Astrophysics Data System (ADS)

    Badawy, Wael

    2001-11-01

    This paper presents a novel concept for a very low bit rate video codec. It uses a new hierarchical adaptive structured mesh topology. The proposed video codec can be used in wireless video applications. It uses structures to model the dynamics of the video object, where the proposed adaptive structure splitting significantly reduces the number of bits used for mesh description. Moreover, it reduces the latency of motion estimation and compensation operations. A comprehensive performance study compares the proposed mesh-based motion tracking with commonly used techniques. It shows the superiority of the proposed concept compared to current MPEG techniques.

  2. An evaluation of 3-D velocity models of the Kanto basin for long-period ground motion simulations

    NASA Astrophysics Data System (ADS)

    Dhakal, Yadab P.; Yamanaka, Hiroaki

    2013-07-01

    We performed three-dimensional (3-D) finite difference simulations of long-period ground motions (2-10 s) in the Kanto basin using the Japan Seismic Hazard Information Station (J-SHIS 2009), Yamada and Yamanaka (Exploration Geophysics 65(3):139-150, 2012) (YY), and Headquarters for Earthquake Research Promotion (HERP 2012) velocity models for two intermediate-depth (68-80 km) moderate earthquakes (Mw 5.8-5.9) which occurred beneath the Kanto basin. The models primarily differ in the basic data sets used in their construction. The J-SHIS and HERP models are the result of integrating mainly geological, geophysical, and earthquake data. The YY model, on the other hand, is oriented towards microtremor array observation data. We obtained a goodness of fit between the observed and synthetic data based on three parameters, peak ground velocities (PGVs), smoothed Fourier spectra (FFT), and cross-correlations, using an algorithm proposed by Olsen and Mayhew (Seism Res Lett 81:715-723, 2010). We found that the three models reproduced the PGVs and FFT satisfactorily at most sites. However, the models performed poorly in terms of cross-correlations, especially at the basin edges. We found that the synthetics using the YY model overestimate the observed waveforms at several sites located in areas having Vs 0.3 km/s in the top layer; on the other hand, the J-SHIS and HERP models explain the waveforms better at these sites and perform similarly at most sites. We also found that the J-SHIS and HERP models contain thick sediments beneath some sites where the YY model is preferable. Thus, we have concluded that the models require revisions for the reliable prediction of long-period ground motions from future large earthquakes.
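
For intuition, goodness-of-fit scores of the kind cited compare an observed and a synthetic metric (e.g. PGV) on a 0-100 scale via the complementary error function. The sketch below shows only the general erfc-based shape of such a score; the exact constants and metric definitions used by the paper's algorithm should be taken from Olsen and Mayhew (2010):

```python
import math

def gof_score(obs, synth):
    """Erfc-based goodness-of-fit for a positive scalar metric such as
    PGV: 100 means perfect agreement, values near 0 mean a poor fit.
    The constants here are illustrative, not the paper's exact ones."""
    return 100.0 * math.erfc(2.0 * abs(obs - synth) / (obs + synth))
```

Because erfc(0) = 1, identical observed and synthetic values score exactly 100, and the score decays smoothly as the relative misfit grows.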

  3. Vehicle tracking in wide area motion imagery from an airborne platform

    NASA Astrophysics Data System (ADS)

    van Eekeren, Adam W. M.; van Huis, Jasper R.; Eendebak, Pieter T.; Baan, Jan

    2015-10-01

    Airborne platforms, such as UAVs, with Wide Area Motion Imagery (WAMI) sensors can cover multiple square kilometers and produce large amounts of video data. Analyzing all of the data to meet information needs becomes increasingly labor-intensive for an image analyst. Furthermore, the capacity of the datalink in operational areas may be inadequate to transfer all data to the ground station. Automatic detection and tracking of people and vehicles makes it possible to send only the most relevant footage to the ground station and assists image analysts in effective data searches. In this paper, we propose a method for detecting and tracking vehicles in high-resolution WAMI images from a moving airborne platform. For vehicle detection we use a cascaded set of classifiers, trained with the Adaboost algorithm on Haar features. This detector works on individual images and therefore does not depend on image motion stabilization. For vehicle tracking we use a local template matching algorithm. This approach has two advantages. First, it does not depend on image motion stabilization, and it counters the inaccuracy of the GPS data embedded in the video. Second, it can find matches when the vehicle detector misses a detection. This results in long tracks even when the imagery has a low frame rate. To minimize false detections, we also integrate height information from a 3D reconstruction created from the same images. By using the locations of buildings and roads, we are able to filter out false detections and increase the performance of the tracker. In this paper we show that the vehicle tracks can also be used to detect more complex events, such as traffic jams and fast-moving vehicles. This enables the image analyst to search the data faster and more effectively.
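
The template matching step can be illustrated with normalized cross-correlation. The paper's tracker searches only locally around the previous position; this brute-force NumPy sketch scans the whole frame and is intended purely as an illustration of the matching criterion:

```python
import numpy as np

def match_template(frame, tmpl):
    """Normalized cross-correlation template matching. Returns the
    (row, col) of the best match of tmpl within frame. A local-search
    variant around the last known position would be far cheaper."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r+th, c:c+tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * tn
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

The zero-mean normalization makes the score invariant to local brightness and contrast changes, which matters for aerial imagery with varying illumination.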

  4. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. The scheme relies on exploiting motion segmentation. More precisely, we propagate hypotheses from the particle filter to blobs whose motion is similar to the target's. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy for updating the target model improve the performance of particle filtering in complex occlusion situations compared to a simple bootstrap approach, as shown by our experiments on real fish tank sequences.
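
For context, the "simple bootstrap approach" that the authors improve upon looks roughly like the following minimal 1D sketch. This is illustrative only; the paper's contribution is precisely to replace the blind prediction step with importance samples drawn from motion-segmented blobs:

```python
import numpy as np

def bootstrap_filter(obs, n=500, q=0.5, r=1.0, rng=None):
    """Minimal bootstrap particle filter for a 1D random-walk state
    observed in Gaussian noise (process std q, observation std r).
    Returns the filtered posterior mean at each step."""
    if rng is None:
        rng = np.random.default_rng(0)
    parts = rng.normal(obs[0], 1.0, size=n)
    means = []
    for z in obs:
        parts = parts + rng.normal(0.0, q, size=n)    # blind prediction
        w = np.exp(-0.5 * ((z - parts) / r) ** 2)     # weight by likelihood
        w /= w.sum()
        means.append(float(np.sum(w * parts)))
        parts = parts[rng.choice(n, size=n, p=w)]     # resample
    return means
```

In the paper's scheme, the prediction step would instead concentrate particles on image blobs whose estimated motion matches the target, so fewer samples are wasted in empty regions of the state space.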

  5. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation, are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get around this issue, formulating hologram reconstruction as a parametric inverse problem has been shown to accurately estimate the 3D positions and sizes of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it leads to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σx × σy × σz = 0.15 × 0.15 × 1 pixels.

  6. Shoulder 3D range of motion and humerus rotation in two volleyball spike techniques: injury prevention and performance.

    PubMed

    Seminati, Elena; Marzari, Alessandra; Vacondio, Oreste; Minetti, Alberto E

    2015-06-01

    Repetitive stresses and movements on the shoulder in the volleyball spike expose this joint to overuse injuries that can threaten an athlete's career. Assuming that specific spike techniques play an important role in injury risk, we compared the kinematics of the traditional (TT) and alternative (AT) techniques in 21 elite athletes, evaluating their safety with respect to performance. The glenohumeral joint was set as the centre of an imaginary sphere, intersected by the distal end of the humerus at different angles. Shoulder range of motion and angular velocities were calculated and compared to the joint limits. Ball speed and jump height were also assessed. Results indicated that the trajectory of the humerus differed for the TT, with maximal flexion of the shoulder reduced by 10 degrees and horizontal abduction 15 degrees higher. No difference was found for external rotation angles, while axial rotation velocities were significantly higher in AT, with a 5% higher ball speed. The results suggest AT as a potential preventive solution to chronic shoulder pathologies, as it reduces shoulder flexion during spiking. The proposed method allows visualisation of the risks associated with different overhead manoeuvres by depicting humerus angles and velocities with respect to joint limits in the same 3D space. PMID:26151344

  7. Does close proximity robot motion tracking alter gait?

    PubMed

    Yamokoski, John D; Banks, Scott A

    2011-10-01

    Dynamic fluoroscopic imaging and three-dimensional model-image registration techniques provide detailed joint kinematic measurements for motions constrained to small volumes of space. Several groups are working to mount radiographic imaging hardware onto mobile platforms to provide these same imaging capabilities for observation of unrestricted activities. These dynamic radiographic imaging systems could provide accurate skeletal kinematics during a wide range of clinically relevant daily activities. However, the premise that people move naturally when followed by a dynamic imaging system has not been evaluated. The goal of this study was to determine whether a close-up robot tracking system affects natural free-speed gait. Fourteen healthy adults were recruited to walk through the workspace of a dynamic radiographic imaging system. Randomized walking trials were performed with and without the dynamic imaging system tracking the motions of the subject's left knee. With- and without-robot trials were compared using detailed temporal-spatial and frequency analysis of kinematic and kinetic parameters. On average, participants increased their stride length by 0.9 cm. There were also slight increases in unexplained variation in ankle flexion/extension and ground reaction forces compared to baseline measurements. The statistically significant differences indicate that, on average, people tried to move faster through the workspace of the dynamic radiographic imaging system while it was actively tracking their motion. These differences are small and potentially clinically irrelevant.

  8. SU-E-J-80: Interplay Effect Between VMAT Intensity Modulation and Tumor Motion in Hypofractionated Lung Treatment, Investigated with a 3D Presage Dosimeter

    SciTech Connect

    Touch, M; Wu, Q; Oldham, M

    2014-06-01

    Purpose: To demonstrate an embedded tissue-equivalent Presage dosimeter for measuring 3D doses in moving tumors and to study the interplay effect between tumor motion and intensity modulation in hypofractionated Volumetric Modulated Arc Therapy (VMAT) lung treatment. Methods: Motion experiments were performed using cylindrical Presage dosimeters (5 cm diameter by 7 cm length) mounted inside the lung insert of a CIRS thorax phantom. Two different VMAT treatment plans were created and delivered in three different scenarios with the same prescribed dose of 18 Gy. Plan 1, containing a 2 cm spherical CTV with an additional 2 mm setup margin, was delivered on a stationary phantom. Plan 2 used the same CTV, expanded by 1 cm in the sup-inf direction, to generate the ITV and PTV, respectively. The dosimeters were irradiated in static and variable motion scenarios on a Truebeam system. After irradiation, high-resolution 3D dosimetry was performed using the Duke Large Field-of-view Optical-CT Scanner and compared to the calculated dose from Eclipse. Results: In the control case (no motion), good agreement was observed between the planned and delivered dose distributions, as indicated by a 100% 3D gamma (3% of maximum planned dose and 3 mm DTA) passing rate in the CTV. In the motion cases, the gamma passing rate was 99% in the CTV. DVH comparisons also showed good agreement between the planned and delivered dose in the CTV for both the control and motion cases. However, differences of 15% and 5% in dose to the PTV were observed in the motion and control cases, respectively. Conclusion: Given the very high dose per fraction of a hypofractionated treatment, a significant effect was observed only when motion was introduced to the target. This can result from the interplay between the moving target and the modulation of the MLC. 3D optical dosimetry can be of great advantage in hypofractionated treatment dose validation studies.
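
The gamma passing rates quoted above combine a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1D sketch of a global gamma analysis follows; it illustrates the metric only and is not the clinical analysis tool used in the study:

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing=1.0, dta=3.0, dd=0.03):
    """1D global gamma analysis: dd is the dose-difference criterion as
    a fraction of the maximum reference dose, dta is the
    distance-to-agreement in the same units as spacing. Returns the
    percentage of evaluated points with gamma <= 1."""
    dd_abs = dd * ref.max()
    x = np.arange(len(ref)) * spacing
    passed = 0
    for i, d in enumerate(meas):
        dist = (x - x[i]) / dta                 # normalized distance term
        dose = (ref - d) / dd_abs               # normalized dose term
        gamma = np.sqrt(dist ** 2 + dose ** 2).min()
        passed += bool(gamma <= 1.0)
    return 100.0 * passed / len(meas)
```

A point passes if some nearby reference point agrees in dose once both criteria are normalized, which is why shifted or blurred distributions can still score well within the DTA tolerance.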

  9. Calculating the Probability of Strong Ground Motions Using 3D Seismic Waveform Modeling - SCEC CyberShake

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Callaghan, S.; Graves, R.; Mehta, G.; Zhao, L.; Deelman, E.; Jordan, T. H.; Kesselman, C.; Okaya, D.; Cui, Y.; Field, E.; Gupta, V.; Vahi, K.; Maechling, P. J.

    2006-12-01

    Researchers from the SCEC Community Modeling Environment (SCEC/CME) project are utilizing the CyberShake computational platform and a distributed high performance computing environment that includes USC High Performance Computer Center and the NSF TeraGrid facilities to calculate physics-based probabilistic seismic hazard curves for several sites in the Southern California area. Traditionally, probabilistic seismic hazard analysis (PSHA) is conducted using intensity measure relationships based on empirical attenuation relationships. However, a more physics-based approach using waveform modeling could lead to significant improvements in seismic hazard analysis. Members of the SCEC/CME Project have integrated leading-edge PSHA software tools, SCEC-developed geophysical models, validated anelastic wave modeling software, and state-of-the-art computational technologies on the TeraGrid to calculate probabilistic seismic hazard curves using 3D waveform-based modeling. The CyberShake calculations for a single probabilistic seismic hazard curve require tens of thousands of CPU hours and multiple terabytes of disk storage. The CyberShake workflows are run on high performance computing systems including multiple TeraGrid sites (currently SDSC and NCSA), and the USC Center for High Performance Computing and Communications. To manage the extensive job scheduling and data requirements, CyberShake utilizes a grid-based scientific workflow system based on the Virtual Data System (VDS), the Pegasus meta-scheduler system, and the Globus toolkit. Probabilistic seismic hazard curves for spectral acceleration at 3.0 seconds have been produced for eleven sites in the Southern California region, including rock and basin sites. At low ground motion levels, there is little difference between the CyberShake and attenuation relationship curves. At higher ground motion (lower probability) levels, the curves are similar for some sites (downtown LA, I-5/SR-14 interchange) but different for

  10. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.

    PubMed

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-01-01

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, Correlation Filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for CFs and use the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to the ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs favorably compared with the top-ranked trackers. PMID:27618046

  11. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters

    PubMed Central

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-01-01

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, Correlation Filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for CFs and use the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to the ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs favorably compared with the top-ranked trackers. PMID:27618046
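
For intuition, the correlation-filter machinery underlying trackers like KCF can be illustrated with a minimal single-sample filter trained in the Fourier domain. This is a MOSSE-style sketch, deliberately simpler than the kernelized, multi-sample filter described in the paper:

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-3):
    """Train a single-sample correlation filter whose response to the
    training patch is a centred Gaussian peak. lam regularizes the
    per-frequency division (ridge regression in the Fourier domain)."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    g = np.exp(-((yy - h // 2) ** 2 + (xx - w // 2) ** 2) / (2 * sigma ** 2))
    F = np.fft.fft2(patch)
    G = np.fft.fft2(g)
    return np.conj(F) * G / (np.conj(F) * F + lam)

def respond(H, patch):
    """Correlate a new patch with the filter; the location of the
    response peak gives the target's translation."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
```

When the target shifts between frames, the Fourier shift theorem moves the response peak by the same amount, which is what makes correlation filters fast: detection is a few FFTs rather than an exhaustive search.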

  12. Real time markerless motion tracking using linked kinematic chains

    DOEpatents

    Luck, Jason P.; Small, Daniel E.

    2007-08-14

    A markerless method is described for tracking the motion of subjects in a three-dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed of the subject and tracked using three-dimensional volumetric data collected by a multiple-camera video imaging system. A physics-based method is then used to compute forces to align the model with subsequent volumetric data sets in real time. The method is able to handle occlusion of segments, accommodates joint limits, velocity constraints, and collision constraints, and provides for error recovery. The method further provides for elimination of singularities in Jacobian-based calculations, which has been problematic in alternative methods.
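
One standard remedy for the Jacobian singularities mentioned at the end of the abstract is damped least squares (Levenberg-Marquardt style regularization); the sketch below shows that general remedy, with no claim that it is the patent's specific mechanism:

```python
import numpy as np

def dls_step(J, err, damping=0.1):
    """One damped least-squares update for a kinematic chain:
    dq = J^T (J J^T + lambda^2 I)^{-1} err. The damping term keeps the
    update bounded even when J loses rank near a singularity."""
    lam2 = damping ** 2
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + lam2 * np.eye(JJt.shape[0]), err)
```

With damping set to zero this reduces to the ordinary pseudo-inverse step, which blows up at singular configurations; the small diagonal term trades a little accuracy for bounded, stable joint updates.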

  13. Motion patterns in activities of daily living: 3-year longitudinal follow-up after total shoulder arthroplasty using an optical 3D motion analysis system

    PubMed Central

    2014-01-01

    Background: Total shoulder arthroplasty (TSA) can improve function in osteoarthritic shoulders, but the ability to perform activities of daily living (ADLs) can still remain impaired. Routinely, shoulder surgeons measure range of motion (ROM) using a goniometer. Objective data are limited, however, concerning functional three-dimensional changes in ROM in ADLs after TSA in patients with degenerative glenohumeral osteoarthritis. Methods: This study included ten consecutive patients who received TSA for primary glenohumeral osteoarthritis. The patients were examined the day before, and 6 months and 3 years after, shoulder replacement. We compared them with a control group (n = 10) without any shoulder pathology and measured shoulder movement by 3D motion analysis using a novel 3D model. The measurement included static maximum values, the ability to perform the ADLs, and the ROM used in the ADLs “combing the hair”, “washing the opposite armpit”, “tying an apron”, and “taking a book from a shelf”. Results: Six months after surgery, almost all TSA patients were able to perform the four ADLs (3 out of 40 tasks could not be performed by the 10 patients); 3 years postoperatively all patients were able to carry out all ADLs (40 out of 40 tasks possible). Comparison of the preoperative with the 6-month and 3-year postoperative status of the TSA group showed that the subjects did not fully use the available maximum flexion/extension ROM in performing the four ADLs. The ROM used for flexion/extension did not change significantly (preoperatively 135°-0°-34° vs. 3 years postoperatively 131°-0°-53°). For abduction/adduction, the ROM improved significantly from 33°-0°-27° preoperatively to 76°-0°-35° postoperatively. Compared to the controls (118°), the TSA group used less ROM for abduction to perform the four ADLs 3 years postoperatively. Conclusion: TSA improves the ability to perform ADLs and the individual ROM in ADLs in patients with

  14. Tracking 'differential organ motion' with a 'breathing' multileaf collimator: magnitude of problem assessed using 4D CT data and a motion-compensation strategy

    NASA Astrophysics Data System (ADS)

    McClelland, J. R.; Webb, S.; McQuaid, D.; Binnie, D. M.; Hawkes, D. J.

    2007-08-01

    Intrafraction tumour (e.g. lung) motion due to breathing can, in principle, be compensated for by applying identical breathing motions to the leaves of a multileaf collimator (MLC) as intensity-modulated radiation therapy is delivered by the dynamic MLC (DMLC) technique. A difficulty arising, however, is that irradiated voxels which are in line with a bixel at one breathing phase (at which the treatment plan has been made) may move such that they cease to be in line with that breathing bixel at another phase. This is the phenomenon of differential voxel motion, a very real problem that existing tracking solutions have ignored and for which no tracking solution exists. There is, however, a strategy that can be applied, in which the leaf breathing is determined so as to minimize, in a least-squares sense, the geometrical mismatch in irradiating differentially-moving voxels. A 1D formulation in very restricted circumstances is already in the literature and has been applied to some model breathing situations which can be studied analytically. These are, however, highly artificial. This paper presents the general 2D formulation of the problem, including allowing different importance factors to be applied to the planning target volume and organ at risk (or, most generally, to each voxel). The strategy also extends the literature strategy to the situation where the number of voxels connecting to a bixel is variable. Additionally, the phenomenon of 'cross-leaf-track/channel' voxel motion is formally addressed. The general equations are presented and analytic results are given for some 1D, artificially contrived motions based on the Lujan equations of breathing motion.
    Further to this, 3D clinical voxel motion data have been extracted from 4D CT measurements both to assess the magnitude of the problem of 2D motion perpendicular to the beam-delivery axis in clinical practice and to find the 2D optimum breathing-leaf strategy
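
In the least-squares strategy described, each bixel's leaf position must trade off the displacements of several differentially moving voxels, weighted by their importance factors. A toy 1D version of that weighted least-squares choice (illustrative only; the paper's formulation is 2D and far more general):

```python
import numpy as np

def leaf_shift(voxel_shifts, weights):
    """Weighted least-squares choice of a single rigid leaf shift s
    minimizing sum_i w_i (s - d_i)^2 over the voxel displacements d_i
    a bixel irradiates: the minimizer is the weighted mean."""
    w = np.asarray(weights, dtype=float)
    d = np.asarray(voxel_shifts, dtype=float)
    return float((w * d).sum() / w.sum())
```

Setting a large weight on a PTV voxel and small weights on normal-tissue voxels pulls the leaf toward tracking the target, which is how the importance factors encode the clinical trade-off.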

  15. 3D velocity field of present-day crustal motion of the Tibetan Plateau derived from GPS measurements

    NASA Astrophysics Data System (ADS)

    Gan, W.

    2013-12-01

    Using measurements from 564 GPS stations around the Tibetan plateau spanning over 10 years, we derived a high-resolution 3D velocity field for the present-day crustal motion of the plateau with improved precision. The horizontal velocity field of the plateau relative to stable Eurasia displays in detail the crustal movement and tectonic deformation features of the India-Eurasia continental collision zone, with thrust compression, lateral extrusion and clockwise rotation. The vertical velocities reveal that the plateau is still rising as a whole relative to its stable northern neighbor. However, in some subregions uplift is insignificant or even negative. The main features of the vertical crustal deformation are: a) The Himalayan range is rising at a rate of ~3 mm/yr, the most significant in the whole plateau; the uplift rate of the Himalayan range is ~6 mm/yr relative to its southern foot. b) The mid-eastern plateau has a typical uplift rate between 1 and 2 mm/yr, and some high mountain ranges in this area have surprisingly large uplift rates of 2-3 mm/yr. c) In the mid-southern plateau, there is a basin and endorheic subregion with a series of NE-striking normal faults, showing obvious sinking at rates between 0 and -4 mm/yr. d) The present-day rising and sinking subregions generally correspond well to the Cenozoic orogenic belts and basins, respectively. e) At the southeastern corner of the plateau, although the horizontal velocity field demonstrates an outstanding clockwise rotation and the fan-like front of a flow zone, the vertical velocity field does not show a general uplift or tilt trend. (Figures: horizontal GPS velocities of the Tibetan plateau relative to stable Eurasia; vertical GPS velocities relative to its stable northern neighbor.)

  16. Crosstalk minimization in autostereoscopic multiview 3D display by eye tracking and fusion (overlapping) of viewing zones

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ki-Hyuk

    2012-06-01

    An autostereoscopic 3D display provides binocular perception without eyeglasses, but crosstalk weakens the 3D effect and induces dizziness. Crosstalk-related problems degrade the 3D effect, clarity, and realism of the 3D image. A novel method of reducing the crosstalk is designed and tested; the method is based on the fusion (overlapping) of viewing zones guided by real-time eye position. It is shown experimentally that the crosstalk is effectively reduced at any position around the optimal viewing distance.

  17. Direct measurement of particle size and 3D velocity of a gas-solid pipe flow with digital holographic particle tracking velocimetry.

    PubMed

    Wu, Yingchun; Wu, Xuecheng; Yao, Longchao; Gréhan, Gérard; Cen, Kefa

    2015-03-20

    The 3D measurement of particles in a gas-solid pipe flow is of great interest, but remains challenging due to the curved pipe walls encountered in various engineering applications. Because of the astigmatism induced by the pipe, concentric elliptical fringes are observed in the holograms of spherical particles. With a theoretical analysis of the particle holography by an ABCD matrix, the in-focus particle image can be reconstructed by a modified convolution method and the fractional Fourier transform. Thereafter, the particle size, 3D position, and velocity are simultaneously measured by digital holographic particle tracking velocimetry (DHPTV). The successful application of DHPTV to particle size and 3D velocity measurement in a glass pipe flow can facilitate 3D diagnostics of such flows.
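
The ABCD (ray-transfer) matrix analysis mentioned above composes a 2x2 matrix per optical element and applies the product to a ray's (height, angle) vector. The minimal sketch below uses a thin lens as a crude stand-in for the astigmatic pipe wall, which is an assumption for illustration only; the paper's analysis handles the pipe's actual curved interfaces:

```python
import numpy as np

def free_space(d):
    """ABCD ray-transfer matrix for free-space propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """ABCD matrix of a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def propagate(ray, *elements):
    """Apply ABCD elements to a ray (height, angle), the element
    nearest the ray first."""
    m = np.eye(2)
    for e in elements:
        m = e @ m
    return m @ ray
```

Composing the matrices for both transverse planes separately is how astigmatism enters: the pipe behaves like a lens in one plane and nearly like free space in the other, turning circular fringes into the concentric ellipses the authors observe.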

  18. Respiratory motion compensation for simultaneous PET/MR based on a 3D-2D registration of strongly undersampled radial MR data: a simulation study

    NASA Astrophysics Data System (ADS)

    Rank, Christopher M.; Heußer, Thorsten; Flach, Barbara; Brehm, Marcus; Kachelrieß, Marc

    2015-03-01

    We propose a new method for PET/MR respiratory motion compensation, which is based on a 3D-2D registration of strongly undersampled MR data and a) runs in parallel with the PET acquisition, b) can be interlaced with clinical MR sequences, and c) requires less than one minute of the total MR acquisition time per bed position. In our simulation study, we applied a 3D encoded radial stack-of-stars sampling scheme with 160 radial spokes per slice and an acquisition time of 38 s. Gated 4D MR images were reconstructed using a 4D iterative reconstruction algorithm. Based on these images, motion vector fields were estimated using our newly-developed 3D-2D registration framework. A 4D PET volume of a patient with eight hot lesions in the lungs and upper abdomen was simulated and MoCo 4D PET images were reconstructed based on the motion vector fields derived from MR. For evaluation, average SUVmean values of the artificial lesions were determined for a 3D, a gated 4D, a MoCo 4D and a reference (with ten-fold measurement time) gated 4D reconstruction. Compared to the reference, 3D reconstructions yielded an underestimation of SUVmean values due to motion blurring. In contrast, gated 4D reconstructions showed the highest variation of SUVmean due to low statistics. MoCo 4D reconstructions were only slightly affected by these two sources of uncertainty resulting in a significant visual and quantitative improvement in terms of SUVmean values. Whereas temporal resolution was comparable to the gated 4D images, signal-to-noise ratio and contrast-to-noise ratio were close to the 3D reconstructions.

  19. Motion management during IMAT treatment of mobile lung tumors—A comparison of MLC tracking and gated delivery

    PubMed Central

    Falk, Marianne; Pommer, Tobias; Keall, Paul; Korreman, Stine; Persson, Gitte; Poulsen, Per; Munck af Rosenschöld, Per

    2014-01-01

    Purpose: To compare real-time dynamic multileaf collimator (MLC) tracking, respiratory amplitude and phase gating, and no compensation for intrafraction motion management during intensity modulated arc therapy (IMAT). Methods: Motion management with MLC tracking and gating was evaluated for four lung cancer patients. The IMAT plans were delivered to a dosimetric phantom mounted onto a 3D motion phantom performing patient-specific lung tumor motion. The MLC tracking system was guided by an optical system that used stereoscopic infrared (IR) cameras and five spherical reflecting markers attached to the dosimetric phantom. The gated delivery used a duty cycle of 35% and collected position data using an IR camera and two reflecting markers attached to a marker block. Results: The average gamma index failure rate (2%/2 mm criteria) was <0.01% with amplitude gating for all patients, and <0.1% with phase gating and <3.7% with MLC tracking for three of the four patients. One patient had an average failure rate of 15.1% with phase gating and 18.3% with MLC tracking. With no motion compensation, the average gamma index failure rate ranged from 7.1% to 46.9% across patients. Evaluation of the dosimetric error contributions showed that the gated delivery mainly had errors in target localization, while MLC tracking also had contributions from MLC leaf fitting and leaf adjustment. The average treatment time was about three times longer with gating compared to delivery with MLC tracking (which did not prolong the treatment time) or no motion compensation. For two of the patients, the different motion compensation techniques allowed approximately the same margin reduction, but for the other two, gating enabled a larger reduction of the

  20. Proof of concept of MRI-guided tracked radiation delivery: tracking one-dimensional motion.

    PubMed

    Crijns, S P M; Raaymakers, B W; Lagendijk, J J W

    2012-12-01

    In radiotherapy one aims to deliver a radiation dose to a tumour with high geometrical accuracy while sparing organs at risk (OARs). Although image guidance decreases geometrical uncertainties, treatment of cancer of abdominal organs is further complicated by respiratory motion, requiring intra-fraction motion compensation to fulfil the treatment intent. With an ideal delivery system, the optimal method of intra-fraction motion compensation is to adapt the beam collimation to the moving target using a dynamic multi-leaf collimator (MLC) aperture. The many guidance strategies for such tracked radiation delivery tested up to now mainly use markers and are therefore invasive and cannot deal with target deformations or adaptations for OAR positions. We propose to address these shortcomings using the online MRI guidance provided by an MRI accelerator and present a first step towards demonstration of the technical feasibility of this proposal. The position of a phantom subjected to one-dimensional (1D) periodic translation was tracked using a fast 1D MR sequence. Real-time communication with the MR scanner and control of the MLC aperture were established. Based on the time-resolved position of the phantom, tracked radiation delivery to the phantom was realized. Dose distributions for various delivery conditions were recorded on a gafchromic film. Without motion a sharply defined dose distribution is obtained, whereas considerable blur occurs for delivery to a moving phantom. With compensation for motion, the sharpness of the dose distribution is nearly restored. The total latency in our motion management architecture is approximately 200 ms. Combination of the recorded phantom and aperture positions with the planned dose distribution enabled the reconstruction of the delivered dose in all cases, which illustrates the promise of online dose accumulation and confirms that latency compensation could further enhance our results. 
For a simple 1D tracked delivery scenario, the
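The ~200 ms latency quoted above is commonly mitigated by predicting the target position ahead of the delivery system's lag. A minimal sketch using linear extrapolation (a hypothetical stand-in for the more sophisticated predictors used in practice):

```python
def predict_ahead(t, x, latency):
    """Predict the target position `latency` seconds past the last sample
    by linear extrapolation from the two most recent (time, position) samples."""
    v = (x[-1] - x[-2]) / (t[-1] - t[-2])  # instantaneous velocity estimate
    return x[-1] + v * latency
```

For slowly varying respiratory motion sampled fast enough, this removes most of the latency-induced positional lag; in practice the velocity estimate would be smoothed over more than two samples.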

  1. Automatic acquisition of motion trajectories: tracking hockey players

    NASA Astrophysics Data System (ADS)

    Okuma, Kenji; Little, James J.; Lowe, David

    2003-12-01

Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include sports games, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as their trajectories, querying them by analyzing the descriptive information of the data, and predicting the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More accurately, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision such as fast and unpredictable players' motions and rapid camera motions make our challenge worth tackling. To the best of our knowledge, there have not been any automatic video annotation systems for hockey developed in the past. Although there are many obstacles to overcome, our efforts and accomplishments would hopefully establish the infrastructure of the automatic hockey annotation system and become a milestone for

  2. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Much of the technical groundwork for this transition has been laid, but the interaction of the third dimension with human viewers is not yet well understood. It has previously been found that any increased load on the visual system, such as prolonged TV watching, computer work, or video gaming, can create visual fatigue. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of a third dimension by exploiting the characteristics of binocular vision. In this work we evaluate and compare the visual fatigue induced by watching 2D and S3D content, showing the difference in how visual fatigue accumulates, and how it is assessed, for the two types of content. For this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations. It was found that watching stereoscopic 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  3. Accuracy of tumor motion compensation algorithm from a robotic respiratory tracking system: A simulation study

    SciTech Connect

    Seppenwoolde, Yvette; Berbeco, Ross I.; Nishioka, Seiko; Shirato, Hiroki; Heijmen, Ben

    2007-07-15

The Synchrony™ Respiratory Tracking System (RTS) is a treatment option of the CyberKnife robotic treatment device to irradiate extra-cranial tumors that move due to respiration. Advantages of RTS are that patients can breathe normally and that there is no loss of linac duty cycle such as with gated therapy. Tracking is based on a measured correspondence model (linear or polynomial) between internal tumor motion and external (chest/abdominal) marker motion. The radiation beam follows the tumor movement via the continuously measured external marker motion. To establish the correspondence model at the start of treatment, the 3D internal tumor position is determined at 15 discrete time points by automatic detection of implanted gold fiducials in two orthogonal x-ray images; simultaneously, the positions of the external markers are measured. During the treatment, the relationship between internal and external marker positions is continuously accounted for and is regularly checked and updated. Here we use computer simulations based on continuously and simultaneously recorded internal and external marker positions to investigate the effectiveness of tumor tracking by the RTS. The CyberKnife does not allow continuous acquisition of x-ray images to follow the moving internal markers (typical imaging frequency is once per minute). Therefore, for the simulations, we have used data for eight lung cancer patients treated with respiratory gating. All of these patients had simultaneous and continuous recordings of both internal tumor motion and external abdominal motion. The available continuous relationship between internal and external markers for these patients allowed investigation of the consequences of the lower acquisition frequency of the RTS. With the use of the RTS, simulated treatment errors due to breathing motion were largely and consistently reduced over treatment time for all studied patients. A considerable part of the maximum reduction in treatment error
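The linear-or-polynomial correspondence model described above amounts to an ordinary polynomial fit of internal tumor position against external marker amplitude. A sketch with synthetic numbers (illustrative only, not the vendor's algorithm):

```python
import numpy as np

# ~15 simultaneous samples acquired at the start of treatment:
# external marker amplitude (mm) and internal tumor position (mm)
external = np.linspace(0.0, 10.0, 15)
internal = 1.2 * external + 0.05 * external**2 + 3.0  # synthetic ground truth

# fit a quadratic correspondence model: internal = f(external)
model = np.polyfit(external, internal, deg=2)

def tumor_position(ext_amplitude):
    """During delivery, the continuously measured external signal drives
    the beam via the fitted correspondence model."""
    return np.polyval(model, ext_amplitude)
```

In the real system the model is regularly checked against fresh x-ray localizations and updated when the internal/external relationship drifts.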

  4. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  5. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  7. Tracking Arabia-India motion from Miocene to Present

    NASA Astrophysics Data System (ADS)

    Chamot-Rooke, N. R.; Fournier, M.

    2009-12-01

Although small, the present-day Arabia-India motion has been captured by several global and regional geodetic surveys that consistently show dextral motion of a few mm/yr, either transpressive or transtensive (Fournier et al., 2008). This motion is accommodated along the Owen Fracture Zone, an active strike-slip boundary that runs for more than 700 km from the Somalia-India-Arabia triple junction in the south to the Dalrymple trough in the north. Two recent marine cruises conducted along this fault aboard the BHO Beautemps-Beaupré (AOC 2006 and OWEN 2009) using a high resolution multibeam sounder (Simrad EM120, 10 m vertical resolution) provided a complete map of the active fault and confirmed a present-day pure dextral motion. The surface breaks closely follow a small circle of the Arabia-India motion, with several pull-apart basins at the junctions between the main segments of the fault. Geomorphologic offsets reach 10 km, suggesting that the mapped fault has been active with the same style for the past several million years. When did this motion start? The difficulty in tracking the past Arabia-India motion is that there is no direct kinematic indicator available, since the boundary has been strike-slip and/or convergent during the Tertiary. Motion was most probably sinistral during the rapid northward travelling of India towards Eurasia in the early Tertiary, Arabia being rigidly attached to Africa until the opening of the Gulf of Aden. However, the nature and location of the Arabia-India boundary at that time remain speculative. Throughout the Miocene, the relative motion between India and Arabia has been indirectly recorded at the Sheba and Carlsberg ridges, the former recording Arabia-Somalia motion (opening of the Gulf of Aden) and the latter India-Somalia motion (Indian Ocean opening). Both ridges have been studied in some detail recently, using up-to-date magnetic lineation identifications (Merkouriev and DeMets, 2006; Fournier et al., 2009). We combine
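The few-mm/yr relative motion quoted above follows from rigid-plate kinematics: the surface velocity at a point is v = ω × r for rotation about an Euler pole. A sketch with an illustrative pole (the numbers below are not a published Arabia-India pole):

```python
import numpy as np

R_EARTH_M = 6371e3  # mean Earth radius

def _unit(lat_deg, lon_deg):
    """Unit position vector for a point given in geographic coordinates."""
    la, lo = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(la) * np.cos(lo), np.cos(la) * np.sin(lo), np.sin(la)])

def plate_velocity_mm_yr(pole_lat, pole_lon, omega_deg_per_myr, lat, lon):
    """Surface velocity (Cartesian, mm/yr) at (lat, lon) for a rigid rotation
    of omega_deg_per_myr about the Euler pole (pole_lat, pole_lon)."""
    w = np.radians(omega_deg_per_myr) * _unit(pole_lat, pole_lon)  # rad/Myr
    v_m_per_myr = np.cross(w, R_EARTH_M * _unit(lat, lon))
    return v_m_per_myr * 1e-3  # m/Myr -> mm/yr
```

A sanity check: for a pole at the north pole and a rotation rate of 1°/Myr, a point on the equator moves at about 111 mm/yr, the classic degree-of-arc-per-Myr rate.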

  8. Surrogate-driven deformable motion model for organ motion tracking in particle radiation therapy

    NASA Astrophysics Data System (ADS)

    Fassi, Aurora; Seregni, Matteo; Riboldi, Marco; Cerveri, Pietro; Sarrut, David; Battista Ivaldi, Giovanni; Tabarelli de Fatis, Paola; Liotta, Marco; Baroni, Guido

    2015-02-01

    The aim of this study is the development and experimental testing of a tumor tracking method for particle radiation therapy, providing the daily respiratory dynamics of the patient’s thoraco-abdominal anatomy as a function of an external surface surrogate combined with an a priori motion model. The proposed tracking approach is based on a patient-specific breathing motion model, estimated from the four-dimensional (4D) planning computed tomography (CT) through deformable image registration. The model is adapted to the interfraction baseline variations in the patient’s anatomical configuration. The driving amplitude and phase parameters are obtained intrafractionally from a respiratory surrogate signal derived from the external surface displacement. The developed technique was assessed on a dataset of seven lung cancer patients, who underwent two repeated 4D CT scans. The first 4D CT was used to build the respiratory motion model, which was tested on the second scan. The geometric accuracy in localizing lung lesions, mediated over all breathing phases, ranged between 0.6 and 1.7 mm across all patients. Errors in tracking the surrounding organs at risk, such as lungs, trachea and esophagus, were lower than 1.3 mm on average. The median absolute variation in water equivalent path length (WEL) within the target volume did not exceed 1.9 mm-WEL for simulated particle beams. A significant improvement was achieved compared with error compensation based on standard rigid alignment. The present work can be regarded as a feasibility study for the potential extension of tumor tracking techniques in particle treatments. Differently from current tracking methods applied in conventional radiotherapy, the proposed approach allows for the dynamic localization of all anatomical structures scanned in the planning CT, thus providing complete information on density and WEL variations required for particle beam range adaptation.

  9. SU-E-T-562: Motion Tracking Optimization for Conformal Arc Radiotherapy Plans: A QUASAR Phantom Based Study

    SciTech Connect

    Xu, Z; Wang, I; Yao, R; Podgorsak, M

    2015-06-15

Purpose: This study uses plan parameter optimization (dose rate, collimator angle, couch angle, initial starting phase) to improve the performance of conformal arc radiotherapy plans with motion tracking by increasing the plan performance score (PPS). Methods: Two types of 3D conformal arc plans were created based on a QUASAR respiratory motion phantom with spherical and cylindrical targets. A sinusoidal model was applied to the MLC leaves to generate motion tracking plans. A MATLAB program was developed to calculate the PPS of each plan (ranging from 0 to 1) and optimize plan parameters. We first selected the dose rate for motion tracking plans and then used a simulated annealing algorithm to search for the combination of the other parameters that resulted in the plan of the maximal PPS. The optimized motion tracking plan was delivered by a Varian TrueBeam linac. In-room cameras and a stopwatch were used for starting phase selection and synchronization between phantom motion and plan delivery. Gafchromic EBT2 dosimetry films were used to measure the dose delivered to the target in the QUASAR phantom. Dose profiles and TrueBeam trajectory log files were used for plan delivery performance evaluation. Results: For the spherical target, the maximal PPS (PPSsph) of the optimized plan was 0.79 (dose rate: 500 MU/min, collimator: 90°, couch: +10°, starting phase: 0.83π). For the cylindrical target, the maximal PPScyl was 0.75 (dose rate: 300 MU/min, collimator: 87°, starting phase: 0.97π) with the couch at 0°. Differences of dose profiles between motion tracking plans (with the maximal and the minimal PPS) and 3D conformal plans were as follows: PPSsph=0.79: %ΔFWHM: 8.9%, %Dmax: 3.1%; PPSsph=0.52: %ΔFWHM: 10.4%, %Dmax: 6.1%. PPScyl=0.75: %ΔFWHM: 4.7%, %Dmax: 3.6%; PPScyl=0.42: %ΔFWHM: 12.5%, %Dmax: 9.6%. Conclusion: By achieving a high plan performance score through parameter optimization, we can improve target dose conformity of motion tracking plans by decreasing total MLC leaf travel distance.
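The simulated-annealing search over plan parameters can be sketched generically; the score function below is a toy stand-in for the plan performance score, peaked near the parameter values reported for the spherical target:

```python
import math
import random

def simulated_annealing(score, x0, step, n_iter=2000, t0=1.0, cooling=0.999, seed=42):
    """Maximize `score` over a real parameter vector by simulated annealing."""
    rng = random.Random(seed)
    x, s = list(x0), score(x0)
    best_x, best_s = list(x), s
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.uniform(-st, st) for xi, st in zip(x, step)]
        sc = score(cand)
        # accept improvements always; worse moves with Boltzmann probability
        if sc > s or rng.random() < math.exp((sc - s) / t):
            x, s = cand, sc
            if s > best_s:
                best_x, best_s = list(x), s
        t *= cooling
    return best_x, best_s

def toy_pps(p):
    """Hypothetical smooth score peaked at collimator 90°, couch 10°, phase 0.83π."""
    col, couch, phase = p
    return math.exp(-((col - 90.0) ** 2 / 200.0
                      + (couch - 10.0) ** 2 / 50.0
                      + (phase - 0.83 * math.pi) ** 2))
```

Starting from a deliberately poor parameter set, the annealer climbs toward the peak; the real study's score would be computed from the delivered dose distribution rather than a closed-form function.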

  10. Respiration induced fiducial motion tracking in ultrasound using an extended SFA approach

    NASA Astrophysics Data System (ADS)

    Cao, Kunlin; Bednarz, Bryan; Smith, L. S.; Foo, Thomas K. F.; Patwardhan, Kedar A.

    2015-03-01

Radiation therapy (RT) plays an essential role in the management of cancers. The precision of the treatment delivery process in chest and abdominal cancers is often impeded by respiration induced tumor positional variations, which are accounted for by using larger therapeutic margins around the tumor volume, leading to sub-optimal treatment deliveries and risk to healthy tissue. Real-time tracking of tumor motion during RT will help reduce unnecessary margin area and benefit cancer patients by allowing the treatment volume to closely match the positional variation of the tumor volume over time. In this work, we propose a fast approach which enables transferring the pre-estimated target (e.g. tumor) motion extracted from ultrasound (US) image sequences in a training stage (e.g. before RT) to online data in real-time (e.g. acquired during RT). The method is based on extracting feature points of the target object, exploiting a low-dimensional description of the feature motion through slow feature analysis, and finding the most similar image frame from the training data for estimating the current/online object location. The approach is evaluated on two 2D + time and one 3D + time US acquisitions. The locations of six annotated fiducials are used for designing experiments and validating tracking accuracy. The average fiducial distance between the expert's annotation and the location extracted from our indexed training frame is 1.9 ± 0.5 mm. Adding a fast template matching procedure within a small search range reduces the distance to 1.4 ± 0.4 mm. The tracking time per frame is on the order of a millisecond, which is below the frame acquisition time.
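The training-frame lookup described above (find the stored frame whose slow-feature descriptor best matches the current one) reduces to a nearest-neighbour search in the low-dimensional feature space; a minimal sketch with hypothetical two-component descriptors:

```python
import numpy as np

def nearest_training_frame(train_features, query):
    """Return the index of the training frame whose low-dimensional feature
    vector is closest (Euclidean distance) to the online query descriptor."""
    d = np.linalg.norm(train_features - query, axis=1)
    return int(np.argmin(d))
```

The fiducial position annotated on the indexed training frame is then reported as the current target location, optionally refined by template matching within a small search range, which is why the per-frame cost stays at the millisecond level.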

  11. Effect of Task-Correlated Physiological Fluctuations and Motion in 2D and 3D Echo-Planar Imaging in a Higher Cognitive Level fMRI Paradigm

    PubMed Central

    Ladstein, Jarle; Evensmoen, Hallvard R.; Håberg, Asta K.; Kristoffersen, Anders; Goa, Pål E.

    2016-01-01

Purpose: To compare 2D and 3D echo-planar imaging (EPI) in a higher cognitive level fMRI paradigm. In particular, to study the link between the presence of task-correlated physiological fluctuations and motion and the fMRI contrast estimates from either 2D EPI or 3D EPI datasets, with and without adding nuisance regressors to the model. A signal model in the presence of partly task-correlated fluctuations is derived, and predictions for contrast estimates with and without nuisance regressors are made. Materials and Methods: Thirty-one healthy volunteers were scanned using 2D EPI and 3D EPI during a virtual environmental learning paradigm. In a subgroup of 7 subjects, heart rate and respiration were logged, and the correlation with the paradigm was evaluated. FMRI analysis was performed using models with and without nuisance regressors. Differences in the mean contrast estimates were investigated by analysis-of-variance using Subject, Sequence, Day, and Run as factors. The distributions of group level contrast estimates were compared. Results: Partially task-correlated fluctuations in respiration, heart rate and motion were observed. Statistically significant differences were found in the mean contrast estimates between the 2D EPI and 3D EPI when using a model without nuisance regressors. The inclusion of nuisance regressors for cardiorespiratory effects and motion reduced the difference to a statistically non-significant level. Furthermore, the contrast estimate values shifted more when including nuisance regressors for 3D EPI compared to 2D EPI. Conclusion: The results are consistent with 3D EPI having a higher sensitivity to fluctuations compared to 2D EPI. In the presence of partially task-correlated physiological fluctuations or motion, proper correction is necessary to obtain expectation-correct contrast estimates when using 3D EPI. 
As such task-correlated physiological fluctuations or motion are difficult to avoid in paradigms exploring higher cognitive functions, 2
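The bias from omitting a task-correlated nuisance regressor can be demonstrated with an ordinary least-squares GLM on synthetic data (all numbers below are illustrative, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
task = (np.arange(n) % 40 < 20).astype(float)      # block-design task regressor
nuisance = 0.6 * task + rng.standard_normal(n)     # partly task-correlated signal
y = 2.0 * task + 1.5 * nuisance + 0.1 * rng.standard_normal(n)

# full model: task + nuisance + intercept
X_full = np.column_stack([task, nuisance, np.ones(n)])
beta_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

# reduced model: nuisance omitted, so the task estimate absorbs its
# task-correlated component and is biased upward
X_red = np.column_stack([task, np.ones(n)])
beta_red = np.linalg.lstsq(X_red, y, rcond=None)[0]
```

In this simulation, beta_full[0] stays close to the true task effect of 2.0, while beta_red[0] is inflated by the correlated part of the nuisance signal, mirroring the paper's point that proper nuisance correction is needed for expectation-correct contrast estimates.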

  12. HSA: integrating multi-track Hi-C data for genome-scale reconstruction of 3D chromatin structure.

    PubMed

    Zou, Chenchen; Zhang, Yuping; Ouyang, Zhengqing

    2016-03-02

    Genome-wide 3C technologies (Hi-C) are being increasingly employed to study three-dimensional (3D) genome conformations. Existing computational approaches are unable to integrate accumulating data to facilitate studying 3D chromatin structure and function. We present HSA ( http://ouyanglab.jax.org/hsa/ ), a flexible tool that jointly analyzes multiple contact maps to infer 3D chromatin structure at the genome scale. HSA globally searches the latent structure underlying different cleavage footprints. Its robustness and accuracy outperform or rival existing tools on extensive simulations and orthogonal experiment validations. Applying HSA to recent in situ Hi-C data, we found the 3D chromatin structures are highly conserved across various human cell types.

  13. Computer numerical control (CNC) lithography: light-motion synchronized UV-LED lithography for 3D microfabrication

    NASA Astrophysics Data System (ADS)

    Kim, Jungkwun; Yoon, Yong-Kyu; Allen, Mark G.

    2016-03-01

    This paper presents a computer-numerical-controlled ultraviolet light-emitting diode (CNC UV-LED) lithography scheme for three-dimensional (3D) microfabrication. The CNC lithography scheme utilizes sequential multi-angled UV light exposures along with a synchronized switchable UV light source to create arbitrary 3D light traces, which are transferred into the photosensitive resist. The system comprises a switchable, movable UV-LED array as a light source, a motorized tilt-rotational sample holder, and a computer-control unit. System operation is such that the tilt-rotational sample holder moves in a pre-programmed routine, and the UV-LED is illuminated only at desired positions of the sample holder during the desired time period, enabling the formation of complex 3D microstructures. This facilitates easy fabrication of complex 3D structures, which otherwise would have required multiple manual exposure steps as in the previous multidirectional 3D UV lithography approach. Since it is batch processed, processing time is far less than that of the 3D printing approach at the expense of some reduction in the degree of achievable 3D structure complexity. In order to produce uniform light intensity from the arrayed LED light source, the UV-LED array stage has been kept rotating during exposure. UV-LED 3D fabrication capability was demonstrated through a plurality of complex structures such as V-shaped micropillars, micropanels, a micro-‘hi’ structure, a micro-‘cat’s claw,’ a micro-‘horn,’ a micro-‘calla lily,’ a micro-‘cowboy’s hat,’ and a micro-‘table napkin’ array.

  14. SU-E-J-199: Evaluation of Motion Tracking Effects On Stereotactic Body Radiotherapy of Abdominal Targets

    SciTech Connect

    Monterroso, M; Dogan, N; Yang, Y

    2014-06-01

    Purpose: To evaluate the effects of respiratory motion on the delivered dose distribution of CyberKnife motion tracking-based stereotactic body radiotherapy (SBRT) of abdominal targets. Methods: Four patients (two pancreas and two liver, and all with 4DCT scans) were retrospectively evaluated. A plan (3D plan) using CyberKnife Synchrony was optimized on the end-exhale phase in the CyberKnife's MultiPlan treatment planning system (TPS), with 40Gy prescribed in 5 fractions. A 4D plan was then created following the 4D planning utility in the MultiPlan TPS, by recalculating dose from the 3D plan beams on all 4DCT phases, with the same prescribed isodose line. The other seven phases of the 4DCT were then deformably registered to the end-exhale phase for 4D dose summation. Doses to the target and organs at risk (OAR) were compared between 3D and 4D plans for each patient. The mean and maximum doses to duodenum, liver, spinal cord and kidneys, and doses to 5cc of duodenum, 700cc of liver, 0.25cc of spinal cord and 200cc of kidneys were used. Results: Target coverage in the 4D plans was about 1% higher for two patients and about 9% lower in the other two. OAR dose differences between 3D and 4D varied among structures, with doses as much as 8.26Gy lower or as much as 5.41Gy higher observed in the 4D plans. Conclusion: The delivered dose can be significantly different from the planned dose for both the target and OAR close to the target, which is caused by the relative geometry change while the beams chase the moving target. Studies will be performed on more patients in the future. The differences of motion tracking versus passive motion management with the use of internal target volumes will also be investigated.

  15. Unstructured grids in 3D and 4D for a time-dependent interface in front tracking with improved accuracy

    SciTech Connect

    Glimm, J.; Grove, J. W.; Li, X. L.; Li, Y.; Xu, Z.

    2002-01-01

Front tracking traces the dynamic evolution of an interface separating different materials or fluid components. In this paper, the authors describe three types of grid generation methods used in the front tracking method. One is the unstructured surface grid. The second is a structured grid-based reconstruction method. The third is a time-space grid, also grid based, for a conservative tracking algorithm with improved accuracy.

  16. An Assessment of a Low-Cost Visual Tracking System (VTS) to Detect and Compensate for Patient Motion during SPECT

    PubMed Central

    McNamara, Joseph E.; Bruyant, Philippe; Johnson, Karen; Feng, Bing; Lehovich, Andre; Gu, Songxiang; Gennert, Michael A.; King, Michael A.

    2008-01-01

    Patient motion is inevitable in SPECT and PET due to the lengthy period of time patients are imaged and patient motion can degrade diagnostic accuracy. The goal of our studies is to perfect a methodology for tracking and correcting patient motion when it occurs. In this paper we report on enhancements to the calibration, camera stability, accuracy of motion tracking, and temporal synchronization of a low-cost visual tracking system (VTS) we are developing. The purpose of the VTS is to track the motion of retro-reflective markers on stretchy bands wrapped about the chest and abdomen of patients. We have improved the accuracy of 3D spatial calibration by using a MATLAB optical camera calibration package with a planar calibration pattern. This allowed us to determine the intrinsic and extrinsic parameters for stereo-imaging with our CCD cameras. Locations in the VTS coordinate system are transformed to the SPECT coordinate system by a VTS/SPECT mapping using a phantom of 7 retro-reflective spheres each filled with a drop of Tc99m. We switched from pan, tilt and zoom (PTZ) network cameras to fixed network cameras to reduce the amount of camera drift. The improved stability was verified by tracking the positions of fixed retro-reflective markers on a wall. The ability of our VTS to track movement, on average, with sub-millimeter and sub-degree accuracy was established with the 7-sphere phantom for 1 cm vertical and axial steps as well as for an arbitrary rotation and translation. The difference in the time of optical image acquisition as decoded from the image headers relative to synchronization signals sent to the SPECT system was used to establish temporal synchrony between optical and list-mode SPECT acquisition. Two experiments showed better than 100 ms agreement between VTS and SPECT observed motion for three axial translations. 
We were able to track 3 reflective markers on an anthropomorphic phantom with a precision that allowed us to correct motion such that no
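The VTS-to-SPECT coordinate mapping described above is, at its core, a point-based rigid registration between paired marker positions. A minimal sketch of such a mapping using the standard Kabsch/SVD method is shown below; this is a generic illustration of the technique, not the authors' implementation, and the synthetic seven-point check is hypothetical:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid mapping (rotation R, translation t) taking paired
    3D points src -> dst, via the Kabsch/SVD method."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection (det = -1) in the recovered rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check with 7 points: recover a known rotation and translation
rng = np.random.default_rng(0)
pts = rng.normal(size=(7, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With real marker data the recovered transform would map VTS camera coordinates into SPECT gantry coordinates; the residuals of the fit give a direct estimate of calibration accuracy.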

  17. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  18. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm

    PubMed Central

    Molaei, Mehdi; Sheng, Jian

    2014-01-01

    A better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Because of their low scattering efficiency, however, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm that removes background noise and enhances the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve the 3D locations of E. coli bacteria in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over a substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177

  19. A common-path optical coherence tomography distance-sensor based surface tracking and motion compensation hand-held microsurgical tool

    NASA Astrophysics Data System (ADS)

    Zhang, Kang; Gehlbach, Peter; Kang, Jin U.

    2011-03-01

    Microsurgery requires constant attention to involuntary motion due to physiological tremor. In this work, we demonstrate a simple and compact hand-held microsurgical tool capable of surface tracking and motion compensation, based on a common-path optical coherence tomography (CP-OCT) distance sensor, to improve the accuracy and safety of microsurgery. The tool is miniaturized into a 15 mm-diameter plastic syringe and is capable of surface tracking at better than 5 micrometer resolution. A phantom made of Intralipid layers is used to simulate a real tissue surface, and a single-fiber integrated micro-dissector serves as the surgical tip to perform tracking and accurate incision on the phantom surface. The micro-incision depth is evaluated after each operation through a fast 3D scan by the Fourier-domain OCT system. The results obtained with the surface tracking and motion compensation tool show significant improvement over freehand operation.

  20. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model

    PubMed Central

    Song, Jin-Myoung; Cho, Jin-Hyoung

    2016-01-01

    Purpose The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scans. Materials and Methods Fifteen dry skulls were mounted on a motion controller that simulated four types of head motion during the CBCT scan: two horizontal rotations (to the right/to the left) and two vertical rotations (upward/downward). Each movement was triggered by remote control to occur for 1 second at the start of the scan. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Results Rendered surface models with head motion were similar in appearance to the control model; however, the landmark identification errors were larger in the models with head motion than in the control. In particular, the Porion in the horizontal rotation models showed statistically significant differences (P < .05). A statistically significant difference in the errors between the right- and left-side landmarks was present for the leftward rotation, which was opposite in direction to the scanner rotation (P < .05). Conclusions Patient movement during a CBCT scan can cause landmark identification errors on the 3D surface model that depend on the direction of the scanner rotation. Clinicians should take this into consideration and prevent patient movement during CBCT scans, particularly horizontal movement. PMID:27065238

  1. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt sensor to measure the probe orientation. Real-time 3D visualization was realized by developing an extended version of the PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues is continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the circle of Willis, and the cerebellar arteries, whose blood flow is of great interest to pediatricians because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since pulsatile tissue motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  2. How Fast Is Your Body Motion? Determining a Sufficient Frame Rate for an Optical Motion Tracking System Using Passive Markers

    PubMed Central

    Song, Min-Ho; Godøy, Rolf Inge

    2016-01-01

    This paper addresses how to determine a sufficient frame (sampling) rate for an optical motion tracking system using passive reflective markers. When passive markers are used for optical motion tracking, avoiding identity confusion between the markers becomes a problem as the speed of motion increases, necessitating a higher frame rate to avoid tracking failures caused by marker confusions and/or dropouts. Initially, one might believe that the Nyquist-Shannon sampling rate estimated from the assumed maximal temporal variation of a motion (i.e., a sampling rate at least twice the maximum motion frequency) would be the complete solution to the problem. However, this paper shows that the spatial distance between the markers should also be taken into account when determining a suitable frame rate for optical motion tracking with passive markers. A frame rate criterion for optical tracking with passive markers is theoretically derived and experimentally verified using a high-quality optical motion tracking system. Both the theoretical and the experimental results showed that the minimum frame rate is proportional to the ratio of the maximum speed of the motion to the minimum spacing between markers, and can be predicted precisely if the proportionality constant is known in advance. The inverse of the proportionality constant is here defined as the tracking efficiency constant, and it can easily be determined with a few test measurements. Moreover, this newly defined constant provides a new way of evaluating the tracking-algorithm performance of an optical tracking system. PMID:26967900
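The criterion above (minimum frame rate proportional to the ratio of maximum marker speed to minimum marker spacing) can be sketched in a few lines. The function name and the example numbers below are illustrative; the tracking efficiency constant would have to be measured for a specific system, as the paper describes:

```python
def min_frame_rate(v_max_mm_s, d_min_mm, efficiency=1.0):
    """Estimate the minimum camera frame rate (Hz) for passive-marker
    optical tracking, following the proportionality described in the
    abstract: f_min = v_max / (C * d_min), where C is the empirically
    determined tracking efficiency constant."""
    if d_min_mm <= 0 or v_max_mm_s < 0:
        raise ValueError("speed must be >= 0 and marker spacing > 0")
    return v_max_mm_s / (efficiency * d_min_mm)

# Example: markers 20 mm apart, fastest marker moving at 2 m/s,
# a hypothetical tracking efficiency constant of 0.5
rate = min_frame_rate(2000.0, 20.0, efficiency=0.5)
print(rate)  # 200.0 (Hz)
```

Note how halving the marker spacing or doubling the motion speed doubles the required frame rate, independent of any Nyquist argument about the motion's frequency content.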

  3. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs

    PubMed Central

    Delparte, D; Gates, RD; Takabayashi, M

    2015-01-01

    The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190
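Among the 3D complexity metrics the study quantifies, rugosity is commonly defined as the ratio of true 3D surface area to planar projected area. A minimal sketch of computing such a metric from a triangulated SfM mesh is given below; it is a generic illustration of the definition, not the study's geospatial workflow:

```python
import numpy as np

def rugosity(vertices, faces):
    """Surface rugosity of a triangulated mesh: ratio of total 3D surface
    area to the area of its projection onto the horizontal (XY) plane.
    A flat horizontal surface gives 1.0; structural complexity raises it."""
    v = np.asarray(vertices, float)
    total, planar = 0.0, 0.0
    for i, j, k in faces:
        a, b = v[j] - v[i], v[k] - v[i]
        c = np.cross(a, b)
        total += 0.5 * np.linalg.norm(c)   # true triangle area
        planar += 0.5 * abs(c[2])          # XY-projected triangle area
    return total / planar

# A unit square tilted 60 degrees from horizontal projects to half its
# area, so its rugosity is 1 / cos(60 deg) = 2
verts = [(0, 0, 0), (1, 0, 0),
         (1, 0.5, np.sqrt(3) / 2), (0, 0.5, np.sqrt(3) / 2)]
faces = [(0, 1, 2), (0, 2, 3)]
print(round(rugosity(verts, faces), 6))  # 2.0
```

In practice the mesh would come from the SfM reconstruction, and the planar reference would be the digital elevation model's footprint rather than a simple XY projection.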

  4. Ocular tracking responses to background motion gated by feature-based attention.

    PubMed

    Souto, David; Kerzel, Dirk

    2014-09-01

    Involuntary ocular tracking responses to background motion offer a window on the dynamics of motion computations. In contrast to spatial attention, little is known about the role of feature-based attention in determining this ocular response. To probe feature-based effects of background motion on involuntary eye movements, we presented human observers with a balanced background perturbation: two clouds of dots moved in opposite vertical directions while observers tracked a target moving in the horizontal direction. Additionally, they had to discriminate a change in the direction of motion (±10° from vertical) of one of the clouds. A vertical ocular following response occurred in response to the motion of the attended cloud. When motion selection was based on both the motion direction and the color of the dots, the peak velocity of the tracking response was 30% of the tracking response elicited in a single task with only one direction of background motion. In two other experiments, we tested the effect of the perturbation when motion selection was based on color (with motion direction varying unpredictably) or on motion direction alone. Although the gain of pursuit in the horizontal direction was significantly reduced in all experiments, indicating a trade-off between perceptual and oculomotor tasks, ocular responses to perturbations were observed only when selection was based on both motion direction and color. It appears that selection by motion direction can be effective in driving ocular tracking only when the relevant elements can be segregated before motion onset.

  5. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    SciTech Connect

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has recently been studied for US-guided intervention. However, existing techniques are limited in either registration speed or performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As the feature(s) for the rigid registration, either the internal liver vessels or the inferior vena cava may be chosen; since the latter is especially useful in patients with diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, 2D US images are acquired in real time from the fixed-pose transducer. For each US image, candidates for its corresponding 2D preoperative slice are selected from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  6. Dynamic simulation and modeling of the motion modes produced during the 3D controlled manipulation of biological micro/nanoparticles based on the AFM.

    PubMed

    Saraee, Mahdieh B; Korayem, Moharam H

    2015-08-01

    Determining the motion modes and the exact position of a particle displaced during the manipulation process is of special importance. The issue becomes even more important when the studied particles are biological micro/nanoparticles and the goals of manipulation are the transfer of these particles within body cells, the repair of cancerous cells, and the delivery of medication to damaged cells. Because of the delicate nature of biological nanoparticles and their greater vulnerability, obtaining the manipulation force required for the intended motion mode lets us prevent the sample from interlocking with or sticking to the substrate (when the applied force is too weak) and avoid damaging the sample (when the force is excessive). In this paper, the dynamic behaviors and motion modes of biological micro/nanoparticles such as DNA, yeast, platelets, and bacteria under 3D manipulation have been investigated. Since these nanoparticles are generally cylindrical in shape, cylindrical contact models have been employed to model more precisely the forces exerted on the nanoparticle during manipulation. This investigation also presents a comprehensive modeling and simulation of all the possible motion modes in 3D manipulation, taking into account the eccentricity of the load applied to the biological nanoparticle. The obtained results indicate that, unlike at the macroscopic scale, sliding of a nanoparticle on the substrate at the nano-scale occurs sooner than the other motion modes, while spinning about the vertical and transverse axes and rolling of the nanoparticle occur later. The simulation results also indicate that the force required to initiate nanoparticle movement, and the resulting motion mode, depend on the size and aspect ratio of the nanoparticle.

  7. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are waves with long periods and wavelengths that can cause great devastation and loss of life when they strike a coast. Interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes that analyze tsunami propagation and inundation patterns, FLOW 3D and NAMI DANCE, are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite-difference method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) for long-wave problems, specifically tsunamis. To validate the two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach; the experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem, discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA, is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons show that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted, and all results are presented with discussions and comparisons. Acknowledgements: Partial support by the Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in the Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT
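The depth-averaged long-wave equations that codes like NAMI DANCE solve in 2D have a well-known 1D linearized analogue, which can be stepped with a simple staggered-grid finite-difference scheme. The sketch below (linearized equations, flat bottom, closed boundaries) is a toy illustration of that class of scheme, not the actual algorithm of either code:

```python
import numpy as np

def sw_step(eta, u, h0, dx, dt, g=9.81):
    """One update of the 1D linearized shallow-water equations on a
    staggered grid: eta (free surface, m) at n cell centres, u (depth-
    averaged velocity, m/s) at n+1 faces, closed ends (u[0] = u[-1] = 0)."""
    u = u.copy()
    u[1:-1] -= g * dt / dx * np.diff(eta)    # momentum: du/dt = -g d(eta)/dx
    eta = eta - h0 * dt / dx * np.diff(u)    # continuity: d(eta)/dt = -h0 du/dx
    return eta, u

# Propagate a Gaussian hump over a 10 km basin of 50 m depth
n, dx, h0 = 100, 100.0, 50.0
dt = 0.5 * dx / np.sqrt(9.81 * h0)           # CFL-limited time step
x = (np.arange(n) + 0.5) * dx
eta = np.exp(-((x - 5000.0) / 500.0) ** 2)   # initial free-surface hump
u = np.zeros(n + 1)
mass0 = eta.sum() * dx
for _ in range(200):
    eta, u = sw_step(eta, u, h0, dx, dt)
print(np.isclose(eta.sum() * dx, mass0))     # True: water volume conserved
```

The closed boundaries make volume conservation an easy correctness check; real tsunami codes add nonlinear advection, friction, variable bathymetry, and wetting/drying for inundation.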

  8. Patient motion tracking in the presence of measurement errors.

    PubMed

    Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter

    2009-01-01

    The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcomes. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations remain. Measurement noise and unintended changes in the operating room environment can result in major errors, and positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgical procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and to apply the appropriate compensation with an average positioning error of 1.24 mm after 2 s of setup time.
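The optical-tracking-plus-Kalman-filtering combination described above can be sketched with a minimal constant-velocity Kalman filter on one position coordinate. This is a generic textbook filter, not the paper's algorithm, and all parameter values (dt, q, r) are illustrative:

```python
import numpy as np

def kalman_track(measurements, dt=0.02, q=1e-3, r=0.25):
    """Minimal 1D constant-velocity Kalman filter: smooths noisy position
    measurements (e.g. from an optical tracker sampled every dt seconds)
    and returns the filtered position estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                  # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],   # process noise covariance
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                         # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])    # initial state
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

est = kalman_track([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
```

With a large measurement-noise covariance the estimates deliberately lag a moving target; tuning q and r trades responsiveness against noise rejection, which is the core design decision in any motion-compensation filter.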

  9. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    NASA Astrophysics Data System (ADS)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H.; Meeks, Sanford L.; Kupelian, Patrick A.

    2010-09-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the dose delivered to a 3D moving lung tumor and its surrounding normal tissues, which undergo subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan created with the Pinnacle Treatment Planning System (Philips) for one of the 3D CTs of the 4D CT, and predicts the amount and location of radiation dose deposited inside the lung. The 4D CT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and its surrounding tissues was simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increment in lung volume during the beam delivery period. The simulation results demonstrated the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4D CT steps (approximately 12) reconstructed over multiple breathing cycles.
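The dose-summation step described above (accumulating dose over discrete tumor positions along a breathing trace) can be sketched as a simple loop. The function and parameter names, the lookup, and the toy numbers are all illustrative, not the paper's GPU implementation:

```python
def accumulate_dose(dose_rate, volume_trace, dt):
    """Accumulate the delivered dose (Gy) by summing the dose rate sampled
    at each discrete lung-volume step along a breathing trace, with each
    step lasting dt seconds."""
    total = 0.0
    for volume in volume_trace:
        total += dose_rate(volume) * dt
    return total

# Toy example: assume the dose rate at the target falls off linearly as
# the lung volume (and hence the tumor position) drifts from 3.0 L
rate = lambda v: max(0.0, 2.0 - abs(v - 3.0))   # Gy/s as a function of volume (L)
trace = [3.0, 3.2, 3.5, 3.2, 3.0]               # 5 discrete steps of one breath
print(accumulate_dose(rate, trace, dt=0.05))    # ~0.455 Gy
```

The paper's framework effectively performs this accumulation per beam and per voxel at 20 motion steps per breath, with the dose lookup coming from the deformed treatment-plan dose grid rather than a closed-form function.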

  10. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    NASA Astrophysics Data System (ADS)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

    The lack of visible and approachable ways of training surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are not widely available because of the high cost of the equipment. Using modern technologies such as virtual reality and hand-motion tracking, we aim to create an innovative method for learning surgical techniques in a 3D game format, which can make the educational process interesting and effective. Creating a 3D virtual simulator will solve several conceptual problems at once: the opportunity to improve practical skills without time limits and without risk to the patient; highly realistic operating-room environments and anatomic body structures; the use of game mechanics to ease information perception and accelerate the memorization of methods; and the accessibility of the program.

  11. Longitudinal, label-free, quantitative tracking of cell death and viability in a 3D tumor model with OCT

    NASA Astrophysics Data System (ADS)

    Jung, Yookyung; Klein, Oliver J.; Wang, Hequn; Evans, Conor L.

    2016-06-01

    Three-dimensional in vitro tumor models are highly useful tools for studying tumor growth and treatment response of malignancies such as ovarian cancer. Existing viability and treatment assessment assays, however, face shortcomings when applied to these large, complex, and heterogeneous culture systems. Optical coherence tomography (OCT) is a noninvasive, label-free, optical imaging technique that can visualize live cells and tissues over time with subcellular resolution and millimeters of optical penetration depth. Here, we show that OCT is capable of carrying out high-content, longitudinal assays of 3D culture treatment response. We demonstrate the usage and capability of OCT for the dynamic monitoring of individual and combination therapeutic regimens in vitro, including both chemotherapy drugs and photodynamic therapy (PDT) for ovarian cancer. OCT was validated against the standard LIVE/DEAD Viability/Cytotoxicity Assay in small tumor spheroid cultures, showing excellent correlation with existing standards. Importantly, OCT was shown to be capable of evaluating 3D spheroid treatment response even when traditional viability assays failed. OCT 3D viability imaging revealed synergy between PDT and the standard-of-care chemotherapeutic carboplatin that evolved over time. We believe the efficacy and accuracy of OCT in vitro drug screening will greatly contribute to the field of cancer treatment and therapy evaluation.

  12. Longitudinal, label-free, quantitative tracking of cell death and viability in a 3D tumor model with OCT

    PubMed Central

    Jung, Yookyung; Klein, Oliver J.; Wang, Hequn; Evans, Conor L.

    2016-01-01

    Three-dimensional in vitro tumor models are highly useful tools for studying tumor growth and treatment response of malignancies such as ovarian cancer. Existing viability and treatment assessment assays, however, face shortcomings when applied to these large, complex, and heterogeneous culture systems. Optical coherence tomography (OCT) is a noninvasive, label-free, optical imaging technique that can visualize live cells and tissues over time with subcellular resolution and millimeters of optical penetration depth. Here, we show that OCT is capable of carrying out high-content, longitudinal assays of 3D culture treatment response. We demonstrate the usage and capability of OCT for the dynamic monitoring of individual and combination therapeutic regimens in vitro, including both chemotherapy drugs and photodynamic therapy (PDT) for ovarian cancer. OCT was validated against the standard LIVE/DEAD Viability/Cytotoxicity Assay in small tumor spheroid cultures, showing excellent correlation with existing standards. Importantly, OCT was shown to be capable of evaluating 3D spheroid treatment response even when traditional viability assays failed. OCT 3D viability imaging revealed synergy between PDT and the standard-of-care chemotherapeutic carboplatin that evolved over time. We believe the efficacy and accuracy of OCT in vitro drug screening will greatly contribute to the field of cancer treatment and therapy evaluation. PMID:27248849

  13. Quantification of Coupled Stiffness and Fiber Orientation Remodeling in Hypertensive Rat Right-Ventricular Myocardium Using 3D Ultrasound Speckle Tracking with Biaxial Testing

    PubMed Central

    Park, Dae Woo; Sebastiani, Andrea; Yap, Choon Hwai; Simon, Marc A.; Kim, Kang

    2016-01-01

    Mechanical and structural changes of the right ventricle (RV) in response to pulmonary hypertension (PH) are inadequately understood. While current standard biaxial testing provides information on the mechanical behavior of RV tissues using surface markers, it is unable to fully assess structural and mechanical properties across the full tissue thickness. In this study, the mechanical and structural properties of normotensive and pulmonary-hypertensive right ventricular (PHRV) myocardium were examined through the full tissue thickness using mechanical testing combined with 3D ultrasound speckle tracking (3D-UST). RV pressure overload was induced in Sprague-Dawley rats by pulmonary artery (PA) banding. The second Piola-Kirchhoff stress tensors and Green-Lagrangian strain tensors were computed in the RV myocardium from the biaxial testing combined with 3D-UST. A previously established nonlinear curve-fitting algorithm was applied to fit the experimental data to a strain energy function (SEF) for computation of myofiber orientation. The fiber orientations obtained by biaxial testing with 3D-UST compared well with those computed from histology. In addition, a re-orientation of myofibers in the right ventricular free wall (RVFW) along the longitudinal (apex-to-outflow-tract) direction was noticeable in response to PH. For normotensive RVFW samples, the average fiber orientation angles obtained by 3D-UST with biaxial testing spiraled from 20° at the endocardium to -42° at the epicardium (Δ = 62°). For PHRV samples, the average fiber orientation angles spiraled much less across the tissue thickness: 3° at the endocardium to -7° at the epicardium (Δ = 10°, P<0.005 compared to normotensive). PMID:27780271

  14. Chaotic orbits tracked by a 3D asymmetric immersed solid at high Reynolds numbers using a novel Gerris-Immersed Solid (DNS) Solver

    NASA Astrophysics Data System (ADS)

    Shui, Pei; Popinet, Stéphane; Valluri, Prashant; Govindarajan, Rama

    2014-11-01

    The motion of a neutrally buoyant ellipsoidal solid with an initial momentum was theoretically predicted to be chaotic in inviscid flow by Aref (1993); on the other hand, the particle can stop moving when the damping viscous force is strong enough. This work provides numerical evidence for the 3D chaotic motion of a neutrally buoyant general ellipsoidal solid and suggests criteria for triggering this motion. The study also shows that the translational/rotational energy ratio plays the key role in the motion pattern, while the particle geometry and density aspect ratios also influence the chaotic behaviour. We have developed a novel variant of the immersed solid solver within the framework of the Gerris flow package of Popinet et al. (2003). Our solid solver, the Gerris Immersed Solid Solver (GISS), is capable of handling the six-degree-of-freedom motion of particles of arbitrary geometry and number in three dimensions and can precisely predict the hydrodynamic interactions and their effects on particle trajectories. Its reliability and accuracy have been verified against a series of classical studies, testing both translational and rotational motions across a wide range of flow properties.

  15. Automated 3D architecture reconstruction from photogrammetric structure-and-motion: A case study of the One Pillar Pagoda, Hanoi, Vietnam

    NASA Astrophysics Data System (ADS)

    To, T.; Nguyen, D.; Tran, G.

    2015-04-01

    Vietnam's heritage sites have declined because of poor conservation conditions; sustainable development requires firm control, spatial planning, and reasonable investment. Moreover, in the field of Cultural Heritage, automated photogrammetric systems based on Structure-from-Motion (SfM) techniques are widely used. With the potential for high resolution, low cost, a large field of view, ease of use, rapidity, and completeness, the derivation of 3D metric information from Structure-and-Motion images is receiving great attention. In addition, heritage objects in the form of 3D physical models are recorded not only for documentation, but also for historical interpretation, restoration, and cultural and educational purposes. This study presents the archaeological documentation of the One Pillar Pagoda in Hanoi, the capital of Vietnam. The data were acquired with a Canon EOS 550D digital camera (CMOS APS-C sensor, 22.3 x 14.9 mm). Camera calibration and orientation were carried out with the VisualSFM, CMPMVS (Multi-View Reconstruction), and SURE (Photogrammetric Surface Reconstruction from Imagery) software packages. The final result is a scaled 3D model of the One Pillar Pagoda, displayed from different views in the MeshLab software.

  16. Performance and suitability assessment of a real-time 3D electromagnetic needle tracking system for interstitial brachytherapy

    PubMed Central

    Boutaleb, Samir; Fillion, Olivier; Bonillas, Antonio; Hautvast, Gilion; Binnekamp, Dirk; Beaulieu, Luc

    2015-01-01

    Purpose: Accurate insertion and overall needle positioning are key requirements for effective brachytherapy treatments. This work aims at demonstrating the accuracy performance and the suitability of the Aurora® V1 Planar Field Generator (PFG) electromagnetic tracking system (EMTS) for real-time treatment assistance in interstitial brachytherapy procedures. Material and methods: The system's performance was characterized in two distinct studies. First, in an environment free of EM disturbance, the boundaries of the detection volume of the EMTS were characterized and a tracking error analysis was performed. Second, a distortion analysis was conducted as a means of assessing the tracking accuracy of the system in the presence of potential EM disturbance generated by the proximity of standard brachytherapy components. Results: The tracking accuracy experiments showed that positional errors were typically 2 ± 1 mm in a zone restricted to the first 30 cm of the detection volume. However, at the edges of the detection volume, sensor position errors of up to 16 mm were recorded. On the other hand, orientation errors remained low, at ± 2° for most of the measurements. The EM distortion analysis showed that the presence of typical brachytherapy components in the vicinity of the EMTS had little influence on tracking accuracy. Position errors of less than 1 mm were recorded with all components except a metallic arm support, which induced a mean absolute error of approximately 1.4 mm when located 10 cm away from the needle sensor. Conclusions: The Aurora® V1 PFG EMTS possesses great potential for real-time treatment assistance in general interstitial brachytherapy. In view of our experimental results, we recommend, however, that the needle axis remain as parallel as possible to the generator surface during treatment and that the tracking zone be restricted to the first 30 cm from the generator surface. PMID:26622231
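
    A hypothetical example of how positional error statistics of the kind quoted above (e.g. 2 ± 1 mm) are typically computed from paired tracked/reference sensor positions; the data here are made up:

```python
import numpy as np

def tracking_errors(measured, truth):
    """Per-sample Euclidean position error between measured and reference
    sensor positions (N x 3 arrays), summarized as mean and std."""
    err = np.linalg.norm(np.asarray(measured) - np.asarray(truth), axis=1)
    return err.mean(), err.std()

truth = np.zeros((4, 3))                        # reference positions (mm)
measured = np.array([[1.0, 0, 0],               # tracked positions (mm)
                     [0, 2.0, 0],
                     [0, 0, 3.0],
                     [2.0, 0, 0]])
mean_err, std_err = tracking_errors(measured, truth)
print(mean_err)  # 2.0
```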

  17. SU-E-J-156: Preclinical Investigation of Dynamic Tumor Tracking Using the Vero SBRT Linear Accelerator: Motion Phantom Dosimetry Study

    SciTech Connect

    Mamalui-Hunter, M; Wu, J; Li, Z; Su, Z

    2014-06-01

    Purpose: Following the ‘end-to-end testing’ paradigm of the Dynamic Target Tracking option of our image-guided dedicated SBRT Vero™ linac, we verify the capability of the system to deliver the planned dose to moving targets in a heterogeneous thorax phantom (CIRS™). The system includes a gimbaled C-band linac head, a robotic 6-degree-of-freedom couch, and a tumor tracking method based on predictive modeling of target position using fluoroscopically tracked implanted markers and optically tracked infrared-reflecting external markers. Methods: A 4DCT scan of the motion phantom with a Visicoil™ marker implanted in the close vicinity of the target was acquired; the ‘exhale’ (most prevalent) phase was used for planning (iPlan by BrainLab™). Typical 3D conformal SBRT treatment plans aimed to deliver 6-8 Gy/fx to two types of targets: a) a solid water-equivalent target 3 cm in diameter; b) a single Visicoil™ marker inserted within lung-equivalent material. The planning GTV/CTV-to-PTV margins were 2 mm, and the block margins were 3 mm. The dose calculated by a Monte Carlo algorithm with 1% variance using the dose-to-water option was compared to ion chamber (CC01 by IBA Dosimetry) measurements in case (a) and Gafchromic™ EBT3 film measurements in case (b). During delivery, the six target motion patterns available as standard on the CIRS™ motion phantom were investigated: in case (a), the target moved along the designated sine or cosine⁴ 3D trajectory; in case (b), the inserted marker moved sinusoidally in 1D. Results: The ion chamber measurements showed agreement with the planned dose within 1% under all studied motion conditions. The film measurements show 98.1% agreement with the planar calculated dose (gamma criteria: 3%/3 mm). Conclusion: We successfully verified the capability of the SBRT Vero™ linac to perform real-time tumor tracking and accurate dose delivery to the target, based on predictive modeling of the correlation between implanted marker motion and external marker motion.
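
    The 3%/3 mm gamma criterion used for the film comparison above can be sketched for a 1D profile. This is a simplified global-gamma illustration with synthetic profiles, not the study's analysis software:

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, x, dose_crit=0.03, dist_crit=3.0):
    """Global 1D gamma index (e.g. 3%/3 mm) between a reference and an
    evaluated dose profile sampled on the same positions x (mm)."""
    ref = np.asarray(ref_dose, float)
    ev = np.asarray(eval_dose, float)
    x = np.asarray(x, float)
    d_max = ref.max()                              # global normalization
    gamma = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (ev - di) / (dose_crit * d_max)       # dose-difference term
        dx = (x - xi) / dist_crit                  # distance-to-agreement term
        gamma[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gamma

x = np.linspace(-20, 20, 81)                       # 0.5 mm sampling
ref = np.exp(-x ** 2 / 200.0)                      # reference profile
ev = np.exp(-(x - 1.0) ** 2 / 200.0)               # delivery shifted by 1 mm
passed = (gamma_1d(ref, ev, x) <= 1.0).mean()
print(passed)  # fraction of points passing 3%/3 mm
```

A 1 mm shift is well inside the 3 mm distance-to-agreement criterion, so every point of this synthetic profile passes.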

  18. Tumor tracking and motion compensation with an adaptive tumor tracking system (ATTS): System description and prototype testing

    SciTech Connect

    Wilbert, Juergen; Meyer, Juergen; Baier, Kurt; Guckenberger, Matthias; Herrmann, Christian; Hess, Robin; Janka, Christian; Ma Lei; Mersebach, Torben; Richter, Anne; Roth, Michael; Schilling, Klaus; Flentje, Michael

    2008-09-15

    A novel system for real-time tumor tracking and motion compensation with a robotic HexaPOD treatment couch is described. The approach is based on continuous tracking of the tumor motion in portal images without implanted fiducial markers, using the therapeutic megavoltage beam, and tracking of abdominal breathing motion with optical markers. Based on the two independently acquired data sets, the table movements for motion compensation are calculated. The principle of operation of the entire prototype system is detailed first. In the second part, the performance of the HexaPOD couch was investigated with a robotic four-dimensional phantom capable of simulating real patient tumor trajectories in three-dimensional space. The performance and limitations of the HexaPOD table and the control system were characterized in terms of dynamic behavior. The maximum speed and acceleration of the HexaPOD were 8 mm/s and 34.5 mm/s² in the lateral direction, and 9.5 mm/s and 29.5 mm/s² in the longitudinal and anterior-posterior directions, respectively. Baseline drifts of the mean tumor position of realistic lung tumor trajectories could be fully compensated. For continuous tumor tracking and motion compensation, a reduction of tumor motion of up to 68% of the original amplitude was achieved. In conclusion, this study demonstrated that it is technically feasible to compensate breathing-induced tumor motion in the lung with the adaptive tumor tracking system.
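
    The speed limits quoted above become the binding constraint when the couch chases a fast breathing trajectory. A toy rate-limited follower (an assumption for illustration, not the actual HexaPOD control law) shows how a finite maximum speed leaves a residual motion amplitude:

```python
import numpy as np

def compensate(target, dt, v_max):
    """Couch position that chases the target under a speed limit v_max
    (mm/s), the dominant constraint reported for the HexaPOD."""
    couch = np.zeros_like(target)
    for i in range(1, len(target)):
        step = target[i] - couch[i - 1]
        step = np.clip(step, -v_max * dt, v_max * dt)   # rate limit
        couch[i] = couch[i - 1] + step
    return couch

dt = 0.05                                       # 20 Hz control loop (assumed)
t = np.arange(0, 20, dt)
tumor = 10.0 * np.sin(2 * np.pi * t / 4.0)      # 10 mm amplitude, 4 s period
couch = compensate(tumor, dt, v_max=8.0)        # 8 mm/s lateral limit
residual = tumor - couch
reduction = 1.0 - residual.max() / tumor.max()
print(f"residual amplitude reduced by {100 * reduction:.0f}%")
```

The peak tumor velocity here (about 15.7 mm/s) exceeds the 8 mm/s couch limit, so the residual cannot be driven to zero; only a partial amplitude reduction is achievable, consistent with the partial compensation reported above.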

  19. [Recent echocardiographic examination of the left ventricle – from M-mode to 3D speckle-tracking imaging].

    PubMed

    Nemes, Attila; Forster, Tamás

    2015-10-25

    The left ventricle has a vital role in maintaining circulation of the body; therefore, its non-invasive assessment is essential. The aim of the present review is to demonstrate the clinical relevance of different echocardiographic methods in the evaluation of the left ventricle, emphasizing the importance of the most recent three-dimensional speckle-tracking methodologies.

  20. Estimation of Rigid-Body and Respiratory Motion of the Heart From Marker-Tracking Data for SPECT Motion Correction

    PubMed Central

    Mukherjee, Joyeeta Mitra; McNamara, Joseph E.; Johnson, Karen L.; Dey, Joyoni; King, Michael A.

    2009-01-01

    Motion of patients undergoing cardiac SPECT perfusion imaging causes artifacts in the acquired images that may lead to difficulty in interpretation. Our work investigates a technique of obtaining patient motion estimates from retro-reflective markers on stretchy bands wrapped around the chest and abdomen of patients being imaged clinically. Motion signals obtained from the markers consist of at least two components: body motion (BM) and periodic motion (PM) due to respiration. We present a method for separating these components from the motion-tracking data of each marker, and then describe a method for combining the BM estimated from chest markers to estimate the 6-degree-of-freedom (6-DOF) rigid-body motion (RBM) of the heart. Motion studies of volunteers and patients are used to evaluate the methods. Illustrative examples of the motion of the heart due to patient body movement and respiration (upward creep) are presented and compared to estimates of the motion of the heart obtained directly from SPECT data. Our motion-tracking method is seen to give reasonable agreement with the motion estimates from the SPECT data while being considerably less noisy. PMID:20539825
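
    One simple way to realize the BM/PM split described above is a moving average whose window equals one breathing period: the average suppresses the periodic component and keeps the slow body motion. This is a stand-in for the paper's actual decomposition, run on synthetic data:

```python
import numpy as np

def separate_motion(signal, fs, breath_period):
    """Split a marker trace into slow body motion (BM) and periodic
    respiratory motion (PM) with a moving average one breath period long."""
    win = int(round(fs * breath_period))
    kernel = np.ones(win) / win
    # Edge-pad before the 'same' convolution to avoid boundary droop.
    padded = np.pad(signal, win // 2, mode='edge')
    bm = np.convolve(padded, kernel, mode='same')[win // 2: win // 2 + len(signal)]
    pm = signal - bm
    return bm, pm

fs = 10.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)
drift = 0.05 * t                             # slow body motion ("upward creep")
resp = 3.0 * np.sin(2 * np.pi * t / 5.0)     # 5 s breathing cycle
bm, pm = separate_motion(drift + resp, fs, breath_period=5.0)
```

Because the window spans exactly one breathing cycle, the sinusoid averages to zero in the interior of the trace and the recovered BM closely follows the drift.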

  1. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal.

    PubMed

    Hurwitz, Martina; Williams, Christopher L; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G; Mak, Raymond H; Lewis, John H

    2015-01-21

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.
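
    The correlation step described above can be illustrated with a minimal linear surrogate model, fit on a handful of 4DCT-like samples and then driven by a live surrogate trace. The numbers and the purely linear form are assumptions for illustration:

```python
import numpy as np

def fit_motion_model(surrogate, tumor):
    """Least-squares linear model tumor ~= a * surrogate + b, fit on
    pre-treatment (4DCT-like) samples."""
    A = np.column_stack([surrogate, np.ones_like(surrogate)])
    coeffs, *_ = np.linalg.lstsq(A, tumor, rcond=None)
    return coeffs                      # (slope, intercept)

def predict(coeffs, surrogate):
    return coeffs[0] * surrogate + coeffs[1]

# Training phase: tumor moves 12 mm per unit of surrogate excursion, plus offset.
phase = np.linspace(0, 2 * np.pi, 10, endpoint=False)   # 10 4DCT bins
surrogate_train = 0.5 * (1 - np.cos(phase))             # chest marker (a.u.)
tumor_train = 12.0 * surrogate_train + 3.0              # tumor position (mm)
coeffs = fit_motion_model(surrogate_train, tumor_train)

# Treatment phase: a new surrogate trace drives the model.
t = np.arange(0, 30, 0.1)
surrogate_live = 0.5 * (1 - np.cos(2 * np.pi * t / 4.0))
tumor_pred = predict(coeffs, surrogate_live)
err = np.abs(tumor_pred - (12.0 * surrogate_live + 3.0))
print(err.max())
```

With a perfectly linear phantom relationship the fit is exact; the millimeter-level errors reported above come from irregular, nonlinear patient breathing that a simple model like this cannot fully capture.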

  2. Hybrid 3-D rocket trajectory program. Part 1: Formulation and analysis. Part 2: Computer programming and user's instructions. [computerized simulation using three-dimensional motion analysis]

    NASA Technical Reports Server (NTRS)

    Huang, L. C. P.; Cook, R. A.

    1973-01-01

    Models utilizing various subsets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time-consuming for the computer to evaluate. A computer program was developed that uses three separate subsections to predict trajectories. A launch-rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section that computes motion in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard model with three linear degrees of freedom.
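
    A model with only the three linear degrees of freedom, like the trim-phase model described above, amounts to integrating a point mass under gravity and drag. A hedged sketch (coefficients invented, simple semi-implicit Euler integration):

```python
import numpy as np

def fly(v0, dt=0.01, mass=10.0, cd_area=0.01, rho=1.225, g=9.81):
    """Point-mass (three linear DOF) trajectory with quadratic drag:
    no angular equations, so no aerodynamic moment coefficients needed."""
    pos = np.zeros(3)
    vel = np.array(v0, float)
    traj = [pos.copy()]
    while pos[2] >= 0.0:                       # integrate until ground impact
        speed = np.linalg.norm(vel)
        drag = -0.5 * rho * cd_area * speed * vel / mass   # quadratic drag
        vel += (drag + np.array([0.0, 0.0, -g])) * dt
        pos += vel * dt
        traj.append(pos.copy())
    return np.array(traj)

traj = fly([50.0, 0.0, 50.0])                  # 45-degree launch in the x-z plane
print(f"range {traj[-1, 0]:.1f} m after {len(traj) * 0.01:.1f} s")
```

Dropping the angular equations makes each step a handful of vector operations, which is why such reduced models are attractive once the rocket has trimmed out.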

  3. A 3D analysis of fore- and hindlimb motion during locomotion: comparison of overground and ladder walking in rats.