Science.gov

Sample records for 3d motion tracking

  1. Characterisation of walking loads by 3D inertial motion tracking

    NASA Astrophysics Data System (ADS)

    Van Nimmen, K.; Lombaert, G.; Jonkers, I.; De Roeck, G.; Van den Broeck, P.

    2014-09-01

The present contribution analyses the walking behaviour of pedestrians in situ by 3D inertial motion tracking. The technique is first tested in laboratory experiments with simultaneous registration of the ground reaction forces. The registered motion of the pedestrian allows for the identification of stride-to-stride variations, which are usually disregarded in the simulation of walking forces. Subsequently, motion tracking is used to register the walking behaviour of (groups of) pedestrians during in situ measurements on a footbridge. The calibrated numerical model of the structure and the information gathered using the motion tracking system enable detailed simulation of the step-by-step pedestrian-induced vibrations. Accounting for the in situ identified walking variability of the test subjects leads to significantly improved agreement between the measured and the simulated structural response.
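The stride-to-stride variability discussed above can be illustrated with a generic single-pedestrian vertical load model: a truncated Fourier series whose step duration is redrawn for each step. This is a hedged sketch; the harmonic coefficients, pacing rate, and pedestrian weight below are illustrative defaults, not the values identified in the paper.

```python
import numpy as np

def walking_force(n_steps=20, fs=200.0, weight=700.0, f_step=1.8,
                  alphas=(0.4, 0.1, 0.1), freq_cv=0.03, seed=0):
    """Vertical walking force as a truncated Fourier series, with
    stride-to-stride variability modelled by redrawing each step's
    duration (coefficient of variation freq_cv)."""
    rng = np.random.default_rng(seed)
    step_T = (1.0 / f_step) * (1.0 + freq_cv * rng.standard_normal(n_steps))
    # Instantaneous phase grows by exactly 1 over each (variable) step.
    t_edges = np.concatenate([[0.0], np.cumsum(step_T)])
    t = np.arange(0.0, t_edges[-1], 1.0 / fs)
    phase = np.interp(t, t_edges, np.arange(n_steps + 1))
    force = weight * (1.0 + sum(a * np.sin(2 * np.pi * (k + 1) * phase)
                                for k, a in enumerate(alphas)))
    return t, force
```

Setting `freq_cv=0.0` recovers the perfectly periodic load usually assumed in walking-force simulations, so the parameter isolates the variability effect the study quantifies.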

  2. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

In the 1990s, NASA pioneered virtual reality research. The concept had been around long before, but the technology to build a viable virtual reality system did not yet exist. Scientists had theories and ideas, and they knew the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind it, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer, giving the user the feeling of operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking is the cursor on a computer screen moving in correspondence with the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, is much more complex, however. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly the training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvers in accurate simulations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  3. Tracking 3-D body motion for docking and robot control

    NASA Technical Reports Server (NTRS)

    Donath, M.; Sorensen, B.; Yang, G. B.; Starr, R.

    1987-01-01

An advanced method of tracking the three-dimensional motion of bodies has been developed. This system has the potential to dynamically characterize machine and other structural motion, even in the presence of structural flexibility, thus facilitating closed-loop structural motion control. The system's operation is based on the concept that the intersection of three planes defines a point. Three rotating planes of laser light, fixed and moving photovoltaic diode targets, and a pipelined architecture of analog and digital electronics are used to locate multiple targets, whose number is limited only by available computer memory. Data collection rates are a function of the laser scan rotation speed and are currently selectable up to 480 Hz. The tested performance of a preliminary prototype designed for 0.1-inch accuracy (for tracking human motion) at a 480 Hz data rate includes a worst-case resolution of 0.8 mm (0.03 in), a repeatability of plus or minus 0.635 mm (plus or minus 0.025 in), and an absolute accuracy of plus or minus 2.0 mm (plus or minus 0.08 in) within an eight-cubic-meter volume, with all results applicable at the 95 percent confidence level along each coordinate direction. The full six degrees of freedom of a body can be computed by attaching three or more target detectors to the body of interest.
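The core principle above, that the intersection of three planes defines a point, reduces to solving a 3×3 linear system. A minimal sketch (not the prototype's pipelined electronics, just the geometry):

```python
import numpy as np

def plane_intersection(normals, offsets):
    """Point where three non-parallel planes n_i . x = d_i meet:
    stack the normals as rows and solve the 3x3 linear system."""
    N = np.asarray(normals, dtype=float)   # 3x3, one normal per row
    d = np.asarray(offsets, dtype=float)   # the three plane offsets
    return np.linalg.solve(N, d)

# Three orthogonal planes x = 1, y = 2, z = 3 meet at (1, 2, 3).
p = plane_intersection(np.eye(3), [1.0, 2.0, 3.0])
```

If any two planes are parallel the matrix is singular and no unique intersection exists, which is why the laser planes are swept at distinct orientations.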

  4. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    DTIC Science & Technology

    2015-06-01

Master's thesis, Naval Postgraduate School, Monterey, California (approved for public release; distribution is unlimited). The hardware's external structure was printed using the Fortus 400mc 3D rapid-prototyping printer of the NPS Space Systems Academic Group, while the internal structure is made of aluminum.

  5. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed under fluoroscopic image guidance, is becoming increasingly important. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 mm +/- 0.4 mm and an average 3-D tracking error of 0.8 mm +/- 0.5 mm. These results demonstrate that model-based motion compensation based on 2-D/3-D registration is both feasible and accurate.
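Bi-plane tracking of this kind ultimately rests on triangulating a 3-D position from two viewing directions. A minimal stand-in for the paper's 2-D/3-D registration (assuming known, non-parallel ray origins and directions) is the midpoint of the common perpendicular between the two back-projected rays:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Closest point between two viewing rays o + t*d (midpoint of
    the common perpendicular). Assumes the rays are not parallel."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimising |(o1 + t1*d1) - (o2 + t2*d2)|^2.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

With noisy detections the two rays rarely intersect exactly; the midpoint minimises the distance to both, which is one reason bi-plane geometry tolerates small 2-D detection errors.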

  6. Handling Motion-Blur in 3D Tracking and Rendering for Augmented Reality.

    PubMed

    Park, Youngmin; Lepetit, Vincent; Woo, Woontack

    2012-09-01

The contribution of this paper is twofold. First, we show how to extend the ESM algorithm to handle motion blur in 3D object tracking. ESM is a powerful algorithm for template-matching-based tracking, but it can fail under motion blur. We introduce an image formation model that explicitly considers the possibility of blur, which results in a generalization of the original ESM algorithm. This allows the tracker to converge faster, more accurately, and more robustly, even under large amounts of blur. Our second contribution is an efficient method for rendering virtual objects under the estimated motion blur. It renders two images of the object under 3D perspective and warps them to create many intermediate images. By fusing these images we obtain a final image of the virtual objects blurred consistently with the captured image. Because warping is much faster than 3D rendering, we can create realistically blurred images at very low computational cost.
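The render-warp-fuse idea in the second contribution can be sketched in miniature: blur is approximated by averaging several intermediate warps of a rendered image. Here the warp is reduced to an integer 2-D translation, a drastic simplification of the paper's perspective warps, to show only the fusion step:

```python
import numpy as np

def blur_by_fusion(img, shift, n_inter=8):
    """Approximate motion blur by warping an image to poses
    interpolated between start and end (here a pure horizontal
    translation) and averaging the warped copies."""
    h, w = img.shape
    acc = np.zeros_like(img, dtype=float)
    for k in range(n_inter):
        dx = shift * k / (n_inter - 1)
        # Column lookup: output column j samples input column j - dx.
        x = np.clip(np.arange(w) - int(round(dx)), 0, w - 1)
        acc += img[:, x]
    return acc / n_inter
```

An impulse smeared this way spreads into a streak of length `shift`, matching the intuition that warping plus averaging is far cheaper than re-rendering the 3-D scene once per intermediate pose.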

  7. Structured light 3D tracking system for measuring motions in PET brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Jørgensen, Morten R.; Paulsen, Rasmus R.; Højgaard, Liselotte; Roed, Bjarne; Larsen, Rasmus

    2010-02-01

Patient motion during scanning deteriorates image quality, especially for high-resolution PET scanners. A new 3D head tracking system for motion correction in high-resolution PET brain imaging is proposed and demonstrated. A prototype tracking system based on structured light, with a DLP projector and a CCD camera, is set up on a model of the High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo vision procedure in which the projector is treated as a camera. Additionally, the surface reconstructions are corrected for the non-linear projector output prior to image capture. The results are convincing and represent a first step toward a fully automated tracking system for measuring head motion in PET imaging.
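Phase-shifting methods such as PSI recover a wrapped phase map from a handful of shifted fringe images. A standard four-step variant (a textbook formula, not necessarily the exact scheme used in this system) assumes fringes of the form I_k = A + B·cos(φ + (k−1)·π/2):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by pi/2 each:
    I4 - I2 = 2B sin(phi), I1 - I3 = 2B cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)
```

The recovered phase is wrapped to (−π, π]; converting it to surface height requires phase unwrapping plus the projector-camera triangulation geometry set up during calibration.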

  8. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

Automated, accurate capture of an object's spatial motion is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the acquired data, together with convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems offer a set of advantages: high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four machine vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for high accuracy of the 3D measurements. A set of algorithms, both for detecting, identifying, and tracking similar targets and for markerless object motion capture, has been developed and tested. Evaluation of the algorithms shows high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  9. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. We discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean-squared-displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies, and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems than global measurements. Finite-size effects of the tracking region on measured mean squared displacements are also discussed. These methods were crucial for measuring the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions, as well as the diffusivity parallel to the channel axis, under conditions in which we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is also suitable for tracking fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
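The MSD-vs-time analysis above rests on a simple estimator: for a given lag, the mean squared displacement and the implied diffusion coefficient D = MSD/(2·t_lag) per dimension. A 1-D sketch (the article's analysis is local in space and three-dimensional):

```python
import numpy as np

def local_msd(track, lag, fs):
    """Mean squared displacement of a 1-D track at a single lag
    (in samples), and the implied diffusion coefficient
    D = MSD / (2 * t_lag) for a frame rate fs."""
    dt = lag / fs
    disp = track[lag:] - track[:-lag]
    msd = np.mean(disp ** 2)
    return msd, msd / (2.0 * dt)
```

Restricting `track` to the frames a particle spends inside one spatial bin turns this into the local estimator the article uses to map heterogeneous diffusivity.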

  10. Real-time 3D motion tracking for small animal brain PET

    NASA Astrophysics Data System (ADS)

    Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.

    2008-05-01

    High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.
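Rigid-body motion correction from tracked markers requires the least-squares rotation and translation between two marker configurations. A standard solution is the Kabsch/SVD method; the abstract does not specify the pose algorithm used, so this is a generic sketch:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion (Kabsch algorithm): rotation R and
    translation t such that Q ~ P @ R.T + t, from matched 3-D marker
    positions stored as rows of P and Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    # Reflection guard: force det(R) = +1.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = Q.mean(0) - P.mean(0) @ R.T
    return R, t
```

Three non-collinear markers suffice for a unique pose, which is consistent with the small lightweight marker assemblies described above.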

  11. Automated 3D motion tracking using Gabor filter bank, robust point matching, and deformable models.

    PubMed

    Chen, Ting; Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2010-01-01

Tagged magnetic resonance imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and to detect regional loss of heart function. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the robust point matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of 1) through-plane motion and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short- and long-axis image planes can be combined to drive a 3D deformable model, using the moving least squares method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method performs well on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group.
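The first step, tag detection with a Gabor filter bank, uses band-pass filters tuned to the tag spacing and orientation. A minimal complex Gabor kernel (illustrative parameters, not the paper's filter bank design) looks like:

```python
import numpy as np

def gabor_kernel(freq, theta, sigma, size=15):
    """2-D complex Gabor filter: a Gaussian envelope times a plane
    wave of spatial frequency `freq` along orientation `theta` --
    a band-pass filter for line patterns of known spacing."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return env * np.exp(2j * np.pi * freq * xr)
```

The local phase the paper analyses is the argument of the complex filter response; a bank is formed by sampling `freq` and `theta`, and tag intersections show up where responses at the two grid orientations are both strong.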

  12. 3D tracking the Brownian motion of colloidal particles using digital holographic microscopy and joint reconstruction.

    PubMed

    Verrier, Nicolas; Fournier, Corinne; Fournel, Thierry

    2015-06-01

In-line digital holography is a valuable tool for sizing, locating, and tracking micro- or nano-objects in a volume. When a parametric imaging model is available, inverse problem approaches provide a straightforward estimate of the object parameters by fitting data with the model, thereby allowing accurate reconstruction. As recently proposed and demonstrated, combining pixel super-resolution techniques with inverse problem approaches improves the estimation of particle size and 3D position. Here, we demonstrate the accurate tracking of colloidal particles in Brownian motion. Particle size and 3D position are jointly optimized from video holograms acquired with a digital holographic microscopy setup based on a low-end microscope objective (×20, NA 0.5). Exploiting information redundancy makes it possible to characterize particles with a standard deviation of 15 nm in size and a theoretical resolution of 2 × 2 × 5 nm³ for position under an additive white Gaussian noise assumption.

  13. Comparison of 2D and 3D modeled tumor motion estimation/prediction for dynamic tumor tracking during arc radiotherapy.

    PubMed

    Liu, Wu; Ma, Xiangyu; Yan, Huagang; Chen, Zhe; Nath, Ravinder; Li, Haiyun

    2017-03-06

Many real-time imaging techniques have been developed to localize the target in 3D space or in the 2D beam's eye view (BEV) plane for intrafraction motion tracking in radiation therapy. With tracking system latency, 3D-modeled methods are expected to be more accurate even in terms of 2D BEV tracking error; no quantitative analysis, however, has been reported. In this study, we simulated co-planar arc deliveries using respiratory motion data acquired from 42 patients to quantitatively compare the accuracy of 2D BEV and 3D-modeled tracking in arc therapy and to determine whether 3D information is needed for motion tracking. We used our previously developed low-kV-dose adaptive MV-kV imaging and motion compensation framework as a representative 3D-modeled method. It optimizes the balance between additional kV imaging dose and 3D tracking accuracy and solves the MLC blockage issue. With simulated Gaussian marker detection errors (zero mean, 0.39 mm standard deviation) and tracking system latencies of ~155/310/460 ms, the mean percentages of time that the target moved >2 mm from the predicted 2D BEV position are 1.1%/4.0%/7.8% for 3D-modeled tracking and 1.3%/5.8%/11.6% for 2D-only tracking. The corresponding average BEV RMS errors are 0.67/0.90/1.13 mm and 0.79/1.10/1.37 mm. Compared with the 2D method, the 3D method reduced the average RMS unresolved motion along the beam direction from ~3 mm to ~1 mm, yielding on average only a <1% dosimetric advantage in the depth direction. Only for a small fraction of patients, and when tracking latency is long, did the 3D-modeled method show significant improvement in BEV tracking accuracy, indicating a potential dosimetric advantage; if the tracking latency is short (~150 ms or less), those improvements are limited. Therefore, 2D BEV tracking has sufficient targeting accuracy for most clinical cases. The 3D technique remains important, however, for solving the MLC blockage problem during 2D BEV tracking.
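The dominant role of latency can be illustrated with a toy simulation: a sinusoidally moving target tracked with a latency-delayed position estimate. The amplitude, period, and sampling rate below are illustrative, not the patient data or the prediction model used in the study:

```python
import numpy as np

def bev_error_with_latency(latency_s, fs=25.0, amp=5.0, period=4.0,
                           duration=60.0):
    """Fraction of time a sinusoidally moving target (amplitude amp
    in mm) sits > 2 mm from its latency-delayed reported position --
    a toy illustration of how latency drives tracking error."""
    t = np.arange(0.0, duration, 1.0 / fs)
    pos = amp * np.sin(2 * np.pi * t / period)
    lag = int(round(latency_s * fs))
    err = np.abs(pos[lag:] - pos[:-lag or None])
    return float(np.mean(err > 2.0))
```

With a 5 mm, 4 s breathing-like motion, a ~150 ms delay never exceeds 2 mm while a ~460 ms delay does for a large fraction of the cycle, mirroring the study's finding that short-latency 2D tracking is usually sufficient.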

  14. Infrared tomographic PIV and 3D motion tracking system applied to aquatic predator-prey interaction

    NASA Astrophysics Data System (ADS)

    Adhikari, Deepak; Longmire, Ellen K.

    2013-02-01

    Infrared tomographic PIV and 3D motion tracking are combined to measure evolving volumetric velocity fields and organism trajectories during aquatic predator-prey interactions. The technique was used to study zebrafish foraging on both non-evasive and evasive prey species. Measurement volumes of 22.5 mm × 10.5 mm × 12 mm were reconstructed from images captured on a set of four high-speed cameras. To obtain accurate fluid velocity vectors within each volume, fish were first masked out using an automated visual hull method. Fish and prey locations were identified independently from the same image sets and tracked separately within the measurement volume. Experiments demonstrated that fish were not influenced by the infrared laser illumination or the tracer particles. Results showed that the zebrafish used different strategies, suction and ram feeding, for successful capture of non-evasive and evasive prey, respectively. The two strategies yielded different variations in fluid velocity between the fish mouth and the prey. In general, the results suggest that the local flow field, the direction of prey locomotion with respect to the predator and the relative accelerations and speeds of the predator and prey may all be significant in determining predation success.

  15. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
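The merging idea above can be sketched as a tiny server-side abstraction: each tracking system reports named objects with timestamps, and clients query one unified interface that always serves the freshest report. The class and method names are hypothetical; the paper does not publish its API:

```python
class MergedTracker:
    """Toy MetaTracker-style front end: several tracking systems
    report named objects; queries return the freshest report per
    name, regardless of which system produced it."""

    def __init__(self):
        self._latest = {}  # name -> (timestamp, position)

    def report(self, source, name, timestamp, position):
        # Keep only the most recent report for each named object.
        current = self._latest.get(name)
        if current is None or timestamp > current[0]:
            self._latest[name] = (timestamp, position)

    def query(self, name):
        entry = self._latest.get(name)
        return None if entry is None else entry[1]
```

Because `query` is indifferent to `source`, an object walking out of one system's coverage and into another's transitions seamlessly, which is the behaviour the paper describes.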

  16. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models, and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations.

  17. DLP technology application: 3D head tracking and motion correction in medical brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Wilm, Jakob; Paulsen, Rasmus R.; Højgaard, Liselotte; Larsen, Rasmus

    2014-03-01

In this paper we present a novel sensing system, robust near-infrared structured light scanning (NIRSL), for three-dimensional human model scanning. Human model scanning has long been a challenging task owing to the variety of hair and clothing appearances and to body motion. Previous structured light scanning methods typically emit visible coded light patterns onto static, opaque objects to establish correspondence between a projector and a camera for triangulation. The success of these methods relies on scanning objects whose surfaces reflect visible light well, such as plaster or light-colored cloth. For human model scanning, by contrast, conventional methods suffer from a low signal-to-noise ratio caused by the low contrast of visible light over the human body. The proposed robust NIRSL, implemented with near-infrared light, is capable of recovering dark surfaces, such as hair, dark jeans, and black shoes, under visible illumination. Moreover, a successful structured light scan relies on the assumption that the subject is static during scanning; because of natural body motion, this assumption is difficult to maintain when scanning human models. The proposed sensing system, by utilizing the new near-infrared-capable high-speed LightCrafter DLP projector, is robust to motion and provides an accurate, high-resolution three-dimensional point cloud, making the system more efficient and robust for human model reconstruction. Experimental results demonstrate that our system is effective and efficient at scanning real human models with dark hair, jeans, and shoes, is robust to body motion, and produces accurate, high-resolution 3D point clouds.

  18. Atmospheric Motion Vectors from INSAT-3D: Initial quality assessment and its impact on track forecast of cyclonic storm NANAUK

    NASA Astrophysics Data System (ADS)

    Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham

    2016-03-01

The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder, and is placed at 82°E over the Indian Ocean region. Advances in the retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for better understanding of tropical atmospheric processes over this region. The retrieval technique and accuracy of one such parameter, Atmospheric Motion Vectors (AMVs), have improved significantly with the availability of higher-spatial-resolution data and additional spectral channels in the INSAT-3D imager. The present work briefly describes the INSAT-3D data and the AMV derivation process, and presents an initial quality assessment of INSAT-3D AMVs for a six-month period from 01 February 2014 to 31 July 2014 against other independent observations: (i) Meteosat-7 AMVs available over this region, (ii) in-situ radiosonde wind measurements, (iii) cloud-tracked winds from the Multi-angle Imaging SpectroRadiometer (MISR), and (iv) numerical model analyses. The study shows that the quality of the newly derived INSAT-3D AMVs is comparable with that of the two existing versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, INSAT-3D AMVs were assimilated in the Weather Research and Forecasting (WRF) model, and the assimilation of the newly derived AMVs helped reduce the track forecast errors for the recent cyclonic storm NANAUK over the Arabian Sea. Although the present study is limited to one case, it provides some guidance to operational agencies on implementing this new AMV dataset for future applications in Numerical Weather Prediction (NWP) over the South Asia region.

  19. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector with eye-corner, eye-center, and iso-center cues to improve pupil detection, and use optical flow information for eye tracking; the resulting system robustly combines eye detection with optical-flow-based image tracking. In addition, we incorporate orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on public video sequences as well as videos acquired directly from a mobile phone.

  20. The 3D Tele Motion Tracking for the Orthodontic Facial Analysis

    PubMed Central

    Nota, Alessandro; Marchetti, Enrico; Padricelli, Giuseppe; Marzo, Giuseppe

    2016-01-01

Aim. This study aimed to evaluate the reliability of 3D-TMT, previously used only for dynamic testing, in a static cephalometric evaluation. Material and Method. A group of 40 patients (20 males and 20 females; mean age 14.2 ± 1.2 years; range 12–18 years) was included in the study. For each subject, the measurements obtained by the 3D-TMT cephalometric analysis were compared with those from a conventional frontal cephalometric analysis. Nine passive reflective markers were positioned on the facial skin to capture the patient's profile. From the acquisition of these points, the corresponding planes for a three-dimensional posterior-anterior cephalometric analysis were derived. Results. Compared with traditional posterior-anterior cephalometric analysis, the 3D-TMT values were slightly, but statistically significantly, higher than the values measured on radiographs; nevertheless, their correlation was very high. Conclusion. The values obtained using the 3D-TMT analysis correlated with the cephalometric analysis, with small but statistically significant differences. The Dahlberg errors were always lower than the mean difference between the 2D and 3D measurements. During the clinical monitoring of a patient, a clinician should always use the same method, to avoid comparing different millimetre magnitudes. PMID:28044130

  1. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose.

    PubMed

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-21

A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that using our hybrid approach, a smaller than 1.2 mm or 1.5 mm root-mean-square tracking error can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required.
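The "shortest distance to the first principal component line" step can be sketched directly: fit the principal direction of the previous 3-D trajectory points, then take the point on the MV projection ray nearest that line. This is a geometric sketch under the assumption of a known back-projected ray, not the authors' full framework (no correlation update or auto-regressive prediction):

```python
import numpy as np

def most_likely_position(traj, ray_o, ray_d):
    """Point on the MV projection ray (ray_o + t * ray_d) closest to
    the first principal-component line of previous trajectory points
    (rows of traj)."""
    mean = traj.mean(0)
    _, _, Vt = np.linalg.svd(traj - mean)
    u = Vt[0]                          # principal direction of motion
    d = ray_d / np.linalg.norm(ray_d)
    # Closest approach between line mean + s*u and ray ray_o + t*d.
    w = ray_o - mean
    a, b, c = u @ u, u @ d, d @ d
    e, f = u @ w, d @ w
    denom = a * c - b * b
    t = 0.0 if abs(denom) < 1e-12 else (b * e - a * f) / denom
    return ray_o + t * d
```

When respiratory motion is nearly one-dimensional, the principal line is a tight prior, so a single MV projection pins down the remaining depth coordinate, which is what lets the kV imager stay off most of the time.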

  2. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose

    NASA Astrophysics Data System (ADS)

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-01

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that using our hybrid approach, a smaller than 1.2 mm or 1.5 mm root-mean-square tracking error can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. 
Furthermore, no additional hardware is required.
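The adaptive auto-regressive prediction step described above can be sketched with a least-squares AR(2) fit to a sliding window of marker positions (a minimal Python illustration; the model order, window length, and adaptation scheme are assumptions, not the authors' implementation):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

def ar2_predict(window, horizon=1):
    """Fit x[t] = c1*x[t-1] + c2*x[t-2] to a sliding window by
    least squares, then extrapolate `horizon` samples ahead."""
    rows = [(window[t - 1], window[t - 2], window[t])
            for t in range(2, len(window))]
    # Normal equations for the two AR coefficients.
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * r[2] for r in rows)
    b2 = sum(r[1] * r[2] for r in rows)
    c1, c2 = solve_2x2(s11, s12, s12, s22, b1, b2)
    hist = list(window)
    for _ in range(horizon):
        hist.append(c1 * hist[-1] + c2 * hist[-2])
    return hist[-1]

# A marker coordinate drifting linearly: the AR(2) fit recovers the
# ramp exactly (c1 = 2, c2 = -1) and extrapolates it two samples ahead.
trace = [0.5 * t for t in range(10)]
print(ar2_predict(trace, horizon=2))  # -> 5.5
```

In a real system the window would slide with each new sample and the horizon would correspond to the 310 ms or 460 ms latency expressed in sample counts.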

  3. Combining 3D tracking and surgical instrumentation to determine the stiffness of spinal motion segments: a validation study.

    PubMed

    Reutlinger, C; Gédet, P; Büchler, P; Kowal, J; Rudolph, T; Burger, J; Scheffler, K; Hasler, C

    2011-04-01

The spine is a complex structure that provides motion in three directions: flexion-extension, lateral bending and axial rotation. So far, the investigation of the mechanical and kinematic behavior of the basic unit of the spine, a motion segment, has predominantly been a domain of in vitro experiments on spinal loading simulators. Most existing approaches to measuring spinal stiffness intraoperatively in an in vivo environment use a distractor. However, these concepts usually assume planar loading and motion. The objective of our study was to develop and validate an apparatus that allows intraoperative in vivo measurements to determine both the applied force and the resulting motion in three-dimensional space. The proposed setup combines force measurement with an instrumented distractor and motion tracking with an optoelectronic system. As the orientation of the applied force and the three-dimensional motion are known, not only force-displacement but also moment-angle relations can be determined. The validation was performed using three cadaveric lumbar ovine spines. The lateral bending stiffness of two motion segments per specimen was determined with the proposed concept and compared with the stiffness acquired on a spinal loading simulator, which was considered the gold standard. The mean values of the stiffness computed with the proposed concept were within ±15% of the data obtained with the spinal loading simulator under applied loads of less than 5 Nm.
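Given synchronized moment and angle samples, a scalar bending stiffness can be estimated as the slope of the moment-angle relation; a minimal least-squares sketch (illustrative only, with synthetic numbers, not the authors' processing pipeline):

```python
def stiffness(angles_deg, moments_nm):
    """Least-squares slope of the moment-angle relation (Nm/deg)."""
    n = len(angles_deg)
    mx = sum(angles_deg) / n
    my = sum(moments_nm) / n
    sxx = sum((x - mx) ** 2 for x in angles_deg)
    sxy = sum((x - mx) * (y - my) for x, y in zip(angles_deg, moments_nm))
    return sxy / sxx

# Synthetic lateral-bending data: 0.8 Nm/deg plus a constant offset.
angles = [0.0, 1.0, 2.0, 3.0, 4.0]
moments = [0.1 + 0.8 * a for a in angles]
print(round(stiffness(angles, moments), 3))  # -> 0.8
```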

  4. The birth of a dinosaur footprint: Subsurface 3D motion reconstruction and discrete element simulation reveal track ontogeny

    PubMed Central

    2014-01-01

    Locomotion over deformable substrates is a common occurrence in nature. Footprints represent sedimentary distortions that provide anatomical, functional, and behavioral insights into trackmaker biology. The interpretation of such evidence can be challenging, however, particularly for fossil tracks recovered at bedding planes below the originally exposed surface. Even in living animals, the complex dynamics that give rise to footprint morphology are obscured by both foot and sediment opacity, which conceals animal–substrate and substrate–substrate interactions. We used X-ray reconstruction of moving morphology (XROMM) to image and animate the hind limb skeleton of a chicken-like bird traversing a dry, granular material. Foot movement differed significantly from walking on solid ground; the longest toe penetrated to a depth of ∼5 cm, reaching an angle of 30° below horizontal before slipping backward on withdrawal. The 3D kinematic data were integrated into a validated substrate simulation using the discrete element method (DEM) to create a quantitative model of limb-induced substrate deformation. Simulation revealed that despite sediment collapse yielding poor quality tracks at the air–substrate interface, subsurface displacements maintain a high level of organization owing to grain–grain support. Splitting the substrate volume along “virtual bedding planes” exposed prints that more closely resembled the foot and could easily be mistaken for shallow tracks. DEM data elucidate how highly localized deformations associated with foot entry and exit generate specific features in the final tracks, a temporal sequence that we term “track ontogeny.” This combination of methodologies fosters a synthesis between the surface/layer-based perspective prevalent in paleontology and the particle/volume-based perspective essential for a mechanistic understanding of sediment redistribution during track formation. PMID:25489092

  5. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
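The point-to-line-segment distance underlying the proposed similarity metric can be written compactly; a generic 2-D geometric sketch (the paper applies it between image features and projected 3-D model edges):

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to line segment ab in 2-D."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                       # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Project p onto the supporting line, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

print(point_segment_dist((1.0, 1.0), (0.0, 0.0), (2.0, 0.0)))  # -> 1.0
print(point_segment_dist((3.0, 4.0), (0.0, 0.0), (0.0, 1.0)))  # -> ~4.24
```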

  6. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
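The accuracy (RMSE) and precision (SD) figures reported above can be computed from per-frame tracking errors; a minimal sketch with made-up error values:

```python
import math

def rmse(errors):
    """Root-mean-square of the tracking errors (accuracy)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def precision_sd(errors):
    """Standard deviation of the tracking errors (precision)."""
    m = sum(errors) / len(errors)
    return math.sqrt(sum((e - m) ** 2 for e in errors) / len(errors))

# Hypothetical errors in mm between recovered and true marker positions.
errs = [0.1, -0.2, 0.15, -0.05, 0.2]
print(round(rmse(errs), 4), round(precision_sd(errs), 4))
```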

  7. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes.

  8. Intrinsic Feature Motion Tracking

    SciTech Connect

    Goddard, Jr., James S.

    2013-03-19

Subject motion during 3D medical scanning can cause blurring and artifacts in the 3D images, resulting in either rescans or poor diagnosis. Anesthesia or physical restraints may be used to eliminate motion but are undesirable and can affect results. This software measures the six-degree-of-freedom 3D motion of the subject during the scan under a rigidity assumption, using only the intrinsic features present on the subject area being monitored. This movement over time can then be used to correct the scan data, removing the blur and artifacts. The software acquires images from external cameras or images stored on disk for processing. The images are from two or three calibrated cameras in a stereo arrangement. Algorithms extract and track the features over time and calculate position and orientation changes relative to an initial position. The output is the 3D position and orientation change measured at each image.
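The pose-change computation from tracked features can be illustrated in 2-D with a closed-form Procrustes fit (a simplified planar sketch with synthetic points; the software itself solves the full six-degree-of-freedom 3D problem):

```python
import math

def rigid_2d(src, dst):
    """Recover the rotation angle and translation mapping src -> dst
    for rigidly moving 2-D points (least-squares, Procrustes-style)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    dot = cross = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - csx, y - csy, u - cdx, v - cdy
        dot += x * u + y * v
        cross += x * v - y * u
    theta = math.atan2(cross, dot)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

# Features rotated by 30 degrees and shifted: the transform is recovered.
ang = math.radians(30.0)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (1.5, 1.0)]
dst = [(x * math.cos(ang) - y * math.sin(ang) + 0.3,
        x * math.sin(ang) + y * math.cos(ang) - 0.1) for x, y in src]
theta, tx, ty = rigid_2d(src, dst)
print(round(math.degrees(theta), 2))  # -> 30.0
```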

  9. Automatic respiration tracking for radiotherapy using optical 3D camera

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact and no radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight change, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within a treatment session due to voluntary/involuntary physiologic processes (e.g., respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. Presently, a viable solution is lacking for in vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas of the 3D surface to track surface motion. The configuration of these markers or areas may change with time, which makes quantifying and interpreting the respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality reduction technique, PCA (principal component analysis). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigenspace spanned by eigenvectors).
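The PCA decomposition can be sketched with a simple power iteration that extracts the dominant motion pattern from a surface-point time series (illustrative only, using a toy three-point surface; a clinical system would use a full eigendecomposition):

```python
import math

def dominant_pattern(frames):
    """First principal component of mean-centered observations
    (rows = time samples, columns = surface points), computed by
    power iteration on the covariance matrix."""
    n, d = len(frames), len(frames[0])
    mean = [sum(f[j] for f in frames) / n for j in range(d)]
    x = [[f[j] - mean[j] for j in range(d)] for f in frames]
    v = [1.0] * d
    for _ in range(200):
        # w = (X^T X) v without forming the covariance explicitly.
        proj = [sum(row[j] * v[j] for j in range(d)) for row in x]
        w = [sum(proj[i] * x[i][j] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Two surface points moving together (breathing) and one static point:
# the dominant pattern loads equally on the two moving points.
frames = [[math.sin(0.4 * t), math.sin(0.4 * t), 0.0] for t in range(50)]
pc = dominant_pattern(frames)
print([round(abs(c), 3) for c in pc])  # -> [0.707, 0.707, 0.0]
```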

  10. Quantitative Evaluation of 3D Mouse Behaviors and Motor Function in the Open-Field after Spinal Cord Injury Using Markerless Motion Tracking

    PubMed Central

    Sheets, Alison L.; Lai, Po-Lun; Fisher, Lesley C.; Basso, D. Michele

    2013-01-01

Thousands of scientists strive to identify cellular mechanisms that could lead to breakthroughs in developing ameliorative treatments for debilitating neural and muscular conditions such as spinal cord injury (SCI). Most studies use rodent models to test hypotheses, and these are all limited by the methods available to evaluate animal motor function. This study’s goal was to develop a behavioral and locomotor assessment system in a murine model of SCI that enables quantitative kinematic measurements to be made automatically in the open-field by applying markerless motion tracking approaches. Three-dimensional movements of eight naïve, five mild, five moderate, and four severe SCI mice were recorded using 10 cameras (100 Hz). Background subtraction was used in each video frame to identify the animal’s silhouette, and the 3D shape at each time was reconstructed using shape-from-silhouette. The reconstructed volume was divided into front and back halves using k-means clustering. The animal’s front Center of Volume (CoV) height and whole-body CoV speed were calculated and used to automatically classify animal behaviors including directed locomotion, exploratory locomotion, meandering, standing, and rearing. More detailed analyses of CoV height, speed, and lateral deviation during directed locomotion revealed behavioral differences and functional impairments in animals with mild, moderate, and severe SCI when compared with naïve animals. Naïve animals displayed the widest variety of behaviors including rearing and crossing the center of the open-field, the fastest speeds, and tallest rear CoV heights. SCI reduced the range of behaviors, and decreased speed (r = .70, p < .005) and rear CoV height (r = .65, p < .01) were significantly correlated with greater lesion size. This markerless tracking approach is a first step toward fundamentally changing how rodent movement studies are conducted. By providing scientists with sensitive, quantitative measurement
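The k-means step that divides the reconstructed volume into front and back halves can be sketched in one dimension along the body axis (a simplified stand-in for the paper's clustering of 3D voxels):

```python
def kmeans_1d(xs, iters=20):
    """Two-cluster k-means on scalar coordinates; returns labels."""
    c0, c1 = min(xs), max(xs)          # spread-out initial centroids
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [0 if abs(x - c0) <= abs(x - c1) else 1 for x in xs]
        g0 = [x for x, l in zip(xs, labels) if l == 0]
        g1 = [x for x, l in zip(xs, labels) if l == 1]
        if g0: c0 = sum(g0) / len(g0)
        if g1: c1 = sum(g1) / len(g1)
    return labels

# Voxel x-coordinates of a reconstructed animal: front vs. back half.
xs = [0.1, 0.3, 0.2, 0.25, 5.1, 5.3, 5.0, 5.2]
print(kmeans_1d(xs))  # -> [0, 0, 0, 0, 1, 1, 1, 1]
```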

  11. 3D hand tracking using Kalman filter in depth space

    NASA Astrophysics Data System (ADS)

    Park, Sangheon; Yu, Sunjin; Kim, Joongrock; Kim, Sungjin; Lee, Sangyoun

    2012-12-01

Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using a Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with that of a visual-based method. Experimental results show that the proposed method outperforms the visual-based method.
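A minimal constant-velocity Kalman filter for one coordinate of the tracked hand is sketched below (illustrative only; the article's state model and noise parameters are not specified here, so `q` and `r` are assumed values):

```python
def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    """1-D constant-velocity Kalman filter over position measurements zs.
    State is [position, velocity]; returns the filtered positions."""
    x, v = zs[0], 0.0
    # Covariance entries of P = [[p00, p01], [p01, p11]].
    p00, p01, p11 = 1.0, 0.0, 1.0
    out = []
    for z in zs[1:]:
        # Predict with F = [[1, dt], [0, 1]] and process noise q.
        x += dt * v
        p00 += dt * (2 * p01 + dt * p11) + q
        p01 += dt * p11
        p11 += q
        # Update with a position measurement z (H = [1, 0]).
        s = p00 + r
        k0, k1 = p00 / s, p01 / s
        innov = z - x
        x += k0 * innov
        v += k1 * innov
        p00 *= (1 - k0)
        p11 -= k1 * p01
        p01 *= (1 - k0)
        out.append(x)
    return out

# A hand moving at 1 unit/frame with small alternating measurement noise.
zs = [t + (0.1 if t % 2 else -0.1) for t in range(20)]
est = kalman_track(zs)
print(round(est[-1], 2))
```

The filter settles onto the underlying ramp, so the final estimate lands close to the true position of 19.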

  12. Accuracy and Precision of a Custom Camera-Based System for 2-D and 3-D Motion Tracking during Speech and Nonspeech Motor Tasks

    ERIC Educational Resources Information Center

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable…

  13. Tracking earthquake source evolution in 3-D

    NASA Astrophysics Data System (ADS)

    Kennett, B. L. N.; Gorbatov, A.; Spiliopoulos, S.

    2014-08-01

Starting from the hypocentre, the point of initiation of seismic energy, we seek to estimate the subsequent trajectory of the points of emission of high-frequency energy in 3-D, which we term the 'evocentres'. We track these evocentres as a function of time by energy stacking for putative points on a 3-D grid around the hypocentre that is expanded as time progresses, selecting the location of maximum energy release as a function of time. The spatial resolution in the neighbourhood of a target point can be simply estimated by spatial mapping using the properties of isochrons from the stations. The mapping of a seismogram segment to space is by inverse slowness, and thus more distant stations have a broader spatial contribution. As in hypocentral estimation, the inclusion of a wide azimuthal distribution of stations significantly enhances 3-D capability. We illustrate this approach to tracking source evolution in 3-D by considering two major earthquakes, the 2007 Mw 8.1 Solomon Islands event that ruptured across a plate boundary and the 2013 Mw 8.3 event 610 km beneath the Sea of Okhotsk. In each case we are able to provide estimates of the evolution of high-frequency energy that tally well with alternative schemes, but also to provide information on the 3-D characteristics that is not available from backprojection from distant networks. We are able to demonstrate that the major characteristics of event rupture can be captured using just a few azimuthally distributed stations, which opens the opportunity for the approach to be used in a rapid mode immediately after a major event to provide guidance for, for example, tsunami warning for megathrust events.
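The energy-stacking idea (scan candidate source points, align each station record by its predicted travel time, and keep the point with maximal stacked energy) can be illustrated in one dimension (a toy sketch with a uniform velocity and impulse records, not the authors' 3-D implementation):

```python
def stack_energy(records, stations, grid, v=2.0, dt=1.0):
    """Return the grid point whose travel-time-aligned stack of
    station energy is largest."""
    best, best_e = None, -1.0
    for g in grid:
        e = 0.0
        for x, rec in zip(stations, records):
            idx = int(round(abs(g - x) / v / dt))  # predicted arrival sample
            if 0 <= idx < len(rec):
                e += rec[idx] ** 2
        if e > best_e:
            best, best_e = g, e
    return best

# Three stations; an impulse radiated from x = 4 with wave speed v = 2.
stations = [0.0, 6.0, 10.0]
src = 4.0
records = []
for x in stations:
    rec = [0.0] * 12
    rec[int(round(abs(src - x) / 2.0))] = 1.0
    records.append(rec)
print(stack_energy(records, stations, grid=[0, 2, 4, 6, 8, 10]))  # -> 4
```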

  14. Robust patella motion tracking using intensity-based 2D-3D registration on dynamic bi-plane fluoroscopy: towards quantitative assessment in MPFL reconstruction surgery

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu

    2016-03-01

The determination of the in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to registering three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual bone approach registering one bone at a time, each with optimization of a six-degrees-of-freedom (6DOF) parameter; 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation; 3) a simultaneous approach registering all the bones together (18DOF); and 4) a combination of the sequential and the simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach: less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation with 45 trials of randomly perturbed initialization showed that the sequential approach improved robustness significantly for patella registration (74% success rate) compared to the individual bone approach (34% success rate); femur and tibia-fibula registration had a 100% success rate with each approach.

  15. 3D Tracking via Shoe Sensing

    PubMed Central

    Li, Fangmin; Liu, Guo; Liu, Jian; Chen, Xiaochuang; Ma, Xiaolin

    2016-01-01

Most location-based services are based on a global positioning system (GPS), which only works well in outdoor environments. Compared to outdoor environments, indoor localization has created more buzz in recent years as people spend most of their time indoors, working at offices and shopping at malls, etc. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful due to the mobile devices’ random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing upstairs, a state classification is designed to distinguish walking status including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the vertical distance accumulation error. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy. PMID:27801839
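The short-time-energy gait extraction can be sketched as windowed energy plus threshold crossing (the window length and threshold here are illustrative, not the paper's calibrated values):

```python
def count_steps(signal, win=4, thresh=1.0):
    """Count bursts whose short-time energy crosses the threshold."""
    energies = [sum(s * s for s in signal[i:i + win])
                for i in range(0, len(signal) - win + 1, win)]
    steps, active = 0, False
    for e in energies:
        if e > thresh and not active:
            steps += 1          # rising edge = one detected step
        active = e > thresh
    return steps

# Synthetic accelerometer magnitude: three bursts between quiet spans.
quiet, burst = [0.1] * 8, [1.5, -1.2, 1.0, -0.8]
sig = quiet + burst + quiet + burst + quiet + burst + quiet
print(count_steps(sig))  # -> 3
```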

  16. Speeding up 3D speckle tracking using PatchMatch

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; O'Donnell, Matthew

    2016-03-01

Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) the inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand; 2) for typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
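The core PatchMatch loop (random initialization, neighbor propagation, shrinking random search) can be sketched on 1-D signals (a toy adaptation; the paper applies it to 3-D radio-frequency volumes):

```python
import random

def patchmatch_1d(a, b, w=3, iters=4, seed=7):
    """Approximate nearest-neighbor offsets from each window of `a`
    into `b` via PatchMatch. Returns (offsets, initial cost, final cost)."""
    rng = random.Random(seed)
    n, m = len(a) - w + 1, len(b) - w + 1

    def cost(i, o):
        return sum((a[i + k] - b[i + o + k]) ** 2 for k in range(w))

    off = [rng.randrange(m) - i for i in range(n)]      # random valid init
    start = sum(cost(i, off[i]) for i in range(n))

    def try_offset(i, o):
        if 0 <= i + o < m and cost(i, o) < cost(i, off[i]):
            off[i] = o

    for it in range(iters):
        scan = range(n) if it % 2 == 0 else range(n - 1, -1, -1)
        for i in scan:
            j = i - 1 if it % 2 == 0 else i + 1         # propagation
            if 0 <= j < n:
                try_offset(i, off[j])
            r = m                                       # shrinking random search
            while r >= 1:
                try_offset(i, off[i] + rng.randint(-r, r))
                r //= 2
    return off, start, sum(cost(i, off[i]) for i in range(n))

# `b` is `a` delayed by 4 samples, so the true offset field is 4.
src = random.Random(1)
a = [src.uniform(-1.0, 1.0) for _ in range(30)]
b = [0.0] * 4 + a
off, cost_init, cost_final = patchmatch_1d(a, b)
print(cost_final <= cost_init, sum(1 for o in off if o == 4))
```

Because candidate offsets are only accepted when they strictly lower the window cost, the stacked cost is non-increasing; with this exact-shift example, most offsets converge to 4 after a few alternating sweeps.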

  17. 3D Human Motion Editing and Synthesis: A Survey

    PubMed Central

    Wang, Xin; Chen, Qiudi; Wang, Wanliang

    2014-01-01

    The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395

  18. Motion estimation in the 3-D Gabor domain.

    PubMed

    Feng, Mu; Reed, Todd R

    2007-08-01

    Motion estimation methods can be broadly classified as being spatiotemporal or frequency domain in nature. The Gabor representation is an analysis framework providing localized frequency information. When applied to image sequences, the 3-D Gabor representation displays spatiotemporal/spatiotemporal-frequency (st/stf) information, enabling the application of robust frequency domain methods with adjustable spatiotemporal resolution. In this work, the 3-D Gabor representation is applied to motion analysis. We demonstrate that piecewise uniform translational motion can be estimated by using a uniform translation motion model in the st/stf domain. The resulting motion estimation method exhibits both good spatiotemporal resolution and substantial noise resistance compared to existing spatiotemporal methods. To form the basis of this model, we derive the signature of the translational motion in the 3-D Gabor domain. Finally, to obtain higher spatiotemporal resolution for more complex motions, a dense motion field estimation method is developed to find a motion estimate for every pixel in the sequence.
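The frequency-domain flavor of such estimators can be illustrated with 1-D phase correlation, which recovers a circular shift from the normalized cross-power spectrum (a generic sketch, not the Gabor-domain method itself):

```python
import cmath

def dft(x, sign=-1.0):
    """Naive DFT; sign=-1 is the forward transform, sign=+1 the
    (unscaled) inverse."""
    n = len(x)
    return [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def phase_correlation_shift(a, b):
    """Recover the circular shift s such that b[t] = a[(t - s) % n]."""
    fa, fb = dft(a), dft(b)
    cross = [x.conjugate() * y for x, y in zip(fa, fb)]   # cross-power spectrum
    cross = [c / abs(c) if abs(c) > 1e-12 else 0j for c in cross]
    corr = dft(cross, sign=+1.0)   # peak of the inverse transform marks the shift
    mags = [abs(c) for c in corr]
    return mags.index(max(mags))

a = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
b = [a[(t - 3) % len(a)] for t in range(len(a))]
print(phase_correlation_shift(a, b))  # -> 3
```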

  19. Motion Tracking System

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Integrated Sensors, Inc. (ISI), under NASA contract, developed a sensor system for controlling robot vehicles. This technology would enable a robot supply vehicle to automatically dock with Earth-orbiting satellites or the International Space Station. During the docking phase the ISI-developed sensor must sense the satellite's relative motion, then spin so the robot vehicle can adjust its motion to align with the satellite and slowly close until docking is completed. ISI used the sensing/tracking technology as the basis of its OPAD system, which simultaneously tracks an object's movement in six degrees of freedom. Applications include human limb motion analysis, assembly line position analysis and auto crash dummy motion analysis. The NASA technology is also the basis for Motion Analysis Workstation software, a package to simplify the video motion analysis process.

  20. Collective Motion of Mammalian Cell Cohorts in 3D

    PubMed Central

    Sharma, Yasha; Vargas, Diego A.; Pegoraro, Adrian F.; Lepzelter, David; Weitz, David A.; Zaman, Muhammad H

    2016-01-01

    Collective cell migration is ubiquitous in biology, from development to cancer; it occurs in complex systems comprised of heterogeneous cell types, signals and matrices, and requires large scale regulation in space and time. Understanding how cells achieve organized collective motility is crucial to addressing cellular and tissue function and disease progression. While current two-dimensional model systems recapitulate the dynamic properties of collective cell migration, quantitative three-dimensional equivalent model systems have proved elusive. To establish such a model system, we study cell collectives by tracking individuals within cell cohorts embedded in three dimensional collagen scaffolding. We develop a custom algorithm to quantify the temporal and spatial heterogeneity of motion in cell cohorts during motility events. In the absence of external driving agents, we show that these cohorts rotate in short bursts, <2 hours, and translate for up to 6 hours. We observe, track, and analyze three dimensional motion of cell cohorts composed of 3–31 cells, and pave a path toward understanding cell collectives in 3D as a complex emergent system. PMID:26549557

  1. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate.

    PubMed

    Trache, Tudor; Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-12-01

Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of the 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2D speckle tracking of triplane datasets showed results similar to those of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values.
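The correlation analysis between strain values and LVEF reduces to a Pearson coefficient; a minimal sketch with hypothetical numbers (not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical area-strain magnitudes vs. LVEF (%) in five subjects.
strain = [28.0, 25.0, 20.0, 15.0, 10.0]
lvef = [62.0, 58.0, 50.0, 41.0, 30.0]
print(round(pearson_r(strain, lvef), 3))
```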

  2. Electrically tunable lens speeds up 3D orbital tracking

    PubMed Central

    Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico

    2015-01-01

3D orbital particle tracking is a versatile and effective microscopy technique that allows following fast-moving fluorescent objects within living cells and reconstructing complex 3D shapes using laser scanning microscopes. We demonstrated notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing the commonly used piezoelectric stage with an Electrically Tunable Lens (ETL) that eliminates mechanical movement of the objective lens. This allowed tracking and reconstructing the shape of structures extending 500 microns in the axial direction. Using the ETL, we tracked at high speed fluorescently labeled genomic loci within the nucleus of living cells with an unprecedented temporal resolution of 8 ms using a 1.42 NA oil-immersion objective. The presented technology is cost-effective and allows easy upgrade of scanning microscopes for fast 3D orbital tracking. PMID:26114037

  3. Three-dimensional motion measurements using feature tracking.

    PubMed

    Kuo, Johnny; von Ramm, Olaf T

    2008-04-01

Feature tracking was developed to efficiently compute motion measurements from volumetric ultrasound images. Prior studies have demonstrated the motion magnitude accuracy and computation speed of feature tracking. However, previous feature tracking implementations were limited by performing their calculations in rectilinear coordinates. They also did not fully explore the three-dimensional (3-D) nature of volumetric image analysis or utilize the 3-D directional information from the tracking calculations. This study presents an improved feature tracking method which achieves further computation speed gains by performing all calculations in the native spherical coordinates of the 3-D ultrasound image. The novel method utilizes a statistical analysis of tracked directions of motion to achieve better rejection of false tracking matches. Results from in vitro tracking of a speckle target show that the new feature tracking method is significantly faster than correlation search and can accurately determine target motion magnitude and 3-D direction.
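The statistical rejection of false matches by direction consistency might be sketched as follows, shown in 2D for brevity and with a hypothetical 30° threshold (the paper's actual statistic may differ):

```python
import math

def reject_inconsistent(vectors, max_dev_deg=30.0):
    """Keep only motion vectors whose direction lies within max_dev_deg of
    the circular mean direction; a simplified 2D stand-in for the paper's
    statistical analysis of tracked 3-D directions."""
    angles = [math.atan2(vy, vx) for vx, vy in vectors]
    mean_dir = math.atan2(sum(map(math.sin, angles)),
                          sum(map(math.cos, angles)))
    keep = []
    for (vx, vy), a in zip(vectors, angles):
        # Wrapped angular deviation from the mean direction.
        dev = math.degrees(math.atan2(math.sin(a - mean_dir),
                                      math.cos(a - mean_dir)))
        if abs(dev) <= max_dev_deg:
            keep.append((vx, vy))
    return keep

vecs = [(1.0, 0.1), (0.9, -0.1), (1.1, 0.0), (-1.0, 0.8)]  # last is a false match
print(reject_inconsistent(vecs))  # → [(1.0, 0.1), (0.9, -0.1), (1.1, 0.0)]
```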

  4. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively.
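The first pre-processing step, projecting measurements onto small-scale polar grids, can be illustrated with a toy planar version (the cell sizes here are illustrative, not the paper's):

```python
import math
from collections import defaultdict

def to_polar_grid(points, r_step=2.0, az_step_deg=10.0):
    """Bin planar LiDAR returns into small (range, azimuth) polar grid
    cells; each cell would then carry its own motion state."""
    grid = defaultdict(list)
    for x, y in points:
        r = math.hypot(x, y)
        az = math.degrees(math.atan2(y, x)) % 360.0
        grid[(int(r // r_step), int(az // az_step_deg))].append((x, y))
    return grid

pts = [(1.0, 0.1), (1.1, 0.0), (10.0, 10.0)]
grid = to_polar_grid(pts)
print(len(grid))  # → 2 (the two nearby returns fall in the same cell)
```

Polar cells naturally match a scanning LiDAR's angular resolution, which is one reason such gridding is preferred over Cartesian bins near the sensor.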

  6. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how users gaze in 3D directions in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for Nvidia 3D Vision® for use with desktop stereoscopic displays. We propose an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
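A common geometric way to estimate the fixated 3D point is to intersect the two eye gaze rays at their closest approach; the sketch below shows this generic construction, not the optimized method proposed in the paper:

```python
def gaze_point_3d(p1, d1, p2, d2):
    """Midpoint of the closest approach between the left- and right-eye
    gaze rays (origin p, direction d): a generic estimate of the fixated
    virtual 3D point."""
    dot = lambda a, b: sum(u * v for u, v in zip(a, b))
    w0 = tuple(u - v for u, v in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * u for p, u in zip(p1, d1))  # closest point on ray 1
    q2 = tuple(p + s * u for p, u in zip(p2, d2))  # closest point on ray 2
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Eye centres 6 cm apart, both converging on a point 40 cm ahead.
fixation = gaze_point_3d((-3.0, 0.0, 0.0), (3.0, 0.0, 40.0),
                         (3.0, 0.0, 0.0), (-3.0, 0.0, 40.0))
print(fixation)  # → (0.0, 0.0, 40.0)
```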

  7. On the dynamics of jellyfish locomotion via 3D particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Piper, Matthew; Kim, Jin-Tae; Chamorro, Leonardo P.

    2016-11-01

    The dynamics of jellyfish (Aurelia aurita) locomotion is experimentally studied via 3D particle tracking velocimetry. 3D locations of the bell tip are tracked over 1.5 cycles to describe the jellyfish path. Multiple positions of the jellyfish bell margin are initially tracked in 2D from four independent planes and individually projected in 3D based on the jellyfish path and geometrical properties of the setup. A cubic spline interpolation and the exponentially weighted moving average are used to estimate derived quantities, including velocity and acceleration of the jellyfish locomotion. We will discuss distinctive features of the jellyfish 3D motion at various swimming phases, and will provide insight on the 3D contraction and relaxation in terms of the locomotion, the steadiness of the bell margin eccentricity, and local Reynolds number based on the instantaneous mean diameter of the bell.
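The derived-quantity step can be illustrated with a toy 1D version: EWMA smoothing of tracked positions followed by finite-difference velocities (the study itself uses cubic-spline interpolation; the data and parameter values here are invented):

```python
def ewma(series, alpha=0.3):
    """Exponentially weighted moving average used to smooth tracked
    positions before differentiation (alpha is an assumed value)."""
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def velocity(positions, dt):
    """Central-difference velocity estimates; a short stand-in for the
    spline derivatives used in the study."""
    return [(positions[i + 1] - positions[i - 1]) / (2 * dt)
            for i in range(1, len(positions) - 1)]

# Invented 1D bell-tip positions sampled every 0.1 s.
z = [0.0, 0.8, 2.1, 2.9, 4.2, 5.0]
v = velocity(ewma(z), dt=0.1)
print([round(x, 2) for x in v])
```

Acceleration follows by differencing the velocity series the same way.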

  8. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. Symptoms such as eye fatigue and 3D sickness have been a concern when viewing 3D films for prolonged periods, so it is important to consider the safety of viewing virtual 3D content as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence, and we used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. The time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology.

  9. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole-tongue motion. Experimental results show that using the combined information improves motion estimation near the tongue surface, a region previously reported to be problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown.

  10. Deployment of a 3D tag tracking method utilising RFID

    NASA Astrophysics Data System (ADS)

    Wasif Reza, Ahmed; Yun, Teoh Wei; Dimyati, Kaharudin; Geok Tan, Kim; Ariffin Noordin, Kamarul

    2012-04-01

A crucial problem in tracking objects using radio frequency is the inconsistency of signal strength reception, caused mainly by environmental factors and blockage, which have the greatest impact on tracking accuracy. Moreover, three dimensions are more relevant to warehouse scanning. Therefore, this study proposes a new, highly accurate three-dimensional (3D) radio frequency identification-based indoor tracking system that takes different attenuation factors and obstacles into account. The obtained results show that the proposed system yields high-quality performance, with an average error as low as 0.27 m (without obstacles and attenuation effects). They also show that the proposed tracking technique achieves relatively low errors (0.4 and 0.36 m, respectively) even in the presence of the highest attenuation effect, e = 3.3, or when the environment is largely affected by 50% of the obstacles. Furthermore, the superiority of the proposed 3D tracking system has been demonstrated by comparison with other existing approaches. The 3D tracking system proposed in this study is applicable to warehouse scanning.
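Attenuation enters such systems through the path-loss model used to convert received signal strength to range. A minimal sketch of the standard log-distance model, with an assumed reference RSSI, is:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=3.3):
    """Invert the log-distance path-loss model to estimate tag range from
    received signal strength. The exponent 3.3 echoes the highest
    attenuation effect (e = 3.3) mentioned in the abstract; the reference
    RSSI at 1 m is an assumed value."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

print(round(rssi_to_distance(-73.0), 2))  # → 10.0 (metres)
```

Ranges estimated this way from three or more readers can then be combined by trilateration to fix the tag's 3D position.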

  11. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low-dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework successfully tracks 3-D human walking poses in a 3-D environment while exploring only a 4-D state space. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for monocular sequences with a high perspective effect from the CAVIAR dataset.
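Projecting each regressed silhouette relies on mapping points through a plane-to-image homography; the generic operation (with a made-up matrix H, not one estimated from a scene) is:

```python
def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography H (row-major nested lists):
    multiply by H in homogeneous coordinates, then divide by w."""
    out = []
    for x, y in pts:
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append(((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                    (H[1][0] * x + H[1][1] * y + H[1][2]) / w))
    return out

H = [[1.0, 0.0, 5.0],
     [0.0, 1.0, 2.0],
     [0.0, 0.0, 1.0]]  # pure translation, the simplest homography
print(apply_homography(H, [(0.0, 0.0), (1.0, 1.0)]))  # → [(5.0, 2.0), (6.0, 3.0)]
```

Unlike a similarity transformation, a general H also encodes perspective, which is why the paper's homographic alignment handles high-perspective surveillance views better.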

  12. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

As clinical applications grow, 3-D ultrasound imaging is undergoing rapid technical development. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we propose a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and lower cost. We designed a sliding track with a linear position sensor attached, which transmitted positional data via a Bluetooth-based wireless communication module, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were acquired simultaneously as the probe was moved along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs.

  13. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. 
The range of track diameter observed was between 4

  14. Discerning nonrigid 3D shapes from motion cues

    PubMed Central

    Jain, Anshul; Zaidi, Qasim

    2011-01-01

    Many organisms and objects deform nonrigidly when moving, requiring perceivers to separate shape changes from object motions. Surprisingly, the abilities of observers to correctly infer nonrigid volumetric shapes from motion cues have not been measured, and structure from motion models predominantly use variants of rigidity assumptions. We show that observers are equally sensitive at discriminating cross-sections of flexing and rigid cylinders based on motion cues, when the cylinders are rotated simultaneously around the vertical and depth axes. A computational model based on motion perspective (i.e., assuming perceived depth is inversely proportional to local velocity) predicted the psychometric curves better than shape from motion factorization models using shape or trajectory basis functions. Asymmetric percepts of symmetric cylinders, arising because of asymmetric velocity profiles, provided additional evidence for the dominant role of relative velocity in shape perception. Finally, we show that inexperienced observers are generally incapable of using motion cues to detect inflation/deflation of rigid and flexing cylinders, but this handicap can be overcome with practice for both nonrigid and rigid shapes. The empirical and computational results of this study argue against the use of rigidity assumptions in extracting 3D shape from motion and for the primacy of motion deformations computed from motion shears. PMID:21205884

  15. Tracking tissue section surfaces for automated 3D confocal cytometry

    NASA Astrophysics Data System (ADS)

    Agustin, Ramses; Price, Jeffrey H.

    2002-05-01

Three-dimensional cytometry, whereby large volumes of tissue would be measured automatically, requires a computerized method for detecting the upper and lower tissue boundaries. In conventional confocal microscopy, the user interactively sets limits for axial scanning for each field-of-view. Biological specimens vary in section thickness, thereby driving the requirement for setting vertical scan limits. Limits could be set arbitrarily large to ensure the entire tissue is scanned, but automatic surface identification would eliminate storing undue numbers of empty optical sections and forms the basis for incorporating lateral microscope stage motion to collect unlimited numbers of stacks. This walk-away automation of 3D confocal scanning for biological imaging is the first step toward practical, computerized statistical sampling from arbitrarily large tissue volumes. Preliminary results for automatic tissue surface tracking were obtained for phase-contrast microscopy by measuring focus sharpness (previously used for high-speed autofocus by our group). Measurements were taken from 5×5 fields-of-view from hamster liver sections, varying from five to twenty microns in thickness, then smoothed to lessen variations of in-focus information at each axial position. Because image sharpness (as the power of high spatial frequency components) drops across the axial boundaries of a tissue section, mathematical quantities including the full-width at half-maximum, extrema in the first derivative, and the second derivative were used to locate the proximal and distal surfaces of a tissue. Results from these tests were evaluated against manual (i.e., visual) determination of section boundaries.
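The full-width-at-half-maximum criterion for locating the two surfaces from an axial sharpness profile can be sketched directly (synthetic profile, index units rather than microns):

```python
def tissue_bounds(sharpness):
    """Locate proximal and distal tissue surfaces as the full-width at
    half-maximum of the axial focus-sharpness profile, one of the
    criteria evaluated in the abstract; returns z-indices."""
    half = max(sharpness) / 2.0
    above = [i for i, s in enumerate(sharpness) if s >= half]
    return above[0], above[-1]

# Synthetic sharpness vs z-position: low outside the section, high inside.
profile = [1, 2, 3, 9, 14, 15, 14, 10, 3, 2, 1]
print(tissue_bounds(profile))  # → (3, 7)
```

The derivative-extrema criteria mentioned in the abstract would instead look for the steepest rise and fall of the same profile.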

  16. Awake Animal Imaging Motion Tracking Software

    SciTech Connect

    Goddard, James

    2010-03-15

The Awake Animal Motion Tracking Software calculates the 3D head motion of a live, awake animal during a medical imaging scan. In conjunction with markers attached to the head, images acquired from multiple cameras are processed and marker locations precisely determined. Using off-line camera calibration data, the 3D positions of the markers are calculated along with a 6-degree-of-freedom position and orientation (pose) relative to a fixed initial position. This calculation is performed in real time at frame rates up to 30 frames per second. A time stamp with microsecond accuracy from a time-base source is attached to each pose measurement.

  17. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Bachmair, F.; Bäni, L.; Bergonzo, P.; Caylar, B.; Forcolin, G.; Haughton, I.; Hits, D.; Kagan, H.; Kass, R.; Li, L.; Oh, A.; Phan, S.; Pomorski, M.; Smith, D. S.; Tyzhnevyi, V.; Wallny, R.; Whitehead, D.

    2015-06-01

    A novel device using single-crystal chemical vapour deposited diamond and resistive electrodes in the bulk forming a 3D diamond detector is presented. The electrodes of the device were fabricated with laser assisted phase change of diamond into a combination of diamond-like carbon, amorphous carbon and graphite. The connections to the electrodes of the device were made using a photo-lithographic process. The electrical and particle detection properties of the device were investigated. A prototype detector system consisting of the 3D device connected to a multi-channel readout was successfully tested with 120 GeV protons proving the feasibility of the 3D diamond detector concept for particle tracking applications for the first time.

  18. Light driven micro-robotics with holographic 3D tracking

    NASA Astrophysics Data System (ADS)

    Glückstad, Jesper

    2016-04-01

We recently pioneered the concept of light-driven micro-robotics, including new and disruptive 3D-printed micro-tools coined Wave-guided Optical Waveguides, which can be optically trapped and "remote-controlled" in real time in a volume with six degrees of freedom. Exploring the full potential of this new drone-like 3D light robotics approach in challenging microscopic geometries requires a versatile and real-time reconfigurable light coupling that can dynamically track a plurality of "light robots" in 3D to ensure continuous optimal light coupling on the fly. Our latest developments in this new and exciting area are reviewed in this invited paper.

  19. Nonstationary 3D motion of an elastic spherical shell

    NASA Astrophysics Data System (ADS)

    Tarlakovskii, D. V.; Fedotenkov, G. V.

    2015-03-01

    A 3D model of motion of a thin elastic spherical Timoshenko shell under the action of arbitrarily distributed nonstationary pressure is considered. An approach for splitting the system of equations of 3D motion of the shell is proposed. The integral representations of the solution with kernels in the form of influence functions, which can be determined analytically by using series expansions in the eigenfunctions and the Laplace transform, are constructed. An algorithm for solving the problem on the action of nonstationary normal pressure on the shell is constructed and implemented. The obtained results find practical use in aircraft and rocket construction and in many other industrial fields where thin-walled shell structural members under nonstationary working conditions are widely used.

  20. Automated 3-D tracking of centrosomes in sequences of confocal image stacks.

    PubMed

    Kerekes, Ryan A; Gleason, Shaun S; Trivedi, Niraj; Solecki, David J

    2009-01-01

    In order to facilitate the study of neuron migration, we propose a method for 3-D detection and tracking of centrosomes in time-lapse confocal image stacks of live neuron cells. We combine Laplacian-based blob detection, adaptive thresholding, and the extraction of scale and roundness features to find centrosome-like objects in each frame. We link these detections using the joint probabilistic data association filter (JPDAF) tracking algorithm with a Newtonian state-space model tailored to the motion characteristics of centrosomes in live neurons. We apply our algorithm to image sequences containing multiple cells, some of which had been treated with motion-inhibiting drugs. We provide qualitative results and quantitative comparisons to manual segmentation and tracking results showing that our average motion estimates agree to within 13% of those computed manually by neurobiologists.
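A greatly simplified stand-in for the linking step, keeping only the constant-velocity (Newtonian) prediction and nearest-neighbour association while omitting the probabilistic data association of the actual JPDAF:

```python
import math

def link(track, detections):
    """Predict the next centrosome position with a constant-velocity model
    and pick the nearest detection; 2D points for brevity, and no
    probabilistic gating as in the actual JPDAF."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    pred = (2 * x1 - x0, 2 * y1 - y0)  # last position plus one velocity step
    return min(detections, key=lambda det: math.dist(det, pred))

trk = [(0.0, 0.0), (1.0, 0.5)]               # two previous positions
dets = [(2.1, 0.9), (0.2, 0.1), (5.0, 5.0)]  # candidate detections in the frame
print(link(trk, dets))  # → (2.1, 0.9)
```

The JPDAF instead weights all gated detections by their association probabilities, which is what makes it robust when multiple centrosomes pass close to one another.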

  1. 3D Guided Wave Motion Analysis on Laminated Composites

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

Ultrasonic guided waves have proved useful for structural health monitoring (SHM) and nondestructive evaluation (NDE) due to their ability to propagate long distances with less energy loss compared to bulk waves and due to their sensitivity to small defects in the structure. Analysis of actively transmitted ultrasonic signals has long been used to detect and assess damage. However, there remain many challenging tasks for guided wave based SHM due to the complexity involved with propagating guided waves, especially in the case of composite materials. The multimodal nature of the ultrasonic guided waves complicates the related damage analysis. This paper presents results from parallel 3D elastodynamic finite integration technique (EFIT) simulations used to acquire 3D wave motion in the subject laminated carbon fiber reinforced polymer composites. The acquired 3D wave motion is then analyzed by frequency-wavenumber analysis to study the wave propagation and interaction in the composite laminate. The frequency-wavenumber analysis enables the study of individual modes and visualization of mode conversion. Delamination damage has been incorporated into the EFIT model to generate "damaged" data. Finally, the potential for damage detection in laminated composites is discussed.

  2. Ground Motion and Variability from 3-D Deterministic Broadband Simulations

    NASA Astrophysics Data System (ADS)

    Withers, Kyle Brett

The accuracy of earthquake source descriptions is a major limitation in high-frequency (> 1 Hz) deterministic ground motion prediction, which is critical for performance-based design by building engineers. With the recent addition of realistic fault topography in 3D simulations of earthquake source models, ground motion can be deterministically calculated more realistically up to higher frequencies. We first introduce a technique to model frequency-dependent attenuation and compare its impact on strong ground motions recorded for the 2008 Chino Hills earthquake. Then, we model dynamic rupture propagation for both a generic strike-slip event and blind thrust scenario earthquakes matching the fault geometry of the 1994 Mw 6.7 Northridge earthquake along rough faults up to 8 Hz. We incorporate frequency-dependent attenuation via a power law above a reference frequency, of the form Q(f) = Q0 f^n, with high accuracy down to Q values of 15, and include nonlinear effects via Drucker-Prager plasticity. We model the region surrounding the fault with and without small-scale medium complexity in both a 1D layered model characteristic of southern California rock and a 3D medium extracted from the SCEC CVMSi.426 including a near-surface geotechnical layer. We find that the spectral accelerations from our models are within 1-2 interevent standard deviations of recent ground motion prediction equations (GMPEs) and compare well with recordings from strong ground motion stations at both short and long periods. At periods shorter than 1 second, Q(f) is needed to match the decay of spectral acceleration seen in the GMPEs as a function of distance from the fault. We find that the similarity between the intraevent variability of our simulations and observations increases when small-scale heterogeneity and plasticity are included, which is extremely important because uncertainty in ground motion estimates dominates the overall uncertainty in seismic risk.
In addition to GMPEs, we compare with simple

  3. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed in which video streams are synchronized and displayed in a 3D model by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work, we found that the 3D model created greater situational awareness, making it easier to track people across multiple video streams. Based on all experiences from the experimental setup and the case, recommendations are formulated for use in practice.

  4. Introductory review on 'Flying Triangulation': a motion-robust optical 3D measurement principle

    NASA Astrophysics Data System (ADS)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  5. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approaches lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypotheses tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.
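The particle filtering framework referred to above follows the standard predict-weight-resample cycle. Below is a toy 1D bootstrap version; the paper's filter operates on 6-DOF camera poses and injects the multiple-hypothesis candidate poses, which this sketch omits:

```python
import math
import random

def particle_filter_step(particles, z, motion_std=0.1, obs_std=0.5):
    """One predict-weight-resample cycle of a bootstrap particle filter
    in 1D, with illustrative noise parameters."""
    # Predict: diffuse each particle with the motion model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the observation z given each particle.
    weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
ps = [random.uniform(-5.0, 5.0) for _ in range(500)]
for z in [1.0, 1.1, 0.9, 1.0]:            # observations near 1.0
    ps = particle_filter_step(ps, z)
print(round(sum(ps) / len(ps), 1))         # posterior mean settles near 1.0
```

Injecting candidate poses amounts to seeding some particles at the multiple-hypothesis solutions each frame, so the filter can recover when edge ambiguities mislead it.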

  6. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)

  7. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.

    PubMed

    Erdem, Arif Tanju; Ercan, Ali Özer

    2015-02-01

    In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well-known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused at the EKF with the camera measurements in either the correction stage (as measurement inputs) or the prediction stage (as control inputs). In the literature, only one type of inertial sensor is generally employed in the EKF, or, when both are employed, they are fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs, using the same data set collected at different motion speeds. In particular, we compare the performances of different approaches based on 3D pose errors, in addition to the camera reprojection errors commonly found in the literature, which provides further insight into the strengths and weaknesses of different approaches. We show using both simulated and real data that it is always better to fuse both sensors in the measurement stage and that, in particular, the accelerometer helps more with the 3D position tracking accuracy, whereas the gyroscope helps more with the 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
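    The fusion architecture the paper favours can be illustrated with a deliberately simplified linear Kalman filter: a 1-D orientation state corrected by a camera angle measurement and a gyroscope rate measurement stacked together in the same correction stage. All numeric values are hypothetical; the paper's actual EKF estimates full 3D pose.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state [angle, rate], constant-rate model
Qn = np.diag([1e-6, 1e-4])              # process noise (assumed)
H = np.eye(2)                           # camera angle and gyro rate, both as measurements
Rn = np.diag([0.05 ** 2, 0.01 ** 2])    # camera noise, gyro noise (assumed)

def kf_step(x, P, z):
    # Prediction stage
    x = F @ x
    P = F @ P @ F.T + Qn
    # Correction stage: camera and gyro fused together as measurement inputs
    S = H @ P @ H.T + Rn
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
true_rate = 0.5
x, P = np.zeros(2), np.eye(2)
for k in range(1, 501):
    theta_true = true_rate * k * dt
    z = np.array([theta_true + rng.normal(0, 0.05),   # camera: noisy angle
                  true_rate + rng.normal(0, 0.01)])   # gyro: noisy rate
    x, P = kf_step(x, P, z)
```

    After 500 steps the filtered angle and rate track the simulated ground truth closely, showing the benefit of correcting with both sensors at once.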

  8. Tracked 3D ultrasound in radio-frequency liver ablation

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Fichtinger, Gabor; Taylor, Russell H.; Choti, Michael A.

    2003-05-01

    Recent studies have shown that radio frequency (RF) ablation is a simple, safe and potentially effective treatment for selected patients with liver metastases. Despite all recent therapeutic advancements, however, intra-procedural target localization and precise, consistent placement of the tissue ablator device are still unsolved problems. Various imaging modalities, including ultrasound (US) and computed tomography (CT), have been tried as guidance modalities. Transcutaneous US imaging, due to its real-time nature, is beneficial in many cases but often fails to adequately visualize the tumor. Intraoperative or laparoscopic US, on the other hand, provides improved visualization and target imaging. This paper describes a system for computer-assisted RF ablation of liver tumors, combining navigational tracking of a conventional imaging ultrasound probe to produce 3D ultrasound imaging with a tracked RF ablation device supported by a passive mechanical arm and spatially registered to the ultrasound volume.
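    The core bookkeeping in such a tracked-ultrasound system is a chain of homogeneous transforms: a probe calibration maps image pixels into the probe frame, and the tracker pose maps the probe frame into the room. A minimal sketch with hypothetical calibration and tracker values (not the authors' actual system parameters):

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical probe calibration: image-plane millimetres -> probe frame.
sx, sy = 0.2, 0.2                                   # pixel spacing in mm (assumed)
T_image_to_probe = make_pose(np.eye(3), np.array([10.0, 0.0, 0.0]))

# Hypothetical tracker reading: probe pose in the room (world) frame.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T_probe_to_world = make_pose(Rz, np.array([100.0, 50.0, 20.0]))

def pixel_to_world(u, v):
    """Map an image pixel (u, v) to 3D world coordinates in mm."""
    p_img = np.array([u * sx, v * sy, 0.0, 1.0])    # homogeneous image point
    return (T_probe_to_world @ T_image_to_probe @ p_img)[:3]

p = pixel_to_world(50, 100)
```

    Accumulating such world-frame pixel positions over a tracked sweep is what turns a conventional 2D probe into a 3D imaging device.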

  9. 3D Orbital Tracking in a Modified Two-photon Microscope: An Application to the Tracking of Intracellular Vesicles

    PubMed Central

    Gratton, Enrico

    2014-01-01

    The objective of this video protocol is to discuss how to perform and analyze a three-dimensional fluorescent orbital particle tracking experiment using a modified two-photon microscope1. As opposed to conventional approaches (raster scan or wide field based on a stack of frames), 3D orbital tracking makes it possible to localize and follow, with high spatial accuracy (10 nm) and temporal resolution (50 Hz frequency response), the 3D displacement of a moving fluorescent particle on length scales of hundreds of microns2. The method is based on a feedback algorithm that controls the hardware of a two-photon laser scanning microscope in order to perform a circular orbit around the object to be tracked: the feedback mechanism maintains the fluorescent object in the center by controlling the displacement of the scanning beam3-5. To demonstrate the advantages of this technique, we followed a fast-moving organelle, the lysosome, within a living cell6,7. Cells were plated according to standard protocols and stained using a commercially available lysosome dye. We briefly discuss the hardware configuration and describe in more detail the control software needed to perform a 3D orbital tracking experiment inside living cells. We discuss in detail the parameters required to control the scanning microscope and enable the motion of the beam in a closed orbit around the particle. We conclude by demonstrating how this method can be effectively used to track the fast motion of a labeled lysosome along microtubules in 3D within a live cell. Lysosomes can move with speeds in the range of 0.4-0.5 µm/sec, typically displaying directed motion along the microtubule network8. PMID:25350070

  10. 3D orbital tracking in a modified two-photon microscope: an application to the tracking of intracellular vesicles.

    PubMed

    Anzalone, Andrea; Annibale, Paolo; Gratton, Enrico

    2014-10-01

    The objective of this video protocol is to discuss how to perform and analyze a three-dimensional fluorescent orbital particle tracking experiment using a modified two-photon microscope(1). As opposed to conventional approaches (raster scan or wide field based on a stack of frames), 3D orbital tracking makes it possible to localize and follow, with high spatial accuracy (10 nm) and temporal resolution (50 Hz frequency response), the 3D displacement of a moving fluorescent particle on length scales of hundreds of microns(2). The method is based on a feedback algorithm that controls the hardware of a two-photon laser scanning microscope in order to perform a circular orbit around the object to be tracked: the feedback mechanism maintains the fluorescent object in the center by controlling the displacement of the scanning beam(3-5). To demonstrate the advantages of this technique, we followed a fast-moving organelle, the lysosome, within a living cell(6,7). Cells were plated according to standard protocols and stained using a commercially available lysosome dye. We briefly discuss the hardware configuration and describe in more detail the control software needed to perform a 3D orbital tracking experiment inside living cells. We discuss in detail the parameters required to control the scanning microscope and enable the motion of the beam in a closed orbit around the particle. We conclude by demonstrating how this method can be effectively used to track the fast motion of a labeled lysosome along microtubules in 3D within a live cell. Lysosomes can move with speeds in the range of 0.4-0.5 µm/sec, typically displaying directed motion along the microtubule network(8).
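    The feedback principle behind orbital tracking can be seen in a small simulation: when the orbit is centred on the particle the intensity along the orbit is constant, and any offset shows up in the first Fourier harmonic of the intensity, whose phase points toward the particle. This toy model (Gaussian spot, hypothetical parameters) illustrates the idea only and is not the authors' actual controller:

```python
import numpy as np

w = 0.3                              # PSF width, microns (assumed)
r_orbit = 0.3                        # orbit radius, microns (assumed)
offset = np.array([0.08, -0.05])     # true particle offset from the orbit centre

# Sample fluorescence intensity at n points along the circular orbit.
n = 64
phi = 2 * np.pi * np.arange(n) / n
orbit = r_orbit * np.stack([np.cos(phi), np.sin(phi)], axis=1)
d2 = ((orbit - offset) ** 2).sum(axis=1)
I = np.exp(-d2 / (2 * w ** 2))       # Gaussian spot seen along the orbit

# The first Fourier harmonic of the intensity points toward the particle;
# a feedback loop would shift the orbit centre along this direction.
c1 = (I * np.exp(-1j * phi)).sum() / n
direction = np.array([c1.real, -c1.imag])
direction /= np.linalg.norm(direction)
```

    In a real instrument this correction runs continuously, so the orbit centre tracks the particle as it moves.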

  11. Geometric-model-free tracking of extended targets using 3D lidar measurements

    NASA Astrophysics Data System (ADS)

    Steinemann, Philipp; Klappstein, Jens; Dickmann, Juergen; von Hundelshausen, Felix; Wünsche, Hans-Joachim

    2012-06-01

    Tracking of extended targets in high-definition, 360-degree 3D-LIDAR (Light Detection and Ranging) measurements is a challenging task and a current research topic. It is a key component in robotic applications, and is relevant to path planning and collision avoidance. This paper proposes a new method without a geometric model to simultaneously track and accumulate 3D-LIDAR measurements of an object. The method itself is based on a particle filter and uses an object-related local 3D grid for each object. No geometric object hypothesis is needed. Accumulation allows coping with occlusions. The prediction step of the particle filter is governed by a motion model consisting of a deterministic and a probabilistic part. Since this paper is focused on tracking ground vehicles, a bicycle model is used for the deterministic part. The probabilistic part depends on the current state of each particle. A function for calculating the current probability density function for state transition is developed. It is derived in detail and based on a database consisting of vehicle dynamics measurements over several hundred kilometers. The adaptive probability density function narrows down the gating area for measurement data association. The second part of the proposed method addresses weighting the particles with a cost function. Different 3D-grid-dependent cost functions are presented and evaluated. Evaluations with real 3D-LIDAR measurements show the performance of the proposed method. The results are also compared to ground truth data.
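    The prediction step described above, a deterministic kinematic bicycle model plus a probabilistic perturbation, can be sketched as follows. The noise magnitudes are hypothetical; the paper's state-dependent transition density is more elaborate:

```python
import numpy as np

def bicycle_predict(particles, v, steer, L=2.7, dt=0.1, rng=None):
    """Propagate particles [x, y, heading] with a kinematic bicycle model
    (deterministic part) plus Gaussian noise (probabilistic part).
    L is the wheelbase in metres (assumed)."""
    rng = rng or np.random.default_rng()
    x, y, th = particles.T
    x = x + v * np.cos(th) * dt
    y = y + v * np.sin(th) * dt
    th = th + v / L * np.tan(steer) * dt
    noise = rng.normal(0.0, [0.05, 0.05, 0.01], size=particles.shape)
    return np.stack([x, y, th], axis=1) + noise

# 500 particles starting at the origin, vehicle driving straight at 10 m/s.
particles = np.zeros((500, 3))
particles = bicycle_predict(particles, v=10.0, steer=0.0,
                            rng=np.random.default_rng(2))
```

    After one step the particle cloud is centred about one metre ahead of the origin, with a spread set by the noise terms; the weighting step with a grid-dependent cost function would follow.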

  12. Nonlinear Synchronization for Automatic Learning of 3D Pose Variability in Human Motion Sequences

    NASA Astrophysics Data System (ADS)

    Mozerov, M.; Rius, I.; Roca, X.; González, J.

    2009-12-01

    A dense matching algorithm that solves the problem of synchronizing prerecorded human motion sequences, which show different speeds and accelerations, is proposed. The approach is based on minimization of MRF energy and solves the problem by using Dynamic Programming. Additionally, an optimal sequence is automatically selected from the input dataset to be a time-scale pattern for all other sequences. The paper utilizes an action specific model which automatically learns the variability of 3D human postures observed in a set of training sequences. The model is trained using the public CMU motion capture dataset for the walking action, and a mean walking performance is automatically learnt. Additionally, statistics about the observed variability of the postures and motion direction are also computed at each time step. The synchronized motion sequences are used to learn a model of human motion for action recognition and full-body tracking purposes.
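    Dynamic-programming synchronization of sequences recorded at different speeds is closely related to dynamic time warping. A minimal sketch with a squared-difference local cost (not the paper's MRF energy):

```python
import numpy as np

def dtw(a, b):
    """Dynamic-programming alignment cost of two 1-D sequences that may
    be sampled at different speeds (squared-difference local cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 40)
slow = np.sin(t)                                  # same motion ...
fast = np.sin(np.linspace(0, 2 * np.pi, 25))      # ... at a different speed
```

    The alignment cost between the two speed-varied copies of the same motion is far lower than against a genuinely different signal, which is what lets one sequence serve as a time-scale pattern for the others.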

  13. The Visual Priming of Motion-Defined 3D Objects

    PubMed Central

    Jiang, Xiong; Jiang, Yang

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation into perceptual priming of the shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM stimulus that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when it was preceded by a static shape or a semantic word that defined a different object shape. These results suggest that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus, but not by a static image or a semantic stimulus, that defines the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed. PMID:26658496

  14. High resolution 3D insider detection and tracking.

    SciTech Connect

    Nelson, Cynthia Lee

    2003-09-01

    Vulnerability analysis studies show that one of the worst threats against a facility is that of an active insider during an emergency evacuation. When a criticality or other emergency alarm occurs, employees immediately proceed along evacuation routes to designated areas. Procedures are then implemented to account for all material, classified parts, etc. The 3-Dimensional Video Motion Detection (3DVMD) technology could be used to detect and track possible insider activities during alarm situations, as just described, as well as during normal operating conditions. The 3DVMD technology uses multiple cameras to create 3-dimensional detection volumes or zones. Movement throughout detection zones is tracked and high-level information, such as the number of people and their direction of motion, is extracted. In the described alarm scenario, deviances of evacuation procedures taken by an individual could be immediately detected and relayed to a central alarm station. The insider could be tracked and any protected items removed from the area could be flagged. The 3DVMD technology could also be used to monitor such items as machines that are used to build classified parts. During an alarm, detections could be made if items were removed from the machine. Overall, the use of 3DVMD technology during emergency evacuations would help to prevent the loss of classified items and would speed recovery from emergency situations. Further security could also be added by analyzing tracked behavior (motion) as it corresponds to predicted behavior, e.g., behavior corresponding with the execution of required procedures. This information would be valuable for detecting a possible insider not only during emergency situations, but also during times of normal operation.

  15. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy using an US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate an US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm) but simple regularisation methods could be used to improve RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating, with displacement thresholds of 2 mm and 5 mm exhibiting a RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach which also accounts for potential prostate rotations could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.
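    Speckle tracking by normalised cross correlation block matching, as used in the study, amounts to an exhaustive displacement search that maximises the NCC score. A 2D sketch with hypothetical block and search sizes:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross correlation between two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def block_match(prev, curr, top, left, size=16, search=5):
    """Find the displacement of a speckle block between frames by
    exhaustive NCC search over a +/- `search` pixel window."""
    block = prev[top:top + size, left:left + size]
    best, best_dy, best_dx = -2.0, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[top + dy:top + dy + size, left + dx:left + dx + size]
            score = ncc(block, cand)
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx

# Synthetic speckle frame and a copy shifted by a known displacement.
rng = np.random.default_rng(3)
frame = rng.random((64, 64))
shifted = np.roll(frame, (2, -3), axis=(0, 1))   # 2 px down, 3 px left
dy, dx = block_match(frame, shifted, 24, 24)
```

    The same search extends directly to 3D blocks for volumetric speckle tracking, at proportionally higher cost.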

  16. 3D motion analysis of keratin filaments in living cells

    NASA Astrophysics Data System (ADS)

    Herberich, Gerlind; Windoffer, Reinhard; Leube, Rudolf; Aach, Til

    2010-03-01

    We present a novel and efficient approach for 3D motion estimation of keratin intermediate filaments in living cells. Keratin filaments are elastic cables forming a complex scaffolding within epithelial cells. To understand the mechanisms of filament formation and network organisation under physiological and pathological conditions, quantitative measurements of dynamic network alterations are essential. We therefore acquired time-lapse series of 3D images using a confocal laser scanning microscope. Based on these image series, we show that a dense vector field can be computed such that the displacements from one frame to the next can be determined. Our method is based on a two-step registration process: first, a rigid pre-registration is applied in order to compensate for possible global cell movement. This step enables the subsequent nonrigid registration to capture only the sought local deformations of the filaments. As the transformation model of the deformable registration algorithm is based on Free Form Deformations, it is well suited for modeling filament network dynamics. The optimization is performed using efficient linear programming techniques, so that the huge amount of image data in a time series can be processed efficiently. The evaluation of our results illustrates the potential of our approach.

  17. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural-network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and the problems encountered.

  18. 3D visualisation and analysis of single and coalescing tracks in Solid state Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, David; Gillmore, Gavin; Brown, Louise; Petford, Nick

    2010-05-01

    Exposure to radon gas (222Rn) and its associated ionising decay products can cause lung cancer in humans (1). Solid state Nuclear Track Detectors (SSNTDs) can be used to monitor radon concentrations (2). Alpha particles from radon decay form tracks in the detectors, and these tracks can be etched in order to enable 2D surface image analysis. We have previously shown that confocal microscopy can be used for 3D visualisation of etched SSNTDs (3). The aim of this study was to further investigate track angles and patterns in SSNTDs. A 'LEXT' confocal laser scanning microscope (Olympus Corporation, Japan) was used to acquire 3D image datasets of five CR-39 plastic SSNTDs. The resultant 3D visualisations were analysed by eye and inclination angles assessed on selected tracks. From visual assessment, single isolated tracks as well as coalescing tracks were observed on the etched detectors, with varying track inclination angles. Several different patterns of track formation were seen, such as single isolated and double coalescing tracks. The observed track inclination angles may help to assess the angle at which alpha particles hit the detector.
    (1) Darby, S. et al. Radon in homes and risk of lung cancer: collaborative analysis of individual data from 13 European case-control studies. British Medical Journal 2005; 330: 223-226.
    (2) Phillips, P.S., Denman, A.R., Crockett, R.G.M., Gillmore, G., Groves-Kirkby, C.J., Woolridge, A. Comparative analysis of weekly vs. three-monthly radon measurements in dwellings. DEFRA Report No. DEFRA/RAS/03.006, 2004.
    (3) Wertheim, D., Gillmore, G., Brown, L., Petford, N. A new method of imaging particle tracks in Solid State Nuclear Track Detectors. Journal of Microscopy 2010; 237: 1-6.

  19. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.
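    The 3D reconstruction underlying such trajectories is triangulation from a calibrated camera pair; reconstruction error then follows from perturbing the calibration parameters or pixel observations. A minimal linear (DLT) triangulation sketch with hypothetical camera parameters:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from two pixel
    observations x1, x2 and 3x4 camera projection matrices P1, P2."""
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with camera matrix P; returns pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical calibrated cameras with a 1 m baseline along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    With exact observations the point is recovered exactly; adding noise to the pixels or the calibration shows how reconstruction accuracy depends on the set-up, which is the question the paper analyses.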

  20. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Astrophysics Data System (ADS)

    Nandhakumar, N.; Smith, Philip W.

    1993-12-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single-beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the non-zero delay between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but this leads to the motion-structure paradox: most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  1. Teleoperation of a robot manipulator from 3D human hand-arm motion

    NASA Astrophysics Data System (ADS)

    Kofman, Jonathan; Verma, Siddharth; Wu, Xianghai; Luu, Timothy

    2003-10-01

    The control of a robot manipulator by a human operator is often necessary in unstructured dynamic environments with unfamiliar objects. Remote teleoperation is required when human presence at the robot site is undesirable or difficult, such as in handling hazardous materials and operating in dangerous or inaccessible environments. Previous approaches have employed mechanical or other contacting interfaces which require unnatural motions for object manipulation tasks or hinder dexterous human motion. This paper presents a non-contacting method of teleoperating a robot manipulator by having the human operator perform the 3D human hand-arm motion that would naturally be used to complete an object manipulation task, and tracking the motion with a stereo-camera system at the local site. The 3D human hand-arm motion is reconstructed at the remote robot site and is used to control the position and orientation of the robot manipulator end-effector in real-time. Images captured of the robot interacting with objects at the remote site provide visual feedback to the human operator. Tests in teleoperation of the robot manipulator have demonstrated the ability of the human to carry out object manipulation tasks remotely and of the teleoperated robot manipulator system to copy human-arm motions in real-time.

  2. Holographic microscopy for 3D tracking of bacteria

    NASA Astrophysics Data System (ADS)

    Nadeau, Jay; Cho, Yong Bin; El-Kholy, Marwan; Bedrossian, Manuel; Rider, Stephanie; Lindensmith, Christian; Wallace, J. Kent

    2016-03-01

    Understanding when, how, and if bacteria swim is key to understanding critical ecological and biological processes, from carbon cycling to infection. Imaging motility by traditional light microscopy is limited by focus depth, requiring cells to be constrained in z. Holographic microscopy offers an instantaneous 3D snapshot of a large sample volume, and is therefore ideal in principle for quantifying unconstrained bacterial motility. However, resolving and tracking individual cells is difficult due to the low amplitude and phase contrast of the cells; the index of refraction of typical bacteria differs from that of water only at the second decimal place. In this work we present a combination of optical and sample-handling approaches to facilitate bacterial tracking by holographic phase imaging. The first is the design of the microscope, an off-axis design with the optics along a common path, which minimizes alignment issues while providing all of the advantages of off-axis holography. Second, we use anti-reflective coated etalon glass in the design of sample chambers, which reduces internal reflections. The improvement from the anti-reflective coating appears primarily in phase imaging, and its quantification is presented here. Finally, dyes may be used to increase phase contrast according to the Kramers-Kronig relations. Results using three test strains are presented, illustrating the different types of bacterial motility characterized by an enteric organism (Escherichia coli), an environmental organism (Bacillus subtilis), and a marine organism (Vibrio alginolyticus). Data processing steps to increase the quality of the phase images and facilitate tracking are also discussed.

  3. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiduciary markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phones' displays, as if the display was a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.

  4. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    NASA Astrophysics Data System (ADS)

    Kanphet, J.; Suriyapee, S.; Dumrongkijudom, N.; Sanghangthum, T.; Kumkhwao, J.; Wisetrintong, M.

    2016-03-01

    The purpose of this study was to determine patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half the interfraction motion error, reflecting the stability of organ position achieved with DIBH. The systematic error is likewise about half the random error, because modern linacs can reduce systematic uncertainty effectively, while random errors remain uncontrollable.
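    The abstract does not spell out its estimators, but the conventional (van Herk-style) definitions, group systematic error as the standard deviation of per-patient mean setup errors and random error as the root mean square of per-patient standard deviations, can be sketched with hypothetical data:

```python
import numpy as np

def setup_error_summary(errors_per_patient):
    """Group mean (M), systematic error (Sigma = SD of per-patient means)
    and random error (sigma = RMS of per-patient SDs) from daily setup
    errors in one direction, in mm."""
    means = np.array([np.mean(e) for e in errors_per_patient])
    sds = np.array([np.std(e, ddof=1) for e in errors_per_patient])
    group_mean = means.mean()
    systematic = means.std(ddof=1)
    random_err = np.sqrt((sds ** 2).mean())
    return group_mean, systematic, random_err

# Hypothetical vertical setup errors (mm) for three patients over five fractions.
errors = [[0.4, 0.6, 0.5, 0.3, 0.7],
          [-0.2, 0.1, 0.0, -0.1, 0.2],
          [1.0, 0.8, 1.2, 0.9, 1.1]]
M, Sigma, sigma = setup_error_summary(errors)
```

    The same computation, applied separately to interfraction and intrafraction data in each direction, yields the kind of Sigma and sigma values the abstract reports.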

  5. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with those estimated by 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to the reference methods, the relationship between the different patient groups was similar.

  6. On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.

    PubMed

    Shao, Zhanpeng; Li, Youfu

    2016-02-01

    Motion trajectories tracked from the motions of humans, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segment of a noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of the kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure trajectory similarity and retrieve similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they capture the motion cues in trajectory matching and sign recognition satisfactorily.
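    A kernel-based integral invariant of the kind described can be sketched as a Gaussian-weighted average of distances from each trajectory point to its neighbours. Rigid motions preserve distances, so the descriptor is unchanged under rotation and translation (this is an illustrative construction, not the authors' exact formulation):

```python
import numpy as np

def distance_integral_invariant(traj, scale=5):
    """At each point of a (N, 3) trajectory, a Gaussian-weighted integral
    of distances to neighbouring points; invariant under rigid motion."""
    n = len(traj)
    idx = np.arange(n)
    inv = np.empty(n)
    for i in range(n):
        w = np.exp(-((idx - i) ** 2) / (2 * scale ** 2))  # kernel over arc index
        d = np.linalg.norm(traj - traj[i], axis=1)
        inv[i] = (w * d).sum() / w.sum()
    return inv

# A helical trajectory and a rigidly moved copy of it.
t = np.linspace(0, 4 * np.pi, 200)
traj = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
th = 0.7
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
moved = traj @ R.T + np.array([3.0, -1.0, 2.0])
```

    Varying `scale` yields the multiscale, coarse-to-fine description of the trajectory that the paper exploits for matching.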

  7. Swimming Behavior of Pseudomonas aeruginosa Studied by Holographic 3D Tracking

    PubMed Central

    Vater, Svenja M.; Weiße, Sebastian; Maleschlijski, Stojan; Lotz, Carmen; Koschitzki, Florian; Schwartz, Thomas; Obst, Ursula; Rosenhahn, Axel

    2014-01-01

    Holographic 3D tracking was applied to record and analyze the swimming behavior of Pseudomonas aeruginosa. The obtained trajectories allow the free swimming behavior of the bacterium to be analyzed qualitatively and quantitatively and classified into five distinct swimming patterns. In addition to the previously reported smooth and oscillatory swimming motions, three further patterns are distinguished. We show that Pseudomonas aeruginosa performs helical movements, which had so far been described only for larger microorganisms. The occurrence of the swimming patterns was determined and transitions between the patterns were analyzed. PMID:24498187

  8. Reliability of 3D upper limb motion analysis in children with obstetric brachial plexus palsy.

    PubMed

    Mahon, Judy; Malone, Ailish; Kiernan, Damien; Meldrum, Dara

    2017-03-01

    Kinematics, measured by 3D upper limb motion analysis (3D-ULMA), can potentially increase understanding of movement patterns by quantifying individual joint contributions. Reliability in children with obstetric brachial plexus palsy (OBPP) has not been established.

  9. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations.
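
    The Dice similarity coefficient used for the validation is straightforward to compute; a minimal sketch (function name assumed):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentations:
    DSC = 2|A n B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

    An average DSC of 0.92, as reported above, therefore indicates strong voxelwise agreement between the semi-automatic and manual segmentations.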

  10. Faceless identification: a model for person identification using the 3D shape and 3D motion as cues

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Li, Haibo

    1999-02-01

    Person identification by using biometric methods based on image sequences, or still images, often requires a controllable and cooperative environment during the image capturing stage. In the forensic case the situation is more likely to be the opposite. In this work we propose a method that makes use of the anthropometry of the human body and human actions as cues for identification. Image sequences from surveillance systems are used, which can be seen as monocular image sequences. A 3D deformable wireframe body model is used as a platform to handle the non-rigid information of the 3D shape and 3D motion of the human body from the image sequence. A recursive method for estimating global motion and local shape variations is presented, using two recursive feedback systems.

  11. Analysis of thoracic aorta hemodynamics using 3D particle tracking velocimetry and computational fluid dynamics.

    PubMed

    Gallo, Diego; Gülan, Utku; Di Stefano, Antonietta; Ponzini, Raffaele; Lüthi, Beat; Holzner, Markus; Morbiducci, Umberto

    2014-09-22

    Parallel to the massive use of image-based computational hemodynamics to study the complex flow establishing in the human aorta, the need for suitable experimental techniques and ad hoc cases for the validation and benchmarking of numerical codes has grown steadily. Here we present a study in which the 3D pulsatile flow in an anatomically realistic phantom of the human ascending aorta is investigated both experimentally and computationally. The experimental study uses 3D particle tracking velocimetry (PTV) to characterize the flow field in vitro, while the finite volume method is applied to numerically solve the governing equations of motion in the same domain, under the same conditions. Our findings show excellent agreement between computed and measured flow fields during the forward flow phase, while the agreement is poorer during the reverse flow phase. In conclusion, we demonstrate that 3D PTV is well suited to the detailed study of complex unsteady flows such as those in the aorta and to validating computational models of aortic hemodynamics. In a future step, it will be possible to take advantage of the ability of 3D PTV to evaluate velocity fluctuations and thereby gain further insight into the process of transition to turbulence occurring in the thoracic aorta.

  12. Multiview 3-D Echocardiography Fusion with Breath-Hold Position Tracking Using an Optical Tracking System.

    PubMed

    Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; McNulty, Alexander; Biamonte, Marina; He, Allen; Noga, Michelle; Boulanger, Pierre; Becher, Harald

    2016-08-01

    Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, a major limitation of 3-D echocardiography is its limited field of view, so that a single acquisition is insufficient to cover the whole geometry of the heart. This study proposes a novel approach that fuses multiple 3-D echocardiography images using an optical tracking system incorporating breath-hold position tracking to ensure that the heart remains in the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field-of-view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvements in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to individual views.

  13. LayTracks3D: A new approach for meshing general solids using medial axis transform

    SciTech Connect

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm LayTracks to generate high-quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and follow the shortest path from the MA to the boundary. These properties of tracks yield the desired meshes: near-cube-shaped elements at the boundary, a structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models, and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  14. A 3D feature point tracking method for ion radiation

    NASA Astrophysics Data System (ADS)

    Kouwenberg, Jasper J. M.; Ulrich, Leonie; Jäkel, Oliver; Greilich, Steffen

    2016-06-01

    A robust and computationally efficient algorithm for automated tracking of high densities of particles travelling in (semi-)straight lines is presented. It extends the implementation of Sbalzarini and Koumoutsakos (2005) and is intended for use in the analysis of single-ion track detectors. By including information from existing tracks in the exclusion criteria and using a recursive cost-minimization function, the algorithm is robust to variations in the measured particle tracks. A trajectory relinking algorithm was included to resolve crossing tracks in high-particle-density images. The algorithm was validated using fluorescent nuclear track detectors (FNTD) irradiated with high and low (heavy-)ion fluences and showed less than 1% faulty trajectories in the latter case.
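
    Frame-to-frame linking by cost minimization can be sketched as a greedy assignment over pairwise distances with a maximum-displacement exclusion criterion. This is a simplified stand-in for the paper's recursive cost-minimization function; all names below are assumptions.

```python
import numpy as np

def link_frames(pts_a, pts_b, max_disp):
    """Greedily link particles between two frames by ascending distance.

    Returns (i, j) index pairs; candidate pairs farther apart than
    `max_disp` are excluded, leaving those particles unlinked.
    """
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    links = []
    while True:
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if not np.isfinite(d[i, j]) or d[i, j] > max_disp:
            break
        links.append((int(i), int(j)))
        d[i, :] = np.inf   # each particle is linked at most once
        d[:, j] = np.inf
    return links
```

    Repeating this over consecutive frames builds trajectories; the published algorithm additionally re-links broken trajectories to handle crossings.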

  15. Markerless motion tracking of awake animals in positron emission tomography.

    PubMed

    Kyme, Andre; Se, Stephen; Meikle, Steven; Angelis, Georgios; Ryder, Will; Popovic, Kata; Yatigammana, Dylan; Fulton, Roger

    2014-11-01

    Noninvasive functional imaging of awake, unrestrained small animals using motion-compensation removes the need for anesthetics and enables an animal's behavioral response to stimuli or administered drugs to be studied concurrently with imaging. While the feasibility of motion-compensated radiotracer imaging of awake rodents using marker-based optical motion tracking has been shown, markerless motion tracking would avoid the risk of marker detachment, streamline the experimental workflow, and potentially provide more accurate pose estimates over a greater range of motion. We have developed a stereoscopic tracking system which relies on native features on the head to estimate motion. Features are detected and matched across multiple camera views to accumulate a database of head landmarks and pose is estimated based on 3D-2D registration of the landmarks to features in each image. Pose estimates of a taxidermal rat head phantom undergoing realistic rat head motion via robot control had a root mean square error of 0.15 and 1.8 mm using markerless and marker-based motion tracking, respectively. Markerless motion tracking also led to an appreciable reduction in motion artifacts in motion-compensated positron emission tomography imaging of a live, unanesthetized rat. The results suggest that further improvements in live subjects are likely if nonrigid features are discriminated robustly and excluded from the pose estimation process.
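
    Pose estimation from matched landmarks is typically solved in closed form. As a simplified 3D-3D analogue of the 3D-2D registration described above (not the authors' method), the Kabsch algorithm recovers the least-squares rigid transform between two landmark sets:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) such that Q ~ P @ R.T + t.

    Closed-form solution via SVD of the cross-covariance matrix,
    with a determinant check to avoid reflections.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

    In the markerless setting, P would be the accumulated head-landmark database and Q the landmarks observed in the current frames; robustly rejecting non-rigid features before this step is the improvement the authors suggest.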

  16. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment, but their continual technical improvement, coupled with decreasing cost, is making them candidates for quantitative three-dimensional (3D) motion analysis in sport gesture studies and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisition of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since both land and underwater cameras are required. In particular, underwater camera calibration can be an issue affecting reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  18. A Hidden Markov Model for 3D Catheter Tip Tracking with 2D X-ray Catheterization Sequence and 3D Rotational Angiography.

    PubMed

    Ambrosini, Pierre; Smal, Ihor; Ruijters, Daniel; Niessen, Wiro; Moelker, Adriaan; van Walsum, Theo

    2016-11-07

    In minimally invasive, image-guided catheterization procedures, physicians require information on the catheter position with respect to the patient's vasculature. In fluoroscopic images, however, visualization of the vasculature requires a toxic contrast agent. Static vasculature roadmapping, which can reduce the use of iodine contrast, is hampered by breathing motion in abdominal catheterization. In this paper, we propose a method to track the catheter tip inside the patient's 3D vessel tree using intra-operative single-plane 2D X-ray image sequences and a peri-operative 3D rotational angiography (3DRA). The method is based on a hidden Markov model (HMM) whose states are the possible positions of the catheter tip inside the 3D vessel tree. The transitions from state to state model the probabilities for the catheter tip to move from one position to another. The HMM is updated according to observation scores based on the registration between the 2D catheter centerline extracted from the 2D X-ray image and the 2D projection of the 3D vessel tree centerline extracted from the 3DRA. The method is extensively evaluated on simulated and clinical datasets acquired during liver abdominal catheterization. The evaluations show a median 3D tip tracking error of 2.3 mm with optimal settings in simulated data. With angiographic data and optimal settings, the registered vessels close to the tip have a median distance error of 4.7 mm. Such accuracy is sufficient to provide physicians with up-to-date roadmapping. The method tracks the catheter tip in real time and enables roadmapping during catheterization procedures.
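
    The most likely tip path through the vessel-tree states can be decoded with the standard Viterbi algorithm for HMMs. The dense-matrix sketch below is illustrative only; in the paper, transitions follow the vessel-tree topology and the observation scores come from the 2D/3D centerline registration.

```python
import numpy as np

def viterbi(log_trans, log_obs):
    """Most likely state sequence of an HMM.

    log_trans[i, j]: log P(state j at t | state i at t-1).
    log_obs[t, j]:   log observation score of state j at time t.
    """
    T, S = log_obs.shape
    dp = np.empty((T, S))                 # best log-score ending in each state
    ptr = np.zeros((T, S), dtype=int)     # backpointers for path recovery
    dp[0] = log_obs[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans   # (prev, cur)
        ptr[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_obs[t]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(ptr[t, path[-1]]))
    return path[::-1]
```

    With states ordered along a vessel branch and transitions restricted to neighbouring positions, the decoded path corresponds to the tip advancing through the tree.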

  19. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display provides stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be displayed simultaneously without degrading resolution or increasing display cost unacceptably. An alternative to presenting multiple views is to measure the observer's position with a viewer-tracking sensor, a crucial component for fluently rendering and accurately projecting the stereo video. In order to render stereo content with respect to the user's viewpoint and to project the content accurately onto the user's left and right eyes, this study develops a real-time viewer-tracking technique that allows the user to move around freely while watching the autostereoscopic display. It comprises face detection using multiple eigenspaces for various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. An Edge Orientation Histogram (EOH) feature on Real AdaBoost is also applied to improve the performance of the original AdaBoost algorithm, which uses Haar features (as implemented in Intel's OpenCV library) to detect human faces; accuracy is further enhanced by rotating the image. The viewer-tracking process achieves a frame rate of up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still affected by varying environmental conditions, the accuracy, robustness and efficiency of the viewer-tracking system are evaluated in this study.

  20. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

    Image processing such as deformable image registration is finding its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI)-guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the tissue signal needed for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimates. Potentially, spatially undersampled images yield comparable motion estimates while greatly reducing acquisition time due to the sparser sampling. To substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations were extracted consistently using the same motion estimation method for each downsampled dataset. Errors between the original and the respectively downsampled versions of the dataset were then evaluated. Compared to ground truth, the results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In an MRI-guided therapy scenario, imaging speed could accordingly increase approximately fourfold, with acceptable loss in the quality of the estimated motion.
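
    The hypothesis that spatially undersampled images can still yield accurate motion estimates can be illustrated with a toy 1-D experiment: estimating a known shift by cross-correlation at full and at half resolution. This is purely illustrative; the paper uses 3D deformable registration, and all names below are assumptions.

```python
import numpy as np

def estimate_shift_mm(ref, mov, spacing_mm):
    """Estimate the translation of `mov` relative to `ref` (both 1-D
    intensity profiles) from the peak of their cross-correlation."""
    corr = np.correlate(mov - mov.mean(), ref - ref.mean(), mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return lag * spacing_mm

# Gaussian feature on a 1 mm grid, shifted by 4 mm
x = np.arange(0.0, 128.0)
ref = np.exp(-0.5 * ((x - 60.0) / 5.0) ** 2)
mov = np.exp(-0.5 * ((x - 64.0) / 5.0) ** 2)

shift_full = estimate_shift_mm(ref, mov, spacing_mm=1.0)            # full resolution
shift_half = estimate_shift_mm(ref[::2], mov[::2], spacing_mm=2.0)  # 2x undersampled
```

    Both resolutions recover the same 4 mm shift here, mirroring the paper's observation that, for smooth motion, sparser sampling can halve or quarter acquisition time with little loss of motion information.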

  1. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization than other 3D display technologies. We propose an interaction setup combining the visualization of objects within the field of view (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were evaluated in a user study with test subjects. The results of the study revealed a high user preference for freehand interaction with the light field display as well as the relatively low cognitive demand of this technique. Further, our results revealed some limitations of the proposed setup and adjustments to be addressed in future work. PMID:25875189

  2. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Artuso, M.; Bachmair, F.; Bäni, L.; Bartosik, M.; Beacham, J.; Bellini, V.; Belyaev, V.; Bentele, B.; Berdermann, E.; Bergonzo, P.; Bes, A.; Brom, J.-M.; Bruzzi, M.; Cerv, M.; Chau, C.; Chiodini, G.; Chren, D.; Cindro, V.; Claus, G.; Collot, J.; Costa, S.; Cumalat, J.; Dabrowski, A.; D`Alessandro, R.; de Boer, W.; Dehning, B.; Dobos, D.; Dünser, M.; Eremin, V.; Eusebi, R.; Forcolin, G.; Forneris, J.; Frais-Kölbl, H.; Gan, K. K.; Gastal, M.; Goffe, M.; Goldstein, J.; Golubev, A.; Gonella, L.; Gorišek, A.; Graber, L.; Grigoriev, E.; Grosse-Knetter, J.; Gui, B.; Guthoff, M.; Haughton, I.; Hidas, D.; Hits, D.; Hoeferkamp, M.; Hofmann, T.; Hosslet, J.; Hostachy, J.-Y.; Hügging, F.; Jansen, H.; Janssen, J.; Kagan, H.; Kanxheri, K.; Kasieczka, G.; Kass, R.; Kassel, F.; Kis, M.; Kramberger, G.; Kuleshov, S.; Lacoste, A.; Lagomarsino, S.; Lo Giudice, A.; Maazouzi, C.; Mandic, I.; Mathieu, C.; McFadden, N.; McGoldrick, G.; Menichelli, M.; Mikuž, M.; Morozzi, A.; Moss, J.; Mountain, R.; Murphy, S.; Oh, A.; Olivero, P.; Parrini, G.; Passeri, D.; Pauluzzi, M.; Pernegger, H.; Perrino, R.; Picollo, F.; Pomorski, M.; Potenza, R.; Quadt, A.; Re, A.; Riley, G.; Roe, S.; Sapinski, M.; Scaringella, M.; Schnetzer, S.; Schreiner, T.; Sciortino, S.; Scorzoni, A.; Seidel, S.; Servoli, L.; Sfyrla, A.; Shimchuk, G.; Smith, D. S.; Sopko, B.; Sopko, V.; Spagnolo, S.; Spanier, S.; Stenson, K.; Stone, R.; Sutera, C.; Taylor, A.; Traeger, M.; Tromson, D.; Trischuk, W.; Tuve, C.; Uplegger, L.; Velthuis, J.; Venturi, N.; Vittone, E.; Wagner, S.; Wallny, R.; Wang, J. C.; Weilhammer, P.; Weingarten, J.; Weiss, C.; Wengler, T.; Wermes, N.; Yamouni, M.; Zavrtanik, M.

    2016-07-01

    In the present study, results towards the development of a 3D diamond sensor are presented. Conductive channels are produced inside the sensor bulk using a femtosecond laser. This electrode geometry allows full charge collection even for low quality diamond sensors. Results from testbeam show that charge is collected by these electrodes. In order to understand the channel growth parameters, with the goal of producing low resistivity channels, the conductive channels produced with a different laser setup are evaluated by Raman spectroscopy.

  3. Rigid Body Motion in Stereo 3D Simulation

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2010-01-01

    This paper addresses the difficulties experienced by first-year students studying rigid body motion at Sofia University. Most quantities describing the rigid body are in relations that the students find hard to visualize and understand. They also lose the notion of cause-result relations between vector quantities, such as the relation between…

  4. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points in the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference. PMID:27463714
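
    As a reduced sketch of the fusion step, a linear constant-velocity Kalman filter can smooth a single landmark coordinate over time. The paper's Extended Kalman Filter fuses 2D landmarks with a nonlinear 3D face model; the function name and noise parameters below are arbitrary assumptions.

```python
import numpy as np

def kalman_cv_1d(zs, dt=1.0, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over one landmark coordinate.

    State x = [position, velocity]; `q` and `r` are (assumed) process
    and measurement noise variances.  Returns filtered positions.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.array([zs[0], 0.0]), np.eye(2)
    out = []
    for z in zs:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)  # update with measurement
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

    An EKF generalizes this by linearizing a nonlinear measurement model (here, the projection of the 3D face model into the image) around the current state estimate.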

  5. Flying triangulation: an optical 3D sensor for the motion-robust acquisition of complex objects.

    PubMed

    Ettl, Svenja; Arold, Oliver; Yang, Zheng; Häusler, Gerd

    2012-01-10

    Three-dimensional (3D) shape acquisition is difficult if an all-around measurement of an object is desired or if relative motion between object and sensor is unavoidable. An optical sensor principle, which we call "flying triangulation," is presented that enables motion-robust acquisition of 3D surface topography. It combines a simple handheld sensor with sophisticated registration algorithms. Complex objects can be acquired easily, simply by freely hand-guiding the sensor around the object. Real-time feedback of the sequential measurement results enables comfortable handling for the user. No tracking is necessary. In contrast to most other eligible sensors, the presented sensor generates 3D data from each single camera image.

  6. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches, and analysis is hampered by difficulties in calibration and the often limited availability of possible camera positions. It is also usually restricted to two dimensions, an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and works fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens, where ad hoc approaches are essential and access within enclosures is often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, discusses the current limitations of this technique, and illustrates the accuracies that can be obtained. All the software required is open source, so this can be a very cost-effective approach that provides a methodology for obtaining data in situations where other approaches would be completely ineffective. PMID:24972869
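
    Per landmark, multi-camera photogrammetric reconstruction reduces to triangulation from calibrated views. A standard linear (DLT) sketch, assuming the 3×4 projection matrices are known from calibration (function name assumed):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image
    coordinates of the same point.  Solves A X = 0 for the homogeneous
    point X via SVD.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]
```

    With more than two synchronised cameras, the extra projections simply add rows to A, which is what makes the multi-camera setup described above more accurate and robust to occlusion.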

  7. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis.

    PubMed

    Pfister, Alexandra; West, Alexandre M; Bronner, Shaw; Noah, Jack Adam

    2014-07-01

    Biomechanical analysis is a powerful tool in the evaluation of movement dysfunction in orthopaedic and neurologic populations. Three-dimensional (3D) motion capture systems are widely used and accurate, but they are costly and not available in many clinical settings. The Microsoft Kinect™ has the potential to be used as an alternative low-cost motion analysis tool. The purpose of this study was to assess the concurrent validity of the Kinect™ with Brekel Kinect software against Vicon Nexus for sagittal-plane gait kinematics. Twenty healthy adults (9 male, 11 female) were tracked while walking and jogging at three velocities on a treadmill. Concurrent hip and knee peak flexion and extension and stride timing measurements were compared between Vicon and Kinect™. Although Kinect™ measurements were representative of normal gait, the Kinect™ generally under-estimated joint flexion and over-estimated extension. The correlation between Kinect™ and Vicon hip angular displacement was very low and the error was large. Kinect™ knee measurements were somewhat better than hip, but not consistent enough for clinical assessment. The correlation between Kinect™ and Vicon stride timing was high and the error was fairly small. Variability in Kinect™ measurements was smallest at the slowest velocity. The Kinect™ has basic motion capture capabilities, and with some minor adjustments it will be an acceptable tool to measure stride timing, but sophisticated advances in software and hardware are necessary to improve its sensitivity before it can be implemented for clinical use.
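
    Sagittal joint angles of the kind compared here are derived from three tracked landmarks; a minimal sketch (landmark roles are illustrative):

```python
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle (degrees) at landmark b formed by segments b->a and b->c,
    e.g. the knee angle from hip (a), knee (b) and ankle (c) markers."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against rounding just outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
```

    Applying this per frame to Kinect™ and Vicon marker streams yields the angular displacement curves whose peaks and correlations are compared above.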

  8. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    With the development of 3-D virtual reality, motion tracking has become an essential part of the entertainment, medical, sports, education and industrial fields. Virtual human characters in digital animation and game applications have been controlled through interface devices such as mice, joysticks and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link its data to a 3-D game character in real time. The prototype setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  9. Simple 3-D stimulus for motion parallax and its simulation.

    PubMed

    Ono, Hiroshi; Chornenkyy, Yevgen; D'Amour, Sarah

    2013-01-01

    Simulation of a given stimulus situation should produce the same perception as the original. Rogers et al (2009 Perception 38 907-911) simulated Wheeler's (1982, PhD thesis, Rutgers University, NJ) motion parallax stimulus and obtained quite different perceptions. Wheeler's observers were unable to reliably report the correct direction of depth, whereas Rogers's were. With three experiments we explored the possible reasons for the discrepancy. Our results suggest that Rogers was able to see depth from the simulation partly due to his experience seeing depth with random dot surfaces.

  10. Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.
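    The accuracy metric used above, target registration error, can be computed directly from homologous fiducials once a rigid transform has been estimated. A minimal sketch (the transform and marker coordinates are illustrative, not from the study):

```python
import numpy as np

def target_registration_error(fixed_fiducials, moving_fiducials, R, t):
    """Mean distance between fixed fiducials and registered moving fiducials.

    fixed/moving: (N, 3) arrays of homologous landmark positions (mm);
    R (3x3) and t (3,) are the rigid transform returned by registration.
    """
    moved = moving_fiducials @ R.T + t
    return np.linalg.norm(moved - fixed_fiducials, axis=1).mean()

# Identity registration of perfectly matched markers gives zero TRE
pts = np.array([[10.0, 0.0, 5.0], [0.0, 20.0, 1.0], [3.0, 4.0, 12.0]])
tre = target_registration_error(pts, pts, np.eye(3), np.zeros(3))
print(f"TRE = {tre:.2f} mm")  # TRE = 0.00 mm
```

A residual rigid offset shows up directly: markers shifted by 1 mm under an identity "registration" give a TRE of 1 mm.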

  11. Ultra-Wideband Time-Difference-of-Arrival High Resolution 3D Proximity Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dekome, Kent; Dusl, John

    2010-01-01

    This paper describes a research and development effort for a prototype ultra-wideband (UWB) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being studied for use in tracking of lunar/Mars rovers and astronauts during early exploration missions when satellite navigation systems are not available. UWB impulse radio (UWB-IR) technology is exploited in the design and implementation of the prototype location and tracking system. A three-dimensional (3D) proximity tracking prototype design using commercially available UWB products is proposed to implement the Time-Difference-Of-Arrival (TDOA) tracking methodology in this research effort. The TDOA tracking algorithm is utilized for location estimation in the prototype system, not only to exploit the precise time resolution possible with UWB signals, but also to eliminate the need for synchronization between the transmitter and the receiver. Simulations show that the TDOA algorithm can achieve fine tracking resolution with low-noise TDOA estimates for close-in tracking. Field tests demonstrated that this prototype UWB TDOA High Resolution 3D Proximity Tracking System is feasible for providing positioning-awareness information in a 3D space to a robotic control system. This 3D tracking system is developed for a robotic control system in a facility called "Moonyard" at Honeywell Defense & System in Arizona under a Space Act Agreement.
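    The TDOA methodology described above reduces to solving for an emitter position from measured range differences between receiver pairs. A hedged Gauss-Newton sketch (synthetic receiver geometry, noise-free measurements; not NASA's implementation, which would add noise weighting and robustness):

```python
import numpy as np

def tdoa_locate(receivers, range_diffs, x0, iters=100):
    """Gauss-Newton solver for emitter position from TDOA measurements.

    receivers: (N, 3) receiver positions; range_diffs: (N-1,) measured
    d_i - d_0, i.e. (t_i - t_0) * c converted to range differences;
    x0: initial guess.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(receivers - x, axis=1)     # receiver distances
        resid = (d[1:] - d[0]) - range_diffs          # model minus measured
        u = (x - receivers) / d[:, None]              # unit vectors to x
        J = u[1:] - u[0]                              # Jacobian of d_i - d_0
        step, *_ = np.linalg.lstsq(J, resid, rcond=None)
        x = x - step
    return x

rx = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
               [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
true = np.array([3.0, 4.0, 2.0])
d = np.linalg.norm(rx - true, axis=1)
est = tdoa_locate(rx, d[1:] - d[0], x0=[5.0, 5.0, 5.0])
print(np.round(est, 3))
```

With four receivers the three TDOAs exactly determine the three unknowns; real deployments use more receivers and solve the same least-squares problem.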

  12. Defense Additive Manufacturing: DOD Needs to Systematically Track Department-wide 3D Printing Efforts

    DTIC Science & Technology

    2015-10-01

    Clip Additively Manufactured • The Navy installed a 3D printer aboard the USS Essex to demonstrate the ability to additively develop and produce...desired result and vision to have the capability on the fleet. These officials stated that the Navy plans to install 3D printers on two additional...DEFENSE ADDITIVE MANUFACTURING DOD Needs to Systematically Track Department-wide 3D Printing Efforts Report to

  13. Motion-Corrected 3D Sonic Anemometer for Tethersondes and Other Moving Platforms

    NASA Technical Reports Server (NTRS)

    Bognar, John

    2012-01-01

    To date, it has not been possible to apply 3D sonic anemometers on tethersondes or similar atmospheric research platforms due to the motion of the supporting platform. A tethersonde module including both a 3D sonic anemometer and associated motion correction sensors has been developed, enabling motion-corrected 3D winds to be measured from a moving platform such as a tethersonde. Blimps and other similar lifting systems are used to support tethersondes, meteorological devices that fly on the tether of a blimp or similar platform. To date, tethersondes have been limited to making basic meteorological measurements (pressure, temperature, humidity, and wind speed and direction). The motion of the tethersonde has precluded the addition of 3D sonic anemometers, which can be used for high-speed flux measurements, thereby limiting what has been achieved to date with tethersondes. The tethersonde modules fly on a tether that can be constantly moving and swaying. This would introduce enormous error into the output of an uncorrected 3D sonic anemometer. The motion correction that is required must be implemented in a low-weight, low-cost manner to be suitable for this application. Until now, flux measurements using 3D sonic anemometers could only be made if the 3D sonic anemometer was located on a rigid, fixed platform such as a tower. This limited the areas in which they could be set up and used. The purpose of the innovation was to enable precise 3D wind and flux measurements to be made using tethersondes. In brief, a 3D accelerometer and a 3D gyroscope were added to a tethersonde module along with a 3D sonic anemometer. This combination allowed for the necessary package motions to be measured, which were then mathematically combined with the measured winds to yield motion-corrected 3D winds. At the time of this reporting, no tethersonde has been able to make any wind measurement other than a basic wind speed and direction measurement. The addition of a 3D sonic
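    The mathematical combination described above can be sketched, in simplified form, as a body-to-earth rotation of the anemometer reading plus compensation for the platform's own velocity. This is a sketch only, with assumed angle conventions; lever-arm and angular-rate terms, which a real correction also needs, are omitted:

```python
import numpy as np

def correct_wind(u_measured, roll, pitch, yaw, v_platform):
    """Rotate a body-frame sonic-anemometer wind vector into earth
    coordinates and add the platform's own velocity back in.

    Angles in radians (aerospace Z-Y-X convention). A sketch of the
    correction only; lever-arm and angular-rate effects are omitted.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                      # body -> earth rotation
    return R @ np.asarray(u_measured) + np.asarray(v_platform)

# Level platform drifting east at 1 m/s in calm air: the sonic reads
# an apparent 1 m/s headwind, which the correction cancels to zero.
wind = correct_wind([-1.0, 0.0, 0.0], 0.0, 0.0, 0.0, [1.0, 0.0, 0.0])
print(wind)  # [0. 0. 0.]
```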

  14. Recording High Resolution 3D Lagrangian Motions In Marine Dinoflagellates using Digital Holographic Microscopic Cinematography

    NASA Astrophysics Data System (ADS)

    Sheng, J.; Malkiel, E.; Katz, J.; Place, A. R.; Belas, R.

    2006-11-01

    Detailed data on swimming behavior and locomotion for dense populations of dinoflagellates constitute a key component to understanding cell migration, cell-cell interactions, and predator-prey dynamics, all of which affect algae bloom dynamics. Due to the multi-dimensional nature of flagellated cell motions, spatial-temporal Lagrangian measurements of multiple cells in high concentration are very limited. Here we present detailed data on 3D Lagrangian motions for three marine dinoflagellates: Oxyrrhis marina, Karlodinium veneficum, and Pfiesteria piscicida, using digital holographic microscopic cinematography. The measurements are performed in a 5x5x25 mm cuvette with cell densities varying from 50,000 ˜ 90,000 cells/ml. Approximately 200-500 cells are tracked simultaneously for 12 s at 60 fps in a sample volume of 1x1x5 mm at a spatial resolution of 0.4x0.4x2 μm. We fully resolve the longitudinal flagella (˜200 nm) along with the Lagrangian trajectory of each organism. Species-dependent swimming behaviors are identified and categorized quantitatively by velocities, radii of curvature, and rotations of pitch. Statistics on locomotion, temporal & spatial scales, and diffusion rate show substantial differences between species. The scaling between turning radius and cell dimension can be explained by a distributed stokeslet model for a self-propelled body.
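    One of the trajectory statistics above, the radius of curvature, can be estimated from a sampled 3D track by finite differences. An illustrative sketch on a synthetic helical track, not the paper's data:

```python
import numpy as np

def turning_radii(track, dt):
    """Radius of curvature along a sampled 3D trajectory.

    track: (N, 3) positions at uniform spacing dt. Velocity and
    acceleration come from finite differences; R = |v|^3 / |v x a|.
    """
    v = np.gradient(track, dt, axis=0)
    a = np.gradient(v, dt, axis=0)
    cross = np.cross(v, a)
    return np.linalg.norm(v, axis=1) ** 3 / np.linalg.norm(cross, axis=1)

# A shallow helix of radius 5 in x-y: samples should recover R close to 5
t = np.linspace(0, 2 * np.pi, 200)
helix = np.stack([5 * np.cos(t), 5 * np.sin(t), 0.1 * t], axis=1)
r = turning_radii(helix, t[1] - t[0])
print(round(float(np.median(r)), 2))
```

The median is used because one-sided differences at the track ends are less accurate than the interior central differences.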

  15. Eulerian and Lagrangian methods for vortex tracking in 2D and 3D flows

    NASA Astrophysics Data System (ADS)

    Huang, Yangzi; Green, Melissa

    2014-11-01

    Coherent structures are a key component of unsteady flows in shear layers. Improvement of experimental techniques has led to larger amounts of data and requires automated procedures for vortex tracking. Many vortex criteria are Eulerian and identify the structures by an instantaneous local swirling motion in the field, indicated by closed or spiral streamlines or pathlines in a reference frame. Alternatively, a Lagrangian Coherent Structures (LCS) analysis is a Lagrangian method based on quantities calculated along fluid particle trajectories. In the current work, vortex detection is demonstrated on data from the simulation of two cases: a 2D flow with a flat plate undergoing a 45° pitch-up maneuver and a 3D wall-bounded turbulent channel flow. Vortices are visualized and tracked by their centers and boundaries using Γ1, the Q criterion, and LCS saddle points. In the 2D flow case, the saddle-point trace showed a rapid acceleration of the structure, which indicates shedding from the plate. For the channel flow, the saddle-point trace shows that the average structure convection speed exhibits a trend similar to the mean velocity profile as a function of wall-normal distance, and leads to statistical quantities of vortex dynamics. Dr. Jeff Eldredge and his research group at UCLA are gratefully acknowledged for sharing the simulation database for the current research. This work was supported by the Air Force Office of Scientific Research under AFOSR Award No. FA9550-14-1-0210.
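    Of the Eulerian criteria mentioned, the Q criterion is straightforward to evaluate on gridded velocity data. A 2D sketch on a synthetic solid-body rotation (not the cited simulations):

```python
import numpy as np

def q_criterion_2d(u, v, dx, dy):
    """Q criterion on a 2D gridded velocity field (u, v).

    Q = 0.5 * (||Omega||^2 - ||S||^2), with S and Omega the symmetric
    (strain) and antisymmetric (rotation) parts of the velocity
    gradient; Q > 0 flags regions where rotation dominates strain,
    i.e. candidate vortex cores. Arrays are indexed [row=y, col=x].
    """
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_norm2 = du_dx**2 + dv_dy**2 + 0.5 * (du_dy + dv_dx) ** 2
    o_norm2 = 0.5 * (dv_dx - du_dy) ** 2
    return 0.5 * (o_norm2 - s_norm2)

# Solid-body rotation (u, v) = (-y, x) is pure rotation: Q > 0 everywhere
y, x = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41),
                   indexing="ij")
Q = q_criterion_2d(-y, x, x[0, 1] - x[0, 0], y[1, 0] - y[0, 0])
print(Q.min() > 0)  # True
```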

  16. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.
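    The high-level process, computing 3-D coordinates of corner points from established correspondences, can be illustrated for the simplest rectified-stereo case. This is a pinhole-triangulation sketch with assumed camera parameters, not the system's actual algorithm:

```python
import numpy as np

def triangulate_rectified(xl, xr, y, f, baseline, cx, cy):
    """Recover a 3D point from a rectified stereo correspondence.

    xl, xr: column of the feature in the left/right image (pixels);
    y: shared row; f: focal length in pixels; baseline in metres;
    (cx, cy): principal point. Depth follows Z = f * B / disparity.
    """
    disparity = xl - xr
    Z = f * baseline / disparity
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])

# f = 500 px, 10 cm baseline, principal point (320, 240):
p = triangulate_rectified(xl=340.0, xr=315.0, y=240.0,
                          f=500.0, baseline=0.1, cx=320.0, cy=240.0)
print(np.round(p, 3))  # Z = 500 * 0.1 / 25 = 2 m
```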

  17. Head Tracking for 3D Audio Using a GPS-Aided MEMS IMU

    DTIC Science & Technology

    2005-03-01

    Aircraft, Directional Signals, GPS/INS Fusion, GPS/INS Integration, Head Tracking Systems, IMU (Inertial Measurement Unit), Inertial Sensors, MEMS... HEAD TRACKING FOR 3D AUDIO USING A GPS-AIDED MEMS IMU. Thesis, Jacque M. Joffrion, Captain, USAF, AFIT/GE/ENG/05-09. Department of the Air Force, Air... Presented to the Faculty of the Department

  18. LayTracks3D: A new approach for meshing general solids using medial axis transform

    DOE PAGES

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  19. Model-based risk assessment for motion effects in 3D radiotherapy of lung tumors

    NASA Astrophysics Data System (ADS)

    Werner, René; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Handels, Heinz

    2012-02-01

    Although 4D CT imaging is becoming available in an increasing number of radiotherapy facilities, 3D imaging and planning is still standard in current clinical practice. In particular for lung tumors, respiratory motion is a known source of uncertainty and should be accounted for during radiotherapy planning, which is difficult using only a 3D planning CT. In this contribution, we propose applying a statistical lung motion model to predict patients' motion patterns and to estimate dosimetric motion effects in lung tumor radiotherapy if only 3D images are available. Being generated from 4D CT images of patients with unimpaired lung motion, the model tends to overestimate lung tumor motion. It therefore promises conservative risk assessment regarding tumor dose coverage. This is exemplarily evaluated using treatment plans of lung tumor patients with different tumor motion patterns and for two treatment modalities (conventional 3D conformal radiotherapy and step-and-shoot intensity modulated radiotherapy). For the test cases, 4D CT images are available. Thus, a standard registration-based 4D dose calculation is also performed, which serves as a reference to judge the plausibility of the model-based 4D dose calculation. It will be shown that, if combined with an additional simple patient-specific breathing surrogate measurement (here: spirometry), the model-based dose calculation provides reasonable risk assessment of respiratory motion effects.

  20. On the integrability of the motion of 3D-Swinging Atwood machine and related problems

    NASA Astrophysics Data System (ADS)

    Elmandouh, A. A.

    2016-03-01

    In the present article, we study the problem of the motion of the 3D-swinging Atwood machine. A new integrable case for this problem is announced. We also point out a new integrable case describing the motion of a heavy particle on a tilted cone.

  1. 3D-printed concentrators for tracking-integrated CPV modules

    NASA Astrophysics Data System (ADS)

    Apostoleris, Harry; Leland, Julian; Chiesa, Matteo; Stefancich, Marco

    2016-09-01

    We demonstrate 3D-printed nonimaging concentrators and propose a tracking integration scheme to reduce the external tracking requirements of CPV modules. In the proposed system, internal sun tracking is achieved by rotation of the mini-concentrators inside the module by small motors. We discuss the design principles employed in the development of the system, experimentally evaluate the performance of the concentrator prototypes, and propose practical modifications that may be made to improve on-site performance of the devices.

  2. Local motion-compensated method for high-quality 3D coronary artery reconstruction

    PubMed Central

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-01-01

    The 3D reconstruction of the coronary artery from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was first reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that a high-quality 3D reconstruction could be obtained and the result was comparable to the state-of-the-art method. PMID:28018741

  3. Local motion-compensated method for high-quality 3D coronary artery reconstruction.

    PubMed

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-12-01

    The 3D reconstruction of the coronary artery from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was first reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that a high-quality 3D reconstruction could be obtained and the result was comparable to the state-of-the-art method.

  4. Analyzing Non-circular Motions in Spiral Galaxies Through 3D Spectroscopy

    NASA Astrophysics Data System (ADS)

    Fuentes-Carrera, I.; Rosado, M.; Amram, P.

    3D spectroscopic techniques allow the assessment of different types of motions in extended objects. In the case of spiral galaxies, these techniques allow us to trace not only the (almost) circular motion of the ionized gas, but also the motions arising from the presence of structures such as bars, spiral arms, and tidal features. We present an analysis of non-circular motions in spiral galaxies in interacting pairs using scanning Fabry-Perot interferometry of emission lines. We show how this analysis can help to differentiate circular from non-circular motions in the kinematical analysis of this type of galaxy.

  5. Conflicting motion information impairs multiple object tracking.

    PubMed

    St Clair, Rebecca; Huff, Markus; Seiffert, Adriane E

    2010-04-28

    People can keep track of target objects as they move among identical distractors using only spatiotemporal information. We investigated whether or not participants use motion information during the moment-to-moment tracking of objects by adding motion to the texture of moving objects. The texture either remained static or moved relative to the object's direction of motion, either in the same direction, the opposite direction, or orthogonal to each object's trajectory. Results showed that, compared to the static texture condition, tracking performance was worse when the texture moved in the opposite direction of the object and better when the texture moved in the same direction as the object. Our results support the conclusion that motion information is used during the moment-to-moment tracking of objects. Motion information may either affect a representation of position or be used to periodically predict the future location of targets.

  6. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.

  7. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  8. The effect of motion on IMRT - looking at interplay with 3D measurements

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Yan, H.; Oldham, M.; Juang, T.; Adamovics, J.; Yin, F. F.

    2013-06-01

    Clinical recommendations to address tumor motion management have been derived from studies dealing with simulations and 2D measurements. 3D measurements may provide more insight and possibly alter the current motion management guidelines. This study provides an initial look at true 3D measurements involving leaf motion deliveries by use of a motion phantom and the PRESAGE/DLOS dosimetry system. An IMRT and a VMAT plan were delivered to the phantom and analyzed by means of DVHs to determine whether the expansion of treatment volumes based on known imaging motion adequately covers the target. DVHs confirmed that for these deliveries the expansion volumes were adequate to treat the intended target, although further studies should be conducted to allow for differences in parameters that could alter the results, such as delivered dose and breathing rate.

  9. Confocal fluorometer for diffusion tracking in 3D engineered tissue constructs

    NASA Astrophysics Data System (ADS)

    Daly, D.; Zilioli, A.; Tan, N.; Buttenschoen, K.; Chikkanna, B.; Reynolds, J.; Marsden, B.; Hughes, C.

    2016-03-01

    We present results of the development of a non-contacting instrument, called fScan, based on scanning confocal fluorometry for assessing the diffusion of materials through a tissue matrix. There are many areas in healthcare diagnostics and screening where it is now widely accepted that the need for new quantitative monitoring technologies is a major pinch point in patient diagnostics and in vitro testing. With the increasing need to interpret 3D responses this commonly involves the need to track the diffusion of compounds, pharma-active species and cells through a 3D matrix of tissue. Methods are available but to support the advances that are currently only promised, this monitoring needs to be real-time, non-invasive, and economical. At the moment commercial meters tend to be invasive and usually require a sample of the medium to be removed and processed prior to testing. This methodology clearly has a number of significant disadvantages. fScan combines a fiber based optical arrangement with a compact, free space optical front end that has been integrated so that the sample's diffusion can be measured without interference. This architecture is particularly important due to the "wet" nature of the samples. fScan is designed to measure constructs located within standard well plates and a 2-D motion stage locates the required sample with respect to the measurement system. Results are presented that show how the meter has been used to evaluate movements of samples through collagen constructs in situ without disturbing their kinetic characteristics. These kinetics were little understood prior to these measurements.

  10. Tracking local motion on the beating heart

    NASA Astrophysics Data System (ADS)

    Groeger, Martin; Ortmaier, Tobias; Sepp, Wolfgang; Hirzinger, Gerd

    2002-05-01

    Local motion on the beating heart is investigated in the context of minimally invasive robotic surgery. The focus lies on the motion remaining in the mechanically stabilised field of surgery of the heart. Motion is detected by tracking natural landmarks on the heart surface in 2D video images. An appropriate motion model is presented with a discussion of its degrees of freedom and a trajectory analysis of its parameters.
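    Tracking natural landmarks in 2D video, as described, is often done by correlation matching of a small template around the landmark's previous position. A translation-only sketch on a synthetic image (not the authors' method, which uses a richer motion model):

```python
import numpy as np

def track_landmark(frame, template, center, search=5):
    """Locate a landmark template in a new frame by normalized
    cross-correlation over a small search window around `center`.

    frame: 2D grayscale image; template: small 2D patch; center:
    (row, col) estimate from the previous frame.
    """
    th, tw = template.shape
    t = template - template.mean()
    best, best_score = center, -np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r0 = center[0] + dr - th // 2
            c0 = center[1] + dc - tw // 2
            patch = frame[r0:r0 + th, c0:c0 + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom > 0 else -1.0
            if score > best_score:
                best, best_score = (center[0] + dr, center[1] + dc), score
    return best

rng = np.random.default_rng(0)
frame = rng.random((60, 60))
tmpl = frame[28:35, 31:38].copy()      # 7x7 patch centred at (31, 34)
print(track_landmark(frame, tmpl, center=(30, 32)))  # (31, 34)
```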

  11. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the beginning of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion-sensing technology on the Nintendo Wii [1]. In particular, video-based human computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of releasing players from the intractable game controller. Moreover, for communication between humans and computers, video-based HCI is crucial since it is intuitive, easy to use, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge. The level of accuracy is heavily dependent on each subject's characteristics and environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a human sub-body part and each row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, 3D human motion-capture data are not pixel values, but are closer to the human level of semantics.

  12. 3D tracking of mating events in wild swarms of the malaria mosquito Anopheles gambiae.

    PubMed

    Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Yaro, Alpha S; Dao, Adama; Traoré, Sekou F; Ribeiro, José M; Lehmann, Tovi; Paley, Derek A

    2011-01-01

    We describe an automated tracking system that allows us to reconstruct the 3D kinematics of individual mosquitoes in swarms of Anopheles gambiae. The inputs to the tracking system are video streams recorded from a stereo camera system. The tracker uses a two-pass procedure to automatically localize and track mosquitoes within the swarm. A human-in-the-loop step verifies the estimates and connects broken tracks. The tracker performance is illustrated using footage of mating events filmed in Mali in August 2010.

  13. Recursive estimation of 3D motion and surface structure from local affine flow parameters.

    PubMed

    Calway, Andrew

    2005-04-01

    A recursive structure from motion algorithm based on optical flow measurements taken from an image sequence is described. It provides estimates of surface normals in addition to 3D motion and depth. The measurements are affine motion parameters which approximate the local flow fields associated with near-planar surface patches in the scene. These are integrated over time to give estimates of the 3D parameters using an extended Kalman filter. This also estimates the camera focal length and, so, the 3D estimates are metric. The use of parametric measurements means that the algorithm is computationally less demanding than previous optical flow approaches and the recursive filter builds in a degree of noise robustness. Results of experiments on synthetic and real image sequences demonstrate that the algorithm performs well.

  14. High-throughput 3D tracking of bacteria on a standard phase contrast microscope

    NASA Astrophysics Data System (ADS)

    Taute, K. M.; Gude, S.; Tans, S. J.; Shimizu, T. S.

    2015-11-01

    Bacteria employ diverse motility patterns in traversing complex three-dimensional (3D) natural habitats. 2D microscopy misses crucial features of 3D behaviour, but the applicability of existing 3D tracking techniques is constrained by their performance or ease of use. Here we present a simple, broadly applicable, high-throughput 3D bacterial tracking method for use in standard phase contrast microscopy. Bacteria are localized at micron-scale resolution over a range of 350 × 300 × 200 μm by maximizing image cross-correlations between their observed diffraction patterns and a reference library. We demonstrate the applicability of our technique to a range of bacterial species and exploit its high throughput to expose hidden contributions of bacterial individuality to population-level variability in motile behaviour. The simplicity of this powerful new tool for bacterial motility research renders 3D tracking accessible to a wider community and paves the way for investigations of bacterial motility in complex 3D environments.
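    The core localization step above, maximizing image cross-correlation against a z-indexed reference library, can be sketched with a toy library of synthetic ring patterns (illustrative only; the real library is built from observed diffraction patterns of reference objects):

```python
import numpy as np

def estimate_z(observed, library, z_values):
    """Assign a depth by maximizing the normalized cross-correlation
    between an observed diffraction pattern and a z-indexed library.

    observed: 2D patch; library: list of 2D reference patterns, one
    per entry in z_values (microns).
    """
    o = observed - observed.mean()
    scores = []
    for ref in library:
        r = ref - ref.mean()
        scores.append((o * r).sum() /
                      np.sqrt((o * o).sum() * (r * r).sum()))
    return z_values[int(np.argmax(scores))]

# Toy library: ring patterns whose radius grows with defocus depth z
yy, xx = np.mgrid[-15:16, -15:16]
rad = np.hypot(yy, xx)
zs = [0.0, 10.0, 20.0, 30.0]
lib = [np.exp(-(rad - 3 - 0.2 * z) ** 2) for z in zs]
obs = lib[2] + 0.01  # observed pattern matches the z = 20 um entry
print(estimate_z(obs, lib, zs))  # 20.0
```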

  15. Optimal Local Searching for Fast and Robust Textureless 3D Object Tracking in Highly Cluttered Backgrounds.

    PubMed

    Seo, Byung-Kuk; Park, Jong-Il; Hinterstoisser, Stefan; Ilic, Slobodan

    2013-06-13

    Edge-based tracking is a fast and plausible approach for textureless 3D object tracking, but its robustness is still very challenging in highly cluttered backgrounds due to numerous local minima. To overcome this problem, we propose a novel method for fast and robust textureless 3D object tracking in highly cluttered backgrounds. The proposed method is based on optimal local searching of 3D-2D correspondences between a known 3D object model and 2D scene edges in an image with heavy background clutter. In our searching scheme, searching regions are partitioned into three levels (interior, contour, and exterior) with respect to the previous object region, and confident searching directions are determined by evaluating candidates of correspondences on their region levels; thus, the correspondences are searched among likely candidates in only the confident directions instead of searching through all candidates. To ensure the confident searching direction, we also adopt the region appearance, which is efficiently modeled on a newly defined local space (called a searching bundle). Experimental results and performance evaluations demonstrate that our method fully supports fast and robust textureless 3D object tracking even in highly cluttered backgrounds.

  16. Optimal local searching for fast and robust textureless 3D object tracking in highly cluttered backgrounds.

    PubMed

    Seo, Byung-Kuk; Park, Hanhoon; Park, Jong-Il; Hinterstoisser, Stefan; Ilic, Slobodan

    2014-01-01

Edge-based tracking is a fast and plausible approach for textureless 3D object tracking, but making it robust in highly cluttered backgrounds remains very challenging due to numerous local minima. To overcome this problem, we propose a novel method for fast and robust textureless 3D object tracking in highly cluttered backgrounds. The proposed method is based on optimal local searching of 3D-2D correspondences between a known 3D object model and 2D scene edges in an image with heavy background clutter. In our searching scheme, searching regions are partitioned into three levels (interior, contour, and exterior) with respect to the previous object region, and confident searching directions are determined by evaluating candidate correspondences at their region levels; thus, the correspondences are searched among likely candidates in only the confident directions instead of through all candidates. To ensure confident searching directions, we also adopt the region appearance, which is efficiently modeled on a newly defined local space (called a searching bundle). Experimental results and performance evaluations demonstrate that our method fully supports fast and robust textureless 3D object tracking even in highly cluttered backgrounds.

  17. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to control intuitively the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.

  18. Improving segmentation of 3D touching cell nuclei using flow tracking on surface meshes.

    PubMed

    Li, Gang; Guo, Lei

    2012-01-01

    Automatic segmentation of touching cell nuclei in 3D microscopy images is of great importance in bioimage informatics and computational biology. This paper presents a novel method for improving 3D touching cell nuclei segmentation. Given binary touching nuclei by the method in Li et al. (2007), our method herein consists of several steps: surface mesh reconstruction and curvature information estimation; direction field diffusion on surface meshes; flow tracking on surface meshes; and projection of surface mesh segmentation to volumetric images. The method is validated on both synthesised and real 3D touching cell nuclei images, demonstrating its validity and effectiveness.

  19. Ultrasonography-based motion tracking for MRgFUS

    NASA Astrophysics Data System (ADS)

    Jenne, Jürgen W.; Tretbar, Steffen H.; Hewener, Holger J.; Speicher, Daniel; Barthscherer, Tobias; Sarti, Cristina; Bongers, André; Schwaab, Julia; Günther, Matthias

    2017-03-01

Non-invasive treatment of moving organs like the liver and kidney with high intensity focused ultrasound (HIFU/FUS) is challenging. Highly precise HIFU ablation requires real-time knowledge of the tumor position with mm precision. The aim of this work was to build a magnetic resonance imaging compatible tracking device using diagnostic ultrasound imaging for MR-guided FUS (MRgFUS). The hardware of the developed US tracking system comprises an ultrasound beamformer with a screen placed directly in front of the MR magnet, a linear probe, and a special ultrasound tracking probe. The tracking probe (a 2x64-element phased array) can acquire two perpendicularly oriented US image planes for quasi-3D tracking. The US data are sent to a workstation in the console room of the MRI scanner, which controls the whole tracking device. The tracking software (Sonoplan II) analyzes the ultrasound image stream and calculates the current position of pre-defined contours. Besides the 2D translation, the tracking algorithm analyzes the rotation as well as the 2D scaling of the contour. The developed US tracking system proved to be MR-compatible in 1.5 and 3 T MR systems and enabled simultaneous MR and US imaging and motion tracking. In the next step, the tracking system will be combined with an MRgFUS unit.

  20. Detailed Evaluation of Five 3D Speckle Tracking Algorithms Using Synthetic Echocardiographic Recordings.

    PubMed

Alessandrini, Martino; Heyde, Brecht; Queiros, Sandro; Cygan, Szymon; Zontak, Maria; Somphone, Oudom; Bernard, Olivier; Sermesant, Maxime; Delingette, Herve; Barbosa, Daniel; De Craene, Mathieu; O'Donnell, Matthew; Dhooge, Jan

    2016-08-01

A plethora of techniques for cardiac deformation imaging with 3D ultrasound, typically referred to as 3D speckle tracking techniques, are available from academia and industry. Although the benefits of individual methods over alternatives have been reported in separate publications, the intrinsic differences in the data and definitions used make it hard to compare the relative performance of different solutions. To address this issue, we recently proposed a framework to simulate realistic 3D echocardiographic recordings and used it to generate a common set of ground-truth data for 3D speckle tracking algorithms, which was made available online. The aim of this study was therefore to use the newly developed database to contrast non-commercial speckle tracking solutions from research groups with leading expertise in the field. The five techniques involved cover the most representative families of existing approaches, namely block-matching, radio-frequency tracking, optical flow, and elastic image registration. The techniques were contrasted in terms of tracking and strain accuracy. The feasibility of using the obtained strain measurements to diagnose pathology was also tested for ischemia and dyssynchrony.

  1. Depth representation of moving 3-D objects in apparent-motion path.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2008-01-01

Apparent motion is perceived when two objects are presented alternately at different positions. The internal representations of apparently moving objects are formed in an apparent-motion path which lacks physical inputs. We investigated the depth information contained in the representation of 3-D moving objects in an apparent-motion path. We examined how probe objects, briefly placed in the motion path, affected the perceived smoothness of apparent motion. The probe objects were either 3-D objects, defined by shading or by disparity (convex/concave), or 2-D (flat) objects, while the moving objects were convex/concave objects. We found that flat probe objects induced significantly smoother motion perception than concave probe objects only in the case of convex moving objects. However, convex probe objects did not lead to smoother motion as the flat objects did, although the convex probe objects contained the same depth information as the moving objects. Moreover, the difference between probe objects was reduced when the moving objects were concave. These counterintuitive results were consistent across conditions for both depth cues. The results suggest that the internal representations contain incomplete depth information that is intermediate between that of 2-D and 3-D objects.

  2. 3-D geometry calibration and markerless electromagnetic tracking with a mobile C-arm

    NASA Astrophysics Data System (ADS)

    Cheryauka, Arvi; Barrett, Johnny; Wang, Zhonghua; Litvin, Andrew; Hamadeh, Ali; Beaudet, Daniel

    2007-03-01

    The design of mobile X-ray C-arm equipment with image tomography and surgical guidance capabilities involves the retrieval of repeatable gantry positioning in three-dimensional space. Geometry misrepresentations can cause degradation of the reconstruction results with the appearance of blurred edges, image artifacts, and even false structures. It may also amplify surgical instrument tracking errors leading to improper implant placement. In our prior publications we have proposed a C-arm 3D positioner calibration method comprising separate intrinsic and extrinsic geometry calibration steps. Following this approach, in the present paper, we extend the intrinsic geometry calibration of C-gantry beyond angular positions in the orbital plane into angular positions on a unit sphere of isocentric rotation. Our method makes deployment of markerless interventional tool guidance with use of high-resolution fluoro images and electromagnetic tracking feasible at any angular position of the tube-detector assembly. Variations of the intrinsic parameters associated with C-arm motion are measured off-line as functions of orbital and lateral angles. The proposed calibration procedure provides better accuracy, and prevents unnecessary workflow steps for surgical navigation applications. With a slight modification, the Misalignment phantom, a tool for intrinsic geometry calibration, is also utilized to obtain an accurate 'image-to-sensor' mapping. We show simulation results, image quality and navigation accuracy estimates, and feasibility data acquired with the prototype system. The experimental results show the potential of high-resolution CT imaging (voxel size below 0.5 mm) and confident navigation in an interventional surgery setting with a mobile C-arm.

  3. Note: Time-gated 3D single quantum dot tracking with simultaneous spinning disk imaging

    SciTech Connect

    DeVore, M. S.; Stich, D. G.; Keller, A. M.; Phipps, M. E.; Hollingsworth, J. A.; Goodwin, P. M.; Werner, J. H.; Cleyrat, C.; Lidke, D. S.; Wilson, B. S.

    2015-12-15

    We describe recent upgrades to a 3D tracking microscope to include simultaneous Nipkow spinning disk imaging and time-gated single-particle tracking (SPT). Simultaneous 3D molecular tracking and spinning disk imaging enable the visualization of cellular structures and proteins around a given fluorescently labeled target molecule. The addition of photon time-gating to the SPT hardware improves signal to noise by discriminating against Raman scattering and short-lived fluorescence. In contrast to camera-based SPT, single-photon arrival times are recorded, enabling time-resolved spectroscopy (e.g., measurement of fluorescence lifetimes and photon correlations) to be performed during single molecule/particle tracking experiments.

  4. Design and Performance Evaluation on Ultra-Wideband Time-Of-Arrival 3D Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Dusl, John

    2012-01-01

A three-dimensional (3D) Ultra-Wideband (UWB) Time-of-Arrival (TOA) tracking system has been studied at NASA Johnson Space Center (JSC) to provide tracking capability inside the International Space Station (ISS) modules for various applications. One application is to locate and report the places where crew members have experienced possibly elevated carbon-dioxide levels and felt unwell. Accurately locating those places in a multipath-intensive environment like the ISS modules requires a robust real-time location system (RTLS) that can provide the required accuracy and update rate. A 3D UWB TOA tracking system with two-way ranging has been proposed and studied. The designed system will be tested in the Wireless Habitat Testbed, which simulates the ISS module environment. In this presentation, we discuss the 3D TOA tracking algorithm and the performance evaluation based on different tracking baseline configurations. The simulation results show that two configurations of the tracking baseline are feasible. With a 100-picosecond standard deviation (STD) of the TOA estimates, an average tracking error of 0.2392 feet (about 7 centimeters) can be achieved for the "Twisted Rectangle" configuration, while an average tracking error of 0.9183 feet (about 28 centimeters) can be achieved for the "Slightly-Twisted Top Rectangle" configuration. The tracking accuracy can be further improved by improving the STD of the TOA estimates: with a 10-picosecond STD, an average tracking error of 0.0239 feet (less than 1 centimeter) can be achieved for the "Twisted Rectangle" configuration.
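The two-way-ranging TOA fix described above reduces to multilateration: each range to a known anchor constrains the position to a sphere, and subtracting one range equation from the others cancels the quadratic term and leaves a linear system. A hedged numpy sketch (the anchor coordinates are illustrative, not the study's "Twisted Rectangle" baseline):

```python
import numpy as np

def solve_toa(anchors, ranges):
    """Least-squares 3D position from ranges to >= 4 known anchors.
    Each range gives |x - a_i|^2 = r_i^2; subtracting the first
    equation from the rest cancels |x|^2 and leaves A x = b."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Toy check with four non-coplanar anchors.
anchors = [[0.0, 0.0, 0.0], [3.0, 0.0, 0.2], [3.0, 3.0, 0.0], [0.0, 3.0, 0.2]]
target = np.array([1.0, 2.0, 1.0])
ranges = [float(np.linalg.norm(target - np.array(a))) for a in anchors]
pos = solve_toa(anchors, ranges)
print(np.round(pos, 6))  # recovers [1. 2. 1.] with noise-free ranges
```

With noisy TOA estimates the same least-squares solve averages out the errors, which is why the abstract's accuracy scales with the STD of the TOA estimates.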

  5. A Gaussian process guided particle filter for tracking 3D human pose in video.

    PubMed

    Sedai, Suman; Bennamoun, Mohammed; Huynh, Du Q

    2013-11-01

In this paper, we propose a hybrid method that combines Gaussian process learning, a particle filter, and annealing to track the 3D pose of a human subject in video sequences. Our approach, which we refer to as the annealed Gaussian process guided particle filter, comprises two steps. In the training step, we use a supervised learning method to train a Gaussian process regressor that takes a silhouette descriptor as input and produces multiple output poses modeled by a mixture of Gaussian distributions. In the tracking step, the output pose distributions from the Gaussian process regression are combined with the annealed particle filter to track the 3D pose in each frame of the video sequence. Our experiments show that the proposed method does not require initialization and does not lose track of the pose. We compare our approach with a standard annealed particle filter using the HumanEva-I dataset and with other state-of-the-art approaches using the HumanEva-II dataset. The evaluation results show that our approach can successfully track the 3D human pose over long video sequences and gives more accurate pose tracking results than the annealed particle filter.
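The annealed particle filter at the core of this tracker builds on the standard bootstrap predict/update/resample cycle. A minimal one-dimensional sketch of that generic cycle only; the paper's annealing layers and GP-based pose proposals are omitted, and all names and noise parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, observation, motion_std, obs_std):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: reweight by a Gaussian observation likelihood.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Resample: draw particles proportionally to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a scalar "pose" that drifts at a constant rate.
true_pose = 0.0
particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
for _ in range(50):
    true_pose += 0.1
    z = true_pose + rng.normal(0.0, 0.2)   # noisy observation
    particles, weights = particle_filter_step(particles, weights, z, 0.15, 0.2)
estimate = particles.mean()
print(f"estimate={estimate:.2f}, truth={true_pose:.2f}")
```

In the paper's setting the state is a full-body pose vector and the likelihood comes from comparing silhouettes, but the cycle is structurally the same.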

  6. Three-dimensional motion tracking by Kalman filtering

    NASA Astrophysics Data System (ADS)

    Gao, Jean; Kosaka, Akio; Kak, Avinash C.

    2000-10-01

In this paper, a 3D semantic object motion tracking method based on Kalman filtering is proposed. First, we use a specially designed Color Image Segmentation Editor (CISE) to devise shapes that more accurately describe the object to be tracked. CISE is an integration of edge and region detection, based on edge-linking, split-and-merge, and energy minimization for active contour detection. An ROI is further segmented into single-motion blobs by considering the constancy of the motion parameters in each blob. Over short time intervals, each blob can be tracked separately and, over longer times, the blobs can be allowed to fragment and coalesce into new blobs as the motion evolves. The tracking of each blob is based on a Kalman filter derived from linearization of a constraint equation satisfied by the pinhole model of a camera. The Kalman filter allows the tracker to project the uncertainties associated with a blob center (or with the coordinates of any other features) into the next frame. This projected uncertainty region can then be searched for the pixels belonging to the blob. Future work includes investigation of the effects of illumination changes and simultaneous tracking of multiple targets.
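The core of this blob tracker, projecting a blob center and its uncertainty into the next frame, is the standard Kalman predict/update recursion. A minimal constant-velocity sketch in image coordinates (a simplification; the paper's filter is derived from a linearized pinhole-camera constraint, and all matrices here are illustrative):

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Project state and covariance into the next frame; the predicted
    covariance P defines the search region for the blob."""
    return F @ x, F @ P @ F.T + Q

def kalman_update(x, P, z, H, R):
    """Correct the prediction with the measured blob center z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model for a blob center (px, py, vx, vy), dt = 1 frame.
F = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)          # process noise
R = 1.0 * np.eye(2)           # measurement noise
x, P = np.zeros(4), 100.0 * np.eye(4)

for t in range(1, 30):
    x, P = kalman_predict(x, P, F, Q)
    z = np.array([2.0 * t, 1.0 * t])   # blob moves 2 px right, 1 px down
    x, P = kalman_update(x, P, z, H, R)

print(np.round(x[2:], 2))  # estimated velocity, approximately [2. 1.]
```

The predicted covariance after `kalman_predict` is exactly the "projected uncertainty region" the abstract describes searching for blob pixels.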

  7. The BaBar Level 1 Drift-Chamber Trigger Upgrade With 3D Tracking

    SciTech Connect

    Chai, X.D.; /Iowa U.

    2005-11-29

    At BABAR, the Level 1 Drift Chamber trigger is being upgraded to reduce increasing background rates while the PEP-II luminosity keeps improving. This upgrade uses the drift time information and stereo wires in the drift chamber to perform a 3D track reconstruction that effectively rejects background events spread out along the beam line.

  8. A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics.

    PubMed

    Yoshino, Masanori; Saito, Toki; Kin, Taichi; Nakagawa, Daichi; Nakatomi, Hirofumi; Oyama, Hiroshi; Saito, Nobuhito

    2015-01-01

Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread because existing commercial operative navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG. This article presents the technical details of our microscopic optically tracking navigation system. Our navigation system consists of three components: the operative microscope, registration, and the image display system. An optical tracker was attached to the microscope to monitor its position and attitude in real time; point-pair registration was used to register the operating room coordinate system and the image coordinate system; and the image display system showed the 3D CG image in the field of view of the microscope. Ten neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model. Accuracy of our system was compared with that of a commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. The target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operation position and the surrounding structures. Systems like this may reduce intraoperative complications.

  9. Moving Human Path Tracking Based on Video Surveillance in 3d Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic, and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose we integrated video surveillance data with a 3D indoor model of the building and developed a single-human moving-path tracking method. We process the surveillance videos to detect single-human moving traces; then we match the depth information of the 3D scenes to the constructed 3D indoor network model and define the human traces in the 3D indoor space. Finally, the single-human traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. Experiments with a single person have verified the effectiveness and robustness of the method.

  10. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

Three-dimensional (3D) vision has now become a widely familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods of displaying 3D images; we focused on one based on ray reproduction. Because this method displays a different viewpoint image depending on the viewpoint, it needs many viewpoint images to achieve full parallax. We proposed to reduce wasteful rays by limiting the projector's rays to the area around the viewer using a spinning mirror, increasing the effectiveness of the display device to achieve a full-parallax 3D display. Our method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulations of the proposed method, we confirmed the scanning range and the locus of the movement of the rays in the horizontal direction, as well as the switching of viewpoints and the convergence performance of the rays in the vertical direction. We thus confirmed that a full-parallax display can be realized.

  11. Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties.

    PubMed

    Ieng, Sio-Hoi; Carneiro, João; Benosman, Ryad B

    2016-01-01

State-of-the-art scene flow estimation techniques are based on projections of the 3D motion on the image, using luminance (sampled at the frame rate of the cameras) as the principal source of information. We introduce in this paper a purely time-based approach to estimate the flow from 3D point clouds primarily output by neuromorphic event-based stereo camera rigs, or by any existing 3D depth sensor even if it does not provide or use luminance. This method formulates the scene flow problem by applying a local piecewise regularization of the scene flow. The formulation provides a unifying framework to estimate scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time using a decomposition into its subspaces. This method naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point-cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented.
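The 4D space-time subspace idea can be illustrated in its simplest form: events generated by a point moving at constant velocity lie on a line in (x, y, z, t), so regressing position on time over a local window recovers the 3D flow vector. A toy sketch on synthetic data, not the paper's event-camera pipeline:

```python
import numpy as np

# Synthetic events from a point translating at constant 3D velocity,
# sampled asynchronously in time (a toy stand-in for event-based data).
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 1.0, 200))
v_true = np.array([1.0, -0.5, 0.25])
points = np.array([0.2, 0.1, 0.0]) + t[:, None] * v_true
points += rng.normal(0.0, 0.005, points.shape)   # small sensor noise

# In 4D space-time the trajectory spans a 1D subspace; regressing
# (x, y, z) on t recovers the 3D flow vector as the slope.
A = np.column_stack([t, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, points, rcond=None)
v_est = coef[0]
print(np.round(v_est, 2))  # close to [ 1.  -0.5  0.25]
```

The paper's local piecewise regularization generalizes this: the fit is done per neighbourhood, so deforming objects are handled as collections of locally linear motions.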

  12. Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties

    PubMed Central

    Ieng, Sio-Hoi; Carneiro, João; Benosman, Ryad B.

    2017-01-01

State-of-the-art scene flow estimation techniques are based on projections of the 3D motion on the image, using luminance (sampled at the frame rate of the cameras) as the principal source of information. We introduce in this paper a purely time-based approach to estimate the flow from 3D point clouds primarily output by neuromorphic event-based stereo camera rigs, or by any existing 3D depth sensor even if it does not provide or use luminance. This method formulates the scene flow problem by applying a local piecewise regularization of the scene flow. The formulation provides a unifying framework to estimate scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time using a decomposition into its subspaces. This method naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point-cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented. PMID:28220057

  13. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography.

    PubMed

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J; French, Paul M W; McGinty, James

    2015-04-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound.

  14. Motion corrected LV quantification based on 3D modelling for improved functional assessment in cardiac MRI

    NASA Astrophysics Data System (ADS)

    Liew, Y. M.; McLaughlin, R. A.; Chan, B. T.; Aziz, Y. F. Abdul; Chee, K. H.; Ung, N. M.; Tan, L. K.; Lai, K. W.; Ng, S.; Lim, E.

    2015-04-01

Cine MRI is a clinical reference standard for the quantitative assessment of cardiac function, but reproducibility is confounded by motion artefacts. We explore the feasibility of a motion-corrected 3D left ventricle (LV) quantification method, incorporating multislice image registration into the 3D model reconstruction, to improve the reproducibility of 3D LV functional quantification. Multi-breath-hold short-axis and radial long-axis images were acquired from 10 patients and 10 healthy subjects. The proposed framework reduced misalignment between slices to subpixel accuracy (2.88 to 1.21 mm) and improved interstudy reproducibility for five important clinical functional measures (end-diastolic volume, end-systolic volume, ejection fraction, myocardial mass and 3D sphericity index), as reflected in a 21-66% reduction in the sample size required to detect statistically significant cardiac changes. Our investigation of the optimum registration parameters, including both cardiac time frames and the number of long-axis (LA) slices, suggested that a single time frame is adequate for motion correction, whereas integrating more LA slices can improve registration and model reconstruction accuracy for improved functional quantification, especially on datasets with severe motion artefacts.

  15. Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals.

    PubMed

    Joo, Sung Jun; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C

    2016-10-19

    Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no "cross-cue" adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways.

  16. Surveillance, detection, and 3D infrared tracking of bullets, rockets, mortars, and artillery

    NASA Astrophysics Data System (ADS)

    Leslie, Daniel H.; Hyman, Howard; Moore, Fritz; Squire, Mark D.

    2001-09-01

    We describe test results using the FIRST (Fast InfraRed Sniper Tracker) to detect, track, and range to bullets in flight for determining the location of the bullet launch point. The technology developed for the FIRST system can be used to provide detection and accurate 3D track data for other small threat objects including rockets, mortars, and artillery in addition to bullets. We discuss the radiometry and detection range for these objects, and discuss the trade-offs involved in design of the very fast optical system for acquisition, tracking, and ranging of these targets.

  17. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
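The subject-specific motion model described above can be caricatured in a few lines: run PCA on the pre-beam deformation vector fields, then, during treatment, fit the component weights to the partial (2D-slice) observation and reconstruct the full 3D field. A toy sketch with synthetic flattened "fields"; the dimensions, the single latent component, and the slice indices are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Pre-beam phase: 10 respiratory phases of a flattened deformation
# vector field (here a toy 60-dim field driven by one latent signal).
phases = np.sin(np.linspace(0, 2 * np.pi, 10))
basis_true = rng.normal(size=60)
dvfs = np.outer(phases, basis_true)              # (10 phases, 60 dims)

# PCA of the training DVFs: mean plus dominant component(s).
mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
components = Vt[:1]                              # keep the 1st PC

# During treatment: only a 2D slice (a subset of the field) is observed.
slice_idx = np.arange(15)                        # hypothetical slice dims
new_dvf_true = 0.7 * basis_true                  # unseen respiratory state
observed = new_dvf_true[slice_idx]

# Fit PC coefficients on the observed subset, then reconstruct full field.
A = components[:, slice_idx].T
coef, *_ = np.linalg.lstsq(A, observed - mean[slice_idx], rcond=None)
reconstructed = mean + coef @ components

err = np.abs(reconstructed - new_dvf_true).max()
print(err < 1e-6)  # the 1-component model recovers the full field
```

The real pipeline fits the coefficients from registered 2D cine-MR slices rather than raw field samples, but the estimate-few-coefficients-then-expand structure is the same.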

  18. Particle Filters and Occlusion Handling for Rigid 2D-3D Pose Tracking

    PubMed Central

    Lee, Jehoon; Sandhu, Romeil; Tannenbaum, Allen

    2013-01-01

In this paper, we address the problem of 2D-3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and to estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and then extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting exact region properties of the object, which play a pivotal role for region-based tracking methods in maintaining the track. To this end, a dynamical choice of how to invoke the objective functional is performed online, based on the degree of dependency between predictions and measurements of the system, in accordance with the degree of occlusion and the variation of the object's pose. This scheme provides the robustness to deal with occlusions by an obstacle with different statistical properties from those of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios. PMID:24058277

  19. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

In order to navigate fully using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. First, we proposed a target detection strategy over a sequence of several images from the 3D model to initialize the tracking. The overall purpose of this approach is to robustly match each image with the model views of the target. Thus we designed a line segment detection and matching method based on multi-scale space technology. Experiments on real images showed that our method is highly robust under various image changes. Second, we proposed a method based on a 3D particle filter (PF) coupled with M-estimation to track and estimate the pose of the target efficiently. In the proposed approach, a similarity observation model was designed according to a new distance function of line segments. Then, based on the tracking results of the PF, the pose was optimized using M-estimation. Experiments indicated that the proposed method can effectively track and accurately estimate the pose of a freely moving target in an unconstrained environment.

  20. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.

  1. Eye Tracking to Explore the Impacts of Photorealistic 3d Representations in Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

Despite the now-ubiquitous two-dimensional (2D) map, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload than traditional symbolic 2D maps remains unknown. This study explores whether photorealistic 3D representations can facilitate map reading and navigation in digital environments, using a lab-based eye tracking approach. We show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective and less efficient, and required a higher cognitive workload, than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. These empirical results can help improve the usability of pedestrian navigation maps in future designs.

  2. Effects of 3D random correlated velocity perturbations on predicted ground motions

    USGS Publications Warehouse

    Hartzell, S.; Harmsen, S.; Frankel, A.

    2010-01-01

    Three-dimensional, finite-difference simulations of a realistic finite-fault rupture on the southern Hayward fault are used to evaluate the effects of random, correlated velocity perturbations on predicted ground motions. Velocity perturbations are added to a three-dimensional (3D) regional seismic velocity model of the San Francisco Bay Area using a 3D von Karman random medium. Velocity correlation lengths of 5 and 10 km and standard deviations in the velocity of 5% and 10% are considered. The results show that significant deviations in predicted ground velocities are seen in the calculated frequency range (≤1 Hz) for standard deviations in velocity of 5% to 10%. These results have implications for the practical limits on the accuracy of scenario ground-motion calculations and on retrieval of source parameters using higher-frequency, strong-motion data.
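A von Karman random medium of the kind described above can be sketched by spectral filtering of white noise: white noise is shaped in the Fourier domain by the square root of a von Karman power spectrum with correlation length a and Hurst exponent ν, then rescaled to the target standard deviation. The 2D grid, ν = 0.5, and the normalization below are illustrative assumptions, not the study's actual code (which perturbs a 3D regional velocity model).

```python
import numpy as np

def von_karman_field(n, dx, corr_len, sigma, nu=0.5, seed=0):
    """Generate a 2D random field with an (approximate) von Karman spectrum.

    n        : grid size (n x n)
    dx       : grid spacing [km]
    corr_len : correlation length a [km]
    sigma    : target standard deviation (e.g. 0.05 for 5% perturbations)
    nu       : Hurst exponent (0.5 gives an exponential-like medium)
    """
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    # von Karman power spectral density up to a constant:
    # (1 + k^2 a^2)^-(nu + E/2), with E = 2 dimensions here.
    psd = (1.0 + k2 * corr_len**2) ** (-(nu + 1.0))
    noise = rng.normal(size=(n, n))
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real
    # Rescale to zero mean and the requested standard deviation.
    field = (field - field.mean()) / field.std() * sigma
    return field

# 5% velocity perturbations with a 10 km correlation length on a 256 km grid.
pert = von_karman_field(n=256, dx=1.0, corr_len=10.0, sigma=0.05)
```

The resulting field would be added multiplicatively to a background velocity model, e.g. `v_perturbed = v0 * (1 + pert)`, matching the 5-10% standard deviations considered in the study.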

  3. Method for dose-reduced 3D catheter tracking on a scanning-beam digital x-ray system using dynamic electronic collimation

    PubMed Central

    Dunkerley, David A. P.; Funk, Tobias; Speidel, Michael A.

    2016-01-01

    Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3D catheter tracking. This work proposes a method of dose-reduced 3D tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. Positions in the 2D focal spot array are selectively activated to create a region-of-interest (ROI) x-ray field around the tracked catheter. The ROI position is updated for each frame based on a motion vector calculated from the two most recent 3D tracking results. The technique was evaluated with SBDX data acquired as a catheter tip inside a chest phantom was pulled along a 3D trajectory. DEC scans were retrospectively generated from the detector images stored for each focal spot position. DEC imaging of a catheter tip in a volume measuring 11.4 cm across at isocenter required 340 active focal spots per frame, versus 4473 spots in full-FOV mode. The dose-area-product (DAP) and peak skin dose (PSD) for DEC versus full field-of-view (FOV) scanning were calculated using an SBDX Monte Carlo simulation code. DAP was reduced to 7.4% to 8.4% of the full-FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full-FOV value. The root-mean-squared-deviation between DEC-based 3D tracking coordinates and full-FOV 3D tracking coordinates was less than 0.1 mm. The 3D distance between the tracked tip and the sheath centerline averaged 0.75 mm. Dynamic electronic collimation can reduce dose with minimal change in tracking performance. PMID:27375314

  4. Mobile Biplane X-Ray Imaging System for Measuring 3D Dynamic Joint Motion During Overground Gait.

    PubMed

    Guan, Shanyuanye; Gray, Hans A; Keynejad, Farzad; Pandy, Marcus G

    2016-01-01

    Most X-ray fluoroscopy systems are stationary and impose restrictions on the measurement of dynamic joint motion; for example, knee-joint kinematics during gait is usually measured with the subject ambulating on a treadmill. We developed a computer-controlled, mobile, biplane, X-ray fluoroscopy system to track human body movement for high-speed imaging of 3D joint motion during overground gait. A robotic gantry mechanism translates the two X-ray units alongside the subject, tracking and imaging the joint of interest as the subject moves. The main aim of the present study was to determine the accuracy with which the mobile imaging system measures 3D knee-joint kinematics during walking. In vitro experiments were performed to measure the relative positions of the tibia and femur in an intact human cadaver knee and of the tibial and femoral components of a total knee arthroplasty (TKA) implant during simulated overground gait. Accuracy was determined by calculating mean, standard deviation and root-mean-squared errors from differences between kinematic measurements obtained using volumetric models of the bones and TKA components and reference measurements obtained from metal beads embedded in the bones. Measurement accuracy was enhanced by the ability to track and image the joint concurrently. Maximum root-mean-squared errors were 0.33 mm and 0.65° for translations and rotations of the TKA knee and 0.78 mm and 0.77° for translations and rotations of the intact knee, which are comparable to results reported for treadmill walking using stationary biplane systems. System capability for in vivo joint motion measurement was also demonstrated for overground gait.

  5. Alignment of 3D Building Models and TIR Video Sequences with Line Tracking

    NASA Astrophysics Data System (ADS)

    Iwaszczuk, D.; Stilla, U.

    2014-11-01

Thermal infrared imagery of urban areas has become of interest for urban climate investigations and thermal building inspections. Acquiring the data from a flying platform such as a UAV or a helicopter, and combining the thermal data with 3D building models via texturing, provides a valuable basis for large-area building inspections. However, such thermal textures are useful for further analysis only if they are extracted geometrically correctly. This requires a good coregistration between the 3D building models and the thermal images, which cannot be achieved by direct georeferencing. Hence, this paper presents a methodology for aligning 3D building models with oblique TIR image sequences taken from a flying platform. In a single image, line correspondences between model edges and image line segments are found using an accumulator approach, and based on these correspondences an optimal camera pose is calculated to ensure the best match between the projected model and the image structures. Through the sequence, the linear features are tracked based on visibility prediction. The results of the proposed methodology are presented for a TIR image sequence taken from a helicopter over a densely built-up urban area. The novelty of this work lies in employing the uncertainty of the 3D building models and in an innovative tracking strategy based on a priori knowledge from the 3D building model and on visibility checking.

  6. Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane.

    PubMed

    Sandau, Martin; Koblauch, Henrik; Moeslund, Thomas B; Aanæs, Henrik; Alkjær, Tine; Simonsen, Erik B

    2014-09-01

Estimating 3D joint rotations in the lower extremities accurately and reliably remains unresolved in markerless motion capture, despite extensive studies in the past decades. The main problems have been ascribed to the limited accuracy of the 3D reconstructions. Accordingly, the purpose of the present study was to develop a new approach based on highly detailed 3D reconstructions in combination with a translationally and rotationally unconstrained articulated model. The highly detailed 3D reconstructions were synthesized from an eight-camera setup using a stereo vision approach. The subject-specific articulated model was generated with three rotational and three translational degrees of freedom for each limb segment and without any constraints on the range of motion. This approach was tested on 3D gait analysis and compared to a marker-based method. The experiment included ten healthy subjects, in whom the hip, knee and ankle joints were analysed. Flexion/extension angles as well as hip abduction/adduction closely resembled those obtained from the marker-based system. However, the internal/external rotations, knee abduction/adduction and ankle inversion/eversion were less reliable.

  7. Spectrum analysis of motion parallax in a 3D cluttered scene and application to egomotion.

    PubMed

    Mann, Richard; Langer, Michael S

    2005-09-01

    Previous methods for estimating observer motion in a rigid 3D scene assume that image velocities can be measured at isolated points. When the observer is moving through a cluttered 3D scene such as a forest, however, pointwise measurements of image velocity are more challenging to obtain because multiple depths, and hence multiple velocities, are present in most local image regions. We introduce a method for estimating egomotion that avoids pointwise image velocity estimation as a first step. In its place, the direction of motion parallax in local image regions is estimated, using a spectrum-based method, and these directions are then combined to directly estimate 3D observer motion. There are two advantages to this approach. First, the method can be applied to a wide range of 3D cluttered scenes, including those for which pointwise image velocities cannot be measured because only normal velocity information is available. Second, the egomotion estimates can be used as a posterior constraint on estimating pointwise image velocities, since known egomotion parameters constrain the candidate image velocities at each point to a one-dimensional rather than a two-dimensional space.

  8. 3D motion adapted gating (3D MAG): a new navigator technique for accelerated acquisition of free breathing navigator gated 3D coronary MR-angiography.

    PubMed

    Hackenbroch, M; Nehrke, K; Gieseke, J; Meyer, C; Tiemann, K; Litt, H; Dewald, O; Naehle, C P; Schild, H; Sommer, T

    2005-08-01

    This study aimed to evaluate the influence of a new navigator technique (3D MAG) on navigator efficiency, total acquisition time, image quality and diagnostic accuracy. Fifty-six patients with suspected coronary artery disease underwent free breathing navigator gated coronary MRA (Intera, Philips Medical Systems, 1.5 T, spatial resolution 0.9x0.9x3 mm3) with and without 3D MAG. Evaluation of both sequences included: 1) navigator scan efficiency, 2) total acquisition time, 3) assessment of image quality and 4) detection of stenoses >50%. Average navigator efficiencies of the LCA and RCA were 43+/-12% and 42+/-12% with and 36+/-16% and 35+/-16% without 3D MAG (P<0.01). Scan time was reduced from 12 min 7 s without to 8 min 55 s with 3D MAG for the LCA and from 12 min 19 s to 9 min 7 s with 3D MAG for the RCA (P<0.01). The average scores of image quality of the coronary MRAs with and without 3D MAG were 3.5+/-0.79 and 3.46+/-0.84 (P>0.05). There was no significant difference in the sensitivity and specificity in the detection of coronary artery stenoses between coronary MRAs with and without 3D MAG (P>0.05). 3D MAG provides accelerated acquisition of navigator gated coronary MRA by about 19% while maintaining image quality and diagnostic accuracy.

  9. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

The aim of this study is to propose a method for measuring the three-dimensional (3D) movement of the forearm and upper arm during the baseball pitching motion using inertial sensors, without serious consideration of sensor installation. Although high-accuracy measurement of sports motion is currently achieved with optical motion capture systems, these have disadvantages such as camera calibration and restrictions on the measurement location. In contrast, the proposed method for 3D measurement of pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm corresponds to that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. Experimental measurements of the pitching motion show that the trajectories of the shoulder, elbow and wrist estimated by the proposed method are highly correlated with those from a motion capture system, with an estimation error of about 10%.
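The drift-correction idea in this abstract, forcing the integrated trajectory to agree with known end conditions, can be sketched in 1D. This is a simplified illustration under stated assumptions (motion starts and ends at rest, final position known, drift distributed linearly in time); the authors' scheme also corrects velocity and posture in 3D.

```python
import numpy as np

def integrate_with_endpoint_correction(acc, dt, p_final):
    """Double-integrate acceleration, then remove integration drift by forcing
    the trajectory to meet known end conditions (start/end at rest, known
    final position -- a hypothetical constraint for this sketch).

    acc     : (T,) acceleration samples in a fixed frame, gravity removed
    dt      : sample interval [s]
    p_final : known final position
    """
    t = np.arange(len(acc)) * dt
    vel = np.cumsum(acc) * dt
    # Distribute the terminal velocity error linearly over time (end at rest).
    vel -= vel[-1] * t / t[-1]
    pos = np.cumsum(vel) * dt
    # Distribute the remaining terminal position error linearly as well.
    pos += (p_final - pos[-1]) * t / t[-1]
    return pos

# Toy example: a motion that starts and ends at rest at position 0,
# corrupted by a constant accelerometer bias of 0.2 m/s^2.
dt = 0.01
t = np.arange(0, 1, dt)
true_acc = 2 * np.pi**2 * np.cos(2 * np.pi * t)   # p(t) = (1 - cos(2*pi*t)) / 2
pos = integrate_with_endpoint_correction(true_acc + 0.2, dt, p_final=0.0)
```

Without the correction, the constant bias would grow quadratically in position; distributing the endpoint error over time suppresses exactly this kind of low-frequency drift.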

  10. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns

    NASA Astrophysics Data System (ADS)

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-09-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.
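The brightness-compensated constraint can be illustrated with a least-squares sketch in 2D: alongside the flow (u, v), a multiplicative gain m and an offset c are estimated, so brightness changes (e.g. photobleaching) are not mistaken for motion. The synthetic frames, the single global window, and the exact constraint form below are illustrative assumptions, not the paper's 3-D algorithm.

```python
import numpy as np

# Synthetic frames: a smooth Gaussian blob shifted by (0.3, 0.2) px with a
# 2% brightness gain between frames (values chosen for illustration).
x, y = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 8.0**2))
I1 = blob(32.0, 32.0)
I2 = 1.02 * blob(32.3, 32.2)

# Generalized brightness-change constraint (linearized):
#   Ix*u + Iy*v - m*I - c = -It
Ix, Iy = np.gradient(I1)
It = I2 - I1
A = np.stack([Ix.ravel(), Iy.ravel(), -I1.ravel(), -np.ones(I1.size)], axis=1)
u, v, m, c = np.linalg.lstsq(A, -It.ravel(), rcond=None)[0]
```

A plain brightness-constancy solver (dropping the m and c columns) would attribute part of the 2% gain to spurious motion; the two extra unknowns absorb it instead.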

  11. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns.

    PubMed

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-01-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.

  12. Incorporating 3D body motions into large-sized freeform surface conceptual design.

    PubMed

    Qin, Shengfeng; Wright, David K; Kang, Jingsheng; Prieto, P A

    2005-01-01

Large-sized free-form surface design presents some challenges in practice. Especially at the conceptual design stage, sculpting physical models is still essential for surface development, because CAD models are less intuitive for designers to create and modify. These sculpted physical models can then be scanned and converted into CAD models. However, if the physical models are too big, designers may struggle to find a suitable position from which to work, or the models may simply be too large to scan. We investigated a novel surface modelling approach utilising a 3D motion capture system. To design a large-sized surface, a network of splines is initially set up. Artists or designers wearing motion markers on their hands can then change the shapes of the splines with their hands. They can move their bodies freely to any position to perform their tasks, and they can move their hands in 3D free space to detail surface characteristics with their gestures. All of these design motions are recorded by the motion capture system and converted into corresponding 3D curves and surfaces. This paper reports this novel surface design method together with some case studies.

  13. Bias Field Inconsistency Correction of Motion-Scattered Multislice MRI for Improved 3D Image Reconstruction

    PubMed Central

    Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin

    2012-01-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561

  14. Bias field inconsistency correction of motion-scattered multislice MRI for improved 3D image reconstruction.

    PubMed

    Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin

    2011-09-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.

  15. 4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Berbeco, Ross; Lewis, John

    2015-03-01

A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) to the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of the 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For physical phantom datasets, the average (95th percentile) tumor localization error (TLE) in two datasets was 0.95 (2.2) mm. For digital phantoms, assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time relative to the planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, versus 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones. 4DCBCT-based models were thus shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time. Generating 3D fluoroscopic images from 4DCBCT-based motion models can therefore capture both inter- and intra-fraction anatomical changes during treatment.
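The PCA motion model step, representing any DVF as the mean field plus a weighted sum of principal components, can be sketched as follows. The toy 1D "breathing" data below is an illustrative assumption; in the actual method the DVFs come from deformable registration of 4DCBCT phases and the coefficients are optimized against 2D treatment-time projections.

```python
import numpy as np

def build_pca_motion_model(dvfs, n_components=2):
    """PCA motion model from per-phase displacement vector fields (DVFs).

    dvfs : (P, V) array -- P respiratory phases, each DVF flattened to V values.
    Returns the mean DVF and the leading principal components (as rows).
    """
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    # SVD of the centered data; rows of vt are the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def synthesize_dvf(mean, components, coeffs):
    """New DVF = mean + sum_i coeffs[i] * component_i."""
    return mean + coeffs @ components

# Toy example: 10 phases of a sinusoidal breathing trace over 50 voxels.
phases = np.linspace(0, 2 * np.pi, 10, endpoint=False)
base = np.sin(np.linspace(0, np.pi, 50))
dvfs = np.array([np.sin(p) * base for p in phases])   # rank-1 motion
mean, pcs = build_pca_motion_model(dvfs, n_components=1)
coeffs = (dvfs[3] - mean) @ pcs.T                      # project phase 3 onto the model
recon = synthesize_dvf(mean, pcs, coeffs)
```

Because this toy motion is rank-1, a single component reconstructs each phase exactly; real respiratory motion typically needs two or three components.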

  16. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-06-21

Remote monitoring of Parkinson's Disease (PD) patients with inertial sensors is a relevant method for better assessment of symptoms. We present a new approach for symptom quantification based on motion data: automatic Unified Parkinson's Disease Rating Scale (UPDRS) classification combined with an animated 3D avatar that gives the neurologist the impression of having the patient live in front of them. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recordings of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the video-based ratings and (b) the automatically classified UPDRS was 0.48, and with (c) the 3D avatar it was 0.47. The 3D avatar is thus similarly suitable for assessing the UPDRS as video recordings for the examined task, and will be further developed by the research team.

  17. Speckle tracking in a phantom and feature-based tracking in liver in the presence of respiratory motion using 4D ultrasound.

    PubMed

    Harris, Emma J; Miller, Naomi R; Bamber, Jeffrey C; Symonds-Tayler, J Richard N; Evans, Philip M

    2010-06-21

    We have evaluated a 4D ultrasound-based motion tracking system developed for tracking of abdominal organs during therapy. Tracking accuracy and precision were determined using a tissue-mimicking phantom, by comparing tracked motion with known 3D sinusoidal motion. The feasibility of tracking 3D liver motion in vivo was evaluated by acquiring 4D ultrasound data from four healthy volunteers. For two of these volunteers, data were also acquired whilst simultaneously measuring breath flow using a spirometer. Hepatic blood vessels, tracked off-line using manual tracking, were used as a reference to assess, in vivo, two types of automated tracking algorithm: incremental (from one volume to the next) and non-incremental (from the first volume to each subsequent volume). For phantom-based experiments, accuracy and precision (RMS error and SD) were found to be 0.78 mm and 0.54 mm, respectively. For in vivo measurements, mean absolute distance and standard deviation of the difference between automatically and manually tracked displacements were less than 1.7 mm and 1 mm respectively in all directions (left-right, anterior-posterior and superior-inferior). In vivo non-incremental tracking gave the best agreement. In both phantom and in vivo experiments, tracking performance was poorest for the elevational component of 3D motion. Good agreement between automatically and manually tracked displacements indicates that 4D ultrasound-based motion tracking has potential for image guidance applications in therapy.
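The distinction between incremental and non-incremental tracking in this abstract can be illustrated with a 1D cross-correlation toy example. Integer shifts of a synthetic speckle signal stand in for volume-to-volume displacement; real 3D speckle tracking matches image volumes rather than 1D signals, so this is only a sketch of the bookkeeping.

```python
import numpy as np

def track_shift(ref, frame):
    """Estimate the integer displacement of `frame` relative to `ref` by
    maximizing cross-correlation (a toy stand-in for speckle matching)."""
    corr = np.correlate(frame, ref, mode="full")
    return np.argmax(corr) - (len(ref) - 1)

rng = np.random.default_rng(0)
speckle = rng.normal(size=200)
frames = [np.roll(speckle, s) for s in (0, 2, 5, 9)]   # known displacements

# Non-incremental: every frame is matched back to the first volume.
non_inc = [track_shift(frames[0], f) for f in frames]

# Incremental: each frame is matched to the previous one, then accumulated.
steps = [track_shift(frames[i], frames[i + 1]) for i in range(3)]
inc = np.concatenate([[0], np.cumsum(steps)])
```

With noise-free integer shifts both strategies agree; with real data, small per-step errors accumulate in the incremental estimate, which is consistent with the study's finding that non-incremental tracking gave the best in vivo agreement.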

  18. Speckle tracking in a phantom and feature-based tracking in liver in the presence of respiratory motion using 4D ultrasound

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Symonds-Tayler, J. Richard N.; Evans, Philip M.

    2010-06-01

    We have evaluated a 4D ultrasound-based motion tracking system developed for tracking of abdominal organs during therapy. Tracking accuracy and precision were determined using a tissue-mimicking phantom, by comparing tracked motion with known 3D sinusoidal motion. The feasibility of tracking 3D liver motion in vivo was evaluated by acquiring 4D ultrasound data from four healthy volunteers. For two of these volunteers, data were also acquired whilst simultaneously measuring breath flow using a spirometer. Hepatic blood vessels, tracked off-line using manual tracking, were used as a reference to assess, in vivo, two types of automated tracking algorithm: incremental (from one volume to the next) and non-incremental (from the first volume to each subsequent volume). For phantom-based experiments, accuracy and precision (RMS error and SD) were found to be 0.78 mm and 0.54 mm, respectively. For in vivo measurements, mean absolute distance and standard deviation of the difference between automatically and manually tracked displacements were less than 1.7 mm and 1 mm respectively in all directions (left-right, anterior-posterior and superior-inferior). In vivo non-incremental tracking gave the best agreement. In both phantom and in vivo experiments, tracking performance was poorest for the elevational component of 3D motion. Good agreement between automatically and manually tracked displacements indicates that 4D ultrasound-based motion tracking has potential for image guidance applications in therapy.

  19. Detailed Measurement of Wall Strain with 3D Speckle Tracking in the Aortic Root: A Case of Bionic Support for Clinical Decision Making

    PubMed Central

    Vogt, Sebastian; Karatolios, Konstantinos; Wittek, Andreas; Blasé, Christopher; Ramaswamy, Anette; Mirow, Nikolas; Moosdorf, Rainer

    2016-01-01

Three-dimensional (3D) wall motion tracking (WMT) based on ultrasound imaging enables estimation of aortic wall motion and deformation. It provides insights into changes in vascular compliance and vessel wall properties essential for understanding the pathogenesis and progression of aortic diseases. In this report, we employed novel 3D WMT analysis of an ascending aorta aneurysm (AA) to estimate local aortic wall motion and strain in a patient scheduled for replacement of the aortic root. Although the progression of the diameter indicated surgical therapy, we addressed the question of the optimal surgical time point. According to the data, the AA in our case had an enlarged diameter and correspondingly reduced circumferential wall strain, but the area tracking data revealed almost normal elastic properties. Virtual remodeling of the aortic root allows different loading conditions to be explored in order to determine the optimal timing of surgical intervention. PMID:28018834

  20. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction

    PubMed Central

    Bao, Qian; Jiang, Chenglong; Lin, Yun; Tan, Weixian; Wang, Zhirui; Hong, Wen

    2016-01-01

    With a short linear array configured in the cross-track direction, downward looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) can obtain the 3-D image of an imaging scene. To improve the cross-track resolution, sparse recovery methods have been investigated in recent years. In the compressive sensing (CS) framework, the reconstruction performance depends on the properties of the measurement matrix. This paper concerns techniques to optimize the measurement matrix and to deal with the mismatch problem of the measurement matrix caused by off-grid scatterers. In the model of cross-track reconstruction, the measurement matrix is mainly affected by the configuration of antenna phase centers (APC); thus, two mutual-coherence-based criteria are proposed to optimize the configuration of APCs. On the other hand, to compensate for the mismatch of the measurement matrix, a sparse Bayesian inference based method is introduced into the cross-track reconstruction by jointly estimating the scatterers and the off-grid error. Experiments demonstrate the performance of the proposed APC configuration schemes and the proposed cross-track reconstruction method. PMID:27556471

  2. A 3D front tracking method on a CPU/GPU system

    SciTech Connect

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe a method for porting a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are computed on the GPU, while interface mesh adaptation is performed on the CPU. The convergence of the method is assessed on test problems with given velocity fields. Performance results show overall speedups of 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.

  3. Structure-From-Motion in 3D Space Using 2D Lidars

    PubMed Central

    Choi, Dong-Geol; Bok, Yunsu; Kim, Jun-Sik; Shim, Inwook; Kweon, In So

    2017-01-01

    This paper presents a novel structure-from-motion methodology using 2D lidars (Light Detection And Ranging). In 3D space, 2D lidars do not provide sufficient information for pose estimation. For this reason, additional sensors have been used along with the lidar measurement. In this paper, we use a sensor system that consists of only 2D lidars, without any additional sensors. We propose a new method of estimating both the 6D pose of the system and the surrounding 3D structures. We compute the pose of the system using line segments of scan data and their corresponding planes. After discarding the outliers, both the pose and the 3D structures are refined via nonlinear optimization. Experiments with both synthetic and real data show the accuracy and robustness of the proposed method. PMID:28165372

  4. Cooperative Wall-climbing Robots in 3D Environments for Surveillance and Target Tracking

    DTIC Science & Technology

    2009-02-08

    distribution of impeller vanes, volume of the chamber, and sealing effect, etc. Figs. 5 and 6 show some exemplary simulation results. In paper [11], we...Environments for Surveillance and Target Tracking...multiple nonholonomic mobile robots using Cartesian coordinates. Based on the special feature...gamma-ray or x-ray cargo inspection system. Three-dimensional (3D) measurements of the objects inside a cargo can be obtained by effectively

  5. 3D imaging of semiconductor colloid nanocrystals: on the way to nanodiagnostics of track membranes

    NASA Astrophysics Data System (ADS)

    Kulyk, S. I.; Eremchev, I. Y.; Gorshelev, A. A.; Naumov, A. V.; Zagorsky, D. L.; Kotova, S. P.; Volostnikov, V. G.; Vorontsov, E. N.

    2016-12-01

    The work concerns the feasibility of 3D optical diagnostics of porous media with subdiffraction spatial resolution via epi-luminescence microscopy of single semiconductor colloidal CdSe/ZnS nanocrystals (quantum dots, QDs) used as emitting labels/nanoprobes. Nanoprecise reconstruction of the axial coordinate is provided by the double-helix point spread function (DH-PSF) transformation technique. The results of QD localization in a polycarbonate track membrane (TM) are presented.

  6. A motion- and sound-activated, 3D-printed, chalcogenide-based triboelectric nanogenerator.

    PubMed

    Kanik, Mehmet; Say, Mehmet Girayhan; Daglar, Bihter; Yavuz, Ahmet Faruk; Dolas, Muhammet Halit; El-Ashry, Mostafa M; Bayindir, Mehmet

    2015-04-08

    A multilayered triboelectric nanogenerator (MULTENG) that can be actuated by acoustic waves, vibration of a moving car, and tapping motion is built using a 3D-printing technique. The MULTENG can generate an open-circuit voltage of up to 396 V and a short-circuit current of up to 1.62 mA, and can power 38 LEDs. The layers of the triboelectric generator are made of polyetherimide nanopillars and chalcogenide core-shell nanofibers.

  7. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, non-cooperative targets may have strong maneuverability, which tends to cause a time-variant Doppler modulation and imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimation of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target of uniformly accelerated rotation, the rotational modulation in the echo is formulated as a chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat the multi-channel ISAR image formation as a joint-sparsity constrained optimization. A modified orthogonal matching pursuit (OMP) algorithm is then employed to solve the optimization problem, producing high-resolution range-Doppler (RD) images and chirp parameter estimates. The 3D target geometry and motion are subsequently estimated from the acquired RD images and chirp parameters, using a joint estimation approach that removes outliers and reduces errors. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data confirm the effectiveness of the proposed algorithm.
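The OMP step in the reconstruction above is easiest to see in a stripped-down form. The sketch below assumes an orthonormal dictionary, so the least-squares update of OMP collapses to inner products; the paper's modified OMP additionally handles chirp-Fourier dictionaries and joint sparsity across channels, which this toy version omits:

```python
def omp_orthonormal(y, atoms, k):
    """Orthogonal matching pursuit for a k-sparse signal, simplified to an
    orthonormal dictionary (a toy stand-in for the paper's modified OMP)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    residual = list(y)
    coeffs = {}
    for _ in range(k):
        # Greedy atom selection: largest correlation with the residual
        idx = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[idx])  # exact coefficient for orthonormal atoms
        coeffs[idx] = coeffs.get(idx, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[idx])]
    return coeffs, residual

# Orthonormal dictionary: normalized Hadamard rows in R^4
atoms = [[0.5, 0.5, 0.5, 0.5],
         [0.5, -0.5, 0.5, -0.5],
         [0.5, 0.5, -0.5, -0.5],
         [0.5, -0.5, -0.5, 0.5]]
# 2-sparse measurement: y = 3*atoms[0] - 2*atoms[2]
y = [3 * atoms[0][i] - 2 * atoms[2][i] for i in range(4)]
coeffs, residual = omp_orthonormal(y, atoms, k=2)
```

With a noiseless 2-sparse input, the recovered support and coefficients are exact and the residual vanishes.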

  8. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    PubMed Central

    2011-01-01

    Background The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Results Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. Conclusions The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study

  9. Ground motion simulations in Marmara (Turkey) region from 3D finite difference method

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Douglas, John

    2016-04-01

    In the framework of the European project MARSite (2012-2016), one of the main contributions from our research team was to provide ground-motion simulations for the Marmara region from various earthquake source scenarios. We adopted a 3D finite difference code, taking into account the 3D structure around the Sea of Marmara (including the bathymetry) and the sea layer. We simulated two moderate earthquakes (about Mw4.5) and found that the 3D structure significantly improves the simulated waveforms compared to a 1D layered model. Simulations were carried out for different earthquakes (moderate point sources and large finite sources) in order to provide shake maps (Aochi and Ulrich, BSSA, 2015), to study the variability of ground-motion parameters (Douglas & Aochi, BSSA, 2016), as well as to provide synthetic seismograms for the blind inversion tests (Diao et al., GJI, 2016). The results are also planned to be integrated in broadband ground-motion simulations, tsunami generation and simulations of triggered landslides (in progress by different partners). The simulations are freely shared among the partners via the internet and a visualization of the results is available on the project's homepage. All these simulations should be seen as a reference for this region, as they are based on the latest knowledge obtained during the MARSite project, although refinement and validation of the model parameters and simulations remain a continuing research task relying on ongoing observations. The numerical code used, the models and the simulations are available on demand.

  10. 3D Visualization of Monte-Carlo Simulation's of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy ion tracks include a very high energy deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to the biomolecules and the tissue matrix, considered the most important lesions for acute and long term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with the radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy ion biophysics.

  11. 3D dosimetric validation of motion compensation concepts in radiotherapy using an anthropomorphic dynamic lung phantom.

    PubMed

    Mann, P; Witte, M; Moser, T; Lang, C; Runz, A; Johnen, W; Berger, M; Biederer, J; Karger, C P

    2017-01-21

    In this study, we developed a new setup for the validation of clinical workflows in adaptive radiation therapy, which combines a dynamic ex vivo porcine lung phantom and three-dimensional (3D) polymer gel dosimetry. The phantom consists of an artificial PMMA-thorax and contains a post mortem explanted porcine lung to which arbitrary breathing patterns can be applied. A lung tumor was simulated using the PAGAT (polyacrylamide gelatin gel fabricated at atmospheric conditions) dosimetry gel, which was evaluated in three dimensions by magnetic resonance imaging (MRI). To avoid bias by reaction with oxygen and other materials, the gel was collocated inside a BAREX™ container. For calibration purposes, the same containers with eight gel samples were irradiated with doses from 0 to 7 Gy. To test the technical feasibility of the system, a small spherical dose distribution located completely within the gel volume was planned. Dose delivery was performed under static and dynamic conditions of the phantom with and without motion compensation by beam gating. To verify clinical target definition and motion compensation concepts, the entire gel volume was homogeneously irradiated, applying adequate margins in case of the static phantom and an additional internal target volume in case of the dynamically operated phantom without and with gated beam delivery. MR evaluation of the gel samples and comparison of the resulting 3D dose distribution with the planned dose distribution revealed a good agreement for the static phantom. In case of the dynamically operated phantom without motion compensation, agreement was very poor, while additional application of motion compensation techniques restored the good agreement between measured and planned dose. From these experiments it was concluded that the setup with the dynamic and anthropomorphic lung phantom together with 3D gel dosimetry provides a valuable and versatile tool for geometrical and dosimetrical validation of motion-compensated radiotherapy.

  13. Robust ego-motion estimation and 3-D model refinement using surface parallax.

    PubMed

    Agrawal, Amit; Chellappa, Rama

    2006-05-01

    We present an iterative algorithm for robustly estimating the ego-motion and refining and updating a coarse depth map using parametric surface parallax models and brightness derivatives extracted from an image pair. Given a coarse depth map acquired by a range-finder or extracted from a digital elevation map (DEM), ego-motion is estimated by combining a global ego-motion constraint and a local brightness constancy constraint. Using the estimated camera motion and the available depth estimate, motion of the three-dimensional (3-D) points is compensated. We utilize the fact that the resulting surface parallax field is an epipolar field, and knowing its direction from the previous motion estimates, estimate its magnitude and use it to refine the depth map estimate. The parallax magnitude is estimated using a constant parallax model (CPM) which assumes a smooth parallax field and a depth based parallax model (DBPM), which models the parallax magnitude using the given depth map. We obtain confidence measures for determining the accuracy of the estimated depth values which are used to remove regions with potentially incorrect depth estimates for robustly estimating ego-motion in subsequent iterations. Experimental results using both synthetic and real data (both indoor and outdoor sequences) illustrate the effectiveness of the proposed algorithm.

  14. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  15. Meanie3D - a mean-shift based, multivariate, multi-scale clustering and tracking algorithm

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Malte, Diederich; Silke, Troemel

    2014-05-01

    Project OASE is one of five working groups at the HErZ (Hans Ertel Centre for Weather Research), an ongoing effort by the German weather service (DWD) to further research at universities concerning weather prediction. The goal of project OASE is to gain an object-based perspective on convective events by identifying them early in the onset of convective initiation and following them through the entire lifecycle. The ability to follow objects in this fashion requires new ways of object definition and tracking that incorporate all the available data sets of interest, such as satellite imagery, weather radar or lightning counts. The Meanie3D algorithm provides the necessary tool for this purpose. Core features of this new approach to clustering (object identification) and tracking are the ability to identify objects using the mean-shift algorithm applied to a multitude of variables (multivariate), as well as the ability to detect objects on various scales (multi-scale) using elements of scale-space theory. The algorithm works in 2D as well as 3D without modifications. It is an extension of a method well known from the field of computer vision and image processing, which has been tailored to serve the needs of the meteorological community. In spite of the special application demonstrated here (convective initiation), the algorithm is easily tailored to provide clustering and tracking for a wide class of data sets and problems. In this talk, the demonstration is carried out on two of the OASE group's own composite data sets: a 2D nationwide composite of Germany including C-band radar and satellite information, and a 3D local composite of the Bonn/Jülich area containing a high-resolution 3D X-band radar composite.
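The mean-shift procedure at the core of such object identification can be sketched with a flat kernel in a few lines: each point is iteratively moved to the mean of its neighbours within a bandwidth, and points whose iterates converge to the same mode form one cluster. This is a deliberate simplification of the multivariate, multi-scale version described above:

```python
def mean_shift(points, bandwidth, iters=50):
    """Flat-kernel mean-shift clustering: shift each point toward the mean
    of its neighbours until convergence, then group coincident modes."""
    modes = [list(p) for p in points]
    for _ in range(iters):
        for i, m in enumerate(modes):
            nbrs = [p for p in points
                    if sum((a - b) ** 2 for a, b in zip(p, m)) <= bandwidth ** 2]
            modes[i] = [sum(c) / len(nbrs) for c in zip(*nbrs)]
    # Group points whose modes coincide (within a small tolerance)
    labels, centers = [], []
    for m in modes:
        for j, c in enumerate(centers):
            if sum((a - b) ** 2 for a, b in zip(m, c)) < 1e-6:
                labels.append(j)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels, centers

# Two well-separated toy clusters in 2D
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
labels, centers = mean_shift(pts, bandwidth=1.0)
```

Unlike k-means, the number of clusters is not specified in advance; it emerges from the bandwidth, which is what makes the approach attractive for detecting objects at different scales.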

  16. Respiratory motion correction in 3-D PET data with advanced optical flow algorithms.

    PubMed

    Dawood, Mohammad; Buther, Florian; Jiang, Xiaoyi; Schafers, Klaus P

    2008-08-01

    The problem of motion is well known in positron emission tomography (PET) studies. The PET images are formed over an extended period of time. As patients cannot hold their breath during the PET acquisition, spatial blurring and motion artifacts are the natural result. These may lead to incorrect quantification of the radioactive uptake. We present a solution to this problem by respiratory-gating the PET data and correcting the PET images for motion with optical flow algorithms. The algorithm is based on the combined local and global optical flow algorithm, with modifications to allow for discontinuity preservation across organ boundaries and for application to 3-D volume sets. The superiority of the algorithm over previous work is demonstrated on a software phantom and real patient data.
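The brightness-constancy principle underlying such optical flow methods can be shown in one dimension: each pixel contributes a constraint I_x·u + I_t = 0, solved here in a least-squares sense for a single global shift. This is only the "local" building block; the combined local-global algorithm the abstract refers to adds a regularized, discontinuity-preserving 3-D formulation that this sketch omits:

```python
def estimate_shift(frame1, frame2):
    """Least-squares 1-D flow: minimize sum over pixels of (Ix*u + It)^2,
    giving u = -sum(Ix*It) / sum(Ix^2)."""
    n = len(frame1)
    num = den = 0.0
    for i in range(1, n - 1):                      # skip boundary pixels
        ix = (frame1[i + 1] - frame1[i - 1]) / 2.0  # spatial gradient (central diff)
        it = frame2[i] - frame1[i]                  # temporal gradient
        num += -ix * it
        den += ix * ix
    return num / den

f1 = [float(x) for x in range(20)]  # linear intensity ramp
f2 = [x - 1.0 for x in f1]          # same ramp shifted right by one pixel
u = estimate_shift(f1, f2)          # recovers the one-pixel displacement
```

On this noiseless linear ramp the estimate is exact; real images need the spatial regularization and multi-resolution handling that full optical flow algorithms provide.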

  17. A brain-computer interface method combined with eye tracking for 3D interaction.

    PubMed

    Lee, Eui Chul; Woo, Jin Cheol; Kim, Jong Hwa; Whang, Mincheol; Park, Kang Ryoung

    2010-07-15

    With the recent increase in the number of three-dimensional (3D) applications, the need for interfaces to these applications has increased. Although the eye tracking method has been widely used as an interaction interface for hand-disabled persons, this approach cannot be used for depth-directional navigation. To solve this problem, we propose a new brain-computer interface (BCI) method in which the BCI and eye tracking are combined: the BCI analyzes depth navigation, including selection, while eye tracking provides the two-dimensional (2D) gaze direction. The proposed method is novel in the following five ways compared to previous works. First, a device to measure both the gaze direction and an electroencephalogram (EEG) pattern is proposed, with the sensors needed to measure the EEG attached to a head-mounted eye tracking device. Second, the reliability of the BCI interface is verified by demonstrating that there is no difference between the real and the imaginary movements for the same task in terms of the EEG power spectrum. Third, depth control for the 3D interaction interface is implemented by an imaginary arm-reaching movement. Fourth, a selection method is implemented by an imaginary hand-grabbing movement. Finally, for the independent operation of gazing and the BCI, a mode selection method is proposed that measures a user's concentration by analyzing the pupil accommodation speed, which is not affected by the operation of gazing and the BCI. Experimental results confirmed the feasibility of the proposed 3D interaction method using eye tracking and a BCI.

  18. Patient specific respiratory motion modeling using a limited number of 3D lung CT images.

    PubMed

    Cui, Xueli; Gao, Xin; Xia, Wei; Liu, Yangchuan; Liang, Zhiyuan

    2014-01-01

    To build a patient-specific respiratory motion model at a low imaging dose, a novel method was proposed that uses a limited number of 3D lung CT volumes together with an external respiratory signal. 4D lung CT volumes were acquired for patients with in vitro labeling on the upper abdominal surface. Meanwhile, the 3D coordinates of the in vitro labels were measured as external respiratory signals. A sequential correspondence between the 4D lung CT and the external respiratory signal was built using the distance correlation method, and a 3D displacement for every registration control point in the CT volumes with respect to time was obtained by 4D lung CT deformable registration. A temporal fitting was performed for every registration control point displacement and for the external respiratory signal in the anterior-posterior direction to obtain their fitting curves. Finally, a linear regression was used to fit the corresponding samples of the control point displacement fitting curves and the external respiratory signal fitting curve, completing the pulmonary respiration model. Compared to a B-spline-based method using the respiratory signal phase, the proposed method is highly advantageous as it offers comparable modeling accuracy and target modeling error (TME), while at the same time requiring 70% fewer 3D lung CT volumes. When using a similar amount of 3D lung CT data, the mean TME of the proposed method is smaller than that of PCA (principal component analysis)-based methods. The results indicate that the proposed method successfully strikes a balance between modeling accuracy and the number of 3D lung CT volumes required.
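The final regression step of such a model reduces to ordinary least squares relating the external respiratory signal (predictor) to each control point's internal displacement (response). A self-contained sketch with synthetic data; the 2.5 mm gain and 0.3 mm offset are made-up illustration values, not from the study:

```python
import math

def fit_linear(xs, ys):
    """Ordinary least squares y = a*x + b, relating an external
    respiratory signal (x) to an internal control-point displacement (y)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Synthetic external AP marker motion: 5 full breathing cycles
signal = [math.sin(2 * math.pi * t / 40.0) for t in range(200)]
# Hypothetical internal displacement (mm) linearly coupled to the signal
disp = [2.5 * s + 0.3 for s in signal]
a, b = fit_linear(signal, disp)
```

At prediction time, the fitted (a, b) pair per control point lets the internal displacement field be estimated from the external signal alone, which is what removes the need for repeated CT imaging.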

  19. Spatiotemporal non-rigid image registration for 3D ultrasound cardiac motion estimation

    NASA Astrophysics Data System (ADS)

    Loeckx, D.; Ector, J.; Maes, F.; D'hooge, J.; Vandermeulen, D.; Voigt, J.-U.; Heidbüchel, H.; Suetens, P.

    2007-03-01

    We present a new method to evaluate 4D (3D + time) cardiac ultrasound data sets by nonrigid spatio-temporal image registration. First, a frame-to-frame registration is performed that yields a dense deformation field. The deformation field is used to calculate local spatiotemporal properties of the myocardium, such as the velocity, strain and strain rate. The field is also used to propagate particular points and surfaces, representing e.g. the endocardial surface, over the different frames. As such, the 4D path of these points is obtained, which can be used to calculate the velocity by which the wall moves and the evolution of the local surface area over time. The wall velocity is not angle-dependent as in classical Doppler imaging, since the 4D data allow calculating the true 3D motion. Similarly, all 3D myocardium strain components can be estimated. Combined, they result in local surface area or volume changes which can be color-coded as a measure of local contractility. A diagnostic method that strongly benefits from this technique is cardiac motion and deformation analysis, which is an important aid to quantify the mechanical properties of the myocardium.

  20. Integration of 3D structure from disparity into biological motion perception independent of depth awareness.

    PubMed

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers' depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception.

  1. Scientific rotoscoping: a morphology-based method of 3-D motion analysis and visualization.

    PubMed

    Gatesy, Stephen M; Baier, David B; Jenkins, Farish A; Dial, Kenneth P

    2010-06-01

    Three-dimensional skeletal movement is often impossible to accurately quantify from external markers. X-ray imaging more directly visualizes moving bones, but extracting 3-D kinematic data is notoriously difficult from a single perspective. Stereophotogrammetry is extremely powerful if bi-planar fluoroscopy is available, yet implantation of three radio-opaque markers in each segment of interest may be impractical. Herein we introduce scientific rotoscoping (SR), a new method of motion analysis that uses articulated bone models to simultaneously animate and quantify moving skeletons without markers. The three-step process is described using examples from our work on pigeon flight and alligator walking. First, the experimental scene is reconstructed in 3-D using commercial animation software so that frames of undistorted fluoroscopic and standard video can be viewed in their correct spatial context through calibrated virtual cameras. Second, polygonal models of relevant bones are created from CT or laser scans and rearticulated into a hierarchical marionette controlled by virtual joints. Third, the marionette is registered to video images by adjusting each of its degrees of freedom over a sequence of frames. SR outputs high-resolution 3-D kinematic data for multiple, unmarked bones and anatomically accurate animations that can be rendered from any perspective. Rather than generating moving stick figures abstracted from the coordinates of independent surface points, SR is a morphology-based method of motion analysis deeply rooted in osteological and arthrological data.

  2. Coordination of gaze and hand movements for tracking and tracing in 3D.

    PubMed

    Gielen, Constantinus C A M; Dijkstra, Tjeerd M H; Roozen, Irene J; Welten, Joke

    2009-03-01

In this study we have investigated movements in three-dimensional space. Since most studies have investigated planar movements (like ellipses, cloverleaf shapes and "figure eights"), we have compared two generalizations of the two-thirds power law to three dimensions. In particular we have tested whether the two-thirds power law could be best described by tangential velocity and curvature in a plane (compatible with the idea of planar segmentation) or whether tangential velocity and curvature should be calculated in three dimensions. We defined total curvature in three dimensions as the square root of the sum of curvature squared and torsion squared. The results demonstrate that most of the variance is explained by tangential velocity and total curvature. This indicates that all three orthogonal components of movements in 3D are equally important and that movements are truly 3D and do not reflect a concatenation of 2D planar movement segments. In addition, we have studied the coordination of eye and hand movements in 3D by measuring binocular eye movements while subjects move the finger along a curved path. The results show that the directional component and finger position almost superimpose when subjects track a target moving in 3D. However, the vergence component of gaze leads finger position by about 250 msec. For drawing (tracing) the path of a visible 3D shape, the directional component of gaze leads finger position by about 225 msec, and the vergence component leads finger position by about 400 msec. These results are compatible with the idea that gaze leads hand position during drawing movement to assist prediction and planning of hand position in 3D space.
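
The total-curvature definition above — the square root of curvature squared plus torsion squared — can be computed directly from a sampled trajectory with the standard Frenet formulas. A minimal sketch, assuming a densely sampled path (function names are illustrative):

```python
import numpy as np

def frenet_quantities(r, t):
    """Speed, curvature, torsion, total curvature of a sampled 3D path r(t)."""
    d1 = np.gradient(r, t, axis=0)       # r'
    d2 = np.gradient(d1, t, axis=0)      # r''
    d3 = np.gradient(d2, t, axis=0)      # r'''
    speed = np.linalg.norm(d1, axis=1)   # tangential velocity |r'|
    cross = np.cross(d1, d2)
    cross_norm = np.linalg.norm(cross, axis=1)
    curvature = cross_norm / speed**3
    torsion = np.einsum('ij,ij->i', cross, d3) / cross_norm**2
    total_curvature = np.hypot(curvature, torsion)   # sqrt(k^2 + tau^2)
    return speed, curvature, torsion, total_curvature

# Helix check: r(t) = (a cos t, a sin t, b t) has, analytically,
# curvature k = a/(a^2+b^2) = 0.8 and torsion tau = b/(a^2+b^2) = 0.4.
a, b = 1.0, 0.5
t = np.linspace(0, 4 * np.pi, 4001)
r = np.stack([a * np.cos(t), a * np.sin(t), b * t], axis=1)
speed, k, tau, C = frenet_quantities(r, t)
mid = len(t) // 2   # evaluate away from the finite-difference boundary
```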

  3. A Little Knowledge of Ground Motion: Explaining 3-D Physics-Based Modeling to Engineers

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2014-12-01

    Users of earthquake planning scenarios require the ground-motion map to be credible enough to justify costly planning efforts, but not all ground-motion maps are right for all uses. There are two common ways to create a map of ground motion for a hypothetical earthquake. One approach is to map the median shaking estimated by empirical attenuation relationships. The other uses 3-D physics-based modeling, in which one analyzes a mathematical model of the earth's crust near the fault rupture and calculates the generation and propagation of seismic waves from source to ground surface by first principles. The two approaches produce different-looking maps. The more-familiar median maps smooth out variability and correlation. Using them in a planning scenario can lead to a systematic underestimation of damage and loss, and could leave a community underprepared for realistic shaking. The 3-D maps show variability, including some very high values that can disconcert non-scientists. So when the USGS Science Application for Risk Reduction's (SAFRR) Haywired scenario project selected 3-D maps, it was necessary to explain to scenario users—especially engineers who often use median maps—the differences, advantages, and disadvantages of the two approaches. We used authority, empirical evidence, and theory to support our choice. We prefaced our explanation with SAFRR's policy of using the best available earth science, and cited the credentials of the maps' developers and the reputation of the journal in which they published the maps. We cited recorded examples from past earthquakes of extreme ground motions that are like those in the scenario map. We explained the maps on theoretical grounds as well, explaining well established causes of variability: directivity, basin effects, and source parameters. The largest mapped motions relate to potentially unfamiliar extreme-value theory, so we used analogies to human longevity and the average age of the oldest person in samples of

  4. 3-D Flow Field Diagnostics and Validation Studies using Stereoscopic Tracking Velocimetry

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung Stephen; Ramachandran, Narayanan; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

The measurement of 3-D three-component velocity fields is of great importance in both ground and space experiments for understanding materials processing and fluid physics. Here, we present the investigation results of stereoscopic tracking velocimetry (STV) for measuring 3-D velocity fields. The effort includes diagnostic technology development, experimental velocity measurement, and comparison with analytical and numerical computation. The advantages of STV stem from its system simplicity, suited to building compact hardware, and its software efficiency, suited to continual near-real-time process monitoring. It also offers illumination flexibility for observing volumetric flow fields from arbitrary directions. STV is based on stereoscopic CCD observations of particles seeded in a flow. Neural networks are used for data analysis. The developed diagnostic tool is tested with a simple directional solidification apparatus using succinonitrile. The 3-D velocity field in the liquid phase is measured and compared with results from detailed numerical computations. Our theoretical, numerical, and experimental effort has shown STV to be a viable candidate for reliably quantifying the 3-D flow field in materials processing and fluids experiments.

  5. The CT-PPS tracking system with 3D pixel detectors

    NASA Astrophysics Data System (ADS)

    Ravera, F.

    2016-11-01

The CMS-TOTEM Precision Proton Spectrometer (CT-PPS) detector will be installed in Roman pots (RP) positioned on either side of CMS, at about 210 m from the interaction point. This detector will measure leading protons, allowing detailed studies of diffractive physics and central exclusive production in standard LHC running conditions. An essential component of the CT-PPS apparatus is the tracking system, which consists of two detector stations per arm equipped with six 3D silicon pixel-sensor modules, each read out by six PSI46dig chips. The front-end electronics has been designed to fulfill the mechanical constraints of the RP and to be compatible as much as possible with the readout chain of the CMS pixel detector. The tracking system is currently under construction and will be installed by the end of 2016. In this contribution the final design and the expected performance of the CT-PPS tracking system are presented. A summary of the studies performed, before and after irradiation, on the 3D detectors produced for CT-PPS is given.

  6. Methods for using 3-D ultrasound speckle tracking in biaxial mechanical testing of biological tissue samples.

    PubMed

    Yap, Choon Hwai; Park, Dae Woo; Dutta, Debaditya; Simon, Marc; Kim, Kang

    2015-04-01

    Being multilayered and anisotropic, biological tissues such as cardiac and arterial walls are structurally complex, making the full assessment and understanding of their mechanical behavior challenging. Current standard mechanical testing uses surface markers to track tissue deformations and does not provide deformation data below the surface. In the study described here, we found that combining mechanical testing with 3-D ultrasound speckle tracking could overcome this limitation. Rat myocardium was tested with a biaxial tester and was concurrently scanned with high-frequency ultrasound in three dimensions. The strain energy function was computed from stresses and strains using an iterative non-linear curve-fitting algorithm. Because the strain energy function consists of terms for the base matrix and for embedded fibers, spatially varying fiber orientation was also computed by curve fitting. Using finite-element simulations, we first validated the accuracy of the non-linear curve-fitting algorithm. Next, we compared experimentally measured rat myocardium strain energy function values with those in the literature and found a matching order of magnitude. Finally, we retained samples after the experiments for fiber orientation quantification using histology and found that the results satisfactorily matched those computed in the experiments. We conclude that 3-D ultrasound speckle tracking can be a useful addition to traditional mechanical testing of biological tissues and may provide the benefit of enabling fiber orientation computation.
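
The iterative non-linear curve fit described above can be illustrated with a generic Fung-type exponential strain energy function fitted to synthetic biaxial data. The model form, parameter values, and noise level here are illustrative assumptions, not the paper's actual constitutive model:

```python
import numpy as np
from scipy.optimize import curve_fit

def strain_energy(E, c, a1, a2):
    """Fung-type strain energy: W = c * (exp(a1*E11^2 + a2*E22^2) - 1)."""
    E11, E22 = E
    return c * (np.exp(a1 * E11**2 + a2 * E22**2) - 1.0)

rng = np.random.default_rng(0)
E11 = rng.uniform(0.0, 0.2, 200)         # synthetic biaxial Green strains
E22 = rng.uniform(0.0, 0.2, 200)
true = (2.0, 12.0, 6.0)                  # "ground truth" material parameters
W = strain_energy((E11, E22), *true)
W_noisy = W + rng.normal(0.0, 1e-4, W.shape)

# Iterative non-linear least-squares fit of the material parameters
params, _ = curve_fit(strain_energy, (E11, E22), W_noisy, p0=(1.0, 5.0, 5.0))
```

In the paper the fitted energy additionally contains fiber terms, which is what makes the spatially varying fiber orientation recoverable from the same fit.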

  7. 3D Motion Planning Algorithms for Steerable Needles Using Inverse Kinematics

    PubMed Central

    Duindam, Vincent; Xu, Jijie; Alterovitz, Ron; Sastry, Shankar; Goldberg, Ken

    2010-01-01

    Steerable needles can be used in medical applications to reach targets behind sensitive or impenetrable areas. The kinematics of a steerable needle are nonholonomic and, in 2D, equivalent to a Dubins car with constant radius of curvature. In 3D, the needle can be interpreted as an airplane with constant speed and pitch rate, zero yaw, and controllable roll angle. We present a constant-time motion planning algorithm for steerable needles based on explicit geometric inverse kinematics similar to the classic Paden-Kahan subproblems. Reachability and path competitivity are analyzed using analytic comparisons with shortest path solutions for the Dubins car (for 2D) and numerical simulations (for 3D). We also present an algorithm for local path adaptation using null-space results from redundant manipulator theory. Finally, we discuss several ways to use and extend the inverse kinematics solution to generate needle paths that avoid obstacles. PMID:21359051
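
The airplane analogy above — constant speed and pitch rate, zero yaw, controllable roll — corresponds to integrating a constant body-frame twist in SE(3). A minimal sketch with illustrative parameter values; the matrix-exponential step is exact for a piecewise-constant twist:

```python
import numpy as np
from scipy.linalg import expm

def needle_step(T, kappa, roll_rate, v, dt):
    """Advance needle pose T (4x4, SE(3)) by dt under a constant body twist:
    forward speed v along the needle axis (body z), pitch rate kappa*v
    (constant-curvature bending), zero yaw, and a controllable roll rate."""
    wx, wy, wz = kappa * v, 0.0, roll_rate
    xi = np.array([[0.0, -wz,  wy, 0.0],
                   [ wz, 0.0, -wx, 0.0],
                   [-wy,  wx, 0.0, v  ],
                   [0.0, 0.0, 0.0, 0.0]])
    return T @ expm(xi * dt)

# With zero roll, the tip traces a planar circle of radius 1/kappa; in the
# initial frame the circle's center sits at (0, -1/kappa, 0).
kappa, v = 2.0, 1.0
T = np.eye(4)
pts = []
for _ in range(1000):
    T = needle_step(T, kappa, roll_rate=0.0, v=v, dt=0.001)
    pts.append(T[:3, 3].copy())
pts = np.array(pts)
```

Rolling the needle rotates the bending plane, which is exactly the control freedom the inverse-kinematics planner exploits.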

  8. Experimental analysis of mechanical response of stabilized occipitocervical junction by 3D mark tracking technique

    NASA Astrophysics Data System (ADS)

    Germaneau, A.; Doumalin, P.; Dupré, J. C.; Brèque, C.; Brémand, F.; D'Houtaud, S.; Rigoard, P.

    2010-06-01

    This study is about a biomechanical comparison of some stabilization solutions for the occipitocervical junction. Four kinds of occipito-cervical fixations are analysed in this work: lateral plates fixed by two kinds of screws, lateral plates fixed by hooks and median plate. To study mechanical rigidity of each one, tests have been performed on human skulls by applying loadings and by studying mechanical response of fixations and bone. For this experimental analysis, a specific setup has been developed to impose a load corresponding to the flexion-extension physiological movements. 3D mark tracking technique is employed to measure 3D displacement fields on the bone and on the fixations. Observations of displacement evolution on the bone according to the fixation show different rigidities given by each solution.

  9. 3D delivered dose assessment using a 4DCT-based motion model

    SciTech Connect

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Mishra, Pankaj E-mail: jhlewis@lroc.harvard.edu; Lewis, John H. E-mail: jhlewis@lroc.harvard.edu; Seco, Joao

    2015-06-15

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images
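
Step (2) above chooses basis-DVF weights so that the projected 3D volume matches each measured 2D projection. If the projection response is treated as approximately linear in the weights (a simplifying assumption — the actual optimization is more involved), the per-timepoint fit reduces to least squares over precomputed basis projections. All quantities below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each column of A is the (flattened) 2D projection
# response of one basis DVF; y is the measured projection, modeled here as
# a linear combination of the basis responses plus noise.
n_pixels, n_basis = 500, 4
A = rng.normal(size=(n_pixels, n_basis))
w_true = np.array([0.8, -0.3, 0.5, 0.1])    # "true" respiratory-state weights
y = A @ w_true + rng.normal(scale=0.01, size=n_pixels)

# Optimal weights in the least-squares sense; the resulting DVF (sum of
# weighted basis DVFs) then deforms the reference volume for dose accumulation.
w_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```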

  10. 3D delivered dose assessment using a 4DCT-based motion model

    PubMed Central

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Seco, Joao; Mishra, Pankaj; Lewis, John H.

    2015-01-01

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images

  11. Motion compensation by registration-based catheter tracking

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Wimmer, Andreas; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2011-03-01

The treatment of atrial fibrillation has gained increasing importance in the field of computer-aided interventions. State-of-the-art treatment involves the electrical isolation of the pulmonary veins attached to the left atrium under fluoroscopic X-ray image guidance. Due to the rather low soft-tissue contrast of X-ray fluoroscopy, the heart is difficult to see. To overcome this problem, overlay images from pre-operative 3-D volumetric data can be used to add anatomical detail. Unfortunately, these overlay images are static at the moment, i.e., they do not move with respiratory and cardiac motion. The lack of motion compensation may impair X-ray based catheter navigation, because the physician could potentially position catheters incorrectly. To improve overlay-based catheter navigation, we present a novel two-stage approach for respiratory and cardiac motion compensation. First, a cascade of boosted classifiers is employed to segment a commonly used circumferential mapping catheter which is firmly fixed at the ostium of the pulmonary vein during ablation. Then, a 2-D/2-D model-based registration is applied to track the segmented mapping catheter. Our novel hybrid approach was evaluated on 10 clinical data sets consisting of 498 fluoroscopic monoplane frames. We obtained an average 2-D tracking error of 0.61 mm, with a minimum error of 0.26 mm and a maximum error of 1.62 mm. These results demonstrate that motion compensation using registration-based catheter tracking is both feasible and accurate. Using this approach, we can only estimate in-plane motion. Fortunately, compensating for this is often sufficient for EP procedures where the motion is governed by breathing.
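
The 2-D/2-D model-based registration in the second stage can be sketched, in simplified form, as a rigid point-set alignment between the catheter model and the segmented catheter pixels — here via a least-squares Kabsch fit. The actual method and its cost function are not reproduced; all point data below are synthetic:

```python
import numpy as np

def rigid_register_2d(model, observed):
    """Least-squares rotation R and translation t with observed ~ R @ model + t
    (Kabsch / Procrustes alignment)."""
    mc, oc = model.mean(axis=0), observed.mean(axis=0)
    H = (model - mc).T @ (observed - oc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t

rng = np.random.default_rng(7)
model = rng.normal(size=(40, 2))               # catheter model points
theta = 0.3                                    # synthetic frame-to-frame motion
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
observed = model @ R_true.T + t_true           # "segmented" catheter points

R_est, t_est = rigid_register_2d(model, observed)
```

The recovered per-frame transform is what drives the overlay, compensating the breathing-dominated in-plane motion.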

  12. Management of three-dimensional intrafraction motion through real-time DMLC tracking.

    PubMed

    Sawant, Amit; Venkat, Raghu; Srivastava, Vikram; Carlson, David; Povzner, Sergey; Cattell, Herb; Keall, Paul

    2008-05-01

    Tumor tracking using a dynamic multileaf collimator (DMLC) represents a promising approach for intrafraction motion management in thoracic and abdominal cancer radiotherapy. In this work, we develop, empirically demonstrate, and characterize a novel 3D tracking algorithm for real-time, conformal, intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)-based radiation delivery to targets moving in three dimensions. The algorithm obtains real-time information of target location from an independent position monitoring system and dynamically calculates MLC leaf positions to account for changes in target position. Initial studies were performed to evaluate the geometric accuracy of DMLC tracking of 3D target motion. In addition, dosimetric studies were performed on a clinical linac to evaluate the impact of real-time DMLC tracking for conformal, step-and-shoot (S-IMRT), dynamic (D-IMRT), and VMAT deliveries to a moving target. The efficiency of conformal and IMRT delivery in the presence of tracking was determined. Results show that submillimeter geometric accuracy in all three dimensions is achievable with DMLC tracking. Significant dosimetric improvements were observed in the presence of tracking for conformal and IMRT deliveries to moving targets. A gamma index evaluation with a 3%-3 mm criterion showed that deliveries without DMLC tracking exhibit between 1.7 (S-IMRT) and 4.8 (D-IMRT) times more dose points that fail the evaluation compared to corresponding deliveries with tracking. The efficiency of IMRT delivery, as measured in the lab, was observed to be significantly lower in case of tracking target motion perpendicular to MLC leaf travel compared to motion parallel to leaf travel. Nevertheless, these early results indicate that accurate, real-time DMLC tracking of 3D tumor motion is feasible and can potentially result in significant geometric and dosimetric advantages leading to more effective management of intrafraction motion.
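
The core tracking operation — dynamically recalculating leaf positions to follow the target — can be sketched as shifting the planned aperture by the target's displacement in the beam's-eye view. Motion parallel to leaf travel simply offsets the leaf edges, while perpendicular motion must be quantized to whole leaf widths, which is one reason perpendicular tracking is less efficient. A simplified, illustrative sketch (not the authors' algorithm):

```python
import numpy as np

def track_aperture(left, right, dx, dy, leaf_width):
    """Shift an MLC aperture to follow a target displaced by (dx, dy) in the
    beam's-eye view.

    dx (parallel to leaf travel): added directly to every leaf edge.
    dy (perpendicular): realized by reassigning the aperture across leaf
    pairs, so it is quantized to whole leaf widths.
    """
    shift = int(round(dy / leaf_width))
    # np.roll wraps at the bank edges -- a simplification; a real system
    # would close the leaf pairs that roll out of the field.
    return np.roll(left + dx, shift), np.roll(right + dx, shift)

left = np.array([-1.0, -2.0, -2.5, -2.0, -1.0])   # cm, one entry per leaf pair
right = np.array([1.0, 2.0, 2.5, 2.0, 1.0])
new_left, new_right = track_aperture(left, right, dx=0.3, dy=0.5, leaf_width=0.5)
```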

  13. Prediction for human motion tracking failures.

    PubMed

    Dockstader, Shiloh L; Imennov, Nikita S

    2006-02-01

    We propose a new and effective method of predicting tracking failures and apply it to the robust analysis of gait and human motion. We define a tracking failure as an event and describe its temporal characteristics using a hidden Markov model (HMM). We represent the human body using a three-dimensional, multicomponent structural model, where each component is designed to independently allow the extraction of certain gait variables. To enable a fault-tolerant tracking and feature extraction system, we introduce a single HMM for each element of the structural model, trained on previous examples of tracking failures. The algorithm derives vector observations for each Markov model using the time-varying noise covariance matrices of the structural model parameters. When transformed with a logarithmic function, the conditional output probability of each HMM is shown to have a causal relationship with imminent tracking failures. We demonstrate the effectiveness of the proposed approach on a variety of multiview video sequences of complex human motion.
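
The failure predictor above thresholds the log of the HMM's conditional output probability. A minimal sketch of that computation — the forward algorithm for a Gaussian-output HMM scoring an observation window — with entirely synthetic parameters (the paper's observations are derived from tracking-noise covariances, not reproduced here):

```python
import numpy as np
from scipy.special import logsumexp

def gaussian_loglik(x, means, var):
    """Per-frame log N(x | mean_s, var) for each state s; returns (T, S)."""
    return -0.5 * (np.log(2 * np.pi * var) + (x[:, None] - means[None, :])**2 / var)

def hmm_loglik(x, pi, A, means, var):
    """Forward algorithm: log P(x) under a Gaussian-output HMM."""
    log_B = gaussian_loglik(x, means, var)
    alpha = np.log(pi) + log_B[0]
    for t in range(1, len(x)):
        alpha = logsumexp(alpha[:, None] + np.log(A), axis=0) + log_B[t]
    return logsumexp(alpha)

# Toy 2-state model: state 0 = stable tracking (observations near 0),
# state 1 = imminent failure (observations near 3).
pi = np.array([0.9, 0.1])
A = np.array([[0.9, 0.1], [0.05, 0.95]])
means = np.array([0.0, 3.0])
var = 0.5

stable = np.zeros(20)                  # observations hovering near 0
failing = np.linspace(0.0, 3.0, 20)    # drifting toward the failure state

ll_stable = hmm_loglik(stable, pi, A, means, var)
ll_failing = hmm_loglik(failing, pi, A, means, var)
# Thresholding such log output probabilities is the decision step.
```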

  14. Validation of INSAT-3D atmospheric motion vectors for monsoon 2015

    NASA Astrophysics Data System (ADS)

    Sharma, Priti; Rani, S. Indira; Das Gupta, M.

    2016-05-01

Atmospheric Motion Vectors (AMVs) over the Indian Ocean and surrounding region are one of the most important sources of tropospheric wind information assimilated in numerical weather prediction (NWP) systems. Earlier studies showed that the quality of AMVs from the Indian geostationary satellite Kalpana-1 was not comparable to that of other geostationary satellites over this region, and hence they were not used in NWP systems. The Indian satellite INSAT-3D was successfully launched on July 26, 2013 with an imaging system upgraded from that of the previous Indian satellite Kalpana-1. INSAT-3D has a middle-infrared band (3.80-4.00 μm) capable of night-time imaging of low clouds and fog. Three consecutive images at 30-minute intervals are used to derive the AMVs. A new height assignment scheme (using the NWP first guess and replacing the old empirical GA method) along with a modified quality control scheme was implemented for deriving INSAT-3D AMVs. In this paper an attempt has been made to validate these AMVs against in-situ observations as well as against NCMRWF's NWP first guess for monsoon 2015. For validation purposes, AMVs are subdivided into three pressure layers: low (1000-700 hPa), middle (700-400 hPa), and high (400-100 hPa). Several statistics, such as the normalized root mean square vector difference and bias, have been computed over different latitudinal belts. Results show that the general mean monsoon circulation, along with all the transient monsoon systems, is well captured by INSAT-3D AMVs, and that the error statistics (e.g., RMSE) of INSAT-3D AMVs are now comparable to those of other geostationary satellites.

  15. A Markerless 3D Computerized Motion Capture System Incorporating a Skeleton Model for Monkeys

    PubMed Central

    Nakamura, Tomoya; Matsumoto, Jumpei; Nishimaru, Hiroshi; Bretas, Rafael Vieira; Takamura, Yusaku; Hori, Etsuro; Ono, Taketoshi; Nishijo, Hisao

    2016-01-01

In this study, we propose a novel markerless motion capture system (MCS) for monkeys, in which 3D surface images of monkeys were reconstructed by integrating data from four depth cameras, and a skeleton model of the monkey was fitted onto 3D images of monkeys in each frame of the video. To validate the MCS, first, estimated 3D positions of body parts were compared between the 3D MCS-assisted estimation and manual estimation based on visual inspection when a monkey performed a shuttling behavior in which it had to avoid obstacles in various positions. The mean estimation errors of the positions of body parts (3–14 cm) and of head rotation (35–43°) between the 3D MCS-assisted and manual estimation were comparable to the errors between two different experimenters performing manual estimation. Furthermore, the MCS could identify specific monkey actions, and there was neither false-positive nor false-negative detection of actions compared with manual estimation. Second, to check the reproducibility of MCS-assisted estimation, the same analyses of the above experiments were repeated by a different user. The estimation errors of positions of most body parts between the two experimenters were significantly smaller in the MCS-assisted estimation than in the manual estimation. Third, effects of methamphetamine (MAP) administration on the spontaneous behaviors of four monkeys were analyzed using the MCS. MAP significantly increased head movements, tended to decrease locomotion speed, and had no significant effect on total path length. The results were comparable to previous human clinical data. Furthermore, estimated data following MAP injection (total path length, walking speed, and speed of head rotation) correlated significantly between the two experimenters in the MCS-assisted estimation (r = 0.863 to 0.999). The results suggest that the presented MCS in monkeys is useful in investigating neural mechanisms underlying various psychiatric disorders and developing

  16. Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Unmanned Aerial System Metrology

    DTIC Science & Technology

    2013-10-18

area of 3D point estimation of flapping-wing UASs. The benefits of designing and developing such a system is instrumental in researching various...are many benefits to using SIFT in tracking. It detects features that are invariant to image scale and rotation, and are shown to provide robust...provided to estimate background motion for optical flow background subtraction. The experiments with the static background showed minute benefit in

  17. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-07-01

Recently, tremendous improvements have been achieved in the precision of localization of single fluorescent molecules, allowing localization and tracking of biomolecules at the nm level. Since the behaviour of proteins and biological molecules is tightly influenced by the cell's environment, a growing number of microscopy techniques are moving from in vitro to live cell experiments. Looking at both diffusion and active transportation processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images, and high temporal resolution (on the order of milliseconds). To satisfy these requirements we developed an automated routine that allows 3D tracking of single fluorescent molecules in living cells with nanometer accuracy, by exploiting the properties of the point-spread-function of out-of-focus Quantum Dots bound to the protein of interest.
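
The axial (z) coordinate in schemes of this kind is typically recovered by calibrating how the defocused PSF — the ring of an out-of-focus quantum dot — grows with distance from the focal plane, then inverting that curve for each detected spot. A toy sketch, assuming an illustrative linear calibration and emitters kept on one side of focus so the inversion is unambiguous:

```python
import numpy as np

# Illustrative calibration: defocus-ring radius (um) vs. axial distance (um)
# from the focal plane; values are hypothetical, not from the paper.
z_cal = np.linspace(0.0, 4.0, 41)
r_cal = 0.5 + 0.8 * z_cal            # monotonic, so invertible

def z_from_ring_radius(r):
    """Invert the calibration curve by interpolation."""
    return np.interp(r, r_cal, z_cal)

z_est = z_from_ring_radius(0.5 + 0.8 * 1.3)   # a spot whose true z is 1.3 um
```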

  18. A portable instrument for 3-D dynamic robot measurements using triangulation and laser tracking

    SciTech Connect

    Mayer, J.R.R. . Mechanical Engineering Dept.); Parker, G.A. . Dept. of Mechanical Engineering)

    1994-08-01

    The paper describes the development and validation of a 3-D measurement instrument capable of determining the static and dynamic performance of industrial robots to ISO standards. Using two laser beams to track an optical target attached to the robot end-effector, the target position coordinates may be estimated, relative to the instrument coordinate frame, to a high accuracy using triangulation principles. The effect of variations in the instrument geometry from the nominal model is evaluated through a kinematic model of the tracking head. Significant improvements of the measurement accuracy are then obtained by a simple adjustment of the main parameters. Extensive experimental test results are included to demonstrate the instrument performance. Finally typical static and dynamic measurement results for an industrial robot are presented to illustrate the effectiveness and usefulness of the instrument.
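
The triangulation principle above — intersecting the two tracking beams to locate the target — reduces to finding the closest point between two rays, since real measured beams rarely intersect exactly. A minimal sketch with illustrative geometry:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p_i + t_i * d_i."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2      # a = c = 1 after normalization
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b                    # zero only for parallel beams
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two tracking beams from known origins, both aimed at a target at (1, 2, 3)
target = np.array([1.0, 2.0, 3.0])
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0])
est = triangulate(p1, target - p1, p2, target - p2)
```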

  19. Passive markers for tracking surgical instruments in real-time 3-D ultrasound imaging.

    PubMed

    Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E

    2012-03-01

    A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts.

  20. A Soft Sensor-Based Three-Dimensional (3-D) Finger Motion Measurement System

    PubMed Central

    Park, Wookeun; Ro, Kyongkwan; Kim, Suin; Bae, Joonbum

    2017-01-01

In this study, a soft sensor-based three-dimensional (3-D) finger motion measurement system is proposed. The sensors, made of the soft material Ecoflex, comprise embedded microchannels filled with a conductive liquid metal (EGaIn). The superior elasticity, light weight, and sensitivity of soft sensors allow them to be embedded in environments in which conventional sensors cannot. Complicated finger joints, such as the carpometacarpal (CMC) joint of the thumb, are modeled to specify the location of the sensors. Algorithms to decouple the signals from the soft sensors are proposed to extract the pure flexion, extension, abduction, and adduction joint angles. The performance of the proposed system and algorithms is verified by comparison with a camera-based motion capture system. PMID:28241414
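
The signal-decoupling step can be sketched, in its simplest linear form, as inverting a calibration (coupling) matrix that maps joint angles to raw channel readings; the matrix values below are purely illustrative, not the paper's algorithm:

```python
import numpy as np

# Hypothetical calibration: each raw channel responds to a mix of the
# flexion and abduction angles of one joint (coupling matrix C).
C = np.array([[1.0, 0.3],
              [0.2, 1.0]])

def decouple(raw, C):
    """Recover pure joint angles from coupled soft-sensor readings."""
    return np.linalg.solve(C, raw)

angles_true = np.array([30.0, 10.0])   # degrees: flexion, abduction
raw = C @ angles_true                  # what the coupled channels report
angles = decouple(raw, C)
```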

  1. Contribution of Visuospatial and Motion-Tracking to Invisible Motion

    PubMed Central

    Battaglini, Luca; Casco, Clara

    2016-01-01

People experience an object's motion even when it is occluded. We investigate the processing of invisible motion in three experiments. Observers saw a moving circle passing behind an invisible, irregular hendecagonal polygon and had to respond as quickly as possible when the target had "just reappeared" from behind the occluder. Without explicit cues allowing the end of each of the eight hidden trajectories to be predicted (lengths ranging between 4.7 and 5 deg), we found anticipation errors, as expected if visuospatial attention was involved, provided that information on pre-occluder motion was available. This indicates that the observers, rather than simply responding when they saw the target, tended to anticipate its reappearance (Experiment 1). The new finding is that, with a fixation mark indicating the center of the invisible trajectory, a linear relationship between the physical and judged occlusion durations is found, but not without it (Experiment 2) or with a fixation mark varying in position from trial to trial (Experiment 3). We interpret the role of central fixation in distinguishing trajectory differences smaller than 0.3 deg by suggesting that it reflects spatiotemporal computation and motion-tracking. These two mechanisms allow visual imagery of the point symmetrical, with respect to fixation, to the point of disappearance, and then allow the occluded moving target to be tracked up to this point. PMID:27683566

  2. Evaluation of a Gait Assessment Module Using 3D Motion Capture Technology

    PubMed Central

    Baskwill, Amanda J.; Belli, Patricia; Kelleher, Leila

    2017-01-01

    Background Gait analysis is the study of human locomotion. In massage therapy, this observation is part of an assessment process that informs treatment planning. Massage therapy students must apply the theory of gait assessment to simulated patients. At Humber College, the gait assessment module traditionally consists of a textbook reading and a three-hour, in-class session in which students perform gait assessment on each other. In 2015, Humber College acquired a three-dimensional motion capture system. Purpose The purpose was to evaluate the use of 3D motion capture in a gait assessment module compared to the traditional gait assessment module. Participants Semester 2 massage therapy students who were enrolled in Massage Theory 2 (n = 38). Research Design Quasi-experimental, wait-list comparison study. Intervention The intervention group participated in an in-class session with a Qualisys motion capture system. Main Outcome Measure(s) The outcomes included knowledge and application of gait assessment theory as measured by quizzes, and students’ satisfaction as measured through a questionnaire. Results There were no statistically significant differences in baseline and post-module knowledge between both groups (pre-module: p = .46; post-module: p = .63). There was also no difference between groups on the final application question (p = .13). The intervention group enjoyed the in-class session because they could visualize the content, whereas the comparison group enjoyed the interactivity of the session. The intervention group recommended adding the assessment of gait on their classmates to their experience. Both groups noted more time was needed for the gait assessment module. Conclusions Based on the results of this study, it is recommended that the gait assessment module combine both the traditional in-class session and the 3D motion capture system. PMID:28293329

  3. Adaptive Kalman snake for semi-autonomous 3D vessel tracking.

    PubMed

    Lee, Sang-Hoon; Lee, Sanghoon

    2015-10-01

In this paper, we propose a robust semi-autonomous algorithm for 3D vessel segmentation and tracking based on an active contour model and a Kalman filter. For each computed tomography angiography (CTA) slice, we use the active contour model to segment the vessel boundary and the Kalman filter to track position and shape variations of the vessel boundary between slices. For successful segmentation via the active contour, we select an adequate number of initial points on the vessel contour; for the first slice these are set manually by the user. For the remaining slices, the initial contour position is estimated autonomously based on the segmentation results of the previous slice. To obtain refined segmentation results, an adaptive control-spacing algorithm is introduced into the active contour model. Moreover, a block-search-based initial contour estimation procedure is proposed to ensure that the initial contour of each slice lies near the vessel boundary. Experiments were performed on synthetic and real chest CTA images. Compared with the well-known Chan-Vese (CV) model, the proposed algorithm exhibited better performance in segmentation and tracking. In particular, receiver operating characteristic analysis on the synthetic and real CTA images demonstrated the time efficiency and tracking robustness of the proposed model; processing time was reduced by approximately 20%.
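
    The slice-to-slice tracking described above can be illustrated with a toy Kalman filter: a random-walk motion model predicts where a boundary point will lie on the next slice, and the active-contour measurement corrects the prediction. This is a generic sketch, not the paper's formulation; the state vector, noise variances `Q` and `R`, and the measurement values below are invented for illustration.

```python
import numpy as np

def kalman_step(x_est, P, z, Q=1e-2, R=1.0):
    """One predict/update cycle for a random-walk state model.

    x_est : current state estimate (2-vector: contour-point x, y)
    P     : state covariance (2x2)
    z     : measured position on the new slice (e.g. from the active contour)
    Q, R  : process / measurement noise variances (tuning parameters)
    """
    # Predict: random-walk model carries the position to the next slice
    x_pred = x_est
    P_pred = P + Q * np.eye(2)
    # Update: blend the prediction with the new measurement
    S = P_pred + R * np.eye(2)          # innovation covariance
    K = P_pred @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new

# Track one boundary point across slices: noisy measurements around (10, 20)
rng = np.random.default_rng(0)
x, P = np.array([9.0, 19.0]), np.eye(2)
for _ in range(50):
    z = np.array([10.0, 20.0]) + rng.normal(0.0, 1.0, 2)
    x, P = kalman_step(x, P, z)
print(np.round(x, 1))
```

    With a small process noise `Q` relative to `R`, the filter heavily smooths the per-slice measurements, which is the behavior one wants when vessel boundaries vary slowly between adjacent slices.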

  4. 3D Fluorescent and Reflective Imaging of Whole Stardust Tracks in Aerogel

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2011-11-07

The NASA Stardust mission returned to Earth in 2006, its cometary collector having captured over 1,000 particles in an aerogel medium at a relative velocity of 6.1 km/s. Particles captured in aerogel were heated, disaggregated and dispersed along 'tracks', or cavities, in the aerogel, each track representing the history of one capture event. Our focus has been to chemically and morphologically characterize whole tracks in three dimensions, utilizing solely non-destructive methods. To this end, we have used a variety of methods: 3D Laser Scanning Confocal Microscopy (LSCM), synchrotron X-ray fluorescence (SXRF), and synchrotron X-ray diffraction (SXRD). In the past months we have developed two new techniques to aid in data collection: (1) a new confocal microscope that enables autofluorescent and spectral imaging of aerogel samples, and (2) a stereo-SXRF technique to chemically identify large grains in SXRF maps in 3-space. The addition of both of these methods to our analytic abilities provides a greater understanding of the mechanisms and results of track formation.

  5. Quantifying the 3D Odorant Concentration Field Used by Actively Tracking Blue Crabs

    NASA Astrophysics Data System (ADS)

    Webster, D. R.; Dickman, B. D.; Jackson, J. L.; Weissburg, M. J.

    2007-11-01

Blue crabs and other aquatic organisms locate food and mates by tracking turbulent odorant plumes. The odorant concentration fluctuates unpredictably due to turbulent transport, and many characteristics of the fluctuation pattern have been hypothesized as useful cues for orienting to the odorant source. To make a direct linkage between tracking behavior and the odorant concentration signal, we developed a measurement system based on the laser-induced fluorescence technique to quantify the instantaneous 3D concentration field surrounding actively tracking blue crabs. The data suggest a correlation between upstream walking speed and the concentration of the odorant signal arriving at the antennule chemosensors, which are located near the mouth region. More specifically, we note an increase in upstream walking speed when high-concentration bursts arrive at the antennule location. We also test hypotheses regarding the ability of blue crabs to steer relative to the plume centerline based on the signal contrast between the chemosensors located on their leg appendages. These chemosensors are located much closer to the substrate than the antennules and are separated by the width of the blue crab. In this case, it appears that blue crabs use the bilateral signal comparison to track along the edge of the plume.

  6. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam must be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, resulting in better speckle tracking performance. However, those models do not consider common transformations applied to obtain the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. 
Results revealed that the proposed method is the most

  7. Characterisation of dynamic couplings at lower limb residuum/socket interface using 3D motion capture.

    PubMed

    Tang, Jinghua; McGrath, Michael; Laszczak, Piotr; Jiang, Liudi; Bader, Dan L; Moser, David; Zahedi, Saeed

    2015-12-01

Design and fitting of artificial limbs for lower limb amputees are largely based on the subjective judgement of the prosthetist. Understanding the science of three-dimensional (3D) dynamic coupling at the residuum/socket interface could potentially aid the design and fitting of the socket. A new method has been developed to characterise the 3D dynamic coupling at the residuum/socket interface using 3D motion capture, based on a single case study of a trans-femoral amputee. The new model incorporated a Virtual Residuum Segment (VRS) and a Socket Segment (SS), which combined to form the residuum/socket interface. Angular and axial couplings between the two segments were subsequently determined. Results indicated a non-rigid angular coupling in excess of 10° in the quasi-sagittal plane and an axial coupling of between 21 and 35 mm. Corresponding angular couplings of less than 4° and 2° were estimated in the quasi-coronal and quasi-transverse planes, respectively. We propose that the combined experimental and analytical approach adopted in this case study could aid the iterative socket fitting process and could potentially lead to a new socket design.

  8. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  9. Designing a high accuracy 3D auto stereoscopic eye tracking display, using a common LCD monitor

    NASA Astrophysics Data System (ADS)

    Taherkhani, Reza; Kia, Mohammad

    2012-09-01

This paper describes the design and construction of a low-cost, practical stereoscopic display that requires no special glasses and uses eye tracking to give viewers a large degree of freedom of movement while displaying the minimum amount of information. The parallax barrier technique is employed to turn an LCD into an auto-stereoscopic display. The stereo image pair is shown on an ordinary liquid crystal display simultaneously, but in different columns of pixels. Controlling the display at the red-green-blue sub-pixel level increases the accuracy of the light-projection direction to less than 2 degrees without sacrificing too much of the LCD's resolution; an eye-tracking system determines the correct angle to project the images toward the viewer's pupils, and an image-processing system places the 3D image data in the correct R-G-B sub-pixels. A light-direction control accuracy of 1.6 degrees was achieved in practice. The 3D monitor is built simply by applying some simple optical materials to an ordinary LCD with normal resolution.
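
    The sub-2-degree angular control described above follows from simple parallax-barrier geometry: adjacent sub-pixel columns behind a barrier slit emit into directions separated by roughly atan(sub-pixel pitch / barrier gap). The numbers below are invented for illustration (not from the paper), chosen for a typical desktop LCD.

```python
import math

# Hypothetical geometry (illustrative values, not the paper's):
subpixel_pitch_mm = 0.098   # one R, G or B column of a ~0.294 mm pixel LCD
barrier_gap_mm = 3.0        # assumed barrier-to-pixel-plane distance

# Angular separation between the projection directions of adjacent
# sub-pixel columns behind one barrier slit:
angular_step_deg = math.degrees(math.atan(subpixel_pitch_mm / barrier_gap_mm))
print(round(angular_step_deg, 2))   # about 1.87 degrees for these values
```

    Driving the display per sub-pixel rather than per pixel triples the number of addressable directions, which is why sub-pixel control brings the angular step under 2 degrees.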

  10. Nonintrusive viewpoint tracking for 3D perception in smart video conference

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Martinez-Ponte, Isabel; Meessen, Jerome; Delaigle, Jean-François

    2006-02-01

Globalisation of people's interaction in the industrial world and the ecological cost of transport make video-conferencing an interesting solution for collaborative work. However, the lack of immersive perception makes video-conferencing unappealing. The TIFANIS tele-immersion system was conceived to let users interact as if they were physically together. In this paper, we focus on an important feature of the immersive system: automatic tracking of the user's point of view in order to correctly render in his display the scene from the other site. Viewpoint information has to be computed in a very short time, and the detection system should be non-intrusive; otherwise it would become cumbersome for the user, i.e., he would lose the feeling of "being there". The viewpoint detection system consists of several modules. First, an analysis module identifies and follows regions of interest (ROI) where faces are detected. We show the cooperative approach between spatial detection and temporal tracking. Secondly, an eye detector finds the position of the eyes within faces. Then, the 3D positions of the eyes are deduced using stereoscopic images from a binocular camera. Finally, the 3D scene is rendered in real-time according to the new point of view.
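
    For a rectified pinhole stereo pair, the final step above (deducing 3D eye position from binocular images) reduces to depth = focal length × baseline / disparity. A minimal sketch with invented camera parameters, not the TIFANIS system's calibration:

```python
# Depth of a detected eye from binocular disparity (rectified pinhole model).
# f_px: focal length in pixels; baseline_m: camera separation in metres;
# disparity_px: horizontal pixel offset of the eye between the two views.
def triangulate_depth(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

f_px, baseline_m = 800.0, 0.12      # illustrative values
for d in (40.0, 20.0):              # eye found 40 px apart, then 20 px
    z = triangulate_depth(f_px, baseline_m, d)
    print(round(z, 2))              # larger disparity => nearer viewer
```

    The inverse relationship between disparity and depth is why accurate sub-pixel eye localization matters most for distant viewers.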

  11. The role of 3D and speckle tracking echocardiography in cardiac amyloidosis: a case report.

    PubMed

    Nucci, E M; Lisi, M; Cameli, M; Baldi, L; Puccetti, L; Mondillo, S; Favilli, R; Lunghetti, S

    2014-01-01

Cardiac amyloidosis (CA) is a disorder characterized by amyloid fibril deposition in the cardiac interstitium; it results in a restrictive cardiomyopathy with heart failure (HF) and conduction abnormalities. The gold standard for diagnosis of CA is myocardial biopsy, but possible sampling errors and procedural risks limit its use. Cardiac magnetic resonance imaging (MRI) offers more information than traditional echocardiography and allows diagnosis of CA, but it is often impossible to perform. We report the case of a man with HF and symptomatic bradyarrhythmia that required urgent pacemaker implantation. Echocardiography was strongly suggestive of CA, but MRI could not be performed to confirm this hypothesis because the patient had been implanted with a permanent pacemaker. Speckle tracking echocardiography (STE) and 3D echocardiography were therefore performed: STE can differentiate CA from other hypertrophic cardiomyopathies by a longitudinal strain value < 12%, and 3D echocardiography shows regional left ventricular dyssynchrony with a characteristic temporal pattern of dispersion of regional systolic volume change. On the basis of these results, an endomyocardial biopsy was finally performed, confirming the diagnosis of CA. This case underlines the importance of new, noninvasive techniques such as 3D echocardiography and STE for early diagnosis of CA, especially when MRI cannot be performed.

  12. Interactive Motion Planning for Steerable Needles in 3D Environments with Obstacles

    PubMed Central

    Patil, Sachin; Alterovitz, Ron

    2011-01-01

    Bevel-tip steerable needles for minimally invasive medical procedures can be used to reach clinical targets that are behind sensitive or impenetrable areas and are inaccessible to straight, rigid needles. We present a fast algorithm that can compute motion plans for steerable needles to reach targets in complex, 3D environments with obstacles at interactive rates. The fast computation makes this method suitable for online control of the steerable needle based on 3D imaging feedback and allows physicians to interactively edit the planning environment in real-time by adding obstacle definitions as they are discovered or become relevant. We achieve this fast performance by using a Rapidly Exploring Random Tree (RRT) combined with a reachability-guided sampling heuristic to alleviate the sensitivity of the RRT planner to the choice of the distance metric. We also relax the constraint of constant-curvature needle trajectories by relying on duty-cycling to realize bounded-curvature needle trajectories. These characteristics enable us to achieve orders of magnitude speed-up compared to previous approaches; we compute steerable needle motion plans in under 1 second for challenging environments containing complex, polyhedral obstacles and narrow passages. PMID:22294214
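
    The planner described above combines an RRT with a reachability-guided sampling heuristic and bounded-curvature trajectories realized by duty-cycling. The sketch below is a heavily simplified stand-in, not the paper's algorithm: straight-line extensions in a unit box with one spherical obstacle, where the curvature bound is crudely approximated by limiting the heading change allowed at each extension (`max_turn`). All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Start/goal in a unit box, one spherical obstacle in the middle.
start, goal = np.zeros(3), np.array([1.0, 1.0, 1.0])
obstacle_c, obstacle_r = np.array([0.5, 0.5, 0.5]), 0.2
step, max_turn = 0.1, np.deg2rad(60)   # max_turn: crude stand-in for a curvature bound

nodes = [start]
parent = {0: None}

def collision_free(p):
    return np.linalg.norm(p - obstacle_c) > obstacle_r

reached = False
for _ in range(5000):
    # goal-biased random sampling, as is conventional for RRTs
    sample = goal if rng.random() < 0.1 else rng.uniform(0.0, 1.0, 3)
    i = min(range(len(nodes)), key=lambda k: np.linalg.norm(nodes[k] - sample))
    d = sample - nodes[i]
    n = np.linalg.norm(d)
    if n < 1e-9:
        continue
    d = d / n
    # reachability-style filter: reject extensions needing too sharp a turn
    if parent[i] is not None:
        prev = nodes[i] - nodes[parent[i]]
        prev = prev / np.linalg.norm(prev)
        if np.arccos(np.clip(prev @ d, -1.0, 1.0)) > max_turn:
            continue
    new = nodes[i] + step * d
    if not collision_free(new):
        continue
    parent[len(nodes)] = i
    nodes.append(new)
    if np.linalg.norm(new - goal) < step:
        reached = True
        break

print(len(nodes), reached)
```

    Rejecting unreachable extensions before collision checking is the essence of reachability-guided sampling: it stops the nearest-neighbor metric from repeatedly pulling the tree toward samples the constrained system cannot actually approach.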

  13. A soft biomimetic tongue: model reconstruction and motion tracking

    NASA Astrophysics Data System (ADS)

    Lu, Xuanming; Xu, Weiliang; Li, Xiaoning

    2016-04-01

A bioinspired robotic tongue actuated by a network of compressed air channels is proposed to mimic the movements of the human tongue. It can be applied in fields such as medical science and food engineering. The robotic tongue is made of two kinds of silicone rubber, Ecoflex 0030 and PDMS, with a shape simplified from a real human tongue. To characterize the robotic tongue, a series of experiments was carried out. Laser scanning was used to reconstruct the static model of the robotic tongue under pressurization; each scan produced a dense set of points in a common 3D coordinate system, and the coordinates of each point were recorded. A motion tracking system (OptiTrack) was used to track and record the whole deformation process dynamically during the loading and unloading phases. In the experiments, five types of deformation were achieved: roll-up, roll-down, elongation, groove and twist. From the discrete points generated by laser scanning, an accurate parameterized outline of the robotic tongue under different pressures was obtained, which helps demonstrate the static characteristics of the robotic tongue. The precise deformation process under a given pressure was acquired through the OptiTrack system, which comprises a series of digital cameras, markers on the robotic tongue, and hardware and software for data processing. By tracking and recording the deformation process under different pressures, the dynamic characteristics of the robotic tongue could be obtained.

  14. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2009-03-19

We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of Laser Confocal Scanning Microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 µm. It has been our goal to perform a total nondestructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-ray fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 µm/pixel, without the use of oil-based lenses. A full textural analysis of track No. 82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No. 128, No. 129 and No. 140). We present a method of removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analysis.

  15. Feasibility Study for Ballet E-Learning: Automatic Composition System for Ballet "Enchainement" with Online 3D Motion Data Archive

    ERIC Educational Resources Information Center

    Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako

    2009-01-01

    This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…

  16. Segmentation and tracking of adherens junctions in 3D for the analysis of epithelial tissue morphogenesis.

    PubMed

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-04-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT).

  17. Segmentation and Tracking of Adherens Junctions in 3D for the Analysis of Epithelial Tissue Morphogenesis

    PubMed Central

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-01-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT) PMID:25884654

  18. Simulations of Coalescence and Breakup of Interfaces Using a 3D Front-tracking Method

    NASA Astrophysics Data System (ADS)

    Lu, Jiacai; Tryggvason, Gretar

    2015-11-01

Direct Numerical Simulations (DNS) of complex multiphase flows with coalescing and breaking-up interfaces are conducted using a 3D front-tracking method. The front-tracking method has been used successfully in DNS of turbulent channel bubbly flows and many other multiphase flows, but as the void fraction increases, changes in the interface topology, through coalescence and breakup, become more common and must be accounted for. Topology changes have often been identified as a challenge for front tracking, where the interface is represented using a triangular mesh, but here we present an efficient algorithm to change the topology of the triangular elements of interfaces. In the current implementation we have not included any small-scale attractive forces, so thin films coalesce either at prescribed times or when their thickness reaches a given value. Simulations of the collisions of two drops and comparisons with experimental results were used to validate the algorithm, but the main applications have been to flow regime transitions in pressure-driven gas-liquid channel flows. The evolution of the flow, including flow rate, wall shear, projected interface areas, pseudo-turbulence, and the average size of the various flow structures, is examined as the topology of the interface changes through coalescence and breakup. Research supported by DOE (CASL).

  19. Multiview diffeomorphic registration: application to motion and strain estimation from 3D echocardiography.

    PubMed

    Piella, Gemma; De Craene, Mathieu; Butakoff, Constantine; Grau, Vicente; Yao, Cheng; Nedjati-Gilani, Shahrum; Penney, Graeme P; Frangi, Alejandro F

    2013-04-01

This paper presents a new registration framework for quantifying myocardial motion and strain from the combination of multiple 3D ultrasound (US) sequences. The originality of our approach lies in estimating the transformation directly from the input multiple views rather than from a single view or a reconstructed compounded sequence. This allows us to exploit all spatiotemporal information available in the input views, avoiding occlusions and image fusion errors that could lead to inconsistencies in the motion quantification result. We propose a multiview diffeomorphic registration strategy that enforces smoothness and consistency in the spatiotemporal domain by modeling the 4D velocity field continuously in space and time. This 4D continuous representation considers 3D US sequences as a whole, therefore allowing it to cope robustly with variations in heart rate that result in different numbers of images acquired per cardiac cycle for different views. This contributes to the robustness gained by solving for a single transformation from all input sequences. The similarity metric takes into account the physics of US images and uses a weighting scheme to balance the contribution of the different views. It includes a comparison both between consecutive images and between a reference and each of the following images. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement fields. Registration and strain accuracy were evaluated on synthetic 3D US sequences with known ground truth. Experiments were also conducted on multiview 3D datasets of 8 volunteers and 1 patient treated by cardiac resynchronization therapy. Strain curves obtained from our multiview approach were compared to the single-view case, as well as with other multiview approaches. For healthy cases, the inclusion of several views improved the consistency of the strain curves and reduced the number of segments where a non-physiological strain pattern was
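
    As a small illustration of computing strain from spatial derivatives of a displacement field, the Green-Lagrange strain tensor follows from the displacement gradient via E = ½(FᵀF − I) with F = I + ∇u. This is a generic continuum-mechanics sketch, not the paper's discretization; the 10% stretch below is an invented example.

```python
import numpy as np

def green_lagrange_strain(grad_u):
    """Green-Lagrange strain from a 3x3 displacement gradient."""
    F = np.eye(3) + grad_u              # deformation gradient F = I + grad(u)
    return 0.5 * (F.T @ F - np.eye(3))  # E = 1/2 (F^T F - I)

# 10% uniaxial stretch along x: u_x = 0.1 * x, all other derivatives zero
grad_u = np.zeros((3, 3))
grad_u[0, 0] = 0.1
E = green_lagrange_strain(grad_u)
print(E[0, 0])   # 0.5 * (1.1**2 - 1) = 0.105
```

    Note the quadratic term: for a 10% stretch the Green-Lagrange component is 0.105, not 0.100, which matters at the strain magnitudes seen in myocardium.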

  20. 3D cardiac motion reconstruction from CT data and tagged MRI.

    PubMed

    Wang, Xiaoxu; Mihalef, Viorel; Qian, Zhen; Voros, Szilard; Metaxas, Dimitris

    2012-01-01

In this paper we present a novel method for left ventricle (LV) endocardium motion reconstruction using high-resolution CT data and tagged MRI. High-resolution CT data provide anatomic detail of the LV endocardial surface, such as the papillary muscles and trabeculae carneae, while tagged MRI provides better temporal resolution. The combination of these two imaging techniques gives a better understanding of left ventricle motion. The high-resolution CT images are segmented with the mean-shift method to generate the LV endocardium mesh. A meshless deformable model built from the high-resolution CT endocardial surface is fitted to tagged MRI of the same cardiac phase. 3D deformation of the myocardium is computed with Lagrangian dynamics and local Laplacian deformation. The segmented inner surface of the left ventricle shows high agreement when compared with images of the heart's inner surface: the papillary muscles are attached to the inner surface at their roots, and the free wall of the left ventricular inner surface is covered with trabeculae carneae. The deformation of the heart wall and the papillary muscles over the first half of the cardiac cycle is presented. The motion reconstruction results closely match live heart video.

  1. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples for medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computation of the integer-pixel search. Experiments were carried out, and the results indicated that the new method improved computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system is aimed at orthognathic surgery navigation, tracking the maxilla segment after a LeFort I osteotomy. Experiments showed noise for a static point at the level of 10^-3 mm and a measurement accuracy of 0.009 mm. The system was also demonstrated on skin-surface shape evaluation of a hand during finger stretching exercises, indicating great potential for tracking muscle and skin movements.
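
    The integer-pixel search that the authors simplify is, in its textbook form, a sliding zero-normalized cross-correlation (ZNCC) of a reference subset over a search window. The sketch below is that generic coarse search, not the paper's accelerated version; the subset size, search radius, and synthetic speckle image are invented for illustration.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def integer_search(ref, image, top_left, subset=15, radius=5):
    """Find the integer-pixel offset of a subset between two images."""
    y0, x0 = top_left
    template = ref[y0:y0 + subset, x0:x0 + subset]
    best, best_dyx = -2.0, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = image[y0 + dy:y0 + dy + subset, x0 + dx:x0 + dx + subset]
            c = zncc(template, cand)
            if c > best:
                best, best_dyx = c, (dy, dx)
    return best_dyx, best

# Synthetic speckle pattern, shifted by (2, -3) pixels between frames
rng = np.random.default_rng(2)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(2, -3), axis=(0, 1))
print(integer_search(ref, img, top_left=(20, 20)))
```

    This brute-force search costs O(radius² · subset²) per point, which is why real-time DIC systems like the one above simplify or prune it before refining to sub-pixel accuracy.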

  2. Exploring single-molecule interactions through 3D optical trapping and tracking: From thermal noise to protein refolding

    NASA Astrophysics Data System (ADS)

    Wong, Wesley Philip

The focus of this thesis is the development and application of a novel technique for investigating the structure and dynamics of weak interactions between and within single molecules. This approach is designed to explore unusual features in bi-directional transitions near equilibrium. The basic idea is to infer molecular events by observing changes in the three-dimensional Brownian fluctuations of a functionalized microsphere held weakly near a reactive substrate. Experimentally, I have developed a unique optical tweezers system that combines an interference technique for accurate 3D tracking (˜1 nm vertically, and ˜2-3 nm laterally) with a continuous autofocus system which stabilizes the trap height to within 1-2 nm over hours. A number of different physical and biological systems were investigated with this instrument. Data interpretation was assisted by a multi-scale Brownian Dynamics simulation that I have developed. I have explored the 3D signatures of different molecular tethers, distinguishing between single and multiple attachments, as well as between stiff and soft linkages. I have also developed a technique for measuring the force-dependent compliance of molecular tethers from thermal noise fluctuations and demonstrated this with a short ssDNA oligomer. Another practical approach that I have developed for extracting information from fluctuation measurements is Inverse Brownian Dynamics, which yields the underlying potential of mean force and position-dependent diffusion coefficient from the Brownian motion of a particle. I have also developed a new force calibration method that takes into account video motion blur and uses this information to measure bead dynamics. Perhaps most significantly, I have made the first direct observations of the refolding of spectrin repeats under mechanical force, and investigated the force-dependent kinetics of this transition.
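
    The simplest relative of the Inverse Brownian Dynamics idea mentioned above is Boltzmann inversion, U(x) = −kT ln p(x), applied to a histogram of equilibrium position fluctuations. The sketch below recovers the stiffness of a synthetic harmonic trap from simulated samples; all parameters are invented, and unlike the thesis method this sketch does not recover a position-dependent diffusion coefficient.

```python
import numpy as np

rng = np.random.default_rng(3)
kT = 1.0
k_spring = 4.0   # harmonic trap, U(x) = 0.5 * k * x^2 (ground truth)

# Equilibrium Boltzmann samples of a particle in the trap:
x = rng.normal(0.0, np.sqrt(kT / k_spring), 200_000)

hist, edges = np.histogram(x, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Invert only well-sampled bins near the minimum to avoid noisy tails
mask = (hist > 0) & (np.abs(centers) < 1.0)
U = -kT * np.log(hist[mask])       # Boltzmann inversion
U -= U.min()                       # set the potential minimum to zero

# Recover the spring constant from a quadratic fit U ~ 0.5 * k * x^2
k_fit = 2.0 * np.polyfit(centers[mask], U, 2)[0]
print(round(k_fit, 1))
```

    Restricting the fit to well-populated bins is essential: empty tail bins would give log(0), and sparsely sampled bins dominate the quadratic fit with noise.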

  3. Breakup of Finite-Size Colloidal Aggregates in Turbulent Flow Investigated by Three-Dimensional (3D) Particle Tracking Velocimetry.

    PubMed

    Saha, Debashish; Babler, Matthaus U; Holzner, Markus; Soos, Miroslav; Lüthi, Beat; Liberzon, Alex; Kinzelbach, Wolfgang

    2016-01-12

    Aggregates grown in mild shear flow are released, one at a time, into homogeneous isotropic turbulence, where their motion and intermittent breakup are recorded by three-dimensional particle tracking velocimetry (3D-PTV). The aggregates have an open structure with a fractal dimension of ∼2.2, and their size is 1.4 ± 0.4 mm, which is large compared to the Kolmogorov length scale (η = 0.15 mm). 3D-PTV of flow tracers allows for the simultaneous measurement of aggregate trajectories and the full velocity gradient tensor along their pathlines, which enables us to access the Lagrangian stress history of individual breakup events. From these data, we found no consistent pattern that relates breakup to the local flow properties at the point of breakup. Also, the correlation between aggregate size and both shear stress and normal stress at the location of breakage is found to be weaker than the correlation between size and drag stress. The analysis suggests that the aggregates are mostly broken due to the accumulation of drag stress over a time lag on the order of the Kolmogorov time scale. This finding is explained by the fact that the aggregates are large, which gives their motion inertia and increases the time for stress propagation inside the aggregate. Furthermore, it is found that the scaling of the largest fragment and the accumulated stress at breakup follows an earlier established power law, i.e., dfrag ∼ σ^(-0.6), obtained from laminar nozzle experiments. This indicates that, despite the large size and the different type of hydrodynamic stress, the microscopic mechanism causing breakup is consistent over a wide range of aggregate sizes and stress magnitudes.
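    The reported scaling dfrag ∼ σ^(-0.6) can be checked on event data with an ordinary log-log least-squares fit; a sketch on synthetic (invented) breakup events:

```python
import numpy as np

def powerlaw_exponent(stress, frag_size):
    """Least-squares slope of log(frag_size) versus log(stress)."""
    slope, _ = np.polyfit(np.log(stress), np.log(frag_size), 1)
    return slope

# synthetic breakup events following d_frag ~ sigma^(-0.6) (invented data)
rng = np.random.default_rng(0)
sigma = rng.uniform(0.1, 10.0, 200)                          # stress, a.u.
d_frag = sigma ** -0.6 * np.exp(rng.normal(0.0, 0.05, 200))  # lognormal scatter
exponent = powerlaw_exponent(sigma, d_frag)                  # close to -0.6
```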

  4. Application of 3d-ptv To Track Particle Moving Inside Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Cenedese, A.; Cushman, J. H.; Moroni, M.

    There exist a number of imaging-based measurement techniques for determining 3D velocity fields in an observation volume. Among these are: a) scanning techniques (Guezennec et al. 1994, Moroni and Cushman, 2001); b) holographic techniques (Hinsch and Hinrichs 1996); c) defocusing techniques (Willert and Gharib 1992); d) stereoscopic techniques (Maas et al. 1993, Kasagi and Nishino 1990). We have focused our attention on 3D-PTV, which is an experimental technique based on reconstructing 3D trajectories of reflecting tracer particles through a stereoscopic recording of image sequences. Coordinates are determined first and then trajectories are defined. 3D-PTV requires the operator to light a volume of the test section as opposed to 2D techniques that require a light sheet. Stereoscopic methods share the following basic steps (Papantoniou, 1990): a) stereoscopic calibrated imaging and recording of a suitably illuminated particle flow; b) subsequent photogrammetric analysis of the resulting images to derive the instantaneous 3-D particle positions and c) tracking of the 3-D coordinate sets in time to derive the tracer trajectories. The ideal setup for obtaining highly accurate trajectories requires the cameras to be mounted with the distance between them equal to the distance to the center of the measurement volume (with three cameras this requires a hexagonal cell). But the camera arrangement is usually a compromise between ideal geometrical conditions for a homogeneous distribution of accuracies in the measuring volume and practical restrictions associated with the experiment. The position of the cameras in object space (exterior orientation) and the parameters of each camera (interior orientation) are needed to reconstruct the 3D objects. These parameters can be calculated simultaneously in a so-called "bundle adjustment" or by pre-calibration.
A matched index (of refraction) porous medium heterogeneous at the bench scale has been constructed by filling
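    Step b), the photogrammetric analysis, reduces per particle to triangulating a 3D position from calibrated views. A minimal two-camera linear (DLT) triangulation sketch, with the projection matrices assumed to come from the calibration described above:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one particle from two calibrated views.

    P1, P2: 3x4 camera projection matrices (from bundle adjustment or
    pre-calibration); uv1, uv2: the particle's image coordinates in each view.
    Solves the homogeneous system A X = 0 in the least-squares sense via SVD.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector = homogeneous 3D point
    return X[:3] / X[3]
```

    With more than two cameras, the same A matrix simply gains two rows per extra view, which is how a three-camera arrangement improves accuracy.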

  5. Soft-tissue motion tracking and structure estimation for robotic assisted MIS procedures.

    PubMed

    Stoyanov, Danail; Mylonas, George P; Deligianni, Fani; Darzi, Ara; Yang, Guang Zhong

    2005-01-01

    In robotically assisted laparoscopic surgery, soft-tissue motion tracking and structure recovery are important for intraoperative surgical guidance, motion compensation and delivering active constraints. In this paper, we present a novel method for feature-based motion tracking of deformable soft-tissue surfaces in totally endoscopic coronary artery bypass graft (TECAB) surgery. We combine two feature detectors to recover distinct regions on the epicardial surface for which the sparse 3D surface geometry may be computed using a pre-calibrated stereo laparoscope. The movement of the 3D points is then tracked in the stereo images with stereo-temporal constraints by using an iterative registration algorithm. The practical value of the technique is demonstrated on both a deformable phantom model with tomographically derived surface geometry and in vivo robotic assisted minimally invasive surgery (MIS) image sequences.

  6. Kinematic ground motion simulations on rough faults including effects of 3D stochastic velocity perturbations

    USGS Publications Warehouse

    Graves, Robert; Pitarka, Arben

    2016-01-01

    We describe a methodology for generating kinematic earthquake ruptures for use in 3D ground‐motion simulations over the 0–5 Hz frequency band. Our approach begins by specifying a spatially random slip distribution that has a roughly wavenumber‐squared fall‐off. Given a hypocenter, the rupture speed is specified to average about 75%–80% of the local shear wavespeed and the prescribed slip‐rate function has a Kostrov‐like shape with a fault‐averaged rise time that scales self‐similarly with the seismic moment. Both the rupture time and rise time include significant local perturbations across the fault surface specified by spatially random fields that are partially correlated with the underlying slip distribution. We represent velocity‐strengthening fault zones in the shallow (<5  km) and deep (>15  km) crust by decreasing rupture speed and increasing rise time in these regions. Additional refinements to this approach include the incorporation of geometric perturbations to the fault surface, 3D stochastic correlated perturbations to the P‐ and S‐wave velocity structure, and a damage zone surrounding the shallow fault surface characterized by a 30% reduction in seismic velocity. We demonstrate the approach using a suite of simulations for a hypothetical Mw 6.45 strike‐slip earthquake embedded in a generalized hard‐rock velocity structure. The simulation results are compared with the median predictions from the 2014 Next Generation Attenuation‐West2 Project ground‐motion prediction equations and show very good agreement over the frequency band 0.1–5 Hz for distances out to 25 km from the fault. Additionally, the newly added features act to reduce the coherency of the radiated higher frequency (f>1  Hz) ground motions, and homogenize radiation‐pattern effects in this same bandwidth, which move the simulations closer to the statistical characteristics of observed motions as illustrated by comparison with recordings from
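    The wavenumber-squared slip fall-off can be sketched by shaping white noise in the Fourier domain. The corner wavenumber and grid sizes below are invented, and the actual generator is more elaborate (correlated rupture-time and rise-time fields, tapering, velocity-strengthening zones, etc.):

```python
import numpy as np

def random_slip(nx, nz, mean_slip=1.0, seed=0):
    """Spatially random slip with a roughly k^-2 amplitude fall-off.

    White noise is shaped in the 2D Fourier domain by 1 / (1 + k^2/k0^2),
    then the field is shifted non-negative and rescaled to the target mean.
    """
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx)[None, :]
    kz = np.fft.fftfreq(nz)[:, None]
    k2 = kx**2 + kz**2
    k0_sq = k2[0, 1]                      # corner wavenumber (lowest mode)
    spectrum = np.fft.fft2(rng.standard_normal((nz, nx))) / (1.0 + k2 / k0_sq)
    slip = np.real(np.fft.ifft2(spectrum))
    slip -= slip.min()                    # slip cannot be negative
    return slip * (mean_slip / slip.mean())

slip = random_slip(128, 64)               # along-strike x down-dip grid
```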

  7. Correlation between the respiratory waveform measured using a respiratory sensor and 3D tumor motion in gated radiotherapy

    SciTech Connect

    Tsunashima, Yoshikazu. E-mail: tsunashima@pmrc.tsukuba.ac.jp; Sakae, Takeji; Shioyama, Yoshiyuki; Kagei, Kenji; Terunuma, Toshiyuki; Nohtomi, Akihiro; Akine, Yasuyuki

    2004-11-01

    Purpose: The purpose of this study is to investigate the correlation between the respiratory waveform measured using a respiratory sensor and three-dimensional (3D) tumor motion. Methods and materials: A laser displacement sensor (LDS: KEYENCE LB-300) that measures distance using infrared light was used as the respiratory sensor. This was placed such that the focus was in an area around the patient's navel. When the distance from the LDS to the body surface changes as the patient breathes, the displacement is detected as a respiratory waveform. To obtain the 3D tumor motion, a biplane digital radiography unit was used. For the tumor in the lung, liver, and esophagus of 26 patients, the waveform was compared with the 3D tumor motion. The relationship between the respiratory waveform and the 3D tumor motion was analyzed by means of the Fourier transform and a cross-correlation function. Results: The respiratory waveform cycle agreed with that of the cranial-caudal and dorsal-ventral tumor motion. A phase shift observed between the respiratory waveform and the 3D tumor motion was principally in the range 0.0 to 0.3 s, regardless of the organ being measured, which means that the respiratory waveform does not always express the 3D tumor motion with fidelity. For this reason, the standard deviation of the tumor position in the expiration phase, as indicated by the respiratory waveform, was derived, which should be helpful in suggesting the internal margin required in the case of respiratory gated radiotherapy. Conclusion: Although obtained from only a few breathing cycles for each patient, the correlation between the respiratory waveform and the 3D tumor motion was evident in this study. If this relationship is analyzed carefully and an internal margin is applied, the accuracy and convenience of respiratory gated radiotherapy could be improved by use of the respiratory sensor. Thus, it is expected that this procedure will come into wider use.
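    The phase shift between two such signals can be estimated with a discrete cross-correlation. A sketch with an invented 30 Hz sampling rate and an idealized breath-like test pulse (the study itself used biplane radiography, not simulated data):

```python
import numpy as np

def phase_shift(resp, tumor, fs):
    """Time lag (s) at which the cross-correlation of two equally sampled
    signals peaks; positive means `tumor` lags behind `resp`."""
    resp = resp - resp.mean()
    tumor = tumor - tumor.mean()
    xcorr = np.correlate(tumor, resp, mode="full")
    lag = np.argmax(xcorr) - (len(resp) - 1)
    return lag / fs

fs = 30.0                                        # assumed sampling rate, Hz
t = np.arange(0.0, 20.0, 1.0 / fs)
# an isolated breath-like pulse (edges ~0, so correlation edge effects vanish)
resp = np.exp(-0.5 * ((t - 10.0) / 0.5) ** 2)
tumor = np.exp(-0.5 * ((t - 10.2) / 0.5) ** 2)   # tumor delayed by 0.2 s
```

    Here `phase_shift(resp, tumor, fs)` recovers the 0.2 s delay; applied to measured waveforms, the same routine yields the 0.0 to 0.3 s shifts reported above.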

  8. Real-time motion- and B0-correction for LASER-localized spiral-accelerated 3D-MRSI of the brain at 3T.

    PubMed

    Bogner, Wolfgang; Hess, Aaron T; Gagoski, Borjan; Tisdall, M Dylan; van der Kouwe, Andre J W; Trattnig, Siegfried; Rosen, Bruce; Andronesi, Ovidiu C

    2014-03-01

    The full potential of magnetic resonance spectroscopic imaging (MRSI) is often limited by localization artifacts, motion-related artifacts, scanner instabilities, and long measurement times. Localized adiabatic selective refocusing (LASER) provides accurate B1-insensitive spatial excitation even at high magnetic fields. Spiral encoding accelerates MRSI acquisition, and thus, enables 3D-coverage without compromising spatial resolution. Real-time position- and shim/frequency-tracking using MR navigators corrects motion- and scanner instability-related artifacts. Each of these three advanced MRI techniques provides superior MRSI data compared to commonly used methods. In this work, we integrated these three promising approaches in a single pulse sequence. Real-time correction of motion, shim, and frequency drifts using volumetric dual-contrast echo planar imaging-based navigators was implemented in an MRSI sequence that uses low-power gradient-modulated short-echo-time LASER localization and time-efficient spiral readouts, in order to provide fast and robust 3D-MRSI in the human brain at 3T. The proposed sequence was demonstrated to be insensitive to motion- and scanner drift-related degradations of MRSI data in both phantoms and volunteers. Motion and scanner drift artifacts were eliminated and excellent spectral quality was recovered in the presence of strong movement. Our results confirm the expected benefits of combining a spiral 3D-LASER-MRSI sequence with real-time correction. The new sequence provides accurate, fast, and robust 3D metabolic imaging of the human brain at 3T. This will further facilitate the use of 3D-MRSI for neuroscience and clinical applications.

  9. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images

    NASA Astrophysics Data System (ADS)

    Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao

    2016-11-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate a SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to less computational costs. At the same time the achieved scene completeness and geometric accuracy are comparable.
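    The degree-bounded maximum spanning tree at the heart of the SCN construction can be approximated with a greedy Kruskal pass that skips edges violating a degree cap. This is a simple heuristic stand-in for the paper's hierarchical degree-bounded variant, with invented function and variable names:

```python
def degree_bounded_mst(n, edges, max_degree=3):
    """Greedy Kruskal-style maximum spanning tree with a per-node degree cap.

    n: number of cameras; edges: (weight, u, v) connectivity scores.
    Heaviest edges are tried first; an edge is kept only if it joins two
    components and neither endpoint has reached the degree cap.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    degree = [0] * n
    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv and degree[u] < max_degree and degree[v] < max_degree:
            parent[ru] = rv                 # union the two components
            degree[u] += 1
            degree[v] += 1
            tree.append((u, v, w))
    return tree

# four cameras in a chain: the three heaviest non-cyclic edges survive
edges = [(5, 0, 1), (4, 1, 2), (3, 2, 3), (2, 0, 2), (1, 0, 3)]
tree = degree_bounded_mst(4, edges, max_degree=2)
```

    Tie-point matching then runs only over the kept edges, which is the source of the reduction in matched image pairs reported above.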

  10. Velocity and Density Models Incorporating the Cascadia Subduction Zone for 3D Earthquake Ground Motion Simulations

    USGS Publications Warehouse

    Stephenson, William J.

    2007-01-01

    INTRODUCTION In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2?N to 50?N latitude, and from about -122?W to -129?W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper-mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.

  11. Application of 3D hydrodynamic and particle tracking models for better environmental management of finfish culture

    NASA Astrophysics Data System (ADS)

    Moreno Navas, Juan; Telfer, Trevor C.; Ross, Lindsay G.

    2011-04-01

    Hydrographic conditions, and particularly current speeds, have a strong influence on the management of fish cage culture. These hydrodynamic conditions can be used to predict particle movement within the water column and the results used to optimise environmental conditions for effective site selection, setting of environmental quality standards, waste dispersion, and potential disease transfer. To this end, a 3D hydrodynamic model, MOHID, has been coupled to a particle tracking model to study the effects of mean current speed, quiescent water periods and bulk water circulation in Mulroy Bay, Co. Donegal, Ireland, an Irish fjard (shallow fjordic system) important to the aquaculture industry. A Lagrangian method simulated the instantaneous release of "particles" emulating discharge from finfish cages to show the behaviour of waste in terms of water circulation and water exchange. The 3D spatial models were used to identify areas of mixed and stratified water using a version of the Simpson-Hunter criteria, and to use this in conjunction with models of current flow for appropriate site selection for salmon aquaculture. The modelled outcomes for stratification were in good agreement with the direct measurements of water column stratification based on observed density profiles. Calculations of the Simpson-Hunter tidal parameter indicated that most of Mulroy Bay was potentially stratified with a well mixed region over the shallow channels where the water is faster flowing. The fjard was characterised by areas of both very low and high mean current speeds, with some areas having long periods of quiescent water. The residual current and the particle tracking animations created through the models revealed an anticlockwise eddy that may influence waste dispersion and potential for disease transfer, among salmon cages and which ensures that the retention time of waste substances from cages is extended. The hydrodynamic model results were incorporated into the ArcView™ GIS
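    The Simpson-Hunter classification used above amounts to thresholding log10(h/u³). A sketch using the commonly quoted shelf-sea frontal value of 2.7 (h in m, u in m/s), which is an assumption here, not necessarily the value tuned for Mulroy Bay:

```python
import numpy as np

def simpson_hunter(depth, u_rms, front=2.7):
    """log10(h / u^3) stratification parameter (Simpson-Hunter).

    Values well above `front` indicate stratification; values below it
    indicate tidal mixing. depth in m, tidal current u_rms in m/s.
    """
    x = np.log10(depth / u_rms**3)
    return x, np.where(x < front, "mixed", "stratified")

h = np.array([5.0, 40.0])     # shallow channel vs deeper basin, m (invented)
u = np.array([0.8, 0.05])     # tidal current amplitude, m/s (invented)
x, regime = simpson_hunter(h, u)
```

    The fast-flowing shallow channel classifies as mixed and the quiescent basin as stratified, mirroring the pattern reported for the bay.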

  12. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

    Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and also in conjunction with other modalities like CT or MRI. Three dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo exist today. Although these systems have been found to be useful in real clinical setting, it is difficult to augment their functionality and integrate them in versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (i.e. dual view visualization, registration, real-time tracking, segmentation, etc) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is fully being integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.

  13. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

    We studied defragmented image based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of PB angular orientation with respect to a display panel. This was critical for both image color balancing and minimizing image resolution mismatch between horizontal and vertical directions. For evaluating uniformity of image brightness, we applied optical ray tracing simulations. The simulations took effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity at around sweet spots in viewing zones. However, this was contradicted by real experimental results. We offer quantitative treatment of illuminance uniformity of view images to estimate misalignment of PB orientation, which could account for brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of PB orientation due to practical restrictions of adjustment accuracy can induce substantial non-uniformity of view images' brightness. We find that image brightness non-uniformity critically depends on misalignment of PB angular orientation, for example, as slight as 0.01° in our system. This reveals that reducing misalignment of PB angular orientation from the order of 10⁻² to 10⁻³ degrees can greatly improve the brightness uniformity.

  14. Using an automated 3D-tracking system to record individual and shoals of adult zebrafish.

    PubMed

    Maaswinkel, Hans; Zhu, Liqun; Weng, Wei

    2013-12-05

    Like many aquatic animals, the zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about the vertical and horizontal distribution of the zebrafish, about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle), and it provides the data necessary to calculate parameters for social cohesion when testing shoals.
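    The water-to-air error that the calibration must correct follows from Snell's law. A sketch of the size of the effect for a single viewing ray, with invented numbers (10 cm depth, 30° viewing angle):

```python
import math

N_WATER = 1.333   # refractive index of water; air taken as 1.0

def refracted_offset(depth, theta_air):
    """Horizontal offset of a submerged target at `depth` below the surface,
    viewed along a ray meeting the surface at `theta_air` radians from the
    vertical. Ignoring refraction (using tan(theta_air)) overestimates it.
    """
    sin_w = math.sin(theta_air) / N_WATER   # Snell: sin_air = n_water * sin_water
    return depth * math.tan(math.asin(sin_w))

naive_off = 0.10 * math.tan(math.radians(30.0))     # no-refraction estimate, m
corrected = refracted_offset(0.10, math.radians(30.0))
```

    At these values the uncorrected estimate is roughly 40% too large, which is why the calibration step is essential for quantitative 3D trajectories.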

  15. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    SciTech Connect

    Brix, Lau; Ringgaard, Steffen; Sørensen, Thomas Sangild; Poulsen, Per Rugaard

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. 
Results: Axial, sagittal
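    Step (3), selecting the template position with the highest normalized cross-correlation coefficient, can be sketched directly. This is a naive O(N²) implementation for clarity; production code would use FFT-based correlation:

```python
import numpy as np

def ncc(image, template):
    """Normalized cross-correlation coefficient map (valid positions only).

    Returns a 2D array of Pearson correlations between `template` and every
    same-sized window of `image`; the argmax is the best match position.
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t**2).sum())
    H, W = image.shape
    out = np.empty((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = image[i:i+th, j:j+tw]
            w = win - win.mean()
            denom = np.sqrt((w**2).sum()) * tnorm
            out[i, j] = (w * t).sum() / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(1)
image = rng.random((40, 40))
template = image[12:20, 25:33]        # template cut from a known position
score = ncc(image, template)
row, col = np.unravel_index(score.argmax(), score.shape)
```

    In the paper's setting this search is repeated over the whole template library, and the winning template's known 3D offset supplies the through-plane coordinate.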

  16. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    PubMed

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach for tracking deformable structures in 3D ultrasound sequences. Our method consists of obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation under different ultrasound shortcomings, including speckle noise, large shadows, and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges.

  17. Nonrigid Registration of 2-D and 3-D Dynamic Cell Nuclei Images for Improved Classification of Subcellular Particle Motion

    PubMed Central

    Kim, Il-Han; Chen, Yi-Chun M.; Spector, David L.; Eils, Roland; Rohr, Karl

    2012-01-01

    The observed motion of subcellular particles in fluorescence microscopy image sequences of live cells is generally a superposition of the motion and deformation of the cell and the motion of the particles. Decoupling the two types of movements to enable accurate classification of the particle motion requires the application of registration algorithms. We have developed an intensity-based approach for nonrigid registration of multi-channel microscopy image sequences of cell nuclei. First, based on 3-D synthetic images we demonstrate that cell nucleus deformations change the observed motion types of particles and that our approach allows to recover the original motion. Second, we have successfully applied our approach to register 2-D and 3-D real microscopy image sequences. A quantitative experimental comparison with previous approaches for nonrigid registration of cell microscopy has also been performed. PMID:20840894

  18. Global Existence and Asymptotic Behavior of Affine Motion of 3D Ideal Fluids Surrounded by Vacuum

    NASA Astrophysics Data System (ADS)

    Sideris, Thomas C.

    2017-03-01

    The 3D compressible and incompressible Euler equations with a physical vacuum free boundary condition and affine initial conditions reduce to a globally solvable Hamiltonian system of ordinary differential equations for the deformation gradient in GL⁺(3, ℝ). The evolution of the fluid domain is described by a family of ellipsoids whose diameter grows at a rate proportional to time. Upon rescaling to a fixed diameter, the asymptotic limit of the fluid ellipsoid is determined by a positive semi-definite quadratic form of rank r = 1, 2, or 3, corresponding to the asymptotic degeneration of the ellipsoid along 3-r of its principal axes. In the compressible case, the asymptotic limit has rank r = 3, and asymptotic completeness holds, when the adiabatic index γ satisfies 4/3 < γ < 2. The number of possible degeneracies, 3-r, increases with the value of the adiabatic index γ. In the incompressible case, affine motion reduces to geodesic flow in SL(3, ℝ) with the Euclidean metric. For incompressible affine swirling flow, there is a structural instability. Generically, when the vorticity is nonzero, the domains degenerate along only one axis, but the physical vacuum boundary condition fails over a finite time interval. The rescaled fluid domains of irrotational motion can collapse along two axes.

  19. The role of perspective information in the recovery of 3D structure-from-motion.

    PubMed

    Eagle, R A; Hogervorst, M A

    1999-05-01

    When investigating the recovery of three-dimensional structure-from-motion (SFM), vision scientists often assume that scaled-orthographic projection, which removes effects due to depth variations across the object, is an adequate approximation to full perspective projection. This is so even though SFM judgements can, in principle, be improved by exploiting perspective projection of scenes onto the retina. In an experiment, pairs of rotating hinged planes (open books) were simulated on a computer monitor, under either perspective or orthographic projection, and human observers were asked to indicate which they perceived had the larger dihedral angle. For small displays (4.6 × 6.0 degrees) discrimination thresholds were found to be similar under the two conditions, but diverged for all larger stimuli. In particular, as stimulus size was increased, performance under orthographic projection declined, and by a stimulus size of 32 × 41 degrees performance was at chance for all subjects. In contrast, thresholds decreased under perspective projection as stimulus size was increased. These results show that human observers can use the information gained from perspective projection to recover SFM and that scaled-orthographic projection becomes an unacceptable approximation even at quite modest stimulus sizes. A model of SFM that incorporates measurement errors on the retinal motions accounts for performance under both projection systems, suggesting that this early noise forms the primary limitation on 3D discrimination performance.
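    The distinction between the two projection models can be made concrete: scaled-orthographic projection ignores per-point depth, while perspective projection divides by it. A sketch with invented geometry:

```python
import numpy as np

def perspective(points, f=1.0):
    """Pinhole projection: (X, Y, Z) -> f * (X/Z, Y/Z)."""
    return f * points[:, :2] / points[:, 2:3]

def scaled_orthographic(points, z_ref):
    """Scaled-orthographic projection: every point's depth is replaced by
    a single reference depth z_ref."""
    return points[:, :2] / z_ref

# points on a 'hinged plane' edge receding in depth (illustrative)
pts = np.array([[0.5, 0.0, 10.0],
                [0.5, 0.0, 11.0],
                [0.5, 0.0, 12.0]])
persp = perspective(pts)
ortho = scaled_orthographic(pts, z_ref=11.0)
```

    Under orthography the three points land on the same image location; under perspective their shared X spreads out with depth. That depth-dependent spread is the extra cue observers exploit at large stimulus sizes.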

  20. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. Thus, this information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
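    The quoted improvement over the persistent model is presumably an error-ratio skill score; a sketch assuming RMSE as the error metric (the abstract does not state the exact metric):

```python
import numpy as np

def improvement_over_persistence(irradiance, forecast, horizon):
    """Percent RMSE improvement of `forecast` over the persistence model,
    which predicts irradiance[t + horizon] = irradiance[t].

    `forecast` must be aligned with irradiance[horizon:].
    """
    truth = irradiance[horizon:]
    persist = irradiance[:-horizon]
    rmse = lambda pred: np.sqrt(np.mean((truth - pred) ** 2))
    return 100.0 * (1.0 - rmse(forecast) / rmse(persist))
```

    A score of 26% means the forecast's RMSE is 74% of what simply carrying the last observation forward would give at that horizon.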

  2. A Two-Axis Goniometric Sensor for Tracking Finger Motion.

    PubMed

    Wang, Lefan; Meydan, Turgut; Williams, Paul Ieuan

    2017-04-05

    The study of finger kinematics has developed into an important research area. Various hand tracking systems are currently available; however, they all have limited functionality. Generally, the most commonly adopted sensors are limited to measurements with one degree of freedom, i.e., flexion/extension of fingers. More advanced measurements including finger abduction, adduction, and circumduction are much more difficult to achieve. To overcome these limitations, we propose a two-axis 3D printed optical sensor with a compact configuration for tracking finger motion. Based on Malus' law, this sensor detects the angular changes by analyzing the attenuation of light transmitted through polarizing film. The sensor consists of two orthogonal axes each containing two pathways. The two readings from each axis are fused using a weighted average approach, enabling a measurement range up to 180° and an improvement in sensitivity. The sensor demonstrates high accuracy (±0.3°), high repeatability, and low hysteresis error. Attaching the sensor to the index finger's metacarpophalangeal joint, real-time movements consisting of flexion/extension, abduction/adduction and circumduction have been successfully recorded. The proposed two-axis sensor has demonstrated its capability for measuring finger movements with two degrees of freedom and can be potentially used to monitor other types of body motion.
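
    The sensing principle is Malus' law: transmitted intensity through a polarizer pair falls off as the cosine squared of their relative angle. A minimal sketch of the forward model and its inversion is below; the paper's two-pathway-per-axis weighted fusion (which extends the range to 180° and improves sensitivity) is not reproduced, and the function names are hypothetical.

    ```python
    import math

    def malus_intensity(i0, theta_deg):
        """Malus' law: intensity transmitted through a polarizer pair
        at relative angle theta, given incident intensity i0."""
        return i0 * math.cos(math.radians(theta_deg)) ** 2

    def angle_from_intensity(i, i0):
        """Invert Malus' law. A single pathway only resolves angles in
        [0, 90] degrees, which is why the sensor needs two pathways per
        axis to cover 180 degrees."""
        return math.degrees(math.acos(math.sqrt(i / i0)))

    # Round trip: a 60-degree rotation attenuates light to 25% of i0,
    # and inverting that reading recovers the angle.
    theta = angle_from_intensity(malus_intensity(1.0, 60.0), 1.0)
    ```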

  3. Methods for abdominal respiratory motion tracking.

    PubMed

    Spinczyk, Dominik; Karwan, Adam; Copik, Marcin

    2014-01-01

    Non-invasive surface registration methods have been developed to register and track breathing motions in a patient's abdomen and thorax. We evaluated several different registration methods, including marker tracking using a stereo camera, chessboard image projection, and abdominal point clouds. Our point cloud approach was based on a time-of-flight (ToF) sensor that tracked the abdominal surface. We tested different respiratory phases using additional markers as landmarks for the extension of the non-rigid Iterative Closest Point (ICP) algorithm to improve the matching of irregular meshes. Four variants for retrieving the correspondence data were implemented and compared. Our evaluation involved 9 healthy individuals (3 females and 6 males) with point clouds captured in opposite breathing phases (i.e., inhalation and exhalation). We measured three factors: surface distance, correspondence distance, and marker error. To evaluate different methods for computing the correspondence measurements, we defined the number of correspondences for every target point and the average correspondence assignment error of the points nearest the markers.
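
    The paper evaluates variants of the correspondence step in a non-rigid ICP extension; that extension is not reproduced here, but the basic correspondence step common to all ICP variants — assigning each source point its nearest target point — can be sketched as follows (brute force, hypothetical names):

    ```python
    def nearest_correspondences(source, target):
        """For each 3D source point, return the index of the nearest target
        point (the correspondence step of a basic ICP iteration)."""
        def d2(p, q):
            # squared Euclidean distance, avoiding an unnecessary sqrt
            return sum((a - b) ** 2 for a, b in zip(p, q))
        return [min(range(len(target)), key=lambda j: d2(p, target[j]))
                for p in source]

    src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
    tgt = [(0.1, 0.0, 0.0), (0.9, 0.1, 0.0), (5.0, 5.0, 5.0)]
    idx = nearest_correspondences(src, tgt)
    ```

    In practice a k-d tree replaces the brute-force search, and, as in the study, extra landmarks (markers) can constrain the matching of irregular meshes.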

  4. 4D ultrasound speckle tracking of intra-fraction prostate motion: a phantom-based comparison with x-ray fiducial tracking using CyberKnife

    NASA Astrophysics Data System (ADS)

    O'Shea, Tuathan P.; Garcia, Leo J.; Rosser, Karen E.; Harris, Emma J.; Evans, Philip M.; Bamber, Jeffrey C.

    2014-04-01

    This study investigates the use of a mechanically-swept 3D ultrasound (3D-US) probe for soft-tissue displacement monitoring during prostate irradiation, with emphasis on quantifying the accuracy relative to CyberKnife® x-ray fiducial tracking. An US phantom implanted with x-ray fiducial markers was placed on a motion platform and translated in 3D using five real prostate motion traces acquired with the Calypso system. Motion traces were representative of all types of motion as classified by studying Calypso data for 22 patients. The phantom was imaged using a 3D swept linear-array probe (to mimic trans-perineal imaging) and, subsequently, the kV x-ray imaging system on CyberKnife. A 3D cross-correlation block-matching algorithm was used to track speckle in the ultrasound data. Fiducial and US data were each compared with the known phantom displacement. Trans-perineal 3D-US imaging could track superior-inferior (SI) and anterior-posterior (AP) motion to ≤0.81 mm root-mean-square error (RMSE) at a 1.7 Hz volume rate. The maximum kV x-ray tracking RMSE was 0.74 mm; however, the prostate motion was sampled at a significantly lower imaging rate (mean: 0.04 Hz). Initial elevational (right-left; RL) US displacement estimates showed reduced accuracy but could be improved (RMSE <2.0 mm) using a correlation threshold in the ultrasound tracking code to remove erroneous inter-volume displacement estimates. Mechanically-swept 3D-US can track the major components of intra-fraction prostate motion accurately but exhibits some limitations. The largest US RMSE was for elevational (RL) motion. For the AP and SI axes, accuracy was sub-millimetre. It may be feasible to track prostate motion in 2D only. 3D-US also has the potential to provide high tracking accuracy for all motion types. It would be advisable to use US in conjunction with a small (~2.0 mm) centre-of-mass displacement threshold, in which case it would be possible to take full advantage of the accuracy and high imaging
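
    The correlation-threshold idea — discard a displacement estimate when the cross-correlation peak is too weak to trust — can be illustrated in one dimension. This is a hypothetical simplification of the study's 3D block-matching tracker, with assumed names and an assumed threshold value:

    ```python
    import math

    def ncc(a, b):
        """Normalized cross-correlation of two equal-length windows."""
        ma = sum(a) / len(a)
        mb = sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                        sum((y - mb) ** 2 for y in b))
        return num / den if den else 0.0

    def track_displacement(ref, cur, max_lag, min_corr=0.8):
        """Best integer shift of `cur` relative to `ref`; returns None when
        the correlation peak falls below `min_corr`, rejecting unreliable
        (decorrelated) inter-volume estimates."""
        n = len(ref)
        best = max(((lag, ncc(ref[:n - max_lag], cur[lag:lag + n - max_lag]))
                    for lag in range(0, max_lag + 1)),
                   key=lambda t: t[1])
        return best[0] if best[1] >= min_corr else None

    ref = [0, 1, 0, 2, 5, 2, 0, 1, 0, 0, 0, 0]
    # A speckle pattern delayed by two samples is tracked...
    shift = track_displacement(ref, [0, 0] + ref[:-2], max_lag=3)
    # ...while an uncorrelated pattern is rejected (returns None).
    rejected = track_displacement(ref, [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], 3)
    ```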

  6. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background: The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings: A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered on a convenience sample of 497 healthy adult volunteers before and after the viewing of 2D and 3D movies. Viewers reporting some sickness (SSQ total score >15) were 54.8% of the total sample after the 3D movie, compared to 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie and of a history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie and show time. Conclusions: Seeing 3D movies can increase ratings of nausea, oculomotor and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies which include examination of clinical signs in viewers are needed to reach conclusive evidence on the effects of 3D vision on spectators. PMID:23418530

  7. Evaluation and comparison of current biopsy needle localization and tracking methods using 3D ultrasound.

    PubMed

    Zhao, Yue; Shen, Yi; Bernard, Adeline; Cachard, Christian; Liebgott, Hervé

    2017-01-01

    This article compares four different biopsy needle localization algorithms in both 3D and 4D situations to evaluate their accuracy and execution time. The localization algorithms were: principal component analysis (PCA), random Hough transform (RHT), parallel integral projection (PIP) and ROI-RK (ROI-based RANSAC and Kalman filter). To enhance the contrast between the biopsy needle and background tissue, a line-filtering pre-processing step was implemented. To make the PCA, RHT and PIP algorithms comparable with the ROI-RK method, a region of interest (ROI) strategy was added. Simulated and ex-vivo data were used to evaluate the performance of the different biopsy needle localization algorithms. The resolutions of the sectorial and cylindrical volumes were 0.3 mm × 0.4 mm × 0.6 mm and 0.1 mm × 0.1 mm × 0.2 mm (axial × lateral × azimuthal), respectively. As far as the simulation and experimental results show, the ROI-RK method successfully located and tracked the biopsy needle in both 3D and 4D situations. The tip localization error was within 1.5 mm and the axis accuracy was within 1.6 mm. To the best of our knowledge, considering both localization accuracy and execution time, the ROI-RK was the most stable and time-saving method. Normally, accuracy comes at the expense of time. However, the ROI-RK method was able to locate the biopsy needle with high accuracy in real time, which makes it a promising method for clinical applications.
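
    The RANSAC stage of a ROI-RANSAC-Kalman style localizer treats the needle as a straight line and finds the line with the largest consensus among candidate voxels. A minimal sketch (the tolerance, iteration count, and function names are assumptions, not the paper's settings):

    ```python
    import random

    def fit_line_ransac(points, iters=200, tol=0.5, seed=0):
        """RANSAC straight-line fit in 3D: repeatedly pick two points, count
        points within `tol` of the candidate axis, keep the best consensus
        set. Returns (inliers, (anchor, unit_direction))."""
        rng = random.Random(seed)

        def dist(p, a, d):
            # point-to-line distance via |(p - a) x d| with d a unit vector
            v = [p[i] - a[i] for i in range(3)]
            c = [v[1] * d[2] - v[2] * d[1],
                 v[2] * d[0] - v[0] * d[2],
                 v[0] * d[1] - v[1] * d[0]]
            return sum(x * x for x in c) ** 0.5

        best = ([], None)
        for _ in range(iters):
            a, b = rng.sample(points, 2)
            d = [b[i] - a[i] for i in range(3)]
            n = sum(x * x for x in d) ** 0.5
            if n == 0:
                continue
            d = [x / n for x in d]
            inliers = [p for p in points if dist(p, a, d) < tol]
            if len(inliers) > len(best[0]):
                best = (inliers, (a, d))
        return best

    # Ten voxels along the x-axis (the "needle") plus two outliers.
    points = [(float(i), 0.0, 0.0) for i in range(10)] + \
             [(3.0, 5.0, 2.0), (7.0, -4.0, 1.0)]
    inliers, model = fit_line_ransac(points)
    ```

    In the full pipeline a Kalman filter would then smooth the fitted axis over successive volumes, which is what makes real-time tracking stable.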

  8. 3D Modelling of Inaccessible Areas using UAV-based Aerial Photography and Structure from Motion

    NASA Astrophysics Data System (ADS)

    Obanawa, Hiroyuki; Hayakawa, Yuichi; Gomez, Christopher

    2014-05-01

    In hardly accessible areas, the collection of 3D point clouds using TLS (Terrestrial Laser Scanner) can be very challenging, while an airborne equivalent would not give a correct account of subvertical features and concave geometries like caves. To solve this problem, the authors have experimented with an aerial-photography-based SfM (Structure from Motion) technique on a 'peninsular rock' surrounded on three sides by the sea on the Pacific coast of eastern Japan. The research was carried out using a UAS (Unmanned Aerial System) combining a commercial small UAV (Unmanned Aerial Vehicle) with a compact camera. The UAV is a DJI PHANTOM: it has four rotors (quadcopter), a weight of 1000 g, a payload of 400 g and a maximum flight time of 15 minutes. The camera is a GoPro 'HERO3 Black Edition': resolution 12 million pixels; weight 74 g; and 0.5 s interval shooting. The 3D model was constructed by digital photogrammetry using a commercial SfM software package, Agisoft PhotoScan Professional®, which can generate sparse and dense point clouds, from which polygonal models and orthophotographs can be calculated. Using the flight log and/or GCPs (Ground Control Points), the software can generate a digital surface model. As a result, high-resolution aerial orthophotographs and a 3D model were obtained. The results have shown that it was possible to survey the sea cliff and the wave-cut bench, which are unobservable from the land side. In detail, we could observe the complexity of the sea cliff, which is nearly vertical as a whole while slightly overhanging its thinner base. The wave-cut bench is nearly flat and develops extensively at the base of the cliff. Although there is some evidence of small rockfalls at the upper part of the cliff, there is no evidence of very recent activity, because no fallen rock exists on the wave-cut bench. This system has several merits: firstly, lower cost than existing measuring methods such as manned-flight survey and aerial laser

  9. Automatic Tracking Of Markers From 3D-Measurement Of Human Body Movements During Walking

    NASA Astrophysics Data System (ADS)

    Elsner, Thomas; Meier, G.; Baumann, Juerg U.

    1989-04-01

    For human motion analysis, the spatio-temporal resolution of cinematographic registrations of body marker positions is still higher than that of the best opto-electronic systems available for this purpose today. So far, the need for manual digitization of several thousand marker positions per tested person has made this method impractical for regular applications. An interactive and largely automated system for marker recognition and tracking from 16 mm film images, based on progress in digital image processing, has been developed and tested. Projected pictures are digitized with a high-resolution CCD camera (1320×1035 pixels), processed, analyzed and serially evaluated with an interactive image analysis system, SIGNUM IS200.

  10. Self-Motion Impairs Multiple-Object Tracking

    ERIC Educational Resources Information Center

    Thomas, Laura E.; Seiffert, Adriane E.

    2010-01-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement…

  11. Three-dimensional motion tracking for high-resolution optical microscopy, in vivo.

    PubMed

    Bakalar, M; Schroeder, J L; Pursley, R; Pohida, T J; Glancy, B; Taylor, J; Chess, D; Kellman, P; Xue, H; Balaban, R S

    2012-06-01

    When conducting optical imaging experiments in vivo, the signal-to-noise ratio and the effective spatial and temporal resolution are fundamentally limited by physiological motion of the tissue. A three-dimensional (3D) motion tracking scheme, using a multiphoton excitation microscope with a resonant galvanometer (512 × 512 pixels at 33 frames s^-1), is described to overcome physiological motion in vivo. The use of commercially available graphical processing units permitted the rapid 3D cross-correlation of sequential volumes to detect displacements and adjust tissue position to track motions in near real-time. Motion phantom tests maintained micron resolution with displacement velocities of up to 200 μm min^-1, well within the drift observed in many biological tissues under physiologically relevant conditions. In vivo experiments on mouse skeletal muscle, using the capillary vasculature with luminal dye as a displacement reference, revealed an effective and robust method of tracking tissue motion to enable (1) signal averaging over time without compromising resolution, and (2) tracking of cellular regions during a physiological perturbation.

  12. Proton spin tracking with symplectic integration of orbit motion

    SciTech Connect

    Luo, Y.; Dutheil, Y.; Huang, H.; Meot, F.; Ranjbar, V.

    2015-05-03

    Symplectic integration has been adopted for orbital motion tracking in the code SimTrack. SimTrack has been extensively used for dynamic aperture calculations with beam-beam interaction for the Relativistic Heavy Ion Collider (RHIC). Recently, proton spin tracking has been implemented on top of the symplectic orbital motion in this code. In this article, we explain the implementation of spin motion based on the Thomas-BMT equation, and the benchmarking against other spin tracking codes currently used for RHIC. Examples of calculating the spin closed orbit and spin tunes are also presented.
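
    SimTrack's integrator is not shown in the abstract, but the reason symplectic schemes are preferred for long-term orbit tracking can be demonstrated on a toy system: a leapfrog (velocity-Verlet) step keeps the energy error bounded over many periods instead of letting it drift, as a non-symplectic scheme would. This is an illustrative sketch, not SimTrack code:

    ```python
    def leapfrog(x, p, n_steps, dt, omega=1.0):
        """Symplectic leapfrog (kick-drift-kick) integration of a harmonic
        oscillator x'' = -omega^2 x. Being symplectic, its energy error
        oscillates within a bounded band rather than accumulating."""
        for _ in range(n_steps):
            p -= 0.5 * dt * omega ** 2 * x   # half kick
            x += dt * p                      # drift
            p -= 0.5 * dt * omega ** 2 * x   # half kick
        return x, p

    # Integrate ~160 oscillation periods with a fairly coarse step.
    x, p = leapfrog(1.0, 0.0, 10000, 0.1)
    energy = 0.5 * p ** 2 + 0.5 * x ** 2   # exact value is 0.5
    ```

    The same property carries over to accelerator maps: preserving the symplectic structure of the orbital motion is what makes million-turn dynamic aperture (and spin) tracking trustworthy.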

  13. Tracking magnetogram proper motions by multiscale regularization

    NASA Technical Reports Server (NTRS)

    Jones, Harrison P.

    1995-01-01

    Long uninterrupted sequences of solar magnetograms from the Global Oscillation Network Group (GONG) network and from the Solar and Heliospheric Observatory (SOHO) satellite will provide the opportunity to study the proper motions of magnetic features. The possible use of multiscale regularization, a scale-recursive estimation technique which begins with a prior model of how state variables and their statistical properties propagate over scale, is examined. Short magnetogram sequences are analyzed with the multiscale regularization algorithm as applied to optical flow. This algorithm is found to be efficient, provides results for all the spatial scales spanned by the data, and provides error estimates for the solutions. It is found that the algorithm is less sensitive to evolutionary changes than correlation tracking.
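
    The multiscale regularization machinery is not reproduced here, but the underlying single-scale optical-flow measurement it regularizes can be sketched in one dimension: a Lucas-Kanade style least-squares estimate of the shift between two frames from their spatial and temporal derivatives (hypothetical names, illustrative only):

    ```python
    import math

    def lk_flow_1d(f0, f1):
        """1-D Lucas-Kanade: least-squares shift v minimizing |Ix*v + It|^2
        over the window, with Ix from central differences on frame f0 and
        It = f1 - f0."""
        num = den = 0.0
        for i in range(1, len(f0) - 1):
            ix = (f0[i + 1] - f0[i - 1]) / 2.0
            it = f1[i] - f0[i]
            num += ix * it
            den += ix * ix
        return -num / den

    # A smooth Gaussian feature shifted right by 0.3 samples.
    f0 = [math.exp(-((i - 5) / 2.0) ** 2) for i in range(11)]
    f1 = [math.exp(-((i - 0.3 - 5) / 2.0) ** 2) for i in range(11)]
    v = lk_flow_1d(f0, f1)   # subpixel estimate, close to 0.3
    ```

    A scale-recursive estimator would fuse such local measurements across resolutions, attaching error estimates at every scale.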

  14. Combined aerial and terrestrial images for complete 3D documentation of Singosari Temple based on Structure from Motion algorithm

    NASA Astrophysics Data System (ADS)

    Hidayat, Husnul; Cahyono, A. B.

    2016-11-01

    The Singosari temple is one of the cultural heritage buildings in East Java, Indonesia, which was built in the 1300s and restored in 1934-1937. Because of its history and importance, complete documentation of this temple is required. Nowadays, with the advent of low-cost UAVs, combining aerial photography with terrestrial photogrammetry gives more complete data for 3D documentation. This research aims to make a complete 3D model of this landmark from aerial and terrestrial photographs with the Structure from Motion algorithm. To establish correct scale, position, and orientation, the final 3D model was georeferenced with Ground Control Points in the UTM 49S coordinate system. The result shows that all facades, the floor, and the upper structures can be modeled completely in 3D. In terms of 3D coordinate accuracy, the Root Mean Square Errors (RMSEs) are RMSEx=0.041 m, RMSEy=0.031 m, and RMSEz=0.049 m, which represent a 0.071 m displacement in 3D space. In addition, the mean difference in length measurements of the object is 0.057 m. With this accuracy, this method can be used to map the site up to a 1:237 scale. Although the accuracy level is still in centimeters, the combination of aerial and terrestrial photographs with the Structure from Motion algorithm can provide a complete and visually interesting 3D model.
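
    The quoted 3D displacement is simply the per-axis RMSEs combined in quadrature, which can be checked directly (function name is ours, values are the abstract's):

    ```python
    import math

    def rmse_3d(rmse_x, rmse_y, rmse_z):
        """Combine per-axis RMSEs into a single 3D positional error
        (root of the sum of squares)."""
        return math.sqrt(rmse_x ** 2 + rmse_y ** 2 + rmse_z ** 2)

    # Values reported in the abstract, in metres.
    err = rmse_3d(0.041, 0.031, 0.049)
    ```

    This reproduces the reported 0.071 m 3D displacement.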

  15. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 × 15 mm achieved diffraction-limited imaging over a lateral tracking range of ±2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
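
    The pupil-centroid detection at the heart of such a tracker can be sketched as thresholding the dark pupil region and taking the mean pixel position. This is a hypothetical illustration, not the paper's implementation:

    ```python
    def pupil_centroid(image, threshold):
        """Centroid (x, y) of below-threshold (dark) pixels in a grayscale
        image stored as a list of rows -- a simple stand-in for the
        pupil-detection step of a tracker. Returns None if nothing is dark."""
        xs, ys, n = 0.0, 0.0, 0
        for r, row in enumerate(image):
            for c, v in enumerate(row):
                if v < threshold:
                    xs += c
                    ys += r
                    n += 1
        if n == 0:
            return None
        return (xs / n, ys / n)

    # 5x5 bright frame with a 2x2 dark "pupil" blob.
    img = [[255] * 5 for _ in range(5)]
    img[2][1] = img[2][2] = img[3][1] = img[3][2] = 10
    cx, cy = pupil_centroid(img, 128)
    ```

    Feeding the centroid displacement back to the OCT scan mirrors is what converts this detection into motion compensation.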

  16. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions

    NASA Astrophysics Data System (ADS)

    Wiersma, R. D.; Riaz, N.; Dieterich, Sonja; Suh, Yelin; Xing, L.

    2009-01-01

    The integration of onboard kV imaging together with a MV electronic portal imaging device (EPID) on linear accelerators (LINACs) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated with simultaneous imaging by both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of the target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data from five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have ≤1 mm root-mean-squared error (RMSE) in all three spatial directions. In addition to increasing the robustness of
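
    The abstract only says the moving correlation model uses present and prior marker positions; one plausible minimal form is an ordinary least-squares mapping from the coordinate seen on the available imager to the coordinate on the interrupted imager, refit over a sliding window of prior frames. The numbers and names below are hypothetical:

    ```python
    def fit_linear(xs, ys):
        """Ordinary least squares y ~ a*x + b over a window of paired
        samples; returns (a, b)."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        a = sxy / sxx
        return a, my - a * mx

    # Prior frames where both imagers saw the marker (hypothetical values):
    kv_coord = [1.0, 2.0, 3.0, 4.0]
    mv_coord = [2.1, 4.0, 6.1, 8.0]
    a, b = fit_linear(kv_coord, mv_coord)

    # During an MV beam hold, only the kV position is available; the model
    # predicts the missing MV-imager coordinate.
    predicted_mv = a * 5.0 + b
    ```

    Refitting the window each frame ("moving" the model) lets the prediction follow slow changes in the marker trajectory.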

  17. Experimental evaluations of the accuracy of 3D and 4D planning in robotic tracking stereotactic body radiotherapy for lung cancers

    SciTech Connect

    Chan, Mark K. H.; Kwong, Dora L. W.; Ng, Sherry C. Y.; Tong, Anthony S. M.; Tam, Eric K. W.

    2013-04-15

    Purpose: Due to the complexity of 4D target tracking radiotherapy, the accuracy of this treatment strategy should be experimentally validated against the established standard 3D technique. This work compared the accuracy of 3D and 4D dose calculations in respiration tracking stereotactic body radiotherapy (SBRT). Methods: Using the 4D planning module of the CyberKnife treatment planning system, treatment plans for a moving target and a static off-target cord structure were created on different four-dimensional computed tomography (4D-CT) datasets of a thorax phantom moving over different ranges. The 4D planning system used B-spline deformable image registrations (DIRs) to accumulate dose distributions calculated on different breathing geometries, each corresponding to a static 3D-CT image of the 4D-CT dataset, onto a reference image to compose a 4D dose distribution. For each motion, 4D optimization was performed to generate a 4D treatment plan of the moving target. For comparison with standard 3D planning, each 4D plan was copied to the reference end-exhale images and a standard 3D dose calculation followed. Treatment plans of the off-target structure were first obtained by standard 3D optimization on the end-exhale images. Subsequently, they were applied to recalculate the 4D dose distributions using DIRs. All dose distributions that were initially obtained using the ray-tracing algorithm with equivalent path-length heterogeneity correction (3D_EPL and 4D_EPL) were recalculated by a Monte Carlo algorithm (3D_MC and 4D_MC) to further investigate the effects of dose calculation algorithms. The calculated 3D_EPL, 3D_MC, 4D_EPL, and 4D_MC dose distributions were compared to measurements made with Gafchromic EBT2 films in the axial and coronal planes of the moving target object, and the coronal plane of the static off-target object, based on the γ metric at 5%/3 mm criteria (γ_5%/3mm). Treatment plans were considered
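
    The γ metric used for the film comparisons combines a dose-difference criterion (5%, normalized globally here to the reference maximum) with a distance-to-agreement criterion (3 mm). A 1-D sketch of the standard definition follows; this is a generic illustration, not the study's analysis code:

    ```python
    import math

    def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.05, dta=3.0):
        """1-D gamma index per reference point: the minimum over evaluated
        points of sqrt((dr/dta)^2 + (ddose/(dd*D_max))^2), with global
        normalization to the reference maximum. A point passes if gamma <= 1."""
        dmax = max(ref_dose)
        gammas = []
        for rp, rd in zip(ref_pos, ref_dose):
            g = min(math.sqrt(((ep - rp) / dta) ** 2 +
                              ((ed - rd) / (dd * dmax)) ** 2)
                    for ep, ed in zip(eval_pos, eval_dose))
            gammas.append(g)
        return gammas

    # A 2% local dose discrepancy at the first point, perfect agreement
    # elsewhere: gamma = 0.4 there, 0 at the other points (all pass).
    gammas = gamma_1d([0.0, 1.0, 2.0], [1.0, 0.8, 0.5],
                      [0.0, 1.0, 2.0], [1.02, 0.8, 0.5])
    ```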

  18. Intersection Based Motion Correction of Multi-Slice MRI for 3D in utero Fetal Brain Image Formation

    PubMed Central

    Kim, Kio; Habas, Piotr A.; Rousseau, Francois; Glenn, Orit A.; Barkovich, Anthony J.; Studholme, Colin

    2012-01-01

    In recent years post-processing of fast multi-slice MR imaging to correct fetal motion has provided the first true 3D MR images of the developing human brain in utero. Early approaches have used reconstruction based algorithms, employing a two step iterative process, where slices from the acquired data are re-aligned to an approximate 3D reconstruction of the fetal brain, which is then refined further using the improved slice alignment. This two step slice-to-volume process, although powerful, is computationally expensive in needing a 3D reconstruction, and is limited in its ability to recover sub-voxel alignment. Here, we describe an alternative approach which we term slice intersection motion correction (SIMC), that seeks to directly co-align multiple slice stacks by considering the matching structure along all intersecting slice pairs in all orthogonally planned slices that are acquired in clinical imaging studies. A collective update scheme for all slices is then derived, to simultaneously drive slices into a consistent match along their lines of intersection. We then describe a 3D reconstruction algorithm that, using the final motion corrected slice locations, suppresses through-plane partial volume effects to provide a single high isotropic resolution 3D image. The method is tested on simulated data with known motions and is applied to retrospectively reconstruct 3D images from a range of clinically acquired imaging studies. The quantitative evaluation of the registration accuracy for the simulated data sets demonstrated a significant improvement over previous approaches. An initial application of the technique to studying clinical pathology is included, where the proposed method recovered up to 15 mm of translation and 30 degrees of rotation for individual slices, and produced full 3D reconstructions containing clinically useful additional information not visible in the original 2D slices. PMID:19744911

  19. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    SciTech Connect

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk

    2015-03-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: The reconstruction time for one catheter was 10 s, leading to a total reconstruction time of under 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator.

  20. Real-time 3D ultrasound fetal image enhancement techniques using motion-compensated frame rate up-conversion

    NASA Astrophysics Data System (ADS)

    Lee, Gun-Ill; Park, Rae-Hong; Song, Young-Seuk; Kim, Cheol-An; Hwang, Jae-Sub

    2003-05-01

    In this paper, we present a motion-compensated frame rate up-conversion method for real-time three-dimensional (3-D) ultrasound fetal image enhancement. The conventional mechanical scan method with one-dimensional (1-D) array converters used for 3-D volume data acquisition has a slow frame rate for multi-planar images. This drawback is not an issue for stationary objects; however, in ultrasound images showing a fetus of more than about 25 weeks, we perceive abrupt changes due to fast motions. To compensate for this defect, we propose a frame rate up-conversion method by which new interpolated frames are inserted between two input frames, giving smooth renditions to human eyes. More natural motion can be obtained by frame rate up-conversion. In the proposed algorithm, we employ forward motion estimation (ME), in which motion vectors (MVs) are estimated using a block matching algorithm (BMA). To smooth the MVs over neighboring blocks, vector median filtering is performed. Using these smoothed MVs, interpolated frames are reconstructed by motion compensation (MC). The undesirable blocking artifacts due to blockwise processing are reduced by block boundary filtering using a Gaussian low-pass filter (LPF). The proposed method can be used in computer-aided diagnosis (CAD), where more natural 3-D ultrasound images are displayed in real time. Simulation results with several real test sequences show the effectiveness of the proposed algorithm.
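
    The vector median filtering step — replacing each block's motion vector with the neighborhood vector minimizing the total distance to its neighbors — is well defined and easy to sketch. This is a generic illustration of the standard vector median, not the paper's exact code:

    ```python
    def vector_median(vectors):
        """Vector median filter: return the member of the set that minimizes
        the sum of Euclidean distances to all other members. Unlike a
        component-wise median, the result is always one of the input MVs."""
        def dist(a, b):
            return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        return min(vectors, key=lambda v: sum(dist(v, w) for w in vectors))

    # Four consistent motion vectors and one block-matching outlier:
    neighborhood = [(1, 0), (1, 1), (1, 0), (9, 9), (1, 0)]
    mv = vector_median(neighborhood)   # the outlier (9, 9) is rejected
    ```

    Smoothing the MV field this way before motion-compensated interpolation is what suppresses the worst blocking artifacts in the synthesized frames.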

  1. Dynamic particle accumulation structure (PAS) in half-zone liquid bridge: Reconstruction of particle motion by 3-D PTV

    NASA Astrophysics Data System (ADS)

    Ueno, I.; Abe, Y.; Noguchi, K.; Kawamura, H.

    Three-dimensional (3-D) velocity field reconstruction of oscillatory thermocapillary convection in a half-zone liquid bridge with a radius of O(1 mm) was carried out by applying 3-D particle tracking velocimetry (PTV). Simultaneous observation of the particles suspended in the bridge by two CCD cameras was realized by placing a small cubic beam splitter above a transparent top rod. The reconstruction of the 3-D trajectories and velocity fields of the particles in several types of oscillatory-flow regimes was conducted successfully over sufficiently long periods without losing track of the particles. With this technique, the present authors conducted a series of experiments focusing on the collapse and re-formation process of the PAS by mechanically disturbing a fully developed PAS.

  2. Mapping 3D Strains with Ultrasound Speckle Tracking: Method Validation and Initial Results in Porcine Scleral Inflation.

    PubMed

    Cruz Perez, Benjamin; Pavlatos, Elias; Morris, Hugh J; Chen, Hong; Pan, Xueliang; Hart, Richard T; Liu, Jun

    2016-07-01

    This study aimed to develop and validate a high frequency ultrasound method for measuring distributive, 3D strains in the sclera during elevations of intraocular pressure. A 3D cross-correlation based speckle-tracking algorithm was implemented to compute the 3D displacement vector and strain tensor at each tracking point. Simulated ultrasound radiofrequency data from a sclera-like structure at undeformed and deformed states with known strains were used to evaluate the accuracy and signal-to-noise ratio (SNR) of strain estimation. An experimental high frequency ultrasound (55 MHz) system was built to acquire 3D scans of porcine eyes inflated from 15 to 17 and then 19 mmHg. Simulations confirmed good strain estimation accuracy and SNR (e.g., the axial strains had less than 4.5% error with SNRs greater than 16.5 for strains from 0.005 to 0.05). Experimental data in porcine eyes showed increasing tensile, compressive, and shear strains in the posterior sclera during inflation, with a volume ratio close to one suggesting near-incompressibility. This study established the feasibility of using high frequency ultrasound speckle tracking for measuring 3D tissue strains and its potential to characterize physiological deformations in the posterior eye.
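
    The core of a cross-correlation speckle tracker is estimating a local shift from the position of the correlation peak between a reference kernel and its deformed counterpart. A 1-D sketch of that idea (the actual method operates on 3-D radiofrequency volumes and adds subsample peak interpolation, neither of which is shown here):

```python
import numpy as np

def speckle_shift(ref, deformed):
    """Estimate the integer sample shift between two RF windows from
    the peak of their full cross-correlation: a 1-D analogue of the
    3-D kernel matching used in ultrasound speckle tracking."""
    xc = np.correlate(deformed, ref, mode="full")
    # index len(ref)-1 of the full output corresponds to zero lag
    return int(np.argmax(xc)) - (len(ref) - 1)
```

    Tracking this shift at every point of a 3-D grid, for axial, lateral, and elevational windows, yields the displacement vector field from which the strain tensor is computed by spatial differentiation.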

  3. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization, and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1) the tumor localization error and the intensity differences in the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2) the tumor localization error and intensity differences were 2.85 and 0.15 respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic images estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
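
    The PCA motion model described above can be sketched in a few lines: the deformable-registration displacement vector fields (DVFs) become rows of a data matrix, and any 3D motion state is the mean DVF plus a small number of weighted principal components. A toy illustration under that assumption (the iterative optimization of the coefficients against cone-beam projections is not shown):

```python
import numpy as np

def build_pca_motion_model(dvfs, n_components=2):
    """Build a PCA motion model from DVFs.
    dvfs: (n_phases, n_voxels * 3) array, one flattened DVF per phase."""
    mean = dvfs.mean(axis=0)
    # principal components = right singular vectors of the centered data
    _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, vt[:n_components]

def synthesize_dvf(mean, comps, coeffs):
    """A motion state is the mean DVF plus a weighted sum of components."""
    return mean + coeffs @ comps
```

    In the paper's setting, `coeffs` would be the free parameters optimized per time point so that projections simulated from the deformed reference image match the measured kV projections.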

  4. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and open source project OpenSfM, to assist in this process, however, it is well-known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAV) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.

  5. A 3D MR-acquisition scheme for nonrigid bulk motion correction in simultaneous PET-MR

    SciTech Connect

    Kolbitsch, Christoph Prieto, Claudia; Schaeffter, Tobias; Tsoumpas, Charalampos

    2014-08-15

    Purpose: Positron emission tomography (PET) is a highly sensitive medical imaging technique commonly used to detect and assess tumor lesions. Magnetic resonance imaging (MRI) provides high resolution anatomical images with different contrasts and a range of additional information important for cancer diagnosis. Recently, simultaneous PET-MR systems have been released with the promise to provide complementary information from both modalities in a single examination. Due to long scan times, subject nonrigid bulk motion, i.e., changes of the patient's position on the scanner table leading to nonrigid changes of the patient's anatomy, during data acquisition can negatively impair image quality and tracer uptake quantification. A 3D MR-acquisition scheme is proposed to detect and correct for nonrigid bulk motion in simultaneously acquired PET-MR data. Methods: A respiratory navigated three dimensional (3D) MR-acquisition with Radial Phase Encoding (RPE) is used to obtain T1- and T2-weighted data with an isotropic resolution of 1.5 mm. Healthy volunteers are asked to move the abdomen two to three times during data acquisition resulting in overall 19 movements at arbitrary time points. The acquisition scheme is used to retrospectively reconstruct dynamic 3D MR images with different temporal resolutions. Nonrigid bulk motion is detected and corrected in this image data. A simultaneous PET acquisition is simulated and the effect of motion correction is assessed on image quality and standardized uptake values (SUV) for lesions with different diameters. Results: Six respiratory gated 3D data sets with T1- and T2-weighted contrast have been obtained in healthy volunteers. All bulk motion shifts have successfully been detected and motion fields describing the transformation between the different motion states could be obtained with an accuracy of 1.71 ± 0.29 mm. 
The PET simulation showed errors of up to 67% in measured SUV due to bulk motion which could be reduced to less than

  6. Time-resolved 3D contrast-enhanced MRA of an extended FOV using continuous table motion.

    PubMed

    Madhuranthakam, Ananth J; Kruger, David G; Riederer, Stephen J; Glockner, James F; Hu, Houchun H

    2004-03-01

    A method is presented for acquiring 3D time-resolved MR images of an extended (>100 cm) longitudinal field of view (FOV), as used for peripheral MR angiographic runoff studies. Previous techniques for long-FOV peripheral MRA have generally provided a single image (i.e., with no time resolution). The technique presented here generates a time series of 3D images of the FOV that lies within the homogeneous volume of the magnet. This is achieved by differential sampling of 3D k-space during continuous motion of the patient table. Each point in the object is interrogated in five consecutive 3D image sets generated at 2.5-s intervals. The method was tested experimentally in eight human subjects, and the leading edge of the bolus was observed in real time and maintained within the imaging FOV. The data revealed differential bolus velocities along the vasculature of the legs.

  7. 3D HUMAN MOTION RETRIEVAL BASED ON HUMAN HIERARCHICAL INDEX STRUCTURE

    PubMed Central

    Guo, X.

    2013-01-01

    With the development and wide application of motion capture technology, captured motion data sets are becoming larger and larger. For this reason, an efficient retrieval method for motion databases is very important. The retrieval method needs an appropriate indexing scheme and an effective similarity measure that can organize the existing motion data well. In this paper, we present a human motion hierarchical index structure and adopt a nonlinear method to segment motion sequences. Based on this, we extract motion patterns and then employ a fast similarity measure algorithm for motion pattern similarity computation to efficiently retrieve motion sequences. The experimental results show that the approach proposed in our paper is effective and efficient. PMID:24744481

  8. Estimation of Pulmonary Motion in Healthy Subjects and Patients with Intrathoracic Tumors Using 3D-Dynamic MRI: Initial Results

    PubMed Central

    Schoebinger, Max; Herth, Felix; Tuengerthal, Siegfried; Meinzer, Heinz-Peter; Kauczor, Hans-Ulrich

    2009-01-01

    Objective To evaluate a new technique for quantifying regional lung motion using 3D-MRI in healthy volunteers and to apply the technique in patients with intra- or extrapulmonary tumors. Materials and Methods Intraparenchymal lung motion during a whole breathing cycle was quantified in 30 healthy volunteers using 3D-dynamic MRI (FLASH [fast low angle shot] 3D, TRICKS [time-resolved interpolated contrast kinetics]). Qualitative and quantitative vector color maps and cumulative histograms were produced using an introduced semiautomatic algorithm. An analysis of lung motion was performed and correlated with an established 2D-MRI technique for verification. As a proof of concept, the technique was applied in five patients with non-small cell lung cancer (NSCLC) and five patients with malignant pleural mesothelioma (MPM). Results The correlation between intraparenchymal lung motion of the basal lung parts and the 2D-MRI technique was significant (r = 0.89, p < 0.05). Also, the vector color maps quantitatively illustrated regional lung motion in all healthy volunteers. No differences were observed between the two hemithoraces, which was verified by cumulative histograms. The patients with NSCLC showed a local lack of lung motion in the area of the tumor. In the patients with MPM, there was globally diminished motion of the tumor-bearing hemithorax, which improved significantly after chemotherapy (CHT) (assessed by the 2D- and 3D-techniques) (p < 0.01). Using global spirometry, an improvement could also be shown (vital capacity 2.9 ± 0.5 versus 3.4 ± 0.6 L, FEV1 0.9 ± 0.2 versus 1.4 ± 0.2 L) after CHT, but this improvement was not significant. Conclusion 3D-dynamic MRI is able to quantify intraparenchymal lung motion. Local and global parenchymal pathologies can be precisely located, and this might become a new tool to quantify even slight changes in lung motion (e.g., in therapy monitoring, follow-up studies or even benign lung diseases). PMID:19885311

  9. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    NASA Astrophysics Data System (ADS)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

    This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.

  10. Piecewise-diffeomorphic image registration: application to the motion estimation between 3D CT lung images with sliding conditions.

    PubMed

    Risser, Laurent; Vialard, François-Xavier; Baluwala, Habib Y; Schnabel, Julia A

    2013-02-01

    In this paper, we propose a new strategy for modelling sliding conditions when registering 3D images in a piecewise-diffeomorphic framework. More specifically, our main contribution is the development of a mathematical formalism to perform Large Deformation Diffeomorphic Metric Mapping registration with sliding conditions. We also show how to adapt this formalism to the LogDemons diffeomorphic registration framework. We finally show how to apply this strategy to estimate the respiratory motion between 3D CT pulmonary images. Quantitative tests are performed on 2D and 3D synthetic images, as well as on real 3D lung images from the MICCAI EMPIRE10 challenge. Results show that our strategy estimates accurate mappings of entire 3D thoracic image volumes that exhibit a sliding motion, as opposed to conventional registration methods which are not capable of capturing discontinuous deformations at the thoracic cage boundary. They also show that although the deformations are not smooth across the location of sliding conditions, they are almost always invertible in the whole image domain. This would be helpful for radiotherapy planning and delivery.

  11. Dynamics and cortical distribution of neural responses to 2D and 3D motion in human

    PubMed Central

    McKee, Suzanne P.; Norcia, Anthony M.

    2013-01-01

    The perception of motion-in-depth is important for avoiding collisions and for the control of vergence eye movements and other motor actions. Previous psychophysical studies have suggested that sensitivity to motion-in-depth has a lower temporal processing limit than the perception of lateral motion. The present study used functional MRI-informed EEG source-imaging to study the spatiotemporal properties of the responses to lateral motion and motion-in-depth in human visual cortex. Lateral motion and motion-in-depth displays comprised stimuli whose only difference was interocular phase: monocular oscillatory motion was either in-phase in the two eyes (lateral motion) or in antiphase (motion-in-depth). Spectral analysis was used to break the steady-state visually evoked potential responses down into even and odd harmonic components within five functionally defined regions of interest: V1, V4, lateral occipital complex, V3A, and hMT+. We also characterized the responses within two anatomically defined regions: the inferior and superior parietal cortex. Even harmonic components dominated the evoked responses and were a factor of approximately two larger for lateral motion than motion-in-depth. These responses were slower for motion-in-depth and were largely independent of absolute disparity. In each of our regions of interest, responses at odd harmonics were relatively small, but were larger for motion-in-depth than lateral motion, especially in parietal cortex, and depended on absolute disparity. Taken together, our results suggest a plausible neural basis for reduced psychophysical sensitivity to rapid motion-in-depth. PMID:24198326
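
    The spectral decomposition into even and odd harmonics amounts to reading DFT amplitudes at integer multiples of the stimulation frequency. A minimal sketch, assuming the record spans an integer number of stimulation cycles so that each harmonic falls exactly on a frequency bin:

```python
import numpy as np

def harmonic_amplitudes(signal, fs, f0, n_harmonics=4):
    """Amplitudes at the first n harmonics of the stimulation frequency
    f0 (Hz) for a record sampled at fs (Hz). Assumes an integer number
    of stimulation cycles; the 2/n factor converts DFT magnitude to
    sinusoid amplitude. Odd entries (1st, 3rd, ...) are the odd
    harmonics, even entries (2nd, 4th, ...) the even ones."""
    n = len(signal)
    spec = 2 * np.abs(np.fft.rfft(signal)) / n
    return [spec[int(round(h * f0 * n / fs))] for h in range(1, n_harmonics + 1)]
```

    Comparing the odd-harmonic and even-harmonic amplitude groups across conditions is the kind of contrast the study draws between motion-in-depth and lateral motion.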

  12. 3D Ultrasonic Needle Tracking with a 1.5D Transducer Array for Guidance of Fetal Interventions

    PubMed Central

    West, Simeon J.; Mari, Jean-Martial; Ourselin, Sebastien; David, Anna L.; Desjardins, Adrien E.

    2016-01-01

    Ultrasound image guidance is widely used in minimally invasive procedures, including fetal surgery. In this context, maintaining visibility of medical devices is a significant challenge. Needles and catheters can readily deviate from the ultrasound imaging plane as they are inserted. When the medical device tips are not visible, they can damage critical structures, with potentially profound consequences including loss of pregnancy. In this study, we performed 3D ultrasonic tracking of a needle using a novel probe with a 1.5D array of transducer elements that was driven by a commercial ultrasound system. A fiber-optic hydrophone integrated into the needle received transmissions from the probe, and data from this sensor was processed to estimate the position of the hydrophone tip in the coordinate space of the probe. Golay coding was used to increase the signal-to-noise ratio (SNR). The relative tracking accuracy was better than 0.4 mm in all dimensions, as evaluated using a water phantom. To obtain a preliminary indication of the clinical potential of 3D ultrasonic needle tracking, an intravascular needle insertion was performed in an in vivo pregnant sheep model. The SNR values ranged from 12 to 16 at depths of 20 to 31 mm and at an insertion angle of 49° relative to the probe surface normal. The results of this study demonstrate that 3D ultrasonic needle tracking with a fiber-optic hydrophone sensor and a 1.5D array is feasible in clinically realistic environments. PMID:28111644
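
    Golay coding improves SNR because a complementary pair of codes has autocorrelations whose sidelobes cancel exactly when summed, so longer (higher-energy) transmissions do not smear the range estimate. A small sketch of the standard recursive pair construction, purely illustrative (the study's actual transmit parameters are not given in the abstract):

```python
import numpy as np

def golay_pair(n_bits):
    """Generate a complementary Golay pair of length 2**n_bits by the
    standard recursion: a' = a ++ b, b' = a ++ (-b)."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_bits):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(5)  # a length-32 complementary pair
# sum of the two autocorrelations: sidelobes cancel, main lobe = 2N
acf = np.correlate(a, a, "full") + np.correlate(b, b, "full")
```

    In pulse-echo use, each code is transmitted in turn, each received echo is matched-filtered with its own code, and the two results are summed, yielding the sidelobe-free peak above.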

  13. A Framework for 3D Model-Based Visual Tracking Using a GPU-Accelerated Particle Filter.

    PubMed

    Brown, J A; Capson, D W

    2012-01-01

    A novel framework for acceleration of particle filtering approaches to 3D model-based, markerless visual tracking in monocular video is described. Specifically, we present a methodology for partitioning and mapping the computationally expensive weight-update stage of a particle filter to a graphics processing unit (GPU) to achieve particle- and pixel-level parallelism. Nvidia CUDA and Direct3D are employed to harness the massively parallel computational power of modern GPUs for simulation (3D model rendering) and evaluation (segmentation, feature extraction, and weight calculation) of hundreds of particles at high speeds. The proposed framework addresses the computational intensity that is intrinsic to all particle filter approaches, including those that have been modified to minimize the number of particles required for a particular task. Performance and tracking quality results for rigid object and articulated hand tracking experiments demonstrate markerless, model-based visual tracking on consumer-grade graphics hardware with pixel-level accuracy up to 95 percent at 60+ frames per second. The framework accelerates particle evaluation up to 49 times over a comparable CPU-only implementation, providing an increased particle count while maintaining real-time frame rates.
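
    The weight-update stage that the framework offloads to the GPU is, in essence, a per-particle likelihood evaluation followed by normalization, usually paired with resampling. A CPU-side sketch for a scalar state, purely to illustrate the stage being parallelized; the paper's likelihoods come from 3D model rendering and segmentation, not the Gaussian assumed here:

```python
import numpy as np

def update_weights(particles, weights, observation, sigma=1.0):
    """Weight-update stage of a particle filter: score each particle by
    the likelihood of the observation given its state (here a toy
    Gaussian likelihood), then renormalize."""
    lik = np.exp(-0.5 * ((particles - observation) / sigma) ** 2)
    w = weights * lik
    return w / w.sum()

def systematic_resample(particles, weights, rng):
    """Systematic resampling: duplicate high-weight particles so the
    particle set concentrates where the posterior mass is."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx]
```

    On a GPU, each particle's likelihood (a render-plus-compare in the paper) is evaluated by its own thread or block, which is what yields the reported particle-level parallelism.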

  14. GPS Measurements of Crustal Motion Indicate 3D GIA Models are Needed to Understand Antarctic Ice Mass Change

    NASA Astrophysics Data System (ADS)

    Konfal, S. A.; Wilson, T. J.; Bevis, M. G.; Kendrick, E. C.; Dalziel, I. W. D.; Smalley, R., Jr.; Willis, M. J.; Heeszel, D.; Wiens, D. A.

    2014-12-01

    Continuous GPS measurements of bedrock crustal motions in response to GIA in Antarctica have been acquired by the Antarctic Network (ANET) component of the Polar Earth Observing Network (POLENET). Patterns of vertical crustal displacements are commonly considered the key fingerprints of GIA, with maximum uplift marking the position of former ice load centers. However, efforts to develop more realistic 3D earth models have shown that the horizontal motion pattern is a more important signature of GIA on a laterally varying earth. Here we provide the first measurements substantiating predictions of a reversal of horizontal motions across an extreme gradient in crustal thickness and mantle viscosity crossing Antarctica. GPS results document motion toward, rather than away from, the sites of major ice mass loss in West Antarctica. When compared in a common reference frame, observed crustal motions are not in agreement with predictions from models of GIA. A gradient in crustal velocities, faster toward West Antarctica, is spatially coincident with the rheological boundary mapped from seismic tomographic results. This suggests that horizontal crustal motions are strongly influenced by laterally-varying earth properties, and demonstrates that only 3D earth models can produce reliable predictions of GIA for Antarctica.

  15. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson’s Disease

    PubMed Central

    Piro, Neltje E.; Piro, Lennart K.; Kassubek, Jan; Blechschmidt-Trapp, Ronald A.

    2016-01-01

    Remote monitoring of Parkinson’s Disease (PD) patients with inertia sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: the automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of him. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recording of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the ratings based on video and (b) the automatically classified UPDRS is 0.48 and with (c) the 3D avatar it is 0.47. The 3D avatar is similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400

  16. Nonrigid motion correction in 3D using autofocusing with localized linear translations.

    PubMed

    Cheng, Joseph Y; Alley, Marcus T; Cunningham, Charles H; Vasanawala, Shreyas S; Pauly, John M; Lustig, Michael

    2012-12-01

    MR scans are sensitive to motion effects due to the scan duration. To properly suppress artifacts from nonrigid body motion, complex models with elements such as translation, rotation, shear, and scaling have been incorporated into the reconstruction pipeline. However, these techniques are computationally intensive and difficult to implement for online reconstruction. On a sufficiently small spatial scale, the different types of motion can be well approximated as simple linear translations. This formulation allows for a practical autofocusing algorithm that locally minimizes a given motion metric--more specifically, the proposed localized gradient-entropy metric. To reduce the vast search space for an optimal solution, possible motion paths are limited to the motion measured from multichannel navigator data. The novel navigation strategy is based on the so-called "Butterfly" navigators, which are modifications of the spin-warp sequence that provides intrinsic translational motion information with negligible overhead. With a 32-channel abdominal coil, sufficient number of motion measurements were found to approximate possible linear motion paths for every image voxel. The correction scheme was applied to free-breathing abdominal patient studies. In these scans, a reduction in artifacts from complex, nonrigid motion was observed.
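
    Autofocusing selects, per region, the candidate translation whose corrected image minimizes the motion metric. A sketch of a gradient-entropy metric and the selection step, assuming real-valued images and using only a vertical finite difference; the paper's exact metric definition may differ:

```python
import numpy as np

def gradient_entropy(img):
    """Gradient entropy of an image: a sharp, artifact-free image has a
    sparse gradient distribution and hence lower entropy. Uses the
    vertical finite difference only, for brevity."""
    g = np.abs(np.diff(img, axis=0)).ravel()
    p = g / g.sum()
    p = p[p > 0]  # 0 * log 0 is taken as 0
    return -(p * np.log(p)).sum()

def autofocus(candidates):
    """Keep the candidate correction whose image minimizes the metric."""
    return min(candidates, key=gradient_entropy)
```

    In the paper the candidate set is not a free search: it is restricted to linear translations measured by the Butterfly navigators, which is what makes the per-voxel minimization tractable.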

  17. Accurate and high-performance 3D position measurement of fiducial marks by stereoscopic system for railway track inspection

    NASA Astrophysics Data System (ADS)

    Gorbachev, Alexey A.; Serikova, Mariya G.; Pantyushina, Ekaterina N.; Volkova, Daria A.

    2016-04-01

    Modern demands for railway track measurements require high accuracy (about 2-5 mm) of rail placement along the track to ensure smooth, safe and fast transportation. As a means for railway geometry measurement, we suggest a stereoscopic system which measures the 3D positions of fiducial marks arranged along the track by image processing algorithms. The system accuracy was verified during laboratory tests by comparison with precise laser tracker indications. An accuracy of +/-1.5 mm within a measurement volume of 150×400×5000 mm was achieved during the tests. This confirmed that the stereoscopic system demonstrates good measurement accuracy and can potentially be used as a fully automated means for railway track inspection.
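
    Recovering a fiducial mark's 3D position from its pixel coordinates in two calibrated views is a triangulation problem. A sketch of standard linear (DLT) triangulation, not necessarily the algorithm used in the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from its normalized
    pixel coordinates x1, x2 in two views with 3x4 projection matrices
    P1, P2: the null vector of A is the homogeneous 3-D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

    With noisy detections, A has no exact null vector and the smallest singular vector gives the least-squares solution; calibration quality of P1 and P2 then dominates the achievable accuracy.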

  18. 3D GABA imaging with real-time motion correction, shim update and reacquisition of adiabatic spiral MRSI.

    PubMed

    Bogner, Wolfgang; Gagoski, Borjan; Hess, Aaron T; Bhat, Himanshu; Tisdall, M Dylan; van der Kouwe, Andre J W; Strasser, Bernhard; Marjańska, Małgorzata; Trattnig, Siegfried; Grant, Ellen; Rosen, Bruce; Andronesi, Ovidiu C

    2014-12-01

    Gamma-aminobutyric acid (GABA) and glutamate (Glu) are the major neurotransmitters in the brain. They are crucial for the functioning of the healthy brain and their alteration is a major mechanism in the pathophysiology of many neuro-psychiatric disorders. Magnetic resonance spectroscopy (MRS) is the only way to measure GABA and Glu non-invasively in vivo. GABA detection is particularly challenging and requires special MRS techniques. The most popular is MEscher-GArwood (MEGA) difference editing with single-voxel Point RESolved Spectroscopy (PRESS) localization. This technique has three major limitations: a) MEGA editing is a subtraction technique, hence is very sensitive to scanner instabilities and motion artifacts. b) PRESS is prone to localization errors at high fields (≥3T) that compromise accurate quantification. c) Single-voxel spectroscopy can (similar to a biopsy) only probe steady GABA and Glu levels in a single location at a time. To mitigate these problems, we implemented a 3D MEGA-editing MRS imaging sequence with the following three features: a) Real-time motion correction, dynamic shim updates, and selective reacquisition to eliminate subtraction artifacts due to scanner instabilities and subject motion. b) Localization by Adiabatic SElective Refocusing (LASER) to improve the localization accuracy and signal-to-noise ratio. c) K-space encoding via a weighted stack of spirals provides 3D metabolic mapping with flexible scan times. Simulations, phantom and in vivo experiments prove that our MEGA-LASER sequence enables 3D mapping of GABA+ and Glx (Glutamate+Glutamine), by providing 1.66 times larger signal for the 3.02 ppm multiplet of GABA+ compared to MEGA-PRESS, leading to clinically feasible scan times for 3D brain imaging. Hence, our sequence allows accurate and robust 3D-mapping of brain GABA+ and Glx levels to be performed at clinical 3T MR scanners for use in neuroscience and clinical applications.

  19. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    PubMed

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies gradually mature, ubiquitous healthcare services can prevent accidents instantly and effectively, as well as provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework integrating cloud and wireless body sensor networks, mainly applied to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation for processing sensing data over the computing architecture, network conditions, and performance evaluation. Through this framework, the transmission and computing time of sensing data are reduced to enhance overall performance for the services of fall event detection and 3-D motion reconstruction.

  20. A stroboscopic structured illumination system used in dynamic 3D visualization of high-speed motion object

    NASA Astrophysics Data System (ADS)

    Su, Xianyu; Zhang, Qican; Li, Yong; Xiang, Liqun; Cao, Yiping; Chen, Wenjing

    2005-04-01

    A stroboscopic structured illumination system, which can be used to measure the 3D shape and deformation of a high-speed moving object, is proposed and verified by experiments. The system presented in this paper can automatically detect the position of the high-speed moving object and synchronously control both the flash of an LED, which projects a structured optical field onto the surface of the moving object, and the shutter of the imaging system, which acquires an image of the deformed fringe pattern; it can also generate a software-defined signal to synchronously control the LED and imaging system. We experimented on a household electric fan, successfully acquired a series of instantaneous, sharp and clear images of the rotating blades, and reconstructed their 3D shapes at different rotation speeds.

  1. Application of 3D digital image correlation to track displacements and strains of canvas paintings exposed to relative humidity changes.

    PubMed

    Malowany, Krzysztof; Tymińska-Widmer, Ludmiła; Malesa, Marcin; Kujawińska, Małgorzata; Targowski, Piotr; Rouba, Bogumiła J

    2014-03-20

    This paper introduces a methodology for tracking displacements in canvas paintings exposed to relative humidity changes. Displacements are measured by means of the 3D digital image correlation method that is followed by a postprocessing of displacement data, which allows the separation of local displacements from global displacement maps. The applicability of this methodology is tested on measurements of a model painting on canvas with introduced defects causing local inhomogeneity. The method allows the evaluation of conservation methods used for repairing canvas supports.
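
    The postprocessing step of separating local displacements from the global displacement map can be illustrated by low-pass filtering the displacement field and keeping the residual. A toy sketch with a box filter as the global (low-frequency) estimate; the paper's actual separation method is not specified in the abstract:

```python
import numpy as np

def split_global_local(disp, kernel=5):
    """Split a displacement map into a smooth global part (moving
    average over a kernel x kernel window) and the local residual,
    whose extrema flag defects in the canvas support."""
    k = kernel
    pad = np.pad(disp, k // 2, mode="edge")
    glob = np.zeros(disp.shape, dtype=float)
    h, w = disp.shape
    for i in range(h):
        for j in range(w):
            glob[i, j] = pad[i:i + k, j:j + k].mean()
    return glob, disp - glob
```

    On the measured DIC maps, the global part captures the humidity-driven deformation of the whole canvas, while the residual highlights the introduced local inhomogeneities.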

  2. Real-time circumferential mapping catheter tracking for motion compensation in atrial fibrillation ablation procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Bourier, Felix; Wimmer, Andreas; Koch, Martin; Kiraly, Atilla; Liao, Rui; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert

    2012-02-01

    Atrial fibrillation (AFib) has been identified as a major cause of stroke. Radiofrequency catheter ablation has become an increasingly important treatment option, especially when drug therapy fails. Navigation under X-ray can be enhanced by augmented fluoroscopy, which renders overlay images from pre-operative 3-D data sets and fuses them with X-ray images to provide more detail about the underlying soft-tissue anatomy. Unfortunately, these fluoroscopic overlay images are compromised by respiratory and cardiac motion. Various methods to deal with motion have been proposed; to meet clinical demands, they have to be fast. Methods providing a processing frame rate of 3 frames per second (fps) are considered suitable for interventional electrophysiology catheter procedures if an acquisition frame rate of 2 fps is used. Unfortunately, when working at a processing rate of 3 fps, the delay until the motion-compensated image can actually be displayed is about 300 ms. More recent algorithms can achieve frame rates of up to 20 fps, which reduces the lag to 50 ms. By using a novel approach involving a 3-D catheter model, catheter segmentation, and a distance transform, we can speed up motion compensation to 25 fps, which results in a display delay of only 40 ms on a standard workstation for medical applications. Our method uses a constrained 2-D/3-D registration to perform catheter tracking and achieves a 2-D tracking error of 0.61 mm.
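
The frame-rate figures in the abstract imply the display lag directly: a compensated frame cannot appear sooner than one processing period after acquisition. A minimal sketch of that arithmetic (the function name is my own, not from the paper):

```python
# Display lag of a motion-compensated overlay, bounded below by the
# processing frame period (one full frame must be processed before display).

def display_lag_ms(processing_fps: float) -> float:
    """Worst-case delay in milliseconds before a compensated frame appears."""
    return 1000.0 / processing_fps

lag_proposed = display_lag_ms(25)   # 40.0 ms, matching the abstract's figure
lag_recent = display_lag_ms(20)     # 50.0 ms for the 20 fps prior methods
```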

  3. 3D tracking and phase-contrast imaging by twin-beams digital holographic microscope in microfluidics

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Finizio, A.; Paturzo, M.; Merola, F.; Grilli, S.; Ferraro, P.

    2012-06-01

    A compact twin-beam interferometer that can be adopted as a flexible diagnostic tool in microfluidic platforms is presented. The device has two functionalities, explained in what follows, and can be easily integrated into a microfluidic chip. The configuration allows 3D tracking of micro-particles and, at the same time, furnishes quantitative phase-contrast maps of the tracked micro-objects by interference microscopy. Experimental demonstration of its effectiveness and compatibility with biological samples is given for in vitro cells in a microfluidic environment. Nowadays, several microfluidic configurations exist and many are commercially available; their development is driven by the possibility of manipulating droplets, handling micro- and nano-objects, visualizing and quantifying processes occurring in small volumes and, clearly, by direct applications in lab-on-a-chip devices. In microfluidic research, optical/photonic approaches are the most suitable because they are non-contact, full-field, and non-invasive, and can be packaged thanks to the development of integrable optics. Moreover, phase-contrast approaches adapted to a lab-on-a-chip configuration make it possible to obtain quantitative information with remarkable lateral and vertical resolution directly in situ, without the need to dye and/or kill cells. Furthermore, numerical techniques for tracking micro-objects need to be developed for measuring velocity fields, trajectory patterns, motility of cancer cells, and so on. Here, we present a compact holographic microscope that can ensure, with the same configuration and simultaneously, accurate 3D tracking and quantitative phase-contrast analysis. The system, simple and solid, is based on twin laser beams coming from a single laser source. Through a simple conceptual design, we show how these two different functionalities can be accomplished by the same optical setup.
The working principle, the optical setup and the mathematical

  4. Computer Vision Tracking Using Particle Filters for 3D Position Estimation

    DTIC Science & Technology

    2014-03-27

    Photogrammetry is the process of determining 3-D coordinates through images. The mathematical underpinnings of photogrammetry are rooted in the 1480s with Leonardo da Vinci's study of perspectives [8, p. 1]. However, digital photogrammetry did not emerge

  5. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
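
The record above combines image registration with Kalman filtering to estimate observer motion. As an illustrative sketch only (not the paper's implementation; all parameter values are my own), a constant-velocity Kalman filter can smooth the global image shift reported by frame-to-frame registration before per-target tracking:

```python
import numpy as np

# Constant-velocity Kalman filter over the global shift reported by image
# registration, so that observer-induced motion can be estimated and removed.

def kalman_track(measurements, dt=1.0, q=0.05, r=1.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state model
    H = np.array([[1.0, 0.0]])              # registration measures shift only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x, P = np.zeros(2), np.eye(2)           # state = [shift, velocity]
    out = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update with registration result
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# Observer drifting 2 px/frame, with noiseless registration measurements:
est = kalman_track([2.0 * k for k in range(1, 81)])
```

With consistent constant-velocity input, the filter's shift and velocity estimates converge to the true values, which is what allows the observer motion to be subtracted out.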

  6. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. Combining the two in real-world applications is very challenging, however, because more accurate tracking tasks often require longer processing times, while quicker responses are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to meet both requirements by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed to estimate the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) together with the CHT algorithm to find the proper colour for the tracked target; the target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras with image averaging filters are used to obtain clear and steady images. This paper also examines a new technique, named Controllable Region of interest based on Circular Hough Transform (CRCHT), for generating and controlling the observation search window in order to increase the computational speed of the tracking system. Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot

  7. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks.

    PubMed

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P

    2017-01-07

    The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. Combining the two in real-world applications is very challenging, however, because more accurate tracking tasks often require longer processing times, while quicker responses are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to meet both requirements by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed to estimate the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) together with the CHT algorithm to find the proper colour for the tracked target; the target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras with image averaging filters are used to obtain clear and steady images. This paper also examines a new technique, named Controllable Region of interest based on Circular Hough Transform (CRCHT), for generating and controlling the observation search window in order to increase the computational speed of the tracking system. Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot
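
The core detection step named in both records is the Circular Hough Transform. A minimal numpy sketch of the classical CHT voting idea (the papers' enhancements, colour selection, and CRCHT windowing are not reproduced; the synthetic image and radius are my own):

```python
import numpy as np

# Classical Circular Hough Transform for a known radius: every edge pixel
# votes for all candidate centres at that radius; the accumulator peak is
# the detected circle centre.

def hough_circle_center(edges, radius):
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate votes
    return np.unravel_index(np.argmax(acc), acc.shape)  # (row, col)

# Synthetic edge map: a circle of radius 20 centred at row 50, column 60.
img = np.zeros((100, 120), dtype=bool)
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
img[np.round(50 + 20 * np.sin(t)).astype(int),
    np.round(60 + 20 * np.cos(t)).astype(int)] = True

center = hough_circle_center(img, 20)
```

Restricting the voting to a small search window around the previous detection is the kind of speed-up the CRCHT technique above targets.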

  8. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    A time difference between the left and right images of a time-division 3D display makes a person perceive an alternating vertical parallax when an object moves vertically on a fixed depth plane; the perceived left and right images then no longer match, making viewers more prone to visual fatigue. This mismatch cannot be eliminated simply by precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and the characteristics of the human visual system, this paper establishes a model relating the true vertical motion velocity in reality to the vertical motion velocity on the screen, calculates the amount of vertical parallax caused by vertical motion, and then puts forward a motion compensation method to eliminate it. Finally, subjective experiments are carried out to analyze how the time difference affects stereoscopic visual comfort by comparing the comfort scores of stereo image sequences before and after compensation with the proposed method. The theoretical analysis and experimental results show that the proposed method is reasonable and efficient.

  9. Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

    PubMed

    Rauschecker, Andreas M; Solomon, Samuel G; Glennerster, Andrew

    2006-12-19

    In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. With feedback, observers were able to adjust their responses so that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size, or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.

  10. Investigating Cardiac Motion Patterns Using Synthetic High-Resolution 3D Cardiovascular Magnetic Resonance Images and Statistical Shape Analysis

    PubMed Central

    Biffi, Benedetta; Bruse, Jan L.; Zuluaga, Maria A.; Ntsinjana, Hopewell N.; Taylor, Andrew M.; Schievano, Silvia

    2017-01-01

    Diagnosis of ventricular dysfunction in congenital heart disease is increasingly based on medical imaging, which allows investigation of abnormal cardiac morphology and the correlated abnormal function. Although analysis of 2D images represents the clinical standard, novel tools performing automatic processing of 3D images are becoming available, providing more detailed and comprehensive information than simple 2D morphometry. Among these, statistical shape analysis (SSA) allows a consistent and quantitative description of a population of complex shapes, as a way to detect novel biomarkers, ultimately improving diagnosis and pathology understanding. The aim of this study is to describe the implementation of an SSA method for the investigation of 3D left ventricular shape and motion patterns and to test it on a small sample of 4 patients with repaired congenital aortic stenosis and 4 age-matched healthy volunteers to demonstrate its potential. The advantage of this method is the capability of analyzing subject-specific motion patterns separately from the individual morphology, visually and quantitatively, as a way to identify functional abnormalities related to both dynamics and shape. Specifically, we combined 3D, high-resolution whole-heart data with 2D, temporal information provided by cine cardiovascular magnetic resonance images, and we used an SSA approach to analyze 3D motion per se. Preliminary results of this pilot study showed that some differences in end-diastolic and end-systolic ventricular shapes could be captured with this method, but it was not possible to clearly separate the two cohorts based on shape information alone. However, further analyses of ventricular motion allowed differences between the two populations to be identified qualitatively. Moreover, by describing shape and motion with a small number of principal components, this method offers a fully automated process for obtaining visually intuitive and numerical information on cardiac shape and motion
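
The "small number of principal components" step of SSA can be sketched as PCA over landmark-coordinate vectors. This is an illustrative toy (synthetic shapes, hypothetical subject and landmark counts), not the paper's pipeline:

```python
import numpy as np

# PCA of a shape population: each subject's shape is flattened into a vector
# of 3D landmark coordinates; SVD of the centred data matrix yields modes of
# variation and per-subject scores.

rng = np.random.default_rng(0)
n_subjects, n_landmarks = 8, 50
mode = rng.standard_normal(3 * n_landmarks)      # one true variation mode
weights = rng.standard_normal(n_subjects)        # per-subject mode weight
shapes = np.outer(weights, mode)                 # synthetic shape population

X = shapes - shapes.mean(axis=0)                 # centre on the mean shape
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)                  # variance per component
scores = U * S                                   # per-subject coefficients
```

Because the synthetic population varies along a single mode, the first component captures essentially all of the variance; real cardiac data would spread variance over several modes describing shape and motion.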

  11. Effects of image noise, respiratory motion, and motion compensation on 3D activity quantification in count-limited PET images

    NASA Astrophysics Data System (ADS)

    Siman, W.; Mawlawi, O. R.; Mikell, J. K.; Mourtada, F.; Kappadath, S. C.

    2017-01-01

    The aims of this study were to evaluate the effects of noise, motion blur, and motion compensation using quiescent-period gating (QPG) on the activity concentration (AC) distribution, quantified using the cumulative AC volume histogram (ACVH), in count-limited studies such as 90Y-PET/CT. An International Electrotechnical Commission phantom filled with low 18F activity was used to simulate clinical 90Y-PET images. PET data were acquired using a GE-D690 when the phantom was static and subject to 1-4 cm periodic 1D motion. The static data were down-sampled into shorter durations to determine the effect of noise on ACVH. Motion-degraded PET data were sorted into multiple gates to assess the effect of motion and QPG on ACVH. Errors in ACVH at AC90 (the minimum AC that covers 90% of the volume of interest (VOI)), AC80, and ACmean (the average AC in the VOI) were characterized as a function of noise and amplitude before and after QPG. Scan-time reduction increased the apparent non-uniformity of sphere doses and the dispersion of ACVH. These effects were more pronounced in smaller spheres. Noise-related errors in ACVH at AC20 to AC70 were smaller (<15%) than the errors between AC80 and AC90 (>15%). The accuracy of ACmean was largely independent of the total count. Motion decreased the observed AC and skewed the ACVH toward lower values; the severity of this effect depended on motion amplitude and tumor diameter. The errors in AC20 to AC80 for the 17 mm sphere were -25% and -55% for motion amplitudes of 2 cm and 4 cm, respectively. With QPG, the errors in AC20 to AC80 of the 17 mm sphere were reduced to -15% for motion amplitudes <4 cm. For spheres with a motion amplitude to diameter ratio >0.5, QPG was effective at reducing errors in ACVH despite increases in image non-uniformity due to increased noise. ACVH is believed to be more relevant than mean or maximum AC to calculate tumor control and normal tissue complication probability
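
The ACVH metrics above follow the dose-volume-histogram convention: AC90 is the minimum activity concentration received by 90% of the VOI. A small sketch with synthetic voxel values (illustrative only; function name and numbers are my own):

```python
import numpy as np

# Coverage metric from a cumulative AC volume histogram: ACx is the minimum
# activity concentration such that x% of the VOI voxels lie at or above it.

def ac_coverage(voxels, fraction):
    """Minimum AC received by `fraction` of the VOI (descending-sort rule)."""
    v = np.sort(np.asarray(voxels, dtype=float))[::-1]   # highest AC first
    idx = int(np.ceil(fraction * v.size)) - 1
    return v[idx]

voxels = np.arange(1, 101, dtype=float)   # 100 voxels with AC = 1..100
ac90 = ac_coverage(voxels, 0.90)          # 90 voxels lie at or above 11
ac_mean = voxels.mean()                   # ACmean of the VOI
```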

  12. Self optical motion-tracking for endoscopic optical coherence tomography probe using micro-beamsplitter probe

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Zhang, Jun; Chou, Lidek; Wang, Alex; Jing, Joseph; Chen, Zhongping

    2014-03-01

    Long-range optical coherence tomography (OCT), with its high speed, high resolution, non-ionizing properties, and cross-sectional imaging capability, is suitable for imaging the upper airway lumen. To render 2D OCT datasets as true 3D anatomy, additional tools are usually applied, such as X-ray guidance or a magnetic sensor. X-ray guidance adds ionizing radiation, while a magnetic sensor either increases the probe size or requires an additional pull-back of the tracking sensor through the body cavity. To overcome these limitations, we present a novel tracking method using a 1.5 mm × 1.5 mm, 90/10-ratio micro-beamsplitter: 10% of the light passing through the beamsplitter is used for motion tracking, and 90% is used for regular OCT imaging and motion tracking. Two signals corresponding to these two split beams, which pass through different optical path length delays, are obtained by the detector simultaneously. Using the two split beams' returned signals from the same marker line, the 2D inclination angle of each step is computed; by calculating the 2D inclination angle of each step and then connecting the translational displacements of each step, we obtain the 2D motion trajectory of the probe. With two marker lines on the probe sheath, 3D inclination angles can be determined and then used for 3D trajectory reconstruction. We tested the accuracy of trajectory reconstruction using the probe and demonstrated the feasibility of the design for structural reconstruction of a biological sample using a porcine trachea specimen. This optical-tracking probe can potentially be made as small as 1.0 mm in outer diameter, which is ideal for upper airway imaging.
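
The "connect the per-step displacements" idea above can be sketched as dead-reckoning in 2D: each pull-back step has a known length and a measured inclination angle, and the trajectory is their cumulative sum. Step lengths and angles here are synthetic, not measured values from the paper:

```python
import numpy as np

# Trajectory reconstruction by chaining small steps whose inclination angle
# is measured at each step (2D case; the paper extends this to 3D with two
# marker lines on the probe sheath).

def trajectory_2d(step_lengths, angles_rad):
    dx = np.asarray(step_lengths) * np.cos(angles_rad)
    dy = np.asarray(step_lengths) * np.sin(angles_rad)
    x = np.concatenate(([0.0], np.cumsum(dx)))
    y = np.concatenate(([0.0], np.cumsum(dy)))
    return x, y

# Four 1 mm steps at a constant 30 degree inclination:
steps = np.ones(4)
angles = np.deg2rad(np.full(4, 30.0))
x, y = trajectory_2d(steps, angles)
```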

  13. Real-time Awake Animal Motion Tracking System for SPECT Imaging

    SciTech Connect

    Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon; Weisenberger, A G; Stolin, A; McKisson, J; Smith, M F

    2008-01-01

    Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur in each set of images. Using the three cameras, the system automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in the segmentation, tracking, and 3D calculation methods to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to scanner operation. The system has undergone testing with both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.
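
Once the three cameras have triangulated the head markers into 3D points, a pose can be computed as the best-fit rigid transform between a reference marker layout and the current frame. A standard way to do this is the Kabsch (SVD) algorithm, sketched below with a synthetic marker layout (the record does not state which algorithm the system uses, so this is an assumption of method):

```python
import numpy as np

# Best-fit rigid pose (rotation R, translation t) mapping reference marker
# positions onto the currently observed positions, via the Kabsch algorithm.

def rigid_pose(ref, cur):
    """Least-squares R, t with cur ~= ref @ R.T + t."""
    c_ref, c_cur = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - c_ref).T @ (cur - c_cur)       # cross-covariance of markers
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_cur - R @ c_ref
    return R, t

ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
theta = np.deg2rad(20.0)                      # simulate a 20 degree head yaw
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
cur = ref @ R_true.T + np.array([5.0, -2.0, 1.0])
R_est, t_est = rigid_pose(ref, cur)
```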

  14. 3D shape tracking of minimally invasive medical instruments using optical frequency domain reflectometry

    NASA Astrophysics Data System (ADS)

    Parent, Francois; Kanti Mandal, Koushik; Loranger, Sebastien; Watanabe Fernandes, Eric Hideki; Kashyap, Raman; Kadoury, Samuel

    2016-03-01

    We propose a new alternative for real-time device tracking during minimally invasive interventions using a truly distributed strain sensor based on optical frequency domain reflectometry (OFDR) in optical fibers. The guidance of minimally invasive medical instruments such as needles or catheters (e.g., by adding a piezoelectric coating) has been the focus of extensive research in past decades. Real-time tracking of instruments in medical interventions facilitates image guidance and helps the user reach a pre-localized target more precisely. Image-guided systems using ultrasound imaging, and shape sensors based on fiber Bragg gratings (FBGs) embedded in optical fibers, can provide retroactive feedback so the user reaches targeted areas with even more precision. However, ultrasound imaging with electromagnetic tracking cannot be used in the magnetic resonance imaging (MRI) suite, while FBG-based shape sensors provide only discrete values of the instrument position, so approximations must be made to evaluate its global shape. This is why a truly distributed strain sensor based on OFDR could enhance tracking accuracy. In both cases, since the measured strain is proportional to the local curvature of the fiber, a strain sensor can provide the three-dimensional shape of medical instruments by simply inserting fibers inside the devices. To faithfully follow the shape of the needle in the tracking frame, three fibers glued in a specific geometry are used, providing three degrees of freedom along the fiber. Near real-time tracking of medical instruments is thus obtained, offering clear advantages for clinical monitoring in remotely controlled catheter or needle guidance. We present results demonstrating the promising aspects of this approach as well as the limitations of the OFDR technique.
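
The strain-to-shape step above amounts to integrating curvature along the fiber's arc length. A 2D sketch for a single fiber (the paper uses three fibers for full 3D; the constant-curvature test case and step counts are my own):

```python
import numpy as np

# Shape reconstruction from a distributed curvature profile: integrate
# curvature to get the tangent angle, then integrate the tangent to get
# the curve (simple Euler integration, 2D case).

def shape_from_curvature(kappa, ds):
    theta = np.concatenate(([0.0], np.cumsum(kappa * ds)))     # tangent angle
    x = np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * ds)))
    y = np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * ds)))
    return x, y

# Constant curvature 1/R with R = 50 mm over a quarter circle:
R, n = 50.0, 2000
ds = (np.pi / 2) * R / n
x, y = shape_from_curvature(np.full(n, 1.0 / R), ds)
# starting along +x, a quarter arc of radius R should end near (R, R)
```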

  15. Patient specific respiratory motion modeling using a 3D patient’s external surface

    PubMed Central

    Fayad, Hadi; Pan, Tinsu; Pradier, Olivier; Visvikis, Dimitris

    2012-01-01

    Purpose: Respiratory motion modeling of both the tumor and surrounding tissues is a key element in minimizing errors and uncertainties in radiation therapy. Different continuous motion models have been developed previously. However, most of these models are based on parameters such as amplitude and phase extracted from a 1D external respiratory signal; a potentially reduced correlation between the internal structures (tumor and healthy organs) and the corresponding external surrogates obtained from such a 1D signal is a limitation of these models. The objective of this work is to describe a continuous patient-specific respiratory motion model, accounting for the irregular nature of respiratory signals, that uses patient external surface information as the surrogate measure rather than a 1D respiratory signal. Methods: Ten patients were used in this study, each with one 4D CT series, a synchronized RPM signal, and patient surfaces extracted from the 4D CT volumes using a threshold-based segmentation algorithm. A patient-specific model based on principal component analysis was subsequently constructed. This model relates the internal motion, described by deformation matrices, to the external motion, characterized by the amplitude and phase of the respiratory signal in the case of the RPM or by specific regions of interest (ROIs) in the case of the patients' external surface. The capability of the models considered to handle the irregular nature of respiration was assessed using two repeated 4D CT acquisitions (in two patients) and static CT images acquired at extreme respiration conditions (end of inspiration and expiration) for one patient. Results: Both quantitative and qualitative parameters covering local and global measures, including an expert observer study, were used to assess and compare the performance of the different motion estimation models considered. Results indicate that using surface information
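
The surrogate-to-internal mapping in such models can be sketched, at its simplest, as a linear regression from surface measures to internal motion parameters. This toy stands in for the paper's PCA-based model (all signals and dimensions are synthetic):

```python
import numpy as np

# Fit a linear map from external-surface surrogate signals (e.g. mean heights
# of a few surface ROIs) to internal motion parameters, then predict the
# internal motion for a new surface observation.

rng = np.random.default_rng(1)
n_phases = 40
surrogate = rng.standard_normal((n_phases, 3))       # 3 surface ROI signals
W_true = np.array([[2.0, 0.0, 1.0],
                   [0.5, 1.5, 0.0]])                 # internal = W @ surface
internal = surrogate @ W_true.T                      # 2 internal parameters

# Least-squares fit of the mapping, then prediction for an unseen phase.
W_fit, *_ = np.linalg.lstsq(surrogate, internal, rcond=None)
W_fit = W_fit.T
new_surface = np.array([1.0, -1.0, 0.5])
pred = W_fit @ new_surface                           # predicted internal motion
```

In the actual model, the "internal" side would be low-dimensional PCA coefficients of the deformation fields rather than raw parameters.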

  16. Sedimentary basin effects in Seattle, Washington: Ground-motion observations and 3D simulations

    USGS Publications Warehouse

    Frankel, Arthur; Stephenson, William; Carver, David

    2009-01-01

    Seismograms of local earthquakes recorded in Seattle exhibit surface waves in the Seattle basin and basin-edge focusing of S waves. Spectral ratios of S waves and later arrivals at 1 Hz for stiff-soil sites in the Seattle basin show a dependence on the direction to the earthquake, with earthquakes to the south and southwest producing higher average amplification. Earthquakes to the southwest typically produce larger basin surface waves relative to S waves than earthquakes to the north and northwest, probably because of the velocity contrast across the Seattle fault along the southern margin of the Seattle basin. S-to-P conversions are observed for some events and are likely converted at the bottom of the Seattle basin. We model five earthquakes, including the M 6.8 Nisqually earthquake, using 3D finite-difference simulations accurate up to 1 Hz. The simulations reproduce the observed dependence of amplification on the direction to the earthquake and generally match the timing and character of basin surface waves observed for many events. The 3D simulation for the Nisqually earthquake produces focusing of S waves along the southern margin of the Seattle basin near the area in west Seattle that experienced increased chimney damage from the earthquake, similar to the results of the higher-frequency 2D simulation reported by Stephenson et al. (2006). Waveforms from the 3D simulations show reasonable agreement with the data at low frequencies (0.2-0.4 Hz) for the Nisqually earthquake and an M 4.8 deep earthquake west of Seattle.

  17. Analysis of 3-D Tongue Motion from Tagged and Cine Magnetic Resonance Images

    ERIC Educational Resources Information Center

    Xing, Fangxu; Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2016-01-01

    Purpose: Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during…

  18. Motion Controllers for Learners to Manipulate and Interact with 3D Objects for Mental Rotation Training

    ERIC Educational Resources Information Center

    Yeh, Shih-Ching; Wang, Jin-Liang; Wang, Chin-Yeh; Lin, Po-Han; Chen, Gwo-Dong; Rizzo, Albert

    2014-01-01

    Mental rotation is an important spatial processing ability and an important element in intelligence tests. However, the majority of past attempts at training mental rotation have used paper-and-pencil tests or digital images. This study proposes an innovative mental rotation training approach using magnetic motion controllers to allow learners to…

  19. Modelling of U-tube Tanks for ShipMo3D Ship Motion Predictions

    DTIC Science & Technology

    2012-01-01

    Ship roll motions in waves can be significant due to small roll damping and the proximity of ship

  20. Upper Extremity Motion Assessment in Adult Ischemic Stroke Patients: A 3-D Kinematic Model

    DTIC Science & Technology

    2001-10-25

    Botox, motion analysis, hemiplegia, stroke. Recovery from ischemic stroke has been explained by patients learning new skills

  1. Well-posedness of linearized motion for 3-D water waves far from equilibrium

    SciTech Connect

    Hou, T.Y.; Zhen-huan Teng; Pingwen Zhang

    1996-12-31

    In this paper, we study the motion of a free surface separating two different layers of fluid in three dimensions. We assume the flow to be inviscid, irrotational, and incompressible. In this case, one can reduce the entire motion to variables on the surface alone. In general, without additional regularizing effects such as surface tension or viscosity, the flow can be subject to Rayleigh-Taylor or Kelvin-Helmholtz instabilities, which lead to unbounded growth in high-frequency wave numbers. In this case, the problem is not well-posed in the Hadamard sense. The water-wave problem, with no fluid above the surface, is a special case. It is well known that such motion is well-posed when the free surface is sufficiently close to equilibrium. Beale, Hou, and Lowengrub derived a general condition which ensures well-posedness of the linearization about a presumed time-dependent motion in the two-dimensional case. The linearized equations, when formulated in a proper coordinate system, are found to have a qualitative structure surprisingly like that for the simple case of linear waves near equilibrium. Such an analysis is essential in analyzing the stability of boundary integral methods for computing free interface problems. 19 refs.

  2. Combining marker-less patient setup and respiratory motion monitoring using low cost 3D camera technology

    NASA Astrophysics Data System (ADS)

    Tahavori, F.; Adams, E.; Dabbs, M.; Aldridge, L.; Liversidge, N.; Donovan, E.; Jordan, T.; Evans, PM.; Wells, K.

    2015-03-01

    Patient set-up misalignment/motion can be a significant source of error in external beam radiotherapy, leading to unwanted dose to healthy tissues and sub-optimal dose to the target tissue. Such inadvertent displacement or motion of the target volume may be caused by treatment set-up error, respiratory motion, or involuntary movement, potentially decreasing the therapeutic benefit. The conventional approach to managing abdominal-thoracic patient set-up is via skin markers (tattoos) and laser-based alignment. Alignment of the internal target volume with its position in the treatment plan can be achieved using Deep Inspiration Breath Hold (DIBH) in conjunction with marker-based respiratory motion monitoring. We propose a marker-less, single-system solution for patient set-up and respiratory motion management based on low-cost 3D depth camera technology (such as the Microsoft Kinect). In this new work we assess this approach in a study group of six volunteer subjects. Separate simulated treatment "fractions" (set-ups) are compared for each subject, undertaken using conventional laser-based alignment and using the intrinsic depth images produced by the Kinect. The Microsoft Kinect is also compared with the well-known RPM system for respiratory motion management in terms of monitoring free breathing and DIBH. Preliminary results suggest that the Kinect is able to produce mm-level surface alignment and comparable DIBH respiratory motion management when compared to the popular RPM system. Such an approach may also yield significant benefits in terms of patient throughput, as marker alignment and respiratory motion monitoring can be automated in a single system.
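
A marker-less respiratory signal of the kind described can be sketched as the mean depth inside a chest-wall region of interest over time: the trace rises and falls with breathing, and a breath-hold appears as a flat segment. The frames below are simulated arrays; no Kinect API is used, and all numbers are illustrative:

```python
import numpy as np

# Respiratory trace from depth frames: average the depth values inside a
# fixed chest-wall ROI for every frame.

def respiratory_trace(depth_frames, roi):
    r0, r1, c0, c1 = roi
    return np.array([f[r0:r1, c0:c1].mean() for f in depth_frames])

t = np.arange(0, 10, 0.1)                            # 10 s at 10 fps
chest = 800.0 + 5.0 * np.sin(2 * np.pi * 0.25 * t)   # 0.25 Hz breathing, mm
frames = [np.full((48, 64), d) for d in chest]       # flat synthetic frames
trace = respiratory_trace(frames, (10, 30, 20, 50))
amplitude = trace.max() - trace.min()                # peak-to-peak, ~10 mm
```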

  3. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras

    PubMed Central

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-01-01

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731

  4. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become increasingly important in recent years. Up to now, several well-established methods already yield impressive results. However, even under quite common conditions such as object movement or complex shapes, most methods become unsatisfactory. Thus, 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables motion-robust 3D acquisition of complex-shaped object surfaces by a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor-zoo for different object sizes is presented. In conclusion, an overview of current and future fields of investigation is given.

  5. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras.

    PubMed

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-11-18

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained.

  6. SVMT: a MATLAB toolbox for stereo-vision motion tracking of motor reactivity.

    PubMed

    Vousdoukas, M I; Perakakis, P; Idrissi, S; Vila, J

    2012-10-01

    This article presents a Matlab-based stereo-vision motion tracking system (SVMT) for the detection of human motor reactivity elicited by sensory stimulation. It is a low-cost, non-intrusive system supported by Graphical User Interface (GUI) software, and has been successfully tested and integrated with a broad array of physiological recording devices at the Human Physiology Laboratory of the University of Granada. The SVMT GUI software handles data in Matlab and ASCII formats. Internal functions perform lens distortion correction, camera geometry definition, feature matching, as well as data clustering and filtering to extract 3D motion paths of specific body areas. System validation showed geo-rectification errors below 0.5 mm, while feature matching and motion path extraction procedures were successfully validated against manual tracking, with RMS errors typically below 2% of the movement range. The application of the system in a psychophysiological experiment designed to elicit a startle motor response by the presentation of intense and unexpected acoustic stimuli provided reliable data probing dynamical features of motor responses and habituation to repeated stimulus presentations. The stereo-geolocation and motion tracking performance of the SVMT system were successfully validated through comparisons with surface EMG measurements of eyeblink startle, which clearly demonstrate the ability of SVMT to track subtle body movements, such as those induced by the presentation of intense acoustic stimuli. Finally, SVMT provides an efficient solution for the assessment of motor reactivity not only in controlled laboratory settings, but also in more open, ecological environments.
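
    The 3D path extraction in a stereo system like SVMT ultimately rests on triangulating matched features from the two cameras. A minimal sketch for an idealized rectified pair (principal point at the origin; focal length, baseline and pixel coordinates are invented values, not SVMT parameters):

```python
# Minimal stereo-triangulation sketch for a rectified camera pair.
# Simplifying assumptions: rectified images, principal point at (0, 0),
# square pixels. All numeric values are illustrative.

def triangulate(xl, xr, y, f=800.0, baseline=0.12):
    """Recover a 3D point (metres, left-camera frame) from matched
    pixel columns xl, xr (left/right image) and shared row y.
    f is the focal length in pixels, baseline in metres."""
    d = xl - xr                # disparity in pixels
    Z = f * baseline / d       # depth
    X = xl * Z / f             # lateral position
    Y = y * Z / f
    return X, Y, Z

X, Y, Z = triangulate(xl=420.0, xr=380.0, y=100.0)
print(round(Z, 3))   # 2.4 m for a 40-pixel disparity
```

    Repeating this for each matched feature in each frame yields the 3D motion paths; lens-distortion correction (which SVMT also performs) would be applied to the pixel coordinates beforehand.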

  7. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
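
    The core aggregation idea, formulating sequences as transitions between discrete pose states and merging them into a weighted flow, can be sketched as follows; the pose labels and `aggregate` helper are invented for illustration and are not MotionFlow's API:

```python
# Hypothetical sketch of MotionFlow-style aggregation: motion
# sequences, expressed as lists of discrete pose states, are merged
# into transition counts that a flow/tree visualization could render.

from collections import Counter

def aggregate(sequences):
    """Count pose-to-pose transitions across all sequences."""
    return Counter((a, b) for seq in sequences
                          for a, b in zip(seq, seq[1:]))

gestures = [
    ["rest", "raise", "wave", "rest"],
    ["rest", "raise", "point", "rest"],
    ["rest", "raise", "wave", "rest"],
]
flow = aggregate(gestures)
print(flow[("rest", "raise")], flow[("raise", "wave")])   # 3 2
```

    In the actual system the pose states come from user-guided clustering of tracked skeletons, and the counts determine the widths of the branches in the tree diagram.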

  8. Kinetic Depth Effect and Optic Flow 1. 3D Shape from Fourier Motion

    DTIC Science & Technology

    1987-01-01

    rectification are unaffected by alternating-polarity but disrupted by interposed gray-frames. (2) To equate the accuracy of 2AFC planar direction-of...of the input stimulus. Direction. Discrimination between left and right motion direction (two-alternative forced choice, 2AFC Direction) minimally...and the standard stimulus would be recovered. 2AFC-Direction performance is impaired by polarity alternation, but still well above chance for a wide

  9. Image segmentation and registration for the analysis of joint motion from 3D MRI

    NASA Astrophysics Data System (ADS)

    Hu, Yangqiu; Haynor, David R.; Fassbind, Michael; Rohr, Eric; Ledoux, William

    2006-03-01

    We report an image segmentation and registration method for studying joint morphology and kinematics from in vivo MRI scans and its application to the analysis of ankle joint motion. Using an MR-compatible loading device, a foot was scanned in a single neutral and seven dynamic positions including maximal flexion, rotation and inversion/eversion. A segmentation method combining graph cuts and level sets was developed which allows a user to interactively delineate 14 bones in the neutral position volume in less than 30 minutes total, including less than 10 minutes of user interaction. In the subsequent registration step, a separate rigid body transformation for each bone is obtained by registering the neutral position dataset to each of the dynamic ones, which produces an accurate description of the motion between them. We have processed six datasets, including 3 normal and 3 pathological feet. For validation our results were compared with those obtained from 3DViewnix, a semi-automatic segmentation program, and achieved good agreement in volume overlap ratios (mean: 91.57%, standard deviation: 3.58%) for all bones. Our tool requires only 1/50 and 1/150 of the user interaction time required by 3DViewnix and NIH Image Plus, respectively, an improvement that has the potential to make joint motion analysis from MRI practical in research and clinical applications.
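
    The per-bone registration step estimates one rigid transform per bone between the neutral and each dynamic position. A common least-squares solution for this (the Kabsch/Procrustes method, offered here as a generic sketch rather than the authors' exact implementation) on synthetic point sets:

```python
import numpy as np

# Sketch of per-bone rigid registration: estimate the rotation R and
# translation t mapping corresponding surface points P (neutral scan)
# onto Q (dynamic position) via the Kabsch/Procrustes method.
# Point sets here are synthetic, not MRI data.

def rigid_fit(P, Q):
    """Least-squares rigid transform with Q ~= P @ R.T + t.
    P and Q are (N, 3) arrays of corresponding points."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # reflection-corrected rotation
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_fit(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))
```

    In the joint-motion pipeline, P and Q would be points sampled from the same bone in the neutral and dynamic volumes after segmentation, and the recovered (R, t) describes that bone's kinematics.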

  10. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscope, barometers and most importantly cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high performance platforms for low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on tracking using stereo vision and inertial navigation to determine movement of the system as well as create a model of the environment sensed by the system.

  11. Laetoli’s lost tracks: 3D generated mean shape and missing footprints

    PubMed Central

    Bennett, M. R.; Reynolds, S. C.; Morse, S. A.; Budka, M.

    2016-01-01

    The Laetoli site (Tanzania) contains the oldest known hominin footprints, and their interpretation remains open to debate, despite over 35 years of research. The two hominin trackways present are parallel to one another, one of which is a composite formed by at least two individuals walking in single file. Most researchers have focused on the single, clearly discernible G1 trackway while the G2/3 trackway has been largely dismissed due to its composite nature. Here we report the use of a new technique that allows us to decouple the G2 and G3 tracks for the first time. In so doing we are able to quantify the mean footprint topology of the G3 trackway and render it useable for subsequent data analyses. By restoring the effectively ‘lost’ G3 track, we have doubled the available data on some of the rarest traces directly associated with our Pliocene ancestors. PMID:26902912
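
    Once the composite G2/G3 prints are decoupled and each G3 print is gridded as a registered depth map, the "mean footprint topology" is essentially a per-cell average. A toy sketch with synthetic stand-in depth maps (the gridding and registration steps are assumed already done):

```python
import numpy as np

# Toy sketch of mean-track computation: average several registered
# footprint depth maps cell by cell. The "prints" are synthetic
# noisy copies of an idealized surface, not Laetoli data.

rng = np.random.default_rng(4)
base = np.hypot(*np.meshgrid(np.linspace(-1, 1, 20),
                             np.linspace(-1, 1, 40)))   # idealized print
prints = [base + rng.normal(scale=0.05, size=base.shape)
          for _ in range(8)]                            # 8 noisy tracks
mean_shape = np.mean(prints, axis=0)                    # per-cell average
print(mean_shape.shape)   # one mean depth map on the common grid
```

    Averaging suppresses the per-print noise (roughly by the square root of the number of prints), which is what makes the restored G3 mean shape usable for subsequent analyses.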

  12. On feature motion decorrelation in ultrasound speckle tracking.

    PubMed

    Liang, Tianzhu; Yung, Lingsing; Yu, Weichuan

    2013-02-01

    Speckle tracking methods refer to motion tracking methods based on speckle patterns in ultrasound images. They are commonly used in ultrasound based elasticity imaging techniques to reveal mechanical properties of tissues for clinical diagnosis. In speckle tracking, feature motion decorrelation exists when speckle patterns are not identical before and after tissue motion and deformation. Feature motion decorrelation violates the underlying assumption of most speckle tracking methods. Consequently, the estimation accuracy of current methods is greatly limited. In this paper, two types of speckle pattern variations, the geometric transformation and the intensity change of speckle patterns, are studied. We show that a coupled filtering method is able to compensate for both types of variations. It provides accurate strain estimations even when tissue deformation or rotation is extremely large. We also show that in most cases, an affine warping method that only compensates for the geometric transformation is able to achieve a similar performance as the coupled filtering method. Feature motion decorrelation in B-mode images is also studied. Finally, we show that in typical elastography studies, speckle tracking methods without modeling local shearing or rotation will fail when tissue deformation is large.
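
    The core matching operation underlying speckle tracking can be illustrated with a toy 1D block matcher; this is a generic sketch, not the paper's coupled filtering method, which additionally compensates for geometric and intensity decorrelation:

```python
import numpy as np

# Toy 1D speckle-tracking sketch: estimate an integer displacement
# between pre- and post-motion RF lines by maximizing normalized
# cross-correlation over a search window. Real methods also handle
# sub-sample shifts and 2D/3D deformation; signals here are synthetic.

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def track_shift(pre, post, start, length, max_shift):
    """Best integer shift of the kernel pre[start:start+length]
    within post, searched over [-max_shift, max_shift]."""
    kernel = pre[start:start + length]
    scores = {s: ncc(kernel, post[start + s:start + s + length])
              for s in range(-max_shift, max_shift + 1)}
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
pre = rng.normal(size=200)
post = np.roll(pre, 7)            # simulate a 7-sample tissue motion
print(track_shift(pre, post, start=60, length=40, max_shift=15))  # 7
```

    Feature motion decorrelation is exactly the failure mode of this matcher: when deformation changes the speckle pattern (not just shifts it), the correlation peak degrades, which motivates the compensation schemes studied in the paper.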

  13. Image-guided tumor motion modeling and tracking

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Wu, Y.; Liu, W.; Christensen, J.; Tai, A.; Li, A. X.

    2009-02-01

    Radiation therapy (RT) is an important procedure in the treatment of cancer in the thorax and abdomen. However, its efficacy can be severely limited by breathing induced tumor motion. Tumor motion causes uncertainty in the tumor's location and consequently limits the radiation dosage (for fear of damaging normal tissue). This paper describes a novel signal model for tumor motion tracking/prediction that can potentially improve RT results. Using CT and breathing sensor data, it provides a more accurate characterization of the breathing and tumor motion than previous work and is non-invasive. The efficacy of our model is demonstrated on patient data.

  14. Long Period Ground Motion Prediction Of Linked Tonankai And Nankai Subduction Earthquakes Using 3D Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Kawabe, H.; Kamae, K.

    2005-12-01

    There is a high possibility of the occurrence of the Tonankai and Nankai earthquakes, which are capable of causing immense damage. During these huge earthquakes, long-period ground motions may strike the mega-cities Osaka and Nagoya, located inside the Osaka and Nobi basins, in which there are many long-period, low-damping structures (such as tall buildings and oil tanks). It is very important for earthquake disaster mitigation to predict the long-period strong ground motions of the future Tonankai and Nankai earthquakes, which are capable of exciting long-period strong ground motions over a wide area. In this study, we predicted long-period ground motions of the future Tonankai and Nankai earthquakes using a 3D finite difference method. We constructed a three-dimensional underground structure model including not only the basins but also the propagation field from the source to the basins. As a result, we can point out that the predominant periods of the pseudo-velocity response spectra change from basin to basin. Long-period ground motions with periods of 5 to 8 seconds are predominant in the Osaka basin, 3 to 6 seconds in the Nobi basin, and 2 to 5 seconds in the Kyoto basin. These characteristics of the long-period ground motions are related to the thicknesses of the sediments of the basins. The duration of long-period ground motions inside the basins is more than 5 minutes. These results are very useful for the earthquake disaster mitigation of long-period structures such as tall buildings and oil tanks.
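
    The finite difference approach referenced above can be illustrated in heavily simplified form by a 1D second-order scheme for the scalar wave equation; the grid, wave speed, and initial pulse below are illustrative and bear no relation to the 3D basin model:

```python
import numpy as np

# Heavily simplified 1D analogue of finite-difference wave
# simulation: second-order scheme for u_tt = c^2 u_xx with fixed
# ends and zero initial velocity. CFL number c*dt/dx = 0.5 (stable).

nx, nt = 200, 400
dx, dt, c = 1.0, 0.005, 100.0
x = np.arange(nx)
u = np.exp(-0.05 * (x - 100.0) ** 2)   # initial displacement pulse
u_prev = u.copy()                      # zero initial velocity

for _ in range(nt):
    u_next = np.zeros(nx)              # ends stay clamped at zero
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2
                    * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print(bool(np.all(np.isfinite(u))))   # stable run, bounded amplitudes
```

    The 3D production codes used for such predictions follow the same stencil logic with velocity/stress formulations, realistic velocity structure, and absorbing boundaries instead of clamped ends.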

  15. Diaphragm motion characterization using chest motion data for biomechanics-based lung tumor tracking during EBRT

    NASA Astrophysics Data System (ADS)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2016-03-01

    Despite recent advances in image-guided interventions, lung cancer External Beam Radiation Therapy (EBRT) is still very challenging due to respiration induced tumor motion. Among various proposed methods of tumor motion compensation, real-time tumor tracking is known to be one of the most effective solutions as it allows for maximum normal tissue sparing, less overall radiation exposure and a shorter treatment session. As such, we propose a biomechanics-based real-time tumor tracking method for effective lung cancer radiotherapy. In the proposed algorithm, the required boundary conditions for the lung Finite Element model, including diaphragm motion, are obtained using the chest surface motion as a surrogate signal. The primary objective of this paper is to demonstrate the feasibility of developing a function which is capable of inputting the chest surface motion data and outputting the diaphragm motion in real-time. For this purpose, after quantifying the diaphragm motion with a Principal Component Analysis (PCA) model, correlation coefficient between the model parameters of diaphragm motion and chest motion data was obtained through Partial Least Squares Regression (PLSR). Preliminary results obtained in this study indicate that the PCA coefficients representing the diaphragm motion can be obtained through chest surface motion tracking with high accuracy.
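
    The pipeline described above (PCA compression of diaphragm motion, then regression from the chest surrogate to the PCA coefficients) can be sketched as follows. Ordinary least squares stands in for the paper's PLSR, and all signals are synthetic:

```python
import numpy as np

# Conceptual sketch of the surrogate pipeline: compress diaphragm
# motion traces with PCA, then learn a linear map from the chest
# surface signal to the PCA coefficients. OLS replaces the paper's
# PLSR; signals are synthetic stand-ins.

rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 100)
chest = np.column_stack([np.sin(t), np.cos(t)])   # surrogate features
D = np.outer(np.sin(t), np.ones(30)) * 15.0       # diaphragm traces (mm)
D += rng.normal(scale=0.1, size=D.shape)

# PCA of the diaphragm motion via SVD of the centered data matrix
mean = D.mean(0)
U, S, Vt = np.linalg.svd(D - mean, full_matrices=False)
coeffs = (D - mean) @ Vt[:1].T                    # first-PC scores

# Linear regression: chest surrogate -> PC coefficients
W, *_ = np.linalg.lstsq(chest, coeffs, rcond=None)
pred = chest @ W
r = np.corrcoef(pred.ravel(), coeffs.ravel())[0, 1]
print(r > 0.99)   # the surrogate reconstructs the dominant mode
```

    In the proposed method the predicted coefficients would drive the diaphragm boundary condition of the lung finite element model in real time.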

  16. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by deformation analyses that directly exploit the original 3D point clouds, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the

  17. Instability of the perceived world while watching 3D stereoscopic imagery: A likely source of motion sickness symptoms

    PubMed Central

    Hwang, Alex D.; Peli, Eli

    2014-01-01

    Watching 3D content using a stereoscopic display may cause various discomforting symptoms, including eye strain, blurred vision, double vision, and motion sickness. Numerous studies have reported motion-sickness-like symptoms during stereoscopic viewing, but no causal linkage between specific aspects of the presentation and the induced discomfort has been explicitly proposed. Here, we describe several causes, in which stereoscopic capture, display, and viewing differ from natural viewing resulting in static and, importantly, dynamic distortions that conflict with the expected stability and rigidity of the real world. This analysis provides a basis for suggested changes to display systems that may alleviate the symptoms, and suggestions for future studies to determine the relative contribution of the various effects to the unpleasant symptoms. PMID:26034562

  18. Instability of the perceived world while watching 3D stereoscopic imagery: A likely source of motion sickness symptoms.

    PubMed

    Hwang, Alex D; Peli, Eli

    2014-01-01

    Watching 3D content using a stereoscopic display may cause various discomforting symptoms, including eye strain, blurred vision, double vision, and motion sickness. Numerous studies have reported motion-sickness-like symptoms during stereoscopic viewing, but no causal linkage between specific aspects of the presentation and the induced discomfort has been explicitly proposed. Here, we describe several causes, in which stereoscopic capture, display, and viewing differ from natural viewing resulting in static and, importantly, dynamic distortions that conflict with the expected stability and rigidity of the real world. This analysis provides a basis for suggested changes to display systems that may alleviate the symptoms, and suggestions for future studies to determine the relative contribution of the various effects to the unpleasant symptoms.

  19. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking

    SciTech Connect

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-15

    Purpose: In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. Methods: The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a {gamma}-test with a 3%/3 mm criterion. Results: The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the {gamma}-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation
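
    The trade-off the study quantifies can be sketched numerically: a causal moving average of the target position smooths the motion the MLC must follow (improving delivery efficiency) at the cost of geometric lag. The breathing trace, sampling rate, and window below are illustrative, not the study's parameters:

```python
import numpy as np

# Sketch of moving-average tracking: smooth the target trajectory
# with a causal moving average and compare its RMS geometric error
# against no compensation. Trace and window are illustrative.

def moving_average(x, window):
    """Causal moving average: mean of the last `window` samples."""
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[max(0, i - window + 1):i + 1].mean()
    return out

t = np.arange(0, 30, 0.1)                       # 10 Hz sampling, 30 s
target = 5.0 * np.sin(2 * np.pi * t / 4.0)      # 4 s breathing, 5 mm amp
smoothed = moving_average(target, window=5)     # 0.5 s averaging window

rms_ma = np.sqrt(np.mean((smoothed - target) ** 2))
rms_none = np.sqrt(np.mean((0.0 - target) ** 2))  # no compensation
print(rms_ma < rms_none)   # averaging beats no compensation
```

    Note the window length matters: averaging over a large fraction of the breathing period introduces enough phase lag that the geometric error approaches (or exceeds) the no-compensation case, consistent with the intermediate accuracy the study reports for moving average tracking.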

  20. Tracking immune-related cell responses to drug delivery microparticles in 3D dense collagen matrix.

    PubMed

    Obarzanek-Fojt, Magdalena; Curdy, Catherine; Loggia, Nicoletta; Di Lena, Fabio; Grieder, Kathrin; Bitar, Malak; Wick, Peter

    2016-10-01

    Beyond the therapeutic purpose, the impact of drug delivery microparticles on the local tissue and inflammatory responses remains to be further elucidated, specifically for reactions mediated by the host immune cells. Such immediate and prolonged reactions may adversely influence the release efficacy and intended therapeutic pathway. The lack of suitable in vitro platforms limits our ability to gain insight into the nature of immune responses at a single cell level. In order to establish an in vitro 3D system mimicking the connective host tissue counterpart, we utilized reproducible, compressed, rat-tail collagen polymerized matrices. THP1 cells (human acute monocytic leukaemia cells) differentiated into macrophage-like cells were chosen as the cell model, and their functionality was retained in the dense rat-tail collagen matrix. Placebo microparticles were later combined into the immune-cell-seeded system during collagen polymerization, and the secreted pro-inflammatory factors TNFα and IL-8 were used as the immune response readout (ELISA). Our data showed elevated TNFα and IL-8 secretion by macrophage THP1 cells, indicating that placebo microparticles trigger certain immune cell responses under 3D in vivo-like conditions. Furthermore, we have shown that the system is sensitive enough to measure the differences in THP1 macrophage pro-inflammatory responses to Active Pharmaceutical Ingredient (API) microparticles with different API release kinetics. We have successfully developed a tissue-like, advanced, in vitro system enabling selective "readouts" of specific responses of immune-related cells. Such a system may provide the basis of an advanced toolbox enabling systemic evaluation and prediction of in vivo microparticle reactions on human immune-related cells.

  1. Hybrid markerless tracking of complex articulated motion in golf swings.

    PubMed

    Fung, Sim Kwoh; Sundaraj, Kenneth; Ahamed, Nizam Uddin; Kiang, Lam Chee; Nadarajah, Sivadev; Sahayadhas, Arun; Ali, Md Asraf; Islam, Md Anamul; Palaniappan, Rajkumar

    2014-04-01

    Sports video tracking is a research topic that has attracted increasing attention due to its high commercial potential. A number of sports, including tennis, soccer, gymnastics, running, golf, badminton and cricket, have been used to demonstrate novel ideas in sports motion tracking. The main challenge associated with this research concerns the extraction of a highly complex articulated motion from a video scene. Our research focuses on the development of a markerless human motion tracking system that tracks the major body parts of an athlete straight from a sports broadcast video. We proposed a hybrid tracking method, which consists of a combination of three algorithms (pyramidal Lucas-Kanade optical flow (LK), normalised correlation-based template matching and background subtraction), to track the golfer's head, body, hands, shoulders, knees and feet during a full swing. We then match, track and map the results onto a 2D articulated human stick model to represent the pose of the golfer over time. Our work was tested using two video broadcasts of a golfer, and we obtained satisfactory results. The current outcomes of this research can play an important role in enhancing the performance of a golfer, provide vital information to sports medicine practitioners through technically sound guidance on movements, and should help diminish the risk of golfing injuries.
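
    One ingredient of the hybrid method, normalized correlation-based template matching, can be sketched in a few lines; the `match_template` helper and synthetic frames below are illustrative, not the authors' implementation:

```python
import numpy as np

# Sketch of normalized cross-correlation template matching: locate a
# small patch (e.g. the golfer's head from the previous frame) in the
# current frame by exhaustive search. Frames are synthetic arrays.

def match_template(frame, template):
    """Return (row, col) of the best normalized-correlation match."""
    th, tw = template.shape
    tz = template - template.mean()
    tn = np.linalg.norm(tz)
    best, best_pos = -2.0, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            win = frame[r:r + th, c:c + tw]
            wz = win - win.mean()
            denom = np.linalg.norm(wz) * tn
            score = float((wz * tz).sum() / denom) if denom > 0 else -1.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(3)
frame = rng.random((60, 80))
template = frame[22:30, 41:51].copy()     # 8x10 patch taken at (22, 41)
print(match_template(frame, template))    # recovers (22, 41)
```

    In the full hybrid tracker this matcher is combined with pyramidal LK optical flow (for fast inter-frame motion) and background subtraction (to constrain the search region) before mapping onto the stick model.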

  2. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying datasets being generated demand new visualization and quantification techniques. Visualization alone is not sufficient. Without proper measurement information/computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology which allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates over data in situ: where it is stored and when it was computed.

  3. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV kV imaging

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.

    2008-12-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real
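
    Geometrically, MV-kV triangulation amounts to intersecting two imaging rays: each imager defines a ray from its source through the detected marker pixel, and the marker position is the least-squares closest point to both rays. A generic sketch with synthetic geometry (not the calibration procedure of the paper):

```python
import numpy as np

# Geometric sketch of MV-kV triangulation: estimate the marker's 3D
# position as the point minimizing squared distance to both imaging
# rays. Source positions and ray directions are synthetic.

def triangulate_rays(origins, directions):
    """Point minimizing the sum of squared distances to all rays.
    origins, directions: (N, 3) arrays; directions need not be unit."""
    A = np.zeros((3, 3)); b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P; b += P @ o
    return np.linalg.solve(A, b)

marker = np.array([1.0, 2.0, 3.0])
sources = np.array([[0.0, 0.0, 10.0], [10.0, 0.0, 0.0]])  # MV, kV sources
dirs = marker - sources                  # both rays pass through the marker
print(np.allclose(triangulate_rays(sources, dirs), marker))
```

    The system calibration established in the paper is what supplies the mapping from detected pixels to these world-frame rays at every gantry angle during the arc.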

  4. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    PubMed

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general

  5. Physical Simulation for Probabilistic Motion Tracking

    DTIC Science & Technology

    2008-01-01

    mass, inertial properties, and collision geometries) are known for each rigid body segment. Given these properties and a state hypothesis at frame f ...tracked individual into the prediction process. We assume that the segment shapes, mass properties, collision geometries and other associated... In PF the posterior, p(xf | y1:f), where xf is the state of the body at time instant f and y1:f is the set of observations up to the time instant f
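    The particle-filter (PF) posterior p(xf | y1:f) quoted in this snippet can be illustrated with a minimal sequential importance resampling step. This is a generic 1D sketch under assumed Gaussian motion and observation noise, not the report's physics-based prediction model:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, y, motion_std=0.1, obs_std=0.5):
    """One sequential importance resampling (SIR) update of p(x_f | y_1:f)."""
    # Predict: propagate each state hypothesis through the dynamics model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: reweight by the observation likelihood p(y_f | x_f).
    weights = weights * np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size drops below half.
    if 1.0 / np.sum(weights**2) < 0.5 * particles.size:
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

particles = rng.normal(0.0, 1.0, size=500)
weights = np.full(500, 1.0 / 500)
for _ in range(50):
    y = 2.0 + rng.normal(0.0, 0.5)   # noisy observations of a static state
    particles, weights = pf_step(particles, weights, y)
estimate = np.sum(weights * particles)
```

    In the report's setting the prediction step would be replaced by the physical simulation of the body segments rather than additive Gaussian noise.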

  6. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances ranging up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming.

  7. Lagrangian 3D particle tracking in high-speed flows: Shake-The-Box for multi-pulse systems

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Schanz, Daniel; Reuther, Nico; Kähler, Christian J.; Schröder, Andreas

    2016-08-01

    The Shake-The-Box (STB) particle tracking technique, recently introduced for time-resolved 3D particle image velocimetry (PIV) images, is applied here to data from a multi-pulse investigation of a turbulent boundary layer flow with adverse pressure gradient in air at 36 m/s (Reτ = 10,650). The multi-pulse acquisition strategy allows for the recording of four-pulse long time-resolved sequences with a time separation of a few microseconds. The experimental setup consists of a dual-imaging system and a dual-double-cavity laser emitting orthogonal polarization directions to separate the four pulses. The STB particle triangulation and tracking strategy is adapted here to cope with the limited amount of realizations available along the time sequence and to take advantage of the ghost track reduction offered by the use of two independent imaging systems. Furthermore, a correction scheme to compensate for camera vibrations is discussed, together with a method to accurately identify the position of the wall within the measurement domain. Results show that approximately 80,000 tracks can be instantaneously reconstructed within the measurement volume, enabling the evaluation of both dense velocity fields, suitable for spatial gradients evaluation, and highly spatially resolved boundary layer profiles. Turbulent boundary layer profiles obtained from ensemble averaging of the STB tracks are compared to results from 2D-PIV and long-range micro particle tracking velocimetry; the comparison shows the capability of the STB approach in delivering accurate results across a wide range of scales.

  8. A smart homecage system with 3D tracking for long-term behavioral experiments.

    PubMed

    Byunghun Lee; Kiani, Mehdi; Ghovanloo, Maysam

    2014-01-01

    A wirelessly-powered homecage system, called the EnerCage-HC, that is equipped with multi-coil wireless power transfer, closed-loop power control, optical behavioral tracking, and a graphic user interface (GUI) is presented for long-term electrophysiology experiments. The EnerCage-HC system can wirelessly power a mobile unit attached to a small animal subject and also track its behavior in real-time as it is housed inside a standard homecage. The EnerCage-HC system is equipped with one central and four overlapping slanted wire-wound coils (WWCs) with optimal geometries to form 3- and 4-coil power transmission links while operating at 13.56 MHz. Utilizing multi-coil links increases the power transfer efficiency (PTE) compared to conventional 2-coil links and also reduces the number of power amplifiers (PAs) to only one, which significantly reduces the system complexity, cost, and dissipated heat. A Microsoft Kinect installed 90 cm above the homecage localizes the animal position and orientation with 1.6 cm accuracy. An in vivo experiment was conducted on a freely behaving rat by continuously delivering 24 mW to the mobile unit for > 7 hours inside a standard homecage.

  9. Multisensor 3D tracking for counter small unmanned air vehicles (CSUAV)

    NASA Astrophysics Data System (ADS)

    Vasquez, Juan R.; Tarplee, Kyle M.; Case, Ellen E.; Zelnio, Anne M.; Rigling, Brian D.

    2008-04-01

    A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. Large UAVs are typically state-owned, whereas small UAVs (SUAVs) may be widely available remote-controlled aircraft. The potential threat of these SUAVs to both the military and the civilian populace has led to research efforts to counter these assets via track, ID, and attack. Their small size and low radar cross section make it difficult to detect and track these targets with a single sensor such as radar or video cameras. In addition, clutter objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an acoustic array, and, in the future, a radar. A sensor resource management concept is presented along with preliminary results from three of the sensors.

  10. A Detailed Study of FDIRC Prototype with Waveform Digitizing Electronics in Cosmic Ray Telescope Using 3D Tracks.

    SciTech Connect

    Nishimura, K

    2012-07-01

    We present a detailed study of a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC) with waveform digitizing electronics. In this test study, the FDIRC prototype has been instrumented with seven Hamamatsu H-8500 MaPMTs. Waveforms from ~450 pixels are digitized with waveform sampling electronics based on the BLAB2 ASIC, operating at a sampling speed of ~2.5 GSa/s. The FDIRC prototype was tested in a large cosmic ray telescope (CRT) providing 3D muon tracks with ~1.5 mrad angular resolution and muon energy of Emuon greater than 1.6 GeV. In this study we provide a detailed analysis of the tails in the Cherenkov angle distribution as a function of various variables, compare experimental results with simulation, and identify the major contributions to the tails. We demonstrate that to see the full impact of these tails on the Cherenkov angle resolution, it is crucial to use 3D tracks, and have a full understanding of the role of ambiguities. These issues could not be fully explored in previous FDIRC studies where the beam was perpendicular to the quartz radiator bars. This work is relevant for the final FDIRC prototype of the PID detector at SuperB, which will be tested this year in the CRT setup.

  12. Dynamic tracking of a deformable tissue based on 3D-2D MR-US image registration

    NASA Astrophysics Data System (ADS)

    Marami, Bahram; Sirouspour, Shahin; Fenster, Aaron; Capson, David W.

    2014-03-01

    Real-time registration of pre-operative magnetic resonance (MR) or computed tomography (CT) images with intra-operative ultrasound (US) images can be a valuable tool in image-guided therapies and interventions. This paper presents an automatic method for dynamically tracking the deformation of a soft tissue based on registering pre-operative three-dimensional (3D) MR images to intra-operative two-dimensional (2D) US images. The registration algorithm is based on concepts in state estimation, where a dynamic finite element (FE)-based linear elastic deformation model correlates the imaging data in the spatial and temporal domains. A Kalman-like filtering process estimates the unknown deformation states of the soft tissue using the deformation model and a measure of error between the predicted and the observed intra-operative imaging data. The error is computed with an intensity-based distance metric, namely the modality independent neighborhood descriptor (MIND); no segmentation or feature extraction from images is required. The performance of the proposed method is evaluated by dynamically deforming 3D pre-operative MR images of a breast phantom tissue based on real-time 2D images obtained from an US probe. Experimental results on different registration scenarios showed that deformation tracking converges in a few iterations. The average target registration error on the plane of 2D US images for manually selected fiducial points was between 0.3 and 1.5 mm, depending on the size of deformation.

  13. Spatial synchronization of an insole pressure distribution system with a 3D motion analysis system for center of pressure measurements.

    PubMed

    Fradet, Laetitia; Siegel, Johannes; Dahl, Marieke; Alimusaj, Merkur; Wolf, Sebastian I

    2009-01-01

    Insole pressure systems are often more appropriate than force platforms for analysing the center of pressure (CoP), as they are more flexible in use and indicate the position of the CoP characterizing the foot/shoe contact during shod gait. However, these systems are typically not synchronized with 3D motion analysis systems. The present paper proposes a direct method that does not require a force platform for synchronizing an insole pressure system with a 3D motion analysis system. The distance separating 24 different CoPs measured optically and their equivalents measured by the insoles and transformed into the global coordinate system did not exceed 2 mm, confirming the suitability of the proposed method. Additionally, during static single limb stance, distances smaller than 7 mm and correlations higher than 0.94 were found between CoP trajectories measured with insoles and force platforms. Similar measurements were performed during gait to illustrate the characteristics of the CoP measured with each system. The distance separating the two CoPs was below 19 mm and the coefficient of correlation above 0.86. The proposed method offers the possibility to conduct new experiments, such as the investigation of proprioception in climbing stairs or in the presence of obstacles.
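    The transformation of an insole CoP into the global coordinate system can be sketched as a rigid-body mapping, assuming the insole pose (rotation R, translation t) is supplied by the 3D motion analysis system; the function name and frame conventions here are illustrative, not taken from the paper:

```python
import numpy as np

def cop_to_global(cop_uv, R, t):
    """Map a CoP measured in the insole's planar (u, v) frame into the
    lab frame, given the insole pose (R, t) tracked by the optical system."""
    p_local = np.array([cop_uv[0], cop_uv[1], 0.0])  # insole plane is z = 0
    return R @ p_local + t

# Example pose: insole rotated 90 degrees about the vertical axis,
# then translated 1 m along x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
cop_global = cop_to_global((0.1, 0.2), R, t)
```

    In practice R and t would be reconstructed each frame from reflective markers fixed to the shoe, which is what makes the spatial synchronization step necessary.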

  14. Dynamics of errors in 3D motion estimation and implications for strain-tensor imaging in acoustic elastography

    NASA Astrophysics Data System (ADS)

    Bilgen, Mehmet

    2000-06-01

    For the purpose of quantifying the noise in acoustic elastography, a displacement covariance matrix is derived analytically for the cross-correlation based 3D motion estimator. Static deformation induced in tissue from an external mechanical source is represented by a second-order strain tensor. A generalized 3D model is introduced for the ultrasonic echo signals. The components of the covariance matrix are related to the variances of the displacement errors and the errors made in estimating the elements of the strain tensor. The results are combined to investigate the dependences of these errors on the experimental and signal-processing parameters as well as to determine the effects of one strain component on the estimation of the other. The expressions are evaluated for special cases of axial strain estimation in the presence of axial, axial-shear and lateral-shear type deformations in 2D. The signals are shown to decorrelate with any of these deformations, with strengths depending on the reorganization and interaction of tissue scatterers with the ultrasonic point spread function following the deformation. Conditions that favour the improvements in motion estimation performance are discussed, and advantages gained by signal companding and pulse compression are illustrated.
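    The cross-correlation-based motion estimator analysed above can be illustrated in its simplest 1D axial form: the displacement between pre- and post-deformation echo signals is taken as the lag of the correlation peak. A minimal sketch with simulated RF lines (integer-sample shifts only, no subsample interpolation):

```python
import numpy as np

def estimate_shift(pre, post):
    """Axial displacement (in samples) between two echo lines, taken as
    the lag of the peak of their cross-correlation."""
    corr = np.correlate(post, pre, mode="full")
    return int(np.argmax(corr)) - (len(pre) - 1)

rng = np.random.default_rng(1)
pre = rng.normal(size=256)   # stand-in for a pre-deformation RF line
post = np.roll(pre, 5)       # post-deformation line with a 5-sample shift
shift = estimate_shift(pre, post)
```

    Subsample accuracy, as needed for strain imaging, would follow from interpolating (e.g. parabolically) around the correlation peak; the decorrelation effects analysed in the abstract appear once the deformation reorganizes the scatterers rather than merely shifting them.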

  15. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    SciTech Connect

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; Gable, Carl W.; Karra, Satish

    2015-09-16

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeability rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass-balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.
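    Once a continuously interpolated velocity field is available, the particle tracking step reduces to numerical advection through that field. A minimal midpoint (RK2) sketch, with a uniform field standing in for the reconstructed Darcy velocity; the function names are illustrative:

```python
import numpy as np

def advect(pos, velocity_at, dt, n_steps):
    """Track a particle through a continuously interpolated velocity field
    with midpoint (second-order Runge-Kutta) stepping."""
    track = [pos.copy()]
    for _ in range(n_steps):
        v_here = velocity_at(pos)
        v_mid = velocity_at(pos + 0.5 * dt * v_here)  # midpoint velocity
        pos = pos + dt * v_mid
        track.append(pos.copy())
    return np.array(track)

# A uniform field stands in for the reconstructed Darcy velocity.
velocity = lambda p: np.array([1.0, 0.5, 0.0])
track = advect(np.zeros(3), velocity, dt=0.1, n_steps=10)
```

    In a DFN code the `velocity_at` callback would interpolate the cell-centroid Darcy velocities, with the fracture-intersection splitting described above selecting which of the multiple local velocities applies.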

  17. Ultra-high-speed 3D astigmatic particle tracking velocimetry: application to particle-laden supersonic impinging jets

    NASA Astrophysics Data System (ADS)

    Buchmann, N. A.; Cierpka, C.; Kähler, C. J.; Soria, J.

    2014-11-01

    The paper demonstrates ultra-high-speed three-component, three-dimensional (3C3D) velocity measurements of micron-sized particles suspended in a supersonic impinging jet flow. Understanding the dynamics of individual particles in such flows is important for the design of particle impactors for drug delivery or cold gas dynamic spray processing. The underexpanded jet flow is produced via a converging nozzle, and micron-sized particles (d_p = 110 μm) are introduced into the gas flow. The supersonic jet impinges onto a flat surface, and the particle impact velocity and particle impact angle are studied for a range of flow conditions and impingement distances. The imaging system consists of an ultra-high-speed digital camera (Shimadzu HPV-1) capable of recording rates of up to 1 Mfps. Astigmatism particle tracking velocimetry (APTV) is used to measure the 3D particle position (Cierpka et al., Meas Sci Technol 21(045401):13, 2010) by coding the particle depth location in the 2D images by adding a cylindrical lens to the high-speed imaging system. Based on the reconstructed 3D particle positions, the particle trajectories are obtained via a higher-order tracking scheme that takes advantage of the high temporal resolution to increase robustness and accuracy of the measurement. It is shown that the particle velocity and impingement angle are affected by the gas flow in a manner depending on the nozzle pressure ratio and stand-off distance, where higher pressure ratios and stand-off distances lead to higher impact velocities and larger impact angles.
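    The depth coding that APTV relies on can be sketched as inverting a calibration curve that maps particle depth to the difference between the astigmatic particle image's axis lengths. The linear calibration below is a hypothetical stand-in for a measured calibration scan:

```python
import numpy as np

# Hypothetical calibration scan: axis-length difference (a_x - a_y) of the
# astigmatic particle image versus known depth, assumed monotonic over
# the measurement range.
z_cal = np.linspace(-50.0, 50.0, 11)   # depth, micrometres
delta_cal = 0.04 * z_cal               # calibrated (a_x - a_y), arbitrary units

def depth_from_axes(ax_len, ay_len):
    """Invert the calibration curve to recover the particle depth from
    the ellipticity the cylindrical lens imprints on the image."""
    return float(np.interp(ax_len - ay_len, delta_cal, z_cal))

z = depth_from_axes(1.4, 1.0)   # an axis difference of 0.4 maps to z = 10 um here
```

    A real calibration is generally nonlinear, so a polynomial or lookup-table fit over the scan data would replace the linear model used in this sketch.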

  18. Readily Accessible Multiplane Microscopy: 3D Tracking the HIV-1 Genome in Living Cells.

    PubMed

    Itano, Michelle S; Bleck, Marina; Johnson, Daniel S; Simon, Sanford M

    2016-02-01

    Human immunodeficiency virus (HIV)-1 infection and the associated disease AIDS are a major cause of human death worldwide with no vaccine or cure available. The trafficking of HIV-1 RNAs from sites of synthesis in the nucleus, through the cytoplasm, to sites of assembly at the plasma membrane are critical steps in HIV-1 viral replication, but are not well characterized. Here we present a broadly accessible microscopy method that captures multiple focal planes simultaneously, which allows us to image the trafficking of HIV-1 genomic RNAs with high precision. This method utilizes a customization of a commercial multichannel emission splitter that enables high-resolution 3D imaging with single-macromolecule sensitivity. We show with high temporal and spatial resolution that HIV-1 genomic RNAs are most mobile in the cytosol, and undergo confined mobility at sites along the nuclear envelope and in the nucleus and nucleolus. These provide important insights regarding the mechanism by which the HIV-1 RNA genome is transported to the sites of assembly of nascent virions.

  19. Motion-sensitive 3-D optical coherence microscope operating at 1300 nm for the visualization of early frog development

    NASA Astrophysics Data System (ADS)

    Hoeling, Barbara M.; Feldman, Stephanie S.; Strenge, Daniel T.; Bernard, Aaron; Hogan, Emily R.; Petersen, Daniel C.; Fraser, Scott E.; Kee, Yun; Tyszka, J. Michael; Haskell, Richard C.

    2007-02-01

    We present 3-dimensional volume-rendered in vivo images of developing embryos of the African clawed frog Xenopus laevis taken with our new en-face-scanning, focus-tracking OCM system at 1300 nm wavelength. Compared to our older instrument which operates at 850 nm, we measure a decrease in the attenuation coefficient by 33%, leading to a substantial improvement in depth penetration. Both instruments have motion-sensitivity capability. By evaluating the fast Fourier transform of the fringe signal, we can produce simultaneously images displaying the fringe amplitude of the backscattered light and images showing the random Brownian motion of the scatterers. We present time-lapse movies of frog gastrulation, an early event during vertebrate embryonic development in which cell movements result in the formation of three distinct layers that later give rise to the major organ systems. We show that the motion-sensitive images reveal features of the different tissue types that are not discernible in the fringe amplitude images. In particular, we observe strong diffusive motion in the vegetal (bottom) part of the frog embryo which we attribute to the Brownian motion of the yolk platelets in the endoderm.
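    The fringe-amplitude evaluation can be sketched as reading the magnitude of the fringe signal's FFT at the carrier frequency; motion sensitivity then comes from the spectral broadening of this peak, which is not shown here. The sampling parameters below are illustrative, not the instrument's:

```python
import numpy as np

fs = 10_240.0    # sampling rate, Hz (illustrative)
f0 = 1_000.0     # fringe carrier frequency, Hz (illustrative)
t = np.arange(1024) / fs
fringe = 0.8 * np.cos(2.0 * np.pi * f0 * t)   # noise-free fringe signal

# Fringe amplitude = FFT magnitude at the carrier, normalised so a
# cosine of amplitude A reads back as A.
spectrum = np.abs(np.fft.rfft(fringe)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
amplitude = spectrum[np.argmin(np.abs(freqs - f0))]
```

    For a motion-sensitive image one would compare the energy concentrated at the carrier with the energy spread into the sidebands, since Brownian motion of the scatterers broadens the carrier peak.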

  20. Improved tracking by decoupling camera and target motion

    NASA Astrophysics Data System (ADS)

    Lankton, Shawn; Tannenbaum, Allen

    2008-02-01

    Video tracking is widely used for surveillance, security, and defense purposes. When the camera is not fixed, whether because it pans and tilts or because it is mounted on a moving platform, tracking becomes more difficult: camera motion must be taken into account, and objects that come and go from the field of view should be continuously and uniquely tracked. We propose a tracking system that meets these needs by using a frame registration technique to estimate camera motion. This estimate is then used as the input control signal to a Kalman filter, which estimates the target's motion model based on measurements from a mean-shift localization scheme. Thus we decouple the camera and object motion and recast the problem as a principled control theory solution. Our experiments show that, using a controller built on these principles, we are able to track multiple objects in sequences with moving cameras. Furthermore, the techniques are computationally efficient and accomplish these results in real time. Of specific importance is that when objects are lost off-frame they can still be uniquely identified and reacquired when they return to the field of view.
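    The decoupling idea, feeding the frame-registration estimate of camera motion into a Kalman filter as the control input, can be sketched as follows. The matrices and the static-camera demo are illustrative assumptions, not the authors' parameterization:

```python
import numpy as np

def kf_step(x, P, z, u, F, B, H, Q, R):
    """One Kalman cycle: the camera-motion estimate u enters as the
    control input, so the filter state models target motion only."""
    x = F @ x + B @ u                  # predict, compensating camera motion
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)            # correct with the mean-shift measurement
    P = (np.eye(x.size) - K @ H) @ P
    return x, P

# Toy run: static camera (u = 0), constant mean-shift measurement at (5, 5).
F = H = np.eye(2)
B = -np.eye(2)      # a camera shift moves image coordinates the opposite way
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(2)
x, P = np.zeros(2), np.eye(2)
for _ in range(20):
    x, P = kf_step(x, P, np.array([5.0, 5.0]), np.zeros(2), F, B, H, Q, R)
```

    With a panning camera, u would carry the per-frame shift from frame registration, and the filter state would remain an estimate of the object's own motion rather than the combined camera-plus-object motion.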

  1. Uncertainty preserving patch-based online modeling for 3D model acquisition and integration from passive motion imagery

    NASA Astrophysics Data System (ADS)

    Tang, Hao; Chang, Peng; Molina, Edgardo; Zhu, Zhigang

    2012-06-01

    In both military and civilian applications, abundant data from diverse sources captured on airborne platforms are often available for a region attracting interest. Since the data often includes motion imagery streams collected from multiple platforms flying at different altitudes, with sensors of different fields of view (FOVs), resolutions, frame rates and spectral bands, it is imperative that a cohesive site model encompassing all the information can be quickly built and presented to the analysts. In this paper, we propose to develop an Uncertainty Preserving Patch-based Online Modeling System (UPPOMS) leading towards the automatic creation and updating of a cohesive, geo-registered, uncertainty-preserving, efficient 3D site terrain model from passive imagery with varying fields of view and phenomenologies. The proposed UPPOMS has the following technical thrusts that differentiate our approach from others: (1) An uncertainty-preserving, patch-based 3D model is generated, which enables the integration of images captured with a mixture of NFOV and WFOV and/or visible and infrared motion imagery sensors. (2) Patch-based stereo matching and multi-view 3D integration are utilized, which are suitable for scenes with many low texture regions, particularly in mid-wave infrared images. (3) In contrast to the conventional volumetric algorithms, whose computational and storage costs grow exponentially with the amount of input data and the scale of the scene, the proposed UPPOMS system employs an online algorithmic pipeline and scales well to large amounts of input data. Experimental results and discussions of future work will be provided.

  2. Quantification of Ground Motion Reductions by Fault Zone Plasticity with 3D Spontaneous Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Roten, D.; Olsen, K. B.; Cui, Y.; Day, S. M.

    2015-12-01

    We explore the effects of fault zone nonlinearity on peak ground velocities (PGVs) by simulating a suite of surface-rupturing earthquakes in a visco-plastic medium. Our simulations, performed with the AWP-ODC 3D finite difference code, cover magnitudes from 6.5 to 8.0, with several realizations of the stochastic stress drop for a given magnitude. We test three different models of rock strength, with friction angles and cohesions based on criteria which are frequently applied to fractured rock masses in civil engineering and mining. We use a minimum shear-wave velocity of 500 m/s and a maximum frequency of 1 Hz. In rupture scenarios with average stress drop (~3.5 MPa), plastic yielding reduces near-fault PGVs by 15 to 30% in pre-fractured, low-strength rock, but less than 1% in massive, high quality rock. These reductions are almost insensitive to the scenario earthquake magnitude. In the case of high stress drop (~7 MPa), however, plasticity reduces near-fault PGVs by 38 to 45% in rocks of low strength and by 5 to 15% in rocks of high strength. Because plasticity reduces slip rates and static slip near the surface, these effects can partially be captured by defining a shallow velocity-strengthening layer. We also perform a dynamic nonlinear simulation of a high stress drop M 7.8 earthquake rupturing the southern San Andreas fault along 250 km from Indio to Lake Hughes. With respect to the viscoelastic solution, nonlinearity in the fault damage zone and in near-surface deposits would reduce long-period (> 1 s) peak ground velocities in the Los Angeles basin by 15-50%, depending on the strength of crustal rocks and shallow sediments. These simulation results suggest that nonlinear effects may be relevant even at long periods, especially for earthquakes with high stress drop.

  3. 3-D microvessel-mimicking ultrasound phantoms produced with a scanning motion system.

    PubMed

    Gessner, Ryan C; Kothadia, Roshni; Feingold, Steven; Dayton, Paul A

    2011-05-01

    Ultrasound techniques are currently being developed that can assess the vascularization of tissue as a marker for therapeutic response. Some of these ultrasound imaging techniques seek to extract quantitative features about vessel networks, whereas high-frequency imaging also allows individual vessels to be resolved. The development of these new techniques, and subsequent imaging analysis strategies, necessitates an understanding of their sensitivities to vessel and vessel network structural abnormalities. Constructing in-vitro flow phantoms for this purpose can be prohibitively challenging, because simulating precise flow environments with nontrivial structures is often impossible using conventional methods of construction for flow phantoms. Presented in this manuscript is a method to create predefined structures with <10 μm precision using a three-axis motion system. The application of this technique is demonstrated for the creation of individual vessel and vessel networks, which can easily be made to simulate the development of structural abnormalities typical of diseased vasculature in vivo. In addition, beyond facilitating the creation of phantoms that would otherwise be very challenging to construct, the method presented herein enables one to precisely simulate very slow blood flow and respiration artifacts, and to measure imaging resolution.

  4. Capturing the 3D Motion of an Infalling Galaxy via Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Su, Yuanyuan; Kraft, Ralph P.; Nulsen, Paul E. J.; Roediger, Elke; Forman, William R.; Churazov, Eugene; Randall, Scott W.; Jones, Christine; Machacek, Marie E.

    2017-01-01

    The Fornax Cluster is the nearest (≤ 20 Mpc) galaxy cluster in the southern sky. NGC 1404 is a bright elliptical galaxy falling through the intracluster medium (ICM) of the Fornax Cluster. The sharp leading edge of NGC 1404 forms a classical “cold front” that separates 0.6 keV dense interstellar medium and 1.5 keV diffuse ICM. We measure the angular pressure variation along the cold front using a very deep (670 ks) Chandra X-ray observation. We take the classical approach of using stagnation pressure to determine a substructure's speed to the next level: we derive not only a general speed but also directionality, which yields the complete velocity field as well as the distance of the substructure directly from the pressure distribution. We find a hydrodynamic model consistent with the pressure jump along NGC 1404's atmosphere measured in multiple directions. The best-fit model gives an inclination of 33° and a Mach number of 1.3 for the infall of NGC 1404, in agreement with complementary measurements of the motion of NGC 1404. Our study demonstrates the successful treatment of a highly ionized ICM as ideal fluid flow, in support of the hypothesis that magnetic pressure is not dynamically important over most of the virial region of galaxy clusters.
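    The stagnation-pressure method referred to above relates the pressure ratio across the front to the Mach number: the Bernoulli relation below M = 1 and the Rayleigh pitot formula above it. A minimal sketch for a γ = 5/3 plasma; inverting the (monotonic) ratio numerically recovers M:

```python
GAMMA = 5.0 / 3.0  # monatomic ideal gas, appropriate for the ICM

def pressure_ratio(M, g=GAMMA):
    """Stagnation-to-free-stream pressure ratio at Mach number M
    (Bernoulli branch for M <= 1, Rayleigh pitot formula for M > 1)."""
    if M <= 1.0:
        return (1.0 + 0.5 * (g - 1.0) * M * M) ** (g / (g - 1.0))
    a = (0.5 * (g + 1.0) * M * M) ** (g / (g - 1.0))
    b = ((g + 1.0) / (2.0 * g * M * M - (g - 1.0))) ** (1.0 / (g - 1.0))
    return a * b

def mach_from_ratio(ratio, lo=0.01, hi=5.0):
    """Invert the (monotonic) pressure ratio for M by bisection."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pressure_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    For γ = 5/3 a measured stagnation-to-free-stream ratio of about 3 corresponds to M ≈ 1.3; the directionality analysis in the paper goes further by fitting the angular run of pressure along the front, not just the single stagnation value.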

  5. Nonlinear, nonlaminar-3D computation of electron motion through the output cavity of a klystron

    NASA Technical Reports Server (NTRS)

    Albers, L. U.; Kosmahl, H. G.

    1971-01-01

    The equations of motion used in the computation are discussed along with the space charge fields and the integration process. The following assumptions were used as a basis for the computation: (1) The beam is divided into N axisymmetric discs of equal charge and each disc into R rings of equal charge. (2) The velocity of each disc, its phase with respect to the gap voltage, and its radius at a specified position in the drift tunnel prior to the interaction gap is known from available large signal one dimensional programs. (3) The fringing rf fields are computed from exact analytical expressions derived from the wave equation assuming a known field shape between the tunnel tips at a radius a. (4) The beam is focused by an axisymmetric magnetic field. Both components of B, that is B sub z and B sub r, are taken into account. (5) Since this integration does not start at the cathode but rather further down the stream prior to entering the output cavity it is assumed that each electron moved along a laminar path from the cathode to the start of integration.

  6. 3D optical imagery for motion compensation in a limb ultrasound system

    NASA Astrophysics Data System (ADS)

    Ranger, Bryan J.; Feigin, Micha; Zhang, Xiang; Mireault, Al; Raskar, Ramesh; Herr, Hugh M.; Anthony, Brian W.

    2016-04-01

    Conventional processes for prosthetic socket fabrication are heavily subjective, often resulting in an interface to the human body that is neither comfortable nor completely functional. With nearly 100% of amputees reporting that they experience discomfort with the wearing of their prosthetic limb, designing an effective interface to the body can significantly affect quality of life and future health outcomes. Active research in medical imaging and biomechanical tissue modeling of residual limbs has led to significant advances in computer aided prosthetic socket design, demonstrating an interest in moving toward more quantifiable processes that are still patient-specific. In our work, medical ultrasonography is being pursued to acquire data that may quantify and improve the design process and fabrication of prosthetic sockets while greatly reducing cost compared to an MRI-based framework. This paper presents a prototype limb imaging system that uses a medical ultrasound probe, mounted to a mechanical positioning system and submerged in a water bath. The limb imaging is combined with three-dimensional optical imaging for motion compensation. Images are collected circumferentially around the limb and combined into cross-sectional axial image slices, resulting in a compound image that shows tissue distributions and anatomical boundaries similar to magnetic resonance imaging. In this paper we provide a progress update on our system development, along with preliminary results as we move toward full volumetric imaging of residual limbs for prosthetic socket design. This demonstrates a novel multi-modal approach to residual limb imaging.

  7. Aging affects postural tracking of complex visual motion cues

    PubMed Central

    Sotirakis, H.; Kyvelidou, A.; Mademli, L.; Stergiou, N.

    2017-01-01

    Postural tracking of visual motion cues improves perception–action coupling in aging, yet the nature of the visual cues to be tracked is critical for the efficacy of such a paradigm. We investigated how well healthy older (72.45 ± 4.72 years) and young (22.98 ± 2.9 years) adults can follow with their gaze and posture horizontally moving visual target cues of different degrees of complexity. Participants tracked continuously for 120 s the motion of a visual target (dot) that oscillated in three different patterns: a simple periodic (simulated by a sine), a more complex (simulated by the Lorenz attractor that is deterministic displaying mathematical chaos) and an ultra-complex random (simulated by surrogating the Lorenz attractor) pattern. The degree of coupling between performance (posture and gaze) and the target motion was quantified by the spectral coherence, gain, phase and cross-approximate entropy (cross-ApEn) between signals. Sway–target coherence decreased as a function of target complexity and was lower for the older compared to the young participants when tracking the chaotic target. On the other hand, gaze–target coherence was not affected by either target complexity or age. Yet, a lower cross-ApEn value when tracking the chaotic stimulus motion revealed a more synchronous gaze–target relationship for both age groups. Results suggest limitations in online visuo-motor processing of complex motion cues and a less efficient exploitation of the body sway dynamics with age. Complex visual motion cues may provide a suitable training stimulus to improve visuo-motor integration and restore sway variability in older adults. PMID:27126061
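Sway–target coupling of the kind quantified here can be sketched with SciPy's spectral coherence estimator. The 0.25 Hz target, the lag, and the noise level below are illustrative, not the study's actual stimuli:

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                                   # Hz; illustrative sampling rate
t = np.arange(0, 120, 1 / fs)                # 120 s trial, as in the study
target = np.sin(2 * np.pi * 0.25 * t)        # simple periodic target motion
rng = np.random.default_rng(0)
# a lagged, noisy "sway" response to the target
sway = (0.8 * np.sin(2 * np.pi * 0.25 * t - 0.6)
        + 0.3 * rng.standard_normal(t.size))

# Welch-averaged magnitude-squared coherence; nperseg chosen so that
# 0.25 Hz falls exactly on a frequency bin
f, Cxy = coherence(sway, target, fs=fs, nperseg=400)
tracking_coherence = Cxy[np.argmin(np.abs(f - 0.25))]
```

A participant who tracks the target well yields a coherence near 1 at the drive frequency; poorer tracking (as reported for older adults with complex targets) lowers it.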

  8. Effect of 3D physiological loading and motion on elastohydrodynamic lubrication of metal-on-metal total hip replacements.

    PubMed

    Gao, Leiming; Wang, Fengcai; Yang, Peiran; Jin, Zhongmin

    2009-07-01

    An elastohydrodynamic lubrication (EHL) simulation of a metal-on-metal (MOM) total hip implant was presented, considering both steady-state and transient physiological loading and motion during a gait cycle in all three directions. The governing equations were solved numerically by the multi-grid method and fast Fourier transform in spherical coordinates, and full numerical solutions were presented, including the pressure and film thickness distributions. Despite small variations in the magnitude of the 3D resultant load, the horizontal anterior-posterior (AP) and medial-lateral (ML) load components were found to translate the contact area substantially in the corresponding direction and consequently to result in significant squeeze-film actions. For a cup positioned anatomically at 45 degrees, the variation of the resultant load was shown to be unlikely to cause edge contact. The contact area was found within the cup dimensions of 70-130 degrees and 90-150 degrees in the AP and ML directions respectively, even under the largest translations. Under walking conditions, the horizontal load components had a significant impact on the lubrication film due to the squeeze-film effect. The time-dependent film thickness was increased by the horizontal translation and decreased during the reversal of this translation caused by the multi-directional AP load during walking. The minimum film thickness of 12-20 nm was found at 0.4 s, around the location of (95, 125) degrees. During the whole walking cycle both the average and centre film thickness were found to increase markedly, to a range of 40-65 nm, compared with the range of 25-55 nm under a one-load (vertical), one-motion (flexion-extension) condition, which suggested that lubrication in the current MOM hip implant was improved under 3D physiological loading and motion. This study suggested that the lubrication performance, especially the film thickness distribution, should vary greatly under different operating conditions and the time and

  9. Quaternion correlation for tracking crystal motions

    NASA Astrophysics Data System (ADS)

    Shi, Qiwei; Latourte, Félix; Hild, François; Roux, Stéphane

    2016-09-01

    During in situ mechanical tests performed on polycrystalline materials in a scanning electron microscope, crystal orientation maps may be recorded at different stages of deformation from electron backscattered diffraction (EBSD). The present study introduces a novel correlation technique that exploits the crystallographic orientation field as a surface pattern to measure crystal motions. Introducing a quaternion-based formalism proves very convenient for handling crystal symmetry and extracting orientations. Spatial regularization is provided by penalizing deviations of the displacement field from the solution to a homogeneous linear elastic problem. This procedure allows the large-scale features of the displacement field, mostly from grain boundaries, to be captured, and a fair interpolation of the displacement to be obtained within the grains. From these data, crystal rotations can be estimated very accurately. Both synthetic and real experimental cases are considered to illustrate the method.
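A sketch of why quaternions are convenient for crystal symmetry: the misorientation between two orientations is the smallest rotation angle over all symmetry-equivalent descriptions, which reduces to a maximum of quaternion scalar parts. The helper names and the cubic (24-operator) symmetry group below are illustrative; the paper's correlation algorithm itself is more involved:

```python
import numpy as np

def quat(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation about axis by angle (rad)."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])

def qmult(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def cubic_symmetry_quats():
    """The 24 proper rotations of the cube as quaternions."""
    ops = [quat([0, 0, 1], 0.0)]
    for ax in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
        ops += [quat(ax, a) for a in (np.pi / 2, np.pi, 3 * np.pi / 2)]
    for ax in ([1, 1, 1], [1, 1, -1], [1, -1, 1], [-1, 1, 1]):
        ops += [quat(ax, a) for a in (2 * np.pi / 3, 4 * np.pi / 3)]
    for ax in ([1, 1, 0], [1, -1, 0], [1, 0, 1],
               [1, 0, -1], [0, 1, 1], [0, 1, -1]):
        ops.append(quat(ax, np.pi))
    return ops

def misorientation(qa, qb):
    """Smallest rotation angle (rad) between orientations under cubic symmetry."""
    qa_inv = np.array([qa[0], -qa[1], -qa[2], -qa[3]])
    d = qmult(qb, qa_inv)
    best = max(abs(qmult(s, d)[0]) for s in cubic_symmetry_quats())
    return 2.0 * np.arccos(min(best, 1.0))
```

A 90° rotation about a cube axis is symmetry-equivalent to the identity, so its misorientation is zero, whereas a 10° rotation keeps its full 10° misorientation.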

  10. Local intensity feature tracking and motion modeling for respiratory signal extraction in cone beam CT projections.

    PubMed

    Dhou, Salam; Motai, Yuichi; Hugo, Geoffrey D

    2013-02-01

    Accounting for respiration motion during imaging can help improve targeting precision in radiation therapy. We propose local intensity feature tracking (LIFT), a novel markerless breath-phase sorting method for cone beam computed tomography (CBCT) scan images. The contributions of this study are twofold. First, LIFT extracts the respiratory signal from CBCT projections of the thorax relying only on tissue feature points that exhibit respiratory motion. Second, the extracted respiratory signal is shown to correlate with standard respiration signals. LIFT extracts feature points in the first CBCT projection of a sequence and tracks those points in consecutive projections, forming trajectories. Clustering is applied to select trajectories showing an oscillating behavior similar to breathing motion. Those "breathing" trajectories are used in a 3-D reconstruction approach to recover the 3-D motion of the lung, which represents the respiratory signal. Experiments were conducted on datasets exhibiting regular and irregular breathing patterns. Results showed that the LIFT-based respiratory signal correlates with the diaphragm position-based signal with an average phase shift of 1.68 projections, as well as with the internal marker-based signal with an average phase shift of 1.78 projections. LIFT was able to detect the respiratory signal in all projections of all datasets.
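The trajectory-selection step, picking out tracks that oscillate like breathing, can be sketched by thresholding each trajectory's dominant frequency. The band limits and function names here are assumptions for illustration, not LIFT's actual clustering:

```python
import numpy as np

def dominant_frequency(trajectory, fps):
    """Dominant nonzero frequency (Hz) of a 1-D trajectory component."""
    x = trajectory - np.mean(trajectory)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    return freqs[1 + np.argmax(spectrum[1:])]   # skip the DC bin

def select_breathing(trajectories, fps, band=(0.1, 0.5)):
    """Keep trajectories whose dominant frequency falls inside a plausible
    respiratory band (the band edges are illustrative, not from the paper)."""
    lo, hi = band
    return [tr for tr in trajectories
            if lo <= dominant_frequency(tr, fps) <= hi]
```

A slowly drifting track or a fast-jittering one falls outside the band and is rejected; an oscillation at a typical breathing rate survives.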

  11. 3D measurements of alpine skiing with an inertial sensor motion capture suit and GNSS RTK system.

    PubMed

    Supej, Matej

    2010-05-01

    To date, camcorders have been the device of choice for 3D kinematic measurement of human locomotion, in spite of their limitations. This study examines a novel system combining a GNSS RTK, which returns a reference trajectory, with a suit embedded with inertial sensors that reveals the motion of the subject's segments. The aims were: (1) to validate the system's precision and (2) to measure an entire alpine ski race and retrieve the results shortly after measuring. For that purpose, four separate experiments were performed: (1) forced pendulum, (2) walking, (3) gate positions, and (4) skiing experiments. Segment movement validity was found to depend on the frequency of motion, with high accuracy (0.8 degrees, s = 0.6 degrees) over 10 s, which equals approximately 10 slalom turns, while accuracy decreased slightly (2.1 degrees, 3.3 degrees, and 4.2 degrees for 0.5, 1, and 2 Hz oscillations, respectively) during 35 s of data collection. The motion capture suit's orientation inaccuracy was mostly due to geomagnetic secular variation. The system exhibited high validity regarding the reference trajectory (0.008 m, s = 0.0044 m) throughout an entire ski race. The system is capable of measuring an entire ski course with less manpower and therefore lower cost compared with camcorder-based techniques.

  12. Study of human body: Kinematics and kinetics of a martial arts (Silat) performers using 3D-motion capture

    NASA Astrophysics Data System (ADS)

    Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi

    2015-04-01

    Interest in the study of human kinematics goes back very far in human history, driven by curiosity or by the need to understand the complexity of human body motion, and advances in computing technology have made it possible to obtain new and accurate information about human movement. Martial arts (silat) was chosen and multiple types of movement were studied. This project used 3D motion capture to characterize and measure the motions performed by martial arts (silat) practitioners. The cameras detect infrared reflections from markers placed around the performer's body (24 markers in total), which appear as dots in the computer software. The detected markers were analyzed using a kinematic-kinetic approach with time as the reference, and graphs of position, velocity, and acceleration at time t (seconds) were plotted for each marker. From this information, further parameters were determined mathematically, such as work done, momentum, and the centre of mass of the body. These data can be used to develop more effective movements in martial arts, as a contribution to practitioners of the art. Future work could extend this project, for example to the analysis of martial arts competitions.
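The marker analysis described, position, velocity, acceleration, and centre of mass from sampled 3D coordinates, reduces to finite differences and a mass-weighted average. A minimal NumPy sketch (the array layout and the segment masses are illustrative; real studies use anthropometric tables):

```python
import numpy as np

def kinematics(positions, dt):
    """Velocity and acceleration of one marker by central differences.

    positions: (N, 3) array of 3-D coordinates sampled every dt seconds.
    """
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return velocity, acceleration

def centre_of_mass(marker_positions, segment_masses):
    """Mass-weighted centre of the tracked segments at every frame.

    marker_positions: (S, N, 3) array, one (N, 3) track per segment marker.
    segment_masses: length-S masses assigned to the segments.
    """
    m = np.asarray(segment_masses, float)
    return np.einsum('snk,s->nk', marker_positions, m) / m.sum()
```

From the velocity and mass one can then form momentum (m·v) and work-related quantities along the same time base.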

  13. Markerless rat head motion tracking using structured light for brain PET imaging of unrestrained awake small animals

    NASA Astrophysics Data System (ADS)

    Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen

    2017-03-01

    Preclinical positron emission tomography (PET) imaging in small animals is generally performed under anesthesia to immobilize the animal during scanning. More recently, for rat brain PET studies, methods to perform scans of unrestrained awake rats are being developed in order to avoid the unwanted effects of anesthesia on the brain response. Here, we investigate the use of a projected-structure stereo camera to track the motion of the rat head during the PET scan. The motion information is then used to correct the PET data. The stereo camera calculates a 3D point cloud representation of the scene and the tracking is performed by point cloud matching using the iterative closest point algorithm. The main advantage of the proposed motion tracking is that no intervention, e.g. for marker attachment, is needed. A manually moved microDerenzo phantom experiment and 3 awake rat [18F]FDG experiments were performed to evaluate the proposed tracking method. The tracking accuracy was 0.33 mm rms. After motion-corrected image reconstruction, the microDerenzo phantom was recovered, albeit with some loss of resolution. The reconstructed FWHM of the 2.5 and 3 mm rods increased by 0.94 and 0.51 mm respectively in comparison with the motion-free case. In the rat experiments, the average tracking success rate was 64.7%. The correlation of relative brain regional [18F]FDG uptake between the anesthesia and awake scan reconstructions increased from an average of 0.291 (not significant) before correction to 0.909 (p < 0.0001) after motion correction. Markerless motion tracking using structured light can be successfully used for tracking of the rat head for motion correction in awake rat PET scans.
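At the core of the iterative closest point matching used here is a closed-form rigid alignment between matched point sets. A sketch of that SVD (Kabsch) step, with the caveat that full ICP also re-estimates the correspondences on every iteration:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i.

    src, dst: (N, 3) arrays of corresponding points.  This closed-form
    SVD solution is the inner step of iterative closest point.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Given two point clouds of the head differing by a rigid motion, this recovers the rotation and translation used to correct the PET events.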

  14. How Plates Pull Transforms Apart: 3-D Numerical Models of Oceanic Transform Fault Response to Changes in Plate Motion Direction

    NASA Astrophysics Data System (ADS)

    Morrow, T. A.; Mittelstaedt, E. L.; Olive, J. A. L.

    2015-12-01

    Observations along oceanic fracture zones suggest that some mid-ocean ridge transform faults (TFs) previously split into multiple strike-slip segments separated by short (<~50 km) intra-transform spreading centers and then reunited to a single TF trace. This history of segmentation appears to correspond with changes in plate motion direction. Despite the clear evidence of TF segmentation, the processes governing its development and evolution are not well characterized. Here we use a 3-D, finite-difference / marker-in-cell technique to model the evolution of localized strain at a TF subjected to a sudden change in plate motion direction. We simulate the oceanic lithosphere and underlying asthenosphere at a ridge-transform-ridge setting using a visco-elastic-plastic rheology with a history-dependent plastic weakening law and a temperature- and stress-dependent mantle viscosity. To simulate the development of topography, a low density, low viscosity 'sticky air' layer is present above the oceanic lithosphere. The initial thermal gradient follows a half-space cooling solution with an offset across the TF. We impose an enhanced thermal diffusivity in the uppermost 6 km of lithosphere to simulate the effects of hydrothermal circulation. An initial weak seed in the lithosphere helps localize shear deformation between the two offset ridge axes to form a TF. For each model case, the simulation is run initially with TF-parallel plate motion until the thermal structure reaches a steady state. The direction of plate motion is then rotated either instantaneously or over a specified time period, placing the TF in a state of trans-tension. Model runs continue until the system reaches a new steady state. Parameters varied here include: initial TF length, spreading rate, and the rotation rate and magnitude of spreading obliquity. We compare our model predictions to structural observations at existing TFs and records of TF segmentation preserved in oceanic fracture zones.
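The initial thermal state described, a half-space cooling solution, is a one-line formula: T(z, t) = Ts + (Tm − Ts)·erf(z / (2√(κt))). A sketch with illustrative parameter values (not the study's actual setup):

```python
import numpy as np
from scipy.special import erf

def halfspace_temperature(depth_m, age_s, t_surface=273.0, t_mantle=1623.0,
                          kappa=1e-6):
    """Half-space cooling profile T(z, t) for oceanic lithosphere.

    kappa is the thermal diffusivity (m^2/s); the surface and mantle
    temperatures here are illustrative defaults.
    """
    return t_surface + (t_mantle - t_surface) * erf(
        depth_m / (2.0 * np.sqrt(kappa * age_s)))
```

Across a transform fault the plate ages differ, so evaluating this profile with an age offset reproduces the thermal contrast the model imposes at the fault.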

  15. Shape and motion reconstruction from 3D-to-1D orthographically projected data via object-image relations.

    PubMed

    Ferrara, Matthew; Arnold, Gregory; Stuff, Mark

    2009-10-01

    This paper describes an invariant-based shape- and motion reconstruction algorithm for 3D-to-1D orthographically projected range data taken from unknown viewpoints. The algorithm exploits the object-image relation that arises in echo-based range data and represents a simplification and unification of previous work in the literature. Unlike one proposed approach, this method does not require uniqueness constraints, which makes its algorithmic form independent of the translation removal process (centroid removal, range alignment, etc.). The new algorithm, which simultaneously incorporates every projection and does not use an initialization in the optimization process, requires fewer calculations and is more straightforward than the previous approach. Additionally, the new algorithm is shown to be the natural extension of the approach developed by Tomasi and Kanade for 3D-to-2D orthographically projected data and is applied to a realistic inverse synthetic aperture radar imaging scenario, as well as experiments with varying amounts of aperture diversity and noise.
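The Tomasi-Kanade approach this work extends factorizes centered 2-D tracks into motion and shape via a rank-3 SVD. A minimal sketch of that classic 3D-to-2D step (the metric upgrade that resolves the affine ambiguity is omitted):

```python
import numpy as np

def factorize(W):
    """Tomasi-Kanade-style rank-3 factorization of 2-D feature tracks.

    W: (2F, P) measurement matrix of P points over F frames.
    Returns motion M (2F, 3) and shape S (3, P) with W_centered ~ M @ S,
    defined only up to an affine ambiguity.
    """
    W = W - W.mean(axis=1, keepdims=True)     # remove per-frame translation
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])             # camera rows
    S = np.sqrt(s[:3])[:, None] * Vt[:3]      # 3-D shape
    return M, S
```

For noise-free orthographic projections the centered measurement matrix has rank 3, so the truncated SVD reconstructs it exactly.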

  16. A 3D-printed polymer micro-gripper with self-defined electrical tracks and thermal actuator

    NASA Astrophysics Data System (ADS)

    Alblalaihid, Khalid; Overton, James; Lawes, Simon; Kinnell, Peter

    2017-04-01

    This paper presents a simple fabrication process that allows isolated metal tracks to be easily defined on the surface of 3D-printed micro-scale polymer components. The process makes use of a standard low-cost conformal sputter coating system to quickly deposit thin-film metal layers onto the surface of 3D-printed polymer micro parts. The key novelty lies in the inclusion of inbuilt masking features on the surface of the polymer parts, which ensure that the conformal metal layer can be effectively broken to create electrically isolated metal features. The presented process is extremely flexible, and it is envisaged that it may be applied to a wide range of sensor and actuator applications. To demonstrate the process, a polymer micro-scale gripper with an inbuilt thermal actuator is designed and fabricated. In this work the design methodology for creating the micro-gripper is presented, illustrating how the rapid and flexible manufacturing process allows fast design iterations to be performed. In addition, the compatibility of this approach with traditional design and analysis techniques, such as basic finite element simulation, is demonstrated, with simulation results in reasonable agreement with experimental performance data for the micro-gripper.

  17. 3-D or median map? Earthquake scenario ground-motion maps from physics-based models versus maps from ground-motion prediction equations

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2015-12-01

    There are two common ways to create a ground-motion map for a hypothetical earthquake: using ground motion prediction equations (by far the more common of the two) and using 3-D physics-based modeling. The former is very familiar to engineers, the latter much less so, and the difference can present a problem because engineers tend to trust the familiar and distrust novelty. Maps for essentially the same hypothetical earthquake using the two different methods can look very different, while appearing to present the same information. Using one or the other can lead an engineer or disaster planner to very different estimates of damage and risk. The reasons have to do with depiction of variability, spatial correlation of shaking, the skewed distribution of real-world shaking, and the upward-curving relationship between shaking and damage. The scientists who develop the two kinds of map tend to specialize in one or the other and seem to defend their turf, which can aggravate the problem of clearly communicating with engineers. The USGS Science Application for Risk Reduction's (SAFRR) HayWired scenario has addressed the challenge of explaining to engineers the differences between the two maps, and why, in a disaster planning scenario, one might want to use the less-familiar 3-D map.

  18. A quantitative study of 3D-scanning frequency and Δd of tracking points on the tooth surface

    PubMed Central

    Li, Hong; Lyu, Peijun; Sun, Yuchun; Wang, Yong; Liang, Xiaoyue

    2015-01-01

    Micro-movement of human jaws in the resting state might influence the accuracy of direct three-dimensional (3D) measurement. Providing a reference for the sampling frequency settings of intraoral scanning systems, to overcome this influence, is important. In this study, we measured micro-movement, or change in distance (∆d), as the change in position of a single tracking point from one sampling time point to another in five human subjects. ∆d of tracking points on incisors at 7 sampling frequencies was judged against the clinical accuracy requirement to select proper sampling frequency settings. A curve was then fitted to the median ∆d as a function of the sampling frequency f to predict the trend of ∆d with increasing f. The differences in ∆d among the subjects, and between upper and lower incisor feature points of the same subject, were analyzed by a non-parametric test (α = 0.05). Significant differences in the incisor feature points were noted among subjects and between upper and lower jaws of the same subject (P < 0.01). Overall, ∆d decreased with increasing frequency. When the frequency was 60 Hz, ∆d nearly reached the clinical accuracy requirement. Frequencies higher than 60 Hz did not significantly decrease Δd further. PMID:26400112

  19. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, namely fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with only 1% false detections.
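The circle Hough transform that the algorithm builds on can be sketched in a few lines: each edge pixel votes for candidate ring centres a fixed radius away, and true centres emerge as accumulator maxima. This single-radius, fixed-angular-sampling version is a toy illustration, far simpler than the detector described:

```python
import numpy as np

def hough_circle(edge_points, radius, shape):
    """Minimal circle Hough transform accumulator for one known radius.

    edge_points: (N, 2) array of (row, col) edge coordinates.  Each point
    votes along a circle of the given radius around itself; ring centres
    show up as accumulator peaks.
    """
    acc = np.zeros(shape, dtype=np.int32)
    angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    offsets = np.stack([radius * np.cos(angles),
                        radius * np.sin(angles)], axis=1)
    for p in edge_points:
        votes = np.rint(p + offsets).astype(int)
        ok = ((votes[:, 0] >= 0) & (votes[:, 0] < shape[0]) &
              (votes[:, 1] >= 0) & (votes[:, 1] < shape[1]))
        np.add.at(acc, (votes[ok, 0], votes[ok, 1]), 1)
    return acc
```

For a full detector the accumulator gains a radius dimension, and the paper's contribution lies in robustly classifying peaks when rings occlude and overlap.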

  20. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, namely fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with only 1% false detections. PMID:26329642

  1. ShipMo3D Version 1.0 User Manual for Simulating Time Domain Motions of a Freely Maneuvering Ship in a Seaway

    DTIC Science & Technology

    2007-10-01

    Kennedy

    This report serves as a user manual for simulating ship motions in waves and in calm water using ShipMo3D Version 1.0. ShipMo3D is ... with associated user applications for predicting ship motions in calm water and in waves. Motion predictions are available in both the frequency

  2. Motion-compensated speckle tracking via particle filtering

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu

    2015-07-01

    Recently, an improved motion compensation method that uses the sum of absolute differences (SAD) has been applied to frame persistence in conventional ultrasonic imaging because of its high accuracy and relative simplicity of implementation. However, high time consumption remains a significant drawback of this space-domain method. To find a faster motion compensation method and to verify whether conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial density of particles, the least number of iterations, and the optimal transition radius of the second iteration are analyzed from simulation results in order to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking the micromotion of speckle throughout the region of interest (ROI) superposed with global motion. The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, and further improvement is necessary.
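The particle-filtering idea, candidate displacements weighted by an SAD-based likelihood, can be sketched for a single patch and frame pair. The parameter values and the untempered likelihood below are illustrative; the paper's scheme, with its tuned particle density and second-iteration transition radius, is more elaborate:

```python
import numpy as np

def particle_filter_shift(prev, curr, template_slice, n=2000, spread=2.0, seed=0):
    """One sample-weight-estimate cycle of a particle filter over integer
    (row, col) displacements of a speckle patch between two frames.

    Weights come from an exponentiated negative sum of absolute differences
    (a sharp, untempered likelihood; a practical tracker would temper it,
    resample, and iterate).
    """
    rng = np.random.default_rng(seed)
    template = prev[template_slice]
    r0, c0 = template_slice[0].start, template_slice[1].start
    h, w = template.shape
    particles = np.rint(rng.normal(0.0, spread, size=(n, 2))).astype(int)
    log_w = np.full(n, -np.inf)
    for i, (dr, dc) in enumerate(particles):
        r, c = r0 + dr, c0 + dc
        if r < 0 or c < 0:
            continue                        # particle falls outside the frame
        patch = curr[r:r + h, c:c + w]
        if patch.shape == (h, w):
            log_w[i] = -np.abs(template - patch).sum()
    weights = np.exp(log_w - log_w.max())
    weights /= weights.sum()
    return weights @ particles              # posterior-mean displacement
```

Unlike traversal correlation, only the sampled displacements are evaluated, which is where the reported computational saving comes from.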

  3. Motion tracking in undergraduate physics laboratories with the Wii remote

    NASA Astrophysics Data System (ADS)

    Tomarken, Spencer L.; Simons, Dallas R.; Helms, Richard W.; Johns, Will E.; Schriver, Kenneth E.; Webster, Medford S.

    2012-04-01

    We report the incorporation of the Wiimote, a light-tracking remote control device, into two undergraduate-level experiments. We provide an overview of the Wiimote's basic functions and a systematic analysis of its motion tracking capabilities. We describe the Wiimote's use in measuring conservation of linear and angular momentum on an air table, and in measuring the gravitational constant with the classic Cavendish torsion pendulum. Our results show that the Wiimote is a simple and affordable way to streamline the data acquisition process and produce results that are generally superior to those obtained with conventional techniques.

  4. Marker-Free Tracking of Facet Capsule Motion using Polarization-Sensitive Optical Coherence Tomography

    PubMed Central

    Claeson, Amy A.; Yeh, Yi-Jou; Black, Adam J.; Akkin, Taner; Barocas, Victor H.

    2015-01-01

    We proposed and tested a method by which surface strains of biological tissues can be captured without the use of fiducial markers, by instead utilizing the inherent structure of the tissue. We used polarization-sensitive optical coherence tomography (PS-OCT) to obtain volumetric data through the thickness and across a partial surface of the lumbar facet capsular ligament (FCL) during three cases of static bending. Reflectivity and phase retardance were calculated from two polarization channels, and a power spectrum analysis was performed on each A-line to extract the dominant banding frequency (a measure of the degree of fiber alignment) through the maximum value of the power spectrum (maximum power). The maximum powers of all A-lines for each case were used to create 2D visualizations, which were subsequently tracked via digital image correlation. In-plane strains were calculated from the measured 2D deformations and converted to 3D surface strains by including out-of-plane motion obtained from the PS-OCT image. In-plane strains correlated with 3D strains (R2 ≥ 0.95). Using PS-OCT for marker-free motion tracking of biological tissues is a promising new technique because it relies on the structural characteristics of the tissue to monitor displacement instead of external fiducial markers. PMID:26055969
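The per-A-line power-spectrum step is straightforward to sketch: remove the mean, take the one-sided spectrum, and read off the dominant nonzero-frequency peak and its height. The function names and test signal here are illustrative, not the authors' processing chain:

```python
import numpy as np

def banding_metrics(aline, dz=1.0):
    """Dominant banding frequency and its power for one retardance A-line.

    Strongly aligned fibrous tissue shows periodic banding of the phase
    retardance with depth; the peak of the one-sided power spectrum (DC
    excluded) gives the banding frequency, and its height serves as the
    'maximum power' alignment measure.
    """
    x = aline - aline.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=dz)
    k = 1 + np.argmax(power[1:])          # skip the DC bin
    return freqs[k], power[k]
```

Mapping the peak power over all A-lines yields the 2D visualizations that are then tracked by digital image correlation.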

  5. Vehicle tracking in wide area motion imagery from an airborne platform

    NASA Astrophysics Data System (ADS)

    van Eekeren, Adam W. M.; van Huis, Jasper R.; Eendebak, Pieter T.; Baan, Jan

    2015-10-01

    Airborne platforms, such as UAVs, with Wide Area Motion Imagery (WAMI) sensors can cover multiple square kilometers and produce large amounts of video data. Analyzing all of the data to meet information needs becomes increasingly labor-intensive for an image analyst. Furthermore, the capacity of the datalink in operational areas may be inadequate to transfer all data to the ground station. Automatic detection and tracking of people and vehicles makes it possible to send only the most relevant footage to the ground station and assists image analysts in effective data searches. In this paper, we propose a method for detecting and tracking vehicles in high-resolution WAMI images from a moving airborne platform. For the vehicle detection we use a cascaded set of classifiers, trained with an Adaboost algorithm on Haar features. This detector works on individual images and therefore does not depend on image motion stabilization. For the vehicle tracking we use a local template matching algorithm. This approach has two advantages. In the first place, it does not depend on image motion stabilization, and it counters the inaccuracy of the GPS data that is embedded in the video data. In the second place, it can find matches when the vehicle detector would miss a certain detection. This results in long tracks even when the imagery is of low frame rate. In order to minimize false detections, we also integrate height information from a 3D reconstruction that is created from the same images. By using the locations of buildings and roads, we are able to filter out false detections and increase the performance of the tracker. In this paper we show that the vehicle tracks can also be used to detect more complex events, such as traffic jams and fast-moving vehicles. This enables the image analyst to perform a faster and more effective search of the data.
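The local template matching used for tracking can be sketched as brute-force normalised cross-correlation. This toy version scans every valid placement; a real tracker restricts the search window around the predicted position and typically uses FFT acceleration:

```python
import numpy as np

def match_template(image, template):
    """Best (row, col) placement of template in image by normalised
    cross-correlation, brute force over all valid offsets.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            if denom == 0:
                continue                     # flat patch, undefined score
            score = float((p * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

Because the score is normalised, it tolerates brightness changes between frames, which helps when the platform moves and the imagery is not stabilised.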

  6. Does fluid infiltration affect the motion of sediment grains? - A 3-D numerical modelling approach using SPH

    NASA Astrophysics Data System (ADS)

    Bartzke, Gerhard; Rogers, Benedict D.; Fourtakas, Georgios; Mokos, Athanasios; Huhn, Katrin

    2016-04-01

    The processes that cause the creation of a variety of sediment morphological features, e.g. laminated beds, ripples, or dunes, are based on the initial motion of individual sediment grains. However, with experimental techniques it is difficult to measure the flow characteristics, i.e., the velocity of the pore water flow in sediments, at a sufficient resolution and in a non-intrusive way. As a result, the role of fluid infiltration at the surface and in the interior affecting the initiation of motion of a sediment bed is not yet fully understood. Consequently, there is a strong need for numerical models, since these are capable of quantifying fluid driven sediment transport processes of complex sediment beds composed of irregular shapes. The numerical method Smoothed Particle Hydrodynamics (SPH) satisfies this need. As a meshless and Lagrangian technique, SPH is ideally suited to simulating flows in sediment beds composed of various grain shapes, but also flow around single grains at a high temporal and spatial resolution. The solver chosen is DualSPHysics (www.dual.sphysics.org) since this is validated for a range of flow conditions. For the present investigation a 3-D numerical flume model was generated using SPH with a length of 4.0 cm, a width of 0.05 cm and a height of 0.2 cm where mobile sediment particles were deposited in a recess. An experimental setup was designed to test sediment configurations composed of irregular grain shapes (grain diameter, D50=1000 μm). Each bed consisted of 3500 mobile objects. After the bed generation process, the entire domain was flooded with 18 million fluid particles. To drive the flow, an oscillating motion perpendicular to the bed was applied to the fluid, reaching a peak value of 0.3 cm/s, simulating 4 seconds of real time. The model results showed that flow speeds decreased logarithmically from the top of the domain towards the surface of the beds, indicating a fully developed boundary layer. Analysis of the fluid
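The SPH formalism the study relies on represents fields as kernel-weighted sums over particles. A sketch of the standard cubic spline kernel and the density sum (written as a plain O(N²) loop for clarity; solvers like DualSPHysics use neighbour lists on the GPU):

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 3-D cubic spline SPH kernel (Monaghan's M4 form)."""
    sigma = 1.0 / (np.pi * h**3)
    q = r / h
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0                               # compact support: W = 0 beyond 2h

def density(positions, masses, h):
    """SPH density estimate rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    pos = np.asarray(positions, float)
    rho = np.zeros(len(pos))
    for i in range(len(pos)):
        for j in range(len(pos)):
            rho[i] += masses[j] * cubic_spline_w(
                np.linalg.norm(pos[i] - pos[j]), h)
    return rho
```

The kernel integrates to one over its support, so the density estimate is consistent; momentum and viscosity terms are built from the same kernel's gradient.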

  7. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters

    PubMed Central

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-01-01

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the problem CFs face with motion and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to its ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs competitively against the top-ranked trackers. PMID:27618046
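
    The "point sharpness" cue mentioned above can be approximated by any blur-sensitive statistic of the target patch. A minimal sketch using mean gradient magnitude follows; this is one plausible choice, not the paper's exact function, and the `threshold` parameter is an assumption.

```python
import numpy as np

def patch_sharpness(patch: np.ndarray) -> float:
    """Mean gradient magnitude of a grayscale patch.

    Higher values indicate sharper (less motion-blurred) content.
    One plausible 'point sharpness' measure; the paper's exact
    definition may differ.
    """
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def is_blurred(patch: np.ndarray, threshold: float) -> bool:
    """Flag a target patch as motion-blurred when its sharpness
    drops below a (tuning) threshold."""
    return patch_sharpness(patch) < threshold

# A high-contrast checkerboard patch scores higher than a
# low-contrast version of itself, which mimics motion blur.
sharp = np.indices((16, 16)).sum(axis=0) % 2 * 255.0
blurry = sharp * 0.1 + 100.0
```

    A tracker could switch to a motion-robust update whenever `is_blurred` fires on the current target patch.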

  8. Real time markerless motion tracking using linked kinematic chains

    DOEpatents

    Luck, Jason P.; Small, Daniel E.

    2007-08-14

    A markerless method is described for tracking the motion of subjects in a three dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real-time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed for the subject and tracked using three dimensional volumetric data collected by a multiple-camera video imaging system. A physics based method is then used to compute forces to align the model with subsequent volumetric data sets in real-time. The method is able to handle occlusion of segments, accommodates joint limits, velocity constraints, and collision constraints, and provides for error recovery. The method further provides for elimination of singularities in Jacobian based calculations, which has been problematic in alternative methods.

  9. Respiratory motion tracking of skin and liver in swine for Cyberknife motion compensation

    NASA Astrophysics Data System (ADS)

    Tang, Jonathan; Dieterich, Sonja; Cleary, Kevin R.

    2004-05-01

    In this study, we collected respiratory motion data of external skin markers and internal liver fiducials from several swine. The POLARIS infrared tracking system was used for recording reflective markers placed on the swine's abdomen. The AURORA electromagnetic tracking system was used for recording 2 tracked needles implanted into the liver. This data will be used to develop correlation models between external skin movement and internal organ movement, which is the first step towards the ability to compensate for respiratory movement of the lesion. We are also developing a motion simulator for validation of our model and dose verification of mobile lesions in the CYBERKNIFE Suite. We believe that this research could provide significant information towards the development of precise radiation treatment of mobile target volumes.
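
    The external-to-internal correlation model such data supports can be as simple as a per-axis linear least-squares fit. A hedged sketch follows: the function names and the linear form are assumptions for illustration (clinical correspondence models are often polynomial).

```python
import numpy as np

def fit_linear_correlation(external: np.ndarray, internal: np.ndarray):
    """Fit internal ~= a * external + b for one axis by least squares.

    external: (N,) external skin-marker displacement samples
    internal: (N,) internal fiducial displacement samples
    Returns (a, b). A sketch of the kind of correlation model the
    study aims at, not the authors' implementation.
    """
    A = np.column_stack([external, np.ones_like(external)])
    (a, b), *_ = np.linalg.lstsq(A, internal, rcond=None)
    return a, b

def predict_internal(a: float, b: float, external_pos: float) -> float:
    """Predict the internal fiducial position from a live external
    marker reading using the fitted model."""
    return a * external_pos + b

# Synthetic breathing traces: internal motion is a scaled, offset
# copy of the external motion (noise-free for illustration).
t = np.linspace(0, 2 * np.pi, 50)
ext = np.sin(t)
inn = 1.8 * ext + 0.5
a, b = fit_linear_correlation(ext, inn)
```

    At treatment time, only the cheap external measurement is needed per frame; the model supplies the internal estimate.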

  10. Shoulder 3D range of motion and humerus rotation in two volleyball spike techniques: injury prevention and performance.

    PubMed

    Seminati, Elena; Marzari, Alessandra; Vacondio, Oreste; Minetti, Alberto E

    2015-06-01

    Repetitive stresses and movements on the shoulder in the volleyball spike expose this joint to overuse injuries that can threaten an athlete's career. Assuming that specific spike techniques play an important role in injury risk, we compared the kinematics of the traditional (TT) and alternative (AT) techniques in 21 elite athletes, evaluating their safety with respect to performance. The glenohumeral joint was set as the centre of an imaginary sphere, intersected by the distal end of the humerus at different angles. Shoulder range of motion and angular velocities were calculated and compared to the joint limits. Ball speed and jump height were also assessed. Results indicated that the trajectory of the humerus differed for the TT, with maximal flexion of the shoulder reduced by 10 degrees and horizontal abduction 15 degrees higher. No difference was found for external rotation angles, while axial rotation velocities were significantly higher in AT, with a 5% higher ball speed. The results suggest AT as a potential preventive measure against chronic shoulder pathologies, as it reduces shoulder flexion during spiking. The proposed method allows visualisation of the risks associated with different overhead manoeuvres by depicting humerus angles and velocities with respect to joint limits in the same 3D space.

  11. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation, are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get round this issue, recasting hologram reconstruction as a parametric inverse problem has been shown to accurately estimate the 3D positions and sizes of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it leads to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σx × σy × σz = 0.15 × 0.15 × 1 pixels.

  12. Calculating the Probability of Strong Ground Motions Using 3D Seismic Waveform Modeling - SCEC CyberShake

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Callaghan, S.; Graves, R.; Mehta, G.; Zhao, L.; Deelman, E.; Jordan, T. H.; Kesselman, C.; Okaya, D.; Cui, Y.; Field, E.; Gupta, V.; Vahi, K.; Maechling, P. J.

    2006-12-01

    Researchers from the SCEC Community Modeling Environment (SCEC/CME) project are utilizing the CyberShake computational platform and a distributed high performance computing environment that includes USC High Performance Computer Center and the NSF TeraGrid facilities to calculate physics-based probabilistic seismic hazard curves for several sites in the Southern California area. Traditionally, probabilistic seismic hazard analysis (PSHA) is conducted using intensity measure relationships based on empirical attenuation relationships. However, a more physics-based approach using waveform modeling could lead to significant improvements in seismic hazard analysis. Members of the SCEC/CME Project have integrated leading-edge PSHA software tools, SCEC-developed geophysical models, validated anelastic wave modeling software, and state-of-the-art computational technologies on the TeraGrid to calculate probabilistic seismic hazard curves using 3D waveform-based modeling. The CyberShake calculations for a single probabilistic seismic hazard curve require tens of thousands of CPU hours and multiple terabytes of disk storage. The CyberShake workflows are run on high performance computing systems including multiple TeraGrid sites (currently SDSC and NCSA), and the USC Center for High Performance Computing and Communications. To manage the extensive job scheduling and data requirements, CyberShake utilizes a grid-based scientific workflow system based on the Virtual Data System (VDS), the Pegasus meta-scheduler system, and the Globus toolkit. Probabilistic seismic hazard curves for spectral acceleration at 3.0 seconds have been produced for eleven sites in the Southern California region, including rock and basin sites. At low ground motion levels, there is little difference between the CyberShake and attenuation relationship curves. At higher ground motion (lower probability) levels, the curves are similar for some sites (downtown LA, I-5/SR-14 interchange) but different for

  13. SU-E-J-80: Interplay Effect Between VMAT Intensity Modulation and Tumor Motion in Hypofractionated Lung Treatment, Investigated with 3D Presage Dosimeter

    SciTech Connect

    Touch, M; Wu, Q; Oldham, M

    2014-06-01

    Purpose: To demonstrate an embedded tissue-equivalent Presage dosimeter for measuring 3D dose in moving tumors and to study the interplay effect between tumor motion and intensity modulation in hypofractionated Volumetric Modulated Arc Therapy (VMAT) lung treatment. Methods: Motion experiments were performed using cylindrical Presage dosimeters (5 cm diameter by 7 cm length) mounted inside the lung insert of a CIRS thorax phantom. Two different VMAT treatment plans were created and delivered in three different scenarios with the same prescribed dose of 18 Gy. Plan 1, containing a 2 cm spherical CTV with an additional 2 mm setup margin, was delivered on a stationary phantom. Plan 2 used the same CTV, expanded by 1 cm in the sup-inf direction to generate the ITV and PTV, respectively. The dosimeters were irradiated in static and variable-motion scenarios on a Truebeam system. After irradiation, high-resolution 3D dosimetry was performed using the Duke Large Field-of-view Optical-CT Scanner and compared to the calculated dose from Eclipse. Results: In the control case (no motion), good agreement was observed between the planned and delivered dose distributions, as indicated by 100% 3D gamma (3% of maximum planned dose and 3 mm DTA) passing rates in the CTV. In the motion cases the gamma passing rate was 99% in the CTV. DVH comparisons also showed good agreement between the planned and delivered dose in the CTV for both the control and motion cases. However, differences of 15% and 5% in dose to the PTV were observed in the motion and control cases, respectively. Conclusion: Given the very high dose per fraction of a hypofractionated treatment, a significant effect was observed only when motion was introduced to the target. This can result from the interplay between the moving target and the modulation of the MLC. 3D optical dosimetry can be of great advantage in hypofractionated-treatment dose-validation studies.

  14. Model tags: direct three-dimensional tracking of heart wall motion from tagged magnetic resonance images.

    PubMed

    Young, A A

    1999-12-01

    Although magnetic resonance tissue tagging is a useful tool for the non-invasive measurement of three-dimensional (3-D) heart wall motion, the clinical utility of current analysis techniques is limited by the prohibitively long time required for image analysis. A method was therefore developed for the reconstruction of 3-D heart wall motion directly from tagged magnetic resonance images, without prior identification of ventricular boundaries or tag stripe locations. The method utilized a finite-element model to describe the shape and motion of the heart. Initially, the model geometry was determined at the time of tag creation by fitting a small number of guide points which were placed interactively on the images. Model tags were then created within the model as material surfaces which defined the location of the magnetic tags. An objective function was derived to measure the degree of match between the model tags and the image stripes. The objective was minimized by allowing the model to deform directly under the influence of the images, utilizing an efficient method for calculating image-derived motion constraints. The model deformation could also be manipulated interactively by guide points. Experiments were performed using clinical images of a normal volunteer, as well as simulated images in which the true motion was specified. The root-mean-squared errors between the known and calculated displacement and strain for the simulated images were similar to those obtained using previous stripe-tracking and model-fitting methods. A significant improvement in analysis time was obtained for the normal volunteer and further improvements may allow the method to be applied in a 'real-time' clinical environment.

  15. Tracking 'differential organ motion' with a 'breathing' multileaf collimator: magnitude of problem assessed using 4D CT data and a motion-compensation strategy.

    PubMed

    McClelland, J R; Webb, S; McQuaid, D; Binnie, D M; Hawkes, D J

    2007-08-21

    Intrafraction tumour (e.g. lung) motion due to breathing can, in principle, be compensated for by applying identical breathing motions to the leaves of a multileaf collimator (MLC) as intensity-modulated radiation therapy is delivered by the dynamic MLC (DMLC) technique. A difficulty arising, however, is that irradiated voxels, which are in line with a bixel at one breathing phase (at which the treatment plan has been made), may move such that they cease to be in line with that breathing bixel at another phase. This is the phenomenon of differential voxel motion and existing tracking solutions have ignored this very real problem. There is absolutely no tracking solution to the problem of compensating for differential voxel motion. However, there is a strategy that can be applied in which the leaf breathing is determined to minimize the geometrical mismatch in a least-squares sense in irradiating differentially-moving voxels. A 1D formulation in very restricted circumstances is already in the literature and has been applied to some model breathing situations which can be studied analytically. These are, however, highly artificial. This paper presents the general 2D formulation of the problem including allowing different importance factors to be applied to planning target volume and organ at risk (or most generally) each voxel. The strategy also extends the literature strategy to the situation where the number of voxels connecting to a bixel is a variable. Additionally the phenomenon of 'cross-leaf-track/channel' voxel motion is formally addressed. The general equations are presented and analytic results are given for some 1D, artificially contrived, motions based on the Lujan equations of breathing motion. 
Further to this, 3D clinical voxel motion data have been extracted from 4D CT measurements to both assess the magnitude of the problem of 2D motion perpendicular to the beam-delivery axis in clinical practice and also to find the 2D optimum breathing-leaf strategy
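
    The least-squares strategy described above has a closed form in the simplest setting: for a single bixel, the leaf offset that minimizes the importance-weighted squared mismatch over its differentially moving voxels is the weighted mean of their shifts. A toy 1-D sketch under stated assumptions (scalar per-voxel shifts and the function name are illustrative simplifications of the general 2D formulation):

```python
import numpy as np

def breathing_leaf_offset(voxel_shifts, weights=None):
    """Least-squares leaf offset for one bixel whose voxels move
    differentially.

    Minimizes sum_i w_i * (offset - shift_i)**2, whose minimizer
    (set the derivative to zero) is the weighted mean of the voxel
    shifts. Importance factors, e.g. PTV vs organ at risk, enter
    as the weights w_i.
    """
    shifts = np.asarray(voxel_shifts, dtype=float)
    if weights is None:
        weights = np.ones_like(shifts)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * shifts) / np.sum(w))
```

    With equal weights the leaf simply follows the average voxel motion; up-weighting PTV voxels biases the leaf toward tracking them at the expense of the rest.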

  16. Tracking 'differential organ motion' with a 'breathing' multileaf collimator: magnitude of problem assessed using 4D CT data and a motion-compensation strategy

    NASA Astrophysics Data System (ADS)

    McClelland, J. R.; Webb, S.; McQuaid, D.; Binnie, D. M.; Hawkes, D. J.

    2007-08-01

    Intrafraction tumour (e.g. lung) motion due to breathing can, in principle, be compensated for by applying identical breathing motions to the leaves of a multileaf collimator (MLC) as intensity-modulated radiation therapy is delivered by the dynamic MLC (DMLC) technique. A difficulty arising, however, is that irradiated voxels, which are in line with a bixel at one breathing phase (at which the treatment plan has been made), may move such that they cease to be in line with that breathing bixel at another phase. This is the phenomenon of differential voxel motion and existing tracking solutions have ignored this very real problem. There is absolutely no tracking solution to the problem of compensating for differential voxel motion. However, there is a strategy that can be applied in which the leaf breathing is determined to minimize the geometrical mismatch in a least-squares sense in irradiating differentially-moving voxels. A 1D formulation in very restricted circumstances is already in the literature and has been applied to some model breathing situations which can be studied analytically. These are, however, highly artificial. This paper presents the general 2D formulation of the problem including allowing different importance factors to be applied to planning target volume and organ at risk (or most generally) each voxel. The strategy also extends the literature strategy to the situation where the number of voxels connecting to a bixel is a variable. Additionally the phenomenon of 'cross-leaf-track/channel' voxel motion is formally addressed. The general equations are presented and analytic results are given for some 1D, artificially contrived, motions based on the Lujan equations of breathing motion. 
Further to this, 3D clinical voxel motion data have been extracted from 4D CT measurements to both assess the magnitude of the problem of 2D motion perpendicular to the beam-delivery axis in clinical practice and also to find the 2D optimum breathing-leaf strategy

  17. Implementation of a New Method for Dynamic Multileaf Collimator Tracking of Prostate Motion in Arc Radiotherapy Using a Single KV Imager

    SciTech Connect

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Keall, Paul J.

    2010-03-01

    Purpose: To implement a method for real-time prostate motion estimation with a single kV imager during arc radiotherapy and to integrate it with dynamic multileaf collimator (DMLC) target tracking. Methods and Materials: An arc field with a circular aperture and 358 deg. gantry rotation was delivered to a motion phantom with a fiducial marker under continuous kV X-ray imaging at 5 Hz, perpendicular to the treatment beam. A pretreatment gantry rotation of 120 deg. in 20 sec with continuous imaging preceded the treatment. During treatment, each kV image was first used together with all previous images to estimate the three-dimensional (3D) target probability density function and then used together with this probability density function to estimate the 3D target position. The MLC aperture was then adapted to the estimated 3D target position. Tracking was performed with five patient-measured prostate trajectories that represented characteristic prostate motion patterns. Two data sets were recorded during tracking: (1) the estimated 3D target positions, for off-line comparison with the actual phantom motion; and (2) continuous portal images, for independent off-line calculation of the 2D tracking error as the positional difference between the marker and the MLC aperture center in each portal image. All experiments were also made with 1-Hz kV imaging. Results: The mean 3D root-mean-square error of the trajectory estimation was 0.6 mm. The mean root-mean-square tracking error was 0.7 mm, both parallel and perpendicular to the MLC. The accuracy degraded slightly for 1-Hz imaging. Conclusions: Single-imager DMLC prostate tracking that allows arbitrary beam modulation during arc radiotherapy was implemented. It has submillimeter accuracy for most prostate motion types.
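
    The core estimation step, resolving the coordinate along the imager axis that a single 2D kV projection cannot measure, can be illustrated with a toy conditional-mean estimator. This is a sketch under stated assumptions: a linear correlation between the resolved and unresolved axes, learned from earlier 3D samples, stands in for the paper's probability-density-function approach, and all names are hypothetical.

```python
import numpy as np

def estimate_unresolved_axis(prior_xyz, measured_xy):
    """Estimate the beam-axis coordinate z from a single 2D kV image.

    Regresses z on the resolved y coordinate using earlier 3D
    samples (prostate motion along different axes is correlated),
    i.e. takes the conditional mean of a fitted linear model.
    prior_xyz: (M, 3) earlier 3D target positions
    measured_xy: (2,) position resolved by the imager
    Returns a (3,) position estimate.
    """
    y, z = prior_xyz[:, 1], prior_xyz[:, 2]
    A = np.column_stack([y, np.ones_like(y)])
    (c1, c0), *_ = np.linalg.lstsq(A, z, rcond=None)
    z_hat = c1 * measured_xy[1] + c0
    return np.array([measured_xy[0], measured_xy[1], z_hat])

# Prior samples where z = 0.5*y - 0.2 exactly (noise-free toy data).
y = np.linspace(-2, 2, 20)
prior = np.column_stack([np.zeros_like(y), y, 0.5 * y - 0.2])
est = estimate_unresolved_axis(prior, np.array([0.1, 1.0]))
```

    In the actual method each new kV image both updates the density estimate and is resolved against it, so the model improves as the arc proceeds.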

  18. Motion management during IMAT treatment of mobile lung tumors—A comparison of MLC tracking and gated delivery

    PubMed Central

    Falk, Marianne; Pommer, Tobias; Keall, Paul; Korreman, Stine; Persson, Gitte; Poulsen, Per; Munck af Rosenschöld, Per

    2014-01-01

    Purpose: To compare real-time dynamic multileaf collimator (MLC) tracking, respiratory amplitude and phase gating, and no compensation for intrafraction motion management during intensity modulated arc therapy (IMAT). Methods: Motion management with MLC tracking and gating was evaluated for four lung cancer patients. The IMAT plans were delivered to a dosimetric phantom mounted onto a 3D motion phantom performing patient-specific lung tumor motion. The MLC tracking system was guided by an optical system that used stereoscopic infrared (IR) cameras and five spherical reflecting markers attached to the dosimetric phantom. The gated delivery used a duty cycle of 35% and collected position data using an IR camera and two reflecting markers attached to a marker block. Results: The average gamma index failure rate (2% and 2 mm criteria) was <0.01% with amplitude gating for all patients, and <0.1% with phase gating and <3.7% with MLC tracking for three of the four patients. One of the patients had an average failure rate of 15.1% with phase gating and 18.3% with MLC tracking. With no motion compensation, the average gamma index failure rate ranged from 7.1% to 46.9% for the different patients. Evaluation of the dosimetric error contributions showed that the gated delivery mainly had errors in target localization, while MLC tracking also had contributions from MLC leaf fitting and leaf adjustment. The average treatment time was about three times longer with gating compared to delivery with MLC tracking (that did not prolong the treatment time) or no motion compensation. For two of the patients, the different motion compensation techniques allowed for approximately the same margin reduction but for two of the patients, gating enabled a larger reduction of the margins than MLC tracking. Conclusions: Both gating and MLC tracking reduced the effects of the target movements, although the gated delivery showed a better dosimetric accuracy and enabled a larger reduction of the

  19. Large scale track analysis for wide area motion imagery surveillance

    NASA Astrophysics Data System (ADS)

    van Leeuwen, C. J.; van Huis, J. R.; Baan, J.

    2016-10-01

    Wide Area Motion Imagery (WAMI) enables image based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources becomes increasingly time-consuming as more data is added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high-quality track information for more than 40 thousand vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers, or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that makes it possible to quickly obtain only a part, or a sub-sampling, of the original high-resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale, by skipping to the correct frames and reconstructing the image. Location based queries allow a user to select tracks around a particular region of interest such as a landmark, building or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle is to filter on color or color intensity. Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their
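
    The tile-addressing arithmetic such a multi-scale tile-based format requires can be sketched in a few lines. This is an illustrative scheme, not the authors' file format; the tile size, pyramid layout, and function name are assumptions.

```python
def tiles_for_roi(x0, y0, x1, y1, tile=256, level=0):
    """Tile indices covering a region of interest at a pyramid level.

    Level k stores the image downsampled by 2**k; tiles are
    fixed-size squares indexed in row-major order. Coordinates are
    full-resolution pixels; (x1, y1) is exclusive. Returns the list
    of (tx, ty) tile indices to fetch.
    """
    s = 2 ** level
    tx0, ty0 = (x0 // s) // tile, (y0 // s) // tile
    tx1, ty1 = ((x1 - 1) // s) // tile, ((y1 - 1) // s) // tile
    return [(tx, ty) for ty in range(ty0, ty1 + 1)
                     for tx in range(tx0, tx1 + 1)]
```

    Because the index computation is pure integer arithmetic, a viewer can seek directly to the handful of tiles a query touches instead of decoding the full frame.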

  20. Alignment of sparse freehand 3-D ultrasound with preoperative images of the liver using models of respiratory motion and deformation.

    PubMed

    Blackall, Jane M; Penney, Graeme P; King, Andrew P; Hawkes, David J

    2005-11-01

    We present a method for alignment of an interventional plan to optically tracked two-dimensional intraoperative ultrasound (US) images of the liver. Our clinical motivation is to enable the accurate transfer of information from three-dimensional preoperative imaging modalities [magnetic resonance (MR) or computed tomography (CT)] to intraoperative US to aid needle placement for thermal ablation of liver metastases. An initial rigid registration to intraoperative coordinates is obtained using a set of US images acquired at maximum exhalation. A preprocessing step is applied to both the preoperative images and the US images to produce evidence of corresponding structures. This yields two sets of images representing classification of regions as vessels. The registration then proceeds using these images. The preoperative images and plan are then warped to correspond to a single US slice acquired at an unknown point in the breathing cycle where the liver is likely to have moved and deformed relative to the preoperative image. Alignment is constrained using a patient-specific model of breathing motion and deformation. Target registration error is estimated by carrying out simulation experiments using resliced MR volumes to simulate real US and comparing the registration results to a "bronze-standard" registration performed on the full MR volume. Finally, the system is tested using real US and verified using visual inspection.

  1. Video motion analysis with automated tracking: an insight

    NASA Astrophysics Data System (ADS)

    Aftab Usman, Bilal; Alam, Junaid; Sabieh Anwar, Muhammad

    2015-11-01

    The article describes the use of elementary techniques in computer vision and motion photography for the analysis of well-known experiments in interactive instructional physics laboratories. We describe a method for the automated tracking of the kinematics of physical objects which involves the subtraction of orthogonal colors in color space. The aim is to expose undergraduate students to image processing and its applications in video motion analysis. The technique is simple, results in a computational speedup compared to an existing method, removes the need for laborious, repetitive manual tagging of frames, and is generally robust against color variations. Insight is also presented into the process of thresholding and selecting the correct region out of the several choices presented in the post-threshold frames. Finally, the approach is illustrated through a selection of well-known mechanics experiments.
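
    The orthogonal-color-subtraction idea can be sketched directly: subtracting the stronger of the other two channels from the marker's channel suppresses white, gray and differently colored clutter, and a threshold then isolates the marker. A minimal sketch for a red marker; the threshold value and function names are assumptions.

```python
import numpy as np

def red_marker_mask(frame, threshold=80):
    """Isolate a red tracking marker by orthogonal color subtraction.

    frame: (H, W, 3) RGB uint8 image. Red minus max(green, blue)
    is near zero for white/gray/non-red pixels and large for the
    marker; thresholding yields a boolean mask of marker pixels.
    """
    r = frame[..., 0].astype(int)
    gb = np.maximum(frame[..., 1], frame[..., 2]).astype(int)
    return (r - gb) > threshold

def centroid(mask):
    """Marker position as the centroid of the thresholded pixels."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

# 8x8 white frame with a red 2x2 marker at columns 3-4, rows 5-6.
frame = np.full((8, 8, 3), 255, dtype=np.uint8)
frame[5:7, 3:5] = (255, 0, 0)
cx, cy = centroid(red_marker_mask(frame))
```

    Running the two steps per frame and recording the centroids yields the position-versus-time data the kinematics analysis needs.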

  2. Automatic acquisition of motion trajectories: tracking hockey players

    NASA Astrophysics Data System (ADS)

    Okuma, Kenji; Little, James J.; Lowe, David

    2003-12-01

    Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as their trajectories, querying them by analyzing the descriptive information of the data, and predicting the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More accurately, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision such as fast and unpredictable players' motions and rapid camera motions make our challenge worth tackling. To the best of our knowledge, there have not been any automatic video annotation systems for hockey developed in the past. Although there are many obstacles to overcome, our efforts and accomplishments would hopefully establish the infrastructure of the automatic hockey annotation system and become a milestone for

  3. Air motion determination by tracking humidity patterns in isentropic layers

    NASA Technical Reports Server (NTRS)

    Mancuso, R. L.; Hall, D. J.

    1975-01-01

    Determining air motions by tracking humidity patterns in isentropic layers was investigated. Upper-air rawinsonde data from the NSSL network and from the AVE-II pilot experiment were used to simulate temperature and humidity profile data that will eventually be available from geosynchronous satellites. Polynomial surfaces that move with time were fitted to the mixing-ratio values of the different isentropic layers. The velocity components of the polynomial surfaces are part of the coefficients that are determined in order to give an optimum fitting of the data. In the mid-troposphere, the derived humidity motions were in good agreement with the winds measured by rawinsondes so long as there were few or no clouds and the lapse rate was relatively stable. In the lower troposphere, the humidity motions were unreliable primarily because of nonadiabatic processes and unstable lapse rates. In the upper troposphere, the humidity amounts were too low to be measured with sufficient accuracy to give reliable results. However, it appears that humidity motions could be used to provide mid-tropospheric wind data over large regions of the globe.
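
    The moving-polynomial idea above amounts to finding the advection velocity under which the humidity field collapses onto a single fitted surface. A hedged 1-D sketch: a grid search over candidate speeds with a polynomial fit in the moving coordinate stands in for the paper's joint optimisation, where the velocity components are solved for as coefficients; all names are illustrative.

```python
import numpy as np

def advection_speed(x, t, q, deg=2, speeds=np.linspace(-5, 5, 101)):
    """Estimate the speed u of a pattern advected without change of
    shape: q(x, t) = P(x - u*t).

    For each candidate u, fit a degree-`deg` polynomial in the
    moving coordinate xi = x - u*t and keep the u with the smallest
    residual: at the true speed the samples from all times collapse
    onto one curve, so the fit residual is minimal.
    """
    best_u, best_res = None, np.inf
    for u in speeds:
        xi = x - u * t
        coeffs, res = np.polyfit(xi, q, deg, full=True)[:2]
        r = res[0] if res.size else 0.0
        if r < best_res:
            best_u, best_res = u, r
    return best_u

# A parabolic hump translating at 2 units per time step, sampled
# at three times over a fixed station grid.
xs = np.tile(np.linspace(-10, 10, 41), 3)
ts = np.repeat(np.array([0.0, 1.0, 2.0]), 41)
qs = ((xs - 2.0 * ts) / 3.0) ** 2
u_hat = advection_speed(xs, ts, qs)
```

    The same collapse criterion fails, as the abstract notes, when the pattern changes shape between observations, e.g. under nonadiabatic processes in the lower troposphere.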

  4. Crosstalk minimization in autostereoscopic multiview 3D display by eye tracking and fusion (overlapping) of viewing zones

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Seon-Kyu; Yoon, Ki-Hyuk

    2012-06-01

    An autostereoscopic 3D display provides binocular perception without eye glasses, but suffers from a weak 3D effect and viewer dizziness due to crosstalk. Crosstalk-related problems degrade the 3D effect, clarity, and realism of the 3D image. A novel method of reducing the crosstalk is designed and tested; the method is based on the fusion of viewing zones and the real-time eye position. It is shown experimentally that the crosstalk is effectively reduced at any position around the optimal viewing distance.

  5. Respiratory motion compensation for simultaneous PET/MR based on a 3D-2D registration of strongly undersampled radial MR data: a simulation study

    NASA Astrophysics Data System (ADS)

    Rank, Christopher M.; Heußer, Thorsten; Flach, Barbara; Brehm, Marcus; Kachelrieß, Marc

    2015-03-01

    We propose a new method for PET/MR respiratory motion compensation, which is based on a 3D-2D registration of strongly undersampled MR data and a) runs in parallel with the PET acquisition, b) can be interlaced with clinical MR sequences, and c) requires less than one minute of the total MR acquisition time per bed position. In our simulation study, we applied a 3D encoded radial stack-of-stars sampling scheme with 160 radial spokes per slice and an acquisition time of 38 s. Gated 4D MR images were reconstructed using a 4D iterative reconstruction algorithm. Based on these images, motion vector fields were estimated using our newly developed 3D-2D registration framework. A 4D PET volume of a patient with eight hot lesions in the lungs and upper abdomen was simulated, and motion-compensated (MoCo) 4D PET images were reconstructed based on the motion vector fields derived from MR. For evaluation, average SUVmean values of the artificial lesions were determined for a 3D, a gated 4D, a MoCo 4D and a reference (with ten-fold measurement time) gated 4D reconstruction. Compared to the reference, 3D reconstructions yielded an underestimation of SUVmean values due to motion blurring. In contrast, gated 4D reconstructions showed the highest variation of SUVmean due to low statistics. MoCo 4D reconstructions were only slightly affected by these two sources of uncertainty, resulting in a significant visual and quantitative improvement in terms of SUVmean values. Whereas temporal resolution was comparable to the gated 4D images, signal-to-noise ratio and contrast-to-noise ratio were close to the 3D reconstructions.

  6. Accuracy of tumor motion compensation algorithm from a robotic respiratory tracking system: A simulation study

    SciTech Connect

    Seppenwoolde, Yvette; Berbeco, Ross I.; Nishioka, Seiko; Shirato, Hiroki; Heijmen, Ben

    2007-07-15

    The Synchrony{sup TM} Respiratory Tracking System (RTS) is a treatment option of the CyberKnife robotic treatment device to irradiate extra-cranial tumors that move due to respiration. Advantages of the RTS are that patients can breathe normally and that there is no loss of linac duty cycle, as with gated therapy. Tracking is based on a measured correspondence model (linear or polynomial) between internal tumor motion and external (chest/abdominal) marker motion. The radiation beam follows the tumor movement via the continuously measured external marker motion. To establish the correspondence model at the start of treatment, the 3D internal tumor position is determined at 15 discrete time points by automatic detection of implanted gold fiducials in two orthogonal x-ray images; simultaneously, the positions of the external markers are measured. During the treatment, the relationship between internal and external marker positions is continuously accounted for and is regularly checked and updated. Here we use computer simulations based on continuously and simultaneously recorded internal and external marker positions to investigate the effectiveness of tumor tracking by the RTS. The CyberKnife does not allow continuous acquisition of x-ray images to follow the moving internal markers (the typical imaging frequency is once per minute). Therefore, for the simulations, we have used data for eight lung cancer patients treated with respiratory gating. All of these patients had simultaneous and continuous recordings of both internal tumor motion and external abdominal motion. The available continuous relationship between internal and external markers for these patients allowed investigation of the consequences of the lower acquisition frequency of the RTS. With the use of the RTS, simulated treatment errors due to breathing motion were largely and consistently reduced over the treatment time for all studied patients. A considerable part of the maximum reduction in treatment error
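The internal-external correspondence model described above can be sketched as a polynomial fit of internal tumor position against external marker amplitude at the discrete calibration time points. The calibration data below are hypothetical, and the actual RTS fitting procedure is not public; this is only a sketch of the idea.

```python
import numpy as np

# Hypothetical calibration data: simultaneous external marker amplitude (mm)
# and internal tumor position (mm) at 15 discrete time points, here assumed
# to follow a linear relationship.
external = np.linspace(0.0, 10.0, 15)
internal = 1.8 * external + 2.0

# Fit the correspondence model; the RTS supports linear or polynomial models,
# so a quadratic is fitted here (it collapses to linear on linear data).
coeffs = np.polyfit(external, internal, deg=2)
model = np.poly1d(coeffs)

# During treatment, the continuously measured external marker signal drives
# the beam position through the model.
predicted_tumor_pos = model(6.5)
```

The model would be refitted whenever a new x-ray fiducial detection becomes available, mirroring the regular check-and-update cycle the abstract describes.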

  7. Tracking Arabia-India motion from Miocene to Present

    NASA Astrophysics Data System (ADS)

    Chamot-Rooke, N. R.; Fournier, M.

    2009-12-01

    Although small, the present-day Arabia-India motion has been captured by several global and regional geodetic surveys that consistently show dextral motion of a few mm/yr, either transpressive or transtensive (Fournier et al., 2008). This motion is accommodated along the Owen Fracture Zone, an active strike-slip boundary that runs for more than 700 km from the Somalia-India-Arabia triple junction in the south to the Dalrymple trough in the north. Two recent marine cruises conducted along this fault aboard the BHO Beautemps-Beaupré (AOC 2006 and OWEN 2009) using a high-resolution multibeam sounder (Simrad EM120, 10 m vertical resolution) provided a complete map of the active fault and confirmed a present-day pure dextral motion. The surface breaks closely follow a small circle of the Arabia-India motion, with several pull-apart basins at the junctions between the main segments of the fault. Geomorphologic offsets reach 10 km, suggesting that the mapped fault has been active in the same style for the past several million years. When did this motion start? The difficulty in tracking the past Arabia-India motion is that there is no direct kinematic indicator available, since the boundary has been strike-slip and/or convergent during the Tertiary. Motion was most probably sinistral during the rapid northward travel of India towards Eurasia in the early Tertiary, Arabia being rigidly attached to Africa until the opening of the Gulf of Aden. However, the nature and location of the Arabia-India boundary at that time remain speculative. Throughout the Miocene, the relative motion between India and Arabia has been indirectly recorded at the Sheba and Carlsberg ridges, the former recording Arabia-Somalia motion (opening of the Gulf of Aden) and the latter India-Somalia motion (Indian Ocean opening). Both ridges have been studied in some detail recently, using up-to-date magnetic lineation identifications (Merkouriev and DeMets, 2006; Fournier et al., 2009). We combine

  8. Surrogate-driven deformable motion model for organ motion tracking in particle radiation therapy

    NASA Astrophysics Data System (ADS)

    Fassi, Aurora; Seregni, Matteo; Riboldi, Marco; Cerveri, Pietro; Sarrut, David; Battista Ivaldi, Giovanni; Tabarelli de Fatis, Paola; Liotta, Marco; Baroni, Guido

    2015-02-01

    The aim of this study is the development and experimental testing of a tumor tracking method for particle radiation therapy, providing the daily respiratory dynamics of the patient’s thoraco-abdominal anatomy as a function of an external surface surrogate combined with an a priori motion model. The proposed tracking approach is based on a patient-specific breathing motion model, estimated from the four-dimensional (4D) planning computed tomography (CT) through deformable image registration. The model is adapted to the interfraction baseline variations in the patient’s anatomical configuration. The driving amplitude and phase parameters are obtained intrafractionally from a respiratory surrogate signal derived from the external surface displacement. The developed technique was assessed on a dataset of seven lung cancer patients, who underwent two repeated 4D CT scans. The first 4D CT was used to build the respiratory motion model, which was tested on the second scan. The geometric accuracy in localizing lung lesions, averaged over all breathing phases, ranged between 0.6 and 1.7 mm across all patients. Errors in tracking the surrounding organs at risk, such as the lungs, trachea and esophagus, were lower than 1.3 mm on average. The median absolute variation in water equivalent path length (WEL) within the target volume did not exceed 1.9 mm-WEL for simulated particle beams. A significant improvement was achieved compared with error compensation based on standard rigid alignment. The present work can be regarded as a feasibility study for the potential extension of tumor tracking techniques to particle treatments. Unlike current tracking methods applied in conventional radiotherapy, the proposed approach allows for the dynamic localization of all anatomical structures scanned in the planning CT, thus providing complete information on the density and WEL variations required for particle beam range adaptation.
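The surrogate-driven model above combines three ingredients: the phase-indexed motion vector fields from the planning 4D CT, amplitude/phase parameters from the surface surrogate, and an interfraction baseline correction. A minimal sketch of how these could be combined follows; the function name, interpolation scheme, and parameterization are assumptions, not the authors' implementation.

```python
import numpy as np

def predict_displacement(mvf_phases, phase, amplitude_scale, baseline_shift):
    """Sketch of a surrogate-driven motion model: linearly interpolate the
    planning 4D-CT motion vector fields at the surrogate-derived respiratory
    phase, scale by the surrogate amplitude, and add the interfraction
    baseline correction.

    mvf_phases: array (n_phases, ..., 3) of displacement fields.
    phase: continuous respiratory phase in [0, n_phases).
    """
    n = mvf_phases.shape[0]
    i0 = int(np.floor(phase)) % n
    frac = phase - np.floor(phase)
    i1 = (i0 + 1) % n                  # wrap around the breathing cycle
    mvf = (1.0 - frac) * mvf_phases[i0] + frac * mvf_phases[i1]
    return amplitude_scale * mvf + baseline_shift
```

Evaluated at every voxel of the planning CT, such a field yields the dynamic localization of all scanned structures, which is what makes the WEL recomputation for particle beams possible.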

  9. SU-E-T-562: Motion Tracking Optimization for Conformal Arc Radiotherapy Plans: A QUASAR Phantom Based Study

    SciTech Connect

    Xu, Z; Wang, I; Yao, R; Podgorsak, M

    2015-06-15

    Purpose: This study uses plan parameter optimization (dose rate, collimator angle, couch angle, initial starting phase) to improve the performance of conformal arc radiotherapy plans with motion tracking by increasing the plan performance score (PPS). Methods: Two types of 3D conformal arc plans were created based on a QUASAR respiratory motion phantom with spherical and cylindrical targets. A sinusoidal model was applied to the MLC leaves to generate motion tracking plans. A MATLAB program was developed to calculate the PPS of each plan (ranging from 0 to 1) and to optimize the plan parameters. We first selected the dose rate for the motion tracking plans and then used a simulated annealing algorithm to search for the combination of the other parameters that resulted in the plan with the maximal PPS. The optimized motion tracking plan was delivered by a Varian TrueBeam linac. In-room cameras and a stopwatch were used for starting-phase selection and synchronization between phantom motion and plan delivery. Gaf-EBT2 dosimetry films were used to measure the dose delivered to the target in the QUASAR phantom. Dose profiles and TrueBeam trajectory log files were used for plan delivery performance evaluation. Results: For the spherical target, the maximal PPS (PPSsph) of the optimized plan was 0.79 (dose rate: 500 MU/min, collimator: 90°, couch: +10°, starting phase: 0.83π). For the cylindrical target, the maximal PPScyl was 0.75 (dose rate: 300 MU/min, collimator: 87°, starting phase: 0.97π) with the couch at 0°. Differences of dose profiles between the motion tracking plans (with the maximal and the minimal PPS) and the 3D conformal plans were as follows: PPSsph=0.79: %ΔFWHM: 8.9%, %Dmax: 3.1%; PPSsph=0.52: %ΔFWHM: 10.4%, %Dmax: 6.1%. PPScyl=0.75: %ΔFWHM: 4.7%, %Dmax: 3.6%; PPScyl=0.42: %ΔFWHM: 12.5%, %Dmax: 9.6%. Conclusion: By achieving a high plan performance score through parameter optimization, we can improve the target dose conformity of a motion tracking plan by decreasing total MLC leaf travel distance
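The simulated annealing search over plan parameters can be sketched generically as below. The PPS surface here is a toy Gaussian stand-in (the real PPS comes from the study's MATLAB program); all function names, step sizes, and the cooling schedule are illustrative assumptions.

```python
import math
import random

def anneal(score, init_params, neighbor, steps=2000, t0=1.0, seed=0):
    """Generic simulated-annealing search maximizing a plan performance
    score (PPS) in [0, 1] over a tuple of plan parameters."""
    rng = random.Random(seed)
    cur, cur_s = init_params, score(init_params)
    best, best_s = cur, cur_s
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6        # linear cooling schedule
        cand = neighbor(cur, rng)
        s = score(cand)
        # Always accept improvements; accept worse moves with a
        # temperature-dependent probability.
        if s > cur_s or rng.random() < math.exp((s - cur_s) / t):
            cur, cur_s = cand, s
            if s > best_s:
                best, best_s = cand, s
    return best, best_s

# Toy stand-in for the PPS surface over (collimator angle, starting phase),
# peaked near the optimum reported for the spherical target.
def pps(p):
    collimator, phase = p
    return math.exp(-(collimator - 90.0) ** 2 / 200.0
                    - (phase - 0.83) ** 2 / 0.1)

def step(p, rng):
    return (p[0] + rng.uniform(-5.0, 5.0), p[1] + rng.uniform(-0.1, 0.1))

best_params, best_pps = anneal(pps, (70.0, 0.5), step)
```

In the study the search additionally spans the couch angle, with the dose rate fixed beforehand.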

  10. It is time to integrate: the temporal dynamics of object motion and texture motion integration in multiple object tracking.

    PubMed

    Huff, Markus; Papenmeier, Frank

    2013-01-14

    In multiple-object tracking, participants can track several moving objects among identical distractors. It has recently been shown that the human visual system uses motion information in order to keep track of targets (St. Clair et al., Journal of Vision, 10(4), 1-13). Texture on the surface of an object that moved in the opposite direction to the object itself impaired tracking performance. In this study, we examined the temporal interval at which texture motion and object motion are integrated in dynamic scenes. In two multiple-object tracking experiments, we manipulated the texture motion on the objects: the texture either moved in the same direction as the objects, in the opposite direction, or alternated between the same and opposite direction at varying intervals. In Experiment 1, we show that the integration of object motion and texture motion can take place at intervals as short as 100 ms. In Experiment 2, we show that there is a linear relationship between the proportion of opposite texture motion and tracking performance. We suggest that texture motion might cause shifts in perceived object locations, thus influencing tracking performance.

  11. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this progress technically, but the interaction of the third dimension with humans is not yet clear. Previously, it was found that any increased load on the visual system can create visual fatigue, as with prolonged TV watching, computer work, or video gaming. Watching S3D, however, can cause visual fatigue of another nature, since all S3D technologies create the illusion of the third dimension based on the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  12. Structured light-based motion tracking in the limited view of an MR head coil

    NASA Astrophysics Data System (ADS)

    Erikshøj, M.; Olesen, O. V.; Conradsen, K.; Højgaard, L.; Larsen, R.

    2013-02-01

    A markerless motion tracking (MT) system developed for use in PET brain imaging has been tested in the limited field of view (FOV) of the MR head coil from the Siemens Biograph mMR. The system is a 3D surface scanner that uses structured light (SL) to create point cloud reconstructions of the facial surface. The point clouds are continuously realigned to a reference scan to obtain pose estimates. The system has been tested on a mannequin head performing controlled rotational and translational axial movements within the head coil outside the range of the magnetic field. The RMS of the residual error of the rotation was 0.11° and the RMS difference in the translation with the control system was 0.17 mm, within the trackable range of movement.
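The core of realigning each structured-light surface scan to the reference scan is a least-squares rigid transform between corresponding point sets. A standard Kabsch/SVD solution is sketched below; it is a textbook building block, not the authors' full registration pipeline (which must also establish correspondences, e.g. via ICP).

```python
import numpy as np

def rigid_align(P: np.ndarray, Q: np.ndarray):
    """Least-squares rigid transform (R, t) mapping point cloud P onto Q,
    given one-to-one correspondences (Kabsch algorithm via SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Applied to each facial surface scan against the reference, (R, t) directly provides the pose estimate reported by the tracking system.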

  13. SU-E-J-199: Evaluation of Motion Tracking Effects On Stereotactic Body Radiotherapy of Abdominal Targets

    SciTech Connect

    Monterroso, M; Dogan, N; Yang, Y

    2014-06-01

    Purpose: To evaluate the effects of respiratory motion on the delivered dose distribution of CyberKnife motion tracking-based stereotactic body radiotherapy (SBRT) of abdominal targets. Methods: Four patients (two pancreas and two liver, and all with 4DCT scans) were retrospectively evaluated. A plan (3D plan) using CyberKnife Synchrony was optimized on the end-exhale phase in the CyberKnife's MultiPlan treatment planning system (TPS), with 40Gy prescribed in 5 fractions. A 4D plan was then created following the 4D planning utility in the MultiPlan TPS, by recalculating dose from the 3D plan beams on all 4DCT phases, with the same prescribed isodose line. The other seven phases of the 4DCT were then deformably registered to the end-exhale phase for 4D dose summation. Doses to the target and organs at risk (OAR) were compared between 3D and 4D plans for each patient. The mean and maximum doses to duodenum, liver, spinal cord and kidneys, and doses to 5cc of duodenum, 700cc of liver, 0.25cc of spinal cord and 200cc of kidneys were used. Results: Target coverage in the 4D plans was about 1% higher for two patients and about 9% lower in the other two. OAR dose differences between 3D and 4D varied among structures, with doses as much as 8.26Gy lower or as much as 5.41Gy higher observed in the 4D plans. Conclusion: The delivered dose can be significantly different from the planned dose for both the target and OAR close to the target, which is caused by the relative geometry change while the beams chase the moving target. Studies will be performed on more patients in the future. The differences of motion tracking versus passive motion management with the use of internal target volumes will also be investigated.

  14. Computer numerical control (CNC) lithography: light-motion synchronized UV-LED lithography for 3D microfabrication

    NASA Astrophysics Data System (ADS)

    Kim, Jungkwun; Yoon, Yong-Kyu; Allen, Mark G.

    2016-03-01

    This paper presents a computer-numerical-controlled ultraviolet light-emitting diode (CNC UV-LED) lithography scheme for three-dimensional (3D) microfabrication. The CNC lithography scheme utilizes sequential multi-angled UV light exposures along with a synchronized switchable UV light source to create arbitrary 3D light traces, which are transferred into the photosensitive resist. The system comprises a switchable, movable UV-LED array as a light source, a motorized tilt-rotational sample holder, and a computer-control unit. System operation is such that the tilt-rotational sample holder moves in a pre-programmed routine, and the UV-LED is illuminated only at desired positions of the sample holder during the desired time period, enabling the formation of complex 3D microstructures. This facilitates easy fabrication of complex 3D structures, which otherwise would have required multiple manual exposure steps as in the previous multidirectional 3D UV lithography approach. Since it is batch processed, processing time is far less than that of the 3D printing approach at the expense of some reduction in the degree of achievable 3D structure complexity. In order to produce uniform light intensity from the arrayed LED light source, the UV-LED array stage has been kept rotating during exposure. UV-LED 3D fabrication capability was demonstrated through a plurality of complex structures such as V-shaped micropillars, micropanels, a micro-‘hi’ structure, a micro-‘cat’s claw,’ a micro-‘horn,’ a micro-‘calla lily,’ a micro-‘cowboy’s hat,’ and a micro-‘table napkin’ array.
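The pre-programmed routine with a synchronized switchable light source can be sketched as a simple schedule of holder positions with the LED gated on only at exposure positions. All positions, dwell times, and the intensity value below are hypothetical, purely to illustrate the synchronization idea.

```python
# Each entry: (tilt_deg, rotation_deg, led_on, dwell_s). The LED is switched
# on only for the programmed dwell at exposure positions; repositioning
# moves keep it off, so only the desired 3D light traces reach the resist.
routine = [
    (30.0,   0.0, True,  5.0),
    (30.0,  90.0, True,  5.0),
    ( 0.0, 180.0, False, 1.0),   # reposition only, LED off
    (30.0, 270.0, True,  5.0),
]

def exposure_dose(routine, intensity_mw_cm2=10.0):
    """Total UV dose (mJ/cm^2) delivered over the routine, assuming a
    constant (hypothetical) irradiance while the LED is on."""
    return sum(intensity_mw_cm2 * dwell
               for _, _, led_on, dwell in routine if led_on)
```

A real controller would additionally rotate the LED array stage during each exposure to even out the arrayed source's intensity, as the paper notes.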

  15. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-07

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  16. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  17. A hybrid approach for fusing 4D-MRI temporal information with 3D-CT for the study of lung and lung tumor motion

    SciTech Connect

    Yang, Y. X.; Van Reeth, E.; Poh, C. L.; Teo, S.-K.; Tan, C. H.; Tham, I. W. K.

    2015-08-15

    Purpose: Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating synthetic 4D-CT dataset for lung cancer patients by combining both continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT using the authors’ proposed approach. Methods: A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from 4D-MRI dataset, creating a synthetic 4D-CT dataset. Results: The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors’ proposed approach. Their hybrid approach achieved a 40% error reduction (based on landmarks assessment) over using only DIR techniques. Conclusions: The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show movement of lung and lung tumor over multiple breathing cycles.

  18. Evaluation of the combined effects of target size, respiratory motion and background activity on 3D and 4D PET/CT images

    NASA Astrophysics Data System (ADS)

    Park, Sang-June; Ionascu, Dan; Killoran, Joseph; Mamede, Marcelo; Gerbaudo, Victor H.; Chin, Lee; Berbeco, Ross

    2008-07-01

    Gated (4D) PET/CT has the potential to greatly improve the accuracy of radiotherapy at treatment sites where internal organ motion is significant. However, the best methodology for applying 4D-PET/CT to target definition is not currently well established. With the goal of better understanding how to best apply 4D information to radiotherapy, initial studies were performed to investigate the effect of target size, respiratory motion and target-to-background activity concentration ratio (TBR) on 3D (ungated) and 4D PET images. Using a PET/CT scanner with 4D or gating capability, a full 3D-PET scan corrected with a 3D attenuation map from 3D-CT scan and a respiratory gated (4D) PET scan corrected with corresponding attenuation maps from 4D-CT were performed by imaging spherical targets (0.5-26.5 mL) filled with 18F-FDG in a dynamic thorax phantom and NEMA IEC body phantom at different TBRs (infinite, 8 and 4). To simulate respiratory motion, the phantoms were driven sinusoidally in the superior-inferior direction with amplitudes of 0, 1 and 2 cm and a period of 4.5 s. Recovery coefficients were determined on PET images. In addition, gating methods using different numbers of gating bins (1-20 bins) were evaluated with image noise and temporal resolution. For evaluation, volume recovery coefficient, signal-to-noise ratio and contrast-to-noise ratio were calculated as a function of the number of gating bins. Moreover, the optimum thresholds which give accurate moving target volumes were obtained for 3D and 4D images. The partial volume effect and signal loss in the 3D-PET images due to the limited PET resolution and the respiratory motion, respectively were measured. The results show that signal loss depends on both the amplitude and pattern of respiratory motion. However, the 4D-PET successfully recovers most of the loss induced by the respiratory motion. The 5-bin gating method gives the best temporal resolution with acceptable image noise. 
The results based on the 4D
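The respiratory gating studied above amounts to assigning acquired data to phase bins over the motion cycle. For a strictly periodic (sinusoidal) phantom motion of known period, as in this study, the binning can be sketched as follows; the function name and list-mode formulation are illustrative assumptions.

```python
import numpy as np

def gate_by_phase(event_times: np.ndarray, period_s: float, n_bins: int):
    """Assign event (or frame) timestamps to respiratory gating bins,
    assuming strictly periodic motion of known period. Returns one bin
    index in [0, n_bins) per timestamp."""
    phase = (event_times % period_s) / period_s        # phase in [0, 1)
    return np.minimum((phase * n_bins).astype(int), n_bins - 1)
```

Increasing `n_bins` sharpens temporal resolution but divides the counts per bin, which is exactly the noise-versus-resolution trade-off that led to the 5-bin recommendation.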

  19. How Fast Is Your Body Motion? Determining a Sufficient Frame Rate for an Optical Motion Tracking System Using Passive Markers.

    PubMed

    Song, Min-Ho; Godøy, Rolf Inge

    2016-01-01

    This paper addresses how to determine a sufficient frame (sampling) rate for an optical motion tracking system using passive reflective markers. When using passive markers for the optical motion tracking, avoiding identity confusion between the markers becomes a problem as the speed of motion increases, necessitating a higher frame rate to avoid a failure of the motion tracking caused by marker confusions and/or dropouts. Initially, one might believe that the Nyquist-Shannon sampling rate estimated from the assumed maximal temporal variation of a motion (i.e. a sampling rate at least twice that of the maximum motion frequency) could be the complete solution to the problem. However, this paper shows that also the spatial distance between the markers should be taken into account in determining the suitable frame rate of an optical motion tracking with passive markers. In this paper, a frame rate criterion for the optical tracking using passive markers is theoretically derived and also experimentally verified using a high-quality optical motion tracking system. Both the theoretical and the experimental results showed that the minimum frame rate is proportional to the ratio between the maximum speed of the motion and the minimum spacing between markers, and may also be predicted precisely if the proportional constant is known in advance. The inverse of the proportional constant is here defined as the tracking efficiency constant and it can be easily determined with some test measurements. Moreover, this newly defined constant can provide a new way of evaluating the tracking algorithm performance of an optical tracking system.
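The paper's criterion, that the minimum frame rate is proportional to the ratio of maximum marker speed to minimum marker spacing, with the inverse proportionality constant being the tracking efficiency constant, can be written directly. The default efficiency value below is an assumed placeholder; the paper determines it from test measurements.

```python
def min_frame_rate(v_max_mm_s: float, d_min_mm: float,
                   tracking_efficiency: float = 0.5) -> float:
    """Minimum camera frame rate (Hz) for passive-marker optical tracking:
    f_min = v_max / (C * d_min), where C is the tracking efficiency
    constant (assumed value here; calibrate with test measurements)."""
    return v_max_mm_s / (tracking_efficiency * d_min_mm)
```

For example, markers moving at up to 1 m/s with 10 mm minimum spacing and C = 0.5 would require at least 200 Hz, far above what a Nyquist argument on the motion frequency alone would suggest.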

  20. How Fast Is Your Body Motion? Determining a Sufficient Frame Rate for an Optical Motion Tracking System Using Passive Markers

    PubMed Central

    Song, Min-Ho; Godøy, Rolf Inge

    2016-01-01

    This paper addresses how to determine a sufficient frame (sampling) rate for an optical motion tracking system using passive reflective markers. When using passive markers for the optical motion tracking, avoiding identity confusion between the markers becomes a problem as the speed of motion increases, necessitating a higher frame rate to avoid a failure of the motion tracking caused by marker confusions and/or dropouts. Initially, one might believe that the Nyquist-Shannon sampling rate estimated from the assumed maximal temporal variation of a motion (i.e. a sampling rate at least twice that of the maximum motion frequency) could be the complete solution to the problem. However, this paper shows that also the spatial distance between the markers should be taken into account in determining the suitable frame rate of an optical motion tracking with passive markers. In this paper, a frame rate criterion for the optical tracking using passive markers is theoretically derived and also experimentally verified using a high-quality optical motion tracking system. Both the theoretical and the experimental results showed that the minimum frame rate is proportional to the ratio between the maximum speed of the motion and the minimum spacing between markers, and may also be predicted precisely if the proportional constant is known in advance. The inverse of the proportional constant is here defined as the tracking efficiency constant and it can be easily determined with some test measurements. Moreover, this newly defined constant can provide a new way of evaluating the tracking algorithm performance of an optical tracking system. PMID:26967900

  1. A common-path optical coherence tomography distance-sensor based surface tracking and motion compensation hand-held microsurgical tool

    NASA Astrophysics Data System (ADS)

    Zhang, Kang; Gehlbach, Peter; Kang, Jin U.

    2011-03-01

    Microsurgery requires constant attention to involuntary motion due to physiological tremor. In this work, we demonstrated a simple and compact hand-held microsurgical tool capable of surface tracking and motion compensation based on a common-path optical coherence tomography (CP-OCT) distance sensor to improve the accuracy and safety of microsurgery. The tool is miniaturized into a 15 mm diameter plastic syringe and is capable of surface tracking at better than 5 micrometer resolution. A phantom made of Intralipid layers is used to simulate a real tissue surface, and a single-fiber integrated micro-dissector serves as a surgical tip to perform tracking and accurate incision on the phantom surface. The micro-incision depth is evaluated after each operation through a fast 3D scan by the Fourier-domain OCT system. The results using the surface tracking and motion compensation tool show significant improvement compared to free-hand operation.

  2. Unstructured grids in 3D and 4D for a time-dependent interface in front tracking with improved accuracy

    SciTech Connect

    Glimm, J.; Grove, J. W.; Li, X. L.; Li, Y.; Xu, Z.

    2002-01-01

    Front tracking traces the dynamic evolution of an interface separating different materials or fluid components. In this paper, they describe three types of grid generation methods used in the front tracking method. One is the unstructured surface grid. The second is a structured grid-based reconstruction method. The third is a time-space grid, also grid based, for a conservative tracking algorithm with improved accuracy.

  3. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked and then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but also that precise timing can be employed to determine only the necessary frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.
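The timing-based frame selection idea, correcting only the frames during which the head actually moved, can be sketched as a simple thresholding of the tracked displacement per acquisition frame. The function name and threshold value are hypothetical, for illustration only.

```python
import numpy as np

def frames_needing_correction(pose_magnitudes_mm: np.ndarray,
                              threshold_mm: float = 2.0) -> np.ndarray:
    """Given the tracked head displacement magnitude (relative to the
    reference pose) for each acquisition frame, return the indices of
    frames whose motion exceeds the threshold, i.e. the only frames that
    need motion-corrected reconstruction."""
    return np.flatnonzero(pose_magnitudes_mm > threshold_mm)
```

Frames below the threshold can be reconstructed without subdivision, which is where the claimed reconstruction speed-up comes from.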

  4. 2-D-3-D frequency registration using a low-dose radiographic system for knee motion estimation.

    PubMed

    Jerbi, Taha; Burdin, Valerie; Leboucher, Julien; Stindel, Eric; Roux, Christian

    2013-03-01

    In this paper, a new method is presented to study the feasibility of estimating the pose and position of bone structures using a low-dose radiographic system, EOS (designed by the EOS-Imaging Company). This method is based on a 2-D-3-D registration of EOS bi-planar X-ray images with an EOS 3-D reconstruction. This technique is well suited to such an application thanks to the ability of EOS to simultaneously acquire frontal and sagittal radiographs, and also to produce a 3-D surface reconstruction with its attached software. In this paper, the pose and position of a bone in the radiographs are estimated through the link between the 3-D and 2-D data. This relationship is established in the frequency domain using the Fourier central slice theorem. To estimate the pose and position of the bone, we define a distance between the 3-D data and the radiographs, and use an iterative optimization approach to converge toward the best estimation. In this paper, we give the mathematical details of the method. We also present the experimental protocol and the results, which validate our approach.

  5. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model

    PubMed Central

    Song, Jin-Myoung; Cho, Jin-Hyoung

    2016-01-01

    Purpose The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scanning. Materials and Methods Fifteen dry skulls were incorporated into a motion controller which simulated four types of head motion during the CBCT scan: 2 horizontal rotations (to the right/to the left) and 2 vertical rotations (upward/downward). Each movement was triggered by remote control to occur at the start of the scan and last 1 second. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Results Rendered surface models with head motion were similar to the control model in appearance; however, the landmark identification errors were larger in the models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). A statistically significant difference in the errors between the right- and left-side landmarks was present for the left-side rotation, which was opposite in direction to the scanner rotation (P < .05). Conclusions Patient movement during a CBCT scan may cause landmark identification errors on the 3D surface model in relation to the direction of the scanner rotation. Clinicians should take this into consideration and prevent patient movement during CBCT scans, particularly horizontal movement. PMID:27065238
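The landmark identification error compared in this study is, in essence, the Euclidean distance between corresponding landmark coordinates on the motion-affected and control surface models. A small sketch with hypothetical coordinates (the numbers below are illustrative, not the study's data):

```python
import numpy as np

# hypothetical landmark coordinates in mm: rows = landmarks, cols = x, y, z
control = np.array([[12.0, 34.5, 56.1],
                    [70.2, 18.9, 44.3],
                    [55.5, 60.0, 21.7]])
motion  = np.array([[12.4, 34.1, 56.6],
                    [70.9, 19.5, 43.8],
                    [55.2, 60.8, 22.1]])

# per-landmark identification error (mm) and the mean over all landmarks
errors = np.linalg.norm(motion - control, axis=1)
mean_error = errors.mean()
```

In the study itself, such per-landmark errors (nine landmarks, five models per skull) would then be compared statistically between the control and each motion condition.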

  6. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm

    PubMed Central

    Molaei, Mehdi; Sheng, Jian

    2014-01-01

    Abstract: A better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3-D E. coli bacteria locations in a dense suspension (>10⁷ cells/ml) with submicron resolution (<0.5 µm) over a substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177
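The authors' correlation-based algorithm is not detailed in the abstract. A much simpler background-suppression step in the same spirit, shown here only as an assumption-laden sketch, is to estimate the stationary part of a hologram stack with a per-pixel temporal median and subtract it, so that only moving scatterers (the cells) survive:

```python
import numpy as np

def remove_background(frames):
    """Suppress the stationary background of a hologram stack by
    subtracting the per-pixel temporal median; moving objects remain."""
    background = np.median(frames, axis=0)
    return frames - background

# toy stack: constant background of 10 plus one bright spot that moves
# along the diagonal, one pixel per frame
frames = np.full((5, 8, 8), 10.0)
for i in range(5):
    frames[i, i, i] += 50.0

clean = remove_background(frames)
```

After subtraction, the background pixels are near zero while the moving spot retains its contrast, which is the property the paper exploits before 3D localization.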

  7. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue-motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt-sensor to measure the probe orientation. The real-time 3D visualization was realized by developing an extended version of a PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues is continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the Willis ring and the cerebellar arteries, whose blood flow is of great interest to pediatricians because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since pulsatile tissue-motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  8. Accuracy of real-time single- and multi-beat 3-d speckle tracking echocardiography in vitro.

    PubMed

    Hjertaas, Johannes Just; Fosså, Henrik; Dybdahl, Grete Lunestad; Grüner, Renate; Lunde, Per; Matre, Knut

    2013-06-01

    With little data published on the accuracy of cardiac 3-D strain measurements, we investigated the agreement between 3-D echocardiography and sonomicrometry in an in vitro model with a polyvinyl alcohol phantom. A cardiac scanner with a 3-D probe was used to acquire recordings at 15 different stroke volumes at a heart rate of 60 beats/min, and at eight different stroke volumes at a heart rate of 120 beats/min. Sonomicrometry was used as a reference, monitoring longitudinal, circumferential and radial lengths. Both single- and multi-beat acquisitions were recorded. Strain values were compared with sonomicrometer strain using linear correlation coefficients and Bland-Altman analysis. Multi-beat acquisition showed good agreement, whereas real-time images showed less agreement. The best correlation was obtained for a heart rate of 60 beats/min at a volume rate of 36.6 volumes/s.
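The Bland-Altman analysis used here quantifies agreement between two measurement methods as the bias (mean difference) and its 95% limits of agreement. A short sketch with hypothetical strain values (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Return the bias and 95% limits of agreement between two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)                  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical longitudinal strain values (%) from 3-D echo vs sonomicrometry
echo = [-18.2, -15.1, -20.4, -17.8, -16.5, -19.0]
sono = [-17.9, -15.8, -19.6, -18.1, -16.0, -19.4]

bias, (lo, hi) = bland_altman(echo, sono)
r = np.corrcoef(echo, sono)[0, 1]          # linear correlation coefficient
```

A small bias with narrow limits of agreement, together with a high correlation coefficient, is what "good agreement" means in the abstract above.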

  9. Spatial perception of motion-tracked binaural sound

    NASA Astrophysics Data System (ADS)

    Melick, Joshua B.; Algazi, V. Ralph; Duda, Richard O.

    2005-04-01

    Motion-tracked binaural sound reproduction extends conventional headphone-based binaural techniques by providing the dynamic cues to sound localization produced by voluntary head motion [V. R. Algazi, R. O. Duda, and D. M. Thompson, J. Aud. Eng. Soc. 52, 1142-1156 (2004)]. It does this by using several microphones to sample the acoustic field around a dummy head, interpolating between the microphone signals in accordance with the dynamically measured orientation of the listener's head. Although the provision of dynamic cues reduces the sensitivity of the method to characteristics of the individual listener, differences between the scattered field produced by the dummy head and the scattered field that would be produced by a particular listener distort the spatial perception. A common observation is that sound sources appear to rise in elevation when the listener turns to face them. We investigate this effect by comparing the perceived rise in elevation under three different conditions, in which recordings are made using (a) the listener's own head, (b) a KEMAR mannequin, and (c) a cylindrical head with no torso. Quantitative results are presented showing that the perceptual distortions are least for (a) and greatest for (c). [Work supported by NSF.]
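The interpolation step described above can be illustrated with a simplified sketch: given microphones at known azimuths on a ring around the dummy head, crossfade between the two microphones that bracket the tracked head orientation. The real motion-tracked binaural method also compensates inter-microphone delays; this amplitude-only version is an assumption for illustration:

```python
import numpy as np

def mtb_interpolate(mic_signals, mic_azimuths_deg, head_azimuth_deg):
    """Linearly crossfade between the two ring microphones bracketing
    the tracked head azimuth (simplified: no delay compensation)."""
    az = np.asarray(mic_azimuths_deg, float)   # sorted, ascending
    h = head_azimuth_deg % 360.0
    idx = np.searchsorted(az, h) % len(az)     # next mic on the ring
    lo = (idx - 1) % len(az)                   # previous mic on the ring
    span = (az[idx] - az[lo]) % 360.0
    w = ((h - az[lo]) % 360.0) / span if span else 0.0
    return (1 - w) * mic_signals[lo] + w * mic_signals[idx]

# four microphones at 0°, 90°, 180°, 270°; head turned to 45°
signals = np.eye(4)        # toy signals: each mic outputs a distinct impulse
out = mtb_interpolate(signals, [0, 90, 180, 270], 45.0)
```

At 45° the output is an equal blend of the 0° and 90° microphones, which is the basic behavior the tracking system drives continuously as the head moves.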

  10. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  11. Human motion tracking by temporal-spatial local gaussian process experts.

    PubMed

    Zhao, Xu; Fu, Yun; Liu, Yuncai

    2011-04-01

    Human pose estimation via motion tracking systems can be considered as a regression problem within a discriminative framework. It is always a challenging task to model the mapping from observation space to state space because of the high-dimensional characteristic of the multimodal conditional distribution. In order to build the mapping, existing techniques usually involve a large set of training samples in the learning process, but are limited in their capability to deal with multimodality. We propose, in this work, a novel online sparse Gaussian Process (GP) regression model to recover 3-D human motion in monocular videos. In particular, we exploit the fact that for a given test input, its output is mainly determined by the training samples potentially residing in its local neighborhood, defined in the unified input-output space. This leads to a local mixture of GP experts, each of which dominates a mapping behavior with a specific covariance function adapted to a local region. To handle the multimodality, we combine both temporal and spatial information to obtain two categories of local experts. The temporal and spatial experts are integrated into a seamless hybrid system, which is automatically self-initialized and robust for visual tracking of nonlinear human motion. Learning and inference are extremely efficient as all the local experts are defined online within very small neighborhoods. Extensive experiments on two real-world databases, HumanEva and PEAR, demonstrate the effectiveness of our proposed model, which significantly improves on the performance of existing models.
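The core idea of a local GP expert, in which a prediction is made from only the training samples in the test input's neighborhood, can be sketched as follows. This is a single-expert toy on 1-D data, not the paper's temporal-spatial mixture; the kernel, neighborhood size, and noise level are illustrative assumptions:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def local_gp_predict(X, y, x_star, k=10, noise=1e-4):
    """GP posterior mean at x_star using only its k nearest training points."""
    d = np.linalg.norm(X - x_star, axis=1)
    nn = np.argsort(d)[:k]                    # local neighborhood
    Xn, yn = X[nn], y[nn]
    K = rbf(Xn, Xn) + noise * np.eye(k)       # regularized local Gram matrix
    k_star = rbf(x_star[None, :], Xn)[0]
    return k_star @ np.linalg.solve(K, yn)    # posterior mean

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
pred = local_gp_predict(X, y, np.array([1.0]), k=15)
```

Because each expert works with a small k×k matrix instead of the full training set, inference stays cheap, which is the efficiency argument made in the abstract.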

  12. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs

    PubMed Central

    Delparte, D; Gates, RD; Takabayashi, M

    2015-01-01

    The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190
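One of the 3D complexity metrics mentioned above, rugosity, is commonly computed from a digital elevation model as the ratio of 3-D surface area to planar footprint area. A sketch of that computation on a gridded DEM (the triangulation scheme is a standard choice, not necessarily the study's exact method):

```python
import numpy as np

def rugosity(dem, cell=1.0):
    """Surface rugosity of a gridded DEM: 3-D surface area divided by the
    planar footprint area. Each grid cell is split into two triangles."""
    z = np.asarray(dem, float)
    area3d = 0.0
    for i in range(z.shape[0] - 1):
        for j in range(z.shape[1] - 1):
            p00 = np.array([0.0,  0.0,  z[i, j]])
            p10 = np.array([cell, 0.0,  z[i, j + 1]])
            p01 = np.array([0.0,  cell, z[i + 1, j]])
            p11 = np.array([cell, cell, z[i + 1, j + 1]])
            # triangle areas via half the cross-product magnitude
            area3d += 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
            area3d += 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
    planar = cell ** 2 * (z.shape[0] - 1) * (z.shape[1] - 1)
    return area3d / planar

flat = np.zeros((5, 5))                  # perfectly flat surface
rough = np.zeros((5, 5))
rough[::2, ::2] = 1.0                    # bumpy surface
```

A flat DEM yields a rugosity of exactly 1, and any relief pushes the value above 1, which is why the metric tracks structural complexity on reefs.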

  13. Dynamic simulation and modeling of the motion modes produced during the 3D controlled manipulation of biological micro/nanoparticles based on the AFM.

    PubMed

    Saraee, Mahdieh B; Korayem, Moharam H

    2015-08-07

    Determining the motion modes and the exact position of a particle displaced during the manipulation process is of special importance. This issue becomes even more important when the studied particles are biological micro/nanoparticles and the goals of manipulation are the transfer of these particles within body cells, the repair of cancerous cells and the delivery of medication to damaged cells. Due to the delicate nature and higher vulnerability of biological nanoparticles, obtaining the manipulation force necessary for the intended motion mode lets us prevent the sample from interlocking with or sticking to the substrate because of a weak applied force, and avoid damaging the sample through the exertion of excessive force. In this paper, the dynamic behaviors and the motion modes of biological micro/nanoparticles such as DNA, yeast, platelets and bacteria under 3D manipulation have been investigated. Since the above nanoparticles generally have a cylindrical shape, cylindrical contact models have been employed in an attempt to model more precisely the forces exerted on the nanoparticle during the manipulation process. This investigation also comprehensively models and simulates all the possible motion modes in 3D manipulation, taking into account the eccentricity of the load applied to the biological nanoparticle. The obtained results indicate that, unlike at the macroscopic scale, the sliding of the nanoparticle on the substrate at the nanoscale takes place sooner than the other motion modes, and that spinning about the vertical and transverse axes and rolling of the nanoparticle occur later than the other motion modes. The simulation results also indicate that the applied force necessary for the onset of nanoparticle movement, and the resulting motion mode, depend on the size and aspect ratio of the nanoparticle.

  14. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    SciTech Connect

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  15. MO-F-CAMPUS-J-02: Commissioning of Radiofrequency Tracking for Gated SBRT of the Liver Using Novel Motion System

    SciTech Connect

    James, J; Cetnar, A; Nguyen, V; Wang, B

    2015-06-15

    Purpose: Tracking soft tissue targets has recently been approved as a new application of the Calypso radiofrequency tracking system allowing for gated treatment of the liver based on the motion of the target volume itself. As part of the commissioning process, an end-to-end test was performed using a 3D diode array and 6D motion platform to verify the dosimetric accuracy and establish the workflow of gated SBRT treatment of the liver using Calypso. Methods: A 4DCT scan of the ScandiDos Delta4 phantom was acquired using the HexaMotion motion platform to simulate realistic breathing motion. A VMAT plan was optimized on the end of inspiration phase of the 4DCT scan and delivered to the Delta4 phantom using the Varian TrueBeam. The treatment beam was gated by Calypso to deliver dose at the end of inspiration. The expected dose was compared to the delivered dose using gamma analysis. In addition, gating limits were investigated to determine how large the gating range can be while still maintaining dosimetric accuracy. Results: The 3%/3mm and 2%/2mm gamma pass rate for the gated treatment delivery was 100% and 98.4%, respective
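The gamma analysis used for the pass rates above combines a dose-difference criterion with a distance-to-agreement criterion. A simplified 1-D global gamma sketch (the Delta4 evaluation is 3-D; the profiles below are hypothetical):

```python
import numpy as np

def gamma_pass_rate(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
    """Global 1-D gamma analysis: fraction of reference points with
    gamma <= 1 for the given dose (fraction of max) / distance (mm) criteria."""
    ref, meas, x = map(np.asarray, (ref, meas, x))
    dmax = ref.max()
    passed = 0
    for xi, di in zip(x, ref):
        dose_term = ((meas - di) / (dose_tol * dmax)) ** 2
        dist_term = ((x - xi) / dist_tol) ** 2
        gamma = np.sqrt(dose_term + dist_term).min()  # best match anywhere
        passed += gamma <= 1.0
    return passed / len(ref)

x = np.linspace(0.0, 100.0, 101)              # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)       # reference dose profile
meas = np.exp(-((x - 51.0) / 20.0) ** 2)      # delivery shifted by 1 mm
rate = gamma_pass_rate(ref, meas, x)          # 3%/3 mm criteria
```

A 1 mm spatial shift is well inside the 3 mm distance criterion, so every point passes; tightening the criteria (e.g. 2%/2 mm) shrinks the tolerance ellipse and lowers the pass rate, which mirrors the 100% versus 98.4% results quoted above.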