Science.gov

Sample records for 3d motion tracking

  1. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas; they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  2. Tracking 3-D body motion for docking and robot control

    NASA Technical Reports Server (NTRS)

    Donath, M.; Sorensen, B.; Yang, G. B.; Starr, R.

    1987-01-01

    An advanced method of tracking three-dimensional motion of bodies has been developed. This system has the potential to dynamically characterize machine and other structural motion, even in the presence of structural flexibility, thus facilitating closed loop structural motion control. The system's operation is based on the concept that the intersection of three planes defines a point. Three rotating planes of laser light, fixed and moving photovoltaic diode targets, and a pipe-lined architecture of analog and digital electronics are used to locate multiple targets whose number is only limited by available computer memory. Data collection rates are a function of the laser scan rotation speed and are currently selectable up to 480 Hz. The tested performance on a preliminary prototype designed for 0.1 in accuracy (for tracking human motion) at a 480 Hz data rate includes a worst-case resolution of 0.8 mm (0.03 inches), a repeatability of ±0.635 mm (±0.025 inches), and an absolute accuracy of ±2.0 mm (±0.08 inches) within an eight cubic meter volume, with all results applicable at the 95 percent level of confidence along each coordinate region. The full six degrees of freedom of a body can be computed by attaching three or more target detectors to the body of interest.
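
    The core geometric idea above, that the intersection of three planes defines a point, reduces to solving a 3x3 linear system. A minimal numpy sketch of that step follows (illustrative only; it does not model the record's laser scanners or pipelined electronics):

```python
import numpy as np

def plane_intersection(normals, offsets):
    """Return the point where three planes n_i . x = d_i intersect.

    normals : (3, 3) array, one plane normal per row
    offsets : (3,) array of plane offsets d_i
    Raises numpy.linalg.LinAlgError if the planes are (near) parallel.
    """
    N = np.asarray(normals, dtype=float)
    d = np.asarray(offsets, dtype=float)
    return np.linalg.solve(N, d)

# Example: three mutually orthogonal planes meeting at (1, 2, 3)
normals = np.eye(3)
offsets = np.array([1.0, 2.0, 3.0])
print(plane_intersection(normals, offsets))  # -> [1. 2. 3.]
```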

  3. LV motion tracking from 3D echocardiography using textural and structural information.

    PubMed

    Myronenko, Andriy; Song, Xubo; Sahn, David J

    2007-01-01

    Automated motion reconstruction of the left ventricle (LV) from 3D echocardiography provides insight into myocardium architecture and function. Low image quality and artifacts make 3D ultrasound image processing a challenging problem. We introduce a LV tracking method, which combines textural and structural information to overcome the image quality limitations. Our method automatically reconstructs the motion of the LV contour (endocardium and epicardium) from a sequence of 3D ultrasound images. PMID:18044597

  4. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed using fluoroscopic image guidance, is gaining increasing importance. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 mm ± 0.4 mm and an average 3-D tracking error of 0.8 mm ± 0.5 mm. These results demonstrate that model-based motion compensation based on 2-D/3-D registration is both feasible and accurate.
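
    The reconstruction step described above recovers 3-D catheter points from two calibrated views. One standard way to do this is linear (DLT) triangulation; the sketch below is a hedged illustration under that assumption, with made-up projection matrices rather than real C-arm calibrations:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : (3, 4) camera projection matrices
    x1, x2 : (2,) image points of the same 3D point in each view
    Returns the 3D point in non-homogeneous coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Quick synthetic check with two toy "cameras" (identity intrinsics)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                     # looking down +z
P2 = np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])   # shifted 100 mm
X_true = np.array([20.0, 30.0, 500.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate_point(P1, P2, x1, x2))  # ~[20, 30, 500]
```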

  5. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects. PMID:19505502

  6. Tracking 3D Picometer-Scale Motions of Single Nanoparticles with High-Energy Electron Probes

    PubMed Central

    Ogawa, Naoki; Hoshisashi, Kentaro; Sekiguchi, Hiroshi; Ichiyanagi, Kouhei; Matsushita, Yufuku; Hirohata, Yasuhisa; Suzuki, Seiichi; Ishikawa, Akira; Sasaki, Yuji C.

    2013-01-01

    We observed the high-speed anisotropic motion of an individual gold nanoparticle in 3D at the picometer scale using a high-energy electron probe. Diffracted electron tracking (DET) using the electron back-scattered diffraction (EBSD) patterns of labeled nanoparticles under wet-SEM allowed us to measure the time-resolved 3D motion of individual nanoparticles in aqueous conditions with very high accuracy. The highly precise DET data corresponded to 3D anisotropic log-normal Gaussian distributions over time at the millisecond scale. PMID:23868465

  7. Towards robust 3D visual tracking for motion compensation in beating heart surgery.

    PubMed

    Richa, Rogério; Bó, Antônio P L; Poignet, Philippe

    2011-06-01

    In the context of minimally invasive cardiac surgery, active vision-based motion compensation schemes have been proposed for mitigating problems related to physiological motion. However, robust and accurate visual tracking remains a difficult task. The purpose of this paper is to present a robust visual tracking method that estimates the 3D temporal and spatial deformation of the heart surface using stereo endoscopic images. The novelty is the combination of a visual tracking method based on a Thin-Plate Spline (TPS) model for representing the heart surface deformations with a temporal heart motion model based on a time-varying dual Fourier series for overcoming tracking disturbances or failures. The considerable improvements in tracking robustness facing specular reflections and occlusions are demonstrated through experiments using images of in vivo porcine and human beating hearts. PMID:21277821

  8. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, where we generate a personalized 3D organ model from high quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessel and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we resolve the candidate that best matches the 3D US images according to vessel centerline and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to respiration phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D positions are then inferred based on respiration phase. Testing our method on real patient data, we found that the 3D position accuracy is within 3.79 mm and the processing time during tracking is 5.4 ms.

  9. Ultrasonic diaphragm tracking for cardiac interventional navigation on 3D motion compensated static roadmaps

    NASA Astrophysics Data System (ADS)

    Timinger, Holger; Kruger, Sascha; Dietmayer, Klaus; Borgert, Joern

    2005-04-01

    In this paper, a novel approach to cardiac interventional navigation on 3D motion-compensated static roadmaps is presented. Current coronary interventions, e.g. percutaneous transluminal coronary angioplasties, are performed using 2D X-ray fluoroscopy. This comes with well-known drawbacks such as radiation exposure, use of contrast agent, and limited visualization, e.g. overlap and foreshortening, due to projection imaging. In the presented approach, the interventional device, i.e. the catheter, is tracked using an electromagnetic tracking system (MTS). The catheter's position is then mapped into a static 3D image of the volume of interest (VOI) by means of an affine registration. In order to compensate for respiratory motion of the catheter with respect to the static image, a parameterized affine motion model is used which is driven by a respiratory sensor signal. This signal is derived from ultrasonic diaphragm tracking. Motion compensation for the heartbeat is done using ECG gating. The methods are validated using a heart- and diaphragm-phantom. The mean displacement of the catheter due to the simulated organ motion decreases from approximately 9 mm to 1.3 mm. This result indicates that the proposed method is able to reconstruct the catheter position within the VOI accurately and that it can help to overcome drawbacks of current interventional procedures.
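
    As a rough illustration of a respiratory-signal-driven affine motion model, the sketch below blends two hypothetical calibration transforms (end-exhale and end-inhale) according to a normalized respiratory signal. This is only a crude stand-in for the parameterized model in the record; linearly blending affine matrices is an approximation:

```python
import numpy as np

def compensate(p_tracked, A_exhale, A_inhale, s):
    """Map a tracked catheter position into the static roadmap frame.

    p_tracked : (3,) position reported by the tracking system
    A_exhale, A_inhale : (4, 4) homogeneous affine transforms calibrated
        at end-exhale and end-inhale (hypothetical calibration step)
    s : respiratory signal normalized to [0, 1] (0 = exhale, 1 = inhale)
    """
    A = (1.0 - s) * A_exhale + s * A_inhale   # simple linear blend (approximate)
    p_h = np.append(np.asarray(p_tracked, dtype=float), 1.0)
    return (A @ p_h)[:3]

# Toy usage: a pure superior-inferior shift of 9 mm between exhale and inhale
A_ex = np.eye(4)
A_in = np.eye(4); A_in[2, 3] = -9.0
print(compensate([10.0, 20.0, 30.0], A_ex, A_in, s=0.5))  # z shifted by -4.5 mm
```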

  10. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking.

    PubMed

    Dettmer, Simon L; Keyser, Ulrich F; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces. PMID:24593372
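
    The mean-squared-displacement analysis described above can be illustrated with a short numpy sketch: build an MSD-vs-lag curve from a 1-D trajectory and estimate a diffusion coefficient from its slope (MSD = 2Dτ in one dimension). The synthetic trajectory and parameters are assumptions for demonstration only:

```python
import numpy as np

def msd_1d(x, dt, max_lag):
    """Mean squared displacement of a 1D trajectory x sampled every dt seconds."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
    return lags * dt, msd

def diffusion_coefficient(taus, msd):
    """Fit MSD = 2*D*tau (1D); the offset absorbs static tracking noise."""
    slope, _offset = np.polyfit(taus, msd, 1)
    return slope / 2.0

# Synthetic free diffusion check: D = 0.5 um^2/s, dt = 10 ms
rng = np.random.default_rng(0)
D_true, dt = 0.5, 0.01
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D_true * dt), size=20000))
taus, msd = msd_1d(x, dt, max_lag=20)
print(diffusion_coefficient(taus, msd))  # ~0.5
```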

  11. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  12. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  13. 3D motion tracking of the heart using Harmonic Phase (HARP) isosurfaces

    NASA Astrophysics Data System (ADS)

    Soliman, Abraam S.; Osman, Nael F.

    2010-03-01

    Tags are non-invasive features induced in the heart muscle that enable the tracking of heart motion. Each tag line, in fact, corresponds to a 3D tag surface that deforms with the heart muscle during the cardiac cycle. Tracking of tag surface deformation is useful for the analysis of left ventricular motion. Cardiac material markers (Kerwin et al, MIA, 1997) can be obtained from the intersections of orthogonal surfaces, which can be reconstructed from short- and long-axis tagged images. The proposed method uses the Harmonic Phase (HARP) method to track tag lines corresponding to a specific harmonic phase value, and the grid tag surfaces are then reconstructed by a Delaunay triangulation-based interpolation of the sparse tag points. Having three different tag orientations from short- and long-axis images, the proposed method showed the deformation of 3D tag surfaces during the cardiac cycle. Previous work on tag surface reconstruction was restricted to the "dark" tag lines; the use of HARP as proposed, however, enables the reconstruction of isosurfaces based on their harmonic phase values. The use of HARP also provides a fast and accurate way to identify and track tag lines, and hence to generate the surfaces.
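
    The Delaunay-based interpolation of sparse tag points can be sketched with scipy.interpolate.griddata, which triangulates the scattered points internally and interpolates within each triangle. The points below are synthetic stand-ins for HARP-tracked tag points:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical sparse tag points tracked to one cardiac phase:
# (x, y) in-plane coordinates and z = out-of-plane surface height (mm).
rng = np.random.default_rng(1)
xy = rng.uniform(-30, 30, size=(200, 2))
z = 0.02 * xy[:, 0] ** 2 - 0.01 * xy[:, 1] ** 2    # a smooth test surface

# Dense grid on which to reconstruct the tag surface.
xi, yi = np.meshgrid(np.linspace(-25, 25, 101), np.linspace(-25, 25, 101))

# griddata triangulates the scattered points (Delaunay) and
# interpolates linearly inside each triangle.
zi = griddata(xy, z, (xi, yi), method="linear")
print(zi.shape)  # (101, 101); NaN outside the convex hull of the points
```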

  14. Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models

    PubMed Central

    Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method
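
    The first step above, detecting tag intersections with a Gabor filter bank, can be illustrated with a small numpy/scipy sketch that builds complex Gabor kernels for the two tag orientations and keeps the maximum magnitude response. The frequency, orientations and test image are assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma, size=21):
    """Complex 2D Gabor kernel with spatial frequency `freq` (cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(2j * np.pi * freq * xr)
    return envelope * carrier

def gabor_bank_response(image, freq=1 / 6.0, thetas=(0.0, np.pi / 2), sigma=3.0):
    """Maximum Gabor magnitude over orientations (e.g., the two tag directions)."""
    responses = [
        np.abs(fftconvolve(image, gabor_kernel(freq, t, sigma), mode="same"))
        for t in thetas
    ]
    return np.max(responses, axis=0)

# Synthetic grid-tagged image: product of two sinusoidal tag patterns.
yy, xx = np.mgrid[0:128, 0:128]
tagged = np.cos(2 * np.pi * xx / 6.0) * np.cos(2 * np.pi * yy / 6.0)
print(gabor_bank_response(tagged).shape)  # (128, 128)
```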

  15. Neural network techniques for invariant recognition and motion tracking of 3-D objects

    SciTech Connect

    Hwang, J.N.; Tseng, Y.H.

    1995-12-31

    Invariant recognition and motion tracking of 3-D objects under partial object viewing are difficult tasks. In this paper, we introduce a new neural network solution that is robust to noise corruption and partial viewing of objects. This method directly utilizes the acquired range data and requires no feature extraction. In the proposed approach, the object is first parametrically represented by a continuous distance transformation neural network (CDTNN) which is trained by the surface points of the exemplar object. When later presented with the surface points of an unknown object, this parametric representation allows the mismatch information to back-propagate through the CDTNN to gradually determine the best similarity transformation (translation and rotation) of the unknown object. The mismatch can be directly measured in the reconstructed representation domain between the model and the unknown object.

  16. Model-based lasso catheter tracking in monoplane fluoroscopy for 3D breathing motion compensation during EP procedures

    NASA Astrophysics Data System (ADS)

    Liao, Rui

    2010-02-01

    Radio-frequency catheter ablation (RFCA) of the pulmonary veins (PVs) attached to the left atrium (LA) is usually carried out under fluoroscopy guidance. Overlay of detailed anatomical structures via 3-D CT and/or MR volumes onto the fluoroscopy helps visualization and navigation in electrophysiology (EP) procedures. Unfortunately, respiratory motion may impair the utility of static overlay of the volume with fluoroscopy for catheter navigation. In this paper, we propose a B-spline based method for tracking the circumferential catheter (lasso catheter) in monoplane fluoroscopy. The tracked motion can be used for the estimation of the 3-D trajectory of breathing motion and for subsequent motion compensation. A lasso catheter is typically used during EP procedures and is pushed against the ostia of the PVs to be ablated. Hence this method does not require additional instruments, and achieves motion estimation right at the site of ablation. The performance of the proposed tracking algorithm was evaluated on 340 monoplane frames with an average error of 0.68 ± 0.36 mm. Our contributions in this work are twofold. First and foremost, we show how to design an effective, practical, and workflow-friendly 3-D motion compensation scheme for EP procedures in a monoplane setup. In addition, we develop an efficient and accurate method for model-based tracking of the circumferential lasso catheter in low-dose EP fluoroscopy.

  17. Performance of ultrasound based measurement of 3D displacement using a curvilinear probe for organ motion tracking

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Evans, Phillip M.; Symonds-Tayler, J. Richard N.

    2007-09-01

    Three-dimensional (3D) soft tissue tracking is of interest for monitoring organ motion during therapy. Our goal is to assess the tracking performance of a curvilinear 3D ultrasound probe in terms of the accuracy and precision of measured displacements. The first aim was to examine the depth dependence of the tracking performance. This is of interest because the spatial resolution varies with distance from the elevational focus and because the curvilinear geometry of the transducer causes the spatial sampling frequency to decrease with depth. Our second aim was to assess tracking performance as a function of the spatial sampling setting (low, medium or high sampling). These settings are incorporated onto 3D ultrasound machines to allow the user to control the trade-off between spatial sampling and temporal resolution. Volume images of a speckle-producing phantom were acquired before and after the probe had been moved by a known displacement (1, 2 or 8 mm). This allowed us to assess the optimum performance of the tracking algorithm, in the absence of motion. 3D speckle tracking was performed using 3D cross-correlation and sub-voxel displacements were estimated. The tracking performance was found to be best for axial displacements and poorest for elevational displacements. In general, the performance decreased with depth, although the nature of the depth dependence was complex. Under certain conditions, the tracking performance was sufficient to be useful for monitoring organ motion. For example, at the highest sampling setting, for a 2 mm displacement, good accuracy and precision (an error and standard deviation of <0.4 mm) were observed at all depths and for all directions of displacement. The trade-off between spatial sampling, temporal resolution and size of the field of view (FOV) is discussed.
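
    A hedged sketch of 3D speckle tracking by cross-correlation with sub-voxel refinement is shown below. It uses plain zero-mean cross-correlation (scipy.signal.correlate) and a parabolic peak fit per axis; window sizes and the synthetic volumes are illustrative, not the study's settings:

```python
import numpy as np
from scipy.signal import correlate

def track_3d(reference, deformed, center, half=8, search=12):
    """Estimate the displacement (in voxels) of a block around `center`,
    with a parabolic sub-voxel refinement along each axis."""
    c = np.asarray(center)
    ref_block = reference[tuple(slice(ci - half, ci + half + 1) for ci in c)]
    srch_block = deformed[tuple(slice(ci - search, ci + search + 1) for ci in c)]
    # Zero-mean the blocks so the correlation peak reflects speckle pattern match.
    cc = correlate(srch_block - srch_block.mean(),
                   ref_block - ref_block.mean(), mode="valid")
    peak = np.array(np.unravel_index(np.argmax(cc), cc.shape), dtype=float)
    # Parabolic sub-voxel refinement along each axis (skip peaks on the border).
    for ax in range(3):
        i = int(peak[ax])
        if 0 < i < cc.shape[ax] - 1:
            idx = [int(p) for p in peak]
            idx[ax] = i - 1; y0 = cc[tuple(idx)]
            idx[ax] = i;     y1 = cc[tuple(idx)]
            idx[ax] = i + 1; y2 = cc[tuple(idx)]
            denom = y0 - 2 * y1 + y2
            if denom != 0:
                peak[ax] += 0.5 * (y0 - y2) / denom
    return peak - (search - half)

# Synthetic check: shift a random speckle volume by (1, 2, 3) voxels
rng = np.random.default_rng(6)
vol = rng.normal(size=(64, 64, 64))
shifted = np.roll(vol, shift=(1, 2, 3), axis=(0, 1, 2))
print(track_3d(vol, shifted, center=(32, 32, 32)))  # ~[1, 2, 3]
```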

  18. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
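
    The abstraction idea, named tracked objects whose poses may come from whichever system currently sees them, each mapped into a common world frame, can be sketched as below. Class and method names are hypothetical and do not correspond to the MetaTracker API:

```python
import time
import numpy as np

class FusedTracker:
    """Merge pose updates from several tracking systems into one namespace.

    Each source registers a 4x4 transform from its native frame to a shared
    world frame; consumers only ever see world-frame poses keyed by object name.
    (Illustrative sketch only; not the MetaTracker software itself.)
    """

    def __init__(self):
        self._to_world = {}   # source name -> 4x4 transform into the world frame
        self._latest = {}     # object name -> (timestamp, 4x4 world pose)

    def register_source(self, source, to_world):
        self._to_world[source] = np.asarray(to_world, dtype=float)

    def update(self, source, obj, pose):
        """Accept a 4x4 pose from `source` and store it in world coordinates."""
        world_pose = self._to_world[source] @ np.asarray(pose, dtype=float)
        stamp = time.monotonic()
        prev = self._latest.get(obj)
        if prev is None or stamp >= prev[0]:   # keep the most recent report
            self._latest[obj] = (stamp, world_pose)

    def pose(self, obj):
        return self._latest[obj][1]

# Usage: two systems reporting the same prop as a trainee moves between areas
fused = FusedTracker()
fused.register_source("optical_room_a", np.eye(4))
fused.register_source("inertial_suit", np.eye(4))   # calibration transform assumed
fused.update("optical_room_a", "rifle", np.eye(4))
print(fused.pose("rifle")[:3, 3])
```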

  19. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy which would facilitate the analysis of the dynamic motion during facial animations. PMID:23218511

  20. Prediction of 3D internal organ position from skin surface motion: results from electromagnetic tracking studies

    NASA Astrophysics Data System (ADS)

    Wong, Kenneth H.; Tang, Jonathan; Zhang, Hui J.; Varghese, Emmanuel; Cleary, Kevin R.

    2005-04-01

    An effective treatment method for organs that move with respiration (such as the lungs, pancreas, and liver) is a major goal of radiation medicine. In order to treat such tumors, we need (1) real-time knowledge of the current location of the tumor, and (2) the ability to adapt the radiation delivery system to follow this constantly changing location. In this study, we used electromagnetic tracking in a swine model to address the first challenge, and to determine if movement of a marker attached to the skin could accurately predict movement of an internal marker embedded in an organ. Under approved animal research protocols, an electromagnetically tracked needle was inserted into a swine liver and an electromagnetically tracked guidewire was taped to the abdominal skin of the animal. The Aurora (Northern Digital Inc., Waterloo, Canada) electromagnetic tracking system was then used to monitor the position of both of these sensors every 40 msec. Position readouts from the sensors were then tested to see if any of the movements showed correlation. The strongest correlations were observed between external anterior-posterior motion and internal inferior-superior motion, with many other axes exhibiting only weak correlation. We also used these data to build a predictive model of internal motion by taking segments from the data and using them to derive a general functional relationship between the internal needle and the external guidewire. For the axis with the strongest correlation, this model enabled us to predict internal organ motion to within 1 mm.
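
    A minimal sketch of the kind of predictive model described, assuming a simple linear relationship between external anterior-posterior skin motion and internal superior-inferior motion, is shown below with synthetic signals standing in for the electromagnetic tracking data:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 60, 0.04)                         # 40 ms samples over 60 s
external_ap = 4.0 * np.sin(2 * np.pi * t / 4.5)    # skin marker, AP motion (mm)
internal_si = 2.1 * external_ap + 0.8 + rng.normal(0, 0.3, t.size)  # needle, SI (mm)

# Fit a linear model on the first 30 s, predict on the remainder.
train = t < 30
slope, intercept = np.polyfit(external_ap[train], internal_si[train], 1)
pred = slope * external_ap[~train] + intercept

rmse = np.sqrt(np.mean((pred - internal_si[~train]) ** 2))
print(f"prediction RMSE: {rmse:.2f} mm")
```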

  1. DLP technology application: 3D head tracking and motion correction in medical brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Wilm, Jakob; Paulsen, Rasmus R.; Højgaard, Liselotte; Larsen, Rasmus

    2014-03-01

    In this paper we present a novel sensing system, robust Near-infrared Structured Light Scanning (NIRSL), for three-dimensional human model scanning. Human model scanning has long been a challenging task because of the variety of hair and dress appearances and because of body motion. Previous structured light scanning methods typically emitted visible coded light patterns onto static and opaque objects to establish correspondence between a projector and a camera for triangulation. The success of these methods relies on scanning objects with surfaces that reflect visible light well, such as plaster or light-colored cloth. For human model scanning, by contrast, conventional methods suffer from a low signal-to-noise ratio caused by the low contrast of visible light over the human body. The proposed robust NIRSL, implemented with near-infrared light, is capable of recovering dark surfaces, such as hair, dark jeans and black shoes, under visible illumination. Moreover, a successful structured light scan relies on the assumption that the subject is static during scanning; because of body motion, this assumption is hard to maintain when scanning human models. The proposed sensing system, by utilizing the new near-infrared-capable high-speed LightCrafter DLP projector, is robust to motion and provides an accurate, high-resolution three-dimensional point cloud, making our system more efficient and robust for human model reconstruction. Experimental results demonstrate that our system scans real human models with various dark hair, jeans and shoes effectively and efficiently, is robust to human body motion, and produces an accurate, high-resolution 3D point cloud.

  2. Atmospheric Motion Vectors from INSAT-3D: Initial quality assessment and its impact on track forecast of cyclonic storm NANAUK

    NASA Astrophysics Data System (ADS)

    Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham

    2016-03-01

    The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder and is placed at 82°E over the Indian Ocean region. Advances in the retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for better understanding of the different tropical atmospheric processes over this region. The retrieval technique and accuracy of one such parameter, Atmospheric Motion Vectors (AMV), have improved significantly with the availability of improved spatial resolution data along with more options of spectral channels in the INSAT-3D imager. The present work is mainly focused on providing brief descriptions of INSAT-3D data and the AMV derivation process using these data. It also discusses the initial quality assessment of INSAT-3D AMVs for a period of six months, from 01 February 2014 to 31 July 2014, against other independent observations: i) Meteosat-7 AMVs available over this region, ii) in-situ radiosonde wind measurements, iii) cloud-tracked winds from the Multi-angle Imaging Spectro-Radiometer (MISR) and iv) numerical model analysis. It is observed from this study that the quality of the newly derived INSAT-3D AMVs is comparable with the two existing versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, INSAT-3D AMVs are assimilated in the Weather Research and Forecasting (WRF) model, and it is found that the assimilation of the newly derived AMVs helped reduce track forecast errors for the recent cyclonic storm NANAUK over the Arabian Sea. Although the present study is limited to one case, it provides some guidance to operational agencies for implementation of this new AMV dataset for future applications in Numerical Weather Prediction (NWP) over the south Asia region.

  3. Prospective motion correction of 3D echo-planar imaging data for functional MRI using optical tracking

    PubMed Central

    Todd, Nick; Josephs, Oliver; Callaghan, Martina F.; Lutti, Antoine; Weiskopf, Nikolaus

    2015-01-01

    We evaluated the performance of an optical camera based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field-of-view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2 × 2 × 3 factorial design with the following factors: PMC on or off; 3.0 mm or 1.5 mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5 mm resolution comparing PMC on vs PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value for each image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no motion conditions and improved the time series temporal signal-to-noise by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p < 0.001, uncorrected) in both task-based experiments were comparable for the no motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow motion cases. The PMC system is a robust solution to decrease the motion sensitivity of multi-shot 3D EPI sequences and thereby overcome one of the main roadblocks to their widespread use in fMRI studies. PMID:25783205
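
    The 30% to 40% improvement quoted above refers to temporal signal-to-noise (tSNR), which is conventionally computed voxel-wise as the temporal mean divided by the temporal standard deviation of the (realigned) time series. A minimal sketch, with random data in place of real EPI volumes:

```python
import numpy as np

def temporal_snr(timeseries_4d, eps=1e-6):
    """timeseries_4d : (X, Y, Z, T) fMRI data after realignment/detrending.
    Returns an (X, Y, Z) map of voxel-wise temporal SNR."""
    mean = timeseries_4d.mean(axis=-1)
    std = timeseries_4d.std(axis=-1)
    return mean / (std + eps)

# Example on random data shaped like a small EPI volume series
vol = np.random.default_rng(5).normal(100.0, 2.0, size=(32, 32, 20, 200))
print(np.median(temporal_snr(vol)))  # ~50 for these synthetic numbers
```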

  4. Bi-planar 2D-to-3D registration in Fourier domain for stereoscopic x-ray motion tracking

    NASA Astrophysics Data System (ADS)

    Zosso, Dominique; Le Callennec, Benoît; Bach Cuadra, Meritxell; Aminian, Kamiar; Jolles, Brigitte M.; Thiran, Jean-Philippe

    2008-03-01

    In this paper we present a new method to track bone movements in stereoscopic X-ray image series of the knee joint. The method is based on two different X-ray image sets: a rotational series of acquisitions of the still subject knee that allows the tomographic reconstruction of the three-dimensional volume (model), and a stereoscopic image series of orthogonal projections as the subject performs movements. Tracking the movements of bones throughout the stereoscopic image series means to determine, for each frame, the best pose of every moving element (bone) previously identified in the 3D reconstructed model. The quality of a pose is reflected in the similarity between its theoretical projections and the actual radiographs. We use direct Fourier reconstruction to approximate the three-dimensional volume of the knee joint. Then, to avoid the expensive computation of digitally rendered radiographs (DRR) for pose recovery, we develop a corollary to the 3-dimensional central-slice theorem and reformulate the tracking problem in the Fourier domain. Under the hypothesis of parallel X-ray beams, the heavy 2D-to-3D registration of projections in the signal domain is replaced by efficient slice-to-volume registration in the Fourier domain. Focusing on rotational movements, the translation-relevant phase information can be discarded and we only consider scalar Fourier amplitudes. The core of our motion tracking algorithm can be implemented as a classical frame-wise slice-to-volume registration task. Results on both synthetic and real images confirm the validity of our approach.

  5. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between lenses in the 3D camera and the intrinsic parameters are known. The proposed method reduces the camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method. In conjunction with the matched features, we obtain disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via Essential matrices, which are computed from the Fundamental matrix estimated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation by d-motion; this is required because the camera motion obtained from the Essential matrix is only determined up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D cameras, and surveillance systems that need not only depth information but also camera motion parameters in real time.
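
    The pose-recovery step, estimating the Fundamental matrix with the normalized 8-point algorithm plus RANSAC, forming the Essential matrix from known intrinsics, and decomposing it, can be sketched with OpenCV as below. The point arrays and intrinsic matrix K are placeholders; translation is recovered only up to scale, which is why the method above resolves scale separately from the disparity images:

```python
import numpy as np
import cv2

def relative_pose(pts1, pts2, K):
    """Relative camera pose from matched image points (Nx2 float arrays).

    Uses RANSAC Fundamental-matrix estimation, then E = K^T F K, then a
    cheirality-checked decomposition into rotation R and unit translation t.
    """
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    E = K.T @ F @ K
    mask = inliers.ravel().astype(bool)
    _, R, t, _ = cv2.recoverPose(E, pts1[mask], pts2[mask], K)
    return R, t
```

    In practice pts1 and pts2 would be the KLT-tracked feature coordinates from one lens in two frames, and K the calibrated intrinsic matrix of that lens.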

  6. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose

    NASA Astrophysics Data System (ADS)

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-01

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that, using our hybrid approach, a root-mean-square tracking error smaller than 1.2 mm or 1.5 mm can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required with the
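
    The estimation step above picks, on the MV projection line, the point closest to the first-principal-component line segment. A small geometric sketch of that computation (pure numpy, names illustrative, degenerate parallel geometry not handled):

```python
import numpy as np

def closest_point_on_ray_to_segment(ray_origin, ray_dir, seg_a, seg_b):
    """Point on the line {ray_origin + s*ray_dir} nearest to segment [seg_a, seg_b].

    Minimizes ||(o + s*d) - (a + u*(b - a))||^2 over s in R and u in [0, 1].
    """
    o = np.asarray(ray_origin, float)
    d = np.asarray(ray_dir, float)
    a = np.asarray(seg_a, float)
    e = np.asarray(seg_b, float) - a
    w = o - a
    # Normal equations for the unconstrained (s, u) minimizer.
    A = np.array([[d @ d, -(d @ e)], [-(d @ e), e @ e]])
    rhs = np.array([-(d @ w), e @ w])
    s, u = np.linalg.solve(A, rhs)
    u = np.clip(u, 0.0, 1.0)                        # stay on the segment
    s = ((a + u * e) - o) @ d / (d @ d)             # re-solve s for the clamped u
    return o + s * d

# Toy check: ray along x through the origin, segment offset in y
print(closest_point_on_ray_to_segment([0, 0, 0], [1, 0, 0],
                                       [2, 1, 0], [4, 1, 0]))  # ~[2, 0, 0] .. [4, 0, 0]
```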

  7. Validation of 3D motion tracking of pulmonary lesions using CT fluoroscopy images for robotically assisted lung biopsy

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Fichtinger, Gabor; Taylor, Russell H.; Cleary, Kevin R.

    2005-04-01

    As recently proposed in our previous work, the two-dimensional CT fluoroscopy image series can be used to track the three-dimensional motion of a pulmonary lesion. The assumption is that the lung tissue is locally rigid, so that the real-time CT fluoroscopy image can be combined with a preoperative CT volume to infer the position of the lesion when the lesion is not in the CT fluoroscopy imaging plane. In this paper, we validate the basic properties of our tracking algorithm using a synthetic four-dimensional lung dataset. The motion tracking result is compared to the ground truth of the four-dimensional dataset. The optimal parameter configurations of the algorithm are discussed. The robustness and accuracy of the tracking algorithm are presented. The error analysis shows that the local rigidity error is the principle component of the tracking error. The error increases as the lesion moves away from the image region being registered. Using the synthetic four-dimensional lung data, the average tracking error over a complete respiratory cycle is 0.8 mm for target lesions inside the lung. As a result, the motion tracking algorithm can potentially alleviate the effect of respiratory motion in CT fluoroscopy-guided lung biopsy.

  8. Creation of 3D digital anthropomorphic phantoms which model actual patient non-rigid body motion as determined from MRI and position tracking studies of volunteers

    NASA Astrophysics Data System (ADS)

    Connolly, C. M.; Konik, A.; Dasari, P. K. R.; Segars, P.; Zheng, S.; Johnson, K. L.; Dey, J.; King, M. A.

    2011-03-01

    Patient motion can cause artifacts, which can lead to difficulty in interpretation. The purpose of this study is to create 3D digital anthropomorphic phantoms which model the location of the structures of the chest and upper abdomen of human volunteers undergoing a series of clinically relevant motions. The 3D anatomy is modeled using the XCAT phantom and based on MRI studies. The NURBS surfaces of the XCAT are interactively adapted to fit the MRI studies. A detailed XCAT phantom is first developed from an EKG triggered Navigator acquisition composed of sagittal slices with a 3 x 3 x 3 mm voxel dimension. Rigid body motion states are then acquired at breath-hold as sagittal slices partially covering the thorax, centered on the heart, with 9 mm gaps between them. For non-rigid body motion requiring greater sampling, modified Navigator sequences covering the entire thorax with 3 mm gaps between slices are obtained. The structures of the initial XCAT are then adapted to fit these different motion states. Simultaneous to MRI imaging, the positions of multiple reflective markers on stretchy bands about the volunteer's chest and abdomen are optically tracked in 3D via stereo imaging. These phantoms with combined position tracking will be used to investigate both imaging-data-driven and motion-tracking strategies to estimate and correct for patient motion. Our initial application will be to cardiac perfusion SPECT imaging, where the XCAT phantoms will be used to create patient activity and attenuation distributions for each volunteer with corresponding motion tracking data from the markers on the body-surface. Monte Carlo methods will then be used to simulate SPECT acquisitions, which will be used to evaluate various motion estimation and correction strategies.

  9. The birth of a dinosaur footprint: Subsurface 3D motion reconstruction and discrete element simulation reveal track ontogeny

    PubMed Central

    2014-01-01

    Locomotion over deformable substrates is a common occurrence in nature. Footprints represent sedimentary distortions that provide anatomical, functional, and behavioral insights into trackmaker biology. The interpretation of such evidence can be challenging, however, particularly for fossil tracks recovered at bedding planes below the originally exposed surface. Even in living animals, the complex dynamics that give rise to footprint morphology are obscured by both foot and sediment opacity, which conceals animal–substrate and substrate–substrate interactions. We used X-ray reconstruction of moving morphology (XROMM) to image and animate the hind limb skeleton of a chicken-like bird traversing a dry, granular material. Foot movement differed significantly from walking on solid ground; the longest toe penetrated to a depth of ∼5 cm, reaching an angle of 30° below horizontal before slipping backward on withdrawal. The 3D kinematic data were integrated into a validated substrate simulation using the discrete element method (DEM) to create a quantitative model of limb-induced substrate deformation. Simulation revealed that despite sediment collapse yielding poor quality tracks at the air–substrate interface, subsurface displacements maintain a high level of organization owing to grain–grain support. Splitting the substrate volume along “virtual bedding planes” exposed prints that more closely resembled the foot and could easily be mistaken for shallow tracks. DEM data elucidate how highly localized deformations associated with foot entry and exit generate specific features in the final tracks, a temporal sequence that we term “track ontogeny.” This combination of methodologies fosters a synthesis between the surface/layer-based perspective prevalent in paleontology and the particle/volume-based perspective essential for a mechanistic understanding of sediment redistribution during track formation. PMID:25489092

  10. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  11. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  12. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could accurately be extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.

  13. Intrinsic Feature Motion Tracking

    2013-03-19

    Subject motion during 3D medical scanning can cause blurring and artifacts in the 3D images resulting in either rescans or poor diagnosis. Anesthesia or physical restraints may be used to eliminate motion but are undesirable and can affect results. This software measures the six degree of freedom 3D motion of the subject during the scan under a rigidity assumption using only the intrinsic features present on the subject area being monitored. This movement over time can then be used to correct the scan data removing the blur and artifacts. The software acquires images from external cameras or images stored on disk for processing. The images are from two or three calibrated cameras in a stereo arrangement. Algorithms extract and track the features over time and calculate position and orientation changes relative to an initial position. Output is the 3D position and orientation change measured at each image.

  14. Intrinsic Feature Motion Tracking

    SciTech Connect

    Goddard, Jr., James S.

    2013-03-19

    Subject motion during 3D medical scanning can cause blurring and artifacts in the 3D images resulting in either rescans or poor diagnosis. Anesthesia or physical restraints may be used to eliminate motion but are undesirable and can affect results. This software measures the six degree of freedom 3D motion of the subject during the scan under a rigidity assumption using only the intrinsic features present on the subject area being monitored. This movement over time can then be used to correct the scan data removing the blur and artifacts. The software acquires images from external cameras or images stored on disk for processing. The images are from two or three calibrated cameras in a stereo arrangement. Algorithms extract and track the features over time and calculate position and orientation changes relative to an initial position. Output is the 3D position and orientation change measured at each image.

  15. Automatic respiration tracking for radiotherapy using optical 3D camera

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact and no radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within any treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. At present, a viable solution is lacking for in-vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas of the 3D surface to track surface motion. The configuration of these marks or areas may change with time, which makes quantifying and interpreting the respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on linear dimensionality reduction with PCA (principal component analysis). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigenspace spanned by eigenvectors).
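
    A hedged sketch of the PCA idea summarized above, not the authors' implementation: each optical 3D surface frame is flattened into a vector, and the sequence is decomposed into a few orthogonal motion patterns whose leading component follows the quasi-periodic respiratory motion. The surface data, sampling rate, and breathing amplitude below are synthetic assumptions.

```python
# PCA decomposition of a synthetic 3D surface sequence into motion patterns.
import numpy as np

n_frames, n_points = 200, 500
t = np.linspace(0, 20, n_frames)                        # seconds
base = np.random.default_rng(1).normal(size=(n_points, 3))
breathing = 5.0 * np.sin(2 * np.pi * 0.25 * t)          # ~15 breaths/min, 5 mm amplitude
frames = base[None] + breathing[:, None, None] * np.array([0.0, 1.0, 0.2])
X = frames.reshape(n_frames, -1)                        # one row per surface frame

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)
respiratory_trace = Xc @ Vt[0]                          # projection on the 1st eigen-motion
# Close to 1 here, because the synthetic motion is a single breathing pattern.
print("variance explained by the first component: %.3f" % explained[0])
```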

  16. Quantitative Evaluation of 3D Mouse Behaviors and Motor Function in the Open-Field after Spinal Cord Injury Using Markerless Motion Tracking

    PubMed Central

    Sheets, Alison L.; Lai, Po-Lun; Fisher, Lesley C.; Basso, D. Michele

    2013-01-01

    Thousands of scientists strive to identify cellular mechanisms that could lead to breakthroughs in developing ameliorative treatments for debilitating neural and muscular conditions such as spinal cord injury (SCI). Most studies use rodent models to test hypotheses, and these are all limited by the methods available to evaluate animal motor function. This study’s goal was to develop a behavioral and locomotor assessment system in a murine model of SCI that enables quantitative kinematic measurements to be made automatically in the open-field by applying markerless motion tracking approaches. Three-dimensional movements of eight naïve, five mild, five moderate, and four severe SCI mice were recorded using 10 cameras (100 Hz). Background subtraction was used in each video frame to identify the animal’s silhouette, and the 3D shape at each time was reconstructed using shape-from-silhouette. The reconstructed volume was divided into front and back halves using k-means clustering. The animal’s front Center of Volume (CoV) height and whole-body CoV speed were calculated and used to automatically classify animal behaviors including directed locomotion, exploratory locomotion, meandering, standing, and rearing. More detailed analyses of CoV height, speed, and lateral deviation during directed locomotion revealed behavioral differences and functional impairments in animals with mild, moderate, and severe SCI when compared with naïve animals. Naïve animals displayed the widest variety of behaviors including rearing and crossing the center of the open-field, the fastest speeds, and tallest rear CoV heights. SCI reduced the range of behaviors, and decreased speed (r = .70 p<.005) and rear CoV height (r = .65 p<.01) were significantly correlated with greater lesion size. This markerless tracking approach is a first step toward fundamentally changing how rodent movement studies are conducted. By providing scientists with sensitive, quantitative measurement
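
    The front/back split and center-of-volume (CoV) metrics described above can be sketched as follows. This is not the authors' pipeline; the voxel array, the axis conventions (x along the body, z vertical), and the 100 Hz frame rate are stand-ins for the reconstructed silhouette volumes.

```python
# Sketch of k-means front/back splitting and CoV metrics on a reconstructed volume.
import numpy as np
from scipy.cluster.vq import kmeans2

def cov_metrics(voxels):
    """CoV metrics for one frame; `voxels` is an (N, 3) array of occupied voxel
    coordinates in mm, with x along the body axis and z vertical (assumed)."""
    # Two k-means clusters roughly separate the front and back halves of the body.
    centroids, labels = kmeans2(voxels, 2, minit='points')
    front = voxels[labels == np.argmax(centroids[:, 0])]    # larger x taken as front
    whole_body_cov = voxels.mean(axis=0)
    front_cov_height = front.mean(axis=0)[2]                # z of the front CoV
    return whole_body_cov, front_cov_height

def cov_speed(cov_positions, fps=100.0):
    """Whole-body CoV speed (mm/s) from per-frame CoV positions recorded at `fps` Hz."""
    return np.linalg.norm(np.diff(cov_positions, axis=0), axis=1) * fps

# Toy frame: a box-like animal volume, 80 mm long and 30 mm tall.
demo = np.random.default_rng(2).uniform([0, 0, 0], [80, 30, 30], size=(2000, 3))
print(cov_metrics(demo))
```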

  17. 3D hand tracking using Kalman filter in depth space

    NASA Astrophysics Data System (ADS)

    Park, Sangheon; Yu, Sunjin; Kim, Joongrock; Kim, Sungjin; Lee, Sangyoun

    2012-12-01

    Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and employing a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using the Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with that of the visual-based method. Experimental results show that the proposed method outperforms the visual-based method.
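
    A minimal constant-velocity Kalman filter for a 3D hand position, offered as a hedged illustration of the tracking stage described above; the authors' exact state model, noise settings, and frame rate are not given in the abstract, so the values below are assumptions.

```python
# Constant-velocity Kalman filter for a 3D position; state = [x, y, z, vx, vy, vz].
import numpy as np

dt = 1.0 / 30.0                                   # assumed depth-camera frame rate
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)         # state transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])      # only position is measured
Q = 1e-3 * np.eye(6)                              # process noise (assumed)
R = 4e-2 * np.eye(3)                              # measurement noise (assumed)

x = np.zeros(6)                                   # initial state
P = np.eye(6)                                     # initial covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured 3D hand position z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

for frame in range(5):                            # fake measurements for the demo
    z = np.array([0.1 * frame, 0.0, 1.0]) + 0.01 * np.random.randn(3)
    x, P = kalman_step(x, P, z)
print(np.round(x[:3], 3))                         # smoothed 3D position estimate
```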

  18. Accuracy and Precision of a Custom Camera-Based System for 2-D and 3-D Motion Tracking during Speech and Nonspeech Motor Tasks

    ERIC Educational Resources Information Center

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable…

  19. Speeding up 3D speckle tracking using PatchMatch

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; O'Donnell, Matthew

    2016-03-01

    Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adapt PatchMatch to 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) the inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand; 2) for typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
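
    A simplified PatchMatch-style search, shown in 2D for brevity (the paper works on 3D radio-frequency volumes), illustrating the two ingredients the abstract relies on: propagation of offsets from already-matched neighbours and a shrinking random search. The cost function, kernel size, and search radius below are illustrative choices, not the paper's settings.

```python
# Simplified 2D PatchMatch-style offset search with propagation and random search.
import numpy as np

def ssd(a, b, y, x, oy, ox, k):
    """Sum of squared differences between a k*k patch in a and its offset patch in b."""
    pa = a[y:y + k, x:x + k]
    pb = b[y + oy:y + oy + k, x + ox:x + ox + k]
    return np.sum((pa - pb) ** 2)

def patchmatch(frame0, frame1, k=5, radius=8, iters=3, rng=np.random.default_rng(0)):
    H, W = frame0.shape
    ys, xs = H - k - radius, W - k - radius
    off = rng.integers(-radius, radius + 1, size=(ys, xs, 2))   # random initialization
    for _ in range(iters):
        for y in range(radius, ys):
            for x in range(radius, xs):
                best = off[y, x]
                best_c = ssd(frame0, frame1, y, x, *best, k)
                # Propagation: try the offsets of the upper and left neighbours.
                for cand in (off[y - 1, x], off[y, x - 1]):
                    c = ssd(frame0, frame1, y, x, *cand, k)
                    if c < best_c:
                        best, best_c = cand, c
                # Random search with an exponentially shrinking radius.
                r = radius
                while r >= 1:
                    cand = np.clip(best + rng.integers(-r, r + 1, size=2),
                                   -radius, radius)
                    c = ssd(frame0, frame1, y, x, *cand, k)
                    if c < best_c:
                        best, best_c = cand, c
                    r //= 2
                off[y, x] = best
    return off

# Demo: the second frame is the first shifted by (+2, -3); offsets should converge there.
rng = np.random.default_rng(1)
a = rng.normal(size=(64, 64))
b = np.roll(a, (2, -3), axis=(0, 1))
print(np.median(patchmatch(a, b)[8:, 8:].reshape(-1, 2), axis=0))   # ~ [ 2. -3.]
```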

  20. 3D Human Motion Editing and Synthesis: A Survey

    PubMed Central

    Wang, Xin; Chen, Qiudi; Wang, Wanliang

    2014-01-01

    The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395

  1. Concurrent 3-D motion segmentation and 3-D interpretation of temporal sequences of monocular images.

    PubMed

    Sekkati, Hicham; Mitiche, Amar

    2006-03-01

    The purpose of this study is to investigate a variational method for joint multiregion three-dimensional (3-D) motion segmentation and 3-D interpretation of temporal sequences of monocular images. Interpretation consists of dense recovery of 3-D structure and motion from the image sequence spatiotemporal variations due to short-range image motion. The method is direct inasmuch as it does not require prior computation of image motion. It allows movement of both the viewing system and multiple independently moving objects. The problem is formulated following a variational statement with a functional containing three terms. One term measures the conformity of the interpretation within each region of 3-D motion segmentation to the image sequence spatiotemporal variations. The second term regularizes depth. The assumption that environmental objects are rigid accounts automatically for the regularity of 3-D motion within each region of segmentation. The third and last term enforces the regularity of segmentation boundaries. Minimization of the functional follows the corresponding Euler-Lagrange equations. This results in iterated concurrent computation of 3-D motion segmentation by curve evolution, depth by gradient descent, and 3-D motion by least squares within each region of segmentation. Curve evolution is implemented via level sets for topology independence and numerical stability. This algorithm and its implementation are verified on synthetic and real image sequences. Viewers presented with anaglyphs of stereoscopic images constructed from the algorithm's output reported a strong perception of depth. PMID:16519351

  2. Motion Tracking System

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Integrated Sensors, Inc. (ISI), under NASA contract, developed a sensor system for controlling robot vehicles. This technology would enable a robot supply vehicle to automatically dock with Earth-orbiting satellites or the International Space Station. During the docking phase the ISI-developed sensor must sense the satellite's relative motion, then spin so the robot vehicle can adjust its motion to align with the satellite and slowly close until docking is completed. ISI used the sensing/tracking technology as the basis of its OPAD system, which simultaneously tracks an object's movement in six degrees of freedom. Applications include human limb motion analysis, assembly line position analysis and auto crash dummy motion analysis. The NASA technology is also the basis for Motion Analysis Workstation software, a package to simplify the video motion analysis process.

  3. Electrically tunable lens speeds up 3D orbital tracking

    PubMed Central

    Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico

    2015-01-01

    3D orbital particle tracking is a versatile and effective microscopy technique that allows fast-moving fluorescent objects to be followed within living cells and complex 3D shapes to be reconstructed using laser scanning microscopes. We demonstrated notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing the commonly used piezoelectric stages with an Electrically Tunable Lens (ETL), which eliminates mechanical movement of the objective lens. This allowed tracking and shape reconstruction of structures extending 500 microns in the axial direction. Using the ETL, we tracked fluorescently labeled genomic loci within the nucleus of living cells at high speed, with an unprecedented temporal resolution of 8 ms, using a 1.42 NA oil-immersion objective. The presented technology is cost effective and allows easy upgrade of scanning microscopes for fast 3D orbital tracking. PMID:26114037

  4. 3D harmonic phase tracking with anatomical regularization.

    PubMed

    Zhou, Yitian; Bernard, Olivier; Saloux, Eric; Manrique, Alain; Allain, Pascal; Makram-Ebeid, Sherif; De Craene, Mathieu

    2015-12-01

    This paper presents a novel algorithm that extends HARP to handle 3D tagged MRI images. HARP results were regularized by an original regularization framework defined in an anatomical space of coordinates. In addition, myocardial incompressibility was integrated in order to correct the radial strain, which is reported to be more challenging to recover. Both the tracking and regularization of LV displacements were done on a volumetric mesh to be computationally efficient. Also, a window-weighted regression method was extended to cardiac motion tracking, which helps maintain a low complexity even at finer scales. On healthy volunteers, the tracking was found to be as accurate as the best candidates of a recent benchmark. Strain accuracy was evaluated on synthetic data, showing low bias and strain errors under 5% (excluding outliers) for longitudinal and circumferential strains, while the second and third quartiles of the radial strain errors are in the (-5%, 5%) range. In clinical data, strain dispersion was shown to correlate with the extent of transmural fibrosis. Also, reduced deformation values were found inside infarcted segments. PMID:26363844

  5. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868

  6. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868

  7. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydın

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required. Accurate pose estimates are needed to increase reliability and decrease jitter overall. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, which is based on fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, which is based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes. PMID:23955795

  8. 3D track initiation in clutter using 2D measurements

    NASA Astrophysics Data System (ADS)

    Lin, Lin; Kirubarajan, Thiagalingam; Bar-Shalom, Yaakov

    2001-11-01

    In this paper we present an algorithm for initiating 3-D tracks using range and azimuth (bearing) measurements from a 2-D radar on a moving platform. The work is motivated by the need to track possibly low-flying targets, e.g., cruise missiles, using reports from an aircraft-based surveillance radar. Previous work on this problem considered simple linear motion in a flat earth coordinate frame. Our research extends this to a more realistic scenario where the earth's curvature is also considered. The target is assumed to be moving along a great circle at a constant altitude. After the necessary coordinate transformations, the measurements are nonlinear functions of the target state and the observability of target altitude is severely limited. The observability, quantified by the Cramer-Rao Lower Bound (CRLB), is very sensitive to the sensor-to-target geometry. The paper presents a Maximum Likelihood (ML) estimator for estimating the target motion parameters in the Earth Centered Earth Fixed coordinate frame from 2-D range and angle measurements. In order to handle the possibility of false measurements and missed detections, which was not considered in earlier work, we use the Probabilistic Data Association (PDA) algorithm to weight the detections in a frame. The PDA-based modified global likelihood is optimized using a numerical search. The accuracies obtained by the resulting ML-PDA estimator are quantified using the CRLB for different sensor-target configurations. It is shown that the proposed estimator is efficient, that is, it meets the CRLB. Of particular interest is the achievable accuracy for estimating the target altitude, which is not observed directly by the 2-D radar, but can only be inferred from the range and bearing observations.

  9. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations in camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a-priorly trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439

  10. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D contents as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. Time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology. PMID:26832611

  11. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole tongue motion. Experimental results show that use of combined information improves motion estimation near the tongue surface, a problem that has previously been reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown. PMID:24505742

  12. Preference for motion and depth in 3D film

    NASA Astrophysics Data System (ADS)

    Hartle, Brittney; Lugtigheid, Arthur; Kazimi, Ali; Allison, Robert S.; Wilcox, Laurie M.

    2015-03-01

    While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear these always apply well to stereoscopic 3D (S3D) film. Further, while there has been considerable recent research on viewer comfort in S3D media, little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate viewers' preferences for moving S3D film content in a theatre setting. Specifically we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (IA). The amount of IA had no impact on clip preferences regardless of the direction or speed of camera movement. However, preferences were influenced by camera speed, but only in the in-depth condition where viewers preferred faster motion. Given that previous research shows that slower speeds are more comfortable for viewing S3D content, our results show that viewing preferences cannot be predicted simply from measures of comfort. Instead, it is clear that viewer response to S3D film is complex and that film parameters selected to enhance comfort may in some instances produce less appealing content.

  13. A new 3D tracking method exploiting the capabilities of digital holography in microscopy

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Merola, F.; Fusco, S.; Embrione, V.; Netti, P. A.; Ferraro, P.

    2013-04-01

    A method for 3D tracking has been developed that exploits the features of Digital Holographic Microscopy (DHM). In the framework of a self-consistent platform for manipulation and measurement of biological specimens, we use DHM for quantitative and completely label-free analysis of specimens with low amplitude contrast. Tracking capability extends the potential of DHM, allowing the motion of appropriate probes to be monitored and correlated with sample properties. Complete 3D tracking has been obtained for the probes, avoiding the issue of amplitude refocusing found in traditional tracking processing. Our technique belongs to the family of video tracking methods which, in contrast to the quadrant photodiode method, open the possibility of tracking multiple probes. The commonly used video tracking algorithms are based on numerical analysis of amplitude images in the focus plane, and the shift of the maxima in the image plane is measured after applying an appropriate threshold. Our approach to video tracking uses a different theoretical basis. A set of interferograms is recorded and the complex wavefields are processed numerically to obtain the three-dimensional displacements of the probes. The procedure works properly on a higher number of probes and independently of their size. This method overcomes traditional video tracking issues such as the inability to measure axial movement and the need to choose a suitable threshold mask. The novel configuration allows 3D tracking of micro-particles and can simultaneously furnish quantitative phase-contrast maps of the tracked micro-objects by interference microscopy, without changing the configuration. In this paper, we show a new concept for a compact interferometric microscope that can ensure multifunctionality, accomplishing accurate 3D tracking and quantitative phase-contrast analysis. Experimental results are presented and discussed for in vitro cells. Through a very simple and compact optical arrangement we show how two different functionalities

  14. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

    Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework is able to track 3-D human walking poses in a 3-D environment exploring only a 4-D state space with success. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for the monocular sequences with a high perspective effect from the CAVIAR dataset. PMID:23955796

  15. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  16. Compression of point-texture 3D motion sequences

    NASA Astrophysics Data System (ADS)

    Song, In-Wook; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

    In this work, we propose two compression algorithms for PointTexture 3D sequences: an octree-based scheme and a motion-compensated prediction scheme. The first scheme represents each PointTexture frame hierarchically using an octree. The geometry information in the octree nodes is encoded by the prediction by partial matching (PPM) method. The encoder supports progressive transmission of the 3D frame by transmitting the octree nodes in a top-down manner. The second scheme adopts motion-compensated prediction to exploit the temporal correlation in 3D sequences. It first divides each frame into blocks, and then estimates the motion of each block using the block matching algorithm. In contrast to motion-compensated 2D video coding, the prediction residual may take more bits than the original signal. Thus, in our approach, motion compensation is used only for the blocks that can be replaced by their matching blocks. The other blocks are PPM-encoded. Extensive simulation results demonstrate that the proposed algorithms provide excellent compression performance.

  17. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Symonds-Tayler, J. Richard N.; Evans, Philip M.

    2011-11-01

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogeneous tissue for motion in the azimuthal and elevational directions. For object motion prograde and retrograde to the sweep direction of the transducer, the spatial sampling frequency increases or decreases with object speed, respectively. We therefore examined the effect of the direction of object motion relative to the sweep direction of the transducer on tracking accuracy. We imaged a homogeneous ultrasound speckle phantom whilst moving the probe with linear motion at a speed of 0-35 mm s-1. Tracking accuracy and precision were investigated as a function of speed, depth and direction of motion for fixed displacements of 2 and 4 mm. For the azimuthal direction, accuracy was better than 0.1 and 0.15 mm for displacements of 2 and 4 mm, respectively. For a 2 mm displacement in the elevational direction, accuracy was better than 0.5 mm for most speeds. For 4 mm elevational displacement with retrograde motion, accuracy and precision reduced with speed, and tracking failure was observed at speeds greater than 14 mm s-1. Tracking failure was attributed to speckle decorrelation as a result of the decreasing spatial sampling frequency with increasing speed of retrograde motion. For prograde motion, tracking failure was not observed. For inter-volume displacements greater than 2 mm, only prograde motion should be tracked, which will decrease temporal resolution by a factor of 2. Tracking errors of the order of 0.5 mm for prograde motion in the elevational direction indicate that, with the swept-probe technology, speckle tracking accuracy is currently too poor to track homogeneous tissue over

  18. Light driven micro-robotics with holographic 3D tracking

    NASA Astrophysics Data System (ADS)

    Glückstad, Jesper

    2016-04-01

    We recently pioneered the concept of light-driven micro-robotics, including the new and disruptive 3D-printed micro-tools coined Wave-guided Optical Waveguides, which can be optically trapped in real time and "remote-controlled" in a volume with six degrees of freedom. Exploring the full potential of this new drone-like 3D light robotics approach in challenging microscopic geometries requires a versatile and real-time reconfigurable light coupling that can dynamically track a plurality of "light robots" in 3D to ensure continuous optimal light coupling on the fly. Our latest developments in this new and exciting area are reviewed in this invited paper.

  19. Learning Projectile Motion with the Computer Game "Scorched 3D"

    NASA Astrophysics Data System (ADS)

    Jurcevic, John S.

    2008-01-01

    For most of our students, video games are a normal part of their lives. We should take advantage of this medium to teach physics in a manner that is engrossing for our students. In particular, modern video games incorporate accurate physics in their game engines, and they allow us to visualize the physics through flashy and captivating graphics. I recently used the game "Scorched 3D" to help my students understand projectile motion.
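
    A small projectile-motion calculation of the kind students can compare against shots fired in "Scorched 3D": given launch speed and elevation angle, step the simple no-air-resistance kinematics and report range and flight time. The game's internal physics settings are not assumed here.

```python
# Simple projectile-motion simulation (no air resistance) with an analytic check.
import math

def trajectory(speed, angle_deg, g=9.81, dt=0.01):
    """Return (x, y) points of a projectile launched from ground level."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    points = [(0.0, 0.0)]
    t = 0.0
    while True:
        t += dt
        x = vx * t
        y = vy * t - 0.5 * g * t * t
        if y < 0:
            break
        points.append((x, y))
    return points

pts = trajectory(speed=50.0, angle_deg=45.0)
print("range ~ %.1f m, flight time ~ %.2f s" % (pts[-1][0], 0.01 * (len(pts) - 1)))
# Analytic check: R = v^2 * sin(2*theta) / g = 2500 / 9.81 ~ 254.8 m
```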

  20. Preparation and 3D Tracking of Catalytic Swimming Devices.

    PubMed

    Campbell, Andrew; Archer, Richard; Ebbens, Stephen

    2016-01-01

    We report a method to prepare catalytically active Janus colloids that "swim" in fluids and describe how to determine their 3D motion using fluorescence microscopy. One commonly deployed method for catalytically active colloids to produce enhanced motion is via an asymmetrical distribution of catalyst. Here this is achieved by spin coating a dispersed layer of fluorescent polymeric colloids onto a flat planar substrate, and then using directional platinum vapor deposition to half coat the exposed colloid surface, making a two-faced "Janus" structure. The Janus colloids are then re-suspended from the planar substrate into an aqueous solution containing hydrogen peroxide. Hydrogen peroxide serves as a fuel that is decomposed by the platinum catalyst into water and oxygen, but only on one side of the colloid. The asymmetry results in gradients that produce enhanced motion, or "swimming". A fluorescence microscope, together with a video camera, is used to record the motion of individual colloids. The center of the fluorescent emission is found using image analysis to provide an x and y coordinate for each frame of the video. While keeping the microscope focal position fixed, the fluorescence emission from the colloid produces a characteristic concentric ring pattern which is subject to image analysis to determine the particle's relative z position. In this way 3D trajectories for the swimming colloid are obtained, allowing the swimming velocity to be accurately measured, and physical phenomena such as gravitaxis, which may bias the colloid's motion, to be detected. PMID:27404327
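
    A hedged sketch of the in-plane localization step described above: the center of the fluorescent emission in a video frame is taken as the intensity-weighted center of mass after a simple background subtraction. The frame below is synthetic, and the concentric-ring analysis used for the relative z position is not modelled.

```python
# Intensity-weighted (x, y) centroid of a fluorescent particle in one video frame.
import numpy as np
from scipy import ndimage

def xy_centroid(frame, background=None):
    """Return the (row, col) intensity-weighted centroid of one video frame."""
    img = frame.astype(float)
    if background is not None:
        img = np.clip(img - background, 0, None)      # crude background removal
    return ndimage.center_of_mass(img)

# Toy frame: a Gaussian blob centred near (40, 60) on a weak noisy background.
yy, xx = np.mgrid[0:128, 0:128]
frame = 100 * np.exp(-((yy - 40) ** 2 + (xx - 60) ** 2) / 50.0)
frame += np.random.default_rng(3).uniform(0, 0.5, frame.shape)
print(np.round(xy_centroid(frame, background=0.5), 1))    # ~ (40.0, 60.0)
```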

  1. Sketch on dynamic gesture tracking and analysis exploiting vision-based 3D interface

    NASA Astrophysics Data System (ADS)

    Woo, Woontack; Kim, Namgyu; Wong, Karen; Tadenuma, Makoto

    2000-12-01

    In this paper, we propose a vision-based 3D interface exploiting invisible 3D boxes, arranged in the personal space (i.e. the space reachable by the body without traveling), which allows robust yet simple dynamic gesture tracking and analysis, without exploiting complicated sensor-based motion tracking systems. Vision-based gesture tracking and analysis is still a challenging problem, even though we have witnessed rapid advances in computer vision over the last few decades. The proposed framework consists of three main parts, i.e. (1) object segmentation without bluescreen and 3D box initialization with depth information, (2) movement tracking by observing how the body passes through the 3D boxes in the personal space and (3) movement feature extraction based on Laban's Effort theory and movement analysis by mapping features to meaningful symbols using time-delay neural networks. Obviously, exploiting depth information using multiview images improves the performance of gesture analysis by reducing the errors introduced by simple 2D interfaces. In addition, the proposed box-based 3D interface lessens the difficulties both in tracking movement in 3D space and in extracting low-level features of the movement. Furthermore, the time-delay neural networks lessen the difficulties in movement analysis through training. Due to its simplicity and robustness, the framework will provide interactive systems, such as the ATR I-cubed Tangible Music System or the ATR Interactive Dance system, with improved quality of the 3D interface. The proposed simple framework can also be extended to other applications requiring dynamic gesture tracking and analysis on the fly.

  2. Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI.

    PubMed

    Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon

    2015-01-01

    Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Due to the through-plane motion, the tracking of 3D trajectories of the material points and the computation of the 3D strain field call for building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as a point cloud inside the object boundary, and the coordinates of each point can be written as parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming that its neighboring points are making the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperformed the finite element method and the spline method with a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardium motion in three dimensions. PMID:25157446

  3. Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI

    PubMed Central

    Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon

    2016-01-01

    Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Due to the through-plane motion, the tracking of 3D trajectories of the material points and the computation of the 3D strain field call for building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as a point cloud inside the object boundary, and the coordinates of each point can be written as parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming that its neighboring points are making the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperformed the finite element method and the spline method with a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardium motion in three dimensions. PMID:25157446

  4. Automated 3-D tracking of centrosomes in sequences of confocal image stacks.

    PubMed

    Kerekes, Ryan A; Gleason, Shaun S; Trivedi, Niraj; Solecki, David J

    2009-01-01

    In order to facilitate the study of neuron migration, we propose a method for 3-D detection and tracking of centrosomes in time-lapse confocal image stacks of live neuron cells. We combine Laplacian-based blob detection, adaptive thresholding, and the extraction of scale and roundness features to find centrosome-like objects in each frame. We link these detections using the joint probabilistic data association filter (JPDAF) tracking algorithm with a Newtonian state-space model tailored to the motion characteristics of centrosomes in live neurons. We apply our algorithm to image sequences containing multiple cells, some of which had been treated with motion-inhibiting drugs. We provide qualitative results and quantitative comparisons to manual segmentation and tracking results showing that our average motion estimates agree to within 13% of those computed manually by neurobiologists. PMID:19964725
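
    The Laplacian-based blob detection step can be sketched as below; this is illustrative only, and the adaptive thresholding, scale/roundness features, and JPDAF linking described in the record are not reproduced. The confocal stack and all parameter values are synthetic assumptions.

```python
# Laplacian-of-Gaussian blob detection for bright spots in a 3-D image stack.
import numpy as np
from scipy import ndimage

def detect_blobs(stack, sigma=2.0, threshold=0.05):
    """Return voxel coordinates (z, y, x) of bright blob centres in a 3-D stack."""
    # The negated LoG response peaks at the centre of bright, blob-like objects.
    response = -ndimage.gaussian_laplace(stack.astype(float), sigma=sigma)
    # Keep voxels that are local maxima of the response and above a threshold.
    is_peak = response == ndimage.maximum_filter(response, size=5)
    return np.argwhere(is_peak & (response > threshold))

# Toy confocal stack with two centrosome-like spots plus a little noise.
zz, yy, xx = np.mgrid[0:32, 0:64, 0:64]
stack = 0.005 * np.random.default_rng(4).normal(size=zz.shape)
for cz, cy, cx in [(16, 20, 20), (10, 40, 45)]:
    stack += np.exp(-((zz - cz)**2 + (yy - cy)**2 + (xx - cx)**2) / (2 * 2.0**2))
print(detect_blobs(stack))        # -> rows near (10, 40, 45) and (16, 20, 20)
```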

  5. 3D Guided Wave Motion Analysis on Laminated Composites

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Ultrasonic guided waves have proved useful for structural health monitoring (SHM) and nondestructive evaluation (NDE) due to their ability to propagate long distances with less energy loss compared to bulk waves and due to their sensitivity to small defects in the structure. Analysis of actively transmitted ultrasonic signals has long been used to detect and assess damage. However, there remain many challenging tasks for guided wave based SHM due to the complexity involved with propagating guided waves, especially in the case of composite materials. The multimodal nature of the ultrasonic guided waves complicates the related damage analysis. This paper presents results from parallel 3D elastodynamic finite integration technique (EFIT) simulations used to acquire 3D wave motion in the subject laminated carbon fiber reinforced polymer composites. The acquired 3D wave motion is then analyzed by frequency-wavenumber analysis to study the wave propagation and interaction in the composite laminate. The frequency-wavenumber analysis enables the study of individual modes and visualization of mode conversion. Delamination damage has been incorporated into the EFIT model to generate "damaged" data. The potential for damage detection in laminated composites is discussed in the end.
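
    A small frequency-wavenumber (f-k) analysis of the kind mentioned above, applied to a synthetic single-mode wavefield sampled along one scan line over time; the EFIT output, material properties, and sampling values are stand-ins, not the paper's data.

```python
# Frequency-wavenumber analysis of a (time, space) wavefield via a 2-D FFT.
import numpy as np

dx, dt = 1e-3, 1e-6                  # 1 mm spatial step, 1 us time step (assumed)
x = np.arange(0, 0.2, dx)            # 200 mm scan line
t = np.arange(0, 2e-4, dt)           # 200 us record
k0, f0 = 2 * np.pi * 400, 1.5e5      # mode wavenumber (rad/m) and frequency (Hz)
field = np.sin(k0 * x[None, :] - 2 * np.pi * f0 * t[:, None])   # shape (time, space)

# The 2-D FFT over (time, space) maps the wavefield into the f-k domain.
FK = np.fft.fftshift(np.abs(np.fft.fft2(field)))
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), dt))              # Hz
wavenumbers = np.fft.fftshift(np.fft.fftfreq(len(x), dx))        # cycles/m

# The spectral peak sits at the mode's frequency and wavenumber.
pos = freqs > 0
i, j = np.unravel_index(np.argmax(FK[pos]), FK[pos].shape)
print("peak at f = %.0f kHz, |k| = %.0f cycles/m"
      % (freqs[pos][i] / 1e3, abs(wavenumbers[j])))   # -> 150 kHz, 400 cycles/m
```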

  6. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed in which video streams are synchronized and displayed in a 3D model by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we found that the 3D model created more situational awareness, which made it easier to track people across multiple video streams. Based on all the experience gained from the experimental setup and the case, recommendations are formulated for use in practice. PMID:20439141

  7. Coverage Assessment and Target Tracking in 3D Domains

    PubMed Central

    Boudriga, Noureddine; Hamdi, Mohamed; Iyengar, Sitharama

    2011-01-01

    Recent advances in integrated electronic devices motivated the use of Wireless Sensor Networks (WSNs) in many applications including domain surveillance and mobile target tracking, where a number of sensors are scattered within a sensitive region to detect the presence of intruders and forward related events to some analysis center(s). Obviously, sensor deployment should guarantee an optimal event detection rate and should reduce coverage holes. Most of the coverage control approaches proposed in the literature deal with two-dimensional zones and do not develop strategies to handle coverage in three-dimensional domains, which is becoming a requirement for many applications including water monitoring, indoor surveillance, and projectile tracking. This paper proposes efficient techniques to detect coverage holes in a 3D domain using a finite set of sensors, repair the holes, and track hostile targets. To this end, we use the concepts of Voronoi tessellation, Vietoris complex, and retract by deformation. We show in particular that, through a set of iterative transformations of the Vietoris complex corresponding to the deployed sensors, the number of coverage holes can be computed with a low complexity. Mobility strategies are also proposed to repair holes by moving appropriately sensors towards the uncovered zones. The tracking objective is to set a non-uniform WSN coverage within the monitored domain to allow detecting the target(s) by the set of sensors. We show, in particular, how the proposed algorithms adapt to cope with obstacles. Simulation experiments are carried out to analyze the efficiency of the proposed models. To our knowledge, repairing and tracking is addressed for the first time in 3D spaces with different sensor coverage schemes. PMID:22163733

  8. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approach lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple hypotheses tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem where GPS signal is not available, we validate the algorithm on real image sequences from UAV flights. PMID:25099967

  9. A comparison of 3D scapular kinematics between dominant and nondominant shoulders during multiplanar arm motion

    PubMed Central

    Lee, Sang Ki; Yang, Dae Suk; Kim, Ha Yong; Choy, Won Sik

    2013-01-01

    Background: Generally, the scapular motions of pathologic and contralateral normal shoulders are compared to characterize shoulder disorders. However, the symmetry of scapular motion of normal shoulders remains undetermined. Therefore, the aim of this study was to compare 3-dimensional (3D) scapular motion between dominant and nondominant shoulders during three different planes of arm motion by using an optical tracking system. Materials and Methods: Twenty healthy subjects completed five repetitions of elevation and lowering in sagittal plane flexion, scapular plane abduction, and coronal plane abduction. The 3D scapular motion was measured using an optical tracking system, after minimizing reflective marker skin slippage using ultrasonography. The dynamic 3D motion of the scapula of dominant and nondominant shoulders, and the scapulohumeral rhythm (SHR), were analyzed at each 10° increment during the three planes of arm motion. Results: There was no significant difference in upward rotation or internal rotation (P > 0.05) of the scapula between dominant and nondominant shoulders during the three planes of arm motion. However, there was a significant difference in posterior tilting (P = 0.018) during coronal plane abduction. The SHR was a large positive or negative number in the initial phase of sagittal plane flexion and scapular plane abduction. However, the SHR was a small positive or negative number in the initial phase of coronal plane abduction. Conclusions: Only posterior tilting of the scapula during coronal plane abduction was asymmetrical in our healthy subjects, and depending on the plane of arm motion, the pattern of the SHR differed as well. These differences should be considered in the clinical assessment of shoulder pathology. PMID:23682174

  10. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)

  11. Parameterization of real-time 3D speckle tracking framework for cardiac strain assessment.

    PubMed

    Lorsakul, Auranuch; Duan, Qi; Po, Ming Jack; Angelini, Elsa; Homma, Shunichi; Laine, Andrew F

    2011-01-01

    A cross-correlation-based 3D speckle tracking algorithm can be used to automatically track myocardial motion in three-dimensional real-time (RT3D) echocardiography. The goal of this study was to experimentally investigate the effects of different parameters associated with such an algorithm to ensure accurate cardiac strain measurements. The investigation was performed on 10 chronic obstructive pulmonary disease RT3DE cardiac ultrasound images. The following two parameters were investigated: 1) the gradient threshold of the anisotropic diffusion pre-filtering and 2) the window size of the cross-correlation template matching in the speckle tracking. Results suggest that the optimal gradient threshold of the anisotropic filter depends on the average gradient of the background speckle noise, and that an optimal pair of template size and search window size, which determines the cross-correlation level and computational cost, can be identified. PMID:22254887
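
    The cross-correlation template-matching core whose parameters this record studies can be sketched as follows (the anisotropic-diffusion pre-filter is omitted). The template and search-window sizes below are the kind of values being traded off, not the study's chosen ones, and the volumes are synthetic.

```python
# Normalized cross-correlation (NCC) template matching for one voxel in 3D data.
import numpy as np

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(np.sum(a * b) / (np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-12))

def track_voxel(vol0, vol1, centre, template=5, search=4):
    """Displacement (in voxels) of the template around `centre` from vol0 to vol1."""
    h = template // 2
    z, y, x = centre
    ref = vol0[z-h:z+h+1, y-h:y+h+1, x-h:x+h+1]
    best, best_score = (0, 0, 0), -2.0
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = vol1[z+dz-h:z+dz+h+1, y+dy-h:y+dy+h+1, x+dx-h:x+dx+h+1]
                score = ncc(ref, cand)
                if score > best_score:
                    best, best_score = (dz, dy, dx), score
    return best, best_score

# Toy volumes: the second frame is the first one shifted by (1, -2, 3) voxels.
rng = np.random.default_rng(5)
vol0 = rng.normal(size=(40, 40, 40))
vol1 = np.roll(vol0, (1, -2, 3), axis=(0, 1, 2))
print(track_voxel(vol0, vol1, centre=(20, 20, 20)))   # -> ((1, -2, 3), ~1.0)
```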

  12. The Visual Priming of Motion-Defined 3D Objects

    PubMed Central

    Jiang, Xiong; Jiang, Yang

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus—but not a static image or a semantic stimulus—that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed. PMID:26658496

  13. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy using a US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate a US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm) but simple regularisation methods could be used to improve RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating with displacement thresholds of 2 mm and 5 mm exhibiting a RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach which also accounts for potential prostate rotations could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.

  14. Reconstructing 3-D Ship Motion for Synthetic Aperture Sonar Processing

    NASA Astrophysics Data System (ADS)

    Thomsen, D. R.; Chadwell, C. D.; Sandwell, D.

    2004-12-01

    We are investigating the feasibility of coherent ping-to-ping processing of multibeam sonar data for high-resolution mapping and change detection in the deep ocean. Theoretical calculations suggest that standard multibeam resolution can be improved from 100 m to ~10 m through coherent summation of pings similar to synthetic aperture radar image formation. A requirement for coherent summation of pings is to correct the phase of the return echoes to an accuracy of ~3 cm at a sampling rate of ~10 Hz. In September of 2003, we conducted a seagoing experiment aboard R/V Revelle to test these ideas. Three geodetic-quality GPS receivers were deployed to recover 3-D ship motion to an accuracy of ±3 cm at a 1 Hz sampling rate [Chadwell and Bock, GRL, 2001]. Additionally, inertial navigation data (INS) from fiber-optic gyroscopes and pendulum-type accelerometers were collected at a 10 Hz rate. Independent measurements of ship orientation (yaw, pitch, and roll) from the GPS and INS show agreement to an RMS accuracy of better than 0.1 degree. Because inertial navigation hardware is susceptible to drift, these measurements were combined with the GPS to achieve both high accuracy and high sampling rate. To preserve the short-timescale accuracy of the INS and the long-timescale accuracy of the GPS measurements, time-filtered differences between the GPS and INS were subtracted from the INS integrated linear velocities. An optimal filter length of 25 s was chosen to force the RMS difference between the GPS and the integrated INS to be on the order of the accuracy of the GPS measurements. This analysis provides an upper bound on 3-D ship motion accuracy. Additionally, errors in the attitude can translate to the projections of motion for individual hydrophones. With lever arms on the order of 5 m, these errors will likely be ~1 mm. Based on these analyses, we expect to achieve the 3-cm accuracy requirement. Using full-resolution hydrophone data collected by a SIMRAD EM/120 echo sounder
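
    The GPS/INS blending described above is essentially a complementary filter: keep the short-timescale content of the integrated INS velocities and the long-timescale content of GPS by low-pass filtering their difference and adding it back to the INS track. The sketch below is a hedged illustration of that idea, assuming simple moving-average filtering and linear interpolation of GPS onto the INS time base; it is not the authors' processing chain.

    ```python
    # Hedged sketch of GPS/INS blending with a complementary filter.
    # Rates and filter length follow the values quoted in the abstract;
    # everything else is illustrative.
    import numpy as np

    def blend_gps_ins(ins_vel, gps_pos, ins_rate=10.0, gps_rate=1.0,
                      filter_len_s=25.0):
        dt = 1.0 / ins_rate
        ins_pos = np.cumsum(ins_vel * dt, axis=0)        # integrate INS velocity

        # Resample GPS (1 Hz) onto the INS time base (10 Hz) by interpolation.
        t_ins = np.arange(len(ins_pos)) * dt
        t_gps = np.arange(len(gps_pos)) / gps_rate
        gps_on_ins = np.column_stack([np.interp(t_ins, t_gps, gps_pos[:, k])
                                      for k in range(gps_pos.shape[1])])

        # Low-pass filter the GPS - INS difference with a moving average,
        # then add the slow component back to the INS solution.
        n = int(filter_len_s * ins_rate)
        kernel = np.ones(n) / n
        diff = gps_on_ins - ins_pos
        slow_diff = np.column_stack([np.convolve(diff[:, k], kernel, mode="same")
                                     for k in range(diff.shape[1])])
        return ins_pos + slow_diff                        # blended 3-D track
    ```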

  15. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate the vision system and the manipulator independently, and then tie them together via a common mapping into the task space. In other words, both vision and robot refer to some common Absolute Euclidean Coordinate Frame via their individual mappings. This approach has two major difficulties. First, a vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work which is currently in progress, is described along with preliminary results and the problems encountered.

  16. 3D visualisation and analysis of single and coalescing tracks in Solid state Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, David; Gillmore, Gavin; Brown, Louise; Petford, Nick

    2010-05-01

    Exposure to radon gas (222Rn) and associated ionising decay products can cause lung cancer in humans (1). Solid state Nuclear Track Detectors (SSNTDs) can be used to monitor radon concentrations (2). Alpha particles from radon decay form tracks in the detectors, and these tracks can be etched in order to enable 2D surface image analysis. We have previously shown that confocal microscopy can be used for 3D visualisation of etched SSNTDs (3). The aim of the study was to further investigate track angles and patterns in SSNTDs. A 'LEXT' confocal laser scanning microscope (Olympus Corporation, Japan) was used to acquire 3D image datasets of five CR-39 plastic SSNTDs. The resultant 3D visualisations were analysed by eye and inclination angles assessed on selected tracks. From visual assessment, single isolated tracks as well as coalescing tracks were observed on the etched detectors. In addition, varying track inclination angles were observed. Several different patterns of track formation were seen, such as single isolated and double coalescing tracks. The observed track angles of inclination may help to assess the angle at which alpha particles hit the detector. (1) Darby, S et al. Radon in homes and risk of lung cancer: collaborative analysis of individual data from 13 European case-control studies. British Medical Journal 2005; 330: 223-226. (2) Phillips, P.S., Denman, A.R., Crockett, R.G.M., Gillmore, G., Groves-Kirkby, C.J., Woolridge, A. Comparative Analysis of Weekly vs. Three Monthly Radon Measurements in Dwellings. DEFRA Report No. DEFRA/RAS/03.006, 2004. (3) Wertheim D, Gillmore G, Brown L, and Petford N. A new method of imaging particle tracks in Solid State Nuclear Track Detectors. Journal of Microscopy 2010; 237: 1-6.

  17. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.
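
    One way to probe how set-up and calibration errors propagate into the 3d reconstruction is to triangulate synthetic points with deliberately perturbed camera parameters and compare the results. The sketch below shows standard linear (DLT) triangulation from two projection matrices; it is a generic illustration, not the error analysis performed in the paper.

    ```python
    # Illustrative sketch (not the authors' analysis): linear triangulation of
    # a 3-D point from two calibrated cameras. Repeating it with perturbed
    # projection matrices gives a feel for how calibration errors propagate
    # into the reconstructed trajectories.
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """DLT triangulation; P1, P2 are 3x4 projection matrices."""
        u1, v1 = uv1
        u2, v2 = uv2
        A = np.array([u1 * P1[2] - P1[0],
                      v1 * P1[2] - P1[1],
                      u2 * P2[2] - P2[0],
                      v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                     # homogeneous solution of A X = 0
        return X[:3] / X[3]
    ```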

  18. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  19. Multisensor fusion for 3D target tracking using track-before-detect particle filter

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.

    2015-05-01

    This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
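
    A hedged sketch of the measurement-level fusion step is given below: particles carrying 3D position hypotheses are projected into each sensor's image plane and re-weighted by the product of per-sensor likelihoods read from likelihood images. The pinhole camera model, the likelihood images and the small off-image floor are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch of a projective weight update in a multi-sensor
    # particle filter. `particles[:, :3]` is assumed to hold 3-D position.
    import numpy as np

    def project(P, X):
        """Pinhole projection of 3-D points X (N, 3) with a 3x4 camera matrix P."""
        Xh = np.hstack([X, np.ones((len(X), 1))])
        uvw = Xh @ P.T
        return uvw[:, :2] / uvw[:, 2:3]

    def update_weights(particles, weights, cameras, likelihood_images):
        """One joint-likelihood weight update over all sensors."""
        for P, img in zip(cameras, likelihood_images):
            uv = np.rint(project(P, particles[:, :3])).astype(int)
            h, w = img.shape
            inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
                     (uv[:, 1] >= 0) & (uv[:, 1] < h)
            lik = np.full(len(particles), 1e-6)     # small floor for off-image
            lik[inside] = img[uv[inside, 1], uv[inside, 0]]
            weights = weights * lik
        return weights / weights.sum()
    ```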

  20. Reconstructing 3-D skin surface motion for the DIET breast cancer screening system.

    PubMed

    Botterill, Tom; Lotz, Thomas; Kashif, Amer; Chase, J Geoffrey

    2014-05-01

    Digital image-based elasto-tomography (DIET) is a prototype system for breast cancer screening. A breast is imaged while being vibrated, and the observed surface motion is used to infer the internal stiffness of the breast, hence identifying tumors. This paper describes a computer vision system for accurately measuring 3-D surface motion. A model-based segmentation is used to identify the profile of the breast in each image, and the 3-D surface is reconstructed by fitting a model to the profiles. The surface motion is measured using a modern optical flow implementation customized to the application, then trajectories of points on the 3-D surface are given by fusing the optical flow with the reconstructed surfaces. On data from human trials, the system is shown to exceed the performance of an earlier marker-based system at tracking skin surface motion. We demonstrate that the system can detect a 10 mm tumor in a silicone phantom breast. PMID:24770915

  1. Holographic microscopy for 3D tracking of bacteria

    NASA Astrophysics Data System (ADS)

    Nadeau, Jay; Cho, Yong Bin; El-Kholy, Marwan; Bedrossian, Manuel; Rider, Stephanie; Lindensmith, Christian; Wallace, J. Kent

    2016-03-01

    Understanding when, how, and if bacteria swim is key to understanding critical ecological and biological processes, from carbon cycling to infection. Imaging motility by traditional light microscopy is limited by focus depth, requiring cells to be constrained in z. Holographic microscopy offers an instantaneous 3D snapshot of a large sample volume, and is therefore ideal in principle for quantifying unconstrained bacterial motility. However, resolving and tracking individual cells is difficult due to the low amplitude and phase contrast of the cells; the index of refraction of typical bacteria differs from that of water only at the second decimal place. In this work we present a combination of optical and sample-handling approaches to facilitate bacterial tracking by holographic phase imaging. The first is the design of the microscope, which is an off-axis design with the optics along a common path; this minimizes alignment issues while providing all of the advantages of off-axis holography. Second, we use anti-reflective coated etalon glass in the design of sample chambers, which reduces internal reflections. The improvement from the anti-reflective coating is seen primarily in phase imaging, and its quantification is presented here. Finally, dyes may be used to increase phase contrast according to the Kramers-Kronig relations. Results using three test strains are presented, illustrating the different types of bacterial motility characterized by an enteric organism (Escherichia coli), an environmental organism (Bacillus subtilis), and a marine organism (Vibrio alginolyticus). Data processing steps to increase the quality of the phase images and facilitate tracking are also discussed.

  2. Drogue tracking using 3D flash lidar for autonomous aerial refueling

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Stettner, Roger

    2011-06-01

    Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic, vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, in the day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
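
    The RANSAC-based center estimate can be illustrated with a generic robust sphere fit: repeatedly fit a sphere to four randomly chosen 3D points on the drogue, keep the hypothesis with the most inliers, and return its center. The tolerance and iteration count below are placeholders, and the authors' actual statistical analysis is not reproduced.

    ```python
    # Hedged sketch of a RANSAC-style robust center estimate from 3-D
    # point-cloud samples of the drogue. Tolerances are illustrative.
    import numpy as np

    def fit_sphere(pts4):
        """Solve x^2 + y^2 + z^2 + D x + E y + F z + G = 0 for 4 points."""
        A = np.hstack([pts4, np.ones((4, 1))])
        b = -(pts4 ** 2).sum(axis=1)
        D, E, F, G = np.linalg.solve(A, b)
        center = -0.5 * np.array([D, E, F])
        radius = np.sqrt(max(center @ center - G, 0.0))
        return center, radius

    def ransac_center(points, iters=500, tol=0.02, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        best_center, best_inliers = None, -1
        for _ in range(iters):
            sample = points[rng.choice(len(points), 4, replace=False)]
            try:
                center, radius = fit_sphere(sample)
            except np.linalg.LinAlgError:
                continue                      # degenerate (coplanar) sample
            resid = np.abs(np.linalg.norm(points - center, axis=1) - radius)
            inliers = int((resid < tol).sum())
            if inliers > best_inliers:
                best_center, best_inliers = center, inliers
        return best_center
    ```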

  3. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiduciary markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phones' displays, as if the display was a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.

  4. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    NASA Astrophysics Data System (ADS)

    Kanphet, J.; Suriyapee, S.; Dumrongkijudom, N.; Sanghangthum, T.; Kumkhwao, J.; Wisetrintong, M.

    2016-03-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, and has less impact due to the stability of organ position achieved with DIBH. The systematic error is likewise about half of the random error, because modern linacs can reduce systematic uncertainty effectively, whereas the random error is largely uncontrollable.
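
    For readers unfamiliar with how population systematic and random errors are typically derived from such tracking data, the sketch below follows one common convention (the standard deviation of per-patient mean displacements for the systematic error, and the root-mean-square of per-patient standard deviations for the random error); the authors' exact formulas may differ.

    ```python
    # Sketch of one common convention for population setup errors,
    # not necessarily the formulas used in the study above.
    import numpy as np

    def setup_errors(displacements):
        """
        displacements: list of per-patient 1-D arrays, each holding the
        measured displacement (mm) in one direction over the fractions.
        Returns (systematic_error, random_error) in mm.
        """
        patient_means = np.array([d.mean() for d in displacements])
        patient_sds = np.array([d.std(ddof=1) for d in displacements])
        systematic = patient_means.std(ddof=1)          # spread of patient means
        random_err = np.sqrt((patient_sds ** 2).mean())  # RMS of patient SDs
        return systematic, random_err
    ```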

  5. Edge preserving motion estimation with occlusions correction for assisted 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Pohl, Petr; Sirotenko, Michael; Tolstaya, Ekaterina; Bucha, Victor

    2014-02-01

    In this article we propose a high-quality motion estimation method based on a variational optical flow formulation with a non-local regularization term. To improve motion in occlusion areas we introduce occlusion motion inpainting based on 3-frame motion clustering. The variational formulation of optical flow has proved very successful; however, global optimization of the cost function can be time consuming. To achieve acceptable computation times we adapted an algorithm that optimizes a convex function in a coarse-to-fine pyramid strategy and is suitable for modern GPU hardware implementation. We also introduced two simplifications of the cost function that significantly decrease computation time with an acceptable loss of quality. For motion-clustering-based inpainting in occlusion areas, we introduce an effective method of occlusion-aware joint 3-frame motion clustering using the RANSAC algorithm. Occlusion areas are inpainted with the motion model taken from the cluster that shows consistency in the opposite direction. We tested our algorithm on the Middlebury optical flow benchmark, where it ranked around 20th while being one of the fastest methods near the top. We also successfully used this algorithm in a semi-automatic 2D-to-3D conversion tool for spatio-temporal background inpainting, automatic adaptive key frame detection and key point tracking.

  6. High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories

    PubMed Central

    Su, Ting-Wei; Xue, Liang; Ozcan, Aydogan

    2012-01-01

    Dynamic tracking of human sperms across a large volume is a challenging task. To provide a high-throughput solution to this important need, here we describe a lensfree on-chip imaging technique that can track the three-dimensional (3D) trajectories of > 1,500 individual human sperms within an observation volume of approximately 8–17 mm3. This computational imaging platform relies on holographic lensfree shadows of sperms that are simultaneously acquired at two different wavelengths, emanating from two partially-coherent sources that are placed at 45° with respect to each other. This multiangle and multicolor illumination scheme permits us to dynamically track the 3D motion of human sperms across a field-of-view of > 17 mm2 and depth-of-field of approximately 0.5–1 mm with submicron positioning accuracy. The large statistics provided by this lensfree imaging platform revealed that only approximately 4–5% of the motile human sperms swim along well-defined helices and that this percentage can be significantly suppressed under seminal plasma. Furthermore, among these observed helical human sperms, a significant majority (approximately 90%) preferred right-handed helices over left-handed ones, with a helix radius of approximately 0.5–3 μm, a helical rotation speed of approximately 3–20 rotations/s and a linear speed of approximately 20–100 μm/s. This high-throughput 3D imaging platform could in general be quite valuable for observing the statistical swimming patterns of various other microorganisms, leading to new insights in their 3D motion and the underlying biophysics. PMID:22988076

  7. Inferred motion perception of light sources in 3D scenes is color-blind.

    PubMed

    Gerhard, Holly E; Maloney, Laurence T

    2013-01-01

    In everyday scenes, the illuminant can vary spatially in chromaticity and luminance, and change over time (e.g. sunset). Such variation generates dramatic image effects too complex for any contemporary machine vision system to overcome, yet human observers are remarkably successful at inferring object properties separately from lighting, an ability linked with estimation and tracking of light field parameters. Which information does the visual system use to infer light field dynamics? Here, we specifically ask whether color contributes to inferred light source motion. Observers viewed 3D surfaces illuminated by an out-of-view moving collimated source (sun) and a diffuse source (sky). In half of the trials, the two sources differed in chromaticity, thereby providing more information about motion direction. Observers discriminated light motion direction above chance, and only the least sensitive observer benefited slightly from the added color information, suggesting that color plays only a very minor role for inferring light field dynamics. PMID:23755354

  8. 3D imaging of particle tracks in Solid State Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2009-04-01

    Inhalation of radon gas (222Rn) and associated ionizing decay products is known to cause lung cancer in humans. In the U.K., it has been suggested that 3 to 5 % of total lung cancer deaths can be linked to elevated radon concentrations in the home and/or workplace. Radon monitoring in buildings is therefore routinely undertaken in areas of known risk. Indeed, some organisations such as the Radon Council in the UK and the Environmental Protection Agency in the USA advocate a 'to test is best' policy. Radon gas occurs naturally, emanating from the decay of 238U in rock and soils. Its concentration can be measured using CR-39 plastic detectors which conventionally are assessed by 2D image analysis of the surface; however there can be some variation in outcomes / readings even in closely spaced detectors. A number of radon measurement methods are currently in use (for example, activated carbon and electrets) but the most widely used are CR-39 solid state nuclear track-etch detectors (SSNTDs). In this technique, heavily ionizing alpha particles leave tracks in the form of radiation damage (via interaction between alpha particles and the atoms making up the CR-39 polymer). 3D imaging of the tracks has the potential to provide information relating to angle and energy of alpha particles but this could be time consuming. Here we describe a new method for rapid high resolution 3D imaging of SSNTDs. A 'LEXT' OLS3100 confocal laser scanning microscope was used in confocal mode to successfully obtain 3D image data on four CR-39 plastic detectors. 3D visualisation and image analysis enabled characterisation of track features. This method may provide a means of rapid and detailed 3D analysis of SSNTDs. Keywords: Radon; SSNTDs; confocal laser scanning microscope; 3D imaging; LEXT

  9. Angle-independent measure of motion for image-based gating in 3D coronary angiography

    SciTech Connect

    Lehmann, Glen C.; Holdsworth, David W.; Drangova, Maria

    2006-05-15

    The role of three-dimensional (3D) image guidance for interventional procedures and minimally invasive surgeries is increasing for the treatment of vascular disease. Currently, most interventional procedures are guided by two-dimensional x-ray angiography, but computed rotational angiography has the potential to provide 3D geometric information about the coronary arteries. The creation of 3D angiographic images of the coronary arteries requires synchronization of data acquisition with respect to the cardiac cycle, in order to minimize motion artifacts. This can be achieved by inferring the extent of motion from a patient's electrocardiogram (ECG) signal. However, a direct measurement of motion (from the 2D angiograms) has the potential to improve the 3D angiographic images by ensuring that only projections acquired during periods of minimal motion are included in the reconstruction. This paper presents an image-based metric for measuring the extent of motion in 2D x-ray angiographic images. Adaptive histogram equalization was applied to projection images to increase the sharpness of coronary arteries and the superior-inferior component of the weighted centroid (SIC) was measured. The SIC constitutes an image-based metric that can be used to track vessel motion, independent of apparent motion induced by the rotational acquisition. To evaluate the technique, six consecutive patients scheduled for routine coronary angiography procedures were studied. We compared the end of the SIC rest period (ρ) to R-waves (R) detected in the patient's ECG and found a mean difference of 14 ± 80 ms. Two simultaneous angular positions were acquired and ρ was detected for each position. There was no statistically significant difference (P=0.79) between ρ in the two simultaneously acquired angular positions. Thus we have shown the SIC to be independent of view angle, which is critical for rotational angiography. A preliminary image-based gating strategy that employed the SIC
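
    The image-based metric itself is simple to compute. The sketch below applies adaptive histogram equalization with scikit-image and takes the superior-inferior (row) component of the intensity-weighted centroid for each projection; the CLAHE parameters and the normalization step are illustrative assumptions rather than the authors' settings.

    ```python
    # Hedged sketch of the SIC metric: CLAHE-enhanced projection, then the
    # row component of the intensity-weighted centroid.
    import numpy as np
    from skimage import exposure

    def sic(projection):
        """Superior-inferior component of the weighted centroid of one frame."""
        img = projection.astype(float)
        img = (img - img.min()) / (np.ptp(img) + 1e-12)   # scale to [0, 1]
        img = exposure.equalize_adapthist(img)            # sharpen vessels
        rows = np.arange(img.shape[0], dtype=float)
        row_weights = img.sum(axis=1)
        return (rows * row_weights).sum() / (row_weights.sum() + 1e-12)

    def sic_trace(projections):
        """SIC as a function of projection number for a rotational acquisition."""
        return np.array([sic(p) for p in projections])
    ```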

  10. Ion track reconstruction in 3D using alumina-based fluorescent nuclear track detectors.

    PubMed

    Niklas, M; Bartz, J A; Akselrod, M S; Abollahi, A; Jäkel, O; Greilich, S

    2013-09-21

    Fluorescent nuclear track detectors (FNTDs) based on Al2O3: C, Mg single crystal combined with confocal microscopy provide 3D information on ion tracks with a resolution only limited by light diffraction. FNTDs are also ideal substrates to be coated with cells to engineer cell-fluorescent ion track hybrid detectors (Cell-Fit-HD). This radiobiological tool enables a novel platform linking cell responses to physical dose deposition on a sub-cellular level in proton and heavy ion therapies. To achieve spatial correlation between single ion hits in the cell coating and its biological response, the ion traversals have to be reconstructed in 3D using the depth information gained by the FNTD read-out. FNTDs were coated with a confluent human lung adenocarcinoma epithelial (A549) cell layer. Carbon ion irradiation of the hybrid detector was performed perpendicular and angular to the detector surface. In situ imaging of the fluorescently labeled cell layer and the FNTD was performed in a sequential read-out. Making use of the trajectory information provided by the FNTD, the accuracy of 3D track reconstruction of single particles traversing the hybrid detector was studied. The accuracy is strongly influenced by the irradiation angle and therefore by the complexity of the FNTD signal. Perpendicular irradiation results in the highest accuracy, with an error smaller than 0.10°. The ability of FNTD technology to provide accurate 3D ion track reconstruction makes it a powerful tool for radiobiological investigations in clinical ion beams, either being used as a substrate to be coated with living tissue or being implanted in vivo. PMID:23965401

  11. Ion track reconstruction in 3D using alumina-based fluorescent nuclear track detectors

    NASA Astrophysics Data System (ADS)

    Niklas, M.; Bartz, J. A.; Akselrod, M. S.; Abollahi, A.; Jäkel, O.; Greilich, S.

    2013-09-01

    Fluorescent nuclear track detectors (FNTDs) based on Al2O3: C, Mg single crystal combined with confocal microscopy provide 3D information on ion tracks with a resolution only limited by light diffraction. FNTDs are also ideal substrates to be coated with cells to engineer cell-fluorescent ion track hybrid detectors (Cell-Fit-HD). This radiobiological tool enables a novel platform linking cell responses to physical dose deposition on a sub-cellular level in proton and heavy ion therapies. To achieve spatial correlation between single ion hits in the cell coating and its biological response, the ion traversals have to be reconstructed in 3D using the depth information gained by the FNTD read-out. FNTDs were coated with a confluent human lung adenocarcinoma epithelial (A549) cell layer. Carbon ion irradiation of the hybrid detector was performed perpendicular and angular to the detector surface. In situ imaging of the fluorescently labeled cell layer and the FNTD was performed in a sequential read-out. Making use of the trajectory information provided by the FNTD, the accuracy of 3D track reconstruction of single particles traversing the hybrid detector was studied. The accuracy is strongly influenced by the irradiation angle and therefore by the complexity of the FNTD signal. Perpendicular irradiation results in the highest accuracy, with an error smaller than 0.10°. The ability of FNTD technology to provide accurate 3D ion track reconstruction makes it a powerful tool for radiobiological investigations in clinical ion beams, either being used as a substrate to be coated with living tissue or being implanted in vivo.

  12. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697
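
    The final random walker step can be sketched with scikit-image, assuming the seed labels have already been placed and propagated to the target time frame (the deformable registration step is not shown). The beta value and intensity normalization below are illustrative, not the study's settings.

    ```python
    # Minimal sketch of random walker segmentation of a 3-D MRI volume given
    # seed labels (1 = tongue, 2 = background). Seed propagation across time
    # frames by deformable registration is outside this sketch.
    import numpy as np
    from skimage.segmentation import random_walker

    def segment_tongue(volume, seeds, beta=130):
        """
        volume: 3-D array of MR intensities (z, y, x)
        seeds:  3-D int array, 0 = unlabeled, 1 = tongue, 2 = background
        Returns a 3-D label array of the same shape.
        """
        # Normalize intensities; the random walker weights depend on data scale.
        data = (volume - volume.mean()) / (volume.std() + 1e-12)
        return random_walker(data, seeds, beta=beta, mode="cg")  # needs SciPy
    ```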

  13. Faceless identification: a model for person identification using the 3D shape and 3D motion as cues

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Li, Haibo

    1999-02-01

    Person identification by using biometric methods based on image sequences, or still images, often requires a controllable and cooperative environment during the image capturing stage. In the forensic case the situation is more likely to be the opposite. In this work we propose a method that makes use of the anthropometry of the human body and human actions as cues for identification. Image sequences from surveillance systems are used, which can be seen as monocular image sequences. A 3D deformable wireframe body model is used as a platform to handle the non-rigid information of the 3D shape and 3D motion of the human body from the image sequence. A recursive method for estimating global motion and local shape variations is presented, using two recursive feedback systems.

  14. Computational optical-sectioning microscopy for 3D quantization of cell motion: results and challenges

    NASA Astrophysics Data System (ADS)

    McNally, James G.

    1994-09-01

    How cells move and navigate within a 3D tissue mass is of central importance in such diverse problems as embryonic development, wound healing and metastasis. This locomotion can now be visualized and quantified by using computational optical-sectioning microscopy. In this approach, a series of 2D images at different depths in a specimen are stacked to construct a 3D image, and then with a knowledge of the microscope's point-spread function, the actual distribution of fluorescent intensity in the specimen is estimated via computation. When coupled with wide-field optics and a cooled CCD camera, this approach permits non-destructive 3D imaging of living specimens over long time periods. With these techniques, we have observed a complex diversity of motile behaviors in a model embryonic system, the cellular slime mold Dictyostelium. To understand the mechanisms which control these various behaviors, we are examining motion in various Dictyostelium mutants with known defects in proteins thought to be essential for signal reception, cell-cell adhesion or locomotion. This application of computational techniques to analyze 3D cell locomotion raises several technical challenges. Image restoration techniques must be fast enough to process numerous 1 Gbyte time-lapse data sets (16 Mbytes per 3D image × 60 time points). Because some cells are weakly labeled and background intensity is often high due to unincorporated dye, the SNR in some of these images is poor. Currently, the images are processed by a regularized linear least-squares restoration method, and occasionally by a maximum-likelihood method. Also required for these studies are accurate automated tracking procedures to generate both 3D trajectories for individual cells and 3D flows for a group of cells. Tracking is currently done independently for each cell, using a cell's image as a template to search for a similar image at the next time point. Finally, sophisticated visualization techniques are needed to view the

  15. Multiview 3-D Echocardiography Fusion with Breath-Hold Position Tracking Using an Optical Tracking System.

    PubMed

    Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; McNulty, Alexander; Biamonte, Marina; He, Allen; Noga, Michelle; Boulanger, Pierre; Becher, Harald

    2016-08-01

    Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, one of the major limitations of 3-D echocardiography is the limited field of view, which results in an acquisition insufficient to cover the whole geometry of the heart. This study proposes the novel approach of fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to infer that the heart remains at the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field of view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvement in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to individual views. PMID:27166019

  16. LayTracks3D: A new approach for meshing general solids using medial axis transform

    SciTech Connect

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  17. Robot motion tracking system with multiple views

    NASA Astrophysics Data System (ADS)

    Yamano, Hiroshi; Saito, Hideo

    2001-10-01

    In spaces where human workers and industrial robots work together, it has become necessary to monitor robot motion for safety. For such robot surveillance, we propose a robot tracking system based on multiple view images. In this system, we treat the robot-tracking problem as estimation of the pose parameters in every frame. The tracking algorithm consists of four stages: image generation, estimation, parameter search, and prediction. In the first stage, the robot region of the real image is extracted by background subtraction. Here, the YUV color space is used because it reduces sensitivity to changes in lighting conditions. By calibrating the extrinsic and intrinsic parameters of all cameras with Tsai's method, we can project the 3D model of the robot onto each camera view. In the next stage, the correlation between the input image and the projected model image is calculated, defined from the robot areas in the real and model images. In the third stage, the pose parameters of the robot are estimated by maximizing the correlation. For computational efficiency, the high-dimensional pose parameter space is divided into many low-dimensional sub-spaces in accordance with the pose parameters predicted in the previous frame. We apply the proposed system to pose estimation of a 5-axis robot manipulator. The estimated pose parameters successfully match the actual pose of the robot.

  18. A 3D feature point tracking method for ion radiation.

    PubMed

    Kouwenberg, Jasper J M; Ulrich, Leonie; Jäkel, Oliver; Greilich, Steffen

    2016-06-01

    A robust and computationally efficient algorithm for automated tracking of high densities of particles travelling in (semi-) straight lines is presented. It extends the implementation of (Sbalzarini and Koumoutsakos 2005) and is intended for use in the analysis of single ion track detectors. By including information of existing tracks in the exclusion criteria and a recursive cost minimization function, the algorithm is robust to variations on the measured particle tracks. A trajectory relinking algorithm was included to resolve the crossing of tracks in high particle density images. Validation of the algorithm was performed using fluorescent nuclear track detectors (FNTD) irradiated with high- and low (heavy) ion fluences and showed less than 1% faulty trajectories in the latter. PMID:27163162

  19. A 3D feature point tracking method for ion radiation

    NASA Astrophysics Data System (ADS)

    Kouwenberg, Jasper J. M.; Ulrich, Leonie; Jäkel, Oliver; Greilich, Steffen

    2016-06-01

    A robust and computationally efficient algorithm for automated tracking of high densities of particles travelling in (semi-) straight lines is presented. It extends the implementation of (Sbalzarini and Koumoutsakos 2005) and is intended for use in the analysis of single ion track detectors. By including information of existing tracks in the exclusion criteria and a recursive cost minimization function, the algorithm is robust to variations on the measured particle tracks. A trajectory relinking algorithm was included to resolve the crossing of tracks in high particle density images. Validation of the algorithm was performed using fluorescent nuclear track detectors (FNTD) irradiated with high- and low (heavy) ion fluences and showed less than 1% faulty trajectories in the latter.

  20. Real-time visual sensing system achieving high-speed 3D particle tracking with nanometer resolution.

    PubMed

    Cheng, Peng; Jhiang, Sissy M; Menq, Chia-Hsiang

    2013-11-01

    This paper presents a real-time visual sensing system created to achieve high-speed three-dimensional (3D) motion tracking of microscopic spherical particles in aqueous solutions with nanometer resolution. The system comprises a complementary metal-oxide-semiconductor (CMOS) camera, a field programmable gate array (FPGA), and real-time image processing programs. The CMOS camera has high photosensitivity and superior SNR. It acquires images of 128×120 pixels at a frame rate of up to 10,000 frames per second (fps) under white light illumination from a standard 100 W halogen lamp. The real-time image stream is downloaded from the camera directly to the FPGA, wherein a 3D particle-tracking algorithm is implemented to calculate the 3D positions of the target particle in real time. Two important objectives are achieved: real-time estimation of the 3D position keeps pace with the maximum frame rate of the camera, and the timing of the output data stream is precisely controlled. Two sets of experiments were conducted to demonstrate the performance of the system. First, the visual sensing system was used to track the motion of a 2 μm polystyrene bead, whose motion was controlled by a three-axis piezo motion stage. The ability to track long-range motion with nanometer resolution in all three axes is demonstrated. Second, it was used to measure the Brownian motion of the 2 μm polystyrene bead, which was stabilized in aqueous solution by a laser trapping system. PMID:24216655
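
    The lateral part of such particle localization is commonly done with an intensity-weighted centroid over a background-subtracted region of interest, as sketched below. This is a generic illustration only; the axial (z) estimate and the FPGA implementation used in the actual system are not reproduced here.

    ```python
    # Generic sub-pixel lateral localization by intensity-weighted centroid.
    # The region of interest `roi` is assumed to contain a single particle.
    import numpy as np

    def centroid_xy(roi, background=None):
        img = roi.astype(float)
        if background is not None:
            img = np.clip(img - background, 0, None)   # suppress background
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        total = img.sum() + 1e-12
        return (xs * img).sum() / total, (ys * img).sum() / total
    ```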

  1. Determining 3-D motion and structure from image sequences

    NASA Technical Reports Server (NTRS)

    Huang, T. S.

    1982-01-01

    A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which motion and structure parameters are determined by solving a set of eight linear equations and a singular value decomposition of a 3x3 matrix. It is shown that the solution thus obtained is unique.
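
    The classic eight-point construction referred to in the abstract can be sketched as follows: each correspondence between normalized image coordinates contributes one linear equation in the nine entries of the essential matrix, eight correspondences determine it up to scale, and a singular value decomposition of the resulting 3x3 matrix yields the rotation and the translation direction. The sketch below is a generic illustration of that idea; degeneracy checks and the cheirality test needed to pick the unique physical solution are omitted.

    ```python
    # Hedged sketch of the eight-point idea: linear solve for the essential
    # matrix, then SVD of the 3x3 matrix to recover rotation and translation
    # direction (up to scale and sign).
    import numpy as np

    def essential_from_eight_points(x1, x2):
        """x1, x2: (8, 2) arrays of normalized image coordinates."""
        A = np.array([[u2 * u1, u2 * v1, u2,
                       v2 * u1, v2 * v1, v2,
                       u1,      v1,      1.0]
                      for (u1, v1), (u2, v2) in zip(x1, x2)])
        _, _, Vt = np.linalg.svd(A)          # E is the (approximate) null vector
        return Vt[-1].reshape(3, 3)

    def decompose_essential(E):
        U, _, Vt = np.linalg.svd(E)
        W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
        R1, R2 = U @ W @ Vt, U @ W.T @ Vt
        t = U[:, 2]                          # translation direction (up to sign)
        R1 *= np.sign(np.linalg.det(R1))     # force proper rotations (det = +1)
        R2 *= np.sign(np.linalg.det(R2))
        return (R1, R2), t
    ```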

  2. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment, but their continuous technical improvement, together with decreasing cost, is opening them up to quantitative three-dimensional (3D) motion analysis for the study of sport gestures and the evaluation of athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  3. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment, but their continuous technical improvement, together with decreasing cost, is opening them up to quantitative three-dimensional (3D) motion analysis for the study of sport gestures and the evaluation of athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  4. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display provides users with stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be simultaneously displayed without degrading resolution or increasing display cost unacceptably. An alternative to presenting many views is to measure the observer's position with a viewer-tracking sensor. Viewer tracking is a critical component for fluently rendering and accurately projecting the stereo video. To render stereo content with respect to the user's viewpoint and to project the content accurately onto the user's left and right eyes, a real-time viewer-tracking technique that allows the user to move around freely while watching the autostereoscopic display is developed in this study. It comprises face detection using multiple eigenspaces for various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. The Edge Orientation Histogram (EOH) feature with Real AdaBoost is also applied to improve the performance of the original AdaBoost algorithm. The AdaBoost algorithm with Haar features from the OpenCV library, originally developed by Intel, is used to detect the human face, and rotated images are used to enhance detection accuracy. The viewer-tracking process achieves a frame rate of up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still affected by varying environmental conditions, the accuracy, robustness and efficiency of the viewer-tracking system are also evaluated in this study.
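
    The Haar-cascade face detection step mentioned above can be reproduced in a few lines with OpenCV's bundled frontal-face cascade, as sketched below; the study's exact cascade, eigenspace models, rotation handling and block-matching tracker are not reproduced, and the parameters are illustrative.

    ```python
    # Hedged sketch of Haar-cascade face detection with OpenCV.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_faces(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)          # reduce lighting sensitivity
        return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                        minNeighbors=5, minSize=(60, 60))

    # Typical use on a webcam stream:
    # cap = cv2.VideoCapture(0)
    # ok, frame = cap.read()
    # for (x, y, w, h) in detect_faces(frame):
    #     cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    ```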

  5. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  6. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    PubMed

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  7. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground-truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In an MRI therapy-guidance scenario, imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.

  8. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Artuso, M.; Bachmair, F.; Bäni, L.; Bartosik, M.; Beacham, J.; Bellini, V.; Belyaev, V.; Bentele, B.; Berdermann, E.; Bergonzo, P.; Bes, A.; Brom, J.-M.; Bruzzi, M.; Cerv, M.; Chau, C.; Chiodini, G.; Chren, D.; Cindro, V.; Claus, G.; Collot, J.; Costa, S.; Cumalat, J.; Dabrowski, A.; D`Alessandro, R.; de Boer, W.; Dehning, B.; Dobos, D.; Dünser, M.; Eremin, V.; Eusebi, R.; Forcolin, G.; Forneris, J.; Frais-Kölbl, H.; Gan, K. K.; Gastal, M.; Goffe, M.; Goldstein, J.; Golubev, A.; Gonella, L.; Gorišek, A.; Graber, L.; Grigoriev, E.; Grosse-Knetter, J.; Gui, B.; Guthoff, M.; Haughton, I.; Hidas, D.; Hits, D.; Hoeferkamp, M.; Hofmann, T.; Hosslet, J.; Hostachy, J.-Y.; Hügging, F.; Jansen, H.; Janssen, J.; Kagan, H.; Kanxheri, K.; Kasieczka, G.; Kass, R.; Kassel, F.; Kis, M.; Kramberger, G.; Kuleshov, S.; Lacoste, A.; Lagomarsino, S.; Lo Giudice, A.; Maazouzi, C.; Mandic, I.; Mathieu, C.; McFadden, N.; McGoldrick, G.; Menichelli, M.; Mikuž, M.; Morozzi, A.; Moss, J.; Mountain, R.; Murphy, S.; Oh, A.; Olivero, P.; Parrini, G.; Passeri, D.; Pauluzzi, M.; Pernegger, H.; Perrino, R.; Picollo, F.; Pomorski, M.; Potenza, R.; Quadt, A.; Re, A.; Riley, G.; Roe, S.; Sapinski, M.; Scaringella, M.; Schnetzer, S.; Schreiner, T.; Sciortino, S.; Scorzoni, A.; Seidel, S.; Servoli, L.; Sfyrla, A.; Shimchuk, G.; Smith, D. S.; Sopko, B.; Sopko, V.; Spagnolo, S.; Spanier, S.; Stenson, K.; Stone, R.; Sutera, C.; Taylor, A.; Traeger, M.; Tromson, D.; Trischuk, W.; Tuve, C.; Uplegger, L.; Velthuis, J.; Venturi, N.; Vittone, E.; Wagner, S.; Wallny, R.; Wang, J. C.; Weilhammer, P.; Weingarten, J.; Weiss, C.; Wengler, T.; Wermes, N.; Yamouni, M.; Zavrtanik, M.

    2016-07-01

    In the present study, results towards the development of a 3D diamond sensor are presented. Conductive channels are produced inside the sensor bulk using a femtosecond laser. This electrode geometry allows full charge collection even for low-quality diamond sensors. Test-beam results show that charge is collected by these electrodes. In order to understand the channel growth parameters, with the goal of producing low-resistivity channels, conductive channels produced with a different laser setup are evaluated by Raman spectroscopy.

  9. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points in the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714
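
    To make the 2D/3D fusion step concrete, the sketch below shows a generic Extended Kalman Filter measurement update that corrects a 3D landmark estimate with a 2D detection under a pinhole camera model. This is a textbook update, not the paper's specific state parameterization; the focal length, noise levels, and example numbers are illustrative assumptions.

    ```python
    # Sketch of an EKF correction step fusing a 2D landmark with a 3D point state.
    import numpy as np

    def project(x, f=800.0):
        """Pinhole projection of a 3D point x = (X, Y, Z) to pixel offsets."""
        X, Y, Z = x
        return np.array([f * X / Z, f * Y / Z])

    def projection_jacobian(x, f=800.0):
        X, Y, Z = x
        return np.array([[f / Z, 0.0, -f * X / Z**2],
                         [0.0, f / Z, -f * Y / Z**2]])

    def ekf_update(x, P, z, R, f=800.0):
        """State x (3,), covariance P (3,3), 2D measurement z (2,), noise R (2,2)."""
        H = projection_jacobian(x, f)
        innovation = z - project(x, f)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_new = x + K @ innovation
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

    # Hypothetical usage:
    x0 = np.array([0.05, -0.02, 0.6])   # 3D landmark estimate (m)
    P0 = np.diag([1e-4, 1e-4, 1e-3])
    z = np.array([70.0, -25.0])         # detected 2D landmark (px)
    R = np.diag([2.0, 2.0])             # detector noise (px^2)
    x1, P1 = ekf_update(x0, P0, z, R)
    ```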

  10. Automatic alignment of standard views in 3D echocardiograms using real-time tracking

    NASA Astrophysics Data System (ADS)

    Orderud, Fredrik; Torp, Hans; Rabben, Stein Inge

    2009-02-01

    In this paper, we present an automatic approach for alignment of standard apical and short-axis slices, and correcting them for out-of-plane motion in 3D echocardiography. This is enabled by using real-time Kalman tracking to perform automatic left ventricle segmentation using a coupled deformable model, consisting of a left ventricle model, as well as structures for the right ventricle and left ventricle outflow tract. Landmark points from the segmented model are then used to generate standard apical and short-axis slices. The slices are automatically updated after tracking in each frame to correct for out-of-plane motion caused by longitudinal shortening of the left ventricle. Results from a dataset of 35 recordings demonstrate the potential for automating apical slice initialization and dynamic short-axis slices. Apical 4-chamber, 2-chamber and long-axis slices are generated based on an assumption of fixed angle between the slices, and short-axis slices are generated so that they follow the same myocardial tissue over the entire cardiac cycle. The error compared to manual annotation was 8.4 +/- 3.5 mm for apex, 3.6 +/- 1.8 mm for mitral valve and 8.4 +/- 7.4 for apical 4-chamber view. The high computational efficiency and automatic behavior of the method enables it to operate in real-time, potentially during image acquisition.

  11. Rigid Body Motion in Stereo 3D Simulation

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2010-01-01

    This paper addresses the difficulties experienced by first-year students studying rigid body motion at Sofia University. Most quantities describing the rigid body are in relations that the students find hard to visualize and understand. They also lose the notion of cause-result relations between vector quantities, such as the relation between…

  12. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869
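
    The core geometric operation behind multi-camera photogrammetric reconstruction is triangulating a 3D point from its image coordinates in two or more calibrated, synchronised views. The sketch below shows a standard linear (DLT) triangulation; the camera matrices and pixel coordinates are hypothetical, not the paper's data.

    ```python
    # Sketch of DLT triangulation of one 3D point from multiple camera views.
    import numpy as np

    def triangulate(proj_mats, pixels):
        """proj_mats: list of 3x4 camera projection matrices.
        pixels: list of (u, v) image coordinates of the same point."""
        rows = []
        for P, (u, v) in zip(proj_mats, pixels):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        A = np.stack(rows)
        # The homogeneous 3D point is the right singular vector with the
        # smallest singular value.
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]

    # Hypothetical two-camera setup: one reference camera and one shifted 1 m in x.
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.3, -0.1, 4.0, 1.0])
    pix = [(p @ X_true)[:2] / (p @ X_true)[2] for p in (P1, P2)]
    print(triangulate([P1, P2], pix))   # recovers ~[0.3, -0.1, 4.0]
    ```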

  13. Markerless 3D motion capture for animal locomotion studies.

    PubMed

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869

  14. 3-D tracking in a miniature time projection chamber

    NASA Astrophysics Data System (ADS)

    Vahsen, S. E.; Hedges, M. T.; Jaegle, I.; Ross, S. J.; Seong, I. S.; Thorpe, T. N.; Yamaoka, J.; Kadyk, J. A.; Garcia-Sciveres, M.

    2015-07-01

    The three-dimensional (3-D) detection of millimeter-scale ionization trails is of interest for detecting nuclear recoils in directional fast neutron detectors and in direction-sensitive searches for weakly interacting massive particles (WIMPs), which may constitute the Dark Matter of the universe. We report on performance characterization of a miniature gas target Time Projection Chamber (TPC) where the drift charge is avalanche-multiplied with Gas Electron Multipliers (GEMs) and detected with the ATLAS FE-I3 Pixel Application Specific Integrated Circuit (ASIC). We report on measurements of gain, gain resolution, point resolution, diffusion, angular resolution, and energy resolution with low-energy X-rays, cosmic rays, and alpha particles, using the gases Ar:CO2 (70:30) and He:CO2 (70:30) at atmospheric pressure. We discuss the implications for future, larger directional neutron and Dark Matter detectors. With an eye to designing and selecting components for these, we generalize our results into analytical expressions for detector performance whenever possible. We conclude by demonstrating the 3-D directional detection of a fast neutron source.

  15. THE THOMSON SURFACE. III. TRACKING FEATURES IN 3D

    SciTech Connect

    Howard, T. A.; DeForest, C. E.; Tappin, S. J.; Odstrcil, D.

    2013-03-01

    In this, the final installment in a three-part series on the Thomson surface, we present simulated observations of coronal mass ejections (CMEs) observed by a hypothetical polarizing white light heliospheric imager. Thomson scattering yields a polarization signal that can be exploited to locate observed features in three dimensions relative to the Thomson surface. We consider how the appearance of the CME changes with the direction of trajectory, using simulations of a simple geometrical shape and also of a more realistic CME generated using the ENLIL model. We compare the appearance in both unpolarized B and polarized pB light, and show that there is a quantifiable difference in the measured brightness of a CME between unpolarized and polarized observations. We demonstrate a technique for using this difference to extract the three-dimensional (3D) trajectory of large objects such as CMEs. We conclude with a discussion on how a polarizing heliospheric imager could be used to extract 3D trajectory information about CMEs or other observed features.

  16. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique which can directly obtain target trajectories and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from a complex background and decrease the complexity of moving-target image processing. Time delay integration increases the information in a single image frame so that one can directly obtain the moving trajectory. In this paper, we have studied the algorithm for flash trajectory imaging and performed initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can give motion parameters of moving targets.

  17. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is emerging as an essential part of the entertainment, medical, sports, education, and industrial fields with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interface devices: mouse, joysticks, MIDI sliders, and so on. Those devices could not enable a virtual human character to move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data to a 3-D game character in real time. The prototype experimental setup is successfully applied to a boxing game, which requires very fast movement of the human character.

  18. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Fenster, Aaron; Bax, Jeffrey; Gardi, Lori; Romagnoli, Cesare; Samarabandu, Jagath; Ward, Aaron D.

    2012-02-01

    Prostate biopsy is the clinical standard for prostate cancer diagnosis. To improve the accuracy of targeting suspicious locations, systems have been developed that can plan and record biopsy locations in a 3D TRUS image acquired at the beginning of the procedure. Some systems are designed for maximum compatibility with existing ultrasound equipment and are thus designed around the use of a conventional 2D TRUS probe, using controlled axial rotation of this probe to acquire a 3D TRUS reference image at the start of the biopsy procedure. Prostate motion during the biopsy procedure causes misalignments between the prostate in the live 2D TRUS images and the pre-acquired 3D TRUS image. We present an image-based rigid registration technique that aligns live 2D TRUS images, acquired immediately prior to biopsy needle insertion, with the pre-acquired 3D TRUS image to compensate for this motion. Our method was validated using 33 manually identified intrinsic fiducials in eight subjects and the target registration error was found to be 1.89 mm. We analysed the suitability of two image similarity metrics (normalized cross correlation and mutual information) for this task by plotting these metrics as a function of varying parameters in the six degree-of-freedom transformation space, with the ground truth plane obtained from registration as the starting point for the parameter exploration. We observed a generally convex behaviour of the similarity metrics. This encourages their use for this registration problem, and could assist in the design of a tool for the detection of misalignment, which could trigger the execution of a non-real-time registration, when needed during the procedure.
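
    The similarity-metric behaviour discussed above can be illustrated with a minimal normalized cross correlation (NCC) sweep: a live 2D image is compared against candidate reslices of the baseline, and the metric should peak near the true offset. Only an in-plane translation is swept here, scipy.ndimage.shift stands in for real TRUS reslicing, and the test images are synthetic assumptions.

    ```python
    # Minimal sketch of the NCC similarity metric for 2D-3D registration.
    import numpy as np
    from scipy.ndimage import shift

    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    # Hypothetical images: a "baseline slice" and a live image offset by 3 px.
    rng = np.random.default_rng(1)
    baseline_slice = rng.normal(size=(128, 128))
    live = shift(baseline_slice, (0, 3), order=1)

    # Sweep candidate x-translations and look for the metric's peak; a roughly
    # convex profile around the true offset is the behaviour reported above.
    candidates = np.arange(-10, 11)
    profile = [ncc(live, shift(baseline_slice, (0, t), order=1)) for t in candidates]
    print(candidates[int(np.argmax(profile))])   # expected near +3
    ```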

  19. Four-directional stereo-microscopy for 3D particle tracking with real-time error evaluation.

    PubMed

    Hay, R F; Gibson, G M; Lee, M P; Padgett, M J; Phillips, D B

    2014-07-28

    High-speed video stereo-microscopy relies on illumination from two distinct angles to create two views of a sample from different directions. The 3D trajectory of a microscopic object can then be reconstructed using parallax to combine 2D measurements of its position in each image. In this work, we evaluate the accuracy of 3D particle tracking using this technique, by extending the number of views from two to four directions. This allows us to record two independent sets of measurements of the 3D coordinates of tracked objects, and comparison of these enables measurement and minimisation of the tracking error in all dimensions. We demonstrate the method by tracking the motion of an optically trapped microsphere of 5 μm in diameter, and find an accuracy of 2-5 nm laterally, and 5-10 nm axially, representing a relative error of less than 2.5% of its range of motion in each dimension. PMID:25089484

  20. Simple 3-D stimulus for motion parallax and its simulation.

    PubMed

    Ono, Hiroshi; Chornenkyy, Yevgen; D'Amour, Sarah

    2013-01-01

    Simulation of a given stimulus situation should produce the same perception as the original. Rogers et al (2009 Perception 38 907-911) simulated Wheeler's (1982, PhD thesis, Rutgers University, NJ) motion parallax stimulus and obtained quite different perceptions. Wheeler's observers were unable to reliably report the correct direction of depth, whereas Rogers's were. With three experiments we explored the possible reasons for the discrepancy. Our results suggest that Rogers was able to see depth from the simulation partly due to his experience seeing depth with random dot surfaces. PMID:23964382

  1. A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.

    PubMed

    Mung, Jay; Vignon, Francois; Jain, Ameet

    2011-01-01

    In the past decade ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. The main limitation however is limited visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools. As the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2mm sensor show robust performance in a wide range of imaging conditions and tool position/orientations. The 3D tracking accuracy was 0.36 +/- 0.16mm throughout the imaging volume of 55 degrees x 27 degrees x 150mm. Additionally, the tool was successfully tracked inside a beating heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-mm accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large scale clinical impact. PMID:22003612

  2. Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.
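
    The accuracy measure used above, target registration error (TRE), reduces to applying the estimated rigid transform to one set of fiducials and measuring distances to their homologous counterparts. A minimal sketch follows; the rotation, translation, and fiducial coordinates are hypothetical placeholders.

    ```python
    # Sketch of computing TRE from homologous fiducial markers.
    import numpy as np

    def tre(R, t, fiducials_moving, fiducials_baseline):
        """R: 3x3 rotation, t: 3-vector translation (the estimated registration).
        fiducials_*: (N, 3) arrays of homologous fiducial coordinates in mm."""
        mapped = fiducials_moving @ R.T + t
        dists = np.linalg.norm(mapped - fiducials_baseline, axis=1)
        return dists.mean(), dists

    # Hypothetical example: identity registration, small residual offsets.
    moving = np.array([[10.0, 5.0, 3.0], [22.0, -4.0, 8.0], [15.0, 12.0, -6.0]])
    baseline = moving + np.array([[0.5, -0.3, 0.2]])
    mean_tre, per_fiducial = tre(np.eye(3), np.zeros(3), moving, baseline)
    ```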

  3. Modeling cell migration on filamentous tracks in 3D

    NASA Astrophysics Data System (ADS)

    Schwarz, J. M.

    2014-03-01

    Cell motility is integral to a number of physiological processes ranging from wound healing to immune response to cancer metastasis. Many studies of cell migration, both experimental and theoretical, have addressed various aspects of it in two dimensions, including protrusion and retraction at the level of single cells. However, the in vivo environment for a crawling cell is typically a three-dimensional environment, consisting of the extracellular matrix (ECM) and surrounding cells. Recent experiments demonstrate that some cells crawling along fibers of the ECM mimic the geometry of the fibers to become long and thin, as opposed to fan-like in two dimensions, and can remodel the ECM. Inspired by these experiments, a model cell consisting of beads and springs that moves along a tense semiflexible filamentous track is constructed and studied, paying particular attention to the mechanical feedback between the model cell and the track, as mediated by the active myosin-driven contractility and the catch/slip bond behavior of the focal adhesions, as the model cell crawls. This simple construction can then be scaled up to a model cell moving along a three-dimensional filamentous network, with a prescribed microenvironment, in order to make predictions for proposed experiments.

  4. Track of Right-Wheel Drag (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1

    [figure removed for brevity, see original site] Figure 2

    This 360-degree stereo panorama combines several frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the rover's 313th martian day (Nov. 19, 2004). The site, labeled Spirit site 93, is in the 'Columbia Hills' inside Gusev Crater. The rover tracks point westward. Spirit had driven eastward, in reverse and dragging its right front wheel, for about 30 meters (100 feet) on the day the picture was taken. Driving backwards while dragging that wheel is a precautionary strategy to extend the usefulness of the wheel for when it is most needed, because it has developed more friction than the other wheels. The right-hand track in this look backwards shows how the dragging disturbed the soil. This view is presented in a cylindrical-perspective projection with geometric seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  5. Ultra-Wideband Time-Difference-of-Arrival High Resolution 3D Proximity Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dekome, Kent; Dusl, John

    2010-01-01

    This paper describes a research and development effort for a prototype ultra-wideband (UWB) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being studied for use in tracking of lunar/Mars rovers and astronauts during early exploration missions when satellite navigation systems are not available. UWB impulse radio (UWB-IR) technology is exploited in the design and implementation of the prototype location and tracking system. A three-dimensional (3D) proximity tracking prototype design using commercially available UWB products is proposed to implement the Time-Difference-Of-Arrival (TDOA) tracking methodology in this research effort. The TDOA tracking algorithm is utilized for location estimation in the prototype system, not only to exploit the precise time resolution possible with UWB signals, but also to eliminate the need for synchronization between the transmitter and the receiver. Simulations show that the TDOA algorithm can achieve fine tracking resolution with low-noise TDOA estimates for close-in tracking. Field tests demonstrated that this prototype UWB TDOA High Resolution 3D Proximity Tracking System is feasible for providing positioning-awareness information in a 3D space to a robotic control system. This 3D tracking system was developed for a robotic control system in a facility called "Moonyard" at Honeywell Defense & System in Arizona under a Space Act Agreement.
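
    The TDOA location estimation described above boils down to solving for a position whose range differences to the receivers match the measured values. The sketch below shows one common approach, a few Gauss-Newton iterations on the range-difference residuals; the receiver layout, initial guess, and noiseless measurements are illustrative assumptions, not the prototype's actual geometry.

    ```python
    # Sketch of TDOA-based 3D position estimation via Gauss-Newton.
    import numpy as np

    def tdoa_solve(receivers, range_diffs, x0, iters=10):
        """receivers: (M, 3) receiver positions; receivers[0] is the reference.
        range_diffs: (M-1,) measured ||x - r_i|| - ||x - r_0|| in meters."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            d = np.linalg.norm(receivers - x, axis=1)      # ranges to all receivers
            resid = (d[1:] - d[0]) - range_diffs           # model minus measurement
            # Jacobian of each residual with respect to x.
            J = (x - receivers[1:]) / d[1:, None] - (x - receivers[0]) / d[0]
            dx, *_ = np.linalg.lstsq(J, -resid, rcond=None)
            x = x + dx
        return x

    # Hypothetical 4-receiver geometry and a noiseless target.
    rx = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
    target = np.array([3.0, 4.0, 2.0])
    d = np.linalg.norm(rx - target, axis=1)
    meas = d[1:] - d[0]
    print(tdoa_solve(rx, meas, x0=[1.0, 1.0, 1.0]))   # converges to ~target
    ```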

  6. Towards intraoperative monitoring of ablation using tracked 3D ultrasound elastography and internal palpation

    NASA Astrophysics Data System (ADS)

    Foroughi, Pezhman; Burgner, Jessica; Choti, Michael A.; Webster, Robert J., III; Hager, Gregory D.; Boctor, Emad M.

    2012-03-01

    B-mode ultrasound is widely used in liver ablation. However, the necrosis zone is typically not visible under b-mode ultrasound, since ablation does not necessarily change the acoustic properties of the tissue. In contrast, the change in tissue stiffness makes elastography ideal for monitoring ablation. Tissue palpation for elastography is typically applied at the imaging probe, by indenting it slightly into the tissue surface. However, in this paper we propose an alternate approach, where palpation is applied by a surgical instrument located inside the tissue. In our approach, the ablation needle is placed inside a steerable device called an active cannula and inserted into the tissue. A controlled motion is applied to the center of the ablation zone via the active cannula. Since the type and direction of motion is known, displacement can then be computed from two frames with the desired motion. The elastography results show the ablated region around the needle. While internal palpation provides excellent local contrast, freehand palpation from outside of the tissue via the transducer can provide a more global view of the region of the interest. For this purpose, we used a tracked 3D transducer to generate volumetric elastography images covering the ablated region. The tracking information is employed to improve the elastography results by selecting volume pairs suitable for elastography. This is an extension of our 2D frame selection technique which can cope with uncertainties associated with intra-operative elastography. In our experiments with phantom and ex-vivo tissue, we were able to generate high-quality images depicting the boundaries of the hard lesions.

  7. Nonrigid Autofocus Motion Correction for Coronary MR Angiography with a 3D Cones Trajectory

    PubMed Central

    Ingle, R. Reeve; Wu, Holden H.; Addy, Nii Okai; Cheng, Joseph Y.; Yang, Phillip C.; Hu, Bob S.; Nishimura, Dwight G.

    2014-01-01

    Purpose: To implement a nonrigid autofocus motion correction technique to improve respiratory motion correction of free-breathing whole-heart coronary magnetic resonance angiography (CMRA) acquisitions using an image-navigated 3D cones sequence. Methods: 2D image navigators acquired every heartbeat are used to measure superior-inferior, anterior-posterior, and right-left translation of the heart during a free-breathing CMRA scan using a 3D cones readout trajectory. Various tidal respiratory motion patterns are modeled by independently scaling the three measured displacement trajectories. These scaled motion trajectories are used for 3D translational compensation of the acquired data, and a bank of motion-compensated images is reconstructed. From this bank, a gradient entropy focusing metric is used to generate a nonrigid motion-corrected image on a pixel-by-pixel basis. The performance of the autofocus motion correction technique is compared with rigid-body translational correction and no correction in phantom, volunteer, and patient studies. Results: Nonrigid autofocus motion correction yields improved image quality compared to rigid-body-corrected images and uncorrected images. Quantitative vessel sharpness measurements indicate superiority of the proposed technique in 14 out of 15 coronary segments from three patient and two volunteer studies. Conclusion: The proposed technique corrects nonrigid motion artifacts in free-breathing 3D cones acquisitions, improving image quality compared to rigid-body motion correction. PMID:24006292
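
    The gradient entropy focusing idea above can be sketched in a simplified form: build a bank of candidate translation-compensated images and keep the one with the lowest gradient entropy (sharper images concentrate their gradient energy, lowering the entropy). The paper selects candidates pixel-by-pixel to handle nonrigid motion; here a single global choice is shown for brevity, and the candidate shifts and test image are hypothetical.

    ```python
    # Simplified sketch of autofocus selection with a gradient entropy metric.
    import numpy as np
    from scipy.ndimage import shift, sobel

    def gradient_entropy(img):
        gx, gy = sobel(img, axis=0), sobel(img, axis=1)
        mag = np.hypot(gx, gy)
        p = mag / (mag.sum() + 1e-12)
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    def autofocus(blurred, candidate_shifts):
        bank = [shift(blurred, s, order=1) for s in candidate_shifts]
        scores = [gradient_entropy(im) for im in bank]
        best = int(np.argmin(scores))       # sharper image -> lower entropy
        return bank[best], candidate_shifts[best]

    # Hypothetical usage with a synthetic image:
    rng = np.random.default_rng(2)
    img = rng.normal(size=(96, 96))
    shifts = [(dy, 0.0) for dy in np.linspace(-4, 4, 9)]
    corrected, chosen_shift = autofocus(img, shifts)
    ```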

  8. Motion-Corrected 3D Sonic Anemometer for Tethersondes and Other Moving Platforms

    NASA Technical Reports Server (NTRS)

    Bognar, John

    2012-01-01

    To date, it has not been possible to apply 3D sonic anemometers on tethersondes or similar atmospheric research platforms due to the motion of the supporting platform. A tethersonde module including both a 3D sonic anemometer and associated motion correction sensors has been developed, enabling motion-corrected 3D winds to be measured from a moving platform such as a tethersonde. Blimps and other similar lifting systems are used to support tethersondes meteorological devices that fly on the tether of a blimp or similar platform. To date, tethersondes have been limited to making basic meteorological measurements (pressure, temperature, humidity, and wind speed and direction). The motion of the tethersonde has precluded the addition of 3D sonic anemometers, which can be used for high-speed flux measurements, thereby limiting what has been achieved to date with tethersondes. The tethersonde modules fly on a tether that can be constantly moving and swaying. This would introduce enormous error into the output of an uncorrected 3D sonic anemometer. The motion correction that is required must be implemented in a low-weight, low-cost manner to be suitable for this application. Until now, flux measurements using 3D sonic anemometers could only be made if the 3D sonic anemometer was located on a rigid, fixed platform such as a tower. This limited the areas in which they could be set up and used. The purpose of the innovation was to enable precise 3D wind and flux measurements to be made using tether - sondes. In brief, a 3D accelerometer and a 3D gyroscope were added to a tethersonde module along with a 3D sonic anemometer. This combination allowed for the necessary package motions to be measured, which were then mathematically combined with the measured winds to yield motion-corrected 3D winds. At the time of this reporting, no tethersonde has been able to make any wind measurement other than a basic wind speed and direction measurement. The addition of a 3D sonic

  9. Feasibility of low-dose single-view 3D fiducial tracking concurrent with external beam delivery

    SciTech Connect

    Speidel, Michael A.; Wilfley, Brian P.; Hsu, Annie; Hristov, Dimitre

    2012-04-15

    Purpose: In external-beam radiation therapy, existing on-board x-ray imaging chains orthogonal to the delivery beam cannot recover 3D target trajectories from a single view in real-time. This limits their utility for real-time motion management concurrent with beam delivery. To address this limitation, the authors propose a novel concept for on-board imaging based on the inverse-geometry Scanning-Beam Digital X-ray (SBDX) system and evaluate its feasibility for single-view 3D intradelivery fiducial tracking. Methods: A chest phantom comprising a posterior wall, a central lung volume, and an anterior wall was constructed. Two fiducials were placed along the mediastinal ridge between the lung cavities: a 1.5 mm diameter steel sphere superiorly and a gold cylinder (2.6 mm length x 0.9 mm diameter) inferiorly. The phantom was placed on a linear motion stage that moved sinusoidally. Fiducial motion was along the source-detector (z) axis of the SBDX system with ±10 mm amplitude and a programmed period of either 3.5 s or 5 s. The SBDX system was operated at 15 frames per second, 100 kVp, providing good apparent conspicuity of the fiducials. With the stage moving, detector data were acquired and subsequently reconstructed into 15 planes with a 12 mm plane-to-plane spacing using digital tomosynthesis. A tracking algorithm was applied to the image planes for each temporal frame to determine the position of each fiducial in (x,y,z)-space versus time. A 3D time-sinusoidal motion model was fit to the measured 3D coordinates and root mean square (RMS) deviations about the fitted trajectory were calculated. Results: Tracked motion was sinusoidal and primarily along the source-detector (z) axis. The RMS deviation of the tracked z-coordinate ranged from 0.53 to 0.71 mm. The motion amplitude derived from the model fit agreed with the programmed amplitude to within 0.28 mm for the steel sphere and within -0.77 mm for the gold seed. The model fit periods agreed with the programmed
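
    The model-fitting step above (fitting a time-sinusoid to the tracked z-coordinate and reporting RMS deviation) is easy to reproduce in outline. The sketch below uses scipy's curve_fit on a synthetic trajectory; the sample rate, amplitude, noise level, and seed values are illustrative, not the measured phantom data.

    ```python
    # Sketch: fit a sinusoidal motion model to a tracked coordinate and
    # compute the RMS deviation about the fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def sinusoid(t, amplitude, period, phase, offset):
        return amplitude * np.sin(2 * np.pi * t / period + phase) + offset

    # Hypothetical tracked z(t): 15 fps, ~10 mm amplitude, 3.5 s period, noise.
    rng = np.random.default_rng(3)
    t = np.arange(0, 20, 1 / 15.0)
    z = sinusoid(t, 10.0, 3.5, 0.4, 2.0) + rng.normal(scale=0.6, size=t.size)

    # Seed the fit with a rough period estimate (e.g. from a dominant FFT peak).
    popt, _ = curve_fit(sinusoid, t, z, p0=[8.0, 3.4, 0.0, 0.0])
    residuals = z - sinusoid(t, *popt)
    rms = np.sqrt(np.mean(residuals ** 2))
    print(popt[:2], rms)   # fitted amplitude/period and RMS deviation (mm)
    ```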

  10. Eulerian and Lagrangian methods for vortex tracking in 2D and 3D flows

    NASA Astrophysics Data System (ADS)

    Huang, Yangzi; Green, Melissa

    2014-11-01

    Coherent structures are a key component of unsteady flows in shear layers. Improvements in experimental techniques have led to larger amounts of data and require automated procedures for vortex tracking. Many vortex criteria are Eulerian, and identify the structures by an instantaneous local swirling motion in the field, indicated by closed or spiral streamlines or pathlines in a reference frame. Alternatively, a Lagrangian Coherent Structures (LCS) analysis is a Lagrangian method based on quantities calculated along fluid particle trajectories. In the current work, vortex detection is demonstrated on data from the simulation of two cases: a 2D flow with a flat plate undergoing a 45° pitch-up maneuver and a 3D wall-bounded turbulent channel flow. Vortices are visualized and tracked by their centers and boundaries using Γ1, the Q criterion, and LCS saddle points. In the 2D flow case, the saddle-point traces showed a rapid acceleration of the structure, which indicates shedding from the plate. For the channel flow, the saddle-point traces show that the average structure convection speed exhibits a similar trend as a function of wall-normal distance as the mean velocity profile, and this leads to statistical quantities of vortex dynamics. Dr. Jeff Eldredge and his research group at UCLA are gratefully acknowledged for sharing the simulation database used in the current research. This work was supported by the Air Force Office of Scientific Research under AFOSR Award No. FA9550-14-1-0210.
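
    Of the Eulerian criteria named above, the Q criterion is the simplest to sketch: Q = (||Ω||² - ||S||²)/2, with S and Ω the symmetric and antisymmetric parts of the velocity gradient, and Q > 0 marking rotation-dominated (vortex-core) regions. The analytic swirling test field below is purely illustrative.

    ```python
    # Sketch of the Q criterion on a 2D velocity field.
    import numpy as np

    def q_criterion_2d(u, v, dx, dy):
        """u, v: 2D velocity components on a regular grid with spacing dx, dy."""
        dudy, dudx = np.gradient(u, dy, dx)
        dvdy, dvdx = np.gradient(v, dy, dx)
        # Squared Frobenius norms of the strain (S) and rotation (Omega) tensors.
        s_sq = dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx) ** 2
        w_sq = 0.5 * (dvdx - dudy) ** 2
        return 0.5 * (w_sq - s_sq)

    # Hypothetical swirling test field centred at the origin.
    y, x = np.mgrid[-1:1:128j, -1:1:128j]
    r2 = x**2 + y**2 + 1e-6
    u = -y * np.exp(-r2 / 0.1)
    v = x * np.exp(-r2 / 0.1)
    Q = q_criterion_2d(u, v, dx=2 / 127, dy=2 / 127)
    vortex_mask = Q > 0   # candidate vortex-core region
    ```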

  11. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two cues considered in this system. The system basically consists of three processes: the low-level process to extract image features, the middle-level process to establish correspondence in the stereo (spatial) and motion (temporal) modalities, and the high-level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.

  12. LayTracks3D: A new approach for meshing general solids using medial axis transform

    DOE PAGESBeta

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  13. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    NASA Astrophysics Data System (ADS)

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework of simultaneously estimating the motion and structure parameters of a 3D object by using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information is given. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable by using only one scan HRR and GMTI measurements, we designed an architecture to run the motion and structure filters in parallel by using multi-scan measurements. Moreover, to improve the estimation accuracy in large noise and/or false alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors have been significantly reduced by using the template information.

  14. Blind watermark algorithm on 3D motion model based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Qi, Hu; Zhai, Lang

    2013-12-01

    With the continuous development of 3D vision technology, digital watermarking, a leading choice for copyright protection, has gradually been fused with it. This paper proposes a blind watermarking scheme for 3D motion models based on the wavelet transform and loads it into the Vega real-time visual simulation system. Firstly, the 3D model is put through an affine transform, and the distances from the center of gravity to the vertices of the 3D object are taken to generate a one-dimensional discrete signal; this signal is then wavelet transformed, its frequency coefficients are modified to embed the watermark, and finally the watermarked 3D motion model is generated. In a fixed affine space, the scheme achieves robustness to translation, rotation, and scaling transforms. The results show that this approach performs well not only in robustness, but also in watermark invisibility.
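
    A minimal sketch of the embedding idea follows: form the 1D signal of centroid-to-vertex distances, perturb a band of wavelet detail coefficients with the watermark bits, reconstruct the distances, and move each vertex radially to match. The wavelet choice, decomposition level, embedding strength alpha, and the random mesh and watermark are all illustrative assumptions, not the paper's parameters.

    ```python
    # Sketch of wavelet-domain watermark embedding on centroid-to-vertex distances.
    import numpy as np
    import pywt

    def embed_watermark(vertices, bits, alpha=0.02, wavelet="db4", level=3):
        centroid = vertices.mean(axis=0)
        dist = np.linalg.norm(vertices - centroid, axis=1)      # 1D signal
        coeffs = pywt.wavedec(dist, wavelet, level=level)
        detail = coeffs[1]                                      # coarsest detail band
        n = min(len(bits), len(detail))
        detail[:n] += alpha * (2 * np.asarray(bits[:n]) - 1)    # +/- alpha per bit
        new_dist = pywt.waverec(coeffs, wavelet)[: len(dist)]
        # Move each vertex radially so its distance matches the watermarked one.
        scale = new_dist / np.maximum(dist, 1e-12)
        return centroid + (vertices - centroid) * scale[:, None]

    # Hypothetical mesh and 32-bit watermark:
    rng = np.random.default_rng(4)
    verts = rng.normal(size=(500, 3))
    wm = rng.integers(0, 2, size=32)
    watermarked = embed_watermark(verts, wm)
    ```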

  15. Model-based risk assessment for motion effects in 3D radiotherapy of lung tumors

    NASA Astrophysics Data System (ADS)

    Werner, René; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Handels, Heinz

    2012-02-01

    Although 4D CT imaging becomes available in an increasing number of radiotherapy facilities, 3D imaging and planning is still standard in current clinical practice. In particular for lung tumors, respiratory motion is a known source of uncertainty and should be accounted for during radiotherapy planning - which is difficult by using only a 3D planning CT. In this contribution, we propose applying a statistical lung motion model to predict patients' motion patterns and to estimate dosimetric motion effects in lung tumor radiotherapy if only 3D images are available. Being generated based on 4D CT images of patients with unimpaired lung motion, the model tends to overestimate lung tumor motion. It therefore promises conservative risk assessment regarding tumor dose coverage. This is exemplarily evaluated using treatment plans of lung tumor patients with different tumor motion patterns and for two treatment modalities (conventional 3D conformal radiotherapy and step-and-shoot intensity modulated radiotherapy). For the test cases, 4D CT images are available. Thus, a standard registration-based 4D dose calculation is also performed, which serves as a reference to judge the plausibility of the model-based 4D dose calculation. It will be shown that, if combined with an additional simple patient-specific breathing surrogate measurement (here: spirometry), the model-based dose calculation provides reasonable risk assessment of respiratory motion effects.

  16. On the integrability of the motion of 3D-Swinging Atwood machine and related problems

    NASA Astrophysics Data System (ADS)

    Elmandouh, A. A.

    2016-03-01

    In the present article, we study the motion of the 3D swinging Atwood machine. A new integrable case for this problem is announced. We also point out a new integrable case describing the motion of a heavy particle on a tilted cone.

  17. Structural response to 3D simulated earthquake motions in San Bernardino Valley

    USGS Publications Warehouse

    Safak, E.; Frankel, A.

    1994-01-01

    Structural response to one- and three-dimensional (3D) simulated motions in San Bernardino Valley from a hypothetical earthquake along the San Andreas fault with moment magnitude 6.5 and a rupture length of 30 km is investigated. The results show that the ground motions and the structural response vary dramatically with the type of simulation and the location. -from Authors

  18. 3-D Particle Tracking Velocimetry: Development and Applications in Small Scale Flows

    NASA Astrophysics Data System (ADS)

    Tien, Wei-Hsin

    The thesis contains two parts of studies. In part I, a novel volumetric velocimetry technique is developed to measure the 3-D flow field of small-scale flows. The technique utilizes a color-coded pinhole plate with multiple light sources aligned to each pinhole to achieve high particle image density and large measurable depth on a single-lens microscope system. A color separation algorithm and an improved particle identification algorithm are developed to identify individual particle images from each pinhole view. Furthermore, a calibration-based technique based on an epi-polar line search method is developed to reconstruct the spatial coordinates of the particles, and a new two-frame particle-tracking algorithm is developed to calculate the velocity field. The system was set up to achieve a magnification of 2.69, resulting in an imaging volume of 3.35 x 2.5 x 1.5 mm3, and showed satisfactory measurement accuracy. The technique was then further miniaturized to achieve a magnification of 10, resulting in an imaging volume of 600 x 600 x 600 microm3. The system was applied to a backward-facing step flow to test its ability to reconstruct the unsteady flow field with two-frame tracking. Finally, this technique was applied to a steady streaming flow field in a microfluidic device used to trap particles. The results revealed a three-dimensional flow structure that had not been observed in previous studies, and provided insights for the design of a more efficient trapping device. In part II, an in-vitro study was carried out to investigate the flow around a prosthetic venous valve. Using 2-D PIV, the dynamics of the valve motion was captured and the velocity fields were measured to investigate the effect of the sinus pocket and the coupling effect of a pair of valves. The PIV and hemodynamic results showed that the sinus pocket around the valve functioned as a flow regulator to smooth the entrained velocity profile and suppress the jet width. For current prosthetic

  19. Computational Graph Model for 3D Cells Tracking in Zebra Fish Datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Lelin; Xiong, Hongkai; Zhao, Yang; Zhang, Kai; Zhou, Xiaobo

    2007-11-01

    This paper presents a novel technique for tracking and identification of zebrafish cells in 3D image sequences, extending a graph-based multi-object tracking algorithm to 3D applications. As in previous work on the 2D graph-based method, separated cells are modeled as vertices connected by edges, and the tracking task is then simplified to matching vertices between graphs generated from consecutive frames. Graph-based tracking is composed of three steps: graph generation, initial source vertex selection, and graph saturation. To satisfy the demands of this work, separated cell records are segmented from the original datasets using 3D level-set algorithms. In addition, advancements are achieved in each of the steps, including graph regulations, multiple restrictions on source vertices, and enhanced flow quantifications. Those strategies compensate well for the limitations of the graph-based multi-object tracking method in 2D space. Experiments are carried out on 3D datasets sampled from zebrafish, the results of which show that this enhanced method could potentially be applied to the tracking of objects with diverse features.
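
    The vertex-matching step can be sketched with a simple minimum-cost assignment between segmented cell centroids in consecutive frames. This is a stand-in for the paper's graph saturation step, not its exact formulation; the centroids and the gating distance below are hypothetical.

    ```python
    # Sketch of frame-to-frame cell association via minimum-cost assignment.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def match_cells(centroids_a, centroids_b, max_dist=10.0):
        """centroids_*: (N, 3) and (M, 3) arrays of cell centres in one frame
        and the next. Returns a list of (index_a, index_b) matches."""
        cost = cdist(centroids_a, centroids_b)
        rows, cols = linear_sum_assignment(cost)
        # Reject assignments that jump further than a plausible cell movement.
        return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

    # Hypothetical frames: frame B is frame A perturbed slightly, plus one new cell.
    rng = np.random.default_rng(5)
    frame_a = rng.uniform(0, 100, size=(20, 3))
    frame_b = np.vstack([frame_a + rng.normal(scale=1.0, size=frame_a.shape),
                         [[50.0, 50.0, 50.0]]])
    print(len(match_cells(frame_a, frame_b)))   # typically 20 matches
    ```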

  20. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  1. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.

  2. The effect of motion on IMRT - looking at interplay with 3D measurements

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Yan, H.; Oldham, M.; Juang, T.; Adamovics, J.; Yin, F. F.

    2013-06-01

    Clinical recommendations to address tumor motion management have been derived from studies dealing with simulations and 2D measurements. 3D measurements may provide more insight and possibly alter the current motion management guidelines. This study provides an initial look at true 3D measurements involving leaf-motion deliveries by use of a motion phantom and the PRESAGE/DLOS dosimetry system. An IMRT plan and a VMAT plan were delivered to the phantom and analyzed by means of DVHs to determine whether the expansion of treatment volumes based on known imaging motion adequately covers the target. The DVHs confirmed that for these deliveries the expansion volumes were adequate to treat the intended target, although further studies should be conducted to allow for differences in parameters that could alter the results, such as delivery dose and breathing rate.
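
    The DVH analysis used above reduces to a cumulative histogram of dose within a structure mask. A minimal sketch follows; the synthetic dose grid and spherical target are placeholders, not measured dosimetry data.

    ```python
    # Sketch of building a cumulative dose-volume histogram (DVH) for a target.
    import numpy as np

    def cumulative_dvh(dose, mask, n_bins=200):
        """dose: 3D array of dose values (Gy); mask: boolean array, same shape.
        Returns (dose_bins, volume_fraction) where volume_fraction[i] is the
        fraction of the structure receiving at least dose_bins[i]."""
        d = dose[mask].ravel()
        bins = np.linspace(0.0, d.max(), n_bins)
        frac = np.array([(d >= b).mean() for b in bins])
        return bins, frac

    # Hypothetical spherical target inside a smooth dose distribution:
    z, y, x = np.mgrid[:64, :64, :64]
    r2 = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2
    target = r2 < 10 ** 2
    dose = 60.0 * np.exp(-r2 / 800.0)
    bins, frac = cumulative_dvh(dose, target)
    d95 = bins[frac >= 0.95][-1]   # highest dose level still covering 95% of the target
    ```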

  3. Confocal fluorometer for diffusion tracking in 3D engineered tissue constructs

    NASA Astrophysics Data System (ADS)

    Daly, D.; Zilioli, A.; Tan, N.; Buttenschoen, K.; Chikkanna, B.; Reynolds, J.; Marsden, B.; Hughes, C.

    2016-03-01

    We present results of the development of a non-contacting instrument, called fScan, based on scanning confocal fluorometry for assessing the diffusion of materials through a tissue matrix. There are many areas in healthcare diagnostics and screening where it is now widely accepted that the need for new quantitative monitoring technologies is a major pinch point in patient diagnostics and in vitro testing. With the increasing need to interpret 3D responses, this commonly involves tracking the diffusion of compounds, pharmacologically active species, and cells through a 3D matrix of tissue. Methods are available, but to support the advances that are currently only promised, this monitoring needs to be real-time, non-invasive, and economical. At the moment, commercial meters tend to be invasive and usually require a sample of the medium to be removed and processed prior to testing. This methodology clearly has a number of significant disadvantages. fScan combines a fiber-based optical arrangement with a compact, free-space optical front end that has been integrated so that the sample's diffusion can be measured without interference. This architecture is particularly important due to the "wet" nature of the samples. fScan is designed to measure constructs located within standard well plates, and a 2-D motion stage locates the required sample with respect to the measurement system. Results are presented that show how the meter has been used to evaluate the movement of samples through collagen constructs in situ without disturbing their kinetic characteristics. These kinetics were little understood prior to these measurements.

  4. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as the remote controllers with motion-sensing technology on the Nintendo Wii [1]. In particular, video-based human computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of releasing players from intractable game controllers. Moreover, for communication between humans and computers, video-based HCI is very attractive since it is intuitive, easy to get, and inexpensive. However, extracting semantic low-level features from video human motion data is still a major challenge, and the level of accuracy depends strongly on each subject's characteristics and environmental noise. More recently, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports titles, 'Angelina Jolie' in the movie Beowulf) and for analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, where a column corresponds to a sub-body part and a row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, 3D human motion-capture data are not pixel values, but are closer to the human level of semantics.

  5. High-throughput 3D tracking of bacteria on a standard phase contrast microscope

    NASA Astrophysics Data System (ADS)

    Taute, K. M.; Gude, S.; Tans, S. J.; Shimizu, T. S.

    2015-11-01

    Bacteria employ diverse motility patterns in traversing complex three-dimensional (3D) natural habitats. 2D microscopy misses crucial features of 3D behaviour, but the applicability of existing 3D tracking techniques is constrained by their performance or ease of use. Here we present a simple, broadly applicable, high-throughput 3D bacterial tracking method for use in standard phase contrast microscopy. Bacteria are localized at micron-scale resolution over a range of 350 × 300 × 200 μm by maximizing image cross-correlations between their observed diffraction patterns and a reference library. We demonstrate the applicability of our technique to a range of bacterial species and exploit its high throughput to expose hidden contributions of bacterial individuality to population-level variability in motile behaviour. The simplicity of this powerful new tool for bacterial motility research renders 3D tracking accessible to a wider community and paves the way for investigations of bacterial motility in complex 3D environments.
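
    The library-based localization idea above can be sketched in simplified form: compare an observed diffraction pattern against reference patterns recorded at known depths and take the depth with the best normalized correlation. The real method also maximizes over lateral shifts; this zero-shift version and the Gaussian-ring test patterns are illustrative only.

    ```python
    # Simplified sketch of reference-library depth localization by correlation.
    import numpy as np

    def make_pattern(z, size=64):
        """Toy diffraction-like ring whose radius varies with defocus z (um)."""
        yy, xx = np.mgrid[:size, :size] - size // 2
        r = np.hypot(xx, yy)
        return np.exp(-((r - (16 + 0.5 * z)) ** 2) / 8.0)

    def localize_z(observed, library):
        """library: dict mapping z (um) -> reference pattern of the same shape."""
        def ncc(a, b):
            a, b = a - a.mean(), b - b.mean()
            return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        scores = {z: ncc(observed, ref) for z, ref in library.items()}
        return max(scores, key=scores.get)

    # Hypothetical library every 2 um over a 40 um range, observation at z = 14 um.
    library = {z: make_pattern(z) for z in range(-20, 21, 2)}
    rng = np.random.default_rng(6)
    observed = make_pattern(14) + rng.normal(scale=0.05, size=(64, 64))
    print(localize_z(observed, library))   # expected 14
    ```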

  6. High-throughput 3D tracking of bacteria on a standard phase contrast microscope

    PubMed Central

    Taute, K.M.; Gude, S.; Tans, S.J.; Shimizu, T.S.

    2015-01-01

    Bacteria employ diverse motility patterns in traversing complex three-dimensional (3D) natural habitats. 2D microscopy misses crucial features of 3D behaviour, but the applicability of existing 3D tracking techniques is constrained by their performance or ease of use. Here we present a simple, broadly applicable, high-throughput 3D bacterial tracking method for use in standard phase contrast microscopy. Bacteria are localized at micron-scale resolution over a range of 350 × 300 × 200 μm by maximizing image cross-correlations between their observed diffraction patterns and a reference library. We demonstrate the applicability of our technique to a range of bacterial species and exploit its high throughput to expose hidden contributions of bacterial individuality to population-level variability in motile behaviour. The simplicity of this powerful new tool for bacterial motility research renders 3D tracking accessible to a wider community and paves the way for investigations of bacterial motility in complex 3D environments. PMID:26522289

  7. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to control intuitively the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.

  8. Detailed Evaluation of Five 3D Speckle Tracking Algorithms Using Synthetic Echocardiographic Recordings.

    PubMed

    Alessandrini, Martino; Heyde, Brecht; Queiros, Sandro; Cygan, Szymon; Zontak, Maria; Somphone, Oudom; Bernard, Olivier; Sermesant, Maxime; Delingette, Herve; Barbosa, Daniel; De Craene, Mathieu; ODonnell, Matthew; Dhooge, Jan

    2016-08-01

    A plethora of techniques for cardiac deformation imaging with 3D ultrasound, typically referred to as 3D speckle tracking techniques, are available from academia and industry. Although the benefits of single methods over alternative ones have been reported in separate publications, the intrinsic differences in the data and definitions used makes it hard to compare the relative performance of different solutions. To address this issue, we have recently proposed a framework to simulate realistic 3D echocardiographic recordings and used it to generate a common set of ground-truth data for 3D speckle tracking algorithms, which was made available online. The aim of this study was therefore to use the newly developed database to contrast non-commercial speckle tracking solutions from research groups with leading expertise in the field. The five techniques involved cover the most representative families of existing approaches, namely block-matching, radio-frequency tracking, optical flow and elastic image registration. The techniques were contrasted in terms of tracking and strain accuracy. The feasibility of the obtained strain measurements to diagnose pathology was also tested for ischemia and dyssynchrony. PMID:26960220

  9. Vision-Based Long-Range 3D Tracking, applied to Underground Surveying Tasks

    NASA Astrophysics Data System (ADS)

    Mossel, Annette; Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes; Chmelina, Klaus

    2014-04-01

    To address the need for highly automated positioning systems in underground construction, we present a long-range 3D tracking system based on infrared optical markers. It provides continuous 3D position estimation of static or kinematic targets with low latency over a tracking volume of 12 m x 8 m x 70 m (width x height x depth). Over the entire volume, a relative 3D point accuracy with a maximal deviation ≤ 22 mm is ensured for target rotations of yaw and pitch between 0° and 45° and roll between 0° and 360°. No preliminary sighting of the target(s) is necessary, since the system automatically locks onto a target without user intervention and autonomously starts tracking as soon as a target is within its view. The proposed system needs a minimal hardware setup, consisting of two machine vision cameras and a standard workstation for data processing. This allows for quick installation with minimal disturbance of construction work. The data processing pipeline ensures camera calibration and tracking during ongoing underground activities. Tests in real underground scenarios prove the system's capability to act as a 3D position measurement platform for multiple underground tasks that require long range, low latency and high accuracy. These tasks include the simultaneous tracking of personnel, machines or robots.
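
    A minimal sketch of the geometric core of such a two-camera system: linear (DLT) triangulation of a marker from two calibrated views. The projection matrices, focal length and baseline below are assumptions chosen for illustration, not the parameters of the system described above.

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          # Linear (DLT) triangulation of one 3D point from two calibrated views.
          # P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v).
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]

      def project(P, X):
          # Pinhole projection of a 3D point to pixel coordinates.
          x = P @ np.append(X, 1.0)
          return x[:2] / x[2]

      # Illustrative setup: 1000 px focal length, 10 m baseline along x.
      K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 480.0], [0.0, 0.0, 1.0]])
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])

      X_true = np.array([2.0, 1.0, 50.0])   # marker 50 m down the optical axis
      print("reconstructed:", triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))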

  10. 3D target tracking in infrared imagery by SIFT-based distance histograms

    NASA Astrophysics Data System (ADS)

    Yan, Ruicheng; Cao, Zhiguo

    2011-11-01

    The SIFT tracking algorithm is an excellent point-based tracking algorithm with high tracking performance and accuracy, owing to its robustness against rotation, scale change and occlusion. However, when tracking a large 3D target in complicated real scenarios in a forward-looking infrared (FLIR) image sequence taken from an airborne moving platform, a tracked point located on a vertical surface usually shifts away from the correct position. In this paper, we propose a novel algorithm for 3D target tracking in FLIR image sequences. Our approach uses SIFT keypoints detected in consecutive frames for point correspondence. The candidate position of the tracked point is first estimated by computing the affine transformation from local corresponding SIFT keypoints. The correct position is then located via an optimization procedure: Euclidean distances between a candidate point and the SIFT keypoints nearby are calculated and formed into a SIFT-based distance histogram, which defines a cost of associating each candidate point with the correct tracked point based on the topology of each candidate point with respect to its surrounding SIFT keypoints. Minimization of this cost is formulated as a combinatorial optimization problem. Experiments demonstrate that the proposed algorithm efficiently improves tracking performance and accuracy.
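
    A minimal sketch, under strong simplifications, of two building blocks named above: estimating the affine transformation from already-matched keypoints by least squares, and scoring a candidate point with a distance histogram of its surrounding keypoints. Actual SIFT detection and matching (e.g. via an image-processing library) and the combinatorial cost minimization are omitted; all names and numbers are illustrative.

      import numpy as np

      def estimate_affine(src, dst):
          # Least-squares 2D affine transform mapping src (N,2) onto dst (N,2).
          n = len(src)
          A = np.zeros((2 * n, 6))
          A[0::2, 0:2], A[0::2, 2] = src, 1.0
          A[1::2, 3:5], A[1::2, 5] = src, 1.0
          b = dst.reshape(-1)
          p, *_ = np.linalg.lstsq(A, b, rcond=None)
          return np.array([[p[0], p[1], p[2]], [p[3], p[4], p[5]]])

      def apply_affine(M, pts):
          return pts @ M[:, :2].T + M[:, 2]

      def distance_histogram(point, keypoints, bins=8, r_max=60.0):
          # Histogram of Euclidean distances from `point` to nearby keypoints.
          d = np.linalg.norm(keypoints - point, axis=1)
          d = d[d < r_max]
          h, _ = np.histogram(d, bins=bins, range=(0.0, r_max))
          return h / max(h.sum(), 1)

      def histogram_cost(h_ref, h_cand):
          # Cost of associating a candidate with the tracked point (L1 on histograms).
          return np.abs(h_ref - h_cand).sum()

      # Toy example: previous-frame keypoints shifted by a small affine motion.
      rng = np.random.default_rng(0)
      prev_kp = rng.uniform(0, 200, size=(40, 2))
      M_true = np.array([[1.02, 0.01, 3.0], [-0.01, 1.02, -2.0]])
      curr_kp = apply_affine(M_true, prev_kp) + rng.normal(0, 0.3, size=(40, 2))

      M_est = estimate_affine(prev_kp, curr_kp)
      tracked_prev = np.array([100.0, 100.0])
      candidate = apply_affine(M_est, tracked_prev[None, :])[0]   # predicted position

      h_ref = distance_histogram(tracked_prev, prev_kp)
      h_cand = distance_histogram(candidate, curr_kp)
      print("predicted position:", candidate, " histogram cost:", histogram_cost(h_ref, h_cand))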

  11. Low-cost respiratory motion tracking system

    NASA Astrophysics Data System (ADS)

    Goryawala, Mohammed; Del Valle, Misael; Wang, Jiali; Byrne, James; Franquiz, Juan; McGoron, Anthony

    2008-03-01

    Lung cancer is the cause of more than 150,000 deaths annually in the United States. Early and accurate detection of lung tumors with Positron Emission Tomography (PET) has enhanced lung tumor diagnosis. However, respiratory motion during the PET imaging period reduces detection accuracy by blurring the images. Chest motion can serve as a surrogate for tracking the motion of the tumor. To track chest motion, an optical laser system was designed which follows the motion of a patterned card placed on the chest by illuminating the pattern with two structured light sources, generating 8 positional markers. The positions of the markers are used to determine the vertical, translational, and rotational motion of the card. Information from the markers is used to decide whether the patient's breathing is abnormal compared with their normal breathing pattern. The system is built from an inexpensive web-camera and two low-cost laser pointers. The experiments were carried out using a dynamic phantom developed in-house to simulate chest movement with different amplitudes and breathing periods. Motion of the phantom was tracked by the developed system and also by a pressure transducer for comparison. The studies showed a correlation of 96.6% between the respiratory tracking waveforms from the two systems, demonstrating the capability of the system. Unlike the pressure transducer method, the new system tracks motion in 3 dimensions. The developed system also demonstrates the ability to track a sliding motion of the patient in the direction parallel to the bed and provides the potential to stop the PET scan in case of such motion.

  12. Note: Time-gated 3D single quantum dot tracking with simultaneous spinning disk imaging

    NASA Astrophysics Data System (ADS)

    DeVore, M. S.; Stich, D. G.; Keller, A. M.; Cleyrat, C.; Phipps, M. E.; Hollingsworth, J. A.; Lidke, D. S.; Wilson, B. S.; Goodwin, P. M.; Werner, J. H.

    2015-12-01

    We describe recent upgrades to a 3D tracking microscope to include simultaneous Nipkow spinning disk imaging and time-gated single-particle tracking (SPT). Simultaneous 3D molecular tracking and spinning disk imaging enable the visualization of cellular structures and proteins around a given fluorescently labeled target molecule. The addition of photon time-gating to the SPT hardware improves signal to noise by discriminating against Raman scattering and short-lived fluorescence. In contrast to camera-based SPT, single-photon arrival times are recorded, enabling time-resolved spectroscopy (e.g., measurement of fluorescence lifetimes and photon correlations) to be performed during single molecule/particle tracking experiments.

  13. Rapid 3D Track Reconstruction with the BaBar Trigger Upgrade

    SciTech Connect

    Bailey, S

    2004-05-24

    As the PEP-II luminosity increases the BaBar trigger and dataflow systems must accommodate the increasing data rate. A significant source of background events at the first trigger level comes from beam particle interactions with the beampipe and synchrotron masks, which are separated from the interaction region by more than 20 cm. The BaBar trigger upgrade will provide 3D tracking capabilities at the first trigger level in order to remove background events by distinguishing the origin of particle tracks. Each new z0-pT Discriminator (ZPD) board processes over 1 gigabyte of data per second in order to reconstruct the tracks and make trigger decisions based upon the 3D track parameters.

  14. Note: Time-gated 3D single quantum dot tracking with simultaneous spinning disk imaging

    SciTech Connect

    DeVore, M. S.; Stich, D. G.; Keller, A. M.; Phipps, M. E.; Hollingsworth, J. A.; Goodwin, P. M.; Werner, J. H.; Cleyrat, C.; Lidke, D. S.; Wilson, B. S.

    2015-12-15

    We describe recent upgrades to a 3D tracking microscope to include simultaneous Nipkow spinning disk imaging and time-gated single-particle tracking (SPT). Simultaneous 3D molecular tracking and spinning disk imaging enable the visualization of cellular structures and proteins around a given fluorescently labeled target molecule. The addition of photon time-gating to the SPT hardware improves signal to noise by discriminating against Raman scattering and short-lived fluorescence. In contrast to camera-based SPT, single-photon arrival times are recorded, enabling time-resolved spectroscopy (e.g., measurement of fluorescence lifetimes and photon correlations) to be performed during single molecule/particle tracking experiments.

  15. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas; Zhang, Geoffrey; Huang, Tzung-Chi; Lin, Kang-Ping

    2004-09-01

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel, which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension, or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm; this was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with the greatest motion occurring at the gastro-oesophageal junction. Presented at The IASTED Second International Conference on Biomedical Engineering (BioMED 2004), Innsbruck, Austria, 16-18 February 2004.
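
    A minimal sketch of a local 3D optical-flow estimate (Lucas-Kanade-style least squares over a small window), showing the kind of per-voxel displacement computation that optical-flow approaches build on; it is not the authors' modified 3D OFM algorithm, and the Gaussian-blob volumes are synthetic.

      import numpy as np

      def local_flow_3d(vol0, vol1, center, half_window=3):
          # Estimate one 3D displacement for the neighbourhood around `center` by
          # solving Ix*u + Iy*v + Iz*w = -It in the least-squares sense over a window.
          gz, gy, gx = np.gradient(0.5 * (vol0 + vol1))     # spatial gradients
          gt = vol1 - vol0                                  # temporal difference
          z, y, x = center
          s = (slice(z - half_window, z + half_window + 1),
               slice(y - half_window, y + half_window + 1),
               slice(x - half_window, x + half_window + 1))
          A = np.stack([gx[s].ravel(), gy[s].ravel(), gz[s].ravel()], axis=1)
          b = -gt[s].ravel()
          uvw, *_ = np.linalg.lstsq(A, b, rcond=None)
          return uvw                                        # (u, v, w) in voxels (x, y, z)

      # Toy example: a Gaussian blob shifted by (1, 0.5, 0) voxels between volumes.
      zz, yy, xx = np.mgrid[0:32, 0:32, 0:32].astype(float)
      blob = lambda cx, cy, cz: np.exp(-(((xx - cx) ** 2 + (yy - cy) ** 2 + (zz - cz) ** 2) / 18.0))
      vol0 = blob(16.0, 16.0, 16.0)
      vol1 = blob(17.0, 16.5, 16.0)

      print("estimated (u, v, w):", local_flow_3d(vol0, vol1, center=(16, 16, 16)))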

  16. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    PubMed

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Because the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D-to-3D and 3D-to-orthogonal-2D-slices registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s, with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D-to-slices method yielded a success rate of 88.9% in 2.3 s, with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max). PMID:18044549

  17. Design and Performance Evaluation on Ultra-Wideband Time-Of-Arrival 3D Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Dusl, John

    2012-01-01

    A three-dimensional (3D) Ultra-Wideband (UWB) Time-of-Arrival (TOA) tracking system has been studied at NASA Johnson Space Center (JSC) to provide tracking capability inside the International Space Station (ISS) modules for various applications. One application is to locate and report places where crew members have experienced possibly elevated carbon-dioxide levels and felt unwell. Accurately locating those places in a multipath-intensive environment like the ISS modules requires a robust real-time location system (RTLS) that can provide the required accuracy and update rate. A 3D UWB TOA tracking system with two-way ranging has been proposed and studied. The designed system will be tested in the Wireless Habitat Testbed, which simulates the ISS module environment. In this presentation, we discuss the 3D TOA tracking algorithm and the performance evaluation based on different tracking baseline configurations. The simulation results show that two configurations of the tracking baseline are feasible. With a 100 picosecond standard deviation (STD) of the TOA estimates, an average tracking error of 0.2392 feet (about 7 centimeters) can be achieved for the "Twisted Rectangle" configuration, while an average tracking error of 0.9183 feet (about 28 centimeters) can be achieved for the "Slightly-Twisted Top Rectangle" configuration. The tracking accuracy improves further as the STD of the TOA estimates improves. With a 10 picosecond STD of the TOA estimates, an average tracking error of 0.0239 feet (less than 1 centimeter) can be achieved for the "Twisted Rectangle" configuration.
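
    A minimal sketch of the geometry behind TOA tracking: converting TOA estimates to ranges and solving for the 3D position by nonlinear least squares. The anchor ("baseline") coordinates and noise level below are invented for illustration and are not the configurations evaluated above.

      import numpy as np
      from scipy.optimize import least_squares

      C = 0.299792458  # speed of light in m/ns

      def residuals(p, anchors, ranges):
          # Difference between modelled and measured ranges for candidate position p.
          return np.linalg.norm(anchors - p, axis=1) - ranges

      def locate(anchors, toa_ns, p0=None):
          # Least-squares 3D position estimate from one-way TOA measurements (ns).
          ranges = toa_ns * C
          if p0 is None:
              p0 = anchors.mean(axis=0)
          return least_squares(residuals, p0, args=(anchors, ranges)).x

      # Illustrative 4-anchor baseline (metres) and a true tag position.
      anchors = np.array([[0, 0, 0], [4, 0, 2], [4, 3, 0], [0, 3, 2]], dtype=float)
      p_true = np.array([1.5, 1.2, 0.8])

      rng = np.random.default_rng(1)
      toa_true_ns = np.linalg.norm(anchors - p_true, axis=1) / C
      toa_meas_ns = toa_true_ns + rng.normal(0, 0.1, size=4)   # 100 ps TOA jitter

      print("estimated:", locate(anchors, toa_meas_ns), "true:", p_true)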

  18. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method. PMID:21652284

  19. Recovery of liver motion and deformation due to respiration using laparoscopic freehand 3D ultrasound system.

    PubMed

    Nakamoto, Masahiko; Hirayama, Hiroaki; Sato, Yoshinobu; Konishi, Kozo; Kakeji, Yoshihiro; Hashizume, Makoto; Tamura, Shinichi

    2006-01-01

    This paper describes a rapid method for intraoperative recovery of liver motion and deformation due to respiration using a laparoscopic freehand 3D ultrasound (US) system. Using the proposed method, 3D US images of the liver can be extended to 4D US images by acquiring several additional sequences of 2D US images during a couple of respiration cycles. Time-varying 2D US images are acquired on several sagittal image planes, and their 3D positions and orientations are measured using a laparoscopic ultrasound probe to which a miniature magnetic 3D position sensor is attached. During the acquisition, the probe is assumed to move together with the liver surface. In-plane 2D deformation fields and the respiratory phase are estimated from the time-varying 2D US images, and the time-varying 3D deformation fields on the sagittal image planes are then obtained by combining the 3D positions and orientations of the image planes. The time-varying 3D deformation field of the volume is obtained by interpolating the 3D deformation fields estimated on the several planes. The proposed method was evaluated by in vivo experiments using a pig liver. PMID:17354794

  20. X-ray stereo imaging for micro 3D motions within non-transparent objects

    NASA Astrophysics Data System (ADS)

    Salih, Wasil H. M.; Buytaert, Jan A. N.; Dirckx, Joris J. J.

    2012-03-01

    We propose a new technique to measure the 3D motion of marker points along a straight path within an object using x-ray stereo projections. From recordings of two x-ray projections with a 90° separation angle, the 3D coordinates of marker points can be determined. By synchronizing the x-ray exposure time to the motion event, a moving marker leaves a trace in the image whose gray scale is linearly proportional to the marker velocity. From the gray scale along the motion path, the 3D motion (velocity) is obtained. The path of motion was reconstructed and compared with the applied waveform. The results showed that the accuracy is on the order of 5%. The difference in displacement amplitude between the new method and laser vibrometry was less than 5 μm. We demonstrated the method on the motion of the malleus ossicle in the gerbil middle ear as a function of pressure applied to the eardrum. The new method has the advantage over existing methods such as laser vibrometry that the structures under study do not need to be visually exposed. Due to the short measurement time and the high resolution, the method can be useful in the field of biomechanics for a variety of applications.

  1. Moving Human Path Tracking Based on Video Surveillance in 3d Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose we integrate video surveillance data with a 3D indoor model of the building and develop a method for tracking the moving path of a single person. We process the surveillance videos to detect single human moving traces; then we match the depth information of the 3D scenes to the constructed 3D indoor network model and define the human traces in the 3D indoor space. Finally, the single human traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. Experiments with a single person have verified the effectiveness and robustness of the method.

  2. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has become a widely known and familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods for displaying 3D images; we focus on the approach based on ray reproduction. This approach needs many viewpoint images to achieve full parallax, because it displays a different image depending on the viewpoint. We propose to reduce wasted rays by limiting the projector's rays to the region around the viewer using a spinning mirror, and thereby to increase the effectiveness of the display device in achieving a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulation, we confirmed the scanning range and the horizontal trajectory of the rays, as well as the switching of viewpoints and the convergence performance of the rays in the vertical direction. We therefore confirmed that a full-parallax display can be realized.

  3. The BaBar Level 1 Drift-Chamber Trigger Upgrade With 3D Tracking

    SciTech Connect

    Chai, X.D.; /Iowa U.

    2005-11-29

    At BABAR, the Level 1 Drift Chamber trigger is being upgraded to reduce increasing background rates while the PEP-II luminosity keeps improving. This upgrade uses the drift time information and stereo wires in the drift chamber to perform a 3D track reconstruction that effectively rejects background events spread out along the beam line.

  4. 3D single molecule tracking in thick cellular specimens using multifocal plane microscopy

    NASA Astrophysics Data System (ADS)

    Ram, Sripad; Ward, E. Sally; Ober, Raimund J.

    2011-03-01

    One of the major challenges in single molecule microscopy concerns 3D tracking of single molecules in cellular specimens. This has been a major impediment to studying many fundamental cellular processes, such as protein transport across thick cellular specimens (e.g. a cell monolayer). Here we show that multifocal plane microscopy (MUM), an imaging modality developed by our group, provides the much needed solution to this longstanding problem. While MUM was previously used for 3D single molecule tracking at shallow depths (~ 1 micron) in live cells, the question arises whether MUM can also live up to the significant challenge of tracking single molecules in thick samples. Here, by substantially expanding the capabilities of MUM, we demonstrate 3D tracking of quantum-dot labeled molecules in a ~ 10 micron thick cell monolayer. In this way we have reconstructed the complete 3D intracellular trafficking itinerary of single molecules at high spatial and temporal precision in a thick cell sample. Funding support: NIH and the National MS Society.

  5. A new 3D tracking method for cell mechanics investigation exploiting the capabilities of digital holography in microscopy

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Merola, F.; Fusco, S.; Netti, P. A.; Ferraro, P.

    2014-03-01

    A method for 3D tracking has been developed that exploits the features of Digital Holography in Microscopy (DHM). In the framework of a self-consistent platform for manipulation and measurement of biological specimens, we use DHM for quantitative and completely label-free analysis of samples with low amplitude contrast. The tracking capability extends the potential of DHM, allowing the motion of appropriate probes to be monitored and correlated with sample properties. Complete 3D tracking of the probes was obtained without the amplitude refocusing required in traditional tracking processes. Moreover, in biology and biomedical research one of the main topics is the understanding of the morphology and mechanics of cells and microorganisms. Biological samples present low amplitude contrast, which limits the information that can be retrieved through optical bright-field microscope measurements; the main effect of such objects on propagating light is in phase, known as phase retardation or phase shift. DHM is an innovative and alternative approach in microscopy and a good candidate for non-invasive and complete specimen analysis, because its main characteristic is the possibility to discern between intensity and phase information, performing quantitative mapping of the Optical Path Length. In this paper, the flexibility of DH is employed to analyse the cell mechanics of unstained cells subjected to appropriate stimuli. DHM is used to measure all the parameters needed to understand the deformations induced by external and controlled stresses on in-vitro cells. Our configuration allows 3D tracking of micro-particles and simultaneously furnishes quantitative phase-contrast maps. Experimental results are presented and discussed for in vitro cells.

  6. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography

    PubMed Central

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J.; French, Paul M. W.; McGinty, James

    2015-01-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound. PMID:25909009

  7. Motion corrected LV quantification based on 3D modelling for improved functional assessment in cardiac MRI

    NASA Astrophysics Data System (ADS)

    Liew, Y. M.; McLaughlin, R. A.; Chan, B. T.; Aziz, Y. F. Abdul; Chee, K. H.; Ung, N. M.; Tan, L. K.; Lai, K. W.; Ng, S.; Lim, E.

    2015-04-01

    Cine MRI is a clinical reference standard for the quantitative assessment of cardiac function, but reproducibility is confounded by motion artefacts. We explore the feasibility of a motion corrected 3D left ventricle (LV) quantification method, incorporating multislice image registration into the 3D model reconstruction, to improve reproducibility of 3D LV functional quantification. Multi-breath-hold short-axis and radial long-axis images were acquired from 10 patients and 10 healthy subjects. The proposed framework reduced misalignment between slices to subpixel accuracy (2.88 to 1.21 mm), and improved interstudy reproducibility for 5 important clinical functional measures, i.e. end-diastolic volume, end-systolic volume, ejection fraction, myocardial mass and 3D-sphericity index, as reflected in a reduction in the sample size required to detect statistically significant cardiac changes: a reduction of 21-66%. Our investigation on the optimum registration parameters, including both cardiac time frames and number of long-axis (LA) slices, suggested that a single time frame is adequate for motion correction whereas integrating more LA slices can improve registration and model reconstruction accuracy for improved functional quantification especially on datasets with severe motion artefacts.

  8. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0–1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
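
    A minimal sketch of the model-building step described above: applying PCA to a set of DVFs (one per respiratory phase) and reconstructing a full 3D DVF from a few coefficients. The deformable registration that produces the DVFs and the fitting of coefficients to the fast 2D cine images are outside this sketch, and the data are synthetic.

      import numpy as np

      def build_pca_motion_model(dvfs, n_components=2):
          # dvfs: (n_phases, n_voxels*3) displacement vector fields, one row per phase.
          # Returns the mean DVF, the principal components and the per-phase coefficients.
          mean = dvfs.mean(axis=0)
          U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
          components = Vt[:n_components]
          weights = (U * S)[:, :n_components]
          return mean, components, weights

      def reconstruct_dvf(mean, components, coeffs):
          # Full 3D DVF corresponding to a set of model coefficients.
          return mean + coeffs @ components

      # Toy example: 10 phases of a sinusoidal, two-mode respiratory motion.
      n_phases, n_voxels = 10, 500
      phase = np.linspace(0, 2 * np.pi, n_phases, endpoint=False)
      base = np.random.default_rng(2).normal(0, 1, size=(1, n_voxels * 3))
      dvfs = np.sin(phase)[:, None] * base + 0.1 * np.cos(phase)[:, None] * np.roll(base, 1)

      mean, comps, weights = build_pca_motion_model(dvfs, n_components=2)
      dvf_hat = reconstruct_dvf(mean, comps, weights[3])
      print("reconstruction error at phase 3:", np.abs(dvf_hat - dvfs[3]).max())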

  9. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy.

    PubMed

    Stemkens, Bjorn; Tijssen, Rob H N; de Senneville, Baudouin Denis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-07-21

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy. PMID:27362636

  10. Particle Filters and Occlusion Handling for Rigid 2D-3D Pose Tracking.

    PubMed

    Lee, Jehoon; Sandhu, Romeil; Tannenbaum, Allen

    2013-08-01

    In this paper, we address the problem of 2D-3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and then extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting exact region properties of the object, which play a pivotal role in maintaining the track for region-based tracking methods. To this end, the choice of how to invoke the objective functional is made dynamically online, based on the degree of dependency between the predictions and measurements of the system, in accordance with the degree of occlusion and the variation of the object's pose. This scheme provides the robustness to deal with occlusions by an obstacle whose statistical properties differ from those of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios. PMID:24058277
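
    A minimal sketch of a generic bootstrap particle filter (predict, weight, resample) for a single noisy pose parameter, illustrating the filtering machinery referred to above; it is not the authors' joint segmentation/pose-estimation formulation, and the motion model and noise levels are invented.

      import numpy as np

      def particle_filter(measurements, n_particles=500, proc_std=0.05, meas_std=0.2):
          # Bootstrap particle filter for a 1D pose parameter observed with noise.
          rng = np.random.default_rng(4)
          particles = rng.normal(0.0, 1.0, n_particles)   # initial pose hypotheses
          estimates = []
          for z in measurements:
              # Predict: propagate each hypothesis with a random-walk motion model.
              particles += rng.normal(0.0, proc_std, n_particles)
              # Weight: likelihood of the measurement under each hypothesis.
              w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
              w /= w.sum()
              estimates.append(np.sum(w * particles))     # posterior mean estimate
              # Resample: draw particles in proportion to their weights.
              particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
          return np.array(estimates)

      # Toy track: a slowly varying pose angle observed with noisy measurements.
      t = np.linspace(0, 4 * np.pi, 80)
      true_pose = 0.5 * np.sin(0.5 * t)
      z = true_pose + np.random.default_rng(5).normal(0, 0.2, t.size)

      est = particle_filter(z)
      print("RMS error, raw measurements:", np.sqrt(np.mean((z - true_pose) ** 2)))
      print("RMS error, particle filter: ", np.sqrt(np.mean((est - true_pose) ** 2)))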

  11. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

    In order to achieve full navigation using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. First, we propose a target detection strategy over a sequence of several images from the 3D model to initialize the tracking. The overall purpose of this approach is to robustly match each image with the model views of the target. To this end we designed a line segment detection and matching method based on multi-scale space technology. Experiments on real images showed that our method is highly robust under various image changes. Second, we propose a method based on a 3D particle filter (PF) coupled with M-estimation to track the target and estimate its pose efficiently. In the proposed approach, a similarity observation model was designed according to a new distance function for line segments. Then, based on the tracking results of the PF, the pose is optimized using M-estimation. Experiments indicate that the proposed method can effectively track and accurately estimate the pose of a freely moving target in an unconstrained environment.

  12. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible. PMID:26529730
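
    A minimal sketch of vortex detection on a single 2D slice of a complex order-parameter field: a vortex pierces a plaquette where the phase winds by ±2π around it. The full method above additionally links such detections into 1D vortex lines in 3D and tracks them over time; that part is not shown, and the test field is synthetic.

      import numpy as np

      def wrap(a):
          # Wrap phase differences into (-pi, pi].
          return (a + np.pi) % (2 * np.pi) - np.pi

      def vortex_charges(psi):
          # Winding number on each plaquette of a 2D slice of a complex field psi.
          theta = np.angle(psi)
          d_x = wrap(np.diff(theta, axis=1))           # bond phase differences
          d_y = wrap(np.diff(theta, axis=0))
          # Circulation around each unit plaquette, taken counter-clockwise.
          circ = d_x[:-1, :] + d_y[:, 1:] - d_x[1:, :] - d_y[:, :-1]
          return np.rint(circ / (2 * np.pi)).astype(int)

      # Toy example: a single +1 vortex at (12.5, 12.5) on a 25x25 grid.
      y, x = np.mgrid[0:25, 0:25].astype(float)
      psi = (x - 12.5) + 1j * (y - 12.5)               # phase winds once around centre
      charges = vortex_charges(psi)
      print("total charge:", charges.sum(), " vortex plaquettes:", np.argwhere(charges != 0))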

  13. Eye Tracking to Explore the Impacts of Photorealistic 3D Representations in Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

    Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload than traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representations can facilitate map reading and navigation in digital environments using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective, less efficient and required a higher cognitive workload than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. These empirical results can help improve the usability of pedestrian navigation maps in future designs.

  14. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.

  15. Effects of 3D random correlated velocity perturbations on predicted ground motions

    USGS Publications Warehouse

    Hartzell, S.; Harmsen, S.; Frankel, A.

    2010-01-01

    Three-dimensional, finite-difference simulations of a realistic finite-fault rupture on the southern Hayward fault are used to evaluate the effects of random, correlated velocity perturbations on predicted ground motions. Velocity perturbations are added to a three-dimensional (3D) regional seismic velocity model of the San Francisco Bay Area using a 3D von Karman random medium. Velocity correlation lengths of 5 and 10 km and standard deviations in the velocity of 5% and 10% are considered. The results show that significant deviations in predicted ground velocities are seen in the calculated frequency range (≤1 Hz) for standard deviations in velocity of 5% to 10%. These results have implications for the practical limits on the accuracy of scenario ground-motion calculations and on retrieval of source parameters using higher-frequency, strong-motion data.

  16. Method for dose-reduced 3D catheter tracking on a scanning-beam digital x-ray system using dynamic electronic collimation

    NASA Astrophysics Data System (ADS)

    Dunkerley, David A. P.; Funk, Tobias; Speidel, Michael A.

    2016-03-01

    Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3D catheter tracking. This work proposes a method of dose-reduced 3D tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. Positions in the 2D focal spot array are selectively activated to create a region-of-interest (ROI) x-ray field around the tracked catheter. The ROI position is updated for each frame based on a motion vector calculated from the two most recent 3D tracking results. The technique was evaluated with SBDX data acquired as a catheter tip inside a chest phantom was pulled along a 3D trajectory. DEC scans were retrospectively generated from the detector images stored for each focal spot position. DEC imaging of a catheter tip in a volume measuring 11.4 cm across at isocenter required 340 active focal spots per frame, versus 4473 spots in full-FOV mode. The dose-area-product (DAP) and peak skin dose (PSD) for DEC versus full field-of-view (FOV) scanning were calculated using an SBDX Monte Carlo simulation code. DAP was reduced to 7.4% to 8.4% of the full-FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full-FOV value. The root-mean-squared-deviation between DEC-based 3D tracking coordinates and full-FOV 3D tracking coordinates was less than 0.1 mm. The 3D distance between the tracked tip and the sheath centerline averaged 0.75 mm. Dynamic electronic collimation can reduce dose with minimal change in tracking performance.

  17. Towards a magnetic localization system for 3-D tracking of tongue movements in speech-language therapy.

    PubMed

    Cheng, Chihwen; Huo, Xueliang; Ghovanloo, Maysam

    2009-01-01

    This paper presents a new magnetic localization system based on a compact triangular sensor setup and three different optimization algorithms, intended for tracking tongue motion in the 3-D oral space. A small permanent magnet, secured on the tongue by tissue adhesives, will be used as a tracer. The magnetic field variations due to tongue motion are detected by a 3-D magneto-inductive sensor array outside the mouth and wirelessly transmitted to a computer. The position and rotation angles of the tracer are reconstructed from the sensor outputs and the magnetic dipole equation using the DIRECT, Powell, and Nelder-Mead optimization algorithms. The localization accuracy and processing time of the three algorithms are compared using one data set in which the source-sensor distance was varied from 40 to 150 mm. The Powell algorithm showed the best performance, with 0.92 mm accuracy in position and 0.7° in orientation. The average processing time was 43.9 ms/sample, which can satisfy real-time tracking up to approximately 20 Hz. PMID:19964478
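
    A minimal sketch of the reconstruction idea: fitting the position of a magnetic dipole to 3-axis sensor readings with the Nelder-Mead optimizer, assuming (for brevity) a known, fixed dipole moment. The sensor grid, moment magnitude and noise level are invented and do not reproduce the triangular sensor setup described above.

      import numpy as np
      from scipy.optimize import minimize

      MU0_4PI = 1e-7  # mu_0 / (4 pi), in T*m/A

      def dipole_field(r_sensor, r_dipole, m):
          # Magnetic flux density of a point dipole with moment m at the sensor positions.
          r = r_sensor - r_dipole                  # (N, 3)
          rn = np.linalg.norm(r, axis=1, keepdims=True)
          return MU0_4PI * (3 * r * (r @ m)[:, None] / rn ** 5 - m / rn ** 3)

      def localize(sensors, readings, m, p0):
          # Find the dipole position that best explains the sensor readings.
          cost = lambda p: np.sum((dipole_field(sensors, p, m) - readings) ** 2)
          return minimize(cost, p0, method="Nelder-Mead", options={"xatol": 1e-6}).x

      # Illustrative setup: 8 three-axis sensors on a 2x4 grid, 60 mm in front of the mouth.
      gx, gy = np.meshgrid([-30e-3, -10e-3, 10e-3, 30e-3], [-15e-3, 15e-3])
      sensors = np.column_stack([gx.ravel(), gy.ravel(), np.full(8, 60e-3)])

      m = np.array([0.0, 0.0, 0.1])                # dipole moment (A*m^2), assumed known
      p_true = np.array([5e-3, -8e-3, 12e-3])      # tracer position (m)
      readings = dipole_field(sensors, p_true, m)
      readings += np.random.default_rng(3).normal(0, 1e-9, readings.shape)  # ~1 nT noise

      print("estimated (mm):", 1e3 * localize(sensors, readings, m, p0=np.zeros(3)))
      print("true (mm):     ", 1e3 * p_true)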

  18. Alignment of 3D Building Models and TIR Video Sequences with Line Tracking

    NASA Astrophysics Data System (ADS)

    Iwaszczuk, D.; Stilla, U.

    2014-11-01

    Thermal infrared imagery of urban areas has become interesting for urban climate investigations and thermal building inspections. Using a flying platform such as a UAV or a helicopter for the acquisition and combining the thermal data with 3D building models via texturing provides a valuable groundwork for large-area building inspections. However, such thermal textures are only useful for further analysis if they are extracted geometrically correctly. This requires a good coregistration between the 3D building models and the thermal images, which cannot be achieved by direct georeferencing alone. Hence, this paper presents a methodology for the alignment of 3D building models and oblique TIR image sequences taken from a flying platform. In a single image, line correspondences between model edges and image line segments are found using an accumulator approach, and based on these correspondences an optimal camera pose is calculated to ensure the best match between the projected model and the image structures. Across the sequence, the linear features are tracked based on visibility prediction. The results of the proposed methodology are presented using a TIR image sequence taken from a helicopter over a densely built-up urban area. The novelty of this work lies in employing the uncertainty of the 3D building models and in an innovative tracking strategy based on a priori knowledge from the 3D building model and on visibility checking.

  19. A 3D space-time motion evaluation for image registration in digital subtraction angiography.

    PubMed

    Taleb, N; Bentoutou, Y; Deforges, O; Taleb, M

    2001-01-01

    In modern clinical practice, Digital Subtraction Angiography (DSA) is a powerful technique for the visualization of blood vessels in a sequence of X-ray images. A serious problem encountered in this technique is the presence of artifacts due to patient motion. The resulting artifacts frequently lead to misdiagnosis or rejection of a DSA image sequence. In this paper, a new technique for removing both global and local motion artifacts is presented. It is based on a 3D space-time motion evaluation that separates pixels changing values because of motion from those changing values because of contrast flow. This technique proves to be very efficient at correcting patient motion artifacts and is computationally cheap. Experimental results with several clinical data sets show that this technique is very fast and results in higher quality images. PMID:11179698

  20. Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane.

    PubMed

    Sandau, Martin; Koblauch, Henrik; Moeslund, Thomas B; Aanæs, Henrik; Alkjær, Tine; Simonsen, Erik B

    2014-09-01

    Estimating 3D joint rotations in the lower extremities accurately and reliably remains unresolved in markerless motion capture, despite extensive studies in the past decades. The main problems have been ascribed to the limited accuracy of the 3D reconstructions. Accordingly, the purpose of the present study was to develop a new approach based on highly detailed 3D reconstructions in combination with a translational and rotational unconstrained articulated model. The highly detailed 3D reconstructions were synthesized from an eight camera setup using a stereo vision approach. The subject specific articulated model was generated with three rotational and three translational degrees of freedom for each limb segment and without any constraints to the range of motion. This approach was tested on 3D gait analysis and compared to a marker based method. The experiment included ten healthy subjects in whom hip, knee and ankle joint were analysed. Flexion/extension angles as well as hip abduction/adduction closely resembled those obtained from the marker based system. However, the internal/external rotations, knee abduction/adduction and ankle inversion/eversion were less reliable. PMID:25085672

  1. 3D single-molecule tracking using one- and two-photon excitation microscopy

    NASA Astrophysics Data System (ADS)

    Liu, Cong; Perillo, Evan P.; Zhuang, Quincy; Huynh, Khang T.; Dunn, Andrew K.; Yeh, Hsin-Chih

    2014-03-01

    Three dimensional single-molecule tracking (3D-SMT) has revolutionized the way we study fundamental cellular processes. By analyzing the spatial trajectories of individual molecules (e.g. a receptor or a signaling molecule) in 3D space, one can discern the internalization or transport dynamics of these molecules, study the heterogeneity of subcellular structures, and elucidate the complex spatiotemporal regulation mechanisms. Sub-diffraction localization precision, sub-millisecond temporal resolution and tens-of-seconds observation period are the benchmarks of current 3D-SMT techniques. We have recently built two molecular tracking systems in our labs. The first system is a previously reported confocal tracking system, which we denote as the 1P-1E-4D (one-photon excitation, one excitation beam, and four fiber-coupled detectors) system. The second system is a whole new design that is based on two-photon excitation, which we denote as the 2P-4E-1D (two-photon excitation, four excitation beams, and only one detector) system. Here we compare these two systems based on Monte Carlo simulation of tracking a diffusing fluorescent molecule. Through our simulation, we have characterized the limitation of individual systems and optimized the system parameters such as magnification, z-plane separation, and feedback gains.

  2. Motion cue effects on pilot tracking

    NASA Technical Reports Server (NTRS)

    Ringland, R. F.; Stapleford, R. L.

    1972-01-01

    The results of two successive experimental investigations of the effects of motion cues on manual control tracking tasks are reported. The first of these was an IFR single-axis VTOL roll attitude control task. Describing function data show the dominant motion feedback quantity to be angular velocity. The second experimental task was multiaxis, that of precision hovering of a VTOL using separated instrument displays with reduced motion amplitude scaling. Performance data and pilot opinion show angular position to be the dominant cue when simulator linear motion is absent.

  3. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a method for measuring the three-dimensional (3D) movement of the forearm and upper arm during the baseball pitching motion using inertial sensors, without requiring careful sensor installation. Although high-accuracy measurement of sports motion is currently achieved with optical motion capture systems, these have disadvantages such as camera calibration and restrictions on the measurement location. The proposed method for 3D measurement of the pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm corresponds to that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. The experimental results for the measurement of pitching motion show that the trajectories of the shoulder, elbow and wrist estimated by the proposed method are highly correlated with those from a motion capture system, with an estimation error of about 10%.
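
    A minimal sketch of one ingredient of such a method: doubly integrating world-frame acceleration and removing drift by forcing the end of the record to match known final conditions (here, zero final velocity). Orientation tracking from the gyroscope and the sensor-to-limb calibration are omitted, and the movement, bias and noise below are synthetic.

      import numpy as np

      def integrate_with_endpoint_correction(acc_world, dt, v_final=None, p_final=None):
          # Doubly integrate world-frame acceleration (N, 3) and distribute the
          # end-point error linearly over the record so the final velocity/position
          # agree with known values (e.g. zero when the arm comes to rest).
          n = len(acc_world)
          t = np.arange(n) * dt
          vel = np.cumsum(acc_world, axis=0) * dt
          if v_final is not None:
              vel -= np.outer(t / t[-1], vel[-1] - v_final)     # linear drift removal
          pos = np.cumsum(vel, axis=0) * dt
          if p_final is not None:
              pos -= np.outer(t / t[-1], pos[-1] - p_final)
          return vel, pos

      # Toy example: a 0.5 s forward-and-back hand movement sampled at 1 kHz.
      dt, n = 1e-3, 500
      t = np.arange(n) * dt
      true_acc = np.column_stack([8 * np.sin(4 * np.pi * t), np.zeros(n), np.zeros(n)])
      meas_acc = true_acc + 0.05 + np.random.default_rng(7).normal(0, 0.2, (n, 3))  # bias + noise

      vel, pos = integrate_with_endpoint_correction(meas_acc, dt, v_final=np.zeros(3))
      print("final velocity after correction (m/s):", np.round(vel[-1], 4))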

  4. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns

    NASA Astrophysics Data System (ADS)

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-09-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.

  5. 3D imaging of particle-scale rotational motion in cyclically driven granular flows

    NASA Astrophysics Data System (ADS)

    Harrington, Matt; Powers, Dylan; Cooper, Eric; Losert, Wolfgang

    Recent experimental advances have enabled three-dimensional (3D) imaging of motion, structure, and failure within granular systems. 3D imaging allows researchers to directly characterize bulk behaviors that arise from particle- and meso-scale features. For instance, segregation of a bidisperse system of spheres under cyclic shear can originate from microscopic irreversibilities and the development of convective secondary flows. Rotational motion and frictional rotational coupling, meanwhile, have been less explored in such experimental 3D systems, especially under cyclic forcing. In particular, relative amounts of sliding and/or rolling between pairs of contacting grains could influence the reversibility of both trajectories, in terms of both position and orientation. In this work, we apply the Refractive Index Matched Scanning technique to a granular system that is cyclically driven and measure both translational and rotational motion of individual grains. We relate measured rotational motion to resulting shear bands and convective flows, further indicating the degree to which pairs and neighborhoods of grains collectively rotate.

  6. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-01-01

    Remote monitoring of Parkinson's Disease (PD) patients with inertia sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: the automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of him. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recording of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the ratings based on video and (b) the automatically classified UPDRS is 0.48 and with (c) the 3D avatar it is 0.47. The 3D avatar is similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400

  7. 4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Berbeco, Ross; Lewis, John

    2015-03-01

    A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) on the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For physical phantom datasets, the average tumor localization error (TLE) and (95th percentile) in two datasets were 0.95 (2.2) mm. For digital phantoms assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time compared to planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, while they were 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones, respectively. 4DCBCT-based models were shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time. Thus, generating 3D fluoroscopic images based on 4DCBCT-based motion models can capture both inter- and intra-fraction anatomical changes during treatment.
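
    A minimal sketch of the coefficient-optimization step: given a PCA motion model (mean DVF plus components) and a measured 2D projection, find the coefficients whose warped-and-projected reference volume best matches that projection. The warp, projector and one-component motion model below are drastically simplified, synthetic stand-ins for the 4DCBCT pipeline described above.

      import numpy as np
      from scipy.ndimage import map_coordinates
      from scipy.optimize import minimize

      def warp(volume, dvf):
          # Warp a volume with a dense displacement field dvf of shape (3, *volume.shape).
          grid = np.indices(volume.shape).astype(float)
          return map_coordinates(volume, grid + dvf, order=1, mode="nearest")

      def project(volume):
          # Simple forward projection: sum along one axis.
          return volume.sum(axis=1)

      def fit_coefficients(reference, projection, mean_dvf, components):
          # Optimize PCA coefficients so the projected, warped reference matches
          # the measured 2D projection.
          shape = (3,) + reference.shape
          def cost(c):
              dvf = (mean_dvf + c @ components).reshape(shape)
              return np.sum((project(warp(reference, dvf)) - projection) ** 2)
          return minimize(cost, np.zeros(len(components)), method="Powell").x

      # Toy example: a blob volume, motion model = pure shift along one axis.
      zz, yy, xx = np.indices((24, 24, 24)).astype(float)
      reference = np.exp(-((xx - 12) ** 2 + (yy - 12) ** 2 + (zz - 12) ** 2) / 12.0)

      n_vox = reference.size
      component = np.zeros((1, 3 * n_vox))
      component[0, :n_vox] = 1.0                      # unit displacement along z everywhere
      mean_dvf = np.zeros(3 * n_vox)

      true_c = np.array([2.0])                        # true shift coefficient: 2 voxels
      observed = project(warp(reference, (mean_dvf + true_c @ component).reshape(3, 24, 24, 24)))

      print("fitted coefficient:", fit_coefficients(reference, observed, mean_dvf, component))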

  8. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction.

    PubMed

    Bao, Qian; Jiang, Chenglong; Lin, Yun; Tan, Weixian; Wang, Zhirui; Hong, Wen

    2016-01-01

    With a short linear array configured in the cross-track direction, downward looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) can obtain a 3-D image of an imaging scene. To improve the cross-track resolution, sparse recovery methods have been investigated in recent years. In the compressive sensing (CS) framework, the reconstruction performance depends on the properties of the measurement matrix. This paper concerns techniques to optimize the measurement matrix and to deal with the measurement-matrix mismatch caused by off-grid scatterers. In the cross-track reconstruction model, the measurement matrix is mainly affected by the configuration of the antenna phase centers (APCs); thus, two mutual-coherence-based criteria are proposed to optimize the configuration of the APCs. On the other hand, to compensate for the mismatch of the measurement matrix, a sparse Bayesian inference based method is introduced into the cross-track reconstruction that jointly estimates the scatterers and the off-grid error. Experiments demonstrate the performance of the proposed APC configuration schemes and the proposed cross-track reconstruction method. PMID:27556471
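
    A minimal sketch of the mutual coherence criterion mentioned above: the largest normalized inner product between distinct columns of the measurement matrix, evaluated here for a toy steering-matrix model of two candidate APC spacings. The matrix model, wavelength and APC positions are invented and do not reproduce the paper's imaging geometry.

      import numpy as np

      def mutual_coherence(A):
          # Largest absolute normalized inner product between distinct columns of A.
          An = A / np.linalg.norm(A, axis=0, keepdims=True)
          G = np.abs(An.conj().T @ An)
          np.fill_diagonal(G, 0.0)
          return G.max()

      def cross_track_matrix(apc_positions, grid, wavelength=0.03):
          # Toy steering/measurement matrix: one row per antenna phase centre,
          # one column per candidate cross-track position.
          k = 2 * np.pi / wavelength
          return np.exp(1j * k * np.outer(apc_positions, grid))

      rng = np.random.default_rng(6)
      grid = np.linspace(-50, 50, 128)                 # candidate cross-track positions (m)

      uniform_apcs = np.linspace(0, 0.5, 8)            # 8 APCs, uniform spacing
      random_apcs = np.sort(rng.uniform(0, 0.5, 8))    # 8 APCs, random spacing

      for name, apcs in [("uniform", uniform_apcs), ("random", random_apcs)]:
          mu = mutual_coherence(cross_track_matrix(apcs, grid))
          print(f"{name:8s} APC configuration: mutual coherence = {mu:.3f}")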

  9. Fast parallel interferometric 3D tracking of numerous optically trapped particles and their hydrodynamic interaction.

    PubMed

    Ruh, Dominic; Tränkle, Benjamin; Rohrbach, Alexander

    2011-10-24

    Multi-dimensional, correlated particle tracking is a key technology to reveal dynamic processes in living and synthetic soft matter systems. In this paper we present a new method for tracking micron-sized beads in parallel and in all three dimensions, faster and more precisely than existing techniques. Using an acousto-optic deflector and two quadrant photodiodes, we can track numerous optically trapped beads at rates of up to tens of kHz with a precision of a few nanometers by back-focal-plane interferometry. By time-multiplexing the laser focus, we can individually calibrate all traps and all tracking signals in 3D within a few seconds. We show 3D histograms and calibration constants for nine beads in a quadratic arrangement, although trapping and tracking are also easily possible for more beads and for arbitrary 2D arrangements. As an application, we investigate the hydrodynamic coupling and diffusion anomalies of spheres trapped in a 3 × 3 arrangement. PMID:22109012

  10. A 3D front tracking method on a CPU/GPU system

    SciTech Connect

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe a method to port a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are computed on the GPU, while interface mesh adaptation is performed on the CPU. The convergence of the method is assessed on test problems with prescribed velocity fields. Performance results show overall speedups from 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.

  11. Vision-based endoscope tracking for 3D ultrasound image-guided surgical navigation.

    PubMed

    Yang, L; Wang, J; Ando, T; Kubota, A; Yamashita, H; Sakuma, I; Chiba, T; Kobayashi, E

    2015-03-01

    This work introduces a self-contained framework for endoscopic camera tracking by combining 3D ultrasonography with endoscopy. The approach can be readily incorporated into surgical workflows without installing external tracking devices. By fusing the ultrasound-constructed scene geometry with endoscopic vision, this integrated approach addresses issues related to initialization, scale ambiguity, and interest point inadequacy that may be faced by conventional vision-based approaches when applied to fetoscopic procedures. Vision-based pose estimations were demonstrated by phantom and ex vivo monkey placenta imaging. The potential contribution of this method may extend beyond fetoscopic procedures to include general augmented reality applications in minimally invasive procedures. PMID:25263644

  12. 3D silicon sensors with variable electrode depth for radiation hard high resolution particle tracking

    NASA Astrophysics Data System (ADS)

    Da Vià, C.; Borri, M.; Dalla Betta, G.; Haughton, I.; Hasi, J.; Kenney, C.; Povoli, M.; Mendicino, R.

    2015-04-01

    3D sensors, with electrodes micro-processed inside the silicon bulk using Micro-Electro-Mechanical System (MEMS) technology, were industrialized in 2012 and were installed in the first detector upgrade at the LHC, the ATLAS IBL, in 2014. They are the radiation-hardest sensors ever made. A new idea is now being explored to enhance the three-dimensional nature of 3D sensors by processing collecting electrodes at different depths inside the silicon bulk. This technique uses the electric field strength to suppress the charge collection effectiveness of the regions outside the p-n electrodes' overlap. Evidence of this property is supported by test-beam data from irradiated and non-irradiated devices bump-bonded to pixel readout electronics, and by simulations. Applications include high-luminosity tracking in the high-multiplicity LHC forward regions. This paper describes the technical advantages of this idea and the rationale for its tracking applications.

  13. Solutions for 3D self-reconfiguration in a modular robotic system: implementation and motion planning

    NASA Astrophysics Data System (ADS)

    Unsal, Cem; Khosla, Pradeep K.

    2000-10-01

    In this manuscript, we discuss new solutions for mechanical design and motion planning for a class of 3D modular self-reconfigurable robotic systems, namely I-Cubes. This system is a bipartite collection of active links that provide motions for self-reconfiguration, and cubes acting as connection points. The links are three-degree-of-freedom manipulators that can attach to and detach from the cube faces, and the cubes can be positioned and oriented using the links. These capabilities enable the system to change its shape and perform locomotion tasks over difficult terrain. This paper describes a scaled-down version of the previously described system and details the new design and manufacturing approaches. The initial motion planning algorithms for I-Cubes are improved to provide better results. Results of our tests are given and issues related to motion planning are discussed. The user interfaces designed for control of the system and for algorithm evaluation are also described.

  14. Using a wireless motion controller for 3D medical image catheter interactions

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    State-of-the-art morphological imaging techniques usually provide high-resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation maps, fly-through cameras or virtual interaction approaches. There has been ongoing development and improvement of controllers that help to interact with 3D environments in the domain of computer games. These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed by Nintendo, can be used for catheter navigation and planning purposes. By default the Wiimote controller only measures coarse acceleration over a range of +/- 3g with 10% sensitivity, along with orientation. Therefore, a pose estimation algorithm was developed for computing accurate position and orientation in 3D space relative to 4 infrared LEDs. Current results show that for the translation it is possible to obtain a mean error of (0.38cm, 0.41cm, 4.94cm) and for the rotation (0.16, 0.28), respectively. Within this paper we introduce a clinical prototype that allows steering of a virtual fly-through camera attached to the catheter tip by the Wii controller on the basis of a segmented vessel tree.
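
    The paper develops its own pose estimation algorithm; purely as an illustration of the underlying problem (recovering a 6-DOF pose from four known IR LEDs seen by a camera), the sketch below uses OpenCV's generic PnP solver. The LED geometry, camera intrinsics, and reference pose are invented for the example and do not reflect the Wiimote hardware.

      import numpy as np
      import cv2

      # Hypothetical square arrangement of 4 IR LEDs (metres).
      led_points_3d = np.array([[-0.05, -0.05, 0.0],
                                [ 0.05, -0.05, 0.0],
                                [ 0.05,  0.05, 0.0],
                                [-0.05,  0.05, 0.0]])

      camera_matrix = np.array([[1280.0, 0.0, 512.0],
                                [0.0, 1280.0, 384.0],
                                [0.0, 0.0, 1.0]])
      dist_coeffs = np.zeros(5)                 # assume negligible lens distortion

      # Synthesize "detected" LED image coordinates from a known reference pose.
      rvec_true = np.array([0.1, -0.2, 0.05])
      tvec_true = np.array([0.02, -0.01, 1.5])
      led_points_2d, _ = cv2.projectPoints(led_points_3d, rvec_true, tvec_true,
                                           camera_matrix, dist_coeffs)

      # Recover the pose (rotation vector and translation) from the 2D detections.
      ok, rvec, tvec = cv2.solvePnP(led_points_3d, led_points_2d,
                                    camera_matrix, dist_coeffs)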

  15. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    PubMed Central

    2011-01-01

    Background The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Results Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. Conclusions The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study

  16. A motion- and sound-activated, 3D-printed, chalcogenide-based triboelectric nanogenerator.

    PubMed

    Kanik, Mehmet; Say, Mehmet Girayhan; Daglar, Bihter; Yavuz, Ahmet Faruk; Dolas, Muhammet Halit; El-Ashry, Mostafa M; Bayindir, Mehmet

    2015-04-01

    A multilayered triboelectric nanogenerator (MULTENG) that can be actuated by acoustic waves, vibration of a moving car, and tapping motion is built using a 3D-printing technique. The MULTENG can generate an open-circuit voltage of up to 396 V and a short-circuit current of up to 1.62 mA, and can power 38 LEDs. The layers of the triboelectric generator are made of polyetherimide nanopillars and chalcogenide core-shell nanofibers. PMID:25722118

  17. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, non-cooperative targets may have strong maneuverability, which tends to cause time-variant Doppler modulation and a time-variant imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimation of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target undergoing uniformly accelerated rotation, the rotational modulation in the echo is formulated as a chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat multi-channel ISAR image formation as a joint-sparsity-constrained optimization. A modified orthogonal matching pursuit (OMP) algorithm is then employed to solve the optimization problem, producing high-resolution range-Doppler (RD) images and chirp parameter estimates. The 3D target geometry and motion estimates are then obtained using the acquired RD images and chirp parameters. A joint estimation approach for 3D geometry and rotational motion is presented to remove outliers and reduce errors. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data confirm the effectiveness of the proposed algorithm. PMID:26930684

  18. Coded apertures for efficient pyroelectric motion tracking.

    PubMed

    Gopinathan, U; Brady, D; Pitsianis, N

    2003-09-01

    Coded apertures may be designed to modulate the visibility between source and measurement spaces such that the position of a source among N resolution cells may be discriminated using on the order of log N measurements. We use coded apertures as reference structures in a pyroelectric motion tracking system. This sensor system is capable of detecting source motion in one of 15 cells uniformly distributed over a 1.6 m x 1.6 m domain using 4 pyroelectric detectors. PMID:19466102
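
    One simple way such a log N scaling can arise, sketched below under assumptions of my own rather than the paper's actual mask design, is to give each detector a visibility mask corresponding to one bit of the cell index, so the pattern of triggered detectors spells out the occupied cell in binary.

      import math

      def build_masks(n_cells):
          # masks[d] = set of cells visible to detector d (bit d of the cell index).
          n_detectors = math.ceil(math.log2(n_cells))
          return [{cell for cell in range(n_cells) if (cell >> d) & 1}
                  for d in range(n_detectors)]

      def decode_cell(detector_hits):
          # detector_hits: list of 0/1 responses, one per detector.
          return sum(bit << d for d, bit in enumerate(detector_hits))

      masks = build_masks(16)      # 16 binary codes, 4 detectors; code 0 reads as
      source_cell = 11             # "no source", which may be why 15 cells suffice.
      hits = [1 if source_cell in m else 0 for m in masks]
      assert decode_cell(hits) == source_cell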

  19. 3D Visualization of Monte-Carlo Simulation's of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy ion tracks include a very high energy deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to biomolecules and the tissue matrix, considered the most important lesions for acute and long-term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy ion biophysics.

  20. Ground motion simulations in Marmara (Turkey) region from 3D finite difference method

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Douglas, John

    2016-04-01

    In the framework of the European project MARSite (2012-2016), one of the main contributions from our research team was to provide ground-motion simulations for the Marmara region for various earthquake source scenarios. We adopted a 3D finite difference code, taking into account the 3D structure around the Sea of Marmara (including the bathymetry) and the sea layer. We simulated two moderate earthquakes (about Mw 4.5) and found that the 3D structure significantly improves the simulated waveforms compared to a 1D layered model. Simulations were carried out for different earthquakes (moderate point sources and large finite sources) in order to provide shake maps (Aochi and Ulrich, BSSA, 2015), to study the variability of ground-motion parameters (Douglas & Aochi, BSSA, 2016), and to provide synthetic seismograms for blind inversion tests (Diao et al., GJI, 2016). The results are also planned to be integrated into broadband ground-motion simulations, tsunami generation, and simulations of triggered landslides (in progress by different partners). The simulations are freely shared among the partners via the internet and visualizations of the results are published on the project's homepage. These simulations should be seen as a reference for this region, as they are based on the latest knowledge obtained during the MARSite project, although refinement and validation of the model parameters and simulations remain an ongoing research task relying on continuing observations. The numerical code used, the models, and the simulations are available on demand.

  1. Description of a 3D display with motion parallax and direct interaction

    NASA Astrophysics Data System (ADS)

    Tu, J.; Flynn, M. F.

    2014-03-01

    We present a description of a time sequential stereoscopic display which separates the images using a segmented polarization switch and passive eyewear. Additionally, integrated tracking cameras and an SDK on the host PC allow us to implement motion parallax in real time.

  2. Free-Breathing 3D Whole Heart Black Blood Imaging with Motion Sensitized Driven Equilibrium

    PubMed Central

    Srinivasan, Subashini; Hu, Peng; Kissinger, Kraig V.; Goddu, Beth; Goepfert, Lois; Schmidt, Ehud J.; Kozerke, Sebastian; Nezafat, Reza

    2012-01-01

    Purpose To assess the efficacy and robustness of motion sensitized driven equilibrium (MSDE) for blood suppression in volumetric 3D whole-heart cardiac MR. Materials and Methods To investigate the efficacy of MSDE on blood suppression and myocardial SNR loss with different imaging sequences, 7 healthy adult subjects were imaged using 3D ECG-triggered MSDE-prep T1-weighted turbo spin echo (TSE) and spoiled gradient echo (GRE) sequences, after optimization of the MSDE parameters in a pilot study of 5 subjects. Imaging artifacts and myocardial and blood SNR were assessed. Subsequently, the feasibility of isotropic spatial resolution MSDE-prep black-blood imaging was assessed in 6 subjects. Finally, 15 patients with known or suspected cardiovascular disease were recruited and imaged using a conventional multi-slice 2D DIR TSE imaging sequence and 3D MSDE-prep spoiled GRE. Results The MSDE-prep yields significant blood suppression (75-92%), enabling a volumetric 3D black-blood assessment of the whole heart with significantly improved visualization of the chamber walls. The MSDE-prep also allowed successful acquisition of black-blood images with isotropic spatial resolution. In the patient study, 3D black-blood MSDE-prep and DIR resulted in similar blood suppression in the LV and RV walls, but the MSDE-prep had superior myocardial signal and wall sharpness. Conclusion MSDE-prep allows volumetric black-blood imaging of the heart. PMID:22517477

  3. Real-time tracking with a 3D-Flow processor array

    SciTech Connect

    Crosetto, D.

    1993-06-01

    Real-time track finding has to date been performed with CAMs (Content Addressable Memories) or with fast coincidence logic, because a programmable processing scheme was thought to be far too slow. Advances in technology, together with a new architectural approach, make it feasible to also explore a computing technique for real-time track finding, which offers the advantage over the CAM approach of implementing algorithms that can extract more parameters, such as the sagitta, curvature, pt, etc. The report describes real-time track finding using a new computing technique based on the 3D-Flow array processor system. This system consists of a fixed interconnection architecture scheme, allowing flexible algorithm implementation on a scalable platform. The 3D-Flow parallel processing system for track finding is scalable in size and performance by increasing the number of processors, the processor speed, or the number of pipelined stages. The present article describes the conceptual idea and the design stage of the project.

  4. Meanie3D - a mean-shift based, multivariate, multi-scale clustering and tracking algorithm

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Malte, Diederich; Silke, Troemel

    2014-05-01

    Project OASE is one of five work groups at the HErZ (Hans Ertel Centre for Weather Research), an ongoing effort by the German weather service (DWD) to further research at universities concerning weather prediction. The goal of project OASE is to gain an object-based perspective on convective events by identifying them early in the onset of convective initiation and following them through the entire lifecycle. The ability to follow objects in this fashion requires new ways of object definition and tracking, which incorporate all the available data sets of interest, such as satellite imagery, weather radar or lightning counts. The Meanie3D algorithm provides the necessary tool for this purpose. Core features of this new approach to clustering (object identification) and tracking are the ability to identify objects using the mean-shift algorithm applied to a multitude of variables (multivariate), as well as the ability to detect objects on various scales (multi-scale) using elements of scale-space theory. The algorithm works in 2D as well as 3D without modification. It is an extension of a method well known in the fields of computer vision and image processing, which has been tailored to serve the needs of the meteorological community. In spite of the specific application demonstrated here (convective initiation), the algorithm is easily tailored to provide clustering and tracking for a wide class of data sets and problems. In this talk, the demonstration is carried out on two of the OASE group's own composite data sets: a 2D nationwide composite of Germany including C-band radar (2D) and satellite information, and a 3D local composite of the Bonn/Jülich area containing a high-resolution 3D X-band radar composite.
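
    As a minimal sketch of the clustering idea (not the Meanie3D implementation itself), the following mean-shift routine moves each sample toward the mean of its neighbours within a bandwidth until the samples collect at density modes; the 2D synthetic data stand in for gridded multivariate meteorological fields.

      import numpy as np

      def mean_shift(points, bandwidth, n_iter=50, tol=1e-4):
          modes = points.copy()
          for _ in range(n_iter):
              shifted = np.empty_like(modes)
              for i, p in enumerate(modes):
                  # Flat kernel: average of all samples within the bandwidth.
                  near = points[np.linalg.norm(points - p, axis=1) < bandwidth]
                  shifted[i] = near.mean(axis=0)
              if np.linalg.norm(shifted - modes) < tol:
                  break
              modes = shifted
          return modes   # nearly identical rows indicate the same cluster mode

      rng = np.random.default_rng(2)
      pts = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
                       rng.normal([3, 3], 0.3, (50, 2))])
      modes = mean_shift(pts, bandwidth=1.0)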

  5. Towards real-time 2D/3D registration for organ motion monitoring in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Gendrin, C.; Spoerk, J.; Bloch, C.; Pawiro, S. A.; Weber, C.; Figl, M.; Markelj, P.; Pernus, F.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2010-02-01

    Nowadays, radiation therapy systems incorporate kV imaging units which allow for the real-time acquisition of intra-fractional X-ray images of the patient with high detail and contrast. An application of this technology is tumor motion monitoring during irradiation. For tumor tracking, implanted markers or position sensors are commonly used, which requires an intervention. 2D/3D intensity-based registration is an alternative, non-invasive method, but the procedure must be accelerated to the update rate of the device, which lies in the range of 5 Hz. In this paper we investigate fast CT to single kV X-ray 2D/3D image registration using a new porcine reference phantom with seven implanted fiducial markers. Several parameters influencing the speed and accuracy of the registration are investigated. First, four intensity-based merit functions, namely cross-correlation, rank correlation, mutual information, and correlation ratio, are compared. Second, wobbled splatting and ray casting rendering techniques are implemented on the GPU and the influence of each algorithm on the performance of 2D/3D registration is evaluated. Rendering times of 20 ms for a single DRR were achieved. Different thresholds of the CT volume were also examined for rendering, to find the setting that achieves the best possible correspondence with the X-ray images. Fast registrations below 4 s became possible with an in-plane accuracy down to 0.8 mm.
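
    As an example of one of the compared merit functions, the sketch below evaluates normalized cross-correlation between a rendered DRR and the acquired kV image; in a 2D/3D registration loop an optimizer would perturb the rigid pose, re-render the DRR, and keep the pose that maximizes this score. The arrays here are synthetic placeholders, and the DRR renderer itself is outside the scope of the snippet.

      import numpy as np

      def normalized_cross_correlation(drr, xray):
          a = drr.astype(float).ravel() - drr.mean()
          b = xray.astype(float).ravel() - xray.mean()
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      # Toy usage: a well-registered pose should yield a score close to 1.
      rng = np.random.default_rng(3)
      drr = rng.random((256, 256))
      xray = 0.8 * drr + 0.2 * rng.random((256, 256))
      score = normalized_cross_correlation(drr, xray)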

  6. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  7. Optimal transcostal high-intensity focused ultrasound with combined real-time 3D movement tracking and correction

    NASA Astrophysics Data System (ADS)

    Marquet, F.; Aubry, J. F.; Pernot, M.; Fink, M.; Tanter, M.

    2011-11-01

    Recent studies have demonstrated the feasibility of transcostal high-intensity focused ultrasound (HIFU) treatment in the liver. However, two factors limit thermal necrosis of the liver through the ribs: the energy deposition at the focus is decreased by the respiratory movement of the liver, and the energy deposition on the skin is increased by the presence of highly absorbing bone structures. Ex vivo ablations were conducted to validate the feasibility of a transcostal real-time 3D movement tracking and correction mode. Experiments were conducted through a chest phantom made of three human ribs immersed in water and placed in front of a 300-element array working at 1 MHz. A binarized apodization law introduced recently in order to spare the rib cage during treatment has been extended here with real-time electronic steering of the beam. Thermal simulations were conducted to determine the steering limits. In vivo 3D movement detection was performed on pigs using an ultrasonic sequence. The maximum error of the transcostal motion detection was measured to be 0.09 ± 0.097 mm on the anterior-posterior axis. Finally, a complete sequence was developed combining real-time 3D transcostal movement correction and a spiral trajectory of the HIFU beam, allowing the system to treat larger areas with optimized efficiency. Lesions as large as 1 cm in diameter have been produced at the focus in excised liver, whereas no necroses could be obtained at the same emitted power without correcting for the movement of the tissue sample.

  8. A brain-computer interface method combined with eye tracking for 3D interaction.

    PubMed

    Lee, Eui Chul; Woo, Jin Cheol; Kim, Jong Hwa; Whang, Mincheol; Park, Kang Ryoung

    2010-07-15

    With the recent increase in the number of three-dimensional (3D) applications, the need for interfaces to these applications has increased. Although the eye tracking method has been widely used as an interaction interface for hand-disabled persons, this approach cannot be used for depth-directional navigation. To solve this problem, we propose a new brain-computer interface (BCI) method in which the BCI and eye tracking are combined to handle depth navigation (including selection) and two-dimensional (2D) gaze direction, respectively. The proposed method is novel in the following five ways compared to previous works. First, a device to measure both the gaze direction and an electroencephalogram (EEG) pattern is proposed, with the sensors needed to measure the EEG attached to a head-mounted eye tracking device. Second, the reliability of the BCI interface is verified by demonstrating that there is no difference between real and imaginary movements for the same task in terms of the EEG power spectrum. Third, depth control for the 3D interaction interface is implemented by an imaginary arm-reaching movement. Fourth, a selection method is implemented by an imaginary hand-grabbing movement. Finally, for the independent operation of gazing and the BCI, a mode selection method is proposed that measures a user's concentration by analyzing the pupil accommodation speed, which is not affected by the operation of gazing and the BCI. Experimental results confirmed the feasibility of the proposed 3D interaction method using eye tracking and a BCI. PMID:20580646

  9. Integration of 3D Structure from Disparity into Biological Motion Perception Independent of Depth Awareness

    PubMed Central

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers’ depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception. PMID:24586622

  10. Using natural versus artificial stimuli to perform calibration for 3D gaze tracking

    NASA Astrophysics Data System (ADS)

    Maggia, Christophe; Guyader, Nathalie; Guérin-Dugué, Anne

    2013-03-01

    The present study tests which type of stereoscopic image, natural or artificial, is better suited for performing efficient and reliable calibration in order to track the gaze of observers in 3D space using a classical 2D eye tracker. We measured the horizontal disparity, i.e. the difference between the x coordinates of the two eyes obtained using a 2D eye tracker. This disparity was recorded for each observer and for several target positions the observer had to fixate. Target positions were equally distributed in the 3D space, some on the screen (with a null disparity), some behind the screen (uncrossed disparity) and others in front of the screen (crossed disparity). We tested different regression models (linear and non-linear) to explain either the true disparity or the depth from the measured disparity. Models were tested and compared on their prediction error for new targets at new positions. First, we found that we obtained more reliable disparity measures when using natural stereoscopic images rather than artificial ones. Second, we found that overall a non-linear model was more efficient. Finally, we discuss the fact that our results were observer-dependent, with variability between observers' behavior when looking at 3D stimuli. Because of this variability, we propose to compute observer-specific models to accurately predict gaze position when exploring 3D stimuli.
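
    A hedged sketch of the calibration idea follows: fit a linear and a non-linear (here quadratic) model mapping measured disparity to known target depth, then predict the depth of a new fixation. The disparity and depth values are synthetic placeholders, not data from the study.

      import numpy as np

      measured_disparity = np.array([-0.8, -0.4, 0.0, 0.35, 0.7, 1.1])   # a.u.
      target_depth = np.array([-20.0, -10.0, 0.0, 9.0, 19.0, 32.0])      # cm

      linear_model = np.polyfit(measured_disparity, target_depth, deg=1)
      nonlinear_model = np.polyfit(measured_disparity, target_depth, deg=2)

      new_disparity = 0.5
      depth_linear = np.polyval(linear_model, new_disparity)
      depth_nonlinear = np.polyval(nonlinear_model, new_disparity)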

  11. Microfabricated collagen tracks facilitate single cell metastatic invasion in 3D.

    PubMed

    Kraning-Rush, Casey M; Carey, Shawn P; Lampi, Marsha C; Reinhart-King, Cynthia A

    2013-03-01

    While the mechanisms employed by metastatic cancer cells to migrate remain poorly understood, it has been widely accepted that metastatic cancer cells can invade the tumor stroma by degrading the extracellular matrix (ECM) with matrix metalloproteinases (MMPs). Although MMP inhibitors showed early promise in preventing metastasis in animal models, they have largely failed clinically. Recently, studies have shown that some cancer cells can use proteolysis to mechanically rearrange their ECM to form tube-like "microtracks" which other cells can follow without using MMPs themselves. We speculate that this mode of migration may be one example of migration that can occur without endogenous protease activity in the secondary cells. Here we present a technique to study this migration in a 3D, collagen-based environment which mimics the size and topography of the tracks produced by proteolytically active cancer cells. Using time-lapse phase-contrast microscopy, we find that these microtracks permit the rapid and persistent migration of noninvasive MCF10A mammary epithelial cells, which are unable to otherwise migrate in 3D collagen. Additionally, while highly metastatic MDA-MB-231 breast cancer cells are able to invade a 3D collagen matrix, seeding within the patterned microtracks induced significantly increased cell migration speed, which was not decreased by pharmacological MMP inhibition. Together, these data suggest that microtracks within a 3D ECM may facilitate the migration of cells in an MMP-independent fashion, and may reveal novel insight into the clinical challenges facing MMP inhibitors. PMID:23388698

  12. A Little Knowledge of Ground Motion: Explaining 3-D Physics-Based Modeling to Engineers

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2014-12-01

    Users of earthquake planning scenarios require the ground-motion map to be credible enough to justify costly planning efforts, but not all ground-motion maps are right for all uses. There are two common ways to create a map of ground motion for a hypothetical earthquake. One approach is to map the median shaking estimated by empirical attenuation relationships. The other uses 3-D physics-based modeling, in which one analyzes a mathematical model of the earth's crust near the fault rupture and calculates the generation and propagation of seismic waves from source to ground surface by first principles. The two approaches produce different-looking maps. The more-familiar median maps smooth out variability and correlation. Using them in a planning scenario can lead to a systematic underestimation of damage and loss, and could leave a community underprepared for realistic shaking. The 3-D maps show variability, including some very high values that can disconcert non-scientists. So when the USGS Science Application for Risk Reduction's (SAFRR) Haywired scenario project selected 3-D maps, it was necessary to explain to scenario users—especially engineers who often use median maps—the differences, advantages, and disadvantages of the two approaches. We used authority, empirical evidence, and theory to support our choice. We prefaced our explanation with SAFRR's policy of using the best available earth science, and cited the credentials of the maps' developers and the reputation of the journal in which they published the maps. We cited recorded examples from past earthquakes of extreme ground motions that are like those in the scenario map. We explained the maps on theoretical grounds as well, explaining well established causes of variability: directivity, basin effects, and source parameters. The largest mapped motions relate to potentially unfamiliar extreme-value theory, so we used analogies to human longevity and the average age of the oldest person in samples of

  13. Detection, 3-D positioning, and sizing of small pore defects using digital radiography and tracking

    NASA Astrophysics Data System (ADS)

    Lindgren, Erik

    2014-12-01

    This article presents an algorithm that handles the detection, positioning, and sizing of submillimeter pores in welds using radiographic inspection and tracking. The possibility to detect, position, and size pores which have a low contrast-to-noise ratio increases the value of the nondestructive evaluation of welds by facilitating fatigue life predictions with lower uncertainty. In this article, a multiple hypothesis tracker with an extended Kalman filter is used to track an unknown number of pore indications in a sequence of radiographs as an object is rotated. Each pore is not required to be detected in all radiographs. In addition, in the tracking step, three-dimensional (3-D) positions of pore defects are calculated. To optimize, set up, and pre-evaluate the algorithm, the article explores a design-of-experiments approach in combination with synthetic radiographs of titanium laser welds containing pore defects. The pre-evaluation on synthetic radiographs at industrially reasonable contrast-to-noise ratios indicates less than 1% false detection rates at high detection rates and less than 0.1 mm positioning errors for more than 90% of the pores. A comparison between experimental results of the presented algorithm and a computerized tomography reference measurement shows qualitatively good agreement in the 3-D positions of approximately 0.1-mm diameter pores in 5-mm-thick Ti-6242.
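
    As a simplified illustration of the filtering idea (my own reduction, not the paper's multiple hypothesis tracker), the sketch below tracks a single static pore's 3D position from its 2D coordinates in radiographs taken at known rotation angles; with a parallel-beam assumption the measurement model is linear in the state, so the extended Kalman filter reduces to a standard Kalman filter with a rotation-dependent measurement matrix.

      import numpy as np

      def rotation_z(theta):
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

      def projection_matrix(theta):
          # Rotate the object by theta, then keep the (x, z) image coordinates.
          return np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]) @ rotation_z(theta)

      def kalman_step(x, P, z, theta, R, Q):
          P = P + Q                                   # static pore: predict step
          H = projection_matrix(theta)
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
          x = x + K @ (z - H @ x)
          P = (np.eye(3) - K @ H) @ P
          return x, P

      x, P = np.zeros(3), np.eye(3) * 25.0            # vague prior on pore position
      R, Q = np.eye(2) * 0.05, np.eye(3) * 1e-4
      true_pos = np.array([1.2, -0.4, 2.0])
      for theta in np.linspace(0.0, np.pi, 20):
          z = projection_matrix(theta) @ true_pos     # noiseless synthetic detection
          x, P = kalman_step(x, P, z, theta, R, Q)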

  14. METHODS FOR USING 3-D ULTRASOUND SPECKLE TRACKING IN BIAXIAL MECHANICAL TESTING OF BIOLOGICAL TISSUE SAMPLES

    PubMed Central

    Yap, Choon Hwai; Park, Dae Woo; Dutta, Debaditya; Simon, Marc; Kim, Kang

    2014-01-01

    Being multilayered and anisotropic, biological tissues such as cardiac and arterial walls are structurally complex, making full assessment and understanding of their mechanical behavior challenging. Current standard mechanical testing uses surface markers to track tissue deformations and does not provide deformation data below the surface. In the study described here, we found that combining mechanical testing with 3-D ultrasound speckle tracking could overcome this limitation. Rat myocardium was tested with a biaxial tester and was concurrently scanned with high-frequency ultrasound in three dimensions. The strain energy function was computed from stresses and strains using an iterative non-linear curve-fitting algorithm. Because the strain energy function consists of terms for the base matrix and for embedded fibers, spatially varying fiber orientation was also computed by curve fitting. Using finite-element simulations, we first validated the accuracy of the non-linear curve-fitting algorithm. Next, we compared experimentally measured rat myocardium strain energy function values with those in the literature and found a matching order of magnitude. Finally, we retained samples after the experiments for fiber orientation quantification using histology and found that the results satisfactorily matched those computed in the experiments. We conclude that 3-D ultrasound speckle tracking can be a useful addition to traditional mechanical testing of biological tissues and may provide the benefit of enabling fiber orientation computation. PMID:25616585

  15. 3D positional tracking of ellipsoidal particles in a microtube flow using holographic microscopy

    NASA Astrophysics Data System (ADS)

    Byeon, Hyeok Jun; Seo, Kyung Won; Lee, Sang Joon

    2014-11-01

    Understanding of micro-scale flow phenomena is attracting growing attention thanks to advances in micro-scale measurement technologies. In particular, the dynamics of particles suspended in a fluid is essential in both scientific and industrial fields. Moreover, most particles handled in research and industry have non-spherical rather than simple spherical shapes, and under various flow conditions these non-spherical particles exhibit unique dynamic behaviors. To analyze these behaviors in a fluid flow, the 3D positional information of the particles should be measured accurately. In this study, digital holographic microscopy (DHM) is employed to measure the 3D positional information of non-spherical particles, which are fabricated by stretching spherical polystyrene particles. 3D motions of the particles are obtained by interpreting the holograms they produce. Ellipsoidal particles with known size and shape are observed to verify the performance of the DHM technique. In addition, the 3D positions of particles in a microtube flow are traced. This DHM technique exhibits promising potential for the analysis of the dynamic behavior of non-spherical particles suspended in micro-scale fluid flows.

  16. 3D measurement of the position of gold particles via evanescent digital holographic particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Satake, Shin-ichi; Unno, Noriyuki; Nakata, Shuichiro; Taniguchi, Jun

    2016-08-01

    A new technique based on digital holography and evanescent waves was developed for 3D measurements of the position of gold nanoparticles in water. In this technique, an intensity profile is taken from a holographic image of a gold particle. To detect the position of the gold particle with high accuracy, its holographic image is recorded on a nanosized step made of MEXFLON, which has a refractive index close to that of water, and the position of the particle is reconstructed by means of digital holography. The height of the nanosized step was measured by using a profilometer and the digitally reconstructed height of the glass substrate had good agreement with the measured value. Furthermore, this method can be used to accurately track the 3D position of a gold particle in water.

  17. Targets For Three-Dimensional (3-D) Tracking Of Human Impact Test Subjects

    NASA Astrophysics Data System (ADS)

    Muzzy, William H.; Prell, Arthur M.

    1982-02-01

    Lightweight targets mounted on the head and neck of human volunteers are photographed by high-speed cameras during impact acceleration tests. The targets must be capable of being tracked through a wide angular motion by at least two cameras to obtain three-dimensional displacement and orientation. Because the targets are tracked and digitized by a computerized photodigitizer, their pattern must be selected to maximize recognition and minimize crossover confusion. This paper discusses the target construction, orientation on the accelerometer mount, pattern selection, and paint scheme.

  18. Motion compensation by registration-based catheter tracking

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Wimmer, Andreas; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2011-03-01

    The treatment of atrial fibrillation has gained increasing importance in the field of computer-aided interventions. State-of-the-art treatment involves the electrical isolation of the pulmonary veins attached to the left atrium under fluoroscopic X-ray image guidance. Due to the rather low soft-tissue contrast of X-ray fluoroscopy, the heart is difficult to see. To overcome this problem, overlay images from pre-operative 3-D volumetric data can be used to add anatomical detail. Unfortunately, these overlay images are static at the moment, i.e., they do not move with respiratory and cardiac motion. The lack of motion compensation may impair X-ray based catheter navigation, because the physician could potentially position catheters incorrectly. To improve overlay-based catheter navigation, we present a novel two stage approach for respiratory and cardiac motion compensation. First, a cascade of boosted classifiers is employed to segment a commonly used circumferential mapping catheter which is firmly fixed at the ostium of the pulmonary vein during ablation. Then, a 2-D/2-D model-based registration is applied to track the segmented mapping catheter. Our novel hybrid approach was evaluated on 10 clinical data sets consisting of 498 fluoroscopic monoplane frames. We obtained an average 2-D tracking error of 0.61 mm, with a minimum error of 0.26 mm and a maximum error of 1.62 mm. These results demonstrate that motion compensation using registration-based catheter tracking is both feasible and accurate. Using this approach, we can only estimate in-plane motion. Fortunately, compensating for this is often sufficient for EP procedures where the motion is governed by breathing.

  19. 3D Motion Planning Algorithms for Steerable Needles Using Inverse Kinematics

    PubMed Central

    Duindam, Vincent; Xu, Jijie; Alterovitz, Ron; Sastry, Shankar; Goldberg, Ken

    2010-01-01

    Steerable needles can be used in medical applications to reach targets behind sensitive or impenetrable areas. The kinematics of a steerable needle are nonholonomic and, in 2D, equivalent to a Dubins car with constant radius of curvature. In 3D, the needle can be interpreted as an airplane with constant speed and pitch rate, zero yaw, and controllable roll angle. We present a constant-time motion planning algorithm for steerable needles based on explicit geometric inverse kinematics similar to the classic Paden-Kahan subproblems. Reachability and path competitivity are analyzed using analytic comparisons with shortest path solutions for the Dubins car (for 2D) and numerical simulations (for 3D). We also present an algorithm for local path adaptation using null-space results from redundant manipulator theory. Finally, we discuss several ways to use and extend the inverse kinematics solution to generate needle paths that avoid obstacles. PMID:21359051
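
    A small sketch of the kinematic model described above (not the paper's inverse-kinematics planner): the needle tip frame advances along its own z-axis at speed v while pitching at rate kappa*v and rolling at a controllable rate, with zero yaw. Integrating this twist traces the constant-curvature arcs that the planner pieces together; the curvature, speed, and roll profile below are arbitrary example values.

      import numpy as np

      def skew(w):
          return np.array([[0.0, -w[2], w[1]],
                           [w[2], 0.0, -w[0]],
                           [-w[1], w[0], 0.0]])

      def rotation_from_twist(w, dt):
          theta = np.linalg.norm(w) * dt
          if theta < 1e-12:
              return np.eye(3)
          k = skew(w / np.linalg.norm(w))
          return np.eye(3) + np.sin(theta) * k + (1.0 - np.cos(theta)) * (k @ k)

      def integrate_needle(kappa, v, roll_rates, dt=0.01):
          """roll_rates: sequence of bevel roll rates (rad/s), one per time step."""
          R, p, path = np.eye(3), np.zeros(3), []
          for omega_roll in roll_rates:
              w_body = np.array([kappa * v, 0.0, omega_roll])   # pitch, no yaw, roll
              R = R @ rotation_from_twist(w_body, dt)
              p = p + R @ np.array([0.0, 0.0, v]) * dt          # advance along tip z
              path.append(p.copy())
          return np.array(path)

      # Insert for 5 s at constant curvature, flipping the bevel halfway through.
      roll = np.zeros(500)
      roll[240:260] = np.pi / 0.2          # 180-degree roll spread over 0.2 s
      path = integrate_needle(kappa=0.02, v=1.0, roll_rates=roll, dt=0.01)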

  20. 3D delivered dose assessment using a 4DCT-based motion model

    SciTech Connect

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Mishra, Pankaj E-mail: jhlewis@lroc.harvard.edu; Lewis, John H. E-mail: jhlewis@lroc.harvard.edu; Seco, Joao

    2015-06-15

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images
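
    Step (3) above accumulates dose by mapping each per-timepoint dose distribution back onto the reference anatomy with that timepoint's DVF and summing the contributions. The sketch below illustrates this with trilinear interpolation via SciPy; the grids, DVFs, and doses are synthetic placeholders and the warping convention is an assumption, not the clinical system's implementation.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def accumulate_dose(doses, dvfs, grid_shape):
          """doses: list of 3D dose grids (one per timepoint); dvfs: matching list of
          DVFs of shape grid_shape + (3,), the displacement of each reference voxel."""
          ref = np.indices(grid_shape).astype(float)            # (3, nx, ny, nz)
          total = np.zeros(grid_shape)
          for dose, dvf in zip(doses, dvfs):
              coords = ref + np.moveaxis(dvf, -1, 0)            # where each voxel maps
              total += map_coordinates(dose, coords, order=1, mode="nearest")
          return total   # assumes each per-timepoint dose is already time-weighted

      shape = (16, 16, 16)
      doses = [np.random.rand(*shape) for _ in range(5)]
      dvfs = [np.zeros(shape + (3,)) for _ in range(5)]          # identity motion here
      accumulated = accumulate_dose(doses, dvfs, shape)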

  1. 3D delivered dose assessment using a 4DCT-based motion model

    PubMed Central

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Seco, Joao; Mishra, Pankaj; Lewis, John H.

    2015-01-01

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images

  2. Oblique needle segmentation and tracking for 3D TRUS guided prostate brachytherapy

    SciTech Connect

    Wei Zhouping; Gardi, Lori; Downey, Donal B.; Fenster, Aaron

    2005-09-15

    An algorithm was developed to segment and track brachytherapy needles inserted along oblique trajectories. Three-dimensional (3D) transrectal ultrasound (TRUS) images of a rigid rod simulating the needle, inserted into tissue-mimicking agar and chicken breast phantoms, were obtained to test the accuracy of the algorithm under ideal conditions. Because the robot possesses high positioning and angulation accuracies, we used the robot as a "gold standard" and compared the results of algorithm segmentation to the values measured by the robot. Our testing results showed that the accuracy of the needle segmentation algorithm depends on the needle insertion distance into the 3D TRUS image and the angulation with respect to the TRUS transducer; e.g., at a 10 deg. insertion angulation in agar phantoms, the error of the algorithm in determining the needle tip position was less than 1 mm when the insertion distance was greater than 15 mm. Near real-time needle tracking was achieved by scanning a small volume containing the needle. Our tests also showed that the segmentation time was less than 60 ms, and the scanning time was less than 1.2 s, when the insertion distance into the 3D TRUS image was less than 55 mm. In our needle tracking tests in chicken breast phantoms, the errors in determining the needle orientation were less than 2 deg. in robot yaw and 0.7 deg. in robot pitch, for needle insertion angles of up to 20 deg. with respect to the TRUS transducer in the horizontal plane, when the needle insertion distance was greater than 15 mm.

  3. 3D motion of DNA-Au nanoconjugates in graphene liquid cell electron microscopy.

    PubMed

    Chen, Qian; Smith, Jessica M; Park, Jungwon; Kim, Kwanpyo; Ho, Davy; Rasool, Haider I; Zettl, Alex; Alivisatos, A Paul

    2013-09-11

    Liquid-phase transmission electron microscopy (TEM) can probe and visualize dynamic events with structural or functional details at the nanoscale in a liquid medium. Earlier efforts have focused on the growth and transformation kinetics of hard material systems, relying on their stability under the electron beam. Our recently developed graphene liquid cell technique pushed the spatial resolution of such imaging to the atomic scale, but still focused on growth trajectories of metallic nanocrystals. Here, we instead adopt this technique to image the three-dimensional (3D) dynamics of soft materials, using double-stranded DNA (dsDNA) connecting Au nanocrystals as one example, at nanometer resolution. We demonstrate first that a graphene liquid cell can seal an aqueous sample solution of lower vapor pressure than previously investigated well against the high vacuum in the TEM. Then, from quantitative analysis of real-time nanocrystal trajectories, we show that the status and configuration of the dsDNA dictate the motions of the linked nanocrystals throughout the imaging time of minutes. This sustained connecting ability of dsDNA enables this unprecedented continuous imaging of its dynamics via TEM. Furthermore, the inert graphene surface minimizes sample-substrate interaction and allows the whole nanostructure to rotate freely in the liquid environment; we thus develop and implement the reconstruction of the 3D configuration and motions of the nanostructure from the series of 2D projected TEM images captured while it rotates. In addition to further proving the structural stability of the nanoconjugate, this reconstruction demonstrates 3D dynamic imaging by TEM beyond its conventional use on flattened, dry samples. Altogether, we foresee new and exciting uses of graphene liquid cell TEM in imaging 3D biomolecular transformations or interaction dynamics at nanometer resolution. PMID:23944844

  4. Management of three-dimensional intrafraction motion through real-time DMLC tracking.

    PubMed

    Sawant, Amit; Venkat, Raghu; Srivastava, Vikram; Carlson, David; Povzner, Sergey; Cattell, Herb; Keall, Paul

    2008-05-01

    Tumor tracking using a dynamic multileaf collimator (DMLC) represents a promising approach for intrafraction motion management in thoracic and abdominal cancer radiotherapy. In this work, we develop, empirically demonstrate, and characterize a novel 3D tracking algorithm for real-time, conformal, intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)-based radiation delivery to targets moving in three dimensions. The algorithm obtains real-time information of target location from an independent position monitoring system and dynamically calculates MLC leaf positions to account for changes in target position. Initial studies were performed to evaluate the geometric accuracy of DMLC tracking of 3D target motion. In addition, dosimetric studies were performed on a clinical linac to evaluate the impact of real-time DMLC tracking for conformal, step-and-shoot (S-IMRT), dynamic (D-IMRT), and VMAT deliveries to a moving target. The efficiency of conformal and IMRT delivery in the presence of tracking was determined. Results show that submillimeter geometric accuracy in all three dimensions is achievable with DMLC tracking. Significant dosimetric improvements were observed in the presence of tracking for conformal and IMRT deliveries to moving targets. A gamma index evaluation with a 3%-3 mm criterion showed that deliveries without DMLC tracking exhibit between 1.7 (S-IMRT) and 4.8 (D-IMRT) times more dose points that fail the evaluation compared to corresponding deliveries with tracking. The efficiency of IMRT delivery, as measured in the lab, was observed to be significantly lower in case of tracking target motion perpendicular to MLC leaf travel compared to motion parallel to leaf travel. Nevertheless, these early results indicate that accurate, real-time DMLC tracking of 3D tumor motion is feasible and can potentially result in significant geometric and dosimetric advantages leading to more effective management of intrafraction motion. PMID

  5. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-03-01

    Since the behaviour of proteins and biological molecules is tightly related to the cell's environment, more and more microscopy techniques are moving from in vitro to live-cell experiments. Observing both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images, and high temporal resolution. Since protein dynamics inside a cell involve all three dimensions, we developed an automated routine for 3D tracking of single fluorescent molecules inside living cells with nanometer accuracy, by exploiting the properties of the point-spread function of out-of-focus Quantum Dots bound to the protein of interest.

  6. Longitudinal Measurement of Extracellular Matrix Rigidity in 3D Tumor Models Using Particle-tracking Microrheology

    PubMed Central

    El-Hamidi, Hamid; Celli, Jonathan P.

    2014-01-01

    The mechanical microenvironment has been shown to act as a crucial regulator of tumor growth behavior and signaling, and is itself remodeled and modified as part of a set of complex, two-way mechanosensitive interactions. While the development of biologically relevant 3D tumor models has facilitated mechanistic studies on the impact of matrix rheology on tumor growth, the inverse problem of mapping changes in the mechanical environment induced by tumors remains challenging. Here, we describe the implementation of particle-tracking microrheology (PTM) in conjunction with 3D models of pancreatic cancer as part of a robust and viable approach for longitudinally monitoring physical changes in the tumor microenvironment, in situ. The methodology described here integrates a system for preparing in vitro 3D models embedded in a model extracellular matrix (ECM) scaffold of Type I collagen, with fluorescently labeled probes uniformly distributed for position- and time-dependent microrheology measurements throughout the specimen. In vitro tumors are plated and probed in parallel conditions using multiwell imaging plates. Drawing on established methods, videos of tracer probe movements are transformed via the Generalized Stokes Einstein Relation (GSER) to report the complex frequency-dependent viscoelastic shear modulus, G*(ω). Because this approach is imaging-based, mechanical characterization is also mapped onto large transmitted-light spatial fields to simultaneously report qualitative changes in 3D tumor size and phenotype. Representative results showing contrasting mechanical responses in sub-regions associated with localized invasion-induced matrix degradation are presented, together with system calibration and validation data. Undesirable outcomes from common experimental errors and troubleshooting of these issues are also presented. The 96-well 3D culture plating format implemented in this protocol is conducive to correlation of microrheology measurements with therapeutic
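
    A hedged sketch of the GSER evaluation step, using Mason's local power-law approximation under assumptions of my own (probe radius, temperature, and a synthetic subdiffusive MSD): the magnitude of G* at frequency 1/tau follows from the MSD value and its local logarithmic slope, from which approximate storage and loss moduli are separated.

      import numpy as np
      from scipy.special import gamma

      kB, T, a = 1.380649e-23, 298.0, 0.5e-6   # J/K, K, tracer radius in m (assumed)

      def gser_moduli(tau, msd):
          """tau: lag times (s); msd: mean-squared displacement (m^2)."""
          alpha = np.gradient(np.log(msd), np.log(tau))    # local log-log slope
          omega = 1.0 / tau
          g_mag = kB * T / (np.pi * a * msd * gamma(1.0 + alpha))
          g_storage = g_mag * np.cos(np.pi * alpha / 2.0)
          g_loss = g_mag * np.sin(np.pi * alpha / 2.0)
          return omega, g_storage, g_loss

      tau = np.logspace(-2, 1, 50)
      msd = 4e-15 * tau**0.7                               # synthetic subdiffusive MSD
      omega, g_storage, g_loss = gser_moduli(tau, msd)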

  7. Aref's chaotic orbits tracked by a general ellipsoid using 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Shui, Pei; Popinet, Stéphane; Govindarajan, Rama; Valluri, Prashant

    2015-11-01

    The motion of an ellipsoidal solid in an ideal fluid has been shown to be chaotic (Aref, 1993) in the limit of non-integrability of Kirchhoff's equations (Kozlov & Oniscenko, 1982). On the other hand, the particle can stop moving when the damping viscous force is strong enough. We present numerical evidence using our in-house immersed solid solver for 3D chaotic motion of a general ellipsoidal solid and suggest criteria for triggering such motion. Our immersed solid solver functions under the framework of the Gerris flow package of Popinet et al. (2003). This solver, the Gerris Immersed Solid Solver (GISS), resolves six-degree-of-freedom motion of immersed solids of arbitrary geometry and number. We validate our results against the solution of Kirchhoff's equations. The study also shows that the translational/rotational energy ratio plays the key role in the motion pattern, while the particle geometry and the density ratio between the solid and the fluid also have some influence on the chaotic behaviour. Along with several other benchmark cases for viscous flows, we propose prediction of chaotic Aref's orbits as a key benchmark test case for immersed boundary/solid solvers.

  8. Validation of INSAT-3D atmospheric motion vectors for monsoon 2015

    NASA Astrophysics Data System (ADS)

    Sharma, Priti; Rani, S. Indira; Das Gupta, M.

    2016-05-01

    The Atmospheric Motion Vector (AMV) over the Indian Ocean and the surrounding region is one of the most important sources of tropospheric wind information assimilated in numerical weather prediction (NWP) systems. Earlier studies showed that the quality of AMVs from the Indian geostationary satellite Kalpana-1 was not comparable to that of other geostationary satellites over this region, and hence they were not used in NWP systems. The Indian satellite INSAT-3D was successfully launched on July 26, 2013 with an upgraded imaging system compared to that of the previous Indian satellite Kalpana-1. INSAT-3D has a middle-infrared band (3.80 - 4.00 μm) capable of night-time imaging of low clouds and fog. Three consecutive images at 30-minute intervals are used to derive the AMVs. A new height-assignment scheme (using the NWP first guess and replacing the old empirical GA method) along with a modified quality-control scheme was implemented for deriving INSAT-3D AMVs. In this paper an attempt has been made to validate these AMVs against in-situ observations as well as against NCMRWF's NWP first guess for monsoon 2015. The AMVs are subdivided into three pressure layers in the vertical, viz. low (1000 - 700 hPa), middle (700 - 400 hPa) and high (400 - 100 hPa), for validation purposes. Several statistics, viz. normalized root mean square vector difference, biases, etc., have been computed over different latitudinal belts. Results show that the general mean monsoon circulation, along with all the transient monsoon systems, is well captured by the INSAT-3D AMVs, and that the error statistics (e.g., RMSE) of the INSAT-3D AMVs are now comparable to those of other geostationary satellites.
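
    The validation statistics named above (root-mean-square vector difference, bias, and their normalized forms) are straightforward to compute once the AMVs are collocated with the reference winds. A minimal sketch, assuming collocation and binning by pressure layer are done elsewhere; the function name and variable names are illustrative, not from the paper.

    ```python
    import numpy as np

    def amv_statistics(u_amv, v_amv, u_obs, v_obs):
        """Common AMV validation statistics against collocated reference winds
        (e.g. radiosondes or an NWP first guess). All inputs in m/s."""
        du, dv = u_amv - u_obs, v_amv - v_obs
        rmsvd = np.sqrt(np.mean(du**2 + dv**2))       # root-mean-square vector difference
        spd_amv = np.hypot(u_amv, v_amv)
        spd_obs = np.hypot(u_obs, v_obs)
        speed_bias = np.mean(spd_amv - spd_obs)       # mean speed bias
        nrmsvd = rmsvd / np.mean(spd_obs)             # normalised RMSVD
        return rmsvd, speed_bias, nrmsvd
    ```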

  9. Forward-looking infrared 3D target tracking via combination of particle filter and SIFT

    NASA Astrophysics Data System (ADS)

    Li, Xing; Cao, Zhiguo; Yan, Ruicheng; Li, Tuo

    2013-10-01

    Aiming at the problem of tracking 3D targets in forward-looking infrared (FLIR) imagery, this paper proposes a high-accuracy, robust tracking algorithm based on SIFT and particle filtering. The main contribution of this paper is a new method for estimating the affine transformation matrix parameters based on the Monte Carlo framework of the particle filter. First, we extract SIFT features from the infrared image and calculate the initial affine transformation matrix from the optimal candidate key points. Then we take the affine transformation parameters as particles and use an SIR (Sequential Importance Resampling) particle filter to estimate the best position, thus implementing our algorithm. The experiments demonstrate that our algorithm is robust and highly accurate.
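
    As a rough illustration of the SIR step described above, the sketch below runs a generic Sequential Importance Resampling particle filter over six affine parameters. The likelihood function, random-walk noise levels, and the name sir_affine_tracker are placeholders; the SIFT-based measurement model of the paper is not reproduced here.

    ```python
    import numpy as np

    def sir_affine_tracker(likelihood, init_params, n_particles=500,
                           noise_std=(0.01, 0.01, 0.01, 0.01, 1.0, 1.0), n_frames=100):
        """Generic SIR particle filter over 6 affine parameters [a, b, c, d, tx, ty].

        `likelihood(frame_idx, params)` is a user-supplied function returning an
        (unnormalised) measurement likelihood, e.g. derived from the reprojection
        error of matched keypoints; it is a placeholder here."""
        rng = np.random.default_rng(0)
        particles = np.tile(np.asarray(init_params, float), (n_particles, 1))
        weights = np.full(n_particles, 1.0 / n_particles)
        estimates = []
        for k in range(n_frames):
            # Predict: random-walk dynamics on the affine parameters
            particles += rng.normal(0.0, noise_std, particles.shape)
            # Update: importance weights from the measurement likelihood
            weights = np.array([likelihood(k, p) for p in particles])
            weights = weights / weights.sum()
            # Estimate: weighted mean of the particles
            estimates.append(weights @ particles)
            # Resample (multinomial) to avoid degeneracy
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
        return np.array(estimates)
    ```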

  10. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-07-01

    Recently, tremendous improvements have been achieved in the precision of localization of single fluorescent molecules, allowing localization and tracking of biomolecules at the nm level. Since the behaviour of proteins and biological molecules is tightly influenced by the cell's environment, a growing number of microscopy techniques are moving from in vitro to live-cell experiments. Observing both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images and high temporal resolution (of the order of milliseconds). To satisfy these requirements we developed an automated routine that allows 3D tracking of single fluorescent molecules in living cells with nanometer accuracy, by exploiting the properties of the point-spread-function of out-of-focus Quantum Dots bound to the protein of interest.

  11. Ultrasound image-based respiratory motion tracking

    NASA Astrophysics Data System (ADS)

    Hwang, Youngkyoo; Kim, Jung-Bae; Kim, Yong Sun; Bang, Won-Chul; Kim, James D. K.; Kim, ChangYeong

    2012-03-01

    Respiratory motion tracking is an issue for MR/CT imaging and for noninvasive therapies such as HIFU and radiotherapy when these imaging or therapy technologies are applied to moving organs such as the liver, kidney or pancreas. Currently, bulky and burdensome devices placed externally on the skin are used to estimate the respiratory motion of an organ; they estimate organ motion indirectly from skin motion rather than from the organ itself. In this paper, we propose a system that directly measures the motion of the organ itself using only ultrasound images. Our system automatically selects a window in the image sequence, called the feature window, which allows respiratory motion to be measured robustly even in noisy ultrasound images. The organ's displacement in each ultrasound image is calculated directly through the feature window. The method is very convenient to use since it exploits a conventional ultrasound probe. In this paper, we show that our proposed method can robustly extract the respiratory motion signal regardless of the reference frame. It is superior to other image-based methods such as Mutual Information (MI) or Correlation Coefficient (CC), which are sensitive to the choice of reference frame. Furthermore, our proposed method gives clear information about the phase of the respiratory cycle (e.g., inspiration or expiration), since it calculates the organ's actual displacement rather than a similarity measure such as MI or CC.
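
    One simple way to realize the displacement measurement described above is normalized cross-correlation of a fixed feature window against each incoming frame. The sketch below uses OpenCV template matching for this; the automatic feature-window selection that the paper emphasizes is not reproduced, and the function name track_displacement is an assumption.

    ```python
    import cv2
    import numpy as np

    def track_displacement(frames, window_rect):
        """Track the in-plane displacement of a fixed feature window.

        frames      : list of 2D grayscale ultrasound images (numpy arrays)
        window_rect : (x, y, w, h) of the feature window in the first frame
        Returns per-frame (dx, dy) displacements in pixels relative to frame 0."""
        x, y, w, h = window_rect
        template = frames[0][y:y + h, x:x + w].astype(np.float32)
        displacements = []
        for frame in frames:
            score = cv2.matchTemplate(frame.astype(np.float32), template,
                                      cv2.TM_CCOEFF_NORMED)
            _, _, _, (bx, by) = cv2.minMaxLoc(score)   # best-match top-left corner
            displacements.append((bx - x, by - y))
        return np.array(displacements)
    ```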

  12. 3D Fluorescent and Reflective Imaging of Whole Stardust Tracks in Aerogel

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2011-11-07

    The NASA Stardust mission returned to Earth in 2006 with the cometary collector having captured over 1,000 particles in an aerogel medium at a relative velocity of 6.1 km/s. Particles captured in aerogel were heated, disaggregated and dispersed along 'tracks', or cavities, in the aerogel, with each track representing the history of one capture event. It has been our focus to chemically and morphologically characterize whole tracks in three dimensions, using solely non-destructive methods. To this end, we have used a variety of methods: 3D Laser Scanning Confocal Microscopy (LSCM), synchrotron X-ray fluorescence (SXRF), and synchrotron X-ray diffraction (SXRD). In the past months we have developed two new techniques to aid in data collection. (1) We have received a new confocal microscope which enables autofluorescence and spectral imaging of aerogel samples. (2) We have developed a stereo-SXRF technique to chemically identify large grains in SXRF maps in 3-space. The addition of both of these methods to our analytic abilities provides a greater understanding of the mechanisms and results of track formation.

  13. Quantifying the 3D Odorant Concentration Field Used by Actively Tracking Blue Crabs

    NASA Astrophysics Data System (ADS)

    Webster, D. R.; Dickman, B. D.; Jackson, J. L.; Weissburg, M. J.

    2007-11-01

    Blue crabs and other aquatic organisms locate food and mates by tracking turbulent odorant plumes. The odorant concentration fluctuates unpredictably due to turbulent transport, and many characteristics of the fluctuation pattern have been hypothesized as useful cues for orienting to the odorant source. To make a direct linkage between tracking behavior and the odorant concentration signal, we developed a measurement system based on the laser-induced fluorescence technique to quantify the instantaneous 3D concentration field surrounding actively tracking blue crabs. The data suggest a correlation between upstream walking speed and the concentration of the odorant signal arriving at the antennule chemosensors, which are located near the mouth region. More specifically, we note an increase in upstream walking speed when high-concentration bursts arrive at the antennules' location. We also test hypotheses regarding the ability of blue crabs to steer relative to the plume centerline based on the signal contrast between the chemosensors located on their leg appendages. These chemosensors are located much closer to the substrate compared to the antennules and are separated by the width of the blue crab. In this case, it appears that blue crabs use the bilateral signal comparison to track along the edge of the plume.

  14. Broadband Near-Field Ground Motion Simulations in 3D Scattering Media

    NASA Astrophysics Data System (ADS)

    Imperatori, Walter; Mai, Martin

    2013-04-01

    The heterogeneous nature of Earth's crust is manifested in the scattering of propagating seismic waves. In recent years, different techniques have been developed to include such phenomena in broadband ground-motion calculations, considering scattering as either a semi-stochastic or a purely stochastic process. In this study, we simulate broadband (0-10 Hz) ground motions with a 3D finite-difference wave-propagation solver in several 3D media characterized by Von Karman correlation functions with different correlation lengths and standard deviation values. Our goal is to investigate the scattering characteristics and their influence on the seismic wave-field at short and intermediate distances from the source in terms of ground motion parameters. We also examine other relevant scattering-related phenomena, such as the loss of radiation pattern and the directivity breakdown. We first simulate broadband ground motions for a point-source characterized by a classic omega-squared spectrum model. Fault finiteness is then introduced by means of a Haskell-type source model presenting both sub-shear and super-shear rupture speeds. Results indicate that scattering plays an important role in ground motion even at short distances from the source, where source effects are thought to be dominating. In particular, peak ground motion parameters can be affected even at relatively low frequencies, implying that earthquake ground-motion simulations should include scattering also for PGV calculations. At the same time, we find a gradual loss of the source signature in the 2-5 Hz frequency range, together with a distortion of the Mach cones in the case of super-shear rupture. For more complex source models and a truly heterogeneous Earth, these effects may occur even at lower frequencies. Our simulations suggest that Von Karman correlation functions with correlation lengths between several hundred meters and a few kilometers, a Hurst exponent around 0.3 and a standard deviation in the 5-10% range
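
    For readers unfamiliar with Von Karman media, the sketch below generates a 2D random-perturbation slice by spectral filtering, assuming the commonly used power-spectrum form P(k) ∝ (1 + k²a²)^-(H + N/2) with N = 2. The normalization, grid parameters, and function name are illustrative assumptions and not the setup used in the study, which works in 3D.

    ```python
    import numpy as np

    def von_karman_slice(n=512, dx=100.0, corr_len=2000.0, hurst=0.3, sigma=0.05, seed=1):
        """Spectral synthesis of a 2D random velocity-perturbation field with a
        Von Karman-type power spectrum (2D slice only, for brevity).

        dx and corr_len are in metres; sigma is the fractional standard deviation."""
        rng = np.random.default_rng(seed)
        kx = np.fft.fftfreq(n, d=dx) * 2 * np.pi           # angular wavenumbers
        kmag = np.sqrt(kx[:, None]**2 + kx[None, :]**2)
        psd = (1.0 + (kmag * corr_len)**2) ** -(hurst + 1.0)
        noise = np.fft.fft2(rng.standard_normal((n, n)))   # white Gaussian noise
        field = np.real(np.fft.ifft2(noise * np.sqrt(psd)))
        field *= sigma / field.std()                       # rescale to the target std. deviation
        return field   # apply as v = v0 * (1 + field) on a background model
    ```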

  15. Mapping dynamic mechanical remodeling in 3D tumor models via particle tracking microrheology

    NASA Astrophysics Data System (ADS)

    Jones, Dustin P.; Hanna, William; Celli, Jonathan P.

    2015-03-01

    Particle tracking microrheology (PTM) has recently been employed as a non-destructive way to longitudinally track physical changes in 3D pancreatic tumor co-culture models concomitant with tumor growth and invasion into the extracellular matrix (ECM). While the primary goal of PTM is to quantify local viscoelasticity via the Generalized Stokes-Einstein Relation (GSER), a more simplified way of describing local tissue mechanics lies in the tabulation and subsequent visualization of the spread of probe displacements in a given field of view. Proper analysis of this largely untapped byproduct of standard PTM has the potential to yield valuable insight into the structure and integrity of the ECM. Here, we use clustering algorithms in R to analyze the trajectories of probes in 3D pancreatic tumor/fibroblast co-culture models in an attempt to differentiate between probes that are effectively constrained by the ECM and/or contractile traction forces, and those that exhibit uninhibited mobility in local water-filled pores. We also discuss the potential pitfalls of this method. Accurately and reproducibly quantifying the boundary between these two categories of probe behavior could result in an effective method for measuring the average pore size in a given region of ECM. Such a tool could prove useful for studying stromal depletion, physical impedance to drug delivery, and degradation due to cellular invasion.

  16. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most
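
    For reference, the conventional speckle amplitude models that the proposed likelihood generalizes are usually written in the standard textbook forms below (the paper's generalized model itself is not reproduced here):

    ```latex
    % Conventional speckle amplitude models, for x > 0
    \text{Rayleigh:}\quad p(x) = \frac{x}{\sigma^2}\exp\!\left(-\frac{x^2}{2\sigma^2}\right)
    \qquad
    \text{Nakagami:}\quad p(x) = \frac{2m^m}{\Gamma(m)\,\Omega^m}\,x^{2m-1}\exp\!\left(-\frac{m x^2}{\Omega}\right)
    \qquad
    \text{Gamma:}\quad p(x) = \frac{x^{k-1}}{\Gamma(k)\,\theta^{k}}\exp\!\left(-\frac{x}{\theta}\right)
    ```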

  17. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  18. Interactive Motion Planning for Steerable Needles in 3D Environments with Obstacles

    PubMed Central

    Patil, Sachin; Alterovitz, Ron

    2011-01-01

    Bevel-tip steerable needles for minimally invasive medical procedures can be used to reach clinical targets that are behind sensitive or impenetrable areas and are inaccessible to straight, rigid needles. We present a fast algorithm that can compute motion plans for steerable needles to reach targets in complex, 3D environments with obstacles at interactive rates. The fast computation makes this method suitable for online control of the steerable needle based on 3D imaging feedback and allows physicians to interactively edit the planning environment in real-time by adding obstacle definitions as they are discovered or become relevant. We achieve this fast performance by using a Rapidly Exploring Random Tree (RRT) combined with a reachability-guided sampling heuristic to alleviate the sensitivity of the RRT planner to the choice of the distance metric. We also relax the constraint of constant-curvature needle trajectories by relying on duty-cycling to realize bounded-curvature needle trajectories. These characteristics enable us to achieve orders of magnitude speed-up compared to previous approaches; we compute steerable needle motion plans in under 1 second for challenging environments containing complex, polyhedral obstacles and narrow passages. PMID:22294214
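
    The sketch below shows the basic RRT loop that the planner builds on, with spherical obstacles in a 3D box. It deliberately omits the paper's key ingredients (bounded-curvature steering via duty cycling and the reachability-guided sampling heuristic), so it is a generic illustration rather than the authors' planner; all names and parameters are assumptions.

    ```python
    import numpy as np

    def rrt(start, goal, obstacles, bounds, step=0.5, goal_bias=0.1,
            max_iters=5000, goal_tol=0.5, seed=0):
        """Minimal 3D RRT. `obstacles` is a list of (centre, radius) spheres;
        `bounds` is (lower_corner, upper_corner) of the workspace box."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        goal = np.asarray(goal, float)
        nodes, parents = [np.asarray(start, float)], [-1]

        def collides(p):
            return any(np.linalg.norm(p - c) <= r for c, r in obstacles)

        for _ in range(max_iters):
            # Sample a random configuration, occasionally biased toward the goal
            sample = goal if rng.random() < goal_bias else rng.uniform(lo, hi)
            nearest = min(range(len(nodes)),
                          key=lambda i: np.linalg.norm(nodes[i] - sample))
            direction = sample - nodes[nearest]
            new = nodes[nearest] + step * direction / (np.linalg.norm(direction) + 1e-12)
            if collides(new):
                continue
            nodes.append(new)
            parents.append(nearest)
            if np.linalg.norm(new - goal) < goal_tol:   # reached the goal region
                path, i = [], len(nodes) - 1
                while i != -1:
                    path.append(nodes[i]); i = parents[i]
                return path[::-1]
        return None   # no plan found within the iteration budget
    ```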

  19. Data acquisition electronics and reconstruction software for real time 3D track reconstruction within the MIMAC project

    NASA Astrophysics Data System (ADS)

    Bourrion, O.; Bosson, G.; Grignon, C.; Bouly, J. L.; Richer, J. P.; Guillaudin, O.; Mayet, F.; Billard, J.; Santos, D.

    2011-11-01

    Directional detection of non-baryonic Dark Matter requires 3D reconstruction of low-energy nuclear recoil tracks. A gaseous micro-TPC matrix, filled with either 3He, CF4 or C4H10, has been developed within the MIMAC project. Dedicated acquisition electronics and real-time track reconstruction software have been developed to monitor a 512-channel prototype. This self-triggered electronics uses embedded processing to reduce the data transfer to its useful part only, i.e. the decoded coordinates of hit tracks and the corresponding energy measurements. Acquisition software with on-line monitoring and 3D track reconstruction is also presented.

  20. New method for detection of complex 3D fracture motion - Verification of an optical motion analysis system for biomechanical studies

    PubMed Central

    2012-01-01

    Background Fracture-healing depends on interfragmentary motion. For improved osteosynthesis and fracture-healing, the micromotion between fracture fragments is undergoing intensive research. The detection of 3D micromotions at the fracture gap still presents a challenge for conventional tactile measurement systems. Optical measurement systems may be easier to use than conventional systems, but, as yet, cannot guarantee accuracy. The purpose of this study was to validate the optical measurement system PONTOS 5M for use in biomechanical research, including measurement of micromotion. Methods A standardized transverse fracture model was created to detect interfragmentary motions under axial loadings of up to 200 N. Measurements were performed using the optical measurement system and compared with a conventional high-accuracy tactile system consisting of 3 standard digital dial indicators (1 μm resolution; 5 μm error limit). Results We found that the deviation in the mean average motion detection between the systems was at most 5.3 μm, indicating that detection of micromotion was possible with the optical measurement system. Furthermore, we could show two considerable advantages of using the optical measurement system: only with the optical system could interfragmentary motion be analyzed directly at the fracture gap, and the calibration of the optical system could be performed faster, more safely and more easily than that of the tactile system. Conclusion The PONTOS 5M optical measurement system appears to be a favorable alternative to previously used tactile measurement systems for biomechanical applications. Easy handling, combined with a high accuracy for 3D detection of micromotions (≤ 5 μm), suggests the likelihood of high user acceptance. This study was performed in the context of the deployment of a new implant (dynamic locking screw; Synthes, Oberdorf, Switzerland). PMID:22405047

  1. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2009-03-19

    We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of Laser Confocal Scanning Microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 μm. It has been our goal to perform a total non-destructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-Ray Fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 μm/pixel, without the use of oil-based lenses. A full textural analysis on track No.82 is presented here as well as analysis of 6 additional tracks contained within 3 keystones (No.128, No.129 and No.140). We present a method of removing the axial distortion inherent in LCSM images, by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information, while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analysis.

  2. Feasibility Study for Ballet E-Learning: Automatic Composition System for Ballet "Enchainement" with Online 3D Motion Data Archive

    ERIC Educational Resources Information Center

    Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako

    2009-01-01

    This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…

  3. Segmentation and Tracking of Adherens Junctions in 3D for the Analysis of Epithelial Tissue Morphogenesis

    PubMed Central

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-01-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT) PMID:25884654

  4. 3D cardiac motion reconstruction from CT data and tagged MRI.

    PubMed

    Wang, Xiaoxu; Mihalef, Viorel; Qian, Zhen; Voros, Szilard; Metaxas, Dimitris

    2012-01-01

    In this paper we present a novel method for left ventricle (LV) endocardium motion reconstruction using high-resolution CT data and tagged MRI. High-resolution CT data provide anatomic details of the LV endocardial surface, such as the papillary muscles and trabeculae carneae, while tagged MRI provides better temporal resolution. The combination of these two imaging techniques gives a better understanding of left ventricle motion. The high-resolution CT images are segmented with the mean-shift method to generate the LV endocardium mesh. A meshless deformable model built from the high-resolution endocardial surface of the CT data is fitted to the tagged MRI of the same phase. The 3D deformation of the myocardium is computed with Lagrangian dynamics and local Laplacian deformation. The segmented inner surface of the left ventricle is compared with a picture of the heart's inner surface and shows high agreement. The papillary muscles are attached to the inner surface by their roots, and the free wall of the left ventricle's inner surface is covered with trabeculae carneae. The deformation of the heart wall and the papillary muscles over the first half of the cardiac cycle is presented. The motion reconstruction results are very close to the live heart video. PMID:23366825

  5. 3D Cardiac Motion Reconstruction from CT Data and Tagged MRI

    PubMed Central

    Wang, Xiaoxu; Mihalef, Viorel; Qian, Zhen; Voros, Szilard; Metaxas, Dimitris

    2016-01-01

    In this paper we present a novel method for left ventricle (LV) endocardium motion reconstruction using high-resolution CT data and tagged MRI. High-resolution CT data provide anatomic details of the LV endocardial surface, such as the papillary muscles and trabeculae carneae, while tagged MRI provides better temporal resolution. The combination of these two imaging techniques gives a better understanding of left ventricle motion. The high-resolution CT images are segmented with the mean-shift method to generate the LV endocardium mesh. A meshless deformable model built from the high-resolution endocardial surface of the CT data is fitted to the tagged MRI of the same phase. The 3D deformation of the myocardium is computed with Lagrangian dynamics and local Laplacian deformation. The segmented inner surface of the left ventricle is compared with a picture of the heart's inner surface and shows high agreement. The papillary muscles are attached to the inner surface by their roots, and the free wall of the left ventricle's inner surface is covered with trabeculae carneae. The deformation of the heart wall and the papillary muscles over the first half of the cardiac cycle is presented. The motion reconstruction results are very close to the live heart video. PMID:23366825

  6. 3D hand motion trajectory prediction from EEG mu and beta bandpower.

    PubMed

    Korik, A; Sosnik, R; Siddique, N; Coyle, D

    2016-01-01

    A motion trajectory prediction (MTP)-based brain-computer interface (BCI) aims to reconstruct the three-dimensional (3D) trajectory of upper limb movement using electroencephalography (EEG). The most common MTP BCI employs a time series of bandpass-filtered EEG potentials (referred to here as the potential time-series, PTS, model) for reconstructing the trajectory of a 3D limb movement using multiple linear regression. These studies report the best accuracy when a 0.5-2 Hz bandpass filter is applied to the EEG. In the present study, we show that the spatiotemporal power distributions of the theta (4-8 Hz), mu (8-12 Hz), and beta (12-28 Hz) bands are more robust for movement trajectory decoding when the standard PTS approach is replaced with time-varying bandpower values of a specified EEG band, i.e., with a bandpower time-series (BTS) model. A comprehensive analysis comprising three subjects performing pointing movements with the dominant right arm toward six targets is presented. Our results show that the BTS model produces significantly higher MTP accuracy (R~0.45) compared to the standard PTS model (R~0.2). In the case of the BTS model, the highest accuracy was achieved across the three subjects typically in the mu (8-12 Hz) and low-beta (12-18 Hz) bands. Additionally, we highlight a limitation of the commonly used PTS model and illustrate how this model may be suboptimal for decoding motion-trajectory-relevant information. Although our results, showing that the mu and beta bands are prominent for MTP, are not in line with other MTP studies, they are consistent with the extensive literature on classical multiclass sensorimotor rhythm-based BCI studies (classification of limbs as opposed to motion trajectory prediction), which report the best accuracy of imagined limb movement classification using power values of mu and beta frequency bands. The methods proposed here provide a positive step toward noninvasive decoding of imagined 3D hand movements for movement-free BCIs
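
    The BTS model described above reduces to two steps: form time-varying bandpower features per channel, then fit a multiple linear regression from lagged features to each kinematic coordinate. A minimal sketch of those two steps is below; the filter order, smoothing window, lag count, and function names are arbitrary assumptions, not the paper's settings.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpower_series(eeg, fs, band, smooth_s=0.5):
        """Time-varying bandpower for each EEG channel (channels x samples):
        bandpass-filter, square, then smooth with a moving average."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)
        power = filtered ** 2
        win = max(1, int(smooth_s * fs))
        kernel = np.ones(win) / win
        return np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="same"), 1, power)

    def fit_mtp(features, trajectory, n_lags=10):
        """Multiple linear regression from lagged bandpower features to one
        kinematic coordinate (e.g. hand x-position), as in a BTS-style decoder."""
        n_ch, n_t = features.shape
        X, y = [], []
        for t in range(n_lags, n_t):
            X.append(features[:, t - n_lags:t].ravel())
            y.append(trajectory[t])
        X = np.column_stack([np.ones(len(X)), np.array(X)])   # add an intercept column
        weights, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
        return weights
    ```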

  7. Breakup of Finite-Size Colloidal Aggregates in Turbulent Flow Investigated by Three-Dimensional (3D) Particle Tracking Velocimetry.

    PubMed

    Saha, Debashish; Babler, Matthaus U; Holzner, Markus; Soos, Miroslav; Lüthi, Beat; Liberzon, Alex; Kinzelbach, Wolfgang

    2016-01-12

    Aggregates grown in mild shear flow are released, one at a time, into homogeneous isotropic turbulence, where their motion and intermittent breakup are recorded by three-dimensional particle tracking velocimetry (3D-PTV). The aggregates have an open structure with a fractal dimension of ∼2.2, and their size is 1.4 ± 0.4 mm, which is large compared to the Kolmogorov length scale (η = 0.15 mm). 3D-PTV of flow tracers allows for the simultaneous measurement of aggregate trajectories and the full velocity gradient tensor along their pathlines, which enables us to access the Lagrangian stress history of individual breakup events. From these data, we found no consistent pattern that relates breakup to the local flow properties at the point of breakup. Also, the correlation between the aggregate size and both shear stress and normal stress at the location of breakage is found to be weaker when compared with the correlation between size and drag stress. The analysis suggests that the aggregates are mostly broken due to the accumulation of the drag stress over a time lag on the order of the Kolmogorov time scale. This finding is explained by the fact that the aggregates are large, which gives their motion inertia and increases the time for stress propagation inside the aggregate. Furthermore, it is found that the scaling of the largest fragment and the accumulated stress at breakup follows an earlier established power law, i.e., dfrag ∼ σ^(-0.6), obtained from laminar nozzle experiments. This indicates that, despite the large size and the different type of hydrodynamic stress, the microscopic mechanism causing breakup is consistent over a wide range of aggregate size and stress magnitude. PMID:26646289

  8. Exploring single-molecule interactions through 3D optical trapping and tracking: From thermal noise to protein refolding

    NASA Astrophysics Data System (ADS)

    Wong, Wesley Philip

    The focus of this thesis is the development and application of a novel technique for investigating the structure and dynamics of weak interactions between and within single molecules. This approach is designed to explore unusual features in bi-directional transitions near equilibrium. The basic idea is to infer molecular events by observing changes in the three-dimensional Brownian fluctuations of a functionalized microsphere held weakly near a reactive substrate. Experimentally, I have developed a unique optical tweezers system that combines an interference technique for accurate 3D tracking (~1 nm vertically and ~2-3 nm laterally) with a continuous autofocus system which stabilizes the trap height to within 1-2 nm over hours. A number of different physical and biological systems were investigated with this instrument. Data interpretation was assisted by a multi-scale Brownian Dynamics simulation that I have developed. I have explored the 3D signatures of different molecular tethers, distinguishing between single and multiple attachments, as well as between stiff and soft linkages. In addition, I have developed a technique for measuring the force-dependent compliance of molecular tethers from thermal noise fluctuations and demonstrated this with a short ssDNA oligomer. Another practical approach that I have developed for extracting information from fluctuation measurements is Inverse Brownian Dynamics, which yields the underlying potential of mean force and position dependent diffusion coefficient from the Brownian motion of a particle. I have also developed a new force calibration method that takes into account video motion blur, and that uses this information to measure bead dynamics. Perhaps most significantly, I have made the first direct observations of the refolding of spectrin repeats under mechanical force, and investigated the force-dependent kinetics of this transition.

  9. A 3D Vector/Scalar Visualization and Particle Tracking Package

    1999-08-19

    BOILERMAKER is an interactive visualization system consisting of three components: a visualization component, a particle tracking component, and a communication layer. The software, to date, has been used primarily in the visualization of vector and scalar fields associated with computational fluid dynamics (CFD) models of flue gas flows in industrial boilers and incinerators. Users can interactively request and toggle static vector fields, dynamic streamlines, and flowing vector fields. In addition, the user can interactively place injector nozzles on boiler walls and visualize massed, evaporating sprays emanating from them. Some characteristics of the spray can be adjusted from within the visualization environment including spray shape and particle size. Also included with this release is software that supports 3D menu capabilities, scrollbars, communication and navigation.

  10. Imaging SPR combined with stereoscopic 3D tracking to study barnacle cyprid-surface interactions

    NASA Astrophysics Data System (ADS)

    Maleshlijski, S.; Sendra, G. H.; Aldred, N.; Clare, A. S.; Liedberg, B.; Grunze, M.; Ederth, T.; Rosenhahn, A.

    2016-01-01

    Barnacle larvae (cyprids) explore surfaces to identify suitable settlement sites. This process is selective, and cyprids respond to numerous surface cues. To better understand the settlement process, it is desirable to simultaneously monitor both the surface exploration behavior and any close interactions with the surface. Stereoscopic 3D tracking of the cyprids provides quantitative access to surface exploration and pre-settlement rituals. Imaging surface plasmon resonance (SPR) reveals any interactions with the surfaces, such as surface inspection during bipedal walking and deposition of temporary adhesives. We report on a combination of both techniques to bring together information on swimming behavior in the vicinity of the interface and physical interactions of the cyprid with the surface. The technical requirements are described, and we applied the setup to cyprids of Balanus amphitrite. Initial data shows the applicability of the combined instrument to correlate exploration and touchdown events on surfaces with different chemical termination.

  11. A 3D Vector/Scalar Visualization and Particle Tracking Package

    SciTech Connect

    Freitag, Lori; Disz, Terry; Papka, Mike; Heath, Daniel; Diachin, Darin; Herzog, Jim; Ryan, and Bob

    1999-08-19

    BOILERMAKER is an interactive visualization system consisting of three components: a visualization component, a particle tracking component, and a communication layer. The software, to date, has been used primarily in the visualization of vector and scalar fields associated with computational fluid dynamics (CFD) models of flue gas flows in industrial boilers and incinerators. Users can interactively request and toggle static vector fields, dynamic streamlines, and flowing vector fields. In addition, the user can interactively place injector nozzles on boiler walls and visualize massed, evaporating sprays emanating from them. Some characteristics of the spray can be adjusted from within the visualization environment including spray shape and particle size. Also included with this release is software that supports 3D menu capabilities, scrollbars, communication and navigation.

  12. Real-time motion- and B0-correction for LASER-localized spiral-accelerated 3D-MRSI of the brain at 3T

    PubMed Central

    Bogner, Wolfgang; Hess, Aaron T; Gagoski, Borjan; Tisdall, M. Dylan; van der Kouwe, Andre J.W.; Trattnig, Siegfried; Rosen, Bruce; Andronesi, Ovidiu C

    2013-01-01

    The full potential of magnetic resonance spectroscopic imaging (MRSI) is often limited by localization artifacts, motion-related artifacts, scanner instabilities, and long measurement times. Localized adiabatic selective refocusing (LASER) provides accurate B1-insensitive spatial excitation even at high magnetic fields. Spiral encoding accelerates MRSI acquisition and thus enables 3D coverage without compromising spatial resolution. Real-time position- and shim/frequency-tracking using MR navigators corrects motion- and scanner-instability-related artifacts. Each of these three advanced MRI techniques provides superior MRSI data compared to commonly used methods. In this work, we integrated these three promising approaches in a single pulse sequence. Real-time correction of motion, shim, and frequency drifts using volumetric dual-contrast echo-planar-imaging-based navigators was implemented in an MRSI sequence that uses low-power gradient-modulated short-echo-time LASER localization and time-efficient spiral readouts, in order to provide fast and robust 3D-MRSI in the human brain at 3T. The proposed sequence was demonstrated to be insensitive to motion- and scanner-drift-related degradations of MRSI data in both phantoms and volunteers. Motion and scanner drift artifacts were eliminated and excellent spectral quality was recovered in the presence of strong movement. Our results confirm the expected benefits of combining a spiral 3D-LASER-MRSI sequence with real-time correction. The new sequence provides accurate, fast, and robust 3D metabolic imaging of the human brain at 3T. This will further facilitate the use of 3D-MRSI for neuroscience and clinical applications. PMID:24201013

  13. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    SciTech Connect

    Brix, Lau; Ringgaard, Steffen; Sørensen, Thomas Sangild; Poulsen, Per Rugaard

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Results: Axial, sagittal
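
    Step (3) above amounts to scanning a library of templates, each with a known through-plane offset, for the best normalized cross-correlation with the incoming 2D image. A minimal sketch of that search using OpenCV follows; the construction of the template library from the 3D scan is assumed to happen elsewhere, and the names used here are illustrative rather than from the paper.

    ```python
    import cv2
    import numpy as np

    def localize_3d(image2d, template_library):
        """Pick the template and in-plane position with the highest normalised
        cross-correlation; each library entry carries its known through-plane offset.

        template_library : list of (template_2d, through_plane_offset_mm)
        Returns (row_px, col_px, through_plane_offset_mm) of the best match."""
        best_score, best_match = -np.inf, None
        img = image2d.astype(np.float32)
        for templ, offset in template_library:
            score = cv2.matchTemplate(img, templ.astype(np.float32),
                                      cv2.TM_CCOEFF_NORMED)
            _, max_val, _, (col, row) = cv2.minMaxLoc(score)   # max_loc is (x, y)
            if max_val > best_score:
                best_score, best_match = max_val, (row, col, offset)
        return best_match
    ```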

  14. Application of 3D hydrodynamic and particle tracking models for better environmental management of finfish culture

    NASA Astrophysics Data System (ADS)

    Moreno Navas, Juan; Telfer, Trevor C.; Ross, Lindsay G.

    2011-04-01

    Hydrographic conditions, and particularly current speeds, have a strong influence on the management of fish cage culture. These hydrodynamic conditions can be used to predict particle movement within the water column and the results used to optimise environmental conditions for effective site selection, setting of environmental quality standards, waste dispersion, and potential disease transfer. To this end, a 3D hydrodynamic model, MOHID, has been coupled to a particle tracking model to study the effects of mean current speed, quiescent water periods and bulk water circulation in Mulroy Bay, Co. Donegal, Ireland, an Irish fjard (shallow fjordic system) important to the aquaculture industry. A Lagrangian method simulated the instantaneous release of "particles" emulating the discharge from finfish cages to show the behaviour of waste in terms of water circulation and water exchange. The 3D spatial models were used to identify areas of mixed and stratified water using a version of the Simpson-Hunter criteria, and to use this in conjunction with models of current flow for appropriate site selection for salmon aquaculture. The modelled outcomes for stratification were in good agreement with the direct measurements of water column stratification based on observed density profiles. Calculations of the Simpson-Hunter tidal parameter indicated that most of Mulroy Bay was potentially stratified with a well-mixed region over the shallow channels where the water is faster flowing. The fjard was characterised by areas of both very low and high mean current speeds, with some areas having long periods of quiescent water. The residual current and the particle tracking animations created through the models revealed an anticlockwise eddy that may influence waste dispersion and the potential for disease transfer among salmon cages, and which ensures that the retention time of waste substances from the cages is extended. The hydrodynamic model results were incorporated into the ArcView TM GIS
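
    The Simpson-Hunter criterion mentioned above is based on the ratio of water depth to the cube of the tidal current amplitude. A minimal sketch is shown below; the critical value separating mixed from stratified water is calibrated empirically and is deliberately not fixed here, and the function name is an assumption.

    ```python
    import numpy as np

    def simpson_hunter(h, u):
        """Simpson-Hunter stratification parameter S = log10(h / u^3),
        with h the water depth (m) and u the amplitude of the depth-mean
        tidal current (m/s). Low S suggests tidally mixed water, high S
        suggests stratification; the critical contour is set empirically."""
        return np.log10(h / u**3)
    ```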

  15. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

    Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and also in conjunction with other modalities like CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found to be useful in real clinical settings, it is difficult to augment their functionality and integrate them in versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (i.e. dual-view visualization, registration, real-time tracking, segmentation, etc.) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.

  16. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Lan, Tian; Ni, Guoqiang

    2013-07-01

    A motion error model of the 3D coordinates is established, and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence on the positioning errors derived from laser point cloud spacing is small. In the model, the positioning errors follow a simple harmonic vibration whose amplitude envelope gradually decreases as the vibration frequency increases. When the number of vibration periods is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is smaller than the in-plane error, and within the plane the error in the scanning direction is smaller than the error in the flight direction. The conclusions are verified through the analysis of flight test data.

  17. Modelling the 3D morphology and proper motions of the planetary nebula NGC 6302

    NASA Astrophysics Data System (ADS)

    Uscanga, L.; Velázquez, P. F.; Esquivel, A.; Raga, A. C.; Boumis, P.; Cantó, J.

    2014-08-01

    We present 3D hydrodynamical simulations of an isotropic fast wind interacting with a previously ejected toroidally shaped slow wind in order to model both the observed morphology and the kinematics of the planetary nebula (PN) NGC 6302. This source, also known as the Butterfly nebula, presents one of the most complex morphologies ever observed in PNe. From our numerical simulations, we have obtained an intensity map for the Hα emission to make a comparison with the Hubble Space Telescope (HST) observations of this object. We have also carried out a proper motion (PM) study from our numerical results, in order to compare with previous observational studies. We have found that the two-interacting-stellar-winds model reproduces the morphology of NGC 6302 well, and while the PMs in the models are similar to the observations, our results suggest that an acceleration mechanism is needed to explain the Hubble-type expansion found in the HST observations.

  18. Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish

    PubMed Central

    Maaswinkel, Hans; Zhu, Liqun; Weng, Wei

    2013-01-01

    Like many aquatic animals, zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about vertical and horizontal distribution of the zebrafish, about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle) and it provides the data necessary to calculate parameters for social cohesions when testing shoals. PMID:24336189

  19. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers of all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated -- from providing increased and potentially stealthy surveillance and removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (original data format) and then constructs a 3D model (new data medium) using structure-from-motion. The generated 3D model provides warfighters with additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data with Google Earth™ providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.
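
    As a rough illustration of the structure-from-motion step, the sketch below reconstructs a sparse point cloud from two video frames with OpenCV (feature matching, essential-matrix estimation, pose recovery, triangulation). It is a generic two-view sketch, not the system described above; the intrinsic matrix K is assumed to be known from calibration, and the scale of the reconstruction is arbitrary.

    ```python
    import cv2
    import numpy as np

    def two_view_points(img1, img2, K):
        """Sparse two-view reconstruction: the core step of an incremental
        structure-from-motion pipeline. K is the 3x3 camera intrinsic matrix."""
        orb = cv2.ORB_create(4000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Relative pose from the essential matrix (RANSAC for outlier rejection)
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
        P2 = K @ np.hstack([R, t])                          # second camera pose (up to scale)
        good = mask.ravel() > 0
        X = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
        return (X[:3] / X[3]).T                             # Nx3 sparse point cloud
    ```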

  20. The representation of moving 3-D objects in apparent motion perception.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2009-08-01

    In the present research, we investigated the depth information contained in the representations of apparently moving 3-D objects. By conducting three experiments, we measured the magnitude of representational momentum (RM) as an index of the consistency of an object's representation. Experiment 1A revealed that RM magnitude was greater when shaded, convex, apparently moving objects shifted to a flat circle than when they shifted to a shaded, concave, hemisphere. The difference diminished when the apparently moving objects were concave hemispheres (Experiment 1B). Using luminance-polarized circles, Experiment 2 confirmed that these results were not due to the luminance information of shading. Experiment 3 demonstrated that RM magnitude was greater when convex apparently moving objects shifted to particular blurred convex hemispheres with low-pass filtering than when they shifted to concave hemispheres. These results suggest that the internal object's representation in apparent motion contains incomplete depth information intermediate between that of 2-D and 3-D objects, particularly with regard to convexity information with low-spatial-frequency components. PMID:19633345

  1. Velocity and Density Models Incorporating the Cascadia Subduction Zone for 3D Earthquake Ground Motion Simulations

    USGS Publications Warehouse

    Stephenson, William J.

    2007-01-01

    INTRODUCTION In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2°N to 50°N latitude, and from about -122°W to -129°W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper-mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.

  2. 3D SMoSIFT: three-dimensional sparse motion scale invariant feature transform for activity recognition from RGB-D videos

    NASA Astrophysics Data System (ADS)

    Wan, Jun; Ruan, Qiuqi; Li, Wei; An, Gaoyun; Zhao, Ruizhen

    2014-03-01

    Human activity recognition based on RGB-D data has received more attention in recent years. We propose a spatiotemporal feature named three-dimensional (3D) sparse motion scale-invariant feature transform (SIFT) from RGB-D data for activity recognition. First, we build pyramids as a scale space for each RGB and depth frame, and then use the Shi-Tomasi corner detector and sparse optical flow to quickly detect and track robust keypoints around the motion pattern in the scale space. Subsequently, local patches around keypoints, which are extracted from RGB-D data, are used to build 3D gradient and motion spaces. Then SIFT-like descriptors are calculated on both 3D spaces, respectively. The proposed feature is invariant to scale, translation, and partial occlusions. More importantly, the proposed feature is fast to compute, so it is well-suited for real-time applications. We have evaluated the proposed feature under a bag-of-words model on three public RGB-D datasets: the one-shot learning Chalearn Gesture Dataset, Cornell Activity Dataset-60, and MSR Daily Activity 3D dataset. Experimental results show that the proposed feature outperforms other spatiotemporal features and is comparable to other state-of-the-art approaches, even though there is only one training sample for each class.
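
    The keypoint detection and sparse-optical-flow stage described above can be sketched with standard OpenCV calls (Shi-Tomasi corners plus pyramidal Lucas-Kanade). This is only the motion-extraction front end, not the full 3D SMoSIFT descriptor; the parameter values and function name are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    def track_keypoints(prev_gray, next_gray, max_corners=500):
        """Detect Shi-Tomasi corners in one frame and track them into the next
        frame with pyramidal Lucas-Kanade sparse optical flow."""
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=7)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1                 # keep only successfully tracked points
        p0 = pts.reshape(-1, 2)[ok]
        p1 = nxt.reshape(-1, 2)[ok]
        flow = p1 - p0                           # per-keypoint motion vectors
        return p0, flow
    ```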

  3. Probabilistic Seismic Hazard Maps for Seattle, Washington, Based on 3D Ground-Motion Simulations

    NASA Astrophysics Data System (ADS)

    Frankel, A. D.; Stephenson, W. J.; Carver, D. L.; Williams, R. A.; Odum, J. K.; Rhea, S.

    2007-12-01

    We have produced probabilistic seismic hazard maps for Seattle using over 500 3D finite-difference simulations of ground motions from earthquakes in the Seattle fault zone, Cascadia subduction zone, South Whidbey Island fault, and background shallow and deep source areas. The maps depict 1 Hz response spectral accelerations with 2, 5, and 10% probabilities of being exceeded in 50 years. The simulations were used to generate site and source dependent amplification factors that are applied to rock-site attenuation relations. The maps incorporate essentially the same fault sources and earthquake recurrence times as the 2002 national seismic hazard maps. The simulations included basin surface waves and basin-edge focusing effects from a 3D model of the Seattle basin. The 3D velocity model was validated by modeling several earthquakes in the region, including the 2001 M6.8 Nisqually earthquake, that were recorded by our Seattle Urban Seismic Network and the Pacific Northwest Seismic Network. The simulations duplicate our observation that earthquakes from the south and southwest typically produce larger amplifications in the Seattle basin than earthquakes from other azimuths, relative to rock sites outside the basin. Finite-fault simulations were run for earthquakes along the Seattle fault zone, with magnitudes ranging from 6.6 to 7.2, so that the effects of rupture directivity were included. Nonlinear amplification factors for soft-soil sites of fill and alluvium were also applied in the maps. For the Cascadia subduction zone, 3D simulations with point sources at different locations along the zone were used to determine amplification factors across Seattle expected for great subduction-zone earthquakes. These new urban seismic hazard maps are based on determinations of hazard for 7236 sites with a spacing of 280 m. The maps show that the highest hazard locations for this frequency band (around 1 Hz) are soft-soil sites (fill and alluvium) within the Seattle basin and

  4. 3D reconstruction for sinusoidal motion based on different feature detection algorithms

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Jin; Deng, Huaxia; Yu, Liandong

    2015-02-01

    The dynamic testing of structures and components is an important area of research. Sensor-based methods for measuring vibration parameters have been studied extensively for years. With the rapid development of industrial high-speed cameras and computer hardware, stereo vision has become a focus of dynamic-testing research because of its advantages of non-contact, full-field, high-resolution and high-accuracy measurement. However, relatively little domestic research has addressed dynamic testing based on stereo vision, and few publications deal with the three-dimensional (3D) reconstruction of feature points under dynamic conditions. Obtaining the accurate movement of target objects is essential for the subsequent analysis. In this paper, an object undergoing sinusoidal motion is detected by stereo vision and the accuracy of different feature detection algorithms is investigated. Three different marks, including a dot, a square and a circle, are attached to the object, which is driven in sinusoidal motion by a vibration table. The speeded-up robust features (SURF) algorithm is then used to detect the dot, Harris corner detection is used to detect the square corners, and the Hough transform is used to locate the center of the circle. After the pixel coordinates of the feature points are obtained, the stereo calibration parameters are used to achieve 3D reconstruction through the triangulation principle. Trajectories along the specified direction are obtained according to the vibration frequency and the camera acquisition frequency. Finally, the reconstruction accuracy of the different feature detection algorithms is compared.
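
    As a rough illustration of the final reconstruction step (matched pixel coordinates from two calibrated cameras triangulated into 3D), the following sketch uses OpenCV; `P1`, `P2`, `pts_left` and `pts_right` are hypothetical calibrated projection matrices and matched image points, not data from the paper.

```python
import cv2
import numpy as np

def reconstruct_3d(P1, P2, pts_left, pts_right):
    """Triangulate matched feature points from a calibrated stereo pair.
    P1, P2: 3x4 projection matrices; pts_left, pts_right: 2xN float arrays
    of corresponding pixel coordinates."""
    pts4d = cv2.triangulatePoints(P1, P2, pts_left, pts_right)  # homogeneous 4xN
    pts3d = (pts4d[:3] / pts4d[3]).T                            # Nx3 Euclidean
    return pts3d
```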

  5. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    SciTech Connect

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  6. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGES Beta

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  7. Numerical Benchmark of 3D Ground Motion Simulation in the Alpine valley of Grenoble, France.

    NASA Astrophysics Data System (ADS)

    Tsuno, S.; Chaljub, E.; Cornou, C.; Bard, P.

    2006-12-01

    Thanks to the use of sophisticated numerical methods and to access to increasing computational resources, our predictions of strong ground motion become more and more realistic and need to be carefully compared. We report our effort of benchmarking numerical methods of ground motion simulation in the case of the valley of Grenoble in the French Alps. The Grenoble valley is typical of a moderate seismicity area where strong site effects occur. The benchmark consisted in computing the seismic response of the `Y'-shaped Grenoble valley to (i) two local earthquakes (Ml<=3) for which recordings were available; and (ii) two local hypothetical events (Mw=6) occurring on the so-called Belledonne Border Fault (BBF) [1]. A free-style prediction was also proposed, in which participants were allowed to vary the source and/or the model parameters and were asked to provide the resulting uncertainty in their estimation of ground motion. We received a total of 18 contributions from 14 different groups; 7 of these use 3D methods, among which 3 could handle surface topography; the other half comprises predictions based upon 1D (2 contributions), 2D (4 contributions) and empirical Green's function (EGF) (3 contributions) methods. The maximum frequency analysed ranged between 2.5 Hz for 3D calculations and 40 Hz for EGF predictions. We present a detailed comparison of the different predictions using raw indicators (e.g. peak values of ground velocity and acceleration, Fourier spectra, site over reference spectral ratios, ...) as well as sophisticated misfit criteria based upon previous works [2,3]. We further discuss the variability in estimating the importance of particular effects such as non-linear rheology, or surface topography. References: [1] Thouvenot F. et al., The Belledonne Border Fault: identification of an active seismic strike-slip fault in the western Alps, Geophys. J. Int., 155 (1), p. 174-192, 2003. [2] Anderson J., Quantitative measure of the goodness-of-fit of

  8. Methods for abdominal respiratory motion tracking.

    PubMed

    Spinczyk, Dominik; Karwan, Adam; Copik, Marcin

    2014-01-01

    Non-invasive surface registration methods have been developed to register and track breathing motions in a patient's abdomen and thorax. We evaluated several different registration methods, including marker tracking using a stereo camera, chessboard image projection, and abdominal point clouds. Our point cloud approach was based on a time-of-flight (ToF) sensor that tracked the abdominal surface. We tested different respiratory phases using additional markers as landmarks for the extension of the non-rigid Iterative Closest Point (ICP) algorithm to improve the matching of irregular meshes. Four variants for retrieving the correspondence data were implemented and compared. Our evaluation involved 9 healthy individuals (3 females and 6 males) with point clouds captured in opposite breathing phases (i.e., inhalation and exhalation). We measured three factors: surface distance, correspondence distance, and marker error. To evaluate different methods for computing the correspondence measurements, we defined the number of correspondences for every target point and the average correspondence assignment error of the points nearest the markers. PMID:24720494
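
    The non-rigid ICP extension used in the study is not spelled out in the abstract; the sketch below shows only the basic rigid ICP loop (nearest-neighbour correspondences plus a closed-form SVD fit) that such extensions build on. Function names and parameters are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_icp(source, target, iterations=30):
    """Minimal rigid ICP between two point clouds (Nx3 arrays): nearest-
    neighbour correspondences followed by a closed-form (Kabsch/SVD) fit."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)           # correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
    return src
```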

  9. 4D ultrasound speckle tracking of intra-fraction prostate motion: a phantom-based comparison with x-ray fiducial tracking using CyberKnife

    NASA Astrophysics Data System (ADS)

    O'Shea, Tuathan P.; Garcia, Leo J.; Rosser, Karen E.; Harris, Emma J.; Evans, Philip M.; Bamber, Jeffrey C.

    2014-04-01

    This study investigates the use of a mechanically-swept 3D ultrasound (3D-US) probe for soft-tissue displacement monitoring during prostate irradiation, with emphasis on quantifying the accuracy relative to CyberKnife® x-ray fiducial tracking. An US phantom, implanted with x-ray fiducial markers, was placed on a motion platform and translated in 3D using five real prostate motion traces acquired using the Calypso system. Motion traces were representative of all types of motion as classified by studying Calypso data for 22 patients. The phantom was imaged using a 3D swept linear-array probe (to mimic trans-perineal imaging) and, subsequently, the kV x-ray imaging system on CyberKnife. A 3D cross-correlation block-matching algorithm was used to track speckle in the ultrasound data. Fiducial and US data were each compared with known phantom displacement. Trans-perineal 3D-US imaging could track superior-inferior (SI) and anterior-posterior (AP) motion to ≤0.81 mm root-mean-square error (RMSE) at a 1.7 Hz volume rate. The maximum kV x-ray tracking RMSE was 0.74 mm, however the prostate motion was sampled at a significantly lower imaging rate (mean: 0.04 Hz). Initial elevational (right-left, RL) US displacement estimates showed reduced accuracy but could be improved (RMSE <2.0 mm) using a correlation threshold in the ultrasound tracking code to remove erroneous inter-volume displacement estimates. Mechanically-swept 3D-US can track the major components of intra-fraction prostate motion accurately but exhibits some limitations. The largest US RMSE was for elevational (RL) motion. For the AP and SI axes, accuracy was sub-millimetre. It may be feasible to track prostate motion in 2D only. 3D-US also has the potential to provide high tracking accuracy for all motion types. It would be advisable to use US in conjunction with a small (~2.0 mm) centre-of-mass displacement threshold in which case it would be possible to take full advantage of the accuracy and high imaging
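
    A 3D cross-correlation block-matching step of the kind described can be sketched as follows; the block size, search range, and variable names are assumptions for illustration, not the values used in the study, and the returned correlation score is what a threshold (as mentioned above) would be applied to.

```python
import numpy as np

def block_match_3d(vol_a, vol_b, center, block=(9, 9, 9), search=4):
    """Estimate the displacement of a small 3D block around `center`
    between two ultrasound volumes using normalized cross-correlation.
    `center` must lie far enough from the volume boundary for the
    search window to fit."""
    bz, by, bx = [b // 2 for b in block]
    z, y, x = center
    ref = vol_a[z-bz:z+bz+1, y-by:y+by+1, x-bx:x+bx+1]
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best, best_shift = -np.inf, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = vol_b[z+dz-bz:z+dz+bz+1,
                             y+dy-by:y+dy+by+1,
                             x+dx-bx:x+dx+bx+1]
                cand = (cand - cand.mean()) / (cand.std() + 1e-9)
                score = np.mean(ref * cand)
                if score > best:
                    best, best_shift = score, (dz, dy, dx)
    return best_shift, best
```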

  10. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background The increasing popularity of commercial movies showing three dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after the vision of 2D and 3D movies. Viewers reporting some sickness (SSQ total score>15) were 54.8% of the total sample after the 3D movie compared to 14.1% of the total sample after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times the baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie, history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie and show time. Conclusions Seeing 3D movies can increase ratings of symptoms of nausea, oculomotor strain and disorientation, especially in women with a susceptible visual-vestibular system. Confirmatory studies which include examination of clinical signs on viewers are needed to establish conclusive evidence of the effects of 3D vision on spectators. PMID:23418530

  11. 3D Modelling of Inaccessible Areas using UAV-based Aerial Photography and Structure from Motion

    NASA Astrophysics Data System (ADS)

    Obanawa, Hiroyuki; Hayakawa, Yuichi; Gomez, Christopher

    2014-05-01

    In hardly accessible areas, the collection of 3D point clouds using TLS (Terrestrial Laser Scanner) can be very challenging, while an airborne equivalent would not give a correct account of subvertical features and concave geometries like caves. To solve this problem, the authors have experimented with an aerial-photography-based SfM (Structure from Motion) technique on a 'peninsular rock' surrounded on three sides by the sea on the Pacific coast of eastern Japan. The research was carried out using a UAS (Unmanned Aerial System) combining a commercial small UAV (Unmanned Aerial Vehicle) with a compact camera. The UAV is a DJI PHANTOM: it has four rotors (quadcopter), a weight of 1000 g, a payload of 400 g and a maximum flight time of 15 minutes. The camera is a GoPro 'HERO3 Black Edition': resolution 12 million pixels; weight 74 g; and 0.5 s interval shooting. The 3D model has been constructed by digital photogrammetry using a commercial SfM software package, Agisoft PhotoScan Professional®, which can generate sparse and dense point clouds, from which polygonal models and orthophotographs can be calculated. Using the 'flight-log' and/or GCPs (Ground Control Points), the software can generate a digital surface model. As a result, high-resolution aerial orthophotographs and a 3D model were obtained. The results have shown that it was possible to survey the sea cliff and the wave-cut bench, which are unobservable from the land side. In detail, we could observe the complexity of the sea cliff, which is nearly vertical as a whole while slightly overhanging its thinner base. The wave-cut bench is nearly flat and develops extensively at the base of the cliff. Although there is some evidence of small rockfalls at the upper part of the cliff, there is no evidence of very recent activity, because no fallen rock lies on the wave-cut bench. This system has several merits: firstly, lower cost than existing measuring methods such as manned-flight survey and aerial laser

  12. Self-Motion Impairs Multiple-Object Tracking

    ERIC Educational Resources Information Center

    Thomas, Laura E.; Seiffert, Adriane E.

    2010-01-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement…

  13. Three-dimensional motion tracking for high-resolution optical microscopy, in vivo.

    PubMed

    Bakalar, M; Schroeder, J L; Pursley, R; Pohida, T J; Glancy, B; Taylor, J; Chess, D; Kellman, P; Xue, H; Balaban, R S

    2012-06-01

    When conducting optical imaging experiments in vivo, the signal to noise ratio and effective spatial and temporal resolution are fundamentally limited by physiological motion of the tissue. A three-dimensional (3D) motion tracking scheme, using a multiphoton excitation microscope with a resonant galvanometer (512 × 512 pixels at 33 frames s⁻¹), is described to overcome physiological motion in vivo. The use of commercially available graphical processing units permitted the rapid 3D cross-correlation of sequential volumes to detect displacements and adjust tissue position to track motions in near real-time. Motion phantom tests maintained micron resolution with displacement velocities of up to 200 μm min⁻¹, well within the drift observed in many biological tissues under physiologically relevant conditions. In vivo experiments on mouse skeletal muscle using the capillary vasculature with luminal dye as a displacement reference revealed an effective and robust method of tracking tissue motion to enable (1) signal averaging over time without compromising resolution, and (2) tracking of cellular regions during a physiological perturbation. PMID:22582797
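
    One common way to implement the rapid 3D cross-correlation of sequential volumes is via the FFT; the sketch below estimates a bulk displacement from the correlation peak. It is a CPU-only NumPy illustration (the paper uses GPUs), and the variable names and sign convention are assumptions rather than the authors' implementation.

```python
import numpy as np

def volume_shift(vol_ref, vol_new):
    """Estimate the bulk 3D displacement of vol_new relative to vol_ref by
    locating the peak of their FFT-based cross-correlation."""
    F1 = np.fft.fftn(vol_ref - vol_ref.mean())
    F2 = np.fft.fftn(vol_new - vol_new.mean())
    corr = np.fft.ifftn(np.conj(F1) * F2).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices so shifts can be negative
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)
```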

  14. Lung tumor tracking, trajectory reconstruction, and motion artifact removal using rotational cone-beam projections

    NASA Astrophysics Data System (ADS)

    Lewis, John Henry

    Management of lung tumor motion is a challenging and important problem for modern, highly conformal radiotherapy. Poorly managed tumor motion can lead to imaging artifacts, poor target coverage, and unnecessarily high dose to normal tissues. The goals of this dissertation are to develop a real-time localization algorithm that is applicable to rotational cone-beam projections acquired during regular (~60 seconds) cone-beam computed tomography (CBCT) scans, and to use these tracking results to reconstruct a tumor's trajectory, shape and size immediately prior to treatment. Direct tumor tracking is performed via a multiple template matching algorithm where templates are derived from digitally reconstructed radiographs (DRRs) generated from four-dimensional computed tomography (4DCT). Three-dimensional (3D) tumor trajectories are reconstructed by binning two-dimensional (2D) tracking results according to their corresponding respiratory phases. Within each phase bin a point is calculated approximating the 3D tumor position, resulting in a 3D phase-binned trajectory. These 3D trajectories are used to construct motion blurring functions which are in turn used to remove motion blurring artifacts from reconstructed CBCT volumes with a deconvolution algorithm. Finally, the initial direct tracking algorithm is combined with diaphragm-based tracking to develop a more robust "combined" tracking algorithm. Respiratory motion phantoms (digital and physical), and example patient cases were used to test each technique. Direct tumor tracking performed well for both phantom cases, with sub-millimeter root mean square error (e_rms) in the axial and tangential imager dimensions. In patient studies the algorithm performed well for many angles, but exhibited large errors for some projections. Accurate 3D trajectories were successfully reconstructed for patients and phantoms. Errors in reconstructed trajectories were smaller than the errors in the direct tracking results in all cases. The

  15. A Motion Tracking and Sensor Fusion Module for Medical Simulation.

    PubMed

    Shen, Yunhe; Wu, Fan; Tseng, Kuo-Shih; Ye, Ding; Raymond, John; Konety, Badrinath; Sweet, Robert

    2016-01-01

    Here we introduce a motion tracking or navigation module for medical simulation systems. Our main contribution is a sensor fusion method for proximity or distance sensors integrated with inertial measurement unit (IMU). Since IMU rotation tracking has been widely studied, we focus on the position or trajectory tracking of the instrument moving freely within a given boundary. In our experiments, we have found that this module reliably tracks instrument motion. PMID:27046606
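
    The abstract does not specify the fusion algorithm, so the sketch below shows a generic 1D Kalman-filter fusion of IMU acceleration with an absolute distance reading, purely to illustrate how a range sensor can bound the drift of inertial position tracking; all symbols, noise parameters and the state layout are assumptions, not the authors' module.

```python
import numpy as np

def fuse_position(accel, dist_meas, dt, q=0.05, r=4.0):
    """Toy 1D Kalman filter: predict position/velocity from IMU acceleration
    and correct the drift whenever an absolute distance reading arrives.
    `dist_meas` entries may be None when no range reading is available."""
    x = np.zeros(2)                       # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    out = []
    for a, z in zip(accel, dist_meas):
        x = F @ x + B * a                 # predict with IMU acceleration
        P = F @ P @ F.T + Q
        if z is not None:                 # correct with the distance sensor
            y = z - H @ x
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```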

  16. Proton spin tracking with symplectic integration of orbit motion

    SciTech Connect

    Luo, Y.; Dutheil, Y.; Huang, H.; Meot, F.; Ranjbar, V.

    2015-05-03

    Symplectic integration has been adopted for orbital motion tracking in the code SimTrack. SimTrack has been extensively used for dynamic aperture calculation with beam-beam interaction for the Relativistic Heavy Ion Collider (RHIC). Recently, proton spin tracking has been implemented on top of the symplectic orbital motion in this code. In this article, we explain the implementation of spin motion based on the Thomas-BMT equation, and the benchmarking against other spin tracking codes currently used for RHIC. Examples of calculating the spin closed orbit and spin tunes are also presented.
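
    As a toy illustration of why symplectic schemes are preferred for long-term orbit tracking, the leapfrog (kick-drift-kick) integrator below advances a particle under a linear restoring force without the secular energy drift of naive Euler stepping. This is not SimTrack's algorithm; the function, its arguments, and the simplified force model are assumptions for illustration.

```python
import numpy as np

def leapfrog(x0, p0, k, n_turns, dt):
    """Kick-drift-kick (leapfrog) symplectic integration of p' = -k x,
    a toy stand-in for turn-by-turn transverse orbit tracking."""
    x, p = float(x0), float(p0)
    traj = []
    for _ in range(n_turns):
        p -= 0.5 * dt * k * x     # half kick
        x += dt * p               # drift
        p -= 0.5 * dt * k * x     # half kick
        traj.append((x, p))
    return np.array(traj)
```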

  17. 3D Joint Speaker Position and Orientation Tracking with Particle Filters

    PubMed Central

    Segura, Carlos; Hernando, Javier

    2014-01-01

    This paper addresses the problem of three-dimensional speaker orientation estimation in a smart-room environment equipped with microphone arrays. A Bayesian approach is proposed to jointly track the location and orientation of an active speaker. The main motivation is that knowledge of the speaker orientation may yield an increased localization performance and vice versa. Assuming that the sound produced by the speaker originates from the mouth, the center of the head is deduced based on the estimated head orientation. Moreover, the elevation angle of the head of the speaker can be partly inferred from the fast vertical movements of the computed mouth location. In order to test the performance of the proposed algorithm, a new multimodal dataset has been recorded for this purpose, where the corresponding 3D orientation angles are acquired by an inertial measurement unit (IMU) comprising accelerometers, magnetometers and gyroscopes in the three axes. The proposed joint algorithm outperforms a two-step approach in terms of localization and orientation angle precision, confirming the superiority of the joint approach. PMID:24481230
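
    A minimal bootstrap particle filter for joint position and orientation tracking might look like the sketch below. The acoustic observation model is abstracted into a user-supplied `likelihood` callback, and the state layout, noise levels, and multinomial resampling are illustrative assumptions rather than the authors' design.

```python
import numpy as np

def particle_filter(likelihood, n_steps, n_particles=500, motion_std=(0.05, 0.1)):
    """Bootstrap particle filter jointly tracking 3D position and azimuth.
    `likelihood(particles, t)` must return one weight per particle (e.g.
    derived from microphone-array observations); it is a placeholder here."""
    rng = np.random.default_rng(0)
    parts = np.zeros((n_particles, 4))        # state: x, y, z, azimuth
    estimates = []
    for t in range(n_steps):
        # predict: random-walk motion model on position and orientation
        parts[:, :3] += rng.normal(0, motion_std[0], (n_particles, 3))
        parts[:, 3] += rng.normal(0, motion_std[1], n_particles)
        # update: weight particles by the observation model
        w = np.clip(likelihood(parts, t), 1e-12, None)
        w /= w.sum()
        estimates.append(w @ parts)           # weighted mean state
        # resample (multinomial; systematic resampling would reduce variance)
        idx = rng.choice(n_particles, n_particles, p=w)
        parts = parts[idx]
    return np.array(estimates)
```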

  18. 3D joint speaker position and orientation tracking with particle filters.

    PubMed

    Segura, Carlos; Hernando, Javier

    2014-01-01

    This paper addresses the problem of three-dimensional speaker orientation estimation in a smart-room environment equipped with microphone arrays. A Bayesian approach is proposed to jointly track the location and orientation of an active speaker. The main motivation is that knowledge of the speaker orientation may yield an increased localization performance and vice versa. Assuming that the sound produced by the speaker originates from the mouth, the center of the head is deduced based on the estimated head orientation. Moreover, the elevation angle of the head of the speaker can be partly inferred from the fast vertical movements of the computed mouth location. In order to test the performance of the proposed algorithm, a new multimodal dataset has been recorded for this purpose, where the corresponding 3D orientation angles are acquired by an inertial measurement unit (IMU) comprising accelerometers, magnetometers and gyroscopes in the three axes. The proposed joint algorithm outperforms a two-step approach in terms of localization and orientation angle precision, confirming the superiority of the joint approach. PMID:24481230

  19. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography.

    PubMed

    Carrasco-Zevallos, Oscar M; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 × 15 mm achieved diffraction-limited imaging over a lateral tracking range of ±2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
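
    Pupil-centroid detection of the kind used for the tracking signal can be approximated by thresholding the dark pupil in the oculography frame and taking the centroid of the largest blob, as in the OpenCV sketch below; the threshold value, morphology step, and function name are assumptions, not the published pipeline.

```python
import cv2
import numpy as np

def pupil_centroid(gray_frame, thresh=40):
    """Locate the pupil centroid by thresholding the dark pupil region and
    taking the centroid of the largest remaining blob (OpenCV 4 API)."""
    _, mask = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) in pixels
```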

  20. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 × 15 mm achieved diffraction-limited imaging over a lateral tracking range of ±2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800

  1. Experimental evaluations of the accuracy of 3D and 4D planning in robotic tracking stereotactic body radiotherapy for lung cancers

    SciTech Connect

    Chan, Mark K. H.; Kwong, Dora L. W.; Ng, Sherry C. Y.; Tong, Anthony S. M.; Tam, Eric K. W.

    2013-04-15

    Purpose: Due to the complexity of 4D target tracking radiotherapy, the accuracy of this treatment strategy should be experimentally validated against established standard 3D technique. This work compared the accuracy of 3D and 4D dose calculations in respiration tracking stereotactic body radiotherapy (SBRT). Methods: Using the 4D planning module of the CyberKnife treatment planning system, treatment plans for a moving target and a static off-target cord structure were created on different four-dimensional computed tomography (4D-CT) datasets of a thorax phantom moving in different ranges. The 4D planning system used B-splines deformable image registrations (DIR) to accumulate dose distributions calculated on different breathing geometries, each corresponding to a static 3D-CT image of the 4D-CT dataset, onto a reference image to compose a 4D dose distribution. For each motion, 4D optimization was performed to generate a 4D treatment plan of the moving target. For comparison with standard 3D planning, each 4D plan was copied to the reference end-exhale images and a standard 3D dose calculation was followed. Treatment plans of the off-target structure were first obtained by standard 3D optimization on the end-exhale images. Subsequently, they were applied to recalculate the 4D dose distributions using DIRs. All dose distributions that were initially obtained using the ray-tracing algorithm with equivalent path-length heterogeneity correction (3D_EPL and 4D_EPL) were recalculated by a Monte Carlo algorithm (3D_MC and 4D_MC) to further investigate the effects of dose calculation algorithms. The calculated 3D_EPL, 3D_MC, 4D_EPL, and 4D_MC dose distributions were compared to measurements by Gafchromic EBT2 films in the axial and coronal planes of the moving target object, and the coronal plane for the static off-target object, based on the γ metric at 5%/3 mm criteria (γ_5%/3mm). Treatment plans were considered

  2. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions

    NASA Astrophysics Data System (ADS)

    Wiersma, R. D.; Riaz, N.; Dieterich, Sonja; Suh, Yelin; Xing, L.

    2009-01-01

    The integration of onboard kV imaging together with a MV electronic portal imaging device (EPID) on linear accelerators (LINAC) can provide an easy to implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated with simultaneous imaging by both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data of five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have ≤1 mm root mean squared error (RMSE) in all three spatial directions. In addition to increasing the robustness of
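
    The abstract does not give the exact form of the moving correlation model; a simple sliding-window linear least-squares version, refit as new simultaneous MV/kV marker observations arrive and used to predict the marker position on the interrupted imager, could look like the sketch below. The class name, window length, and linear model form are assumptions for illustration.

```python
import numpy as np

class MovingCorrelationModel:
    """Sliding-window linear model relating the marker position seen on the
    available imager to the position on the interrupted imager, refit as new
    simultaneous observations arrive. Needs a few updates before predicting."""
    def __init__(self, window=50):
        self.window = window
        self.kv_hist, self.mv_hist = [], []

    def update(self, kv_pos, mv_pos):
        self.kv_hist.append(np.asarray(kv_pos, dtype=float))
        self.mv_hist.append(np.asarray(mv_pos, dtype=float))
        self.kv_hist = self.kv_hist[-self.window:]
        self.mv_hist = self.mv_hist[-self.window:]

    def predict_mv(self, kv_pos):
        # least-squares fit of mv = [kv, 1] @ coef on the recent history
        X = np.hstack([np.asarray(self.kv_hist),
                       np.ones((len(self.kv_hist), 1))])
        Y = np.asarray(self.mv_hist)
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return np.append(np.asarray(kv_pos, dtype=float), 1.0) @ coef
```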

  3. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    SciTech Connect

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk

    2015-03-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of less than 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators.

  4. Mapping 3D Strains with Ultrasound Speckle Tracking: Method Validation and Initial Results in Porcine Scleral Inflation.

    PubMed

    Cruz Perez, Benjamin; Pavlatos, Elias; Morris, Hugh J; Chen, Hong; Pan, Xueliang; Hart, Richard T; Liu, Jun

    2016-07-01

    This study aimed to develop and validate a high frequency ultrasound method for measuring distributive, 3D strains in the sclera during elevations of intraocular pressure. A 3D cross-correlation based speckle-tracking algorithm was implemented to compute the 3D displacement vector and strain tensor at each tracking point. Simulated ultrasound radiofrequency data from a sclera-like structure at undeformed and deformed states with known strains were used to evaluate the accuracy and signal-to-noise ratio (SNR) of strain estimation. An experimental high frequency ultrasound (55 MHz) system was built to acquire 3D scans of porcine eyes inflated from 15 to 17 and then 19 mmHg. Simulations confirmed good strain estimation accuracy and SNR (e.g., the axial strains had less than 4.5% error with SNRs greater than 16.5 for strains from 0.005 to 0.05). Experimental data in porcine eyes showed increasing tensile, compressive, and shear strains in the posterior sclera during inflation, with a volume ratio close to one suggesting near-incompressibility. This study established the feasibility of using high frequency ultrasound speckle tracking for measuring 3D tissue strains and its potential to characterize physiological deformations in the posterior eye. PMID:26563101
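
    Once the 3D displacement vector field has been estimated by speckle tracking, the (infinitesimal) strain tensor can be obtained from its spatial gradients; the sketch below shows this step with NumPy under the assumption of a regular voxel grid, and is not the authors' code.

```python
import numpy as np

def strain_tensor(u, spacing):
    """Infinitesimal 3D strain tensor field from a displacement field
    u[c, z, y, x] (c = 0, 1, 2 displacement components) sampled on a
    regular grid with voxel `spacing` = (dz, dy, dx)."""
    grads = [np.gradient(u[c], *spacing) for c in range(3)]  # du_c/d(z, y, x)
    eps = np.empty((3, 3) + u.shape[1:])
    for i in range(3):
        for j in range(3):
            eps[i, j] = 0.5 * (grads[i][j] + grads[j][i])
    return eps
```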

  5. A novel 3D micron-scale DPTV (Defocused Particle Tracking Velocimetry) and its applications in microfluidic devices

    NASA Astrophysics Data System (ADS)

    Roberts, John

    2005-11-01

    The rapid advancements in micro/nano biotechnology demand quantitative tools for characterizing microfluidic flows in lab-on-a-chip applications, validation of computational results for fully 3D flows in complex micro-devices, and efficient observation of cellular dynamics in 3D. We present a novel 3D micron-scale DPTV (defocused particle tracking velocimetry) that is capable of mapping out 3D Lagrangian, as well as 3D Eulerian velocity flow fields at sub-micron resolution and with one camera. The main part of the imaging system is an epi-fluorescent microscope (Olympus IX 51), and the seeding particles are fluorescent particles with diameter range 300nm - 10um. A software package has been developed for identifying (x,y,z,t) coordinates of the particles using the defocused images. Using the imaging system, we successfully mapped the pressure driven flow fields in microfluidic channels. In particular, we measured the Laglangian flow fields in a microfluidic channel with a herring bone pattern at the bottom, the later is used to enhance fluid mixing in lateral directions. The 3D particle tracks revealed the flow structure that has only been seen in numerical computation. This work is supported by the National Science Foundation (CTS - 0514443), the Nanobiotechnology Center at Cornell, and The New York State Center for Life Science Enterprise.

  6. Tracking of cracks in bridges using GPR: a 3D approach

    NASA Astrophysics Data System (ADS)

    Benedetto, A.

    2012-04-01

    Corrosion associated with reinforcing bars is the most significant contributor to bridge deficiencies. The corrosion is usually caused by moisture and chloride ion exposure. In particular, corrosion produces FeO, Fe2O3, Fe3O4 and other oxides along the reinforcement bars. The reinforcing bars attacked by corrosion yield expansive corrosion products. These oxidation products occupy a larger volume than the original intact steel, and the internal expansive stresses lead to cracking and debonding. There are some conventional inspection methods for the detection of reinforcing bar corrosion, but they can be invasive and destructive, are often laborious, require lane closures, and make any quantification of corrosion difficult or unreliable. For these reasons, bridge engineers increasingly prefer to use the Ground Penetrating Radar (GPR) technique. In this work a novel numerical approach for three-dimensional tracking and mapping of cracks in bridges is proposed. The work starts from some interesting results based on the use of the 3D imaging technique to improve the potential of GPR to detect voids, cracks or buried objects. The numerical approach has been tested on data acquired on several bridges using a pulse GPR system specifically designed for bridge deck and pavement inspection, called RIS Hi Bright. The equipment integrates two arrays of Ultra Wide Band ground-coupled antennas with a main working frequency of 2 GHz. The two arrays within the RIS Hi Bright use antennas arranged with different polarizations. One array includes sensors with parallel polarization with respect to the scanning direction (VV array); the other has sensors in orthogonal polarization (HH array). Overall the system collects 16 profiles within a single scan (8 HH + 8 VV). The cracks, often associated with increased moisture and higher values of the dielectric constant, produce a non-negligible increase of the signal amplitude. Following this, the algorithm

  7. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1) the tumor localization errors and the intensity differences in the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2) the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
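
    The PCA step, deriving a low-dimensional motion model from the registration displacement vector fields (DVFs), can be sketched as below. The flattened-DVF data layout and number of modes are assumptions; the iterative optimization of the coefficients against the measured cone-beam projections is only indicated by the synthesis function, not implemented.

```python
import numpy as np

def build_pca_motion_model(dvfs, n_modes=3):
    """PCA motion model from deformable-registration displacement vector
    fields, one flattened DVF per respiratory phase (n_phases, n_voxels*3)."""
    X = np.asarray(dvfs)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                       # principal motion components
    weights = (X - mean) @ modes.T             # per-phase coefficients
    return mean, modes, weights

def synthesize_dvf(mean, modes, coeffs):
    """Reconstruct a DVF for arbitrary PCA coefficients; the coefficients are
    what would be optimized against the measured cone-beam projections."""
    return mean + coeffs @ modes
```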

  8. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and the open source project OpenSfM, to assist in this process; however, it is well known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAV) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.

  9. Four-dimensional IMRT treatment planning using a DMLC motion-tracking algorithm

    NASA Astrophysics Data System (ADS)

    Suh, Yelin; Sawant, Amit; Venkat, Raghu; Keall, Paul J.

    2009-06-01

    The purpose of this study is to develop a four-dimensional (4D) intensity-modulated radiation therapy (IMRT) treatment-planning method by modifying and applying a dynamic multileaf collimator (DMLC) motion-tracking algorithm. The 4D radiotherapy treatment scenario investigated is to obtain a 4D treatment plan based on a 4D computed tomography (CT) planning scan and to have the delivery flexible enough to account for changes in tumor position during treatment delivery. For each of 4D CT planning scans from 12 lung cancer patients, a reference phase plan was created; with its MLC leaf positions and three-dimensional (3D) tumor motion, the DMLC motion-tracking algorithm generated MLC leaf sequences for the plans of other respiratory phases. Then, a deformable dose-summed 4D plan was created by merging the leaf sequences of individual phase plans. Individual phase plans, as well as the deformable dose-summed 4D plan, are similar for each patient, indicating that this method is dosimetrically robust to the variations of fractional time spent in respiratory phases on a given 4D CT planning scan. The 4D IMRT treatment-planning method utilizing the DMLC motion-tracking algorithm explicitly accounts for 3D tumor motion and thus hysteresis and nonlinear motion, and is deliverable on a linear accelerator.

  10. Rapid, High-Throughput Tracking of Bacterial Motility in 3D via Phase-Contrast Holographic Video Microscopy

    PubMed Central

    Cheong, Fook Chiong; Wong, Chui Ching; Gao, YunFeng; Nai, Mui Hoon; Cui, Yidan; Park, Sungsu; Kenney, Linda J.; Lim, Chwee Teck

    2015-01-01

    Tracking fast-swimming bacteria in three dimensions can be extremely challenging with current optical techniques and a microscopic approach that can rapidly acquire volumetric information is required. Here, we introduce phase-contrast holographic video microscopy as a solution for the simultaneous tracking of multiple fast moving cells in three dimensions. This technique uses interference patterns formed between the scattered and the incident field to infer the three-dimensional (3D) position and size of bacteria. Using this optical approach, motility dynamics of multiple bacteria in three dimensions, such as speed and turn angles, can be obtained within minutes. We demonstrated the feasibility of this method by effectively tracking multiple bacteria species, including Escherichia coli, Agrobacterium tumefaciens, and Pseudomonas aeruginosa. In addition, we combined our fast 3D imaging technique with a microfluidic device to present an example of a drug/chemical assay to study effects on bacterial motility. PMID:25762336
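
    Extracting motility statistics such as speed and turn angle from the reconstructed 3D tracks is straightforward; a NumPy sketch (with an assumed fixed frame interval `dt` and hypothetical function name) is given below.

```python
import numpy as np

def swim_statistics(track, dt):
    """Per-step speed and turn angle from a 3D trajectory (Nx3 array of
    positions) sampled every `dt` seconds."""
    steps = np.diff(track, axis=0)
    speed = np.linalg.norm(steps, axis=1) / dt
    a, b = steps[:-1], steps[1:]
    cosang = np.einsum('ij,ij->i', a, b) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
    turn_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return speed, turn_deg
```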

  11. A 3D MR-acquisition scheme for nonrigid bulk motion correction in simultaneous PET-MR

    SciTech Connect

    Kolbitsch, Christoph; Prieto, Claudia; Schaeffter, Tobias; Tsoumpas, Charalampos

    2014-08-15

    Purpose: Positron emission tomography (PET) is a highly sensitive medical imaging technique commonly used to detect and assess tumor lesions. Magnetic resonance imaging (MRI) provides high resolution anatomical images with different contrasts and a range of additional information important for cancer diagnosis. Recently, simultaneous PET-MR systems have been released with the promise to provide complementary information from both modalities in a single examination. Due to long scan times, subject nonrigid bulk motion, i.e., changes of the patient's position on the scanner table leading to nonrigid changes of the patient's anatomy, during data acquisition can impair image quality and tracer uptake quantification. A 3D MR-acquisition scheme is proposed to detect and correct for nonrigid bulk motion in simultaneously acquired PET-MR data. Methods: A respiratory navigated three dimensional (3D) MR-acquisition with Radial Phase Encoding (RPE) is used to obtain T1- and T2-weighted data with an isotropic resolution of 1.5 mm. Healthy volunteers are asked to move the abdomen two to three times during data acquisition, resulting in 19 movements overall at arbitrary time points. The acquisition scheme is used to retrospectively reconstruct dynamic 3D MR images with different temporal resolutions. Nonrigid bulk motion is detected and corrected in this image data. A simultaneous PET acquisition is simulated and the effect of motion correction is assessed on image quality and standardized uptake values (SUV) for lesions with different diameters. Results: Six respiratory gated 3D data sets with T1- and T2-weighted contrast have been obtained in healthy volunteers. All bulk motion shifts have successfully been detected and motion fields describing the transformation between the different motion states could be obtained with an accuracy of 1.71 ± 0.29 mm. The PET simulation showed errors of up to 67% in measured SUV due to bulk motion which could be reduced to less than

  12. Verification and validation of ShipMo3D ship motion predictions in the time and frequency domains

    NASA Astrophysics Data System (ADS)

    McTaggart, Kevin A.

    2011-03-01

    This paper compares frequency domain and time domain predictions from the ShipMo3D ship motion library with observed motions from model tests and sea trials. ShipMo3D evaluates hull radiation and diffraction forces using the frequency domain Green function for zero forward speed, which is a suitable approach for ships travelling at moderate speed (e.g., Froude numbers up to 0.4). Numerical predictions give generally good agreement with experiments. Frequency domain and linear time domain predictions are almost identical. Evaluation of nonlinear buoyancy and incident wave forces using the instantaneous wetted hull surface gives no improvement in numerical predictions. Consistent prediction of roll motions remains a challenge for seakeeping codes due to the associated viscous effects.

  13. 3D motion artifact compensation in CT image with depth camera

    NASA Astrophysics Data System (ADS)

    Ko, Youngjun; Baek, Jongduk; Shim, Hyunjung

    2015-02-01

    Computed tomography (CT) is a medical imaging technology that projects computer-processed X-rays to acquire tomographic images, or slices, of a specific organ of the body. Motion artifacts caused by patient motion are a common problem in CT systems and may introduce undesirable artifacts in CT images. This paper analyzes the critical problems in motion artifacts and proposes a new CT system for motion artifact compensation. We employ depth cameras to capture the patient motion and account for it in the CT image reconstruction. In this way, we achieve a significant improvement in motion artifact compensation, which is not possible with previous techniques.

  14. A Framework for 3D Model-Based Visual Tracking Using a GPU-Accelerated Particle Filter.

    PubMed

    Brown, J A; Capson, D W

    2012-01-01

    A novel framework for acceleration of particle filtering approaches to 3D model-based, markerless visual tracking in monocular video is described. Specifically, we present a methodology for partitioning and mapping the computationally expensive weight-update stage of a particle filter to a graphics processing unit (GPU) to achieve particle- and pixel-level parallelism. Nvidia CUDA and Direct3D are employed to harness the massively parallel computational power of modern GPUs for simulation (3D model rendering) and evaluation (segmentation, feature extraction, and weight calculation) of hundreds of particles at high speeds. The proposed framework addresses the computational intensity that is intrinsic to all particle filter approaches, including those that have been modified to minimize the number of particles required for a particular task. Performance and tracking quality results for rigid object and articulated hand tracking experiments demonstrate markerless, model-based visual tracking on consumer-grade graphics hardware with pixel-level accuracy up to 95 percent at 60+ frames per second. The framework accelerates particle evaluation up to 49 times over a comparable CPU-only implementation, providing an increased particle count while maintaining real-time frame rates. PMID:21301027

  15. Dynamics and cortical distribution of neural responses to 2D and 3D motion in human

    PubMed Central

    McKee, Suzanne P.; Norcia, Anthony M.

    2013-01-01

    The perception of motion-in-depth is important for avoiding collisions and for the control of vergence eye-movements and other motor actions. Previous psychophysical studies have suggested that sensitivity to motion-in-depth has a lower temporal processing limit than the perception of lateral motion. The present study used functional MRI-informed EEG source-imaging to study the spatiotemporal properties of the responses to lateral motion and motion-in-depth in human visual cortex. Lateral motion and motion-in-depth displays comprised stimuli whose only difference was interocular phase: monocular oscillatory motion was either in-phase in the two eyes (lateral motion) or in antiphase (motion-in-depth). Spectral analysis was used to break the steady-state visually evoked potentials responses down into even and odd harmonic components within five functionally defined regions of interest: V1, V4, lateral occipital complex, V3A, and hMT+. We also characterized the responses within two anatomically defined regions: the inferior and superior parietal cortex. Even harmonic components dominated the evoked responses and were a factor of approximately two larger for lateral motion than motion-in-depth. These responses were slower for motion-in-depth and were largely independent of absolute disparity. In each of our regions of interest, responses at odd-harmonics were relatively small, but were larger for motion-in-depth than lateral motion, especially in parietal cortex, and depended on absolute disparity. Taken together, our results suggest a plausible neural basis for reduced psychophysical sensitivity to rapid motion-in-depth. PMID:24198326

  16. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson’s Disease

    PubMed Central

    Piro, Neltje E.; Piro, Lennart K.; Kassubek, Jan; Blechschmidt-Trapp, Ronald A.

    2016-01-01

    Remote monitoring of Parkinson’s Disease (PD) patients with inertia sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: the automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of him. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recording of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the ratings based on video and (b) the automatically classified UPDRS is 0.48 and with (c) the 3D avatar it is 0.47. The 3D avatar is similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400
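
    The classification step can be illustrated with a decision tree on hand-crafted motion features. The study uses a J48 (C4.5) tree; the scikit-learn CART tree below is only a stand-in, and the feature matrix and labels are placeholders, not the study data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per pronation-supination recording,
# columns such as mean angular velocity, amplitude, frequency, variability.
X = np.random.rand(60, 4)          # placeholder features from MARG sensors
y = np.random.randint(0, 5, 60)    # placeholder UPDRS item scores (0-4)

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```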

  17. Catheter tracking via online learning for dynamic motion compensation in transcatheter aortic valve implantation.

    PubMed

    Wang, Peng; Zheng, Yefeng; John, Matthias; Comaniciu, Dorin

    2012-01-01

    Dynamic overlay of 3D models onto 2D X-ray images has important applications in image guided interventions. In this paper, we present a novel catheter tracking method for motion compensation in Transcatheter Aortic Valve Implantation (TAVI). To address challenges such as catheter shape and appearance changes, occlusions, and distractions from cluttered backgrounds, we present an adaptive linear discriminant learning method to build a measurement model online to distinguish catheters from the background. An analytic solution is developed to effectively and efficiently update the discriminant model and to minimize the classification errors between the tracked object and the background. The online learned discriminant model is further combined with an offline learned detector and robust template matching in a Bayesian tracking framework. Quantitative evaluations demonstrate the advantages of this method over current state-of-the-art tracking methods in tracking catheters for clinical applications. PMID:23286027

  18. Nonrigid motion correction in 3D using autofocusing with localized linear translations.

    PubMed

    Cheng, Joseph Y; Alley, Marcus T; Cunningham, Charles H; Vasanawala, Shreyas S; Pauly, John M; Lustig, Michael

    2012-12-01

    MR scans are sensitive to motion effects due to the scan duration. To properly suppress artifacts from nonrigid body motion, complex models with elements such as translation, rotation, shear, and scaling have been incorporated into the reconstruction pipeline. However, these techniques are computationally intensive and difficult to implement for online reconstruction. On a sufficiently small spatial scale, the different types of motion can be well approximated as simple linear translations. This formulation allows for a practical autofocusing algorithm that locally minimizes a given motion metric--more specifically, the proposed localized gradient-entropy metric. To reduce the vast search space for an optimal solution, possible motion paths are limited to the motion measured from multichannel navigator data. The novel navigation strategy is based on the so-called "Butterfly" navigators, which are modifications of the spin-warp sequence that provides intrinsic translational motion information with negligible overhead. With a 32-channel abdominal coil, sufficient number of motion measurements were found to approximate possible linear motion paths for every image voxel. The correction scheme was applied to free-breathing abdominal patient studies. In these scans, a reduction in artifacts from complex, nonrigid motion was observed. PMID:22307933
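
    A localized gradient-entropy metric of the general kind described can be written as below; the exact definition used in the paper may differ, so this is an illustrative sketch in which lower entropy is taken to indicate a sharper, better motion-corrected patch, and the autofocus search would evaluate it over the candidate translations.

```python
import numpy as np

def gradient_entropy(region):
    """Gradient entropy of a local image region: the autofocus search picks
    the candidate translation that minimizes this metric for the region."""
    gy, gx = np.gradient(np.abs(region).astype(float))
    g = np.sqrt(gx**2 + gy**2).ravel()
    p = g / (g.sum() + 1e-12)
    p = p[p > 0]
    return -np.sum(p * np.log(p))
```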

  19. Real-time circumferential mapping catheter tracking for motion compensation in atrial fibrillation ablation procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Bourier, Felix; Wimmer, Andreas; Koch, Martin; Kiraly, Atilla; Liao, Rui; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert

    2012-02-01

    Atrial fibrillation (AFib) has been identified as a major cause of stroke. Radiofrequency catheter ablation has become an increasingly important treatment option, especially when drug therapy fails. Navigation under X-ray can be enhanced by using augmented fluoroscopy. It renders overlay images from pre-operative 3-D data sets which are then fused with X-ray images to provide more details about the underlying soft-tissue anatomy. Unfortunately, these fluoroscopic overlay images are compromised by respiratory and cardiac motion. Various methods to deal with motion have been proposed. To meet clinical demands, they have to be fast. Methods providing a processing frame rate of 3 frames-per-second (fps) are considered suitable for interventional electrophysiology catheter procedures if an acquisition frame rate of 2 fps is used. Unfortunately, when working at a processing rate of 3 fps, the delay until the actual motion compensated image can be displayed is about 300 ms. More recent algorithms can achieve frame rates of up to 20 fps, which reduces the lag to 50 ms. By using a novel approach involving a 3-D catheter model, catheter segmentation and a distance transform, we can speed up motion compensation to 25 fps which results in a display delay of only 40 ms on a standard workstation for medical applications. Our method uses a constrained 2-D/3-D registration to perform catheter tracking, and it obtained a 2-D tracking error of 0.61 mm.

  20. Shape measurement by a multi-view methodology based on the remote tracking of a 3D optical scanner

    NASA Astrophysics Data System (ADS)

    Barone, Sandro; Paoli, Alessandro; Viviano Razionale, Armando

    2012-03-01

    Full-field optical techniques can be reliably used for 3D measurements of complex shapes by multi-view processes, which require the computation of transformation parameters relating different views to a common reference system. Although several multi-view approaches have been proposed, the alignment process is still the crucial step of shape reconstruction. In this paper, a methodology to automatically align 3D views has been developed by integrating a stereo vision system and a full-field optical scanner. In particular, the stereo vision system is used to remotely track the optical scanner within a working volume. The tracking system uses stereo images to detect the 3D coordinates of retro-reflective infrared markers rigidly connected to the scanner. Stereo correspondences are established by a robust methodology based on combining the epipolar geometry with an image spatial transformation constraint. The proposed methodology has been validated by experimental tests regarding both the evaluation of the measurement accuracy and the 3D reconstruction of an industrial shape.

  1. Multi-modality fusion of CT, 3D ultrasound, and tracked strain images for breast irradiation planning

    NASA Astrophysics Data System (ADS)

    Foroughi, Pezhman; Csoma, Csaba; Rivaz, Hassan; Fichtinger, Gabor; Zellars, Richard; Hager, Gregory; Boctor, Emad

    2009-02-01

    Breast irradiation significantly reduces the risk of recurrence of cancer. There is growing evidence suggesting that irradiation of only the involved area of the breast, partial breast irradiation (PBI), is as effective as whole breast irradiation. Benefits of PBI include shortened treatment time and perhaps fewer side effects, as less tissue is treated. However, these benefits cannot be realized without precise and accurate localization of the lumpectomy cavity. Several studies have shown that accurate delineation of the cavity in CT scans is very challenging and that the delineated volumes differ dramatically over time and among users. In this paper, we propose utilizing 3D ultrasound (3D-US) and tracked strain images as complementary modalities to reduce uncertainties associated with the current CT planning workflow. We present an early version of an integrated system that fuses 3D-US and real-time strain images. For the first time, we employ tracking information to reduce noise in the strain calculation by choosing properly compressed frames and to position the strain image within the ultrasound volume. Using this system, we provide the tools to retrieve additional information from 3D-US and strain images alongside the CT scan. We have preliminarily evaluated the proposed system in a step-by-step fashion using a breast phantom and clinical experiments.

  2. A stroboscopic structured illumination system used in dynamic 3D visualization of high-speed motion object

    NASA Astrophysics Data System (ADS)

    Su, Xianyu; Zhang, Qican; Li, Yong; Xiang, Liqun; Cao, Yiping; Chen, Wenjing

    2005-04-01

    A stroboscopic structured illumination system, which can be used to measure the 3D shape and deformation of high-speed moving objects, is proposed and verified by experiments. The system presented in this paper can automatically detect the position of a high-speed moving object and synchronously control both the flash of an LED, which projects a structured optical field onto the surface of the moving object, and the shutter of the imaging system, which acquires an image of the deformed fringe pattern; it can also generate a software-defined signal to synchronously trigger the LED and the imaging system. We experimented on a household electric fan, successfully acquired a series of instantaneous, sharp and clear images of the rotating blades, and reconstructed their 3D shapes during different revolutions.

  3. Change of Re dependency of single bubble 3D motion by surface slip condition in surfactant solution

    NASA Astrophysics Data System (ADS)

    Tagawa, Yoshiyuki; Funakubo, Ami; Takagi, Shu; Matsumoto, Yoichiro

    2009-11-01

    Path instability of a single bubble in water is sensitive to surfactant. One of the key effects of surfactant is to decrease the bubble rising velocity (i.e., increase drag) and to change the bubble slip condition from free-slip to no-slip. This phenomenon is described as the Marangoni effect. However, the effect of surfactant on path instability has not been fully investigated. In this research, we measured bubble 3D trajectories and velocities in dilute surfactant solutions to reveal the relation between the 3D motion mode and the slip condition. The experimental parameters are the type of surfactant, its concentration, and the bubble size. Bubble motions, categorized as straight, spiral or zigzag, are plotted on a two-dimensional field of bubble Reynolds number Re and normalized drag coefficient CD^*, which is strongly related to the surface slip condition. Re ranges from 200 to 1000 and CD^* from 0 to 1. Our results show that when CD^* equals 0 or 1 (free-slip or no-slip condition, respectively), the bubble motion mode changes with Re. However, when CD^* is 0.5, the bubble motion is always spiral. This means that the Re dependency of bubble motion is strongly affected by the slip condition. We will discuss its mechanism in detail in our presentation.

  4. Robust 2D/3D registration for fast-flexion motion of the knee joint using hybrid optimization.

    PubMed

    Ohnishi, Takashi; Suzuki, Masahiko; Kobayashi, Tatsuya; Naomoto, Shinji; Sukegawa, Tomoyuki; Nawata, Atsushi; Haneishi, Hideaki

    2013-01-01

    Previously, we proposed a 2D/3D registration method that uses Powell's algorithm to obtain the 3D motion of a knee joint from 3D computed tomography and bi-plane fluoroscopic images. The 2D/3D registration is performed consecutively and automatically for each frame of the fluoroscopic images. For each frame except the first, this method starts from the optimum parameters of the previous frame and searches for the next set of optimum parameters using Powell's algorithm. However, if the flexion motion of the knee joint is fast, Powell's algorithm is likely to produce a mismatch because the initial parameters are far from the correct ones. In this study, we applied a hybrid optimization algorithm (HPS) combining Powell's algorithm with the Nelder-Mead simplex (NM-simplex) algorithm to overcome this problem. The performance of the HPS was compared with that of Powell's algorithm alone, the NM-simplex algorithm alone, the quasi-Newton algorithm, and a hybrid of the quasi-Newton and NM-simplex algorithms on five patient data sets, in terms of the root-mean-square error (RMSE), target registration error (TRE), success rate, and processing time. The RMSE, TRE, and success rate of the HPS were better than those of the other optimization algorithms, and its processing time was similar to that of Powell's algorithm alone. PMID:23138929
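
    The hybrid Powell / Nelder-Mead idea can be sketched with SciPy's general-purpose optimizers; the cost function below is only a placeholder for the actual 2D/3D image similarity, and the simple refinement logic is an assumption rather than the authors' HPS implementation.

      import numpy as np
      from scipy.optimize import minimize

      def similarity_cost(pose):
          # Placeholder: a real cost would compare projected 3D CT data with the
          # bi-plane fluoroscopic images for the 6-parameter pose.
          return float(np.sum((pose - np.array([1.0, -2.0, 0.5, 3.0, 0.0, 1.5]))**2))

      def hybrid_registration(pose_prev, tol=1e-4):
          res = minimize(similarity_cost, pose_prev, method="Powell",
                         options={"xtol": tol})
          # Polish with the derivative-free simplex search, which helps when fast
          # flexion leaves the initial guess far from the optimum.
          res2 = minimize(similarity_cost, res.x, method="Nelder-Mead",
                          options={"xatol": tol})
          return res2.x if res2.fun < res.fun else res.x

      pose = hybrid_registration(np.zeros(6))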

  5. Websim3d: A Web-based System for Generation, Storage and Dissemination of Earthquake Ground Motion Simulations.

    NASA Astrophysics Data System (ADS)

    Olsen, K. B.

    2003-12-01

    Synthetic time histories from large-scale 3D ground motion simulations generally constitute large 'data' sets which typically require hundreds of megabytes or gigabytes of storage capacity. For the same reason, getting access to a researcher's simulation output, for example for an earthquake engineer to perform site analysis or a seismologist to perform seismic hazard analysis, can be a tedious procedure. To circumvent this problem we have developed a web-based "community model" (websim3D) for the generation, storage, and dissemination of ground motion simulation results. Websim3D allows user-friendly and fast access to view and download such simulation results for an earthquake-prone area. The user selects an earthquake scenario from a map of the region, which brings up a map of the area where simulation data are available. By clicking on an arbitrary site location, synthetic seismograms and/or soil parameters for the site can be displayed at fixed or variable scaling and/or downloaded. Websim3D relies on PHP scripts for the dynamic plots of synthetic seismograms and soil profiles. Although not limited to a specific area, we illustrate the community model with simulation results from the Los Angeles basin, Wellington (New Zealand), and Mexico.

  6. 3D Finite-Difference Modeling of Strong Ground Motion in the Upper Rhine Graben - 1356 Basel Earthquake

    NASA Astrophysics Data System (ADS)

    Oprsal, I.; Faeh, D.; Giardini, D.

    2002-12-01

    The disastrous Basel earthquake of October 18, 1356 (I0=X, M ≈ 6.9) occurred in the Basel region (Upper Rhine Graben), which is seismically modest today. The lack of strong ground motion data can be effectively compensated by numerical modeling. We applied 3D finite differences (FD) to predict ground motions that can be used for microzonation and hazard assessment studies. The FD method is formulated for topography models on irregular rectangular grids. It is a 3D explicit FD formulation of the hyperbolic partial differential equation (PDE). The elastodynamic PDE is solved in the time domain. The isotropic, inhomogeneous Hookean medium contains discontinuities and a topographic free surface. The 3D elastic FD modeling is applied to a newly established P- and S-wave velocity structure model. This complex structure contains the main interfaces and gradients inside some layers. It reaches up to the earth's surface and includes topography (Kind, Faeh and Giardini, 2002, A 3D Reference Model for the Area of Basel, in prep.). A first attempt was made with a double-couple point source and a relatively simple source function. Numerical tests are planned for several finite-extent source histories because the source features of the 1356 Basel earthquake have not yet been well determined. The presumed finite-extent source is adjacent to the free surface. The results are compared to the macroseismic information of the Basel area.
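
    As a rough illustration of the explicit time-domain finite-difference stepping mentioned above, the following is a 1D scalar-wave analogue (the actual modeling is 3D, elastic, with topography and irregular grids); the grid spacing, time step and wave speed are arbitrary illustrative values.

      import numpy as np

      nx, nt = 400, 1000
      dx, dt, c = 10.0, 1e-3, 3000.0      # grid step [m], time step [s], wave speed [m/s]
      assert c * dt / dx <= 1.0           # CFL stability condition for the explicit scheme

      u_prev = np.zeros(nx)
      u = np.zeros(nx)
      u[nx // 2] = 1.0                    # point "source" as an initial displacement

      for _ in range(nt):
          lap = np.zeros(nx)
          lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
          u_next = 2 * u - u_prev + (c * dt)**2 * lap
          u_prev, u = u, u_next           # boundaries left untreated in this sketch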

  7. 3D tracking and phase-contrast imaging by twin-beams digital holographic microscope in microfluidics

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Finizio, A.; Paturzo, M.; Merola, F.; Grilli, S.; Ferraro, P.

    2012-06-01

    A compact twin-beam interferometer that can be adopted as a flexible diagnostic tool in microfluidic platforms is presented. The device has two functionalities, as explained below, and can be easily integrated into a microfluidic chip. The configuration allows 3D tracking of micro-particles and, at the same time, furnishes quantitative phase-contrast maps of the tracked micro-objects by interference microscopy. Experimental demonstration of its effectiveness and compatibility with the biological field is given for in vitro cells in a microfluidic environment. Nowadays, several microfluidic configurations exist and many of them are commercially available; their development is due to the possibility of manipulating droplets, handling micro- and nano-objects, visualizing and quantifying processes occurring in small volumes and, clearly, of direct applications in lab-on-a-chip devices. In the microfluidic research field, optical/photonic approaches are the most suitable ones because they have various advantages: they are non-contact, full-field, non-invasive and can be packaged thanks to the development of integrable optics. Moreover, phase-contrast approaches, adapted to lab-on-a-chip configurations, give the possibility to get quantitative information with remarkable lateral and vertical resolution directly in situ, without the need to dye and/or kill cells. Furthermore, numerical techniques for tracking micro-objects need to be developed for measuring velocity fields, trajectory patterns, motility of cancer cells and so on. Here, we present a compact holographic microscope that can ensure, by the same configuration and simultaneously, accurate 3D tracking and quantitative phase-contrast analysis. The system, simple and solid, is based on twin laser beams coming from a single laser source. Through a simple conceptual design, we show how these two different functionalities can be accomplished by the same optical setup. The working principle, the optical setup and the mathematical

  8. Visual Tracking of an Object with its Motion Information

    NASA Astrophysics Data System (ADS)

    Shimeno, Atsutoshi; Uchida, Seiichi; Kurazume, Ryo; Taniguchi, Rin-Ichiro; Hasegawa, Tsutomu

    Tracking of a moving robot in surveillance video is an important task for the coexistence of human beings and robots. An essential technology for managing an environment shared by human beings and moving robots is the separation and tracking of the moving robots. For this task, the moving robot should be separated from other moving objects, i.e., human beings. We assume that the robot provides additional motion information to the surveillance system to ease the task. The robot can then be distinguished from the other objects as the moving region that is consistent with this additional motion information. For this purpose, we modify a particle-filter-based tracking algorithm to incorporate the additional motion information. The results of an experiment on real surveillance video sequences indicate that the proposed framework can separate and track a moving robot in the presence of several walking persons.
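
    A minimal particle-filter sketch of the idea above: the robot's reported velocity drives the prediction step so the tracked region stays consistent with the broadcast motion information. The noise levels and the likelihood function are placeholders, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 500
      particles = rng.normal([320.0, 240.0], 20.0, size=(N, 2))   # (x, y) image positions
      weights = np.full(N, 1.0 / N)

      def step(particles, weights, reported_velocity, likelihood):
          # Predict: propagate with the robot-provided velocity plus diffusion noise.
          particles = particles + reported_velocity + rng.normal(0.0, 3.0, particles.shape)
          # Update: weight by an image likelihood (e.g. colour/motion consistency).
          weights = weights * np.array([likelihood(p) for p in particles])
          weights = weights / weights.sum()
          # Resample when the effective sample size collapses.
          if 1.0 / np.sum(weights**2) < N / 2:
              idx = rng.choice(N, size=N, p=weights)
              particles, weights = particles[idx], np.full(N, 1.0 / N)
          return particles, weights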

  9. Improved Pose Measurement and Tracking System for Motion Correction of Awake, Unrestrained Small Animal SPECT Imaging

    SciTech Connect

    Goddard Jr, James Samuel; Baba, Justin S; Weisenberger, A G; Smith, M F

    2007-01-01

    An improved optical landmark-based pose measurement and tracking system has been developed to provide 3D animal pose data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained laboratory animals. The six-degree-of-freedom animal position and orientation measurement data are time-synchronized with the SPECT list-mode data to provide for motion correction after the scan and before reconstruction. The tracking system employs infrared (IR) markers placed on the animal's head along with synchronized, strobed IR LEDs to illuminate the reflectors and freeze motion while minimizing reflections. A newly designed trinocular stereo image acquisition system using IEEE 1394 CMOS cameras acquires images of the animal with markers contained within a transparent enclosure. The trinocular configuration provides improved accuracy, range of motion, and robustness over the binocular stereo used previously. Enhanced software detects obstructions, automatically segments the markers, rejects reflections, performs marker correspondence, and calculates the 3D pose of the animal's head using image data from three cameras. The new hardware design provides more compact camera positioning with enhanced animal viewing through the 360-degree SPECT scan. This system has been implemented on a commercial scanner, tested using live mice, and shown to be more reliable and more accurate than the previous system. Experimental results showing the improved motion tracking are given.

  10. Digital In-Line Holography System for 3D-3C Particle Tracking Velocimetry

    NASA Astrophysics Data System (ADS)

    Malek, Mokrane; Lebrun, Denis; Allano, Daniel

    Digital in-line holography is a suitable method for measuring three-dimensional (3D) velocity fields. Such a system directly records on a charge-coupled device (CCD) camera a pair of diffraction patterns produced by small particles illuminated by a modulated laser diode. The numerical reconstruction is based on the wavelet transformation method. A 3D particle field is reconstructed by computing the wavelet components for different scale parameters. The scale parameter is directly related to the axial distance between a given particle and the CCD camera. The particle images are identified and localized by analyzing the maximum of the wavelet transform modulus (WTMM) and the equivalent diameter of the particle image (Deq). A 3D point-matching (PM) algorithm is then applied to the pair of sets containing the 3D particle locations. In the PM algorithm, the displacement of the particles is modeled by an affine transformation based on dual-number quaternions. The velocity field is then extracted. This system is tested with simulated particle-field displacements and its feasibility is checked with an experimental displacement.

  11. Rapid 3D Track Reconstruction with the BaBar Trigger Upgrade

    SciTech Connect

    Bailey, S

    2004-05-24

    A new hardware trigger system based on tracks detected by a stereo drift chamber has been developed for the BABAR experiment at the Stanford Linear Accelerator Center. The z₀ pT Discriminator (ZPD) is capable of fast, 3-dimensional reconstruction of charged particle tracks and provides rejection of background events due to beam particles interacting with the beam pipe at the first-level trigger. Over 1 gigabyte of data is processed per second by each ZPD module. Rapid track reconstruction has been realized using Xilinx Virtex-II FPGAs.

  12. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of observer motion in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
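
    A minimal constant-velocity Kalman filter of the kind that could smooth the observer-induced global image shift measured by registration; the state layout, noise covariances and time step are illustrative assumptions, not the authors' parameters.

      import numpy as np

      dt = 1.0
      F = np.array([[1, 0, dt, 0],        # state: [shift_x, shift_y, vel_x, vel_y]
                    [0, 1, 0, dt],
                    [0, 0, 1,  0],
                    [0, 0, 0,  1]], float)
      H = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0]], float) # registration measures the shift only
      Q = np.eye(4) * 1e-2                # process noise
      R = np.eye(2) * 1.0                 # measurement noise

      def kalman_step(x, P, z):
          x = F @ x                            # predict
          P = F @ P @ F.T + Q
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
          x = x + K @ (z - H @ x)              # update with measured shift z
          P = (np.eye(4) - K @ H) @ P
          return x, P

      x, P = np.zeros(4), np.eye(4)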

  13. Nonlinear, nonlaminar - 3D computation of electron motion through the output cavity of a klystron.

    NASA Technical Reports Server (NTRS)

    Albers, L. U.; Kosmahl, H. G.

    1971-01-01

    The accurate computation of electron motion throughout the output cavity of a klystron amplifier is discussed. The assumptions on which the computation is based are defined, and the equations of motion are reviewed, along with the space-charge fields derived from a Green's function potential of a solid cylinder. The integration process is then examined with special attention to its most difficult and important aspect, namely the accurate treatment of the dynamic effect of space-charge forces on the motion of individual cell rings of equal volume and charge. The correct treatment is demonstrated on four specific examples, and a few comments are given on the results obtained.

  14. The Complete (3-D) Co-Seismic Displacements Using Point-Like Targets Tracking With Ascending And Descending SAR Data

    NASA Astrophysics Data System (ADS)

    Hu, Xie; Wang, Teng; Liao, Mingsheng

    2013-12-01

    SAR Interferometry (InSAR) has unique advantages, e.g., all-weather/all-time accessibility, cm-level accuracy and large spatial coverage; however, it can only obtain one-dimensional measurements along the line-of-sight (LOS) direction. Offset tracking is an important complement for measuring large and rapid displacements in both the azimuth and range directions. Here we perform offset tracking on detected point-like targets (PT) by calculating the cross-correlation with a sinc-like template. A complete 3-D displacement field can then be derived using PT offset tracking from a pair of ascending and descending data sets. The presented case study on the 2010 M7.2 El Mayor-Cucapah earthquake helps us better understand the rupture details.
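
    A small sketch of offset tracking by normalised cross-correlation around a point-like target; the sinc-like template, oversampling and other SAR-specific details are omitted, and the window sizes are illustrative assumptions.

      import numpy as np

      def ncc(a, b):
          a = a - a.mean()
          b = b - b.mean()
          return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def track_offset(master, slave, center, win=16, search=8):
          """Return the (row, column) pixel offset maximising the correlation.
          Assumes the search window stays inside both images."""
          r0, c0 = center
          ref = master[r0-win:r0+win, c0-win:c0+win]
          best, best_off = -np.inf, (0, 0)
          for dr in range(-search, search + 1):
              for dc in range(-search, search + 1):
                  cand = slave[r0+dr-win:r0+dr+win, c0+dc-win:c0+dc+win]
                  score = ncc(ref, cand)
                  if score > best:
                      best, best_off = score, (dr, dc)
          return best_off, best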

  15. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    In time-division 3D displays, the time difference between the left and right images makes a viewer perceive an alternating vertical parallax when an object moves vertically on a fixed depth plane; the perceived left and right images then no longer match, which makes viewers more prone to visual fatigue. This mismatch cannot be eliminated simply by relying on precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and the characteristics of the human visual system, this paper establishes a model relating the true vertical motion velocity in reality to the vertical motion velocity on the screen, calculates the amount of vertical parallax caused by vertical motion, and then puts forward a motion compensation method to eliminate the vertical parallax. Finally, subjective experiments are carried out to analyze how the time difference affects stereoscopic visual comfort by comparing the comfort scores of stereo image sequences before and after compensation with the proposed method. The theoretical analysis and experimental results show that the proposed method is reasonable and efficient.

  16. Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

    PubMed

    Rauschecker, Andreas M; Solomon, Samuel G; Glennerster, Andrew

    2006-01-01

    In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues. PMID:17209749

  17. Self optical motion-tracking for endoscopic optical coherence tomography probe using micro-beamsplitter probe

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Zhang, Jun; Chou, Lidek; Wang, Alex; Jing, Joseph; Chen, Zhongping

    2014-03-01

    Long-range optical coherence tomography (OCT), with its high speed, high resolution, non-ionizing properties and cross-sectional imaging capability, is suitable for upper airway lumen imaging. To render 2D OCT datasets as true 3D anatomy, additional tools are usually applied, such as X-ray guidance or a magnetic sensor. X-ray increases ionizing radiation. A magnetic sensor either increases the probe size or requires an additional pull-back of the tracking sensor through the body cavity. In order to overcome these limitations, we present a novel tracking method using a 1.5 mm × 1.5 mm, 90/10-ratio micro-beamsplitter: 10% of the light through the beamsplitter is used for motion tracking and 90% is used for regular OCT imaging and motion tracking. Two signals corresponding to these two split beams, which pass through different optical path length delays, are obtained by the detector simultaneously. Using the two split beams' returned signals from the same marker line, the 2D inclination angle of each step is computed. By calculating the 2D inclination angle of each step and then connecting the translational displacements of each step, we can obtain the 2D motion trajectory of the probe. With two marker lines on the probe sheath, 3D inclination angles can be determined and then used for 3D trajectory reconstruction. We tested the accuracy of the trajectory reconstruction using the probe and demonstrated the feasibility of the design for structure reconstruction of a biological sample using a porcine trachea specimen. This optical-tracking probe has the potential to be made with an outer diameter as small as 1.0 mm, which is ideal for upper airway imaging.

  18. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach

    PubMed Central

    de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José

    2015-01-01

    This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above-water and 88 underwater control points, with 8 common points at the water surface, and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1,600,000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers, respectively. The Root Mean Square (RMS) error with homography of control and validation points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, the RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction include homography to improve the accuracy of swimming movement analysis. PMID:26175796
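
    For illustration, a minimal direct linear transformation (DLT) triangulation step: given each camera's 3x4 projection matrix (estimated from the control points) and the 2D observations of a marker, its 3D position is recovered by a linear least-squares solve. This is a generic DLT sketch, not the authors' processing chain.

      import numpy as np

      def dlt_triangulate(projections, points_2d):
          """projections: list of 3x4 camera matrices; points_2d: list of (u, v) observations."""
          rows = []
          for P, (u, v) in zip(projections, points_2d):
              rows.append(u * P[2] - P[0])
              rows.append(v * P[2] - P[1])
          A = np.vstack(rows)
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]                       # right singular vector of the smallest singular value
          return X[:3] / X[3]              # back from homogeneous coordinates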

  19. 3-D Human Action Recognition by Shape Analysis of Motion Trajectories on Riemannian Manifold.

    PubMed

    Devanne, Maxime; Wannous, Hazem; Berretti, Stefano; Pala, Pietro; Daoudi, Mohamed; Del Bimbo, Alberto

    2015-07-01

    Recognizing human actions in 3-D video sequences is an important open problem that is currently at the heart of many research domains including surveillance, natural interfaces and rehabilitation. However, the design and development of models for action recognition that are both accurate and efficient is a challenging task due to the variability of the human pose, clothing and appearance. In this paper, we propose a new framework to extract a compact representation of a human action captured through a depth sensor, and enable accurate action recognition. The proposed solution builds on fitting a human skeleton model to the acquired data so as to represent the 3-D coordinates of the joints and their change over time as a trajectory in a suitable action space. Thanks to such a 3-D joint-based framework, the proposed solution is capable of capturing both the shape and the dynamics of the human body simultaneously. The action recognition problem is then formulated as the problem of computing the similarity between the shapes of trajectories in a Riemannian manifold. Classification using k-nearest neighbors is finally performed on this manifold, taking advantage of Riemannian geometry in the open curve shape space. Experiments are carried out on four representative benchmarks to demonstrate the potential of the proposed solution in terms of accuracy/latency for low-latency action recognition. Comparative results with state-of-the-art methods are reported. PMID:25216492

  20. Real-time Awake Animal Motion Tracking System for SPECT Imaging

    SciTech Connect

    Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon; Weisenberger, A G; Stolin, A; McKisson, J; Smith, M F

    2008-01-01

    Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur for each set of images. The system using the three cameras automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to the scanner operation. The system has undergone testing using both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.

  1. 3D shape tracking of minimally invasive medical instruments using optical frequency domain reflectometry

    NASA Astrophysics Data System (ADS)

    Parent, Francois; Kanti Mandal, Koushik; Loranger, Sebastien; Watanabe Fernandes, Eric Hideki; Kashyap, Raman; Kadoury, Samuel

    2016-03-01

    We propose here a new alternative for providing real-time device tracking during minimally invasive interventions using a truly distributed strain sensor based on optical frequency domain reflectometry (OFDR) in optical fibers. The guidance of minimally invasive medical instruments such as needles or catheters (e.g., by adding a piezoelectric coating) has been the focus of extensive research in the past decades. Real-time tracking of instruments in medical interventions facilitates image guidance and helps the user reach a pre-localized target more precisely. Image-guided systems using ultrasound imaging and shape sensors based on fiber Bragg gratings (FBG) embedded in optical fibers can provide retroactive feedback to the user in order to reach the targeted areas with even more precision. However, ultrasound imaging with electro-magnetic tracking cannot be used in the magnetic resonance imaging (MRI) suite, while shape sensors based on FBGs embedded in optical fibers provide discrete values of the instrument position, which requires approximations to be made to evaluate its global shape. This is why a truly distributed strain sensor based on OFDR could enhance the tracking accuracy. In both cases, since the strain is proportional to the local curvature of the fiber, a strain sensor can provide the three-dimensional shape of medical instruments by simply inserting fibers inside the devices. To faithfully follow the shape of the needle in the tracking frame, three fibers glued in a specific geometry are used, providing three degrees of freedom along the fiber. Near real-time tracking of medical instruments is thus obtained, offering clear advantages for clinical monitoring in remotely controlled catheter or needle guidance. We present results demonstrating the promising aspects of this approach as well as the limitations of the OFDR technique.

  2. Towards hybrid bronchoscope tracking under respiratory motion: evaluation on a dynamic motion phantom

    NASA Astrophysics Data System (ADS)

    Luo, Xiongbiao; Feuerstein, Marco; Sugiura, Takamasa; Kitasaka, Takayuki; Imaizumi, Kazuyoshi; Hasegawa, Yoshinori; Mori, Kensaku

    2010-02-01

    This paper presents a hybrid camera tracking method that uses electromagnetic (EM) tracking and intensity-based image registration, and its evaluation on a dynamic motion phantom. As respiratory motion can significantly affect rigid registration of the EM tracking and CT coordinate systems, a standard tracking approach that initializes intensity-based image registration with absolute pose data acquired by EM tracking will fail when the initial camera pose is too far from the actual pose. We here propose two new schemes to address this problem. Both schemes intelligently combine absolute pose data from EM tracking with relative motion data derived from EM tracking and intensity-based image registration. These schemes significantly improve the overall camera tracking performance. We constructed a dynamic phantom simulating the respiratory motion of the airways to evaluate these schemes. Our experimental results demonstrate that these schemes can track a bronchoscope more accurately and robustly than our previously proposed method, even when the maximum simulated respiratory motion reaches 24 mm.

  3. Sedimentary basin effects in Seattle, Washington: Ground-motion observations and 3D simulations

    USGS Publications Warehouse

    Frankel, Arthur; Stephenson, William; Carver, David

    2009-01-01

    Seismograms of local earthquakes recorded in Seattle exhibit surface waves in the Seattle basin and basin-edge focusing of S waves. Spectral ratios of S waves and later arrivals at 1 Hz for stiff-soil sites in the Seattle basin show a dependence on the direction to the earthquake, with earthquakes to the south and southwest producing higher average amplification. Earthquakes to the southwest typically produce larger basin surface waves relative to S waves than earthquakes to the north and northwest, probably because of the velocity contrast across the Seattle fault along the southern margin of the Seattle basin. S to P conversions are observed for some events and are likely converted at the bottom of the Seattle basin. We model five earthquakes, including the M 6.8 Nisqually earthquake, using 3D finite-difference simulations accurate up to 1 Hz. The simulations reproduce the observed dependence of amplification on the direction to the earthquake. The simulations generally match the timing and character of basin surface waves observed for many events. The 3D simulation for the Nisqually earthquake produces focusing of S waves along the southern margin of the Seattle basin near the area in west Seattle that experienced increased chimney damage from the earthquake, similar to the results of the higher-frequency 2D simulation reported by Stephenson et al. (2006). Waveforms from the 3D simulations show reasonable agreement with the data at low frequencies (0.2-0.4 Hz) for the Nisqually earthquake and an M 4.8 deep earthquake west of Seattle.

  4. Undersampled Cine 3D tagging for rapid assessment of cardiac motion

    PubMed Central

    2012-01-01

    Background: CMR allows investigating cardiac contraction, rotation and torsion non-invasively by the use of tagging sequences. Three-dimensional tagging has been proposed to cover the whole heart, but data acquisition requires three consecutive breath holds and hence demands considerable patient cooperation. In this study we have implemented and studied k-t undersampled cine 3D tagging in conjunction with k-t PCA reconstruction to potentially permit single breath-hold acquisitions. Methods: The performance of undersampled cine 3D tagging was investigated using computer simulations and in-vivo measurements in 8 healthy subjects and 5 patients with myocardial infarction. Fully sampled data was obtained and compared to retrospectively and prospectively undersampled acquisitions. Fully sampled data was acquired in three consecutive breath holds. Prospectively undersampled data was obtained within a single breath hold. Based on harmonic phase (HARP) analysis, circumferential shortening, rotation and torsion were compared between fully sampled and undersampled data using Bland-Altman and linear regression analysis. Results: In computer simulations, the error for circumferential shortening was 2.8 ± 2.3% and 2.7 ± 2.1% for undersampling rates of R = 3 and 4, respectively. Errors in ventricular rotation were 2.5 ± 1.9% and 3.0 ± 2.2% for R = 3 and 4. Comparison of results from fully sampled in-vivo data with prospectively undersampled acquisitions showed a mean difference in circumferential shortening of −0.14 ± 5.18% and 0.71 ± 6.16% for R = 3 and 4. The mean differences in rotation were 0.44 ± 1.8° and 0.73 ± 1.67° for R = 3 and 4, respectively. In patients, peak circumferential shortening was significantly reduced (p < 0.002 for all patients) in regions with late gadolinium enhancement. Conclusion: Undersampled cine 3D tagging enables significant reduction in scan time of whole-heart tagging and

  5. Analysis of 3-D Tongue Motion from Tagged and Cine Magnetic Resonance Images

    ERIC Educational Resources Information Center

    Xing, Fangxu; Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2016-01-01

    Purpose: Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during…

  6. Experience affects the use of ego-motion signals during 3D shape perception

    PubMed Central

    Jain, Anshul; Backus, Benjamin T.

    2011-01-01

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the “stationarity prior,” is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers’ stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity. PMID:21191132

  7. Motion Controllers for Learners to Manipulate and Interact with 3D Objects for Mental Rotation Training

    ERIC Educational Resources Information Center

    Yeh, Shih-Ching; Wang, Jin-Liang; Wang, Chin-Yeh; Lin, Po-Han; Chen, Gwo-Dong; Rizzo, Albert

    2014-01-01

    Mental rotation is an important spatial processing ability and an important element in intelligence tests. However, the majority of past attempts at training mental rotation have used paper-and-pencil tests or digital images. This study proposes an innovative mental rotation training approach using magnetic motion controllers to allow learners to…

  8. Numerical scheme for riser motion calculation during 3-D VIV simulation

    NASA Astrophysics Data System (ADS)

    Huang, Kevin; Chen, Hamn-Ching; Chen, Chia-Rong

    2011-10-01

    This paper presents a numerical scheme for riser motion calculation and its application to riser VIV simulations. The discretisation of the governing differential equation is studied first. The top-tensioned risers are simplified as tensioned beams. A centered-space, forward-time finite difference scheme is derived from the governing equations of motion. An implicit method is then adopted for better numerical stability. The method meets the von Neumann criterion and is shown to be unconditionally stable. The discretized linear algebraic equations are solved using an LU decomposition method. This approach is then applied to a series of benchmark cases with known solutions. The comparisons show good agreement. Finally the method is applied to practical riser VIV simulations. The studied cases cover a wide range of riser VIV problems, i.e., different riser outer diameters, lengths, tensioning conditions, and current profiles. Reasonable agreement is obtained between the numerical simulations and experimental data on riser motions and cross-flow VIV amplitude ratio a/D. These validations and comparisons confirm that the present numerical scheme for riser motion calculation is valid and effective for long-riser VIV simulation.
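
    A minimal sketch of one implicit time step for a tensioned-beam riser model, solved with an LU factorisation as described above; the coefficients, boundary treatment and fourth-derivative operator are simplified assumptions.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      n, dx, dt = 200, 1.0, 1e-3
      m, T, EI = 100.0, 1.0e6, 1.0e7       # mass per length, tension, bending stiffness

      I = np.eye(n)
      D2 = (np.diag(np.ones(n-1), 1) - 2*I + np.diag(np.ones(n-1), -1)) / dx**2
      D4 = D2 @ D2                         # crude fourth-derivative operator
      K = T * D2 - EI * D4

      A = m / dt**2 * I - K                # constant system matrix
      A[0, :] = 0.0;  A[0, 0] = 1.0        # pinned top end
      A[-1, :] = 0.0; A[-1, -1] = 1.0      # pinned bottom end
      lu, piv = lu_factor(A)               # factorise once, reuse every time step

      def step(y, y_prev, hydro_force):
          rhs = m / dt**2 * (2*y - y_prev) + hydro_force
          rhs[0] = rhs[-1] = 0.0
          return lu_solve((lu, piv), rhs)  # displacement at the new time level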

  9. Motion object tracking algorithm using multi-cameras

    NASA Astrophysics Data System (ADS)

    Kong, Xiaofang; Chen, Qian; Gu, Guohua

    2015-09-01

    Motion object tracking is one of the most important research directions in computer vision. Challenges in designing a robust tracking method are usually caused by partial or complete occlusions of targets. However, a motion object tracking algorithm based on multiple cameras and the homography relation among three views can deal with this issue effectively, since the information combined from multiple cameras in different views makes the target more complete and accurate. In this paper, a robust visual tracking algorithm based on the homography relations of three cameras in different views is presented to cope with occlusion. First of all, as the main contribution of this paper, a motion object tracking algorithm based on low-rank matrix representation within the particle-filter framework is applied to track the same target in the public region in each of the different views. The target model and the occlusion model are established, and an alternating optimization algorithm is utilized to solve the proposed optimization formulation while tracking. Then, we take the plane in which the target has the largest occlusion weight as the principal plane and calculate the homography to find the mapping relations between the different views. Finally, the images of the other two views are projected into the main plane. By making use of the homography relation between different views, the information of the occluded target can be obtained completely. The proposed algorithm has been examined on several challenging image sequences, and experiments show that it overcomes tracking failures, especially under occlusion. Besides, the proposed algorithm improves tracking accuracy compared with other state-of-the-art algorithms.
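
    A small sketch of the view-to-view mapping step: once the homography H of the principal plane has been estimated, points tracked in another camera can be projected into the main plane so that occluded parts of the target are recovered. The matrix values below are synthetic placeholders.

      import numpy as np

      def apply_homography(H, pts):
          """Map Nx2 pixel coordinates from one view into another through the 3x3 matrix H."""
          pts_h = np.hstack([pts, np.ones((len(pts), 1))])    # to homogeneous coordinates
          mapped = (H @ pts_h.T).T
          return mapped[:, :2] / mapped[:, 2:3]

      # H would normally be estimated from point correspondences between the views;
      # a synthetic transform stands in for it here.
      H = np.array([[0.9, 0.1, 15.0],
                    [-0.1, 0.9, -8.0],
                    [0.0, 0.0, 1.0]])
      print(apply_homography(H, np.array([[320.0, 240.0]])))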

  10. Motion-parallax smoothness of short-, medium-, and long-distance 3D image presentation using multi-view displays.

    PubMed

    Takaki, Yasuhiro; Urano, Yohei; Nishio, Hiroyuki

    2012-11-19

    The discontinuity of motion parallax offered by multi-view displays was assessed by subjective evaluation. A super multi-view head-up display, which provides dense viewing points and has short-, medium-, and long-distance display ranges, was used. The results showed that discontinuity perception depended on the ratio of the image shift between adjacent parallax images to the pixel pitch of the three-dimensional (3D) images, and on the crosstalk between viewing points. When the ratio was less than 0.2 and the crosstalk was small, the discontinuity was not perceived. When the ratio was greater than 1 and the crosstalk was small, the discontinuity was perceived and the resolution of the 3D images was halved. When the crosstalk was large, the discontinuity was not perceived even when the ratio was 1 or 2; however, the resolution decreased by a factor of two or more. PMID:23187574

  11. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become more and more important in recent years. Several well-established methods already yield impressive results. However, even under quite common conditions such as object movement or complex shapes, most methods become unsatisfactory. Thus, 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables motion-robust 3D acquisition of complex-shaped object surfaces by a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor zoo for different object sizes is presented. Finally, an overview of current and future fields of investigation is given.

  12. Combining marker-less patient setup and respiratory motion monitoring using low cost 3D camera technology

    NASA Astrophysics Data System (ADS)

    Tahavori, F.; Adams, E.; Dabbs, M.; Aldridge, L.; Liversidge, N.; Donovan, E.; Jordan, T.; Evans, PM.; Wells, K.

    2015-03-01

    Patient set-up misalignment/motion can be a significant source of error within external beam radiotherapy, leading to unwanted dose to healthy tissues and sub-optimal dose to the target tissue. Such inadvertent displacement or motion of the target volume may be caused by treatment set-up error, respiratory motion or involuntary movement, potentially decreasing the therapeutic benefit. The conventional approach to managing abdominal-thoracic patient set-up is via skin markers (tattoos) and laser-based alignment. Alignment of the internal target volume with its position in the treatment plan can be achieved using Deep Inspiration Breath Hold (DIBH) in conjunction with marker-based respiratory motion monitoring. We propose a marker-less single-system solution for patient set-up and respiratory motion management based on low-cost 3D depth camera technology (such as the Microsoft Kinect). In this new work we assess this approach in a study group of six volunteer subjects. Separate simulated treatment "fractions" (set-ups) are compared for each subject, undertaken using conventional laser-based alignment and with the intrinsic depth images produced by the Kinect. The Microsoft Kinect is also compared with the well-known RPM system for respiratory motion management in terms of monitoring free breathing and DIBH. Preliminary results suggest that the Kinect is able to produce mm-level surface alignment and comparable DIBH respiratory motion management when compared to the popular RPM system. Such an approach may also yield significant benefits in terms of patient throughput, as marker alignment and respiratory motion can be automated in a single system.

  13. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge. PMID:26529685

  14. Laetoli's lost tracks: 3D generated mean shape and missing footprints.

    PubMed

    Bennett, M R; Reynolds, S C; Morse, S A; Budka, M

    2016-01-01

    The Laetoli site (Tanzania) contains the oldest known hominin footprints, and their interpretation remains open to debate, despite over 35 years of research. The two hominin trackways present are parallel to one another, one of which is a composite formed by at least two individuals walking in single file. Most researchers have focused on the single, clearly discernible G1 trackway while the G2/3 trackway has been largely dismissed due to its composite nature. Here we report the use of a new technique that allows us to decouple the G2 and G3 tracks for the first time. In so doing we are able to quantify the mean footprint topology of the G3 trackway and render it useable for subsequent data analyses. By restoring the effectively 'lost' G3 track, we have doubled the available data on some of the rarest traces directly associated with our Pliocene ancestors. PMID:26902912

  15. Laetoli’s lost tracks: 3D generated mean shape and missing footprints

    PubMed Central

    Bennett, M. R.; Reynolds, S. C.; Morse, S. A.; Budka, M.

    2016-01-01

    The Laetoli site (Tanzania) contains the oldest known hominin footprints, and their interpretation remains open to debate, despite over 35 years of research. The two hominin trackways present are parallel to one another, one of which is a composite formed by at least two individuals walking in single file. Most researchers have focused on the single, clearly discernible G1 trackway while the G2/3 trackway has been largely dismissed due to its composite nature. Here we report the use of a new technique that allows us to decouple the G2 and G3 tracks for the first time. In so doing we are able to quantify the mean footprint topology of the G3 trackway and render it useable for subsequent data analyses. By restoring the effectively ‘lost’ G3 track, we have doubled the available data on some of the rarest traces directly associated with our Pliocene ancestors. PMID:26902912

  16. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscopes, barometers and, most importantly, cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high-performance platforms at low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on stereo vision and inertial navigation to determine the movement of the system as well as to create a model of the environment it senses.

  17. Camera motion tracking of real endoscope by using virtual endoscopy system and texture information

    NASA Astrophysics Data System (ADS)

    Shoji, Hidenori; Mori, Kensaku; Sugiyama, Jun; Suenaga, Yasuhito; Toriwaki, Jun-ichiro; Takabatake, Hirotsugu; Natori, Hiroshi

    2001-05-01

    This paper proposes an improved method for camera motion tracking of a real endoscope based on the registration of endoscopic images and computed tomography (CT) images. Camera motion estimation is a fundamental function of an endoscope navigation system, which provides useful navigation information to medical doctors during endoscopic examinations. Our previous method consists of two steps: (1) rough estimation of the camera motion using optical flow information and (2) precise estimation performed by an image-based registration technique. The problem with the previous method was that only the forward and backward motion of the camera was estimated from the optical flow information. To solve this problem, the proposed method effectively uses changes in texture information on the real endoscopic video images and estimates the camera motion by mapping the texture to the organ's 3-D shape generated from the CT images. We roughly estimate the full motion of the camera using the texture information of the organ wall, and then the precise estimation is performed by the image-based registration method. We applied the proposed method to real bronchoscopic video images and 3-D X-ray CT images. The results showed that the proposed method was superior to our previous method in many cases.

  18. Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions

    NASA Astrophysics Data System (ADS)

    Khoury, Mehdi; Liu, Honghai

    This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.

  19. Pitching motion control of a butterfly-like 3D flapping wing-body model

    NASA Astrophysics Data System (ADS)

    Suzuki, Kosuke; Minami, Keisuke; Inamuro, Takaji

    2014-11-01

    Free flights and pitching motion control of a butterfly-like flapping wing-body model are numerically investigated using an immersed boundary-lattice Boltzmann method. The model flaps downward to generate lift and backward to generate thrust. Although the generated lift allows the model to climb against gravity, the model also produces a nose-up pitching torque and consequently loses its balance. In this study, we discuss a way to control the pitching motion by flexing the body of the wing-body model like an actual butterfly. The body of the model is composed of two straight rigid rods connected by a rotary actuator. It is found that the pitching angle is kept within a range of ±5° by applying proportional-plus-integral-plus-derivative (PID) control to the input torque of the rotary actuator.
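
    As a rough illustration of the control idea, the sketch below (Python) applies PID feedback to a toy second-order pitching model. The plant, gains and disturbance torque are made-up assumptions, not the authors' immersed boundary-lattice Boltzmann wing-body model.

      import numpy as np

      # Minimal PID sketch on a toy rotational plant (illustrative only; all
      # gains, the inertia and the disturbance torque are assumptions).
      def simulate_pid(kp=2.0, ki=0.5, kd=0.8, dt=1e-3, steps=20000):
          theta, omega = 0.2, 0.0        # initial pitch angle (rad) and rate
          inertia = 1.0                  # assumed moment of inertia
          disturbance = 0.05             # assumed constant nose-up torque
          integral, prev_err = 0.0, 0.0
          history = []
          for _ in range(steps):
              err = 0.0 - theta          # regulate the pitch angle to zero
              integral += err * dt
              derivative = (err - prev_err) / dt
              torque = kp * err + ki * integral + kd * derivative
              prev_err = err
              # integrate the toy rotational dynamics
              omega += (torque + disturbance) / inertia * dt
              theta += omega * dt
              history.append(theta)
          return np.array(history)

      print("final pitch angle (rad):", simulate_pid()[-1])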

  20. A very low-cost system for capturing 3D motion scans with color and texture data

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    This paper presents a technique for capturing 3D motion scans using hardware that can be constructed for approximately $5,000. This hardware-software solution, in addition to capturing the movement of the physical structures, also captures color and texture data. The scanner configuration developed at the University of North Dakota is sufficient in size for capturing scans of a group of humans. Scanning starts with synchronization and then requires modeling of each frame. For some applications, linking structural elements from frame to frame may also be required. The efficacy of this scanning approach is discussed and prospective applications for it are considered.

  1. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. These limits can be overcome by deformation analysis applied directly to the original 3D point clouds, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess of the expected displacement pattern from the user). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the
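
    For context on the 3D point-cloud comparison algorithms mentioned (ICP in particular), the following Python sketch shows a single textbook ICP iteration (nearest-neighbour matching plus an SVD-based rigid fit) on synthetic points. It is a generic formulation, not the authors' processing chain.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_step(source, target):
          """One ICP iteration: match nearest neighbours, then solve for R, t by SVD."""
          tree = cKDTree(target)
          _, idx = tree.query(source)          # closest target point for each source point
          matched = target[idx]
          mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
          H = (source - mu_s).T @ (matched - mu_t)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:             # guard against reflections
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          t = mu_t - R @ mu_s
          return source @ R.T + t, R, t

      # toy usage: recover a small known translation between two synthetic clouds
      rng = np.random.default_rng(0)
      cloud = rng.normal(size=(500, 3))
      aligned, R, t = icp_step(cloud, cloud + np.array([0.05, 0.0, 0.0]))
      print("estimated translation:", t)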

  2. Computer-aided target tracking in motion analysis studies

    NASA Astrophysics Data System (ADS)

    Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.

    1990-08-01

    Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.

  3. Distributed collaborative environment with real-time tracking of 3D body postures

    NASA Astrophysics Data System (ADS)

    Alisi, Thomas M.; Del Bimbo, Alberto; Pucci, Fabio; Valli, Alessandro

    2003-12-01

    In this paper a multi-user motion capture system is presented, where users work from separate locations and interact in a common virtual environment. The system functions well on low-end personal computers; it implements a natural human/machine interaction due to the complete absence of markers and weak constraints on users' clothes and environment lighting. It is suitable for every-day use, where the great precision reached by complex commercial systems is not the principal requisite.

  4. Diaphragm motion characterization using chest motion data for biomechanics-based lung tumor tracking during EBRT

    NASA Astrophysics Data System (ADS)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2016-03-01

    Despite recent advances in image-guided interventions, lung cancer External Beam Radiation Therapy (EBRT) is still very challenging due to respiration-induced tumor motion. Among various proposed methods of tumor motion compensation, real-time tumor tracking is known to be one of the most effective solutions, as it allows for maximum normal tissue sparing, less overall radiation exposure and a shorter treatment session. As such, we propose a biomechanics-based real-time tumor tracking method for effective lung cancer radiotherapy. In the proposed algorithm, the required boundary conditions for the lung Finite Element model, including diaphragm motion, are obtained using the chest surface motion as a surrogate signal. The primary objective of this paper is to demonstrate the feasibility of developing a function which is capable of inputting the chest surface motion data and outputting the diaphragm motion in real-time. For this purpose, after quantifying the diaphragm motion with a Principal Component Analysis (PCA) model, the relationship between the PCA parameters of diaphragm motion and the chest motion data was obtained through Partial Least Squares Regression (PLSR). Preliminary results obtained in this study indicate that the PCA coefficients representing the diaphragm motion can be obtained through chest surface motion tracking with high accuracy.
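
    A minimal Python sketch of the two-stage idea follows: compress the diaphragm motion with PCA, then learn a PLSR map from chest-surface surrogate signals to the PCA coefficients. The data below are synthetic placeholders, not the patient data used in the study.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import PLSRegression

      # Synthetic stand-in data: a shared breathing signal drives both the
      # diaphragm surface points and the chest surface markers.
      rng = np.random.default_rng(1)
      n_frames, n_diaphragm_pts, n_chest_markers = 200, 300, 10
      breathing = np.sin(np.linspace(0, 8 * np.pi, n_frames))[:, None]
      diaphragm = breathing * rng.normal(size=(1, n_diaphragm_pts)) \
                  + 0.05 * rng.normal(size=(n_frames, n_diaphragm_pts))
      chest = breathing * rng.normal(size=(1, n_chest_markers)) \
              + 0.05 * rng.normal(size=(n_frames, n_chest_markers))

      pca = PCA(n_components=3).fit(diaphragm)
      coeffs = pca.transform(diaphragm)            # low-dimensional diaphragm motion

      pls = PLSRegression(n_components=3).fit(chest, coeffs)
      predicted_coeffs = pls.predict(chest)        # chest surface -> PCA coefficients
      reconstructed = pca.inverse_transform(predicted_coeffs)

      rmse = np.sqrt(np.mean((reconstructed - diaphragm) ** 2))
      print("reconstruction RMSE on synthetic data:", rmse)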

  5. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking

    PubMed Central

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-01-01

    Purpose: In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described.Methods: The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion.Results: The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%–100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7–1.1 mm for real-time tracking, and 3.7–7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%–23% for real-time tracking, and 10%–47% for no compensation

  6. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking

    SciTech Connect

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-15

    Purpose: In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. Methods: The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. Results: The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation
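
    The sketch below (Python) illustrates the underlying idea of averaging only the motion component perpendicular to the leaf travel direction while following the parallel component directly. The window length, sampling rate and motion trace are assumptions, not the clinical parameters of the study.

      import numpy as np

      def moving_average(x, window=25):
          """Causal moving average: each output uses only past samples."""
          kernel = np.ones(window) / window
          padded = np.concatenate([np.full(window - 1, x[0]), x])
          return np.convolve(padded, kernel, mode="valid")

      t = np.linspace(0, 30, 750)                      # 30 s trace at 25 Hz (assumed)
      parallel = 5.0 * np.sin(2 * np.pi * t / 4.0)     # motion along leaf travel (mm)
      perpendicular = 3.0 * np.sin(2 * np.pi * t / 4.0) + 0.1 * t  # with baseline drift

      tracked_parallel = parallel                      # followed directly by the leaves
      tracked_perpendicular = moving_average(perpendicular)  # smoothed to limit beam holds

      residual = np.abs(tracked_perpendicular - perpendicular)
      print("max perpendicular lag error (mm): %.2f" % residual.max())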

  7. Stereo photography of neutral density He-filled bubbles for 3-D fluid motion studies in an engine cylinder.

    PubMed

    Kent, J C; Eaton, A R

    1982-03-01

    A new technique has been developed for studies of fluid motion within the cylinder of a reciprocating piston engine during the air induction process. Helium-filled bubbles, serving as neutrally buoyant flow tracer particles, enter the cylinder along with the inducted air charge. The bubble motion is recorded by stereo cine photography through the transparent cylinder of a specially designed research engine. Quantitative data on the 3-D velocity field generated during induction is obtained from frame-to-frame analysis of the stereo images, taking into account refraction of the rays due to the transparent cylinder. Other applications for which this technique appears suitable include measurements of velocity fields within intake ports and flow-field dynamics within intake manifolds of multicylinder engines. PMID:20372559

  8. Hybrid markerless tracking of complex articulated motion in golf swings.

    PubMed

    Fung, Sim Kwoh; Sundaraj, Kenneth; Ahamed, Nizam Uddin; Kiang, Lam Chee; Nadarajah, Sivadev; Sahayadhas, Arun; Ali, Md Asraf; Islam, Md Anamul; Palaniappan, Rajkumar

    2014-04-01

    Sports video tracking is a research topic that has attained increasing attention due to its high commercial potential. A number of sports, including tennis, soccer, gymnastics, running, golf, badminton and cricket, have been used to demonstrate novel ideas in sports motion tracking. The main challenge associated with this research concerns the extraction of a highly complex articulated motion from a video scene. Our research focuses on the development of a markerless human motion tracking system that tracks the major body parts of an athlete straight from a sports broadcast video. We proposed a hybrid tracking method, which consists of a combination of three algorithms (pyramidal Lucas-Kanade optical flow (LK), normalised correlation-based template matching and background subtraction), to track the golfer's head, body, hands, shoulders, knees and feet during a full swing. We then match, track and map the results onto a 2D articulated human stick model to represent the pose of the golfer over time. Our work was tested using two video broadcasts of a golfer, and we obtained satisfactory results. The current outcomes of this research can play an important role in enhancing the performance of a golfer, provide vital information to sports medicine practitioners through technically sound guidance on movements, and should help diminish the risk of golfing injuries. PMID:24725790
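
    A minimal OpenCV (Python) sketch of the three building blocks named above is given below; the video file name, the initial template window and the feature detector are placeholders, and the stick-model fitting stage of the described system is omitted.

      import cv2

      cap = cv2.VideoCapture("swing.mp4")              # placeholder video file
      bg = cv2.createBackgroundSubtractorMOG2()

      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
      template = prev_gray[100:140, 200:240]           # assumed patch around a body part
      points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                       qualityLevel=0.01, minDistance=7)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

          fg_mask = bg.apply(frame)                    # isolates the moving golfer
          if points is not None and len(points) > 0:   # pyramidal LK: feature motion
              new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
                  prev_gray, gray, points, None, winSize=(21, 21), maxLevel=3)
              points = new_pts[status.ravel() == 1].reshape(-1, 1, 2)
          # normalised-correlation template matching re-localises the tracked patch
          res = cv2.matchTemplate(gray, template, cv2.TM_CCORR_NORMED)
          _, score, _, top_left = cv2.minMaxLoc(res)

          prev_gray = gray

      cap.release()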

  9. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV kV imaging

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.

    2008-12-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real
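
    The geometric core of MV-kV triangulation can be sketched as follows (Python): given each imager's source position and a back-projected ray toward the detected marker, the marker is estimated as the point closest to both rays in a least-squares sense. The geometry values are invented and do not reflect a calibrated LINAC.

      import numpy as np

      def triangulate(p1, d1, p2, d2):
          """Point closest to the two 3D lines p_i + s*d_i (least squares)."""
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          # orthogonality conditions of the connecting segment to both lines
          A = np.array([[d1 @ d1, -d1 @ d2],
                        [d1 @ d2, -d2 @ d2]])
          b = np.array([d1 @ (p2 - p1), d2 @ (p2 - p1)])
          s, t = np.linalg.solve(A, b)
          return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

      mv_source = np.array([0.0, 0.0, 1000.0])     # assumed MV source position (mm)
      kv_source = np.array([1000.0, 0.0, 0.0])     # assumed orthogonal kV source (mm)
      marker = np.array([5.0, -3.0, 10.0])
      estimate = triangulate(mv_source, marker - mv_source,
                             kv_source, marker - kv_source)
      print("triangulated marker position:", estimate)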

  10. Pulmonary CT image registration and warping for tracking tissue deformation during the respiratory cycle through 3D consistent image registration

    PubMed Central

    Li, Baojun; Christensen, Gary E.; Hoffman, Eric A.; McLennan, Geoffrey; Reinhardt, Joseph M.

    2008-01-01

    Tracking lung tissues during the respiratory cycle has been a challenging task for diagnostic CT and CT-guided radiotherapy. We propose an intensity- and landmark-based image registration algorithm to perform image registration and warping of 3D pulmonary CT image data sets, based on consistency constraints and matching corresponding airway branchpoints. In this paper, we demonstrate the effectiveness and accuracy of this algorithm in tracking lung tissues using both animal and human data sets. In the animal study, the result showed a tracking accuracy of 1.9 mm between 50% functional residual capacity (FRC) and 85% total lung capacity (TLC) for 12 metal seeds implanted in the lungs of a breathing sheep under precise volume control using a pulmonary ventilator. Visual inspection of the human subject results revealed the algorithm’s potential not only in matching the global shapes, but also in registering the internal structures (e.g., oblique lobe fissures, pulmonary artery branches, etc.). These results suggest that our algorithm has significant potential for warping and tracking lung tissue deformation with applications in diagnostic CT, CT-guided radiotherapy treatment planning, and therapeutic effect evaluation. PMID:19175115

  11. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying datasets being generated demand new visualization and quantification techniques. Visualization alone is not sufficient. Without proper measurement information and computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology which allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates on data in situ: where it is stored and when it was computed.

  12. Lagrangian 3D particle tracking in high-speed flows: Shake-The-Box for multi-pulse systems

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Schanz, Daniel; Reuther, Nico; Kähler, Christian J.; Schröder, Andreas

    2016-08-01

    The Shake-The-Box (STB) particle tracking technique, recently introduced for time-resolved 3D particle image velocimetry (PIV) images, is applied here to data from a multi-pulse investigation of a turbulent boundary layer flow with adverse pressure gradient in air at 36 m/s (Re_τ = 10,650). The multi-pulse acquisition strategy allows for the recording of four-pulse long time-resolved sequences with a time separation of a few microseconds. The experimental setup consists of a dual-imaging system and a dual-double-cavity laser emitting orthogonal polarization directions to separate the four pulses. The STB particle triangulation and tracking strategy is adapted here to cope with the limited amount of realizations available along the time sequence and to take advantage of the ghost track reduction offered by the use of two independent imaging systems. Furthermore, a correction scheme to compensate for camera vibrations is discussed, together with a method to accurately identify the position of the wall within the measurement domain. Results show that approximately 80,000 tracks can be instantaneously reconstructed within the measurement volume, enabling the evaluation of both dense velocity fields, suitable for spatial gradients evaluation, and highly spatially resolved boundary layer profiles. Turbulent boundary layer profiles obtained from ensemble averaging of the STB tracks are compared to results from 2D-PIV and long-range micro particle tracking velocimetry; the comparison shows the capability of the STB approach in delivering accurate results across a wide range of scales.

  13. A Detailed Study of FDIRC Prototype with Waveform Digitizing Electronics in Cosmic Ray Telescope Using 3D Tracks

    SciTech Connect

    Nishimura, K.; Dey, B.; Aston, D.; Leith, D.W.G.S.; Ratcliff, B.; Roberts, D.; Ruckman, L.; Shtol, D.; Varner, G.S.; Va'vra, J.

    2012-07-30

    We present a detailed study of a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC) with waveform digitizing electronics. In this test study, the FDIRC prototype has been instrumented with seven Hamamatsu H-8500 MaPMTs. Waveforms from ~450 pixels are digitized with waveform sampling electronics based on the BLAB2 ASIC, operating at a sampling speed of ~2.5 GSa/s. The FDIRC prototype was tested in a large cosmic ray telescope (CRT) providing 3D muon tracks with ~1.5 mrad angular resolution and muon energy of E_muon > 1.6 GeV. In this study we provide a detailed analysis of the tails in the Cherenkov angle distribution as a function of various variables, compare experimental results with simulation, and identify the major contributions to the tails. We demonstrate that to see the full impact of these tails on the Cherenkov angle resolution, it is crucial to use 3D tracks, and have a full understanding of the role of ambiguities. These issues could not be fully explored in previous FDIRC studies where the beam was perpendicular to the quartz radiator bars. This work is relevant for the final FDIRC prototype of the PID detector at SuperB, which will be tested this year in the CRT setup.

  14. A Detailed Study of FDIRC Prototype with Waveform Digitizing Electronics in Cosmic Ray Telescope Using 3D Tracks.

    SciTech Connect

    Nishimura, K

    2012-07-01

    We present a detailed study of a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC) with waveform digitizing electronics. In this test study, the FDIRC prototype has been instrumented with seven Hamamatsu H-8500 MaPMTs. Waveforms from ~450 pixels are digitized with waveform sampling electronics based on the BLAB2 ASIC, operating at a sampling speed of ~2.5 GSa/s. The FDIRC prototype was tested in a large cosmic ray telescope (CRT) providing 3D muon tracks with ~1.5 mrad angular resolution and muon energy of Emuon greater than 1.6 GeV. In this study we provide a detailed analysis of the tails in the Cherenkov angle distribution as a function of various variables, compare experimental results with simulation, and identify the major contributions to the tails. We demonstrate that to see the full impact of these tails on the Cherenkov angle resolution, it is crucial to use 3D tracks, and have a full understanding of the role of ambiguities. These issues could not be fully explored in previous FDIRC studies where the beam was perpendicular to the quartz radiator bars. This work is relevant for the final FDIRC prototype of the PID detector at SuperB, which will be tested this year in the CRT setup.

  15. A smart homecage system with 3D tracking for long-term behavioral experiments.

    PubMed

    Byunghun Lee; Kiani, Mehdi; Ghovanloo, Maysam

    2014-01-01

    A wirelessly-powered homecage system, called the EnerCage-HC, that is equipped with multi-coil wireless power transfer, closed-loop power control, optical behavioral tracking, and a graphic user interface (GUI) is presented for long-term electrophysiology experiments. The EnerCage-HC system can wirelessly power a mobile unit attached to a small animal subject and also track its behavior in real-time as it is housed inside a standard homecage. The EnerCage-HC system is equipped with one central and four overlapping slanted wire-wound coils (WWCs) with optimal geometries to form 3-and 4-coil power transmission links while operating at 13.56 MHz. Utilizing multi-coil links increases the power transfer efficiency (PTE) compared to conventional 2-coil links and also reduces the number of power amplifiers (PAs) to only one, which significantly reduces the system complexity, cost, and dissipated heat. A Microsoft Kinect installed 90 cm above the homecage localizes the animal position and orientation with 1.6 cm accuracy. An in vivo experiment was conducted on a freely behaving rat by continuously delivering 24 mW to the mobile unit for > 7 hours inside a standard homecage. PMID:25570379

  16. A Smart Homecage System with 3D Tracking for Long-Term Behavioral Experiments

    PubMed Central

    Lee, Byunghun; Kiani, Mehdi; Ghovanloo, Maysam

    2015-01-01

    A wirelessly-powered homecage system, called the EnerCage-HC, that is equipped with multi-coil wireless power transfer, closed-loop power control, optical behavioral tracking, and a graphic user interface (GUI) is presented for long-term electrophysiology experiments. The EnerCage-HC system can wirelessly power a mobile unit attached to a small animal subject and also track its behavior in real-time as it is housed inside a standard homecage. The EnerCage-HC system is equipped with one central and four overlapping slanted wire-wound coils (WWCs) with optimal geometries to form 3- and 4-coil power transmission links while operating at 13.56 MHz. Utilizing multi-coil links increases the power transfer efficiency (PTE) compared to conventional 2-coil links and also reduces the number of power amplifiers (PAs) to only one, which significantly reduces the system complexity, cost, and dissipated heat. A Microsoft Kinect installed 90 cm above the homecage localizes the animal position and orientation with 1.6 cm accuracy. An in vivo experiment was conducted on a freely behaving rat by continuously delivering 24 mW to the mobile unit for > 7 hours inside a standard homecage. PMID:25570379

  17. Multisensor 3D tracking for counter small unmanned air vehicles (CSUAV)

    NASA Astrophysics Data System (ADS)

    Vasquez, Juan R.; Tarplee, Kyle M.; Case, Ellen E.; Zelnio, Anne M.; Rigling, Brian D.

    2008-04-01

    A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. Large UAVs are typically state owned, whereas small UAVs (SUAVs) may be in the form of remote-controlled aircraft that are widely available. The potential threat of these SUAVs to both the military and the civilian populace has led to research efforts to counter these assets via track, ID, and attack. Difficulties arise from the small size and low radar cross section when attempting to detect and track these targets with a single sensor such as radar or video cameras. In addition, clutter objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an acoustic array, and future inclusion of a radar. A sensor resource management concept is presented along with preliminary results from three of the sensors.

  18. Tracking Motions Of Manually Controlled Welding Torches

    NASA Technical Reports Server (NTRS)

    Russell, Carolyn; Gangl, Ken

    1996-01-01

    Techniques for measuring the motions of manually controlled welding torches are undergoing development. Positions, orientations, and velocities are determined in real time during manual arc welding. This makes it possible to treat manual welding processes more systematically, so that manual welds become more predictable, especially in cases in which the mechanical strengths and other properties of the welded parts are highly sensitive to heat inputs and thus to the velocities and orientations of the welding torches.

  19. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances of up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. PMID:27590974

  20. Ultra-high-speed 3D astigmatic particle tracking velocimetry: application to particle-laden supersonic impinging jets

    NASA Astrophysics Data System (ADS)

    Buchmann, N. A.; Cierpka, C.; Kähler, C. J.; Soria, J.

    2014-11-01

    The paper demonstrates ultra-high-speed three-component, three-dimensional (3C3D) velocity measurements of micron-sized particles suspended in a supersonic impinging jet flow. Understanding the dynamics of individual particles in such flows is important for the design of particle impactors for drug delivery or cold gas dynamic spray processing. The underexpanded jet flow is produced via a converging nozzle, and micron-sized particles (d_p = 110 μm) are introduced into the gas flow. The supersonic jet impinges onto a flat surface, and the particle impact velocity and particle impact angle are studied for a range of flow conditions and impingement distances. The imaging system consists of an ultra-high-speed digital camera (Shimadzu HPV-1) capable of recording rates of up to 1 Mfps. Astigmatism particle tracking velocimetry (APTV) is used to measure the 3D particle position (Cierpka et al., Meas Sci Technol 21(045401):13, 2010) by coding the particle depth location in the 2D images by adding a cylindrical lens to the high-speed imaging system. Based on the reconstructed 3D particle positions, the particle trajectories are obtained via a higher-order tracking scheme that takes advantage of the high temporal resolution to increase robustness and accuracy of the measurement. It is shown that the particle velocity and impingement angle are affected by the gas flow in a manner depending on the nozzle pressure ratio and stand-off distance where higher pressure ratios and stand-off distances lead to higher impact velocities and larger impact angles.

  1. Readily Accessible Multiplane Microscopy: 3D Tracking the HIV-1 Genome in Living Cells.

    PubMed

    Itano, Michelle S; Bleck, Marina; Johnson, Daniel S; Simon, Sanford M

    2016-02-01

    Human immunodeficiency virus (HIV)-1 infection and the associated disease AIDS are a major cause of human death worldwide with no vaccine or cure available. The trafficking of HIV-1 RNAs from sites of synthesis in the nucleus, through the cytoplasm, to sites of assembly at the plasma membrane are critical steps in HIV-1 viral replication, but are not well characterized. Here we present a broadly accessible microscopy method that captures multiple focal planes simultaneously, which allows us to image the trafficking of HIV-1 genomic RNAs with high precision. This method utilizes a customization of a commercial multichannel emission splitter that enables high-resolution 3D imaging with single-macromolecule sensitivity. We show with high temporal and spatial resolution that HIV-1 genomic RNAs are most mobile in the cytosol, and undergo confined mobility at sites along the nuclear envelope and in the nucleus and nucleolus. These provide important insights regarding the mechanism by which the HIV-1 RNA genome is transported to the sites of assembly of nascent virions. PMID:26567131

  2. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    DOE PAGES Beta

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; Gable, Carl W.; Karra, Satish

    2015-09-16

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.

  3. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    SciTech Connect

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; Gable, Carl W.; Karra, Satish

    2015-09-16

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.
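
    A simplified Python sketch of the particle-tracking idea follows: velocities reconstructed at control-volume centroids are interpolated continuously and a particle is advected through the field with explicit Euler steps. A 2D synthetic rotational field stands in for the Darcy velocities of a real DFN flow solution.

      import numpy as np
      from scipy.interpolate import LinearNDInterpolator

      rng = np.random.default_rng(2)
      centroids = rng.uniform(-1, 1, size=(400, 2))                     # cell centroids
      velocities = np.column_stack([-centroids[:, 1], centroids[:, 0]]) # toy rotational field

      # continuous interpolation of the reconstructed cell-centre velocities
      interp = LinearNDInterpolator(centroids, velocities, fill_value=0.0)

      def advect(p0, dt=0.01, steps=500):
          """Explicit-Euler particle advection through the interpolated velocity field."""
          path = [np.asarray(p0, dtype=float)]
          for _ in range(steps):
              v = interp(path[-1][None, :])[0]
              path.append(path[-1] + dt * v)
          return np.array(path)

      trajectory = advect([0.5, 0.0])
      print("start:", trajectory[0], "end:", trajectory[-1])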

  4. Real-time estimation of FLE statistics for 3-D tracking with point-based registration.

    PubMed

    Wiles, Andrew D; Peters, Terry M

    2009-09-01

    Target registration error (TRE) has become a widely accepted error metric in point-based registration since the error metric was introduced in the 1990s. It is particularly prominent in image-guided surgery (IGS) applications where point-based registration is used in both image registration and optical tracking. In point-based registration, the TRE is a function of the fiducial marker geometry, location of the target and the fiducial localizer error (FLE). While the first two items are easily obtained, the FLE is usually estimated using an a priori technique and applied without any knowledge of real-time information. However, if the FLE can be estimated in real-time, particularly as it pertains to optical tracking, then the TRE can be estimated more robustly. In this paper, a method is presented where the FLE statistics are estimated from the latest measurement of the fiducial registration error (FRE) statistics. The solution is obtained by solving a linear system of equations of the form Ax=b for each marker at each time frame where x are the six independent FLE covariance parameters and b are the six independent estimated FRE covariance parameters. The A matrix is only a function of the tool geometry and hence the inverse of the matrix can be computed a priori and used at each instant in which the FLE estimation is required, hence minimizing the level of computation at each frame. When using a good estimate of the FRE statistics, Monte Carlo simulations demonstrate that the root mean square of the FLE can be computed within a range of 70-90 μm. Robust estimation of the TRE for an optically tracked tool, using a good estimate of the FLE, will provide two enhancements in IGS. First, better patient to image registration will be obtained by using the TRE of the optical tool as a weighting factor of point-based registration used to map the patient to image space. Second, the directionality of the TRE can be relayed back to the surgeon giving the surgeon the option
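
    The per-frame computation pattern described above can be sketched as follows (Python): for a fixed tool geometry the 6x6 matrix A linking the FLE covariance parameters x to the FRE covariance parameters b is constant, so its inverse is precomputed once and applied at every frame. The matrix, the covariance values and the ordering of the parameters below are made-up placeholders.

      import numpy as np

      rng = np.random.default_rng(3)
      A = np.eye(6) + 0.1 * rng.normal(size=(6, 6))   # stands in for the geometry matrix
      A_inv = np.linalg.inv(A)                        # precomputed once, offline

      def estimate_fle(fre_cov_params):
          """Per-frame update: x = A^{-1} b, with b the six estimated FRE covariance terms."""
          return A_inv @ np.asarray(fre_cov_params)

      b = np.array([0.04, 0.05, 0.06, 0.001, 0.002, 0.001])  # example FRE covariances (mm^2)
      x = estimate_fle(b)
      print("estimated FLE covariance parameters:", x)
      # assuming the first three entries are the diagonal variances
      print("FLE RMS estimate (mm): %.3f" % np.sqrt(max(x[:3].sum(), 0.0)))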

  5. Uncertainty preserving patch-based online modeling for 3D model acquisition and integration from passive motion imagery

    NASA Astrophysics Data System (ADS)

    Tang, Hao; Chang, Peng; Molina, Edgardo; Zhu, Zhigang

    2012-06-01

    In both military and civilian applications, abundant data from diverse sources captured on airborne platforms are often available for a region attracting interest. Since the data often includes motion imagery streams collected from multiple platforms flying at different altitudes, with sensors of different field of views (FOVs), resolutions, frame rates and spectral bands, it is imperative that a cohesive site model encompassing all the information can be quickly built and presented to the analysts. In this paper, we propose to develop an Uncertainty Preserving Patch-based Online Modeling System (UPPOMS) leading towards the automatic creation and updating of a cohesive, geo-registered, uncertainty-preserving, efficient 3D site terrain model from passive imagery with varying field-of-views and phenomenologies. The proposed UPPOMS has the following technical thrusts that differentiate our approach from others: (1) An uncertainty-preserved, patch-based 3D model is generated, which enables the integration of images captured with a mixture of NFOV and WFOV and/or visible and infrared motion imagery sensors. (2) Patch-based stereo matching and multi-view 3D integration are utilized, which are suitable for scenes with many low texture regions, particularly in mid-wave infrared images. (3) In contrast to the conventional volumetric algorithms, whose computational and storage costs grow exponentially with the amount of input data and the scale of the scene, the proposed UPPOMS system employs an online algorithmic pipeline, and scales well to large amount of input data. Experimental results and discussions of future work will be provided.

  6. 3D PET image reconstruction including both motion correction and registration directly into an MR or stereotaxic spatial atlas

    NASA Astrophysics Data System (ADS)

    Gravel, Paul; Verhaeghe, Jeroen; Reader, Andrew J.

    2013-01-01

    This work explores the feasibility and impact of including both the motion correction and the image registration transformation parameters from positron emission tomography (PET) image space to magnetic resonance (MR), or stereotaxic, image space within the system matrix of PET image reconstruction. This approach is motivated by the fields of neuroscience and psychiatry, where PET is used to investigate differences in activation patterns between different groups of participants, requiring all images to be registered to a common spatial atlas. Currently, image registration is performed after image reconstruction which introduces interpolation effects into the final image. Furthermore, motion correction (also requiring registration) introduces a further level of interpolation, and the overall result of these operations can lead to resolution degradation and possibly artifacts. It is important to note that performing such operations on a post-reconstruction basis means, strictly speaking, that the final images are not ones which maximize the desired objective function (e.g. maximum likelihood (ML), or maximum a posteriori reconstruction (MAP)). To correctly seek parameter estimates in the desired spatial atlas which are in accordance with the chosen reconstruction objective function, it is necessary to include the transformation parameters for both motion correction and registration within the system modeling stage of image reconstruction. Such an approach not only respects the statistically chosen objective function (e.g. ML or MAP), but furthermore should serve to reduce the interpolation effects. To evaluate the proposed method, this work investigates registration (including motion correction) using 2D and 3D simulations based on the high resolution research tomograph (HRRT) PET scanner geometry, with and without resolution modeling, using the ML expectation maximization (MLEM) reconstruction algorithm. The quality of reconstruction was assessed using bias
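
    A toy Python sketch of folding a registration/motion operator into the system matrix follows: with A = P T, where T maps the atlas space to PET space and P is the projector, a standard MLEM update estimates the image directly in atlas space. Dimensions and operators are tiny random placeholders, not an HRRT model.

      import numpy as np

      rng = np.random.default_rng(4)
      n_atlas, n_pet, n_bins = 30, 30, 60

      T = np.abs(rng.normal(size=(n_pet, n_atlas)))   # atlas -> PET space (motion/registration)
      P = np.abs(rng.normal(size=(n_bins, n_pet)))    # PET image -> projection data
      A = P @ T                                       # combined system matrix

      x_true = np.abs(rng.normal(size=n_atlas))
      y = rng.poisson(A @ x_true).astype(float)       # noisy measured projections

      x = np.ones(n_atlas)
      sens = A.sum(axis=0)                            # sensitivity image A^T 1
      for _ in range(100):                            # MLEM iterations in atlas space
          ratio = y / np.maximum(A @ x, 1e-12)
          x *= (A.T @ ratio) / np.maximum(sens, 1e-12)

      print("correlation with ground truth: %.3f" % np.corrcoef(x, x_true)[0, 1])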

  7. Modulated Magnetic Nanowires for Controlling Domain Wall Motion: Toward 3D Magnetic Memories.

    PubMed

    Ivanov, Yurii P; Chuvilin, Andrey; Lopatin, Sergei; Kosel, Jurgen

    2016-05-24

    Cylindrical magnetic nanowires are attractive materials for next generation data storage devices owing to the theoretically achievable high domain wall velocity and their efficient fabrication in highly dense arrays. In order to obtain control over domain wall motion, reliable and well-defined pinning sites are required. Here, we show that modulated nanowires consisting of alternating nickel and cobalt sections facilitate efficient domain wall pinning at the interfaces of those sections. By combining electron holography with micromagnetic simulations, the pinning effect can be explained by the interaction of the stray fields generated at the interface and the domain wall. Utilizing a modified differential phase contrast imaging, we visualized the pinned domain wall with a high resolution, revealing its three-dimensional vortex structure with the previously predicted Bloch point at its center. These findings suggest the potential of modulated nanowires for the development of high-density, three-dimensional data storage devices. PMID:27138460

  8. Quantification of Ground Motion Reductions by Fault Zone Plasticity with 3D Spontaneous Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Roten, D.; Olsen, K. B.; Cui, Y.; Day, S. M.

    2015-12-01

    We explore the effects of fault zone nonlinearity on peak ground velocities (PGVs) by simulating a suite of surface rupturing earthquakes in a visco-plastic medium. Our simulations, performed with the AWP-ODC 3D finite difference code, cover magnitudes from 6.5 to 8.0, with several realizations of the stochastic stress drop for a given magnitude. We test three different models of rock strength, with friction angles and cohesions based on criteria which are frequently applied to fractured rock masses in civil engineering and mining. We use a minimum shear-wave velocity of 500 m/s and a maximum frequency of 1 Hz. In rupture scenarios with average stress drop (~3.5 MPa), plastic yielding reduces near-fault PGVs by 15 to 30% in pre-fractured, low-strength rock, but less than 1% in massive, high quality rock. These reductions are almost insensitive to the scenario earthquake magnitude. In the case of high stress drop (~7 MPa), however, plasticity reduces near-fault PGVs by 38 to 45% in rocks of low strength and by 5 to 15% in rocks of high strength. Because plasticity reduces slip rates and static slip near the surface, these effects can partially be captured by defining a shallow velocity-strengthening layer. We also perform a dynamic nonlinear simulation of a high stress drop M 7.8 earthquake rupturing the southern San Andreas fault along 250 km from Indio to Lake Hughes. With respect to the viscoelastic solution (a), nonlinearity in the fault damage zone and in near-surface deposits would reduce long-period (> 1 s) peak ground velocities in the Los Angeles basin by 15-50% (b), depending on the strength of crustal rocks and shallow sediments. These simulation results suggest that nonlinear effects may be relevant even at long periods, especially for earthquakes with high stress drop.

  9. Motion tracking-enhanced MART for tomographic PIV

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Joost Batenburg, Kees; Scarano, Fulvio

    2010-03-01

    A novel technique to increase the accuracy of multiplicative algebraic reconstruction technique (MART) reconstruction from tomographic particle image velocimetry (PIV) recordings at higher seeding density than currently possible is presented. The motion tracking enhancement (MTE) method is based on the combined utilization of images from two or more exposures to enhance the reconstruction of individual intensity fields. The working principle is first introduced qualitatively, and the mathematical background is given that explains how the MART reconstruction can be improved on the basis of an improved first guess object obtained from the combination of non-simultaneous views reduced to the same time instant deforming the 3D objects by an estimate of the particle motion field. The performances of MTE are quantitatively evaluated by numerical simulation of the imaging, reconstruction and image correlation processes. The cases of two or more exposures obtained from time-resolved experiments are considered. The iterative application of MTE appears to significantly improve the reconstruction quality, first by decreasing the intensity of the ghost images and second, by increasing the intensity and the reconstruction precision for the actual particles. Based on computer simulations, the maximum imaged seeding density that can be dealt with is tripled with respect to the MART analysis applied to a single exposure. The analysis also illustrates that the maximum effect of the MTE method is comparable to that of doubling the number of cameras in the tomographic system. Experiments performed on a transitional jet at Re = 5000 apply the MTE method to double-frame recordings. The velocity measurement precision is increased for a system with fewer views (two or three cameras compared with four cameras). The ghost particles' intensity is also visibly reduced although to a lesser extent with respect to the computer simulations. The velocity and vorticity field obtained from a three
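
    For reference, the basic multiplicative MART update that MTE seeds with an improved first-guess object can be sketched as follows (Python, toy weighting matrix); this is only the core update, not the full MTE or Shake-The-Box pipeline.

      import numpy as np

      def mart(W, i, n_iter=20, mu=0.5, e0=None):
          """MART for W e = i: multiplicative per-equation updates of the voxel field e."""
          e = np.ones(W.shape[1]) if e0 is None else e0.copy()   # first-guess object
          for _ in range(n_iter):
              for row, meas in zip(W, i):
                  proj = row @ e
                  if proj <= 0:
                      continue
                  # correction weighted by each voxel's contribution to this pixel
                  e *= (meas / proj) ** (mu * row)
          return e

      rng = np.random.default_rng(5)
      W = np.abs(rng.normal(size=(40, 25)))      # toy weighting matrix (pixels x voxels)
      e_true = np.abs(rng.normal(size=25))
      i_meas = W @ e_true
      e_rec = mart(W, i_meas)
      print("relative error: %.3f" % (np.linalg.norm(e_rec - e_true) / np.linalg.norm(e_true)))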

  10. Crossed beam roof target for motion tracking

    NASA Technical Reports Server (NTRS)

    Olczak, Eugene (Inventor)

    2009-01-01

    A system for detecting motion between a first body and a second body includes first and second detector-emitter pairs, disposed on the first body, and configured to transmit and receive first and second optical beams, respectively. At least a first optical rotator is disposed on the second body and configured to receive and reflect at least one of the first and second optical beams. First and second detectors of the detector-emitter pairs are configured to detect the first and second optical beams, respectively. Each of the first and second detectors is configured to detect motion between the first and second bodies in multiple degrees of freedom (DOFs). The first optical rotator includes a V-notch oriented to form an apex of an isosceles triangle with respect to a base of the isosceles triangle formed by the first and second detector-emitter pairs. The V-notch is configured to receive the first optical beam and reflect the first optical beam to both the first and second detectors. The V-notch is also configured to receive the second optical beam and reflect the second optical beam to both the first and second detectors.

  11. Evaluation of suitability of a micro-processing unit of motion analysis for upper limb tracking.

    PubMed

    Barraza Madrigal, José Antonio; Cardiel, Eladio; Rogeli, Pablo; Leija Salas, Lorenzo; Muñoz Guerrero, Roberto

    2016-08-01

    The aim of this study is to assess the suitability of a micro-processing unit of motion analysis (MPUMA) for monitoring, reproducing, and tracking upper limb movements. The MPUMA is based on an inertial measurement unit, a 16-bit digital signal controller and a customized algorithm. To validate the performance of the system, simultaneous recordings of the angular trajectory were performed with a video-based motion analysis system. A test of the flexo-extension of the shoulder joint during active elevation of the upper limb over a complete range of 120º was carried out in 10 healthy volunteers. Additional tests were carried out to assess MPUMA performance during upper limb tracking: first, a 3D motion reconstruction of three movements of the shoulder joint (flexo-extension, abduction-adduction, horizontal internal-external rotation); and second, online upper limb tracking during the execution of three movements of the shoulder joint followed by a continuous random movement without any restrictions, using a virtual model and a mechatronic device of the shoulder joint. Experimental results demonstrated that the MPUMA measured joint angles that are close to those from a motion-capture system, with orientation RMS errors of less than 3º. PMID:27185034
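
    As background on inertial orientation tracking, the Python sketch below shows a generic complementary filter fusing gyroscope and accelerometer streams into pitch and roll. It is a textbook scheme shown for context, not the customized algorithm running on the MPUMA's digital signal controller.

      import numpy as np

      def complementary_filter(gyro, accel, dt=0.01, alpha=0.98):
          """gyro: (N,3) rad/s; accel: (N,3) m/s^2. Returns (N,2) pitch/roll in rad."""
          pitch, roll = 0.0, 0.0
          out = np.empty((len(gyro), 2))
          for k, (g, a) in enumerate(zip(gyro, accel)):
              # integrate angular rate (fast response, drifts over time)
              pitch += g[1] * dt
              roll += g[0] * dt
              # gravity direction from the accelerometer (noisy but drift-free)
              acc_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
              acc_roll = np.arctan2(a[1], a[2])
              pitch = alpha * pitch + (1 - alpha) * acc_pitch
              roll = alpha * roll + (1 - alpha) * acc_roll
              out[k] = pitch, roll
          return out

      # toy usage: a stationary sensor should settle near zero pitch/roll
      N = 500
      gyro = 0.01 * np.random.default_rng(6).normal(size=(N, 3))
      accel = np.tile([0.0, 0.0, 9.81], (N, 1))
      print("final pitch/roll (rad):", complementary_filter(gyro, accel)[-1])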

  12. Nonlinear, nonlaminar-3D computation of electron motion through the output cavity of a klystron

    NASA Technical Reports Server (NTRS)

    Albers, L. U.; Kosmahl, H. G.

    1971-01-01

    The equations of motion used in the computation are discussed along with the space charge fields and the integration process. The following assumptions were used as a basis for the computation: (1) The beam is divided into N axisymmetric discs of equal charge and each disc into R rings of equal charge. (2) The velocity of each disc, its phase with respect to the gap voltage, and its radius at a specified position in the drift tunnel prior to the interaction gap are known from available large-signal one-dimensional programs. (3) The fringing rf fields are computed from exact analytical expressions derived from the wave equation assuming a known field shape between the tunnel tips at a radius a. (4) The beam is focused by an axisymmetric magnetic field. Both components of B, that is B_z and B_r, are taken into account. (5) Since this integration does not start at the cathode but rather further downstream, prior to entering the output cavity, it is assumed that each electron moved along a laminar path from the cathode to the start of integration.

  13. 3D optical imagery for motion compensation in a limb ultrasound system

    NASA Astrophysics Data System (ADS)

    Ranger, Bryan J.; Feigin, Micha; Zhang, Xiang; Mireault, Al; Raskar, Ramesh; Herr, Hugh M.; Anthony, Brian W.

    2016-04-01

    Conventional processes for prosthetic socket fabrication are heavily subjective, often resulting in an interface to the human body that is neither comfortable nor completely functional. With nearly 100% of amputees reporting that they experience discomfort with the wearing of their prosthetic limb, designing an effective interface to the body can significantly affect quality of life and future health outcomes. Active research in medical imaging and biomechanical tissue modeling of residual limbs has led to significant advances in computer aided prosthetic socket design, demonstrating an interest in moving toward more quantifiable processes that are still patient-specific. In our work, medical ultrasonography is being pursued to acquire data that may quantify and improve the design process and fabrication of prosthetic sockets while greatly reducing cost compared to an MRI-based framework. This paper presents a prototype limb imaging system that uses a medical ultrasound probe, mounted to a mechanical positioning system and submerged in a water bath. The limb imaging is combined with three-dimensional optical imaging for motion compensation. Images are collected circumferentially around the limb and combined into cross-sectional axial image slices, resulting in a compound image that shows tissue distributions and anatomical boundaries similar to magnetic resonance imaging. In this paper we provide a progress update on our system development, along with preliminary results as we move toward full volumetric imaging of residual limbs for prosthetic socket design. This demonstrates a novel multi-modal approach to residual limb imaging.

  14. Aging affects postural tracking of complex visual motion cues.

    PubMed

    Sotirakis, H; Kyvelidou, A; Mademli, L; Stergiou, N; Hatzitaki, V

    2016-09-01

    Postural tracking of visual motion cues improves perception-action coupling in aging, yet the nature of the visual cues to be tracked is critical for the efficacy of such a paradigm. We investigated how well healthy older (72.45 ± 4.72 years) and young (22.98 ± 2.9 years) adults can follow with their gaze and posture horizontally moving visual target cues of different degree of complexity. Participants tracked continuously for 120 s the motion of a visual target (dot) that oscillated in three different patterns: a simple periodic (simulated by a sine), a more complex (simulated by the Lorenz attractor that is deterministic displaying mathematical chaos) and an ultra-complex random (simulated by surrogating the Lorenz attractor) pattern. The degree of coupling between performance (posture and gaze) and the target motion was quantified in the spectral coherence, gain, phase and cross-approximate entropy (cross-ApEn) between signals. Sway-target coherence decreased as a function of target complexity and was lower for the older compared to the young participants when tracking the chaotic target. On the other hand, gaze-target coherence was not affected by either target complexity or age. Yet, a lower cross-ApEn value when tracking the chaotic stimulus motion revealed a more synchronous gaze-target relationship for both age groups. Results suggest limitations in online visuo-motor processing of complex motion cues and a less efficient exploitation of the body sway dynamics with age. Complex visual motion cues may provide a suitable training stimulus to improve visuo-motor integration and restore sway variability in older adults. PMID:27126061
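
    The coupling measure used here (magnitude-squared coherence between the target trajectory and the tracking response) can be computed as in the Python sketch below; the signals, sampling rate and noise level are synthetic assumptions, not the study's recordings.

      import numpy as np
      from scipy import signal

      fs = 100.0                                    # sampling rate in Hz (assumed)
      t = np.arange(0, 120, 1 / fs)                 # 120 s trial, as in the study
      target = np.sin(2 * np.pi * 0.25 * t)         # simple periodic target motion
      rng = np.random.default_rng(7)
      sway = 0.8 * np.sin(2 * np.pi * 0.25 * t - 0.4) + 0.3 * rng.normal(size=t.size)

      # magnitude-squared coherence between target and sway (or gaze) signals
      f, coh = signal.coherence(target, sway, fs=fs, nperseg=1024)
      band = (f > 0.15) & (f < 0.35)
      print("mean coherence around the target frequency: %.2f" % coh[band].mean())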

  15. Object tracking by combining detection, motion estimation, and verification

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver

    2010-01-01

    Object detection and tracking play an increasing role in modern surveillance systems. Vision research is still confronted with many challenges when it comes to robust tracking in realistic imaging scenarios. We describe a tracking framework which is aimed at the detection and tracking of objects in real-world situations (e.g. from surveillance cameras) and in real time. Although the current system is used for pedestrian tracking only, it can easily be adapted to other detector types and object classes. The proposed tracker combines i) a simple background model to speed up all following computations, ii) a fast object detector realized with a cascaded HOG detector, iii) motion estimation with a KLT tracker, iv) object verification based on texture/color analysis by means of DCT coefficients, and v) dynamic trajectory and object management. The tracker has been successfully applied in indoor and outdoor scenarios at a public transportation hub in the City of Graz, Austria.
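
    A minimal sketch of the detector-plus-KLT portion of such a pipeline, using OpenCV's stock HOG people detector and pyramidal Lucas-Kanade flow as stand-ins for the cascaded HOG detector and KLT tracker named above; the video file name and parameters are assumptions, and the background model, DCT-based verification and trajectory management stages are omitted.

        import cv2
        import numpy as np

        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())  # stand-in pedestrian detector

        cap = cv2.VideoCapture("surveillance.avi")                        # assumed input video
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        boxes, _ = hog.detectMultiScale(prev, winStride=(8, 8))

        # Seed KLT points at the centers of the detected pedestrians
        pts = np.array([[x + w / 2, y + h / 2] for x, y, w, h in boxes],
                       dtype=np.float32).reshape(-1, 1, 2)

        while pts.size:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Pyramidal Lucas-Kanade motion estimation between consecutive frames
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)
            prev_gray = gray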

  16. Motion-based prediction explains the role of tracking in motion extrapolation.

    PubMed

    Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U

    2013-11-01

    During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we have recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior where the dot is tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, where motion coherency and predictability fail to hold longer. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated
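
    The probabilistic model above is not reproduced here, but the basic idea of motion-based extrapolation through a blank can be illustrated with a toy predictor-corrector: the position estimate is advanced with the current velocity estimate when no measurement is available, and re-anchored to measurements once the dot reappears. All numbers below are illustrative assumptions.

        import numpy as np

        dt = 0.01
        true_pos = np.cumsum(np.full(300, 1.0 * dt))     # dot moving at 1 unit/s
        visible = np.ones(300, dtype=bool)
        visible[100:150] = False                         # a 0.5 s blank in the input

        pos_est, vel_est = 0.0, 0.0
        gain_p, gain_v = 0.3, 0.1                        # fixed correction gains (illustrative)
        track = []
        for k in range(300):
            # Predict: extrapolate with the current velocity estimate (motion-based prediction)
            pos_est += vel_est * dt
            if visible[k]:
                # Correct: pull the estimate toward the measurement when the dot is visible
                err = true_pos[k] - pos_est
                pos_est += gain_p * err
                vel_est += gain_v * err / dt
            track.append(pos_est)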

  17. Quaternion correlation for tracking crystal motions

    NASA Astrophysics Data System (ADS)

    Shi, Qiwei; Latourte, Félix; Hild, François; Roux, Stéphane

    2016-09-01

    During in situ mechanical tests performed on polycrystalline materials in a scanning electron microscope, crystal orientation maps may be recorded at different stages of deformation from electron backscattered diffraction (EBSD). The present study introduces a novel correlation technique that exploits the crystallographic orientation field as a surface pattern to measure crystal motions. A quaternion-based formalism is introduced that makes crystal symmetry very convenient to handle and simplifies orientation extraction. Spatial regularization is provided by penalizing deviations of the displacement field from the solution of a homogeneous linear elastic problem. This procedure allows the large-scale features of the displacement field, mostly from grain boundaries, to be captured, and a fair interpolation of the displacement to be obtained within the grains. From these data, crystal rotations can be estimated very accurately. Both synthetic and real experimental cases are considered to illustrate the method.

  18. Effectiveness of an Automatic Tracking Software in Underwater Motion Analysis

    PubMed Central

    Magalhaes, Fabrício A.; Sawacha, Zimi; Di Michele, Rocco; Cortesi, Matteo; Gatta, Giorgio; Fantozzi, Silvia

    2013-01-01

    Tracking of markers placed on anatomical landmarks is a common practice in sports science to perform the kinematic analysis that interests both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of a software developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 markers’ positions) were manually tracked to determine the markers’ center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and to correct the position of the cursor when the distance between the calculated marker’s coordinate and the reference one was higher than 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, manual interventions were 10.4% lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6% to 29.3% lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9% less manual interventions for DVP than for COM. In conclusion, based on these results, the automatic tracking software presented here can be used as a valid and useful tool for underwater motion analysis. Key Points: The availability of effective software for automatic tracking would represent a significant advance for the practical use of kinematic analysis in swimming and other aquatic sports. An important feature of automatic tracking software is to require limited human
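
    A minimal sketch of the intervention criterion described above: an automatically tracked marker position is flagged for manual correction whenever it lies more than 4 pixels from the reference (manually digitized) coordinate. The arrays here are randomly generated placeholders, not data from the study.

        import numpy as np

        # Hypothetical (N, 2) arrays of marker centre coordinates, in pixels
        reference = np.random.rand(2940, 2) * 100          # manually tracked reference positions
        automatic = reference + np.random.randn(2940, 2)   # automatic tracking result

        dist = np.linalg.norm(automatic - reference, axis=1)
        needs_intervention = dist > 4.0                     # 4-pixel tolerance used in the study
        print(f"manual interventions: {100 * needs_intervention.mean():.1f}%")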

  19. Study of human body: Kinematics and kinetics of a martial arts (Silat) performers using 3D-motion capture

    NASA Astrophysics Data System (ADS)

    Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi

    2015-04-01

    Interest in the study of human kinematics goes back very far in human history, driven by curiosity and by the need to understand the complexity of human body motion; advances in computing technology now make it possible to obtain new and accurate information about human movement. Martial arts (silat) were chosen and multiple types of movement were studied. This project used cutting-edge technology, namely 3D motion capture, to characterize and measure the motions performed by martial arts (silat) practitioners. The cameras detect markers (via infrared reflection from each marker) placed around the performer's body (24 markers in total), which appear as dots in the computer software. The detected markers were analyzed using a kinematic-kinetic approach with time as the reference, and graphs of velocity, acceleration and position at time t (seconds) were plotted for each marker. From the information obtained, further parameters were determined, such as work done, momentum and the center of mass of the body, using a mathematical approach. These data can be used to develop more effective movements in the martial art, contributing to its practitioners. More future work can build on this project, such as the analysis of a martial arts competition.
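
    A small sketch of the kinematic post-processing described above: velocity and acceleration of a single marker obtained by finite-difference differentiation of its position-time trace, followed by momentum and a work estimate. The capture rate, segment mass and trajectory are assumptions for illustration only.

        import numpy as np

        fs = 200.0                                               # assumed capture rate (Hz)
        t = np.arange(0, 2, 1 / fs)
        pos = np.column_stack([np.sin(t), np.cos(t), 0.5 * t])   # one marker's 3D trajectory (m)

        vel = np.gradient(pos, 1 / fs, axis=0)                   # velocity (m/s)
        acc = np.gradient(vel, 1 / fs, axis=0)                   # acceleration (m/s^2)

        m = 0.5                                                  # assumed effective segment mass (kg)
        momentum = m * vel
        kinetic_energy = 0.5 * m * np.sum(vel ** 2, axis=1)
        work_done = kinetic_energy[-1] - kinetic_energy[0]       # work-energy theorem estimate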

  20. Effects of simple and complex motion patterns on gene expression of chondrocytes seeded in 3D scaffolds.

    PubMed

    Grad, Sibylle; Gogolewski, Sylwester; Alini, Mauro; Wimmer, Markus A

    2006-11-01

    This study investigated the effect of unidirectional and multidirectional motion patterns on gene expression and molecule release of chondrocyte-seeded 3D scaffolds. Resorbable porous polyurethane scaffolds were seeded with bovine articular chondrocytes and exposed to dynamic compression, applied with a ceramic hip ball, alone (group 1), with superimposed rotation of the scaffold around its cylindrical axis (group 2), oscillation of the ball over the scaffold surface (group 3), or oscillation of ball and scaffold in phase difference (group 4). Compared with group 1, the proteoglycan 4 (PRG4) and cartilage oligomeric matrix protein (COMP) mRNA expression levels were markedly increased by ball oscillation (groups 3 and 4). Furthermore, the collagen type II mRNA expression was enhanced in groups 3 and 4, while the aggrecan and tissue inhibitor of metalloproteinase-3 (TIMP-3) mRNA expression levels were upregulated by multidirectional articular motion (group 4). Ball oscillation (groups 3 and 4) also increased the release of PRG4, COMP, and hyaluronan (HA) into the culture media. This indicates that the applied stimuli can contribute to the maintenance of the chondrocytic phenotype of the cells. The mechanical effects causing cell stimulation by applied surface motion might be related to fluid film buildup and/or frictional shear at the scaffold-ball interface. It is suggested that the oscillating ball drags the fluid into the joint space, thereby causing biophysical effects similar to those of fluid flow. PMID:17518631

  1. MRI - 3D Ultrasound - X-ray Image Fusion with Electromagnetic Tracking for Transendocardial Therapeutic Injections: In-vitro Validation and In-vivo Feasibility

    PubMed Central

    Hatt, Charles R.; Jain, Ameet K.; Parthasarathy, Vijay; Lang, Andrew; Raval, Amish N.

    2014-01-01

    Myocardial infarction (MI) is one of the leading causes of death in the world. Small animal studies have shown that stem-cell therapy offers dramatic functional improvement post-MI. An endomyocardial catheter injection approach to therapeutic agent delivery has been proposed to improve efficacy through increased cell retention. Accurate targeting is critical for reaching areas of greatest therapeutic potential while avoiding a life-threatening myocardial perforation. Multimodal image fusion has been proposed as a way to improve these procedures by augmenting traditional intra-operative imaging modalities with high resolution pre-procedural images. Previous approaches have suffered from a lack of real-time tissue imaging and dependence on X-ray imaging to track devices, leading to increased ionizing radiation dose. In this paper, we present a new image fusion system for catheter-based targeted delivery of therapeutic agents. The system registers real-time 3D echocardiography, magnetic resonance, X-ray, and electromagnetic sensor tracking within a single flexible framework. All system calibrations and registrations were validated and found to have target registration errors less than 5 mm in the worst case. Injection accuracy was validated in a motion enabled cardiac injection phantom, where targeting accuracy ranged from 0.57 to 3.81 mm. Clinical feasibility was demonstrated with in-vivo swine experiments, where injections were successfully made into targeted regions of the heart. PMID:23561056

  2. Polar motion from laser tracking of artificial satellites.

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Kolenkiewicz, R.; Plotkin, H. H.; Johnson, T. S.; Dunn, P. J.

    1972-01-01

    Measurements of the range to the Beacon Explorer C spacecraft from a single laser tracking system at Goddard Space Flight Center have been used to determine the change in latitude of the station arising from polar motion. A precision of 0.03 arc second was obtained for the latitude during a 5-month period in 1970.

  3. Polar motion from laser tracking of artificial satellites

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Kolenkiewicz, R.; Dunn, P. J.; Plotkin, H. H.; Johnson, T. S.

    1972-01-01

    Laser ranges to the Beacon Explorer C spacecraft from a single Goddard Space Flight Center tracking system were used to determine the change in latitude of the station arising from polar motion. A precision of 0.03 arcsecs rms was obtained for the latitude during a five-month period in 1970.

  4. Illusory motion reversals and feature tracking analyses of movement.

    PubMed

    Arnold, Derek H; Pearce, Samuel L; Marinovic, Welber

    2014-06-01

    Illusory motion reversals (IMRs) can happen when looking at a repetitive pattern of motion, such as a spinning wheel. To date these have been attributed to either a form of motion aftereffect seen while viewing a moving stimulus or to the visual system taking discrete perceptual snapshots of continuous input. Here we present evidence that we argue is inconsistent with both proposals. First, we show that IMRs are driven by the adaptation of nondirectional temporal frequency tuned cells, which is inconsistent with the motion aftereffect account. Then we establish that the optimal frequency for inducing IMRs differs for color and luminance defined movement. These data are problematic for any account based on a constant rate of discrete perceptual sampling. Instead, we suggest IMRs result from a perceptual rivalry involving discrepant signals from a feature tracking analysis of movement and motion-energy based analyses. We do not assume that feature tracking relies on a discrete sampling of input at a fixed rate, but rather that feature tracking can (mis)match features at any rate less than a stimulus driven maximal resolution. Consistent with this proposal, we show that the critical frequency for inducing IMRs is dictated by the duty cycle of salient features within a moving pattern, rather than by the temporal frequency of luminance changes. PMID:24635200

  5. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    NASA Astrophysics Data System (ADS)

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
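
    The minimax-time, multistage and spherical-earth aspects above go beyond a plain shortest-path solver, but the underlying graph machinery is the familiar one. The sketch below computes first-arrival traveltimes on a small 2-D Cartesian grid with Dijkstra's algorithm over an assumed constant slowness field; it is only the basic building block, not the method of the paper.

        import heapq
        import numpy as np

        def first_arrivals(slowness, src, h=1.0):
            """Dijkstra first-arrival traveltimes on a 2-D grid (8-connected stencil)."""
            ny, nx = slowness.shape
            tt = np.full((ny, nx), np.inf)
            tt[src] = 0.0
            heap = [(0.0, src)]
            while heap:
                t, (j, i) = heapq.heappop(heap)
                if t > tt[j, i]:
                    continue
                for dj in (-1, 0, 1):
                    for di in (-1, 0, 1):
                        nj, ni = j + dj, i + di
                        if (dj or di) and 0 <= nj < ny and 0 <= ni < nx:
                            # Segment traveltime = length * average slowness of the two nodes
                            seg = h * np.hypot(dj, di) * 0.5 * (slowness[j, i] + slowness[nj, ni])
                            if t + seg < tt[nj, ni]:
                                tt[nj, ni] = t + seg
                                heapq.heappush(heap, (t + seg, (nj, ni)))
            return tt

        times = first_arrivals(np.full((50, 50), 1 / 3.0), src=(0, 0))   # uniform 3 km/s medium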

  6. 3-D ground motion modeling for M7 dynamic rupture earthquake scenarios on the Wasatch fault, Utah

    NASA Astrophysics Data System (ADS)

    Roten, D.; Olsen, K. B.; Cruz Atienza, V. M.; Pechmann, J. C.; Magistrale, H. W.

    2009-12-01

    The Salt Lake City segment of the Wasatch fault (WFSLC), located on the eastern edge of the Salt Lake Basin (SLB), is capable of producing M7 earthquakes and represents a serious seismic hazard to Salt Lake City, Utah. We simulate a series of rupture scenarios on the WFSLC to quantify the ground motion expected from such M7 events and to assess the importance of amplification effects from basin focusing and source directivity. We use the newly revised Wasatch Front community velocity model for our simulations, which is tested by simulating records of three local Mw 3.3-3.7 earthquakes in the frequency band 0.5 to 1.0 Hz. The M7 earthquake scenarios make use of a detailed 3-D model geometry of the WFSLC that we developed based on geological observations. To obtain a suite of realistic source representations for M7 WFSLC simulations we perform spontaneous-rupture simulations on a planar 43 km by 23 km fault with the staggered-grid split-node finite-difference (FD) method. We estimate the initial distribution of shear stress using models that assume depth-dependent normal stress for a dipping, normal fault as well as simpler models which use constant (depth-independent) normal stress. The slip rate histories from the spontaneous rupture scenarios are projected onto the irregular dipping geometry of the WFSLC and used to simulate 0-1 Hz wave propagation in the SLB area using a 4th-order, staggered-grid visco-elastic FD method. We find that peak ground velocities tend to be larger on the low-velocity sediments on the hanging wall side of the fault than on outcropping rock on the footwall side, confirming results of previous studies on normal faulting earthquakes. The simulated ground motions reveal strong along-strike directivity effects for ruptures nucleating towards the ends of the WFSLC. The 0-1 Hz FD simulations are combined with local scattering operators to obtain broadband (0-10 Hz) synthetics and maps of average peak ground motions. Finally we use broadband

  7. How Plates Pull Transforms Apart: 3-D Numerical Models of Oceanic Transform Fault Response to Changes in Plate Motion Direction

    NASA Astrophysics Data System (ADS)

    Morrow, T. A.; Mittelstaedt, E. L.; Olive, J. A. L.

    2015-12-01

    Observations along oceanic fracture zones suggest that some mid-ocean ridge transform faults (TFs) previously split into multiple strike-slip segments separated by short (<~50 km) intra-transform spreading centers and then reunited to a single TF trace. This history of segmentation appears to correspond with changes in plate motion direction. Despite the clear evidence of TF segmentation, the processes governing its development and evolution are not well characterized. Here we use a 3-D, finite-difference / marker-in-cell technique to model the evolution of localized strain at a TF subjected to a sudden change in plate motion direction. We simulate the oceanic lithosphere and underlying asthenosphere at a ridge-transform-ridge setting using a visco-elastic-plastic rheology with a history-dependent plastic weakening law and a temperature- and stress-dependent mantle viscosity. To simulate the development of topography, a low density, low viscosity 'sticky air' layer is present above the oceanic lithosphere. The initial thermal gradient follows a half-space cooling solution with an offset across the TF. We impose an enhanced thermal diffusivity in the uppermost 6 km of lithosphere to simulate the effects of hydrothermal circulation. An initial weak seed in the lithosphere helps localize shear deformation between the two offset ridge axes to form a TF. For each model case, the simulation is run initially with TF-parallel plate motion until the thermal structure reaches a steady state. The direction of plate motion is then rotated either instantaneously or over a specified time period, placing the TF in a state of trans-tension. Model runs continue until the system reaches a new steady state. Parameters varied here include: initial TF length, spreading rate, and the rotation rate and magnitude of spreading obliquity. We compare our model predictions to structural observations at existing TFs and records of TF segmentation preserved in oceanic fracture zones.

  8. Improved motion information-based infrared dim target tracking algorithms

    NASA Astrophysics Data System (ADS)

    Lei, Liu; Zhijian, Huang

    2014-11-01

    Accurate and fast tracking of infrared (IR) dim targets has very important meaning for infrared precise guidance, early warning, video surveillance, etc. However, under complex backgrounds, such as clutter, varying illumination, and occlusion, traditional tracking methods often converge to a local maximum and lose the real infrared target. To cope with these problems, three improved tracking algorithms based on motion information are proposed in this paper, namely an improved mean shift algorithm, an improved optical flow method and an improved particle filter method. The basic principles and the implementation procedures of these modified algorithms for target tracking are described. Using these algorithms, experiments on some real-life IR and color images are performed. The whole implementation process and results are analyzed, and the algorithms are evaluated for target tracking both subjectively and objectively. The results show that the proposed methods have satisfactory tracking effectiveness and robustness. Meanwhile, they have high tracking efficiency and can be used for real-time tracking.

  9. 3-D or median map? Earthquake scenario ground-motion maps from physics-based models versus maps from ground-motion prediction equations

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2015-12-01

    There are two common ways to create a ground-motion map for a hypothetical earthquake: using ground motion prediction equations (by far the more common of the two) and using 3-D physics-based modeling. The former is very familiar to engineers, the latter much less so, and the difference can present a problem because engineers tend to trust the familiar and distrust novelty. Maps for essentially the same hypothetical earthquake using the two different methods can look very different, while appearing to present the same information. Using one or the other can lead an engineer or disaster planner to very different estimates of damage and risk. The reasons have to do with depiction of variability, spatial correlation of shaking, the skewed distribution of real-world shaking, and the upward-curving relationship between shaking and damage. The scientists who develop the two kinds of map tend to specialize in one or the other and seem to defend their turf, which can aggravate the problem of clearly communicating with engineers. The USGS Science Application for Risk Reduction's (SAFRR) HayWired scenario has addressed the challenge of explaining to engineers the differences between the two maps, and why, in a disaster planning scenario, one might want to use the less-familiar 3-D map.

  10. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging.

    PubMed

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, namely, fluid dynamics in microfluidic devices, bacteria taxis, cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% and only 1% false-detection. PMID:26329642
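
    For orientation, the generic circle Hough transform that this algorithm descends from is available in OpenCV; the sketch below only shows that baseline detector on a defocus-ring image, with an assumed file name and hand-picked parameters, and does not reproduce the robustness or real-time properties claimed above.

        import cv2
        import numpy as np

        img = cv2.imread("defocus_rings.png", cv2.IMREAD_GRAYSCALE)   # assumed input image
        img = cv2.medianBlur(img, 5)

        # Generic circle Hough transform; the paper replaces this step with a
        # cluster- and noise-aware detector suited to overlapping rings.
        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                                   param1=100, param2=30, minRadius=5, maxRadius=60)
        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                cv2.circle(img, (int(x), int(y)), int(r), 255, 1)     # ring centre (x, y), radius r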

  11. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, namely, fluid dynamics in microfluidic devices, bacteria taxis, cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% and only 1% false-detection. PMID:26329642

  12. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, namely, fluid dynamics in microfluidic devices, bacteria taxis, cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% and only 1% false-detection.

  13. A quantitative study of 3D-scanning frequency and Δd of tracking points on the tooth surface

    PubMed Central

    Li, Hong; Lyu, Peijun; Sun, Yuchun; Wang, Yong; Liang, Xiaoyue

    2015-01-01

    Micro-movement of human jaws in the resting state might influence the accuracy of direct three-dimensional (3D) measurement. Providing a reference for sampling frequency settings of intraoral scanning systems to overcome this influence is important. In this study, we measured micro-movement, or change in distance (∆d), as the change in position of a single tracking point from one sampling time point to another in five human subjects. ∆d of tracking points on incisors at 7 sampling frequencies was judged against the clinical accuracy requirement to select proper sampling frequency settings. A curve was then fitted between the median ∆d and the sampling frequency to predict the trend of ∆d with increasing frequency. The difference of ∆d among the subjects and the difference between upper and lower incisor feature points of the same subject were analyzed by a non-parametric test (α = 0.05). Significant differences of incisor feature points were noted among different subjects and between upper and lower jaws of the same subject (P < 0.01). Overall, ∆d decreased with increasing frequency. When the frequency was 60 Hz, ∆d nearly reached the clinical accuracy requirement. Frequencies higher than 60 Hz did not significantly decrease Δd further. PMID:26400112

  14. A kinematic model for Bayesian tracking of cyclic human motion

    NASA Astrophysics Data System (ADS)

    Greif, Thomas; Lienhart, Rainer

    2010-01-01

    We introduce a two-dimensional kinematic model for cyclic motions of humans, which is suitable for the use as temporal prior in any Bayesian tracking framework. This human motion model is solely based on simple kinematic properties: the joint accelerations. Distributions of joint accelerations subject to the cycle progress are learned from training data. We present results obtained by applying the introduced model to the cyclic motion of backstroke swimming in a Kalman filter framework that represents the posterior distribution by a Gaussian. We experimentally evaluate the sensitivity of the motion model with respect to the frequency and noise level of assumed appearance-based pose measurements by simulating various fidelities of the pose measurements using ground truth data.
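
    A minimal constant-acceleration Kalman filter sketch in the spirit of the framework above, for a single scalar joint angle: in the paper the learned joint-acceleration distributions would inform the process model, whereas here fixed noise levels and a simulated cyclic measurement stand in for the learned prior and the appearance-based pose estimates.

        import numpy as np

        dt = 1 / 30                                    # assumed frame rate
        F = np.array([[1, dt, 0.5 * dt ** 2],
                      [0, 1, dt],
                      [0, 0, 1]])                      # state: [angle, angular velocity, acceleration]
        H = np.array([[1.0, 0.0, 0.0]])                # only the angle is measured (pose estimate)
        Q = np.diag([1e-6, 1e-4, 1e-2])                # process noise; the acceleration prior is learned in the paper
        R = np.array([[0.05 ** 2]])                    # assumed measurement noise

        x, P = np.zeros(3), np.eye(3)
        for k in range(200):
            z = np.sin(2 * np.pi * k * dt) + 0.05 * np.random.randn()   # simulated cyclic pose measurement
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(3) - K @ H) @ P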

  15. Motion tracking in infrared imaging for quantitative medical diagnostic applications

    PubMed Central

    Cheng, Tze-Yuan; Herman, Cila

    2014-01-01

    In medical applications, infrared (IR) thermography is used to detect and examine the thermal signature of skin abnormalities by quantitatively analyzing skin temperature in steady state conditions or its evolution over time, captured in an image sequence. However, during the image acquisition period, the involuntary movements of the patient are unavoidable, and such movements will undermine the accuracy of temperature measurement for any particular location on the skin. In this study, a tracking approach using a template-based algorithm is proposed to follow the involuntary motion of the subject in the IR image sequence. The motion tracking allows a temperature evolution to be associated with each spatial location on the body as the body moves relative to the image frame. The affine transformation model is adopted to estimate the motion parameters of the template image. The Lucas–Kanade algorithm is applied to search for the optimized parameters of the affine transformation. A weighting mask is incorporated into the algorithm to ensure its tracking robustness. To evaluate the feasibility of the tracking approach, two sets of IR image sequences with random in-plane motion were tested in our experiments. A steady-state (no heating or cooling) IR image sequence in which the skin temperature is in equilibrium with the environment was considered first. The thermal recovery IR image sequence, acquired when the skin is recovering from 60-s cooling, was the second case analyzed. By proper selection of the template image along with template update, satisfactory tracking results were obtained for both IR image sequences. The achieved tracking accuracies are promising in terms of satisfying the demands imposed by clinical applications of IR thermography. PMID:24587692

  16. Motion tracking in infrared imaging for quantitative medical diagnostic applications

    NASA Astrophysics Data System (ADS)

    Cheng, Tze-Yuan; Herman, Cila

    2014-01-01

    In medical applications, infrared (IR) thermography is used to detect and examine the thermal signature of skin abnormalities by quantitatively analyzing skin temperature in steady state conditions or its evolution over time, captured in an image sequence. However, during the image acquisition period, the involuntary movements of the patient are unavoidable, and such movements will undermine the accuracy of temperature measurement for any particular location on the skin. In this study, a tracking approach using a template-based algorithm is proposed to follow the involuntary motion of the subject in the IR image sequence. The motion tracking allows a temperature evolution to be associated with each spatial location on the body as the body moves relative to the image frame. The affine transformation model is adopted to estimate the motion parameters of the template image. The Lucas-Kanade algorithm is applied to search for the optimized parameters of the affine transformation. A weighting mask is incorporated into the algorithm to ensure its tracking robustness. To evaluate the feasibility of the tracking approach, two sets of IR image sequences with random in-plane motion were tested in our experiments. A steady-state (no heating or cooling) IR image sequence in which the skin temperature is in equilibrium with the environment was considered first. The thermal recovery IR image sequence, acquired when the skin is recovering from 60-s cooling, was the second case analyzed. By proper selection of the template image along with template update, satisfactory tracking results were obtained for both IR image sequences. The achieved tracking accuracies are promising in terms of satisfying the demands imposed by clinical applications of IR thermography.

  17. Object motion tracking in the NDE laboratory by random sample iterative closest point

    NASA Astrophysics Data System (ADS)

    Radkowski, Rafael; Wehr, David; Gregory, Elizabeth; Holland, Stephen D.

    2016-02-01

    We present a computationally efficient technique for real-time motion tracking in the NDE laboratory. Our goal is to track object shapes in a flash thermography test stand to determine the position and orientation of the specimen, which facilitates registering thermography data to a 3D part model. Object shapes can be different specimens and fixtures. Specimens can be manually aligned at any test stand; the position and orientation of every a-priori known shape can be computed and forwarded to the data management software. Our technique relies on the random sample consensus (RANSAC) approach to the iterative closest point (ICP) problem for identifying object shapes and is thus robust in different situations. The paper introduces the computational techniques and experiments along with the results.
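
    A compact sketch of the core ICP ingredient such a method builds on: nearest-neighbour correspondences followed by a closed-form SVD (Kabsch) rigid alignment, iterated until convergence. The random-sample (RANSAC) shape-identification layer described above is omitted, and the point clouds are synthetic.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation and translation aligning src onto dst (Kabsch/SVD)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(src, dst, iters=20):
            tree = cKDTree(dst)
            cur = src.copy()
            for _ in range(iters):
                _, idx = tree.query(cur)      # nearest-neighbour correspondences
                R, t = best_rigid_transform(cur, dst[idx])
                cur = cur @ R.T + t
            return cur

        model = np.random.rand(500, 3)                                      # known part geometry
        scan = model @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + [0.1, 0.0, 0.2]
        aligned = icp(scan, model)                                          # scan registered back onto the model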

  18. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle filtering based tracking process. This scheme relies on exploitation of motion segmentation. More precisely, we propagate hypotheses from particle filtering to blobs of similar motion to target. Hence, search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy to update target model improve the performance of particle filtering in complex situations of occlusions compared to a simple Bootstrap approach as shown by our experiments on real fish tank sequences.

  19. Vehicle tracking in wide area motion imagery from an airborne platform

    NASA Astrophysics Data System (ADS)

    van Eekeren, Adam W. M.; van Huis, Jasper R.; Eendebak, Pieter T.; Baan, Jan

    2015-10-01

    Airborne platforms, such as UAVs, with Wide Area Motion Imagery (WAMI) sensors can cover multiple square kilometers and produce large amounts of video data. Analyzing all of these data to meet information needs becomes increasingly labor-intensive for an image analyst. Furthermore, the capacity of the datalink in operational areas may be inadequate to transfer all data to the ground station. Automatic detection and tracking of people and vehicles makes it possible to send only the most relevant footage to the ground station and assists the image analysts in effective data searches. In this paper, we propose a method for detecting and tracking vehicles in high-resolution WAMI images from a moving airborne platform. For the vehicle detection we use a cascaded set of classifiers, using an Adaboost training algorithm on Haar features. This detector works on individual images and therefore does not depend on image motion stabilization. For the vehicle tracking we use a local template matching algorithm. This approach has two advantages. In the first place, it does not depend on image motion stabilization and it counters the inaccuracy of the GPS data that is embedded in the video data. In the second place, it can find matches when the vehicle detector would miss a certain detection. This results in long tracks even when the imagery is of low frame rate. In order to minimize false detections, we also integrate height information from a 3D reconstruction that is created from the same images. By using the locations of buildings and roads, we are able to filter out false detections and increase the performance of the tracker. In this paper we show that the vehicle tracks can also be used to detect more complex events, such as traffic jams and fast moving vehicles. This enables the image analyst to do a faster and more effective search of the data.
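
    A minimal sketch of the local template-matching step described above: a vehicle chip from one frame is searched for within a restricted window of the next frame using normalized cross-correlation in OpenCV. The file names, chip location and search-window size are assumptions.

        import cv2

        prev = cv2.imread("wami_frame_000.png", cv2.IMREAD_GRAYSCALE)   # assumed consecutive WAMI frames
        curr = cv2.imread("wami_frame_001.png", cv2.IMREAD_GRAYSCALE)

        x, y, w, h = 1200, 800, 24, 24         # vehicle detection in the previous frame (pixels)
        template = prev[y:y + h, x:x + w]

        m = 40                                 # local search margin around the previous position
        roi = curr[y - m:y + h + m, x - m:x + w + m]

        # Normalized cross-correlation restricted to the local window
        res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        new_x, new_y = x - m + loc[0], y - m + loc[1]
        print("match score:", score, "new position:", (new_x, new_y))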

  20. Does fluid infiltration affect the motion of sediment grains? - A 3-D numerical modelling approach using SPH

    NASA Astrophysics Data System (ADS)

    Bartzke, Gerhard; Rogers, Benedict D.; Fourtakas, Georgios; Mokos, Athanasios; Huhn, Katrin

    2016-04-01

    The processes that cause the creation of a variety of sediment morphological features, e.g. laminated beds, ripples, or dunes, are based on the initial motion of individual sediment grains. However, with experimental techniques it is difficult to measure the flow characteristics, i.e., the velocity of the pore water flow in sediments, at a sufficient resolution and in a non-intrusive way. As a result, the role of fluid infiltration at the surface and in the interior affecting the initiation of motion of a sediment bed is not yet fully understood. Consequently, there is a strong need for numerical models, since these are capable of quantifying fluid driven sediment transport processes of complex sediment beds composed of irregular shapes. The numerical method Smoothed Particle Hydrodynamics (SPH) satisfies this need. As a meshless and Lagrangian technique, SPH is ideally suited to simulating flows in sediment beds composed of various grain shapes, but also flow around single grains at a high temporal and spatial resolution. The solver chosen is DualSPHysics (www.dual.sphysics.org) since this is validated for a range of flow conditions. For the present investigation a 3-D numerical flume model was generated using SPH with a length of 4.0 cm, a width of 0.05 cm and a height of 0.2 cm where mobile sediment particles were deposited in a recess. An experimental setup was designed to test sediment configurations composed of irregular grain shapes (grain diameter, D50=1000 μm). Each bed consisted of 3500 mobile objects. After the bed generation process, the entire domain was flooded with 18 million fluid particles. To drive the flow, an oscillating motion perpendicular to the bed was applied to the fluid, reaching a peak value of 0.3 cm/s, simulating 4 seconds of real time. The model results showed that flow speeds decreased logarithmically from the top of the domain towards the surface of the beds, indicating a fully developed boundary layer. Analysis of the fluid

  1. Geometric approach to target tracking motion analysis in bearing-only tracking

    NASA Astrophysics Data System (ADS)

    Gad, Ahmed S.; Mojica, Fernando; Farooq, Mohamad

    2002-07-01

    In maritime operations, target tracking and localization, also called target motion analysis (TMA), is an important issue. If an active sensor is used, the tracking process will be observable since we can predict the target range and bearing without any difficulty. The major disadvantage of using active sources is that enemy targets can easily detect the ship's position; thus, tracking using active sources becomes a risky proposition. The alternative is to use passive tracking, but in this case the tracking process will be unobservable because we can only measure the target bearing. The range can be estimated via triangulation by using at least two platforms. Another method is to find the range using a geometrical approach in order to obtain at least one accurate range, which can then be used to construct the track under some assumptions. In this paper, a geometrical approach to bearing-only tracking is introduced. The target range is derived using a few bearing measurements. Several own ship-target geometries have been set up for this purpose. To compute the target range, the own ship is required to execute an admissible maneuver. The geometrical approach presented provides acceptable performance and can be used for a short time period in the tracking process to provide a reasonable estimate of the range; the tracker can then use this range to generate the target track and hence reduce the bias.
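
    A small numerical illustration of the triangulation idea mentioned above: two bearings to the same stationary target, taken from two known own-ship positions, fix the target location (and hence range) as the intersection of the two bearing lines. The positions and bearings below are invented for illustration.

        import numpy as np

        def intersect_bearings(p1, b1, p2, b2):
            """Intersect two bearing lines; bearings in degrees, measured clockwise from north."""
            p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
            d1 = np.array([np.sin(np.radians(b1)), np.cos(np.radians(b1))])
            d2 = np.array([np.sin(np.radians(b2)), np.cos(np.radians(b2))])
            # Solve p1 + s*d1 = p2 + u*d2 for the scalars s and u
            s, u = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
            return p1 + s * d1

        own1, own2 = [0.0, 0.0], [2.0, 0.0]                        # own-ship positions (nm)
        target = intersect_bearings(own1, 45.0, own2, 20.0)        # bearings observed at each position
        print("estimated target position:", target,
              "range from own1:", np.linalg.norm(target - np.asarray(own1)))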

  2. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.

    PubMed

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-01-01

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter methods perform favorably in robustness, accuracy and speed. However, they also have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the problems that motion poses for correlation filters and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation time. Our algorithm preserves the properties of KCF besides the ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs competitively compared with the top-ranked trackers. PMID:27618046
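
    For reference, the plain KCF baseline that this scheme extends is exposed in OpenCV builds that include the tracking module; the sketch below shows only that baseline usage, not the sharpness-based motion handling or the STC fusion described above. The video file name and initial bounding box are assumptions.

        import cv2

        cap = cv2.VideoCapture("sequence.mp4")          # assumed input video
        ok, frame = cap.read()

        tracker = cv2.TrackerKCF_create()               # needs an OpenCV build with the tracking module
        tracker.init(frame, (150, 100, 60, 40))         # assumed initial bounding box (x, y, w, h)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            found, bbox = tracker.update(frame)         # success flag and updated box
            if not found:
                break                                   # e.g. target lost under motion blur or fast motion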

  3. Real time markerless motion tracking using linked kinematic chains

    DOEpatents

    Luck, Jason P.; Small, Daniel E.

    2007-08-14

    A markerless method is described for tracking the motion of subjects in a three dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real-time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed of the subject and tracked using three dimensional volumetric data collected by a multiple camera video imaging system. A physics based method is then used to compute forces to align the model with subsequent volumetric data sets in real-time. The method is able to handle occlusion of segments and accommodates joint limits, velocity constraints, and collision constraints and provides for error recovery. The method further provides for elimination of singularities in Jacobian based calculations, which has been problematic in alternative methods.

  4. Left ventricle wall motion tracking using curvature properties

    NASA Astrophysics Data System (ADS)

    Chandra, Kambhamettu; Goldgof, Dmitry B.

    1992-06-01

    This paper presents the complete implementation of the new algorithm for tracking points on the left ventricle (LV) surface from volumetric cardiac images. We define the local surface stretching as an additional motion parameter of nonrigid transformation. Stretching is constant at all points on the surface for homothetic motion, or follows a polynomial function of a certain order (linear in our implementation) in conformal motion. The wall deformation and correspondence information between successive frames of the LV in a heart cycle are considered important in evaluating heart behavior and improving diagnosis. We utilize the small motion assumption between consecutive frames, hypothesize all possible correspondences, and compute curvature changes for each hypothesis. The computed curvature change is then compared with the one predicted by the conformal motion assumption for hypotheses evaluation. We demonstrate the improved performance of the new algorithm utilizing conformal motion with the linear stretching assumption over the constant stretching assumption on simulated data. Then, the algorithm is applied to real cardiac (CT) images and the stretching of the LV wall is determined. The data set used in our experiments was provided by Dr. Eric Hoffman at the University of Pennsylvania Medical School and consists of 16 volumetric (128 by 128 by 118) images taken through the heart cycle.

  5. Simultaneous tracking of 3D actin and microtubule strains in individual MLO-Y4 osteocytes under oscillatory flow.

    PubMed

    Baik, Andrew D; Qiu, Jun; Hillman, Elizabeth M C; Dong, Cheng; Guo, X Edward

    2013-02-22

    Osteocytes in vivo experience complex fluid shear flow patterns to activate mechanotransduction pathways. The actin and microtubule (MT) cytoskeletons have been shown to play an important role in the osteocyte's biochemical response to fluid shear loading. The dynamic nature of physiologically relevant fluid flow profiles (i.e., 1Hz oscillatory flow) impedes the ability to image and study both actin and MT cytoskeletons simultaneously in the same cell with high spatiotemporal resolution. To overcome these limitations, a multi-channel quasi-3D microscopy technique was developed to track the actin and MT networks simultaneously under steady and oscillatory flow. Cells displayed high intercellular variability and intracellular cytoskeletal variability in strain profiles. Shear Exz was the predominant strain in both steady and oscillatory flows in the form of viscoelastic creep and elastic oscillations, respectively. Dramatic differences were seen in oscillatory flow, however. The actin strains displayed an oscillatory strain profile more often than the MT networks in all the strains tested and had a higher peak-to-trough strain magnitude. Taken together, the actin networks are the more responsive cytoskeletal networks in osteocytes under oscillatory flow and may play a bigger role in mechanotransduction pathway activation and regulation. PMID:23352617

  6. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation, are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get round this issue, formulating the hologram reconstruction as a parametric inverse problem has been shown to accurately estimate the 3D positions and size of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it led to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σ_x × σ_y × σ_z = 0.15 × 0.15 × 1 pixels.

  7. Shoulder 3D range of motion and humerus rotation in two volleyball spike techniques: injury prevention and performance.

    PubMed

    Seminati, Elena; Marzari, Alessandra; Vacondio, Oreste; Minetti, Alberto E

    2015-06-01

    Repetitive stresses and movements on the shoulder in the volleyball spike expose this joint to overuse injuries, bringing athletes to a career-threatening injury. Assuming that specific spike techniques play an important role in injury risk, we compared the kinematics of the traditional (TT) and the alternative (AT) techniques in 21 elite athletes, evaluating their safety with respect to performance. The glenohumeral joint was set as the centre of an imaginary sphere, intersected by the distal end of the humerus at different angles. Shoulder range of motion and angular velocities were calculated and compared to the joint limits. Ball speed and jump height were also assessed. Results indicated that the trajectory of the humerus differed for the TT, with maximal flexion of the shoulder reduced by 10 degrees and horizontal abduction 15 degrees higher. No difference was found for external rotation angles, while axial rotation velocities were significantly higher in AT, with a 5% higher ball speed. Results suggest AT as a potential preventive solution to chronic shoulder pathologies, reducing shoulder flexion during spiking. The proposed method allows visualisation of risks associated with different overhead manoeuvres, by depicting humerus angles and velocities with respect to joint limits in the same 3D space. PMID:26151344

  8. SU-E-J-80: Interplay Effect Between VMAT Intensity Modulation and Tumor Motion in Hypofractioned Lung Treatment, Investigated with 3D Pressage Dosimeter

    SciTech Connect

    Touch, M; Wu, Q; Oldham, M

    2014-06-01

    Purpose: To demonstrate an embedded tissue-equivalent Presage dosimeter for measuring 3D doses in moving tumors and to study the interplay effect between tumor motion and intensity modulation in hypofractionated Volumetric Modulated Arc Therapy (VMAT) lung treatment. Methods: Motion experiments were performed using cylindrical Presage dosimeters (5 cm diameter by 7 cm length) mounted inside the lung insert of a CIRS thorax phantom. Two different VMAT treatment plans were created and delivered in three different scenarios with the same prescribed dose of 18 Gy. Plan 1, containing a 2-centimeter spherical CTV with an additional 2 mm setup margin, was delivered on a stationary phantom. Plan 2 used the same CTV except expanded by 1 cm in the sup-inf direction to generate the ITV and PTV, respectively. The dosimeters were irradiated in static and variable motion scenarios on a TrueBeam system. After irradiation, high-resolution 3D dosimetry was performed using the Duke Large Field-of-view Optical-CT Scanner and compared to the calculated dose from Eclipse. Results: In the control case (no motion), good agreement was observed between the planned and delivered dose distributions, as indicated by a 100% 3D gamma (3% of maximum planned dose and 3 mm DTA) passing rate in the CTV. In the motion cases the gamma passing rate was 99% in the CTV. DVH comparisons also showed good agreement between the planned and delivered dose in the CTV for both the control and motion cases. However, differences of 15% and 5% in dose to the PTV were observed in the motion and control cases, respectively. Conclusion: Given the very high dose per fraction of a hypofractionated treatment, a significant effect was observed only when motion was introduced to the target. This can result from the motion of the target combined with the modulation of the MLC. 3D optical dosimetry can be of great advantage in hypofractionated treatment dose validation studies.

  9. Calculating the Probability of Strong Ground Motions Using 3D Seismic Waveform Modeling - SCEC CyberShake

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Callaghan, S.; Graves, R.; Mehta, G.; Zhao, L.; Deelman, E.; Jordan, T. H.; Kesselman, C.; Okaya, D.; Cui, Y.; Field, E.; Gupta, V.; Vahi, K.; Maechling, P. J.

    2006-12-01

    Researchers from the SCEC Community Modeling Environment (SCEC/CME) project are utilizing the CyberShake computational platform and a distributed high performance computing environment that includes USC High Performance Computer Center and the NSF TeraGrid facilities to calculate physics-based probabilistic seismic hazard curves for several sites in the Southern California area. Traditionally, probabilistic seismic hazard analysis (PSHA) is conducted using intensity measure relationships based on empirical attenuation relationships. However, a more physics-based approach using waveform modeling could lead to significant improvements in seismic hazard analysis. Members of the SCEC/CME Project have integrated leading-edge PSHA software tools, SCEC-developed geophysical models, validated anelastic wave modeling software, and state-of-the-art computational technologies on the TeraGrid to calculate probabilistic seismic hazard curves using 3D waveform-based modeling. The CyberShake calculations for a single probabilistic seismic hazard curve require tens of thousands of CPU hours and multiple terabytes of disk storage. The CyberShake workflows are run on high performance computing systems including multiple TeraGrid sites (currently SDSC and NCSA), and the USC Center for High Performance Computing and Communications. To manage the extensive job scheduling and data requirements, CyberShake utilizes a grid-based scientific workflow system based on the Virtual Data System (VDS), the Pegasus meta-scheduler system, and the Globus toolkit. Probabilistic seismic hazard curves for spectral acceleration at 3.0 seconds have been produced for eleven sites in the Southern California region, including rock and basin sites. At low ground motion levels, there is little difference between the CyberShake and attenuation relationship curves. At higher ground motion (lower probability) levels, the curves are similar for some sites (downtown LA, I-5/SR-14 interchange) but different for

  10. Tracking single particles motion in shaken wet powder clusters

    NASA Astrophysics Data System (ADS)

    Wenzl, Jennifer; Auernhammer, Guenter K.; Gilson, Laurent

    Wet granular powders, in which the particles are connected via an additional binding liquid, are widely used in many industrial branches. Model systems in which the binding liquid is homogeneously distributed, i.e. forms a connecting capillary network, have been amply investigated. In contrast, wet granular model systems with an inhomogeneous liquid distribution have rarely been the focus of research. In this work a model system for wet powders was developed which is suitable for 3D imaging with confocal microscopy. Fluorescent silica particles were immersed in a mixture of two immiscible liquids, one continuous and one binding liquid. In detail, a wet powder cluster in which the binding liquid formed droplets was studied in 3D. While a mechanical load was applied, the motion of the powder particles and of the binding liquid droplets was followed. Deformation of the binding liquid droplets led to an increase of their surface area and energy. When a droplet relaxed to an energetically more favored shape upon further cluster deformation, the sudden release of the stored surface energy led to complex powder particle and droplet motions. The model system illustrated the complex dynamics upon shaking, and showed that the binding liquid dominated the cluster dynamics on a local scale.

  11. Cortical surface shift estimation using stereovision and optical flow motion tracking via projection image registration

    PubMed Central

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2014-01-01

    Stereovision is an important intraoperative imaging technique that captures the exposed parenchymal surface noninvasively during open cranial surgery. Estimating cortical surface shift efficiently and accurately is critical to compensate for brain deformation in the operating room (OR). In this study, we present an automatic and robust registration technique based on optical flow (OF) motion tracking to compensate for cortical surface displacement throughout surgery. Stereo images of the cortical surface were acquired at multiple time points after dural opening to reconstruct three-dimensional (3D) texture intensity-encoded cortical surfaces. A local coordinate system was established with its z-axis parallel to the average surface normal direction of the reconstructed cortical surface immediately after dural opening in order to produce two-dimensional (2D) projection images. A dense displacement field between the two projection images was determined directly from OF motion tracking without the need for feature identification or tracking. The starting and end points of the displacement vectors on the two cortical surfaces were then obtained following spatial mapping inversion to produce the full 3D displacement of the exposed cortical surface. We evaluated the technique with images obtained from digital phantoms and 18 surgical cases – 10 of which involved independent measurements of feature locations acquired with a tracked stylus for accuracy comparisons, and 8 others, of which 4 involved stereo image acquisitions at three or more time points during surgery to illustrate utility throughout a procedure. Results from the digital phantom images were very accurate (0.05 pixels). In the 10 surgical cases with independently digitized point locations, the average agreement between feature coordinates derived from the cortical surface reconstructions was 1.7–2.1 mm relative to those determined with the tracked stylus probe. The agreement in feature displacement
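
    For readers unfamiliar with the dense displacement step, the following is a minimal sketch using OpenCV's Farneback optical flow on two hypothetical projection images. The file names and parameter values are illustrative only; this is not the authors' implementation, and the 3D-to-2D mapping inversion described in the abstract is not reproduced here.

    ```python
    import cv2
    import numpy as np

    # Hypothetical 2D projection images of the cortical surface (grayscale),
    # rendered immediately after dural opening and at a later time point.
    proj_ref = cv2.imread("projection_t0.png", cv2.IMREAD_GRAYSCALE)
    proj_cur = cv2.imread("projection_t1.png", cv2.IMREAD_GRAYSCALE)

    # Dense optical flow: flow[y, x] = (dx, dy) displacement in pixels.
    flow = cv2.calcOpticalFlowFarneback(proj_ref, proj_cur, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Start/end points of each displacement vector in projection coordinates;
    # mapping these back through the surface-to-projection transform would
    # give the full 3D displacement described above.
    ys, xs = np.mgrid[0:proj_ref.shape[0], 0:proj_ref.shape[1]]
    end_x, end_y = xs + flow[..., 0], ys + flow[..., 1]
    print("median in-plane shift (px):",
          np.median(np.hypot(flow[..., 0], flow[..., 1])))
    ```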

  12. Tracking 'differential organ motion' with a 'breathing' multileaf collimator: magnitude of problem assessed using 4D CT data and a motion-compensation strategy

    NASA Astrophysics Data System (ADS)

    McClelland, J. R.; Webb, S.; McQuaid, D.; Binnie, D. M.; Hawkes, D. J.

    2007-08-01

    Intrafraction tumour (e.g. lung) motion due to breathing can, in principle, be compensated for by applying identical breathing motions to the leaves of a multileaf collimator (MLC) as intensity-modulated radiation therapy is delivered by the dynamic MLC (DMLC) technique. A difficulty arising, however, is that irradiated voxels, which are in line with a bixel at one breathing phase (at which the treatment plan has been made), may move such that they cease to be in line with that breathing bixel at another phase. This is the phenomenon of differential voxel motion and existing tracking solutions have ignored this very real problem. There is absolutely no tracking solution to the problem of compensating for differential voxel motion. However, there is a strategy that can be applied in which the leaf breathing is determined to minimize the geometrical mismatch in a least-squares sense in irradiating differentially-moving voxels. A 1D formulation in very restricted circumstances is already in the literature and has been applied to some model breathing situations which can be studied analytically. These are, however, highly artificial. This paper presents the general 2D formulation of the problem including allowing different importance factors to be applied to planning target volume and organ at risk (or most generally) each voxel. The strategy also extends the literature strategy to the situation where the number of voxels connecting to a bixel is a variable. Additionally the phenomenon of 'cross-leaf-track/channel' voxel motion is formally addressed. The general equations are presented and analytic results are given for some 1D, artificially contrived, motions based on the Lujan equations of breathing motion. Further to this, 3D clinical voxel motion data have been extracted from 4D CT measurements to both assess the magnitude of the problem of 2D motion perpendicular to the beam-delivery axis in clinical practice and also to find the 2D optimum breathing-leaf strategy
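
    Two ingredients mentioned above lend themselves to a short numerical sketch: the Lujan-type breathing trace and the least-squares choice of a single leaf shift for differentially moving voxels (which, for a quadratic mismatch with importance factors, reduces to the weighted mean displacement). The model form and all numbers below are assumptions for illustration, not the paper's formulation.

    ```python
    import numpy as np

    def lujan_position(t, z0=0.0, b=10.0, tau=4.0, n=2, phi=0.0):
        """Commonly cited Lujan et al. breathing model (assumed form):
        z(t) = z0 - b * cos^(2n)(pi * t / tau - phi)."""
        return z0 - b * np.cos(np.pi * t / tau - phi) ** (2 * n)

    def best_leaf_shift(voxel_displacements, weights):
        """If one leaf shift s must serve several differentially moving voxels,
        minimizing sum_i w_i * (s - d_i)^2 gives the weighted mean."""
        d = np.asarray(voxel_displacements, dtype=float)
        w = np.asarray(weights, dtype=float)
        return np.sum(w * d) / np.sum(w)

    # Hypothetical example: three voxels behind one bixel, the PTV voxel
    # weighted more heavily than two normal-tissue voxels.
    t = np.linspace(0.0, 8.0, 5)
    print("Lujan trace (mm):", np.round(lujan_position(t), 2))
    print("leaf shift (mm):", best_leaf_shift([3.0, 5.0, 8.0], [2.0, 1.0, 1.0]))
    ```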

  13. Implementation of a New Method for Dynamic Multileaf Collimator Tracking of Prostate Motion in Arc Radiotherapy Using a Single KV Imager

    SciTech Connect

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Keall, Paul J.

    2010-03-01

    Purpose: To implement a method for real-time prostate motion estimation with a single kV imager during arc radiotherapy and to integrate it with dynamic multileaf collimator (DMLC) target tracking. Methods and Materials: An arc field with a circular aperture and 358 deg. gantry rotation was delivered to a motion phantom with a fiducial marker under continuous kV X-ray imaging at 5 Hz, perpendicular to the treatment beam. A pretreatment gantry rotation of 120 deg. in 20 sec with continuous imaging preceded the treatment. During treatment, each kV image was first used together with all previous images to estimate the three-dimensional (3D) target probability density function and then used together with this probability density function to estimate the 3D target position. The MLC aperture was then adapted to the estimated 3D target position. Tracking was performed with five patient-measured prostate trajectories that represented characteristic prostate motion patterns. Two data sets were recorded during tracking: (1) the estimated 3D target positions, for off-line comparison with the actual phantom motion; and (2) continuous portal images, for independent off-line calculation of the 2D tracking error as the positional difference between the marker and the MLC aperture center in each portal image. All experiments were also made with 1-Hz kV imaging. Results: The mean 3D root-mean-square error of the trajectory estimation was 0.6 mm. The mean root-mean-square tracking error was 0.7 mm, both parallel and perpendicular to the MLC. The accuracy degraded slightly for 1-Hz imaging. Conclusions: Single-imager DMLC prostate tracking that allows arbitrary beam modulation during arc radiotherapy was implemented. It has submillimeter accuracy for most prostate motion types.
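
    A single kV imager resolves only the two in-plane coordinates; one generic way to recover the third is to build a 3D Gaussian target probability density from previous images and take the conditional mean of the unresolved coordinate given the two resolved ones. The sketch below shows that conditional-mean step only; it is a stand-in for the estimator described above, and all numbers are hypothetical.

    ```python
    import numpy as np

    def estimate_depth(resolved_xy, mean, cov):
        """Given a 3D Gaussian target PDF (mean, cov) built from previous images
        and the two coordinates resolved by the current kV image, return the
        conditional mean of the unresolved along-beam coordinate.
        Assumed coordinate order: [x_resolved, y_resolved, z_unresolved]."""
        mu_r, mu_u = mean[:2], mean[2]
        S_rr = cov[:2, :2]
        S_ur = cov[2, :2]
        return mu_u + S_ur @ np.linalg.solve(S_rr, resolved_xy - mu_r)

    # Hypothetical PDF from prior imaging and one new projection measurement (mm).
    mean = np.array([0.0, 0.0, 0.0])
    cov = np.array([[4.0, 0.5, 1.0],
                    [0.5, 2.0, 0.3],
                    [1.0, 0.3, 3.0]])
    print("estimated depth (mm):", estimate_depth(np.array([1.5, -0.5]), mean, cov))
    ```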

  14. Proof of concept of MRI-guided tracked radiation delivery: tracking one-dimensional motion

    NASA Astrophysics Data System (ADS)

    Crijns, S. P. M.; Raaymakers, B. W.; Lagendijk, J. J. W.

    2012-12-01

    In radiotherapy one aims to deliver a radiation dose to a tumour with high geometrical accuracy while sparing organs at risk (OARs). Although image guidance decreases geometrical uncertainties, treatment of cancer of abdominal organs is further complicated by respiratory motion, requiring intra-fraction motion compensation to fulfil the treatment intent. With an ideal delivery system, the optimal method of intra-fraction motion compensation is to adapt the beam collimation to the moving target using a dynamic multi-leaf collimator (MLC) aperture. The many guidance strategies for such tracked radiation delivery tested up to now mainly use markers and are therefore invasive and cannot deal with target deformations or adaptations for OAR positions. We propose to address these shortcomings using the online MRI guidance provided by an MRI accelerator and present a first step towards demonstration of the technical feasibility of this proposal. The position of a phantom subjected to one-dimensional (1D) periodic translation was tracked using a fast 1D MR sequence. Real-time communication with the MR scanner and control of the MLC aperture were established. Based on the time-resolved position of the phantom, tracked radiation delivery to the phantom was realized. Dose distributions for various delivery conditions were recorded on a gafchromic film. Without motion a sharply defined dose distribution is obtained, whereas considerable blur occurs for delivery to a moving phantom. With compensation for motion, the sharpness of the dose distribution is nearly restored. The total latency in our motion management architecture is approximately 200 ms. Combination of the recorded phantom and aperture positions with the planned dose distribution enabled the reconstruction of the delivered dose in all cases, which illustrates the promise of online dose accumulation and confirms that latency compensation could further enhance our results. For a simple 1D tracked delivery scenario, the
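
    The roughly 200 ms total latency quoted above is exactly the kind of delay that prediction can partly absorb. Below is a minimal linear-extrapolation sketch of latency compensation from recent 1D navigator samples; it is a simple stand-in for more elaborate predictors, with all names and values hypothetical.

    ```python
    import numpy as np

    def predict_position(times, positions, latency=0.2, fit_window=1.0):
        """Predict the target position `latency` seconds ahead by fitting a
        straight line to the most recent `fit_window` seconds of 1D tracking
        data (a simple stand-in for more elaborate predictors)."""
        times = np.asarray(times, dtype=float)
        positions = np.asarray(positions, dtype=float)
        recent = times >= times[-1] - fit_window
        slope, intercept = np.polyfit(times[recent], positions[recent], 1)
        return slope * (times[-1] + latency) + intercept

    # Hypothetical periodic phantom motion sampled at 10 Hz.
    t = np.arange(0.0, 5.0, 0.1)
    z = 10.0 * np.sin(2 * np.pi * t / 4.0)
    print("predicted z at t + 200 ms:", round(predict_position(t, z), 2))
    ```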

  15. Motion management during IMAT treatment of mobile lung tumors—A comparison of MLC tracking and gated delivery

    PubMed Central

    Falk, Marianne; Pommer, Tobias; Keall, Paul; Korreman, Stine; Persson, Gitte; Poulsen, Per; Munck af Rosenschöld, Per

    2014-01-01

    Purpose: To compare real-time dynamic multileaf collimator (MLC) tracking, respiratory amplitude and phase gating, and no compensation for intrafraction motion management during intensity modulated arc therapy (IMAT). Methods: Motion management with MLC tracking and gating was evaluated for four lung cancer patients. The IMAT plans were delivered to a dosimetric phantom mounted onto a 3D motion phantom performing patient-specific lung tumor motion. The MLC tracking system was guided by an optical system that used stereoscopic infrared (IR) cameras and five spherical reflecting markers attached to the dosimetric phantom. The gated delivery used a duty cycle of 35% and collected position data using an IR camera and two reflecting markers attached to a marker block. Results: The average gamma index failure rate (2% and 2 mm criteria) was <0.01% with amplitude gating for all patients, and <0.1% with phase gating and <3.7% with MLC tracking for three of the four patients. One of the patients had an average failure rate of 15.1% with phase gating and 18.3% with MLC tracking. With no motion compensation, the average gamma index failure rate ranged from 7.1% to 46.9% for the different patients. Evaluation of the dosimetric error contributions showed that the gated delivery mainly had errors in target localization, while MLC tracking also had contributions from MLC leaf fitting and leaf adjustment. The average treatment time was about three times longer with gating compared to delivery with MLC tracking (that did not prolong the treatment time) or no motion compensation. For two of the patients, the different motion compensation techniques allowed for approximately the same margin reduction but for two of the patients, gating enabled a larger reduction of the margins than MLC tracking. Conclusions: Both gating and MLC tracking reduced the effects of the target movements, although the gated delivery showed a better dosimetric accuracy and enabled a larger reduction of the
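
    The pass/fail statistics above use the gamma index with 2% and 2 mm criteria. The sketch below is a brute-force 1D gamma evaluation following the generic Low et al. definition, not the study's analysis software; the dose profiles are synthetic.

    ```python
    import numpy as np

    def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
        """Brute-force 1D gamma index (global dose-difference criterion dd,
        distance-to-agreement dta in mm). A point passes if gamma <= 1."""
        ref_pos, ref_dose = np.asarray(ref_pos), np.asarray(ref_dose)
        eval_pos, eval_dose = np.asarray(eval_pos), np.asarray(eval_dose)
        d_norm = dd * ref_dose.max()  # global normalization
        gammas = []
        for xe, de in zip(eval_pos, eval_dose):
            cap = np.sqrt(((ref_pos - xe) / dta) ** 2 +
                          ((ref_dose - de) / d_norm) ** 2)
            gammas.append(cap.min())
        return np.array(gammas)

    # Hypothetical profiles: evaluated dose shifted by 1 mm relative to reference.
    x = np.linspace(-20, 20, 401)
    ref = np.exp(-x**2 / 50.0)
    ev = np.exp(-(x - 1.0)**2 / 50.0)
    g = gamma_1d(x, ref, x, ev)
    print("gamma failure rate: %.1f%%" % (100 * np.mean(g > 1)))
    ```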

  16. Tissue reconstruction in 3D-spheroids from rodent retina in a motion-free, bioreactor-based microstructure.

    PubMed

    Rieke, Matthias; Gottwald, Eric; Weibezahn, Karl-Friedrich; Layer, Paul Gottlob

    2008-12-01

    While conventional rotation culture-based retinal spheroids are most useful to study basic processes of retinogenesis and tissue regeneration, they are less appropriate for an easy and inexpensive mass production of histotypic 3-dimensional tissue spheroids, which will be of utmost importance for future bioengineering, e.g. for replacement of animal experimentation. Here we compared conventionally reaggregated spheroids derived from dissociated retinal cells from neonatal gerbils (Meriones unguiculatus) with spheroids cultured on a novel microscaffold cell chip (called cf-chip) in a motion-free bioreactor. Reaggregation and developmental processes leading to tissue formation, e.g. proliferation, apoptosis and differentiation, were observed during the first 10 days in vitro (div). Remarkably, in each cf-chip micro-chamber, only one spheroid developed. In both culture systems, sphere sizes and proliferation rates were almost identical. However, apoptosis was comparably high in the two systems only up to 5 div; it then became negligible in the cf-chip, while it rose again in the conventional culture. In both systems, immunohistochemical characterisation revealed the presence of Müller glia cells, of ganglion, amacrine, bipolar and horizontal cells in a highly comparable arrangement. In both systems, photoreceptors were detected only in spheroids from P3 retinae. Benefits of the chip-based 3D cell culture were reliable sphere production with enhanced viability, the feasibility of single sphere observation during cultivation time, a high reproducibility and easy control of culture conditions. Further development of this approach should allow high-throughput systems not only for retinal but also other types of histotypic spheroids, to become suitable for environmental monitoring and biomedical diagnostics. PMID:19023488

  17. Video motion analysis with automated tracking: an insight

    NASA Astrophysics Data System (ADS)

    Aftab Usman, Bilal; Alam, Junaid; Sabieh Anwar, Muhammad

    2015-11-01

    The article describes the use of elementary techniques in computer vision and motion photography for the analysis of well-known experiments in interactive instructional physics laboratories. We describe a method for the automated tracking of the kinematics of physical objects which involves the subtraction of orthogonal colors in color space. The aim is to expose undergraduate students to image processing and its applications in video motion analysis. The technique is straightforward: it results in a computational speedup compared to an existing method, removes the need for laborious, repetitive manual tagging of frames, and is generally robust against color variations. Insight is also presented into the process of thresholding and selecting the correct region out of the several choices presented in the post-threshold frames. Finally, the approach is illustrated through a selection of well-known mechanics experiments.
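
    A minimal OpenCV sketch of the color-subtraction idea follows: subtract one color channel from another so that only a colored marker stays bright, threshold, and take the centroid per frame. The video file name, channel choice and threshold are hypothetical; this is not the authors' exact pipeline.

    ```python
    import cv2
    import numpy as np

    # Hypothetical video of a mechanics experiment with a red marker on the object.
    cap = cv2.VideoCapture("experiment.avi")
    trajectory = []  # (frame index, x, y) centroids in pixels

    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        f = frame.astype(np.int16)
        # Subtract an "orthogonal" channel so only the red marker stays bright.
        diff = np.clip(f[:, :, 2] - f[:, :, 1], 0, 255).astype(np.uint8)
        _, mask = cv2.threshold(diff, 60, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask)
        if m["m00"] > 0:
            trajectory.append((frame_idx, m["m10"] / m["m00"], m["m01"] / m["m00"]))
        frame_idx += 1

    cap.release()
    print("tracked %d frames" % len(trajectory))
    ```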

  18. Direct measurement of particle size and 3D velocity of a gas-solid pipe flow with digital holographic particle tracking velocimetry.

    PubMed

    Wu, Yingchun; Wu, Xuecheng; Yao, Longchao; Gréhan, Gérard; Cen, Kefa

    2015-03-20

    The 3D measurement of the particles in a gas-solid pipe flow is of great interest, but remains challenging due to curved pipe walls in various engineering applications. Because of the astigmatism induced by the pipe, concentric ellipse fringes in the hologram of spherical particles are observed in the experiments. With a theoretical analysis of the particle holography by an ABCD matrix, the in-focus particle image can be reconstructed by the modified convolution method and fractional Fourier transform. Thereafter, the particle size, 3D position, and velocity are simultaneously measured by digital holographic particle tracking velocimetry (DHPTV). The successful application of DHPTV to the particle size and 3D velocity measurement in a glass pipe's flow can facilitate its 3D diagnostics. PMID:25968543

  19. Air motion determination by tracking humidity patterns in isentropic layers

    NASA Technical Reports Server (NTRS)

    Mancuso, R. L.; Hall, D. J.

    1975-01-01

    Determining air motions by tracking humidity patterns in isentropic layers was investigated. Upper-air rawinsonde data from the NSSL network and from the AVE-II pilot experiment were used to simulate temperature and humidity profile data that will eventually be available from geosynchronous satellites. Polynomial surfaces that move with time were fitted to the mixing-ratio values of the different isentropic layers. The velocity components of the polynomial surfaces are part of the coefficients that are determined in order to give an optimum fitting of the data. In the mid-troposphere, the derived humidity motions were in good agreement with the winds measured by rawinsondes so long as there were few or no clouds and the lapse rate was relatively stable. In the lower troposphere, the humidity motions were unreliable primarily because of nonadiabatic processes and unstable lapse rates. In the upper troposphere, the humidity amounts were too low to be measured with sufficient accuracy to give reliable results. However, it appears that humidity motions could be used to provide mid-tropospheric wind data over large regions of the globe.

  20. Respiratory motion compensation for simultaneous PET/MR based on a 3D-2D registration of strongly undersampled radial MR data: a simulation study

    NASA Astrophysics Data System (ADS)

    Rank, Christopher M.; Heußer, Thorsten; Flach, Barbara; Brehm, Marcus; Kachelrieß, Marc

    2015-03-01

    We propose a new method for PET/MR respiratory motion compensation, which is based on a 3D-2D registration of strongly undersampled MR data and a) runs in parallel with the PET acquisition, b) can be interlaced with clinical MR sequences, and c) requires less than one minute of the total MR acquisition time per bed position. In our simulation study, we applied a 3D encoded radial stack-of-stars sampling scheme with 160 radial spokes per slice and an acquisition time of 38 s. Gated 4D MR images were reconstructed using a 4D iterative reconstruction algorithm. Based on these images, motion vector fields were estimated using our newly-developed 3D-2D registration framework. A 4D PET volume of a patient with eight hot lesions in the lungs and upper abdomen was simulated and MoCo 4D PET images were reconstructed based on the motion vector fields derived from MR. For evaluation, average SUVmean values of the artificial lesions were determined for a 3D, a gated 4D, a MoCo 4D and a reference (with ten-fold measurement time) gated 4D reconstruction. Compared to the reference, 3D reconstructions yielded an underestimation of SUVmean values due to motion blurring. In contrast, gated 4D reconstructions showed the highest variation of SUVmean due to low statistics. MoCo 4D reconstructions were only slightly affected by these two sources of uncertainty resulting in a significant visual and quantitative improvement in terms of SUVmean values. Whereas temporal resolution was comparable to the gated 4D images, signal-to-noise ratio and contrast-to-noise ratio were close to the 3D reconstructions.

  1. Tracking Arabia-India motion from Miocene to Present

    NASA Astrophysics Data System (ADS)

    Chamot-Rooke, N. R.; Fournier, M.

    2009-12-01

    Although small, the present-day Arabia-India motion has been captured by several global and regional geodetic surveys that consistently show dextral motion of a few mm/yr, either transpressive or transtensive (Fournier et al., 2008). This motion is accommodated along the Owen Fracture Zone, an active strike-slip boundary that runs for more than 700 km from the Somalia-India-Arabia triple junction in the south to the Dalrymple trough in the north. Two recent marine cruises conducted along this fault aboard the BHO Beautemps-Beaupré (AOC 2006 and OWEN 2009) using a high resolution multibeam sounder (Simrad EM120, 10 m vertical resolution) provided a complete map of the active fault and confirmed a present-day pure dextral motion. The surface breaks closely follow a small circle of the Arabia-India motion, with several pull-apart basins at the junctions between the main segments of the fault. Geomorphologic offsets reach 10 km, suggesting that the mapped fault has been active with the same style for the past several million years. When did this motion start? The difficulty in tracking the past Arabia-India motion is that there is no direct kinematic indicator available, since the boundary has been strike-slip and/or convergent during the Tertiary. Motion was most probably sinistral during the rapid northward travelling of India towards Eurasia in the early Tertiary, Arabia being rigidly attached to Africa until the opening of the Gulf of Aden. However, the nature and location of the Arabia-India boundary at that time remain speculative. Throughout the Miocene, the relative motion between India and Arabia has been indirectly recorded at the Sheba and Carlsberg ridges, the former recording Arabia-Somalia motion (opening of the Gulf of Aden) and the latter India-Somalia motion (Indian Ocean opening). Both ridges have been studied in some detail recently, using up-to-date magnetic lineation identifications (Merkouriev and DeMets, 2006; Fournier et al., 2009). We combine

  2. Surrogate-driven deformable motion model for organ motion tracking in particle radiation therapy

    NASA Astrophysics Data System (ADS)

    Fassi, Aurora; Seregni, Matteo; Riboldi, Marco; Cerveri, Pietro; Sarrut, David; Battista Ivaldi, Giovanni; Tabarelli de Fatis, Paola; Liotta, Marco; Baroni, Guido

    2015-02-01

    The aim of this study is the development and experimental testing of a tumor tracking method for particle radiation therapy, providing the daily respiratory dynamics of the patient’s thoraco-abdominal anatomy as a function of an external surface surrogate combined with an a priori motion model. The proposed tracking approach is based on a patient-specific breathing motion model, estimated from the four-dimensional (4D) planning computed tomography (CT) through deformable image registration. The model is adapted to the interfraction baseline variations in the patient’s anatomical configuration. The driving amplitude and phase parameters are obtained intrafractionally from a respiratory surrogate signal derived from the external surface displacement. The developed technique was assessed on a dataset of seven lung cancer patients, who underwent two repeated 4D CT scans. The first 4D CT was used to build the respiratory motion model, which was tested on the second scan. The geometric accuracy in localizing lung lesions, mediated over all breathing phases, ranged between 0.6 and 1.7 mm across all patients. Errors in tracking the surrounding organs at risk, such as lungs, trachea and esophagus, were lower than 1.3 mm on average. The median absolute variation in water equivalent path length (WEL) within the target volume did not exceed 1.9 mm-WEL for simulated particle beams. A significant improvement was achieved compared with error compensation based on standard rigid alignment. The present work can be regarded as a feasibility study for the potential extension of tumor tracking techniques in particle treatments. Differently from current tracking methods applied in conventional radiotherapy, the proposed approach allows for the dynamic localization of all anatomical structures scanned in the planning CT, thus providing complete information on density and WEL variations required for particle beam range adaptation.

  3. SU-E-T-562: Motion Tracking Optimization for Conformal Arc Radiotherapy Plans: A QUASAR Phantom Based Study

    SciTech Connect

    Xu, Z; Wang, I; Yao, R; Podgorsak, M

    2015-06-15

    Purpose: This study uses plan parameter optimization (dose rate, collimator angle, couch angle, and initial starting phase) to improve the performance of conformal arc radiotherapy plans with motion tracking by increasing the plan performance score (PPS). Methods: Two types of 3D conformal arc plans were created based on a QUASAR respiratory motion phantom with spherical and cylindrical targets. A sinusoidal model was applied to the MLC leaves to generate motion tracking plans. A MATLAB program was developed to calculate the PPS of each plan (ranging from 0 to 1) and optimize plan parameters. We first selected the dose rate for the motion tracking plans and then used a simulated annealing algorithm to search for the combination of the other parameters that resulted in the plan with the maximal PPS. The optimized motion tracking plan was delivered by a Varian Truebeam Linac. In-room cameras and a stopwatch were used for starting phase selection and synchronization between phantom motion and plan delivery. Gaf-EBT2 dosimetry films were used to measure the dose delivered to the target in the QUASAR phantom. Dose profiles and Truebeam trajectory log files were used for plan delivery performance evaluation. Results: For the spherical target, the maximal PPS (PPSsph) of the optimized plan was 0.79: (Dose rate: 500MU/min, Collimator: 90°, Couch: +10°, starting phase: 0.83π). For the cylindrical target, the maximal PPScyl was 0.75 (Dose rate: 300MU/min, Collimator: 87°, starting phase: 0.97π) with couch at 0°. Differences of dose profiles between motion tracking plans (with the maximal and the minimal PPS) and 3D conformal plans were as follows: PPSsph=0.79: %ΔFWHM: 8.9%, %Dmax: 3.1%; PPSsph=0.52: %ΔFWHM: 10.4%, %Dmax: 6.1%. PPScyl=0.75: %ΔFWHM: 4.7%, %Dmax: 3.6%; PPScyl=0.42: %ΔFWHM: 12.5%, %Dmax: 9.6%. Conclusion: By achieving a high plan performance score through parameter optimization, we can improve the target dose conformity of a motion tracking plan by decreasing the total MLC leaf travel distance
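
    The abstract describes a simulated annealing search over plan parameters to maximize PPS. The sketch below is a generic simulated annealing loop with a placeholder scoring function; the real PPS calculation (done in MATLAB in the study) is not reproduced, and the toy score, neighbor step sizes and cooling schedule are all hypothetical.

    ```python
    import math
    import random

    def simulated_annealing(score_fn, initial, neighbor_fn,
                            t_start=1.0, t_end=1e-3, cooling=0.95, iters_per_t=20):
        """Generic simulated annealing maximizing score_fn (standing in for the
        plan performance score, PPS in [0, 1])."""
        current, best = initial, initial
        s_cur = s_best = score_fn(initial)
        t = t_start
        while t > t_end:
            for _ in range(iters_per_t):
                cand = neighbor_fn(current)
                s_cand = score_fn(cand)
                # Accept improvements always, worse candidates with Boltzmann probability.
                if s_cand > s_cur or random.random() < math.exp((s_cand - s_cur) / t):
                    current, s_cur = cand, s_cand
                    if s_cur > s_best:
                        best, s_best = current, s_cur
            t *= cooling
        return best, s_best

    # Placeholder score over (collimator angle, couch angle, starting phase);
    # the real PPS would come from the plan evaluation described above.
    def toy_pps(p):
        col, couch, phase = p
        return max(0.0, 1.0 - abs(col - 90) / 180 - abs(couch - 10) / 90
                         - abs(phase - 0.83 * math.pi) / math.pi)

    def neighbor(p):
        col, couch, phase = p
        return (col + random.uniform(-5, 5), couch + random.uniform(-2, 2),
                phase + random.uniform(-0.1, 0.1))

    print(simulated_annealing(toy_pps, (45.0, 0.0, 1.0), neighbor))
    ```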

  4. Respiration induced fiducial motion tracking in ultrasound using an extended SFA approach

    NASA Astrophysics Data System (ADS)

    Cao, Kunlin; Bednarz, Bryan; Smith, L. S.; Foo, Thomas K. F.; Patwardhan, Kedar A.

    2015-03-01

    Radiation therapy (RT) plays an essential role in the management of cancers. The precision of the treatment delivery process in chest and abdominal cancers is often impeded by respiration-induced tumor positional variations, which are accounted for by using larger therapeutic margins around the tumor volume, leading to sub-optimal treatment deliveries and risk to healthy tissue. Real-time tracking of tumor motion during RT will help reduce unnecessary margin area and benefit cancer patients by allowing the treatment volume to closely match the positional variation of the tumor volume over time. In this work, we propose a fast approach that transfers target (e.g. tumor) motion, pre-estimated from ultrasound (US) image sequences in a training stage (e.g. before RT), to online data in real time (e.g. acquired during RT). The method is based on extracting feature points of the target object, exploiting a low-dimensional description of the feature motion through slow feature analysis, and finding the most similar image frame from the training data for estimating the current/online object location. The approach is evaluated on two 2D + time and one 3D + time US acquisitions. The locations of six annotated fiducials are used for designing experiments and validating tracking accuracy. The average fiducial distance between the expert's annotation and the location extracted from our indexed training frame is 1.9 ± 0.5 mm. Adding a fast template matching procedure within a small search range reduces the distance to 1.4 ± 0.4 mm. The tracking time per frame is on the order of milliseconds, which is below the frame acquisition time.
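
    The core online step is a lookup: compare a low-dimensional descriptor of the current frame against descriptors of annotated training frames and reuse the stored fiducial location of the best match (optionally refined by template matching). The sketch below shows only that nearest-frame lookup; the slow-feature extraction itself is not reproduced, and the descriptors and fiducial data are randomly generated placeholders.

    ```python
    import numpy as np

    def nearest_training_frame(online_descriptor, train_descriptors, train_fiducials):
        """Return the index and stored fiducial location of the training frame
        whose low-dimensional (e.g. slow-feature) descriptor is closest to the
        descriptor of the current online frame."""
        d = np.linalg.norm(train_descriptors - online_descriptor, axis=1)
        idx = int(np.argmin(d))
        return idx, train_fiducials[idx]

    # Hypothetical data: 200 training frames with 4-D descriptors and 2-D
    # fiducial positions (mm); one online frame descriptor.
    rng = np.random.default_rng(0)
    train_desc = rng.normal(size=(200, 4))
    train_fid = rng.normal(scale=5.0, size=(200, 2))
    idx, fid = nearest_training_frame(rng.normal(size=4), train_desc, train_fid)
    print("matched frame", idx, "fiducial estimate (mm):", np.round(fid, 2))
    ```

    In practice the returned location would then seed a small template-matching search, as described in the abstract.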

  5. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this technically, but the interaction of the third dimension with human viewers is not yet well understood. It has previously been found that any increased load on the visual system, such as prolonged TV watching, computer work or video gaming, can create visual fatigue. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of the third dimension from the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations. It was found that watching stereoscopic 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  6. Structured light-based motion tracking in the limited view of an MR head coil

    NASA Astrophysics Data System (ADS)

    Erikshøj, M.; Olesen, O. V.; Conradsen, K.; Højgaard, L.; Larsen, R.

    2013-02-01

    A markerless motion tracking (MT) system developed for use in PET brain imaging has been tested in the limited field of view (FOV) of the MR head coil from the Siemens Biograph mMR. The system is a 3D surface scanner that uses structured light (SL) to create point cloud reconstructions of the facial surface. The point clouds are continuously realigned to a reference scan to obtain pose estimates. The system has been tested on a mannequin head performing controlled rotational and translational axial movements within the head coil outside the range of the magnetic field. The RMS of the residual error of the rotation was 0.11° and the RMS difference in the translation with the control system was 0.17 mm, within the trackable range of movement.
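
    The pose estimates above come from continuously realigning each point cloud to a reference scan. One standard building block of such rigid alignment is the Kabsch/SVD fit for corresponded points (an ICP iteration alternates this fit with nearest-neighbor matching). The sketch below shows that fit on synthetic data; it is a generic illustration, not the system's actual registration code.

    ```python
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rigid transform (R, t) mapping corresponded 3D points
        src -> dst via the Kabsch/SVD method."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = c_dst - R @ c_src
        return R, t

    # Synthetic check: recover a 5 degree rotation about z plus a translation.
    theta = np.deg2rad(5.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    pts = np.random.default_rng(1).normal(size=(500, 3))
    R_est, t_est = rigid_fit(pts, pts @ R_true.T + np.array([1.0, 0.0, 2.0]))
    cos_err = np.clip((np.trace(R_est.T @ R_true) - 1) / 2, -1.0, 1.0)
    print("rotation error (deg):", np.rad2deg(np.arccos(cos_err)))
    ```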

  7. SU-E-J-199: Evaluation of Motion Tracking Effects On Stereotactic Body Radiotherapy of Abdominal Targets

    SciTech Connect

    Monterroso, M; Dogan, N; Yang, Y

    2014-06-01

    Purpose: To evaluate the effects of respiratory motion on the delivered dose distribution of CyberKnife motion tracking-based stereotactic body radiotherapy (SBRT) of abdominal targets. Methods: Four patients (two pancreas and two liver, and all with 4DCT scans) were retrospectively evaluated. A plan (3D plan) using CyberKnife Synchrony was optimized on the end-exhale phase in the CyberKnife's MultiPlan treatment planning system (TPS), with 40Gy prescribed in 5 fractions. A 4D plan was then created following the 4D planning utility in the MultiPlan TPS, by recalculating dose from the 3D plan beams on all 4DCT phases, with the same prescribed isodose line. The other seven phases of the 4DCT were then deformably registered to the end-exhale phase for 4D dose summation. Doses to the target and organs at risk (OAR) were compared between 3D and 4D plans for each patient. The mean and maximum doses to duodenum, liver, spinal cord and kidneys, and doses to 5cc of duodenum, 700cc of liver, 0.25cc of spinal cord and 200cc of kidneys were used. Results: Target coverage in the 4D plans was about 1% higher for two patients and about 9% lower in the other two. OAR dose differences between 3D and 4D varied among structures, with doses as much as 8.26Gy lower or as much as 5.41Gy higher observed in the 4D plans. Conclusion: The delivered dose can be significantly different from the planned dose for both the target and OAR close to the target, which is caused by the relative geometry change while the beams chase the moving target. Studies will be performed on more patients in the future. The differences of motion tracking versus passive motion management with the use of internal target volumes will also be investigated.

  8. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  9. Effect of Task-Correlated Physiological Fluctuations and Motion in 2D and 3D Echo-Planar Imaging in a Higher Cognitive Level fMRI Paradigm

    PubMed Central

    Ladstein, Jarle; Evensmoen, Hallvard R.; Håberg, Asta K.; Kristoffersen, Anders; Goa, Pål E.

    2016-01-01

    Purpose: To compare 2D and 3D echo-planar imaging (EPI) in a higher cognitive level fMRI paradigm. In particular, to study the link between the presence of task-correlated physiological fluctuations and motion and the fMRI contrast estimates from either 2D EPI or 3D EPI datasets, with and without adding nuisance regressors to the model. A signal model in the presence of partly task-correlated fluctuations is derived, and predictions for contrast estimates with and without nuisance regressors are made. Materials and Methods: Thirty-one healthy volunteers were scanned using 2D EPI and 3D EPI during a virtual environmental learning paradigm. In a subgroup of 7 subjects, heart rate and respiration were logged, and the correlation with the paradigm was evaluated. FMRI analysis was performed using models with and without nuisance regressors. Differences in the mean contrast estimates were investigated by analysis-of-variance using Subject, Sequence, Day, and Run as factors. The distributions of group level contrast estimates were compared. Results: Partially task-correlated fluctuations in respiration, heart rate and motion were observed. Statistically significant differences were found in the mean contrast estimates between 2D EPI and 3D EPI when using a model without nuisance regressors. The inclusion of nuisance regressors for cardiorespiratory effects and motion reduced the difference to a statistically non-significant level. Furthermore, the contrast estimate values shifted more when including nuisance regressors for 3D EPI compared to 2D EPI. Conclusion: The results are consistent with 3D EPI having a higher sensitivity to fluctuations compared to 2D EPI. In the presence of partially task-correlated physiological fluctuations or motion, proper correction is necessary to obtain expectation-correct contrast estimates when using 3D EPI. As such task-correlated physiological fluctuations or motion are difficult to avoid in paradigms exploring higher cognitive functions, 2
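
    The nuisance-regressor idea can be illustrated with a tiny ordinary-least-squares GLM: when a fluctuation is partly task-correlated, omitting its regressor biases the task beta, while including it largely removes the bias. The sketch below uses synthetic data and a hypothetical single respiratory regressor; it is not the study's analysis pipeline.

    ```python
    import numpy as np

    def fit_glm(y, task, nuisance=None):
        """Ordinary least-squares GLM fit of a voxel time series. Design columns:
        task regressor, optional nuisance regressors (e.g. cardiorespiratory and
        motion traces), and a constant. Returns all betas; the first is the task
        contrast estimate."""
        cols = [np.asarray(task, float)]
        if nuisance is not None:
            cols.extend(np.asarray(n, float) for n in nuisance)
        cols.append(np.ones(len(y)))
        X = np.column_stack(cols)
        betas, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
        return betas

    # Synthetic time series: task signal plus a partially task-correlated
    # respiratory fluctuation plus noise.
    rng = np.random.default_rng(0)
    task = np.tile([0.0] * 10 + [1.0] * 10, 10)
    resp = 0.6 * task + rng.normal(scale=0.5, size=task.size)
    y = 1.0 * task + 0.8 * resp + rng.normal(scale=0.3, size=task.size)
    print("beta_task without nuisance:", round(fit_glm(y, task)[0], 2))
    print("beta_task with nuisance:   ", round(fit_glm(y, task, [resp])[0], 2))
    ```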

  10. Computer numerical control (CNC) lithography: light-motion synchronized UV-LED lithography for 3D microfabrication

    NASA Astrophysics Data System (ADS)

    Kim, Jungkwun; Yoon, Yong-Kyu; Allen, Mark G.

    2016-03-01

    This paper presents a computer-numerical-controlled ultraviolet light-emitting diode (CNC UV-LED) lithography scheme for three-dimensional (3D) microfabrication. The CNC lithography scheme utilizes sequential multi-angled UV light exposures along with a synchronized switchable UV light source to create arbitrary 3D light traces, which are transferred into the photosensitive resist. The system comprises a switchable, movable UV-LED array as a light source, a motorized tilt-rotational sample holder, and a computer-control unit. System operation is such that the tilt-rotational sample holder moves in a pre-programmed routine, and the UV-LED is illuminated only at desired positions of the sample holder during the desired time period, enabling the formation of complex 3D microstructures. This facilitates easy fabrication of complex 3D structures, which otherwise would have required multiple manual exposure steps as in the previous multidirectional 3D UV lithography approach. Since it is batch processed, processing time is far less than that of the 3D printing approach at the expense of some reduction in the degree of achievable 3D structure complexity. In order to produce uniform light intensity from the arrayed LED light source, the UV-LED array stage has been kept rotating during exposure. UV-LED 3D fabrication capability was demonstrated through a plurality of complex structures such as V-shaped micropillars, micropanels, a micro-‘hi’ structure, a micro-‘cat’s claw,’ a micro-‘horn,’ a micro-‘calla lily,’ a micro-‘cowboy’s hat,’ and a micro-‘table napkin’ array.

  11. HSA: integrating multi-track Hi-C data for genome-scale reconstruction of 3D chromatin structure.

    PubMed

    Zou, Chenchen; Zhang, Yuping; Ouyang, Zhengqing

    2016-01-01

    Genome-wide 3C technologies (Hi-C) are being increasingly employed to study three-dimensional (3D) genome conformations. Existing computational approaches are unable to integrate accumulating data to facilitate studying 3D chromatin structure and function. We present HSA ( http://ouyanglab.jax.org/hsa/ ), a flexible tool that jointly analyzes multiple contact maps to infer 3D chromatin structure at the genome scale. HSA globally searches the latent structure underlying different cleavage footprints. Its robustness and accuracy outperform or rival existing tools on extensive simulations and orthogonal experiment validations. Applying HSA to recent in situ Hi-C data, we found the 3D chromatin structures are highly conserved across various human cell types. PMID:26936376

  12. A common-path optical coherence tomography distance-sensor based surface tracking and motion compensation hand-held microsurgical tool

    NASA Astrophysics Data System (ADS)

    Zhang, Kang; Gehlbach, Peter; Kang, Jin U.

    2011-03-01

    Microsurgery requires constant attention to involuntary motion due to physiological tremor. In this work, we demonstrated a simple and compact hand-held microsurgical tool capable of surface tracking and motion compensation based on a common-path optical coherence tomography (CP-OCT) distance sensor to improve the accuracy and safety of microsurgery. This tool is miniaturized into a 15 mm-diameter plastic syringe and is capable of surface tracking at less than 5 micrometer resolution. A phantom made with Intralipid layers is used to simulate a real tissue surface, and a single-fiber integrated micro-dissector serves as a surgical tip to perform tracking and accurate incision on the phantom surface. The micro-incision depth is evaluated after each operation through a fast 3D scan by the Fourier-domain OCT system. The results using the surface tracking and motion compensation tool show significant improvement compared to free-hand operation.

  13. How Fast Is Your Body Motion? Determining a Sufficient Frame Rate for an Optical Motion Tracking System Using Passive Markers.

    PubMed

    Song, Min-Ho; Godøy, Rolf Inge

    2016-01-01

    This paper addresses how to determine a sufficient frame (sampling) rate for an optical motion tracking system using passive reflective markers. When using passive markers for the optical motion tracking, avoiding identity confusion between the markers becomes a problem as the speed of motion increases, necessitating a higher frame rate to avoid a failure of the motion tracking caused by marker confusions and/or dropouts. Initially, one might believe that the Nyquist-Shannon sampling rate estimated from the assumed maximal temporal variation of a motion (i.e. a sampling rate at least twice that of the maximum motion frequency) could be the complete solution to the problem. However, this paper shows that also the spatial distance between the markers should be taken into account in determining the suitable frame rate of an optical motion tracking with passive markers. In this paper, a frame rate criterion for the optical tracking using passive markers is theoretically derived and also experimentally verified using a high-quality optical motion tracking system. Both the theoretical and the experimental results showed that the minimum frame rate is proportional to the ratio between the maximum speed of the motion and the minimum spacing between markers, and may also be predicted precisely if the proportional constant is known in advance. The inverse of the proportional constant is here defined as the tracking efficiency constant and it can be easily determined with some test measurements. Moreover, this newly defined constant can provide a new way of evaluating the tracking algorithm performance of an optical tracking system. PMID:26967900
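
    The criterion stated above says the minimum frame rate scales with the ratio of maximum marker speed to minimum marker spacing, with the tracking efficiency constant as the system-specific proportionality factor. A one-function sketch follows; the efficiency value used is purely illustrative and would have to be measured for a given system as the paper describes.

    ```python
    def minimum_frame_rate(v_max_mm_s, min_marker_spacing_mm, tracking_efficiency=0.5):
        """Minimum camera frame rate (Hz) implied by the criterion:
        f_min is proportional to v_max / d_min, with the inverse of the
        tracking efficiency constant as the proportionality factor.
        The value 0.5 is hypothetical, for illustration only."""
        return v_max_mm_s / (tracking_efficiency * min_marker_spacing_mm)

    # Hypothetical example: hand markers 20 mm apart moving at up to 3 m/s.
    print("required frame rate: %.0f Hz" % minimum_frame_rate(3000.0, 20.0))
    ```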

  14. How Fast Is Your Body Motion? Determining a Sufficient Frame Rate for an Optical Motion Tracking System Using Passive Markers

    PubMed Central

    Song, Min-Ho; Godøy, Rolf Inge

    2016-01-01

    This paper addresses how to determine a sufficient frame (sampling) rate for an optical motion tracking system using passive reflective markers. When using passive markers for the optical motion tracking, avoiding identity confusion between the markers becomes a problem as the speed of motion increases, necessitating a higher frame rate to avoid a failure of the motion tracking caused by marker confusions and/or dropouts. Initially, one might believe that the Nyquist-Shannon sampling rate estimated from the assumed maximal temporal variation of a motion (i.e. a sampling rate at least twice that of the maximum motion frequency) could be the complete solution to the problem. However, this paper shows that also the spatial distance between the markers should be taken into account in determining the suitable frame rate of an optical motion tracking with passive markers. In this paper, a frame rate criterion for the optical tracking using passive markers is theoretically derived and also experimentally verified using a high-quality optical motion tracking system. Both the theoretical and the experimental results showed that the minimum frame rate is proportional to the ratio between the maximum speed of the motion and the minimum spacing between markers, and may also be predicted precisely if the proportional constant is known in advance. The inverse of the proportional constant is here defined as the tracking efficiency constant and it can be easily determined with some test measurements. Moreover, this newly defined constant can provide a new way of evaluating the tracking algorithm performance of an optical tracking system. PMID:26967900

  15. Unstructured grids in 3D and 4D for a time-dependent interface in front tracking with improved accuracy

    SciTech Connect

    Glimm, J.; Grove, J. W.; Li, X. L.; Li, Y.; Xu, Z.

    2002-01-01

    Front tracking traces the dynamic evolution of an interface separating different materials or fluid components. In this paper, they describe three types of grid generation methods used in front tracking. The first is the unstructured surface grid. The second is a structured grid-based reconstruction method. The third is a time-space grid, also grid based, for a conservative tracking algorithm with improved accuracy.

  16. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion-correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames that need correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.

  17. Automatic parking lot occupancy computation using motion tracking

    NASA Astrophysics Data System (ADS)

    Justo, Francisco; Kalva, Hari; Raviv, Daniel

    2014-03-01

    Nowadays it is very hard to find available spots in public parking lots and even harder in public facilities such as universities and sports venues. A system that provides drivers with parking availability and parking lot occupancy will allow users to find a parking space much more easily and quickly. This paper presents a system for automatic parking lot occupancy computation using motion tracking. Methods for complexity reduction are presented. The system showed approximately 96% accuracy in determining parking lot occupancy. We showed that by optimizing the resolution and bitrate of the input video, we can reduce the complexity by 70% and still achieve over 90% accuracy. The results showed that high-quality video is not necessary for the proposed algorithm to obtain accurate results.
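
    As a simplified stand-in for the kind of video analysis involved (not the paper's own tracking-based counting), the sketch below flags a parking spot as active when a motion/foreground mask covers a large fraction of its region. The video file, spot rectangles and thresholds are hypothetical.

    ```python
    import cv2
    import numpy as np

    # Hypothetical parking-lot video and hand-labeled spot rectangles (x, y, w, h).
    cap = cv2.VideoCapture("parking_lot.mp4")
    spots = [(40, 120, 60, 100), (110, 120, 60, 100), (180, 120, 60, 100)]
    backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

    occupied_counts = np.zeros(len(spots), dtype=int)
    frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = backsub.apply(frame)                       # motion/foreground mask
        for i, (x, y, w, h) in enumerate(spots):
            roi = fg[y:y + h, x:x + w]
            if np.count_nonzero(roi) > 0.3 * roi.size:  # >30% foreground => flag spot
                occupied_counts[i] += 1
        frames += 1

    cap.release()
    if frames:
        print("fraction of frames each spot was flagged:",
              np.round(occupied_counts / frames, 2))
    ```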

  18. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm.

    PubMed

    Molaei, Mehdi; Sheng, Jian

    2014-12-29

    A better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3-D E. coli bacteria locations of a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177

  19. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm

    PubMed Central

    Molaei, Mehdi; Sheng, Jian

    2014-01-01

    Abstract: A better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3-D E. coli bacteria locations of a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177

  20. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model

    PubMed Central

    Song, Jin-Myoung; Cho, Jin-Hyoung

    2016-01-01

    Purpose The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scan. Materials and Methods Fifteen dry skulls were incorporated into a motion controller which simulated four types of head motion during CBCT scan: 2 horizontal rotations (to the right/to the left) and 2 vertical rotations (upward/downward). Each movement was triggered to occur at the start of the scan for 1 second by remote control. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Results Rendered surface models with head motion were similar to the control model in appearance; however, the landmark identification errors showed larger values in models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). Statistically significant difference in the errors between the right and left side landmark was present in the left side rotation which was opposite direction to the scanner rotation (P < .05). Conclusions Patient movement during CBCT scan might cause landmark identification errors on the 3D surface model in relation to the direction of the scanner rotation. Clinicians should take this into consideration to prevent patient movement during CBCT scan, particularly horizontal movement. PMID:27065238

  1. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  2. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    Through real-time visual feedback of a 3D scatter diagram of pulsatile tissue motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt-sensor to measure the probe orientation. The real-time 3D visualization was realized by developing an extended version of the PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues has been continuously generated and visualized as a projection image alongside the ultrasonic movie of the current section at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the Willis ring and the cerebellar arteries, in whose blood flow pediatricians have great interest because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  3. Human motion tracking by temporal-spatial local gaussian process experts.

    PubMed

    Zhao, Xu; Fu, Yun; Liu, Yuncai

    2011-04-01

    Human pose estimation via motion tracking systems can be considered a regression problem within a discriminative framework. It is always a challenging task to model the mapping from observation space to state space because of the high-dimensional, multimodal conditional distribution. In order to build the mapping, existing techniques usually involve a large set of training samples in the learning process but are limited in their capability to deal with multimodality. We propose, in this work, a novel online sparse Gaussian Process (GP) regression model to recover 3-D human motion in monocular videos. Particularly, we investigate the fact that for a given test input, its output is mainly determined by the training samples potentially residing in its local neighborhood and defined in the unified input-output space. This leads to a local mixture-of-GP-experts system composed of different local GP experts, each of which dominates a mapping behavior with a specific covariance function adapted to a local region. To handle the multimodality, we combine both temporal and spatial information to obtain two categories of local experts. The temporal and spatial experts are integrated into a seamless hybrid system, which is automatically self-initialized and robust for visual tracking of nonlinear human motion. Learning and inference are extremely efficient as all the local experts are defined online within very small neighborhoods. Extensive experiments on two real-world databases, HumanEva and PEAR, demonstrate the effectiveness of our proposed model, which significantly improves on the performance of existing models. PMID:20851794
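
    A much-simplified version of the "local expert" idea is to fit a small Gaussian process only on the k nearest training samples of each test input. The sketch below does exactly that with scikit-learn on synthetic descriptor/pose data; it is a stand-in illustration, not the paper's temporal-spatial mixture of experts, and all data and dimensions are hypothetical.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def local_gp_predict(X_train, Y_train, x_test, k=25):
        """Predict the pose for one observation x_test using a GP expert fitted
        only on its k nearest neighbours in input space (a simplified local
        expert; the paper additionally uses temporal experts)."""
        d = np.linalg.norm(X_train - x_test, axis=1)
        idx = np.argsort(d)[:k]
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                                      normalize_y=True)
        gp.fit(X_train[idx], Y_train[idx])
        return gp.predict(x_test[None, :])[0]

    # Hypothetical data: 60-D image descriptors mapped to 30-D pose vectors.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 60))
    W = rng.normal(size=(60, 30))
    Y = np.tanh(X @ W) + 0.05 * rng.normal(size=(500, 30))
    pose = local_gp_predict(X, Y, X[0] + 0.01 * rng.normal(size=60))
    print("predicted pose dims:", pose.shape)
    ```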

  4. Patient motion tracking in the presence of measurement errors.

    PubMed

    Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter

    2009-01-01

    The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcome. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning, however certain limitations still remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms, and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time. PMID:19964394
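
    The abstract combines optical tracking with Kalman filtering to reject measurement noise before compensating patient motion. The sketch below is a minimal constant-velocity Kalman filter over one position coordinate; the sampling rate, noise variances and test signal are hypothetical, and this is not the authors' implementation.

    ```python
    import numpy as np

    def kalman_track(measurements, dt=0.02, meas_var=0.25, accel_var=1.0):
        """Constant-velocity Kalman filter over one position coordinate (mm).
        State x = [position, velocity]; returns the filtered positions."""
        F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition
        H = np.array([[1.0, 0.0]])                         # observe position only
        Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],  # process noise
                                  [dt**3 / 2, dt**2]])
        R = np.array([[meas_var]])                         # measurement noise
        x = np.array([[measurements[0]], [0.0]])
        P = np.eye(2)
        out = []
        for z in measurements:
            x = F @ x                                      # predict
            P = F @ P @ F.T + Q
            y = np.array([[z]]) - H @ x                    # update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
            out.append(float(x[0, 0]))
        return out

    # Hypothetical noisy readings of a patient drifting 2 mm over 2 s at 50 Hz.
    t = np.arange(0, 2, 0.02)
    z = 1.0 * t + np.random.default_rng(0).normal(scale=0.5, size=t.size)
    print("last filtered position (mm): %.2f" % kalman_track(z)[-1])
    ```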