Sample records for frame motion estimation

  1. Improved frame-based estimation of head motion in PET brain imaging.

    PubMed

    Mukherjee, J M; Lindsay, C; Mukherjee, A; Olivier, P; Shao, L; King, M A; Licho, R

    2016-05-01

    Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration.
Since their method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between CT and PET acquisitions. The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames causes degradation of accuracy to about 2 mm based on the motion type.
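    The compensation-and-summing step described above can be sketched in a few lines of numpy. This toy version assumes pure integer translations between the short frames (the paper estimates full six-degree-of-freedom rigid motion with a 3D multiresolution registration), so `np.roll` stands in for the real resampling step:

```python
import numpy as np

def compensate_and_sum(frames, shifts):
    """Undo the estimated per-frame integer shift, then sum the frames.

    frames : list of 2D arrays (short-duration reconstructions)
    shifts : list of (dy, dx) estimated interframe displacements
    """
    ref = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        # roll each frame back to the reference position before summing
        ref += np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
    return ref

# toy example: a bright "hot spot" that drifts between frames
base = np.zeros((8, 8)); base[3, 3] = 1.0
frames = [np.roll(np.roll(base, dy, axis=0), dx, axis=1)
          for dy, dx in [(0, 0), (1, 0), (2, 1)]]
summed = compensate_and_sum(frames, [(0, 0), (1, 0), (2, 1)])
assert summed[3, 3] == 3.0   # after compensation all activity lands on one voxel
```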

  2. Improved frame-based estimation of head motion in PET brain imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, J. M., E-mail: joyeeta.mitra@umassmed.edu; Lindsay, C.; King, M. A.

    Purpose: Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. Methods: The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. Results: The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration.
Since their method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between CT and PET acquisitions. Conclusions: The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames causes degradation of accuracy to about 2 mm based on the motion type.

  3. Improved frame-based estimation of head motion in PET brain imaging

    PubMed Central

    Mukherjee, J. M.; Lindsay, C.; Mukherjee, A.; Olivier, P.; Shao, L.; King, M. A.; Licho, R.

    2016-01-01

    Purpose: Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. Methods: The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. Results: The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration.
Since their method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between CT and PET acquisitions. Conclusions: The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames causes degradation of accuracy to about 2 mm based on the motion type. PMID:27147355

  4. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
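    The block-matching step this record relies on can be illustrated with a minimal full-search matcher using the sum of absolute differences (SAD). This is a generic sketch of the technique in numpy, not the authors' compressive-video implementation; block size and search range are illustrative:

```python
import numpy as np

def block_match(prev, curr, y, x, bs=8, sr=4):
    """Full-search SAD block matching: find where the bs x bs block at
    (y, x) in `curr` came from in `prev`, searching +/- sr pixels."""
    block = curr[y:y + bs, x:x + bs]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            py, px = y + dy, x + dx
            if py < 0 or px < 0 or py + bs > prev.shape[0] or px + bs > prev.shape[1]:
                continue  # candidate block falls outside the previous frame
            sad = np.abs(prev[py:py + bs, px:px + bs] - block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(0)
prev = rng.random((32, 32))
curr = np.roll(prev, (2, -1), axis=(0, 1))   # whole frame shifted by (2, -1)
assert block_match(prev, curr, 8, 8) == (-2, 1)
```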

  5. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    PubMed

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; bidirectional motion compensation is then applied by blending the two. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
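    The bidirectional compensation idea can be sketched in a toy setting: fetch the midframe halfway along the motion trajectory from both the previous and next frame, then average. This sketch assumes a single global integer motion vector (the paper computes a dense field per pixel and blends forward and backward predictions):

```python
import numpy as np

def interpolate_midframe(prev, nxt, mv):
    """Bidirectional motion-compensated interpolation with one global
    motion vector mv = (dy, dx) from prev to nxt (integer toy case)."""
    dy, dx = mv
    fwd = np.roll(prev, (dy // 2, dx // 2), axis=(0, 1))                  # prev moved halfway forward
    bwd = np.roll(nxt, (-(dy - dy // 2), -(dx - dx // 2)), axis=(0, 1))   # nxt moved halfway back
    return 0.5 * (fwd + bwd)

prev = np.zeros((8, 8)); prev[2, 2] = 1.0
nxt = np.roll(prev, (4, 2), axis=(0, 1))   # object moved by (4, 2) between frames
mid = interpolate_midframe(prev, nxt, (4, 2))
assert mid[4, 3] == 1.0                    # object appears halfway along its path
```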

  6. Three-Dimensional Motion Estimation Using Shading Information in Multiple Frames

    DTIC Science & Technology

    1989-09-01

    Three-Dimensional Motion Estimation Using Shading Information in Multiple Frames. Jean-Pierre Schott, MIT Artificial Intelligence Laboratory. Keywords: vision, 3-D structure, 3-D vision, shape from shading, multiple frames. Abstract (excerpt): ...motion and shading have been treated as two disjoint problems. On the one hand, researchers studying motion or structure from motion often assume

  7. Improved optical flow motion estimation for digital image stabilization

    NASA Astrophysics Data System (ADS)

    Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao

    2015-11-01

    Optical flow is the instantaneous motion vector at each pixel in the image frame at a time instant. The gradient-based approach to optical flow computation cannot work well when the video motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramidal multi-resolution coarse-to-fine search strategy: a pyramid is used to obtain multi-resolution images; an iterative relationship from the highest level to the lowest level yields the inter-frame affine parameters; and subsequent frames are compensated back to the first frame to obtain the stabilized sequence. The experimental results demonstrate that the proposed method has good performance in global motion estimation.
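    The coarse-to-fine idea can be sketched with numpy: estimate the motion at the coarsest pyramid level, double the estimate when moving to the next finer level, and refine with a small local search. This sketch is simplified to a single global translation (the paper estimates affine parameters, and real stabilizers use gradient-based refinement rather than exhaustive search):

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 averaging."""
    return 0.25 * (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2])

def search_shift(a, b, guess, r=1):
    """Small exhaustive search around `guess` for the shift taking a -> b."""
    gy, gx = guess
    best, best_s = np.inf, guess
    for dy in range(gy - r, gy + r + 1):
        for dx in range(gx - r, gx + r + 1):
            err = np.abs(np.roll(a, (dy, dx), axis=(0, 1)) - b).sum()
            if err < best:
                best, best_s = err, (dy, dx)
    return best_s

def pyramid_shift(a, b, levels=3):
    """Coarse-to-fine global translation: estimate at the coarsest level,
    double the estimate, and refine at each finer level."""
    pyr = [(a, b)]
    for _ in range(levels - 1):
        a, b = downsample(a), downsample(b)
        pyr.append((a, b))
    dy, dx = 0, 0
    for la, lb in reversed(pyr):             # coarsest level first
        dy, dx = search_shift(la, lb, (2 * dy, 2 * dx), r=1)
    return dy, dx

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (4, 4), axis=(0, 1))          # shift larger than any single search radius
assert pyramid_shift(a, b) == (4, 4)
```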

  8. A robust motion estimation system for minimal invasive laparoscopy

    NASA Astrophysics Data System (ADS)

    Marcinczak, Jan Marek; von Öhsen, Udo; Grigat, Rolf-Rainer

    2012-02-01

    Laparoscopy is a reliable imaging method to examine the liver. However, due to the limited field of view, a lot of experience is required from the surgeon to interpret the observed anatomy. Reconstruction of organ surfaces provides valuable additional information to the surgeon for a reliable diagnosis. Without an additional external tracking system, the structure can be recovered from feature correspondences between different frames. In laparoscopic images, blurred frames, specular reflections and inhomogeneous illumination make feature tracking a challenging task. We propose an ego-motion estimation system for minimal invasive laparoscopy that can cope with specular reflection, inhomogeneous illumination and blurred frames. To obtain robust feature correspondences, the approach combines SIFT and specular reflection segmentation with a multi-frame tracking scheme. The calibrated five-point algorithm is used with the MSAC robust estimator to compute the motion of the endoscope from multi-frame correspondences. The algorithm is evaluated using endoscopic videos of a phantom. The small incisions and the rigid endoscope limit the motion in minimal invasive laparoscopy. These limitations are considered in our evaluation and are used to analyze the accuracy of pose estimation that can be achieved by our approach. The endoscope is moved by a robotic system and the ground truth motion is recorded. The evaluation on typical endoscopic motion gives precise results and demonstrates the practicability of the proposed pose estimation system.

  9. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew E.; Matthies, Larry H.

    1998-01-01

    Space science and solar system exploration are driving NASA to develop an array of small body missions ranging in scope from near body flybys to complete sample return. This paper presents an algorithm for onboard motion estimation that will enable the precision guidance necessary for autonomous small body landing. Our techniques are based on automatic feature tracking between a pair of descent camera images followed by two frame motion estimation and scale recovery using laser altimetry data. The output of our algorithm is an estimate of rigid motion (attitude and position) and motion covariance between frames. This motion estimate can be passed directly to the spacecraft guidance and control system to enable rapid execution of safe and precise trajectories.

  10. Temporally diffeomorphic cardiac motion estimation from three-dimensional echocardiography by minimization of intensity consistency error.

    PubMed

    Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo

    2014-05-01

    Quantitative analysis of cardiac motion is important for evaluation of heart function. Three-dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field, instead of the displacement field, is optimized. The optimal velocity field optimizes a novel similarity function, which they call the intensity consistency error, defined by evolving multiple consecutive frames to each time point. The optimization problem is solved by using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests on simulated and real cardiac sequences showed that the authors' method is more accurate than other competing temporally diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions are in good agreement with ground truth and the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. The authors proposed a diffeomorphic motion estimation method with temporal smoothness obtained by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors' method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods.

  11. Variable disparity-motion estimation based fast three-view video coding

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. These proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.

  12. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    PubMed

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  13. Myocardial motion estimation of tagged cardiac magnetic resonance images using tag motion constraints and multi-level b-splines interpolation.

    PubMed

    Liu, Hong; Yan, Meng; Song, Enmin; Wang, Jie; Wang, Qian; Jin, Renchao; Jin, Lianghai; Hung, Chih-Cheng

    2016-05-01

    Myocardial motion estimation of tagged cardiac magnetic resonance (TCMR) images is of great significance in clinical diagnosis and the treatment of heart disease. Currently, the harmonic phase analysis method (HARP) and the local sine-wave modeling method (SinMod) have been proven as two state-of-the-art motion estimation methods for TCMR images, since they can directly obtain the inter-frame motion displacement vector field (MDVF) with high accuracy and fast speed. By comparison, SinMod has better performance over HARP in terms of displacement detection, noise and artifacts reduction. However, the SinMod method has some drawbacks: 1) it is unable to estimate local displacements larger than half of the tag spacing; 2) it has observable errors in tracking of tag motion; and 3) the estimated MDVF usually has large local errors. To overcome these problems, we present a novel motion estimation method in this study. The proposed method tracks the motion of tags and then estimates the dense MDVF by using the interpolation. In this new method, a parameter estimation procedure for global motion is applied to match tag intersections between different frames, ensuring specific kinds of large displacements being correctly estimated. In addition, a strategy of tag motion constraints is applied to eliminate most of errors produced by inter-frame tracking of tags and the multi-level b-splines approximation algorithm is utilized, so as to enhance the local continuity and accuracy of the final MDVF. In the estimation of the motion displacement, our proposed method can obtain a more accurate MDVF compared with the SinMod method and our method can overcome the drawbacks of the SinMod method. However, the motion estimation accuracy of our method depends on the accuracy of tag lines detection and our method has a higher time complexity. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
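    The core modeling idea, describing a pixel's position as a high-order polynomial in time so that velocity and acceleration fall out of the coefficients, can be illustrated with a plain least-squares fit. This is a toy illustration of the model only, not the authors' dynamic-filtering estimator; the trajectory values are made up:

```python
import numpy as np

# positions of a tracked pixel at frame times t, following x(t) = x0 + v*t + 0.5*a*t^2
t = np.arange(6, dtype=float)
x0, v, a = 2.0, 1.5, 0.4
x = x0 + v * t + 0.5 * a * t * t

# a second-order least-squares fit recovers velocity and acceleration
c2, c1, c0 = np.polyfit(t, x, 2)   # x(t) ~ c2*t^2 + c1*t + c0
assert abs(2 * c2 - a) < 1e-8      # acceleration = 2 * quadratic coefficient
assert abs(c1 - v) < 1e-8          # velocity = linear coefficient
```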

  15. Patch-based frame interpolation for old films via the guidance of motion paths

    NASA Astrophysics Data System (ADS)

    Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi

    2018-04-01

    Due to improper preservation, traditional films often exhibit frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. First, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate the intermediate frames from the most similar patches. Since the patch matching is based on pre-intermediate frames that embed the motion-path constraint, the resulting frame interpolation looks natural and unforced. We test different types of old film sequences and compare with other methods; the results show that our method achieves the desired performance without hole or ghost effects.

  16. Constrained motion estimation-based error resilient coding for HEVC

    NASA Astrophysics Data System (ADS)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a motion vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by the temporal motion vector. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase in decoded video quality (PSNR) of up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.

  17. Motion compensation and noise tolerance in phase-shifting digital in-line holography.

    PubMed

    Stenner, Michael D; Neifeld, Mark A

    2006-05-15

    We present a technique for phase-shifting digital in-line holography which compensates for lateral object motion. By collecting two frames of interference between object and reference fields with identical reference phase, one can estimate the lateral motion that occurred between frames using the cross-correlation. We also describe a very general linear framework for phase-shifting holographic reconstruction which minimizes additive white Gaussian noise (AWGN) for an arbitrary set of reference field amplitudes and phases. We analyze the technique's sensitivity to noise (AWGN, quantization, and shot), errors in the reference fields, errors in motion estimation, resolution, and depth of field. We also present experimental motion-compensated images achieving the expected resolution.
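    The lateral-shift estimation via cross-correlation of two frames can be sketched with an FFT-based correlation peak search. This is a generic sketch of the technique, assuming cyclic (wrap-around) shifts and integer displacements, not the authors' holographic pipeline:

```python
import numpy as np

def xcorr_shift(f1, f2):
    """Estimate the (cyclic) lateral shift taking f1 to f2 via the peak
    of the FFT-based cross-correlation."""
    cc = np.fft.ifft2(np.fft.fft2(f1).conj() * np.fft.fft2(f2)).real
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    # map peaks past the midpoint to negative shifts
    if dy > f1.shape[0] // 2: dy -= f1.shape[0]
    if dx > f1.shape[1] // 2: dx -= f1.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
f1 = rng.random((64, 64))
f2 = np.roll(f1, (3, -5), axis=(0, 1))   # object moved by (3, -5) between frames
assert xcorr_shift(f1, f2) == (3, -5)
```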

  18. Building and using a statistical 3D motion atlas for analyzing myocardial contraction in MRI

    NASA Astrophysics Data System (ADS)

    Rougon, Nicolas F.; Petitjean, Caroline; Preteux, Francoise J.

    2004-05-01

    We address the issue of modeling and quantifying myocardial contraction from 4D MR sequences, and present an unsupervised approach for building and using a statistical 3D motion atlas for the normal heart. This approach relies on a state-of-the-art variational non-rigid registration (NRR) technique using generalized information measures, which allows for robust intra-subject motion estimation and inter-subject anatomical alignment. The atlas is built from a collection of jointly acquired tagged and cine MR exams in short- and long-axis views. Subject-specific non-parametric motion estimates are first obtained by incremental NRR of tagged images onto the end-diastolic (ED) frame. Individual motion data are then transformed into the coordinate system of a reference subject using subject-to-reference mappings derived by NRR of cine ED images. Finally, principal component analysis of aligned motion data is performed for each cardiac phase, yielding a mean model and a set of eigenfields encoding kinematic variability. The latter define an organ-dedicated hierarchical motion basis which enables parametric motion measurement from arbitrary tagged MR exams. To this end, the atlas is transformed into subject coordinates by reference-to-subject NRR of ED cine frames. Atlas-based motion estimation is then achieved by parametric NRR of tagged images onto the ED frame, yielding a compact description of myocardial contraction during diastole.

  19. Registration Methods for IVUS: Transversal and Longitudinal Transducer Motion Compensation.

    PubMed

    Talou, Gonzalo D Maso; Blanco, Pablo J; Larrabide, Ignacio; Bezerra, Cristiano Guedes; Lemos, Pedro A; Feijoo, Raul A

    2017-04-01

    Intravascular ultrasound (IVUS) is a fundamental imaging technique for atherosclerotic plaque assessment, interventionist guidance, and, ultimately, tissue characterization. The studies acquired by this technique present the spatial description of the vessel during the cardiac cycle. However, the study frames are not properly sorted. As gating methods deal only with the cardiac-phase classification of the frames, the gated studies lack motion compensation between vessel and catheter. In this study, we develop registration strategies to arrange the vessel data into its rightful spatial sequence. Registration is performed by compensating longitudinal and transversal relative motion between vessel and catheter. Transversal motion is identified through maximum likelihood estimator optimization, while longitudinal motion is estimated by a neighborhood similarity estimator among the study frames. A strongly coupled implementation is proposed to compensate for both motion components at once. Loosely coupled implementations (DLT and DTL) decouple the registration process, resulting in more computationally efficient algorithms at the cost of a smaller set of candidate solutions. The DTL outperforms the DLT and coupled implementations in terms of accuracy by factors of 1.9 and 1.4, respectively. Sensitivity analysis shows that perivascular tissue must be considered to obtain the best registration outcome. Evidence suggests that the method is able to measure axial strain along the vessel wall. The proposed registration sorts the IVUS frames by spatial location, which is crucial for a correct interpretation of the vessel wall kinematics along the cardiac phases.

  20. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.

    PubMed

    Rottmann, Joerg; Keall, Paul; Berbeco, Ross

    2013-09-01

    To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.

  1. Visual processing of rotary motion.

    PubMed

    Werkhoven, P; Koenderink, J J

    1991-01-01

    Local descriptions of velocity fields (e.g., rotation, divergence, and deformation) contain a wealth of information for form perception and ego motion. In spite of this, human psychophysical performance in estimating these entities has not yet been thoroughly examined. In this paper, we report on the visual discrimination of rotary motion. A sequence of image frames is used to elicit an apparent rotation of an annulus, composed of dots in the frontoparallel plane, around a fixation spot at the center of the annulus. Differential angular velocity thresholds are measured as a function of the angular velocity, the diameter of the annulus, the number of dots, the display time per frame, and the number of frames. The results show a U-shaped dependence of angular velocity discrimination on spatial scale, with minimal Weber fractions of 7%. Experiments with a scatter in the distance of the individual dots to the center of rotation demonstrate that angular velocity cannot be assessed directly; perceived angular velocity depends strongly on the distance of the dots relative to the center of rotation. We suggest that the estimation of rotary motion is mediated by local estimations of linear velocity.

  2. SU-D-210-05: The Accuracy of Raw and B-Mode Image Data for Ultrasound Speckle Tracking in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Shea, T; Bamber, J; Harris, E

    Purpose: For ultrasound speckle tracking there is some evidence that the envelope-detected signal (the main step in B-mode image formation) may be more accurate than raw ultrasound data for tracking larger inter-frame tissue motion. This study investigates the accuracy of raw radio-frequency (RF) versus non-logarithmically compressed envelope-detected (B-mode) data for ultrasound speckle tracking in the context of image-guided radiation therapy. Methods: Transperineal ultrasound RF data were acquired (with a 7.5 MHz linear transducer operating at a 12 Hz frame rate) from a speckle phantom moving with realistic intra-fraction prostate motion derived from a commercial tracking system. A normalised cross-correlation template-matching algorithm was used to track speckle motion at the focus using (i) the RF signal and (ii) the B-mode signal. A range of imaging rates (0.5 to 12 Hz) was simulated by decimating the imaging sequences, thereby simulating larger to smaller inter-frame displacements. Motion estimation accuracy was quantified by comparison with the known phantom motion. Results: The differences between RF and B-mode motion estimation accuracy (2D mean and 95% errors relative to ground truth displacements) were less than 0.01 mm for stable and persistent motion types and 0.2 mm for transient motion for imaging rates of 0.5 to 12 Hz. The mean correlation for all motion types and imaging rates was 0.851 and 0.845 for RF and B-mode data, respectively. Data type is expected to have most impact on axial (superior-inferior) motion estimation. Axial differences were <0.004 mm for stable and persistent motion and <0.3 mm for transient motion (axial mean errors were lowest for B-mode in all cases). Conclusions: Using the RF or B-mode signal for speckle motion estimation is comparable for translational prostate motion. B-mode image formation may involve other signal-processing steps which also influence motion estimation accuracy. A similar study for respiratory-induced motion would also be prudent. This work is supported by Cancer Research UK Programme Grant C33589/A19727.
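    As an illustration of the normalised cross-correlation template matching described above (a minimal sketch, not the study's implementation; the function name and parameters are invented here), an exhaustive integer-pixel tracker can be written as:

```python
import numpy as np

def ncc_track(prev_frame, cur_frame, top_left, size, search=5):
    """Track a speckle template by exhaustive normalised cross-correlation.

    prev_frame/cur_frame: 2D float arrays; top_left: (row, col) of the
    template in prev_frame; size: template side length; search: half-width
    of the search window in pixels. Returns the (drow, dcol) displacement
    that maximises the NCC score.
    """
    r, c = top_left
    tpl = prev_frame[r:r + size, c:c + size]
    tpl = tpl - tpl.mean()
    tnorm = np.sqrt((tpl ** 2).sum())
    best, best_d = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = cur_frame[r + dr:r + dr + size, c + dc:c + dc + size]
            if win.shape != tpl.shape:
                continue  # candidate window falls outside the frame
            w = win - win.mean()
            denom = tnorm * np.sqrt((w ** 2).sum())
            if denom == 0:
                continue
            score = float((tpl * w).sum() / denom)
            if score > best:
                best, best_d = score, (dr, dc)
    return best_d
```

    Subsample accuracy, as needed for tissue tracking, would additionally require interpolating the correlation surface around the peak.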

  3. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    NASA Astrophysics Data System (ADS)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum-likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
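    The MLEM deconvolution idea can be sketched with the closely related Richardson-Lucy iteration, which is the MLEM estimator for deblurring with a known kernel (shown here in 1D for clarity; this is an illustration of the technique class, not the authors' algorithm):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Richardson-Lucy deconvolution: the MLEM update for a known,
    non-negative blurring kernel psf (1D here for clarity)."""
    psf_flip = psf[::-1]                         # mirrored kernel
    est = np.full_like(blurred, blurred.mean())  # flat initial estimate
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)  # avoid divide-by-zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est
```

    With known motion, the blur kernel is the (discretised) trajectory of the head during the frame, and the same multiplicative update applies in 3D.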

  4. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    PubMed

    Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao

    2018-02-01

    Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high-frame-rate videos produced by FRUC incur either higher bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, so that interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and reuses the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem in which both coding bitrate consumption and visual quality are taken into account. Because the original frames are absent, the distortion model for interpolated frames is established from the motion vector reliability and the coding quantization error. Experimental results demonstrate that the proposed framework can achieve a 21%-42% reduction in BDBR compared with the traditional methods of FRUC cascaded with coding.

  5. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery

    PubMed Central

    Rottmann, Joerg; Keall, Paul; Berbeco, Ross

    2013-01-01

    Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time. PMID:24007146

  6. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  7. Detection of obstacles on runway using Ego-Motion compensation and tracking of significant features

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar (Principal Investigator); Camps, Octavia (Principal Investigator); Gandhi, Tarak; Devadiga, Sadashiva

    1996-01-01

    This report describes a method for obstacle detection on a runway for autonomous navigation and landing of an aircraft. Detection is done in the presence of extraneous features such as tire marks. Suitable features are extracted from the image, and warping using approximately known camera and plane parameters is performed in order to compensate for ego-motion as far as possible. Residual disparity after warping is estimated using an optical flow algorithm. Features are tracked from frame to frame so as to obtain more reliable estimates of their motion. Corrections are made to the motion parameters using the residual disparities and a robust method, and features having large residual disparities are signaled as obstacles. A sensitivity analysis of the procedure is also presented. Nelson's optical flow constraint is proposed to separate moving obstacles from stationary ones. A Bayesian framework is used at every stage so that the confidence in the estimates can be determined.
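    The warp-then-threshold idea can be reduced to a toy form: compensate a known global image shift (standing in for the ego-motion warp) and flag pixels whose residual disparity remains large. This is only a schematic sketch with invented names, assuming a pure translation and non-negative shifts:

```python
import numpy as np

def detect_obstacles(prev, cur, ego_shift, thresh=0.5):
    """Flag pixels whose residual disparity survives ego-motion compensation.

    prev/cur: 2D float frames; ego_shift: non-negative (drow, dcol) global
    image motion predicted from the approximately known camera and plane
    parameters. Returns a boolean mask of candidate obstacle pixels.
    """
    dr, dc = ego_shift
    warped = np.roll(np.roll(prev, dr, axis=0), dc, axis=1)  # compensate ego-motion
    residual = np.abs(cur - warped)
    # rows/cols wrapped around by np.roll are invalid; exclude them
    mask = np.zeros_like(residual, dtype=bool)
    mask[dr:, dc:] = residual[dr:, dc:] > thresh
    return mask
```

    The report's method replaces the global shift with a parametric warp and the threshold with a robust, Bayesian residual test.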

  8. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
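    The least-mean-squares motion step above can be illustrated with the classic SVD (Kabsch/Umeyama) solution for the rigid motion between two matched 3D point sets, such as those produced by stereoscopy in successive frames. This is a standard textbook building block, not the system's actual code:

```python
import numpy as np

def rigid_motion(p, q):
    """Least-squares rigid motion (R, t) such that q ≈ R @ p + t,
    solved in closed form via SVD. p, q: (N, 3) matched 3D point sets."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t
```

    In the system described, points on independently moving objects violate this rigid model and are separated out before (or robustly down-weighted during) the fit.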

  9. Determination of Galactic Aberration from VLBI Measurements and Its Effect on VLBI Reference Frames and Earth Orientation Parameters.

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.

    2014-12-01

    Galactic aberration is due to the motion of the solar system barycenter around the galactic center. It results in a systematic pattern of apparent proper motion of radio sources observed by VLBI. This effect is not currently included in VLBI analysis. Estimates of the size of this effect indicate that it is important that this secular aberration drift be accounted for in order to maintain an accurate celestial reference frame and allow astrometry at the several-microarcsecond level. Future geodetic observing systems are being designed to be capable of producing a future terrestrial reference frame with an accuracy of 1 mm and a stability of 0.1 mm/year. We evaluate the effect of galactic aberration on attaining these reference frame goals. This presentation will discuss 1) the estimation of galactic aberration from VLBI data and 2) the effect of aberration on the Terrestrial and Celestial Reference Frames and the Earth Orientation Parameters that connect these frames.
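    The size of the effect follows from a back-of-envelope calculation: the secular aberration drift dipole amplitude is the solar galactocentric acceleration divided by the speed of light. The galactic constants below are approximate, illustrative values:

```python
import math

# Assumed galactic constants (approximate, for illustration only)
V0 = 246e3            # solar circular speed about the galactic centre, m/s
R0 = 8.2 * 3.0857e19  # galactocentric distance, m (8.2 kpc)
C = 2.99792458e8      # speed of light, m/s
YEAR = 3.1557e7       # Julian year, s
RAD_TO_UAS = 180 / math.pi * 3600e6  # radians -> microarcseconds

def aberration_drift_uas_per_yr():
    """Dipole amplitude of the apparent proper-motion field caused by the
    solar system's galactocentric acceleration: mu = (V0**2 / R0) / c."""
    accel = V0 ** 2 / R0  # centripetal acceleration, m/s^2
    return accel / C * YEAR * RAD_TO_UAS
```

    With these values the amplitude comes out near 5 microarcseconds per year, consistent with the "several microarcsecond" astrometry level mentioned in the abstract.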

  10. Variability in wood-frame building damage using broad-band synthetic ground motions: a comparative numerical study with recorded motions

    USGS Publications Warehouse

    Pei, Shiling; van de Lindt, John W.; Hartzell, Stephen; Luco, Nicolas

    2014-01-01

    Earthquake damage to light-frame wood buildings is a major concern for North America because of the volume of this construction type. In order to estimate wood building damage using synthetic ground motions, we need to verify the ability of synthetically generated ground motions to simulate realistic damage for this structure type. Through a calibrated damage potential indicator, four different synthetic ground motion models are compared with the historically recorded ground motions at corresponding sites. We conclude that damage for sites farther from the fault (>20 km) is under-predicted on average and damage at closer sites is sometimes over-predicted.

  11. Comparison of method using phase-sensitive motion estimator with speckle tracking method and application to measurement of arterial wall motion

    NASA Astrophysics Data System (ADS)

    Miyajo, Akira; Hasegawa, Hideyuki

    2018-07-01

    At present, the speckle tracking method is widely used as a two- or three-dimensional (2D or 3D) motion estimator for the measurement of cardiovascular dynamics. However, this method requires high-level interpolation of a function, which evaluates the similarity between ultrasonic echo signals in two frames, to estimate a subsample small displacement in high-frame-rate ultrasound, which results in a high computational cost. To overcome this problem, a 2D motion estimator using the 2D Fourier transform, which does not require any interpolation process, was proposed by our group. In this study, we compared the accuracies of the speckle tracking method and our method using a 2D motion estimator, and applied the proposed method to the measurement of motion of a human carotid arterial wall. The bias error and standard deviation in the lateral velocity estimates obtained by the proposed method were 0.048 and 0.282 mm/s, respectively, which were significantly better than those (−0.366 and 1.169 mm/s) obtained by the speckle tracking method. The calculation time of the proposed phase-sensitive method was 97% shorter than that of the speckle tracking method. Furthermore, the in vivo experimental results showed that a characteristic change in velocity around the carotid bifurcation could be detected by the proposed method.
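    The interpolation-free, Fourier-based idea can be illustrated in its simplest (integer-shift) form by classic phase correlation; the cited method goes further and extracts subsample displacements from the phase itself, but the sketch below shows why no similarity-surface interpolation is needed:

```python
import numpy as np

def phase_correlate(a, b):
    """Integer displacement of frame b relative to frame a via 2D phase
    correlation: whiten the cross-power spectrum and locate the delta peak."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(A) * B
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
    dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
    # fold shifts larger than half the frame back to negative values
    if dr > a.shape[0] // 2:
        dr -= a.shape[0]
    if dc > a.shape[1] // 2:
        dc -= a.shape[1]
    return int(dr), int(dc)
```

    A subsample estimator would fit the phase slope of `cross` directly instead of taking the argmax.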

  12. A hybrid frame concealment algorithm for H.264/AVC.

    PubMed

    Yan, Bo; Gharavi, Hamid

    2010-01-01

    In packet-based video transmissions, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed in order to combat channel errors; however, most of the existing algorithms can only deal with the loss of macroblocks and are not able to conceal the whole missing frame. In order to resolve this problem, in this paper, we have proposed a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame, and it is able to provide more accurate estimation for the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.
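    The core idea of motion vector extrapolation can be sketched in a much-simplified block-based form: each block of the last correctly received frame is pushed forward along its own motion vector to synthesize the lost frame. This is a schematic stand-in for HMVE, which additionally handles overlaps, holes, and hybrid pixel/block decisions:

```python
import numpy as np

def conceal_frame(prev, mvs, block=8):
    """Conceal a lost frame by motion-vector extrapolation.

    prev: 2D array (H, W), last received frame; mvs: (H//block, W//block, 2)
    per-block (drow, dcol) vectors describing the motion that carried the
    frame before prev into prev, extrapolated one frame forward here."""
    out = prev.copy()  # fall back to the co-located block where no MV lands
    H, W = prev.shape
    for br in range(mvs.shape[0]):
        for bc in range(mvs.shape[1]):
            dr, dc = int(mvs[br, bc, 0]), int(mvs[br, bc, 1])
            r, c = br * block + dr, bc * block + dc
            if 0 <= r <= H - block and 0 <= c <= W - block:
                out[r:r + block, c:c + block] = \
                    prev[br * block:(br + 1) * block, bc * block:(bc + 1) * block]
    return out
```

    Regions where several extrapolated blocks land, or where none does, are exactly the cases the hybrid scheme in the paper is designed to resolve.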

  13. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data.

    PubMed

    Kotasidis, F A; Mehranian, A; Zaidi, H

    2016-05-07

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.

  14. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data

    NASA Astrophysics Data System (ADS)

    Kotasidis, F. A.; Mehranian, A.; Zaidi, H.

    2016-05-01

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.

  15. On the establishment and maintenance of a modern conventional terrestrial reference system

    NASA Technical Reports Server (NTRS)

    Bock, Y.; Zhu, S. Y.

    1982-01-01

    The frame of the Conventional Terrestrial Reference System (CTS) is defined by an adopted set of coordinates, at a fundamental epoch, of a global network of stations which constitute the vertices of a fundamental polyhedron. A method to estimate this set of coordinates using a combination of modern three-dimensional geodetic systems is presented. Once established, the function of the CTS is twofold. The first is to monitor the external (or global) motions of the polyhedron with respect to the frame of a Conventional Inertial Reference System, i.e., those motions common to all stations. The second is to monitor the internal motions (or deformations) of the polyhedron, i.e., those motions that are not common to all stations. Two possible estimators for use in earth deformation analysis are given and their statistical and physical properties are described.
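    Separating the common (external) motion from station-specific deformation is conventionally done by fitting a small-angle similarity (Helmert) transform to the observed station displacements; the residuals are the internal deformations. A minimal linear least-squares sketch (illustrative, not the paper's estimators):

```python
import numpy as np

def fit_helmert(x, dx):
    """Fit a 7-parameter small-angle Helmert model to station displacements:
    dx_i ≈ T + s * x_i + w × x_i  (translation T, scale s, rotation vector w).

    x: (N, 3) station coordinates; dx: (N, 3) observed displacements.
    Returns (T, s, w); dx minus the fitted model gives the deformations."""
    n = x.shape[0]
    A = np.zeros((3 * n, 7))
    for i, (xi, yi, zi) in enumerate(x):
        A[3*i:3*i+3, 0:3] = np.eye(3)        # translation columns
        A[3*i:3*i+3, 3] = (xi, yi, zi)       # scale column
        A[3*i:3*i+3, 4:7] = [[0.0, zi, -yi],  # rotation columns: w × x
                             [-zi, 0.0, xi],
                             [yi, -xi, 0.0]]
    p, *_ = np.linalg.lstsq(A, dx.ravel(), rcond=None)
    return p[0:3], p[3], p[4:7]
```

    In practice the fit is weighted by the coordinate covariances, and the datum choice (which parameters are estimated versus fixed) is exactly the kind of question the paper's two estimators address.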

  16. Dynamic Imaging of the Eye, Optic Nerve, and Extraocular Muscles With Golden Angle Radial MRI

    PubMed Central

    Smith, David S.; Smith, Alex K.; Welch, E. Brian; Smith, Seth A.

    2017-01-01

    Purpose The eye and its accessory structures, the optic nerve and the extraocular muscles, form a complex dynamic system. In vivo magnetic resonance imaging (MRI) of this system in motion can have substantial benefits in understanding oculomotor functioning in health and disease, but has been restricted to date to imaging of static gazes only. The purpose of this work was to develop a technique to image the eye and its accessory visual structures in motion. Methods Dynamic imaging of the eye was developed on a 3-Tesla MRI scanner, based on a golden angle radial sequence that allows freely selectable frame-rate and temporal-span image reconstructions from the same acquired data set. Retrospective image reconstructions at a chosen frame rate of 57 ms per image yielded high-quality in vivo movies of various eye motion tasks performed in the scanner. Motion analysis was performed for a left–right version task where motion paths, lengths, and strains/globe angle of the medial and lateral extraocular muscles and the optic nerves were estimated. Results Offline image reconstructions resulted in dynamic images of bilateral visual structures of healthy adults in only ∼15-s imaging time. Qualitative and quantitative analyses of the motion enabled estimation of trajectories, lengths, and strains on the optic nerves and extraocular muscles at very high frame rates of ∼18 frames/s. Conclusions This work presents an MRI technique that enables high-frame-rate dynamic imaging of the eyes and orbital structures. The presented sequence has the potential to be used in furthering the understanding of oculomotor mechanics in vivo, both in health and disease. PMID:28813574
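    The freely selectable frame rate rests on the golden-angle increment (about 111.25°), which keeps the radial spokes nearly uniformly spread for any contiguous window of acquired spokes. A small sketch of that property (illustrative only):

```python
import math

GOLDEN_ANGLE = 180.0 * (math.sqrt(5.0) - 1.0) / 2.0  # ≈ 111.246 degrees

def spoke_angles(n):
    """Angles (mod 180°) of the first n golden-angle radial spokes, sorted."""
    return sorted((k * GOLDEN_ANGLE) % 180.0 for k in range(n))

def max_gap(angles):
    """Largest angular gap between adjacent spokes, wrapping at 180°."""
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(angles[0] + 180.0 - angles[-1])
    return max(gaps)
```

    Because any consecutive run of spokes covers angle space this evenly, images can be reconstructed retrospectively from windows of arbitrary length and position, which is what enables the 57 ms frames above.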

  17. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    PubMed

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The false matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibrating amplitudes of the video become increasingly large.
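    The Kalman-filtering stage can be sketched with a plain constant-velocity filter on one global motion parameter; the intended camera path is the filtered trajectory, and the stabilizing compensation is the difference between the filtered and observed paths. The model and noise values below are illustrative, not the paper's tuned filter:

```python
import numpy as np

def kalman_smooth(path, q=1e-3, r=0.25):
    """Smooth a 1D camera trajectory with a constant-velocity Kalman filter.

    path: observed per-frame global translation; q/r: process/measurement
    noise variances (illustrative). Returns the filtered trajectory."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])  # position-velocity transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q, R = q * np.eye(2), np.array([[r]])
    x = np.array([[path[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in path:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)  # update
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return np.array(out)
```

    In a full stabilizer this runs per motion parameter (translation, rotation, scale), and frames are warped by the filtered-minus-observed correction.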

  18. Method and System for Temporal Filtering in Video Compression Systems

    NASA Technical Reports Server (NTRS)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector between the first pixel position in the first image and the second pixel position in the second image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. 
The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.

  19. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.

    PubMed

    Frick, Eric; Rahmatalla, Salam

    2018-04-04

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated ( r > 0.82) with the true, time-varying joint center solution.
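    For context, the simplest functional baseline that SFO improves on exploits the fact that, absent soft-tissue artifact, markers on a rotating segment stay at a fixed distance from the joint center, so the center can be recovered by a linear least-squares sphere fit over frames. This sketch shows that baseline, not the SFO method itself:

```python
import numpy as np

def sphere_fit(points):
    """Least-squares sphere fit: find the center c and radius r from points
    on (or near) a sphere, linearised via |p|^2 = 2 p·c + (r^2 - |c|^2).
    points: (N, 3) marker positions across frames. Returns (center, radius)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius
```

    Soft-tissue artifact breaks the fixed-radius assumption, which is why time-varying, per-frame estimates like those produced by SFO are attractive.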

  20. Layered motion segmentation and depth ordering by tracking edges.

    PubMed

    Smith, Paul; Drummond, Tom; Cipolla, Roberto

    2004-04-01

    This paper presents a new Bayesian framework for motion segmentation--dividing a frame from an image sequence into layers representing different moving objects--by tracking edges between frames. Edges are found using the Canny edge detector, and the Expectation-Maximization algorithm is then used to fit motion models to these edges and also to calculate the probabilities of the edges obeying each motion model. The edges are also used to segment the image into regions of similar color. The most likely labeling for these regions is then calculated by using the edge probabilities, in association with a Markov Random Field-style prior. The identification of the relative depth ordering of the different motion layers is also determined, as an integral part of the process. An efficient implementation of this framework is presented for segmenting two motions (foreground and background) using two frames. It is then demonstrated how, by tracking the edges into further frames, the probabilities may be accumulated to provide an even more accurate and robust estimate, and segment an entire sequence. Further extensions are then presented to address the segmentation of more than two motions. Here, a hierarchical method of initializing the Expectation-Maximization algorithm is described, and it is demonstrated that the Minimum Description Length principle may be used to automatically select the best number of motion layers. The results from over 30 sequences (demonstrating both two and three motions) are presented and discussed.

  1. A fuzzy measure approach to motion frame analysis for scene detection. M.S. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Leigh, Albert B.; Pal, Sankar K.

    1992-01-01

    This paper addresses a solution to the problem of scene estimation of motion video data in the fuzzy set theoretic framework. Using fuzzy image feature extractors, a new algorithm is developed to compute the change of information in each of two successive frames to classify scenes. This classification process of raw input visual data can be used to establish structure for correlation. The algorithm attempts to fulfill the need for nonlinear, frame-accurate access to video data for applications such as video editing and visual document archival/retrieval systems in multimedia environments.

  2. Respiratory motion estimation in x-ray angiography for improved guidance during coronary interventions

    NASA Astrophysics Data System (ADS)

    Baka, N.; Lelieveldt, B. P. F.; Schultz, C.; Niessen, W.; van Walsum, T.

    2015-05-01

    During percutaneous coronary interventions (PCI) catheters and arteries are visualized by x-ray angiography (XA) sequences, using brief contrast injections to show the coronary arteries. If we could continue visualizing the coronary arteries after the contrast agent passed (thus in non-contrast XA frames), we could potentially lower contrast use, which is advantageous due to the toxicity of the contrast agent. This paper explores the possibility of such visualization in mono-plane XA acquisitions with a special focus on respiratory based coronary artery motion estimation. We use the patient specific coronary artery centerlines from pre-interventional 3D CTA images to project on the XA sequence for artery visualization. To achieve this, a framework for registering the 3D centerlines with the mono-plane 2D + time XA sequences is presented. During the registration the patient specific cardiac and respiratory motion is learned. We investigate several respiratory motion estimation strategies with respect to accuracy, plausibility and ease of use for motion prediction in XA frames with and without contrast. The investigated strategies include diaphragm motion based prediction, and respiratory motion extraction from the guiding catheter tip motion. We furthermore compare translational and rigid respiratory based heart motion. We validated the accuracy of the 2D/3D registration and the respiratory and cardiac motion estimations on XA sequences of 12 interventions. The diaphragm based motion model and the catheter tip derived motion achieved 1.58 mm and 1.83 mm median 2D accuracy, respectively. On a subset of four interventions we evaluated the artery visualization accuracy for non-contrast cases. Both diaphragm, and catheter tip based prediction performed similarly, with about half of the cases providing satisfactory accuracy (median error < 2 mm).

  3. Automatic facial animation parameters extraction in MPEG-4 visual communication

    NASA Astrophysics Data System (ADS)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with a complex background. This paper addresses an algorithm that automatically extracts all FAPs needed to animate a generic facial model and estimates the 3D head motion from corresponding points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract a part of the FAPs. A special data structure is proposed to describe deformable templates and reduce the time consumed in computing energy functions. Another part of the FAPs, the 3D rigid head motion vectors, is estimated by the corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.

  4. Direct Estimation of Structure and Motion from Multiple Frames

    DTIC Science & Technology

    1990-03-01

    sequential frames in an image sequence. As a consequence, the information that can be extracted from a single optical flow field is limited to a snapshot of...researchers have developed techniques that extract motion and structure information without computation of the optical flow. Best known are the "direct...operated iteratively on a sequence of images to recover structure. It required feature extraction and matching. Broida and Chellappa [9] suggested the use of

  5. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
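    The smoothing stage of such a stabilizer can be sketched in a few lines: accumulate per-frame motion into a camera path, low-pass it with a moving average, and warp each frame by the difference. The paper's full-frame completion, motion inpainting, and deblurring stages are not modeled here; this is only the trajectory-smoothing core, with the window size as an arbitrary choice:

    ```python
    import numpy as np

    def stabilize_path(frame_motions, window=5):
        """Smooth the cumulative camera path with a moving average and
        return the per-frame correction (smoothed minus raw path) that a
        stabilizer would apply as a warp."""
        path = np.cumsum(frame_motions)           # raw camera trajectory
        kernel = np.ones(window) / window
        pad = window // 2
        padded = np.pad(path, pad, mode='edge')   # replicate endpoints
        smooth = np.convolve(padded, kernel, mode='valid')
        return smooth - path                      # warp offset per frame
    ```

    For a perfectly steady pan (constant per-frame motion) the interior corrections are zero, as expected: only jitter around the smooth path is removed.
    
    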

  6. Measurement of motion detection of wireless capsule endoscope inside large intestine.

    PubMed

    Zhou, Mingda; Bao, Guanqun; Pahlavan, Kaveh

    2014-01-01

    Wireless Capsule Endoscope (WCE) provides a noninvasive way to inspect the entire Gastrointestinal (GI) tract, including the large intestine, where intestinal diseases most likely occur. As a critical component of capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to identify the position of detected intestinal diseases. Knowing how the capsule moves inside the large intestine would greatly complement existing wireless localization systems by providing motion information. Since the most recently released WCE can take up to 6 frames per second, it is possible to estimate the movement of the capsule by processing the successive image sequence. In this paper, a computer vision based approach that does not rely on any external device is proposed to estimate the motion of the WCE inside the large intestine. The proposed approach estimates the displacement and rotation of the capsule by calculating entropy and mutual information between frames using the Fibonacci method. The obtained results show the stability of this approach and its better performance compared with existing approaches to motion measurement. Meanwhile, the findings of this paper lay a foundation for modeling the motion patterns of WCEs inside the large intestine, which will benefit other medical applications.
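    The entropy and mutual-information measure referred to above can be computed from a joint intensity histogram of two frames. This is a generic sketch; the bin count and the plain histogram estimator are our choices, not necessarily the paper's:

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """Mutual information between two equally sized frames via a joint
        intensity histogram: I(A;B) = H(A) + H(B) - H(A,B)."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1)
        py = pxy.sum(axis=0)
        hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
        hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
        hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
        return hx + hy - hxy
    ```

    A frame shares maximal mutual information with itself and little with an unrelated frame, which is what makes the measure usable as a registration score between successive capsule images.
    
    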

  7. As time passes by: Observed motion-speed and psychological time during video playback.

    PubMed

    Nyman, Thomas Jonathan; Karlsson, Eric Per Anders; Antfolk, Jan

    2017-01-01

    Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speed during video playback respectively results in over- and underproductions of intervals of time. Here, we investigated through three separate experiments: a) the main effect of observed motion-speed during video playback on a time production task and b) the interactive effect of the frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback-speed or interactive effect between video playback-speed and frame rate was found on time production.

  8. As time passes by: Observed motion-speed and psychological time during video playback

    PubMed Central

    Karlsson, Eric Per Anders; Antfolk, Jan

    2017-01-01

    Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speed during video playback respectively results in over- and underproductions of intervals of time. Here, we investigated through three separate experiments: a) the main effect of observed motion-speed during video playback on a time production task and b) the interactive effect of the frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback-speed or interactive effect between video playback-speed and frame rate was found on time production. PMID:28614353

  9. Active contour-based visual tracking by integrating colors, shapes, and motions.

    PubMed

    Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen

    2013-05-01

    In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.

  10. Moving object detection using dynamic motion modelling from UAV aerial images.

    PubMed

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2014-01-01

    Motion analysis based moving object detection from UAV aerial images is still an unsolved issue, largely because proper motion estimation has not been considered. Existing moving object detection approaches for UAV aerial images do not use motion based pixel intensity measurement to detect moving objects robustly. Moreover, current research on moving object detection from UAV aerial images mostly depends on either the frame difference or the segmentation approach separately. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only specific areas for moving objects rather than searching the whole area of the frame. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.
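    The frame-difference building block that the DMM/SUED scheme embeds can be sketched in a few lines; the threshold value is an arbitrary assumption, and the paper's search windows and edge-based dilation are not reproduced:

    ```python
    import numpy as np

    def frame_difference_mask(prev_frame, curr_frame, thresh=0.1):
        """Binary motion mask by simple frame differencing: pixels whose
        intensity changed by more than `thresh` are flagged as moving."""
        diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
        return diff > thresh
    ```
    
    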

  11. Motion-compensated compressed sensing for dynamic imaging

    NASA Astrophysics Data System (ADS)

    Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali

    2010-08-01

    The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than what was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI) where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant change in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.
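    The "partially known support" idea can be sketched as a least-squares solve restricted to a predicted support set; motion compensation then amounts to shifting the previous frame's support before solving. This is our simplified stand-in for the reconstruction machinery in the paper, with hypothetical dimensions and support values:

    ```python
    import numpy as np

    def recon_with_support(A, y, support):
        """Reconstruct a sparse x from y = A @ x by least squares
        restricted to a candidate support (e.g. the previous frame's
        support, shifted by the estimated motion)."""
        x = np.zeros(A.shape[1])
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[support] = coef
        return x
    ```

    When the motion-compensated support is correct, far fewer measurements than unknowns suffice for exact recovery, which is the premise the paper builds on.
    
    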

  12. Dynamical reference frames in the planetary and earth-moon systems

    NASA Technical Reports Server (NTRS)

    Standish, E. M.; Williams, G.

    1990-01-01

    Estimates of the accuracies of the ephemerides are reviewed using data for the planetary and lunar systems to determine the efficacy of the inherent dynamical reference frame. The varied observational data are listed, with special attention given to ephemeris improvements. The importance of ranging data is discussed with respect to the inner four planets and the moon, and the discrepancy of 1 arcsec/century between mean motions determined by optical observations versus ranging data is addressed. The Viking mission data provide inertial mean motions for the earth and Mars accurate to 0.003 arcsec/century, which will deteriorate to 0.01 arcsec after about 10 years. Uncertainties for other planets and the moon are found to correspond to approximately the same level of degradation. In general the data measurements and error estimates are improving the ephemerides, although refitting the data cannot account for changes in mean motion.

  13. Direction-dependent regularization for improved estimation of liver and lung motion in 4D image data

    NASA Astrophysics Data System (ADS)

    Schmidt-Richberg, Alexander; Ehrhardt, Jan; Werner, René; Handels, Heinz

    2010-03-01

    The estimation of respiratory motion is a fundamental prerequisite for many applications in the field of 4D medical imaging, for example for radiotherapy of thoracic and abdominal tumors. It is usually done using non-linear registration of time frames of the sequence without further modelling of physiological motion properties. In this context, the accurate calculation of liver and lung motion is especially challenging because the organs slip along the surrounding tissue (i.e. the rib cage) during the respiratory cycle, which leads to discontinuities in the motion field. Without incorporating this specific physiological characteristic, common smoothing mechanisms cause an incorrect estimation along the object borders. In this paper, we present an extended diffusion-based model for incorporating physiological knowledge in image registration. By decoupling normal- and tangential-directed smoothing, we are able to estimate slipping motion at the organ borders while preventing gaps and ensuring smooth motion fields inside. We evaluate our model for the estimation of lung and liver motion on the basis of publicly accessible 4D CT and 4D MRI data. The results show a considerable increase of registration accuracy with respect to the target registration error and a more plausible motion estimation.

  14. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization means. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude, motion discontinuities, and produces accurate piecewise-smooth motion fields.

  15. Annual Geocenter Motion from Space Geodesy and Models

    NASA Astrophysics Data System (ADS)

    Ries, J. C.

    2013-12-01

    Ideally, the origin of the terrestrial reference frame and the center of mass of the Earth are always coincident. By construction, the origin of the reference frame is coincident with the mean Earth center of mass, averaged over the time span of the satellite laser ranging (SLR) observations used in the reference frame solution, within some level of uncertainty. At shorter time scales, tidal and non-tidal mass variations result in an offset between the origin and geocenter, called geocenter motion. Currently, there is a conventional model for the tidally-coherent diurnal and semi-diurnal geocenter motion, but there is no model for the non-tidal annual variation. This annual motion reflects the largest-scale mass redistribution in the Earth system, so it is essential to observe it for a complete description of the total mass transport. Failing to model it can also cause false signals in geodetic products such as sea height observations from satellite altimeters. In this paper, a variety of estimates for the annual geocenter motion are presented based on several different geodetic techniques and models, and a 'consensus' model from SLR is suggested.

  16. Adaptive mesh optimization and nonrigid motion recovery based image registration for wide-field-of-view ultrasound imaging.

    PubMed

    Tan, Chaowei; Wang, Bo; Liu, Paul; Liu, Dong

    2008-01-01

    Wide field of view (WFOV) imaging mode obtains an ultrasound image over an area much larger than the real time window normally available. As the probe is moved over the region of interest, new image frames are combined with prior frames to form a panorama image. Image registration techniques are used to recover the probe motion, eliminating the need for a position sensor. Speckle patterns, which are inherent in ultrasound imaging, change, or become decorrelated, as the scan plane moves, so we pre-smooth the image to reduce the effects of speckle in registration, as well as reducing effects from thermal noise. Because we wish to track the movement of features such as structural boundaries, we use an adaptive mesh over the entire smoothed image to home in on areas with features. Motion estimation using blocks centered at the individual mesh nodes generates a field of motion vectors. After angular correction of motion vectors, we model the overall movement between frames as a nonrigid deformation. The polygon filling algorithm for precise, persistence-based spatial compounding constructs the final speckle-reduced WFOV image.
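    Block-based motion estimation at a single mesh node, as described above, can be sketched with an exhaustive search over a small displacement window; the block size and search range below are illustrative, not the paper's settings:

    ```python
    import numpy as np

    def block_match(prev, curr, y, x, bs=8, search=4):
        """Exhaustive block matching: find the displacement (dy, dx) of
        the block at (y, x) in `prev` that best matches `curr` within a
        +/- `search` pixel window, by minimum sum of squared differences."""
        block = prev[y:y + bs, x:x + bs]
        best, best_err = (0, 0), np.inf
        H, W = curr.shape
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy and yy + bs <= H and 0 <= xx and xx + bs <= W:
                    err = np.sum((curr[yy:yy + bs, xx:xx + bs] - block) ** 2)
                    if err < best_err:
                        best, best_err = (dy, dx), err
        return best
    ```

    Repeating this at every mesh node yields the field of motion vectors that the nonrigid deformation model is then fitted to.
    
    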

  17. Estimation of contour motion and deformation for nonrigid object tracking

    NASA Astrophysics Data System (ADS)

    Shao, Jie; Porikli, Fatih; Chellappa, Rama

    2007-08-01

    We present an algorithm for nonrigid contour tracking in heavily cluttered background scenes. Based on the properties of nonrigid contour movements, a sequential framework for estimating contour motion and deformation is proposed. We solve the nonrigid contour tracking problem by decomposing it into three subproblems: motion estimation, deformation estimation, and shape regulation. First, we employ a particle filter to estimate the global motion parameters of the affine transform between successive frames. Then we generate a probabilistic deformation map to deform the contour. To improve robustness, multiple cues are used for deformation probability estimation. Finally, we use a shape prior model to constrain the deformed contour. This enables us to retrieve the occluded parts of the contours and accurately track them while allowing shape changes specific to the given object types. Our experiments show that the proposed algorithm significantly improves the tracker performance.
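    The particle-filter stage for global motion can be illustrated for the simplest case of a pure 2-D translation: sample candidate shifts, weight them by template match quality, and take the weighted mean. The paper estimates full affine parameters; this reduced sketch, its likelihood form, and all parameter values are our assumptions:

    ```python
    import numpy as np

    def particle_filter_shift(frame, template, n=800, spread=2.0, seed=0):
        """One particle-filter update over a 2-D translation: sample
        candidate shifts from a Gaussian prior, weight each by a template
        match likelihood exp(-SSD), and return the weighted mean shift."""
        rng = np.random.default_rng(seed)
        h, w = template.shape
        H, W = frame.shape
        parts = rng.normal(0.0, spread, size=(n, 2))   # candidate (dy, dx)
        weights = np.zeros(n)
        for i, (dy, dx) in enumerate(np.round(parts).astype(int)):
            y0, x0 = (H - h) // 2 + dy, (W - w) // 2 + dx
            if 0 <= y0 <= H - h and 0 <= x0 <= W - w:
                err = np.sum((frame[y0:y0 + h, x0:x0 + w] - template) ** 2)
                weights[i] = np.exp(-err)
            # out-of-bounds particles keep zero weight
        weights /= weights.sum()
        return weights @ parts                          # posterior mean shift
    ```

    In the paper this estimated global motion seeds the contour in the next frame before deformation estimation takes over.
    
    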

  18. Optical flow estimation on image sequences with differently exposed frames

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-09-01

    Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.

  19. On the Hipparcos Link to the ICRF derived from VLA and MERLIN radio astrometry

    NASA Astrophysics Data System (ADS)

    Hering, R.; Walter, H. G.

    2007-06-01

    Positions and proper motions obtained from observations by the very large array (VLA) and the multi-element radio-linked interferometer network (MERLIN) are used to establish the link of the Hipparcos Celestial Reference Frame (HCRF) to the International Celestial Reference Frame (ICRF). The VLA and MERLIN data are apparently the latest ones published in the literature. Their mean epoch at around 2001 is about 10 years after the epoch of the Hipparcos catalogue and, therefore, the data are considered suitable to check the Hipparcos link established at epoch 1991.25. The parameters of the link, i.e., the angles of frame orientation and the angular rates of frame rotation, are estimated by fitting these parameters to the differences of the optical and radio positions and proper motions of stars common to the Hipparcos catalogue and the VLA and MERLIN data. Both the estimates of the angles of orientation and the angular rates of rotation show nearly consistent but insignificant results for all samples of stars treated. We conclude that not only the size of the samples of 9-15 stars is too small, but also that the accuracy of the radio positions and, above all, of the radio proper motions is insufficient, the latter being based on early-epoch star positions of low accuracy. The present observational data at epoch 2001 suggest that maintenance of the Hipparcos frame is not feasible at this stage.

  20. Relationship between selected orientation rest frame, circular vection and space motion sickness

    NASA Technical Reports Server (NTRS)

    Harm, D. L.; Parker, D. E.; Reschke, M. F.; Skinner, N. C.

    1998-01-01

    Space motion sickness (SMS) and spatial orientation and motion perception disturbances occur in 70-80% of astronauts. People select "rest frames" to create the subjective sense of spatial orientation. In microgravity, the astronaut's rest frame may be based on visual scene polarity cues and on the internal head and body z axis (vertical body axis). The data reported here address the following question: Can an astronaut's orientation rest frame be related to and described by other variables, including circular vection response latencies and space motion sickness? The astronaut's microgravity spatial orientation rest frames were determined from inflight and postflight verbal reports. Circular vection responses were elicited by rotating a virtual room continuously at 35 degrees/s in pitch, roll and yaw with respect to the astronaut. Latency was recorded from the time the crew member opened their eyes to the onset of vection. The astronauts who used visual cues exhibited significantly shorter vection latencies than those who used internal z axis cues. A negative binomial regression model was used to represent the observed total SMS symptom scores for each subject for each flight day. Orientation reference type had a significant effect, resulting in an estimated three-fold increase in the expected motion sickness score on flight day 1 for astronauts who used visual cues. The results demonstrate meaningful classification of astronauts' rest frames and their relationships to sensitivity to circular vection and SMS. Thus, it may be possible to use vection latencies to predict SMS severity and duration.

  1. Evolution of motion uncertainty in rectal cancer: implications for adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Kleijnen, Jean-Paul J. E.; van Asselen, Bram; Burbach, Johannes P. M.; Intven, Martijn; Philippens, Marielle E. P.; Reerink, Onne; Lagendijk, Jan J. W.; Raaymakers, Bas W.

    2016-01-01

    Reduction of motion uncertainty by applying adaptive radiotherapy strategies depends largely on the temporal behavior of this motion. To fully optimize adaptive strategies, insight into target motion is needed. The purpose of this study was to analyze stability and evolution in time of motion uncertainty of both the gross tumor volume (GTV) and clinical target volume (CTV) for patients with rectal cancer. We scanned 16 patients daily during one week, on a 1.5 T MRI scanner in treatment position, prior to each radiotherapy fraction. Single slice sagittal cine MRIs were made at the beginning, middle, and end of each scan session, for one minute at 2 Hz temporal resolution. GTV and CTV motion were determined by registering a delineated reference frame to time-points later in time. The 95th percentile of observed motion (dist95%) was taken as a measure of motion. The stability of motion in time was evaluated within each cine-MRI separately. The evolution of motion was investigated between the reference frame and the cine-MRIs of a single scan session and between the reference frame and the cine-MRIs of several days later in the course of treatment. This observed motion was then converted into a PTV-margin estimate. Within a one minute cine-MRI scan, motion was found to be stable and small. Independent of the time-point within the scan session, the average dist95% remains below 3.6 mm and 2.3 mm for CTV and GTV, respectively 90% of the time. We found similar motion over time intervals from 18 min to 4 days. When reducing the time interval from 18 min to 1 min, a large reduction in motion uncertainty is observed. A reduction in motion uncertainty, and thus the PTV-margin estimate, of 71% and 75% for CTV and tumor was observed, respectively. Time intervals of 15 and 30 s yield no further reduction in motion uncertainty compared to a 1 min time interval.
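    The motion measure used above, dist95%, is simply the 95th percentile of observed displacement magnitudes; a minimal sketch (the subsequent conversion into an actual PTV margin uses a margin recipe not reproduced here):

    ```python
    import numpy as np

    def dist95(displacements):
        """95th percentile of displacement vector magnitudes, the motion
        measure the study converts into a PTV-margin estimate."""
        d = np.linalg.norm(np.asarray(displacements, dtype=float), axis=1)
        return np.percentile(d, 95)
    ```
    
    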

  2. A sensor fusion method for tracking vertical velocity and height based on inertial and barometric altimeter measurements.

    PubMed

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2014-07-24

    A sensor fusion method was developed for vertical channel stabilization by fusing inertial measurements from an Inertial Measurement Unit (IMU) and pressure altitude measurements from a barometric altimeter integrated in the same device (baro-IMU). An Extended Kalman Filter (EKF) estimated the quaternion from the sensor frame to the navigation frame; the sensed specific force was rotated into the navigation frame and compensated for gravity, yielding the vertical linear acceleration; finally, a complementary filter driven by the vertical linear acceleration and the measured pressure altitude produced estimates of height and vertical velocity. A method was also developed to condition the measured pressure altitude using a whitening filter, which helped to remove the short-term correlation due to environment-dependent pressure changes from raw pressure altitude. The sensor fusion method was implemented to work on-line using data from a wireless baro-IMU and tested for the capability of tracking low-frequency small-amplitude vertical human-like motions that can be critical for stand-alone inertial sensor measurements. Validation tests were performed in different experimental conditions, namely no motion, free-fall motion, forced circular motion and squatting. Accurate on-line tracking of height and vertical velocity was achieved, giving confidence to the use of the sensor fusion method for tracking typical vertical human motions: velocity Root Mean Square Error (RMSE) was in the range 0.04-0.24 m/s; height RMSE was in the range 5-68 cm, with statistically significant performance gains when the whitening filter was used by the sensor fusion method to track relatively high-frequency vertical motions.
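    The complementary-filter stage described above can be sketched for the vertical channel: integrate gravity-compensated vertical acceleration for the fast dynamics and pull the height estimate toward the barometric altitude to cancel drift. The gains and the second-order correction form are generic textbook choices, not the authors' tuned filter:

    ```python
    import numpy as np

    def complementary_height(acc, baro, dt=0.01, k=1.0):
        """Complementary filter: vertical acceleration drives the
        high-frequency part of height/velocity; the barometric altitude
        supplies the low-frequency drift correction."""
        h, v = float(baro[0]), 0.0
        hs, vs = [], []
        for a, z in zip(acc, baro):
            v += a * dt                       # integrate acceleration
            h += v * dt
            err = z - h                       # baro correction term
            h += k * err * dt
            v += (k ** 2 / 4.0) * err * dt    # damped velocity correction
            hs.append(h)
            vs.append(v)
        return np.array(hs), np.array(vs)
    ```

    At equilibrium (zero acceleration, constant barometric altitude) the estimates stay fixed; the baro term only acts when integrated acceleration drifts away from the measured altitude.
    
    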

  3. Compilation of Published Estimates of Annual Geocenter Motions Using Space Geodesy

    NASA Technical Reports Server (NTRS)

    Elosegui, P.

    2005-01-01

    The definition of the term "geocenter motion" depends on the adopted origin of the reference frame. Common reference frames used in Space Geodesy include: the center of mass of the whole Earth (CM), the center of mass of the Solid Earth without mass load (CE), and the center of figure of the outer surface of the Solid Earth (CF). There are two established definitions of the term geocenter: one, the vector offset of CF relative to CM and, two, the reverse, the vector offset of CM relative to CF. Obviously, their amplitude is the same and their phase differs by 180 deg. Following Dong et al. [2003], we label the first X(sub CF, sup CM) and the second X(sub CM, sup CF) (i.e., the superscript represents the frame, the subscript represents the point whose position is expressed in that frame).

  4. Efficient low-bit-rate adaptive mesh-based motion compensation technique

    NASA Astrophysics Data System (ADS)

    Mahmoud, Hanan A.; Bayoumi, Magdy A.

    2001-08-01

    This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1 and 2-to-1 merging of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved using a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resultant partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of peak signal-to-noise ratio (PSNR) and compression ratio (CR).
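The block-based motion vectors that feed the first stage can be computed with classic exhaustive block matching. The sketch below uses square blocks and a sum-of-absolute-differences criterion as a simplified stand-in for the paper's triangular quadtree blocks; block and search sizes are arbitrary:

```python
import numpy as np

def block_motion(prev, curr, block=8, search=4):
    """For each block in `curr`, find the displacement into `prev` that
    minimizes the sum of absolute differences (SAD). Returns an array of
    (dy, dx) vectors, one per block."""
    H, W = curr.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = curr[y:y + block, x:x + block].astype(float)
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue
                    sad = np.abs(prev[yy:yy + block, xx:xx + block] - ref).sum()
                    if sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[by, bx] = best_v
    return vectors
```

Sibling blocks whose vectors agree would then be merged 4-to-1, 3-to-1 or 2-to-1 in the quadtree.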

  5. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

  6. Parallel updating and weighting of multiple spatial maps for visual stability during whole body motion

    PubMed Central

    Medendorp, W. P.

    2015-01-01

    It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289

  7. Velocity Estimate Following Air Data System Failure

    DTIC Science & Technology

    2008-03-01

    …algorithm design in terms of reference frames, equations of motion, and velocity triangles describing the vector relationship between airspeed, wind speed… 2.2.1 Reference Frames. The flight of an aircraft through the air mass can be described in specific coordinate systems [Nelson 1998]. To determine how…

  8. Pixel-By-Pixel Estimation of Scene Motion in Video

    NASA Astrophysics Data System (ADS)

    Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.

    2017-05-01

    The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find the inter-frame shifts of all pixels of a frame. These vectors form the shift vector field. As the estimated parameters of the vectors, the paper studies their projections and polar parameters. It considers two methods for estimating the shift vector field. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e., from left to right and from right to left. Subsequent joint processing of the results compensates for the inertia of the recursive estimation. The second method uses the correlation between rows to increase processing efficiency. It processes rows one after the other, changing direction after each row, and uses the obtained values to form the resulting estimate. The paper studies two criteria for its formation: gradient estimate minimum and correlation coefficient maximum. The paper gives examples of experimental results of pixel-by-pixel estimation for a video with a moving object, and of the estimation of a moving object's trajectory using the shift vector field.
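The core idea, recursively refining a shift estimate by stochastic gradient descent on the inter-frame difference, can be sketched for a single global shift (the paper estimates a per-pixel field). The step size `mu`, iteration count, and sampling margins are assumed tunings, not the paper's values:

```python
import numpy as np

def bilinear(img, y, x):
    """Sample img at fractional (y, x) with bilinear interpolation."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

def sgd_shift(f0, f1, iters=4000, mu=0.3, seed=0):
    """Estimate a global (dy, dx) shift taking f0 to f1 by stochastic
    gradient descent on the squared inter-frame difference, visiting one
    random pixel per iteration."""
    rng = np.random.default_rng(seed)
    gy, gx = np.gradient(f0.astype(float))   # image gradients of f0
    H, W = f0.shape
    dy = dx = 0.0
    for _ in range(iters):
        y = int(rng.integers(4, H - 8))
        x = int(rng.integers(4, W - 8))
        r = bilinear(f0, y + dy, x + dx) - f1[y, x]   # residual at this pixel
        dy -= mu * r * bilinear(gy, y + dy, x + dx)
        dx -= mu * r * bilinear(gx, y + dy, x + dx)
    return dy, dx
```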

  9. 3D kinematic measurement of human movement using low cost fish-eye cameras

    NASA Astrophysics Data System (ADS)

    Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.

    2017-02-01

    3D motion capture is difficult when the capturing is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach of using two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they cannot be modeled well by a pinhole camera model, which makes it difficult to estimate depth information. In this work, to restore the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparison with results from a commercially available Vicon motion capture system.

  10. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first between sets of features associated with the images of the stereo pairs, and the second between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching stages are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph that emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
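The mutual-consistency idea behind the clique formulation can be illustrated with a small sketch: two candidate matches are compatible if they preserve the inter-point distance between frames (a rigid-motion constraint), and a consistent subset is then a clique in the compatibility graph. The greedy growth below is a simple stand-in for the paper's maximum-weighted clique search:

```python
import numpy as np
from itertools import combinations

def consistent_matches(pts_a, pts_b, matches, tol=0.5):
    """Given candidate matches [(i_a, i_b), ...] between point sets,
    build a compatibility graph (distance preservation within `tol`)
    and greedily grow a clique from the best-connected match.
    Returns the sorted indices of the retained matches."""
    n = len(matches)
    adj = np.zeros((n, n), dtype=bool)
    for i, j in combinations(range(n), 2):
        (a1, b1), (a2, b2) = matches[i], matches[j]
        da = np.linalg.norm(pts_a[a1] - pts_a[a2])
        db = np.linalg.norm(pts_b[b1] - pts_b[b2])
        adj[i, j] = adj[j, i] = abs(da - db) < tol
    clique = [int(adj.sum(axis=1).argmax())]
    for k in np.argsort(-adj.sum(axis=1)):
        if k not in clique and all(adj[k, m] for m in clique):
            clique.append(int(k))
    return sorted(clique)
```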

  11. A Sensor Fusion Method for Tracking Vertical Velocity and Height Based on Inertial and Barometric Altimeter Measurements

    PubMed Central

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2014-01-01

    A sensor fusion method was developed for vertical channel stabilization by fusing inertial measurements from an Inertial Measurement Unit (IMU) and pressure altitude measurements from a barometric altimeter integrated in the same device (baro-IMU). An Extended Kalman Filter (EKF) estimated the quaternion from the sensor frame to the navigation frame; the sensed specific force was rotated into the navigation frame and compensated for gravity, yielding the vertical linear acceleration; finally, a complementary filter driven by the vertical linear acceleration and the measured pressure altitude produced estimates of height and vertical velocity. A method was also developed to condition the measured pressure altitude using a whitening filter, which helped to remove the short-term correlation due to environment-dependent pressure changes from raw pressure altitude. The sensor fusion method was implemented to work on-line using data from a wireless baro-IMU and tested for the capability of tracking low-frequency small-amplitude vertical human-like motions that can be critical for stand-alone inertial sensor measurements. Validation tests were performed in different experimental conditions, namely no motion, free-fall motion, forced circular motion and squatting. Accurate on-line tracking of height and vertical velocity was achieved, giving confidence to the use of the sensor fusion method for tracking typical vertical human motions: velocity Root Mean Square Error (RMSE) was in the range 0.04–0.24 m/s; height RMSE was in the range 5–68 cm, with statistically significant performance gains when the whitening filter was used by the sensor fusion method to track relatively high-frequency vertical motions. PMID:25061835

  12. Sporadic frame dropping impact on quality perception

    NASA Astrophysics Data System (ADS)

    Pastrana-Vidal, Ricardo R.; Gicquel, Jean Charles; Colomes, Catherine; Cherifi, Hocine

    2004-06-01

    Over the past few years there has been increasing interest in real-time video services over packet networks. When considering quality, it is essential to quantify user perception of the received sequence. Severe motion discontinuities are one of the most common degradations in video streaming. The end user perceives jerky motion when the discontinuities are uniformly distributed over time, and an instantaneous fluidity break when the motion loss is isolated or irregularly distributed. Bit-rate adaptation techniques, transmission errors in packet networks, or the restitution strategy can be the origin of this perceived jerkiness. In this paper we present a psychovisual experiment performed to quantify the effect of sporadically dropped pictures on the overall perceived quality. First, the perceptual detection thresholds of generated temporal discontinuities were measured. Then, the quality function was estimated in relation to a single frame drop of different durations. Finally, a set of tests was performed to quantify the effect of several impairments distributed over time. We found that the detection thresholds are content, duration and motion dependent. The assessment results show how quality is impaired by a single burst of dropped frames in a 10 s sequence. The effect of several bursts of discarded frames, irregularly distributed over time, is also discussed.

  13. Multi-volumetric registration and mosaicking using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bozic, Ivan; El-Haddad, Mohamed T.; Malone, Joseph D.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Ophthalmic diagnostic imaging using optical coherence tomography (OCT) is limited by bulk eye motion and a fundamental trade-off between field-of-view (FOV) and sampling density. Here, we introduce a novel multi-volumetric registration and mosaicking method using our previously described multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and OCT (SS-SESLO-OCT) system. Our SS-SESLO-OCT acquires an entire en face fundus SESLO image simultaneously with every OCT cross-section at 200 frames per second. In vivo human retinal imaging was performed in a healthy volunteer, and three volumetric datasets were acquired with the volunteer moving freely and refixating between each acquisition. In post-processing, SESLO frames were used to estimate en face rotational and translational motions by registering every frame in all three volumetric datasets to the first frame of the first volume. OCT cross-sections were contrast-normalized and registered axially and rotationally across all volumes. Rotational and translational motions calculated from the SESLO frames were applied to the corresponding OCT B-scans to compensate for inter- and intra-B-scan bulk motions, and the three registered volumes were combined into a single interpolated multi-volumetric mosaic. Using complementary information from SESLO and OCT over serially acquired volumes, we demonstrated multi-volumetric registration and mosaicking to recover regions of missing data resulting from blinks, saccades, and ocular drifts. We believe our registration method can be directly applied to multi-volumetric motion compensation, averaging, widefield mosaicking, and vascular mapping, with potential applications in ophthalmic clinical diagnostics, handheld imaging, and intraoperative guidance.
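A standard way to estimate the en face translational component of such frame-to-frame registration is phase correlation. The sketch below recovers an integer (dy, dx) shift and is only a minimal stand-in for the paper's registration (rotation is ignored here):

```python
import numpy as np

def phase_correlation(ref, frame):
    """Estimate the integer (dy, dx) translation of `frame` relative to
    `ref` from the peak of the normalized cross-power spectrum."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(frame)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(corr.argmax(), corr.shape))
    size = np.array(ref.shape)
    peak = np.where(peak > size // 2, peak - size, peak)  # wrap to signed shift
    return tuple(int(p) for p in peak)
```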

  14. Impact of the galactic acceleration on the terrestrial reference frame and the scale factor in VLBI

    NASA Astrophysics Data System (ADS)

    Krásná, Hana; Titov, Oleg

    2017-04-01

    The relative motion of the solar system barycentre around the galactic centre can also be described as an acceleration of the solar system directed towards the centre of the Galaxy. So far, this effect has been omitted in the a priori modelling of the Very Long Baseline Interferometry (VLBI) observable. It therefore results in a systematic dipole proper motion (Secular Aberration Drift, SAD) of the extragalactic radio sources building the celestial reference frame, with a theoretical maximum magnitude of 5-7 microarcsec/year. In this work, we present our estimation of the SAD vector obtained within a global adjustment of the VLBI measurements (1979.0 - 2016.5) using the software VieVS. We focus on the influence of the observed radio sources with the maximum SAD effect on the terrestrial reference frame. We show that the scale factor from the VLBI measurements, estimated for each source individually, discloses a clear systematic effect aligned with the Galactic centre-anticentre direction. Therefore, radio sources located near the Galactic anticentre may cause a strong systematic effect, especially in early VLBI years. For instance, radio source 0552+398 causes a difference of up to 1 mm in the estimated baseline length. Furthermore, we discuss the scale factor estimated for each radio source after removal of the SAD systematic.

  15. An improved multi-paths optimization method for video stabilization

    NASA Astrophysics Data System (ADS)

    Qin, Tao; Zhong, Sheng

    2018-03-01

    For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one, while the cropping ratio and warping ratio of each frame are kept in a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform of adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail owing to the lack of long feature trajectories. The multi-path optimization method deals well with parallax, as we calculate the space-time correlation of the adjacent grid cells and then use a Gaussian kernel to weight the motion of adjacent grids. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos, which exhibit casual jitter and parallax, and achieve good results.
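The Gaussian-weighted smoothing at the heart of such path optimization can be sketched in a few lines for a 1-D motion path; the cropping and warping constraints of the full method are omitted, and `sigma`/`radius` are illustrative settings:

```python
import numpy as np

def smooth_path(path, sigma=10.0, radius=30):
    """Smooth a 1-D camera motion path with a normalized Gaussian kernel,
    padding at the ends by edge replication so the output keeps the
    input length."""
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(np.asarray(path, float), radius, mode='edge')
    return np.convolve(padded, kernel, mode='valid')
```

Per-frame warps would then be computed from the difference between the original and smoothed paths.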

  16. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

    A novel algorithm of motion-blur image restoration based on PSF half-blind estimation with the Hough transform was introduced on the basis of a full analysis of the principle of the TDICCD camera, addressing the problem that using a vertical uniform linear motion estimate as the initial PSF value in the IBD algorithm leads to distortion of the restored image. First, a mathematical model of image degradation was established using the prior information of multi-frame images, and two parameters that have a crucial influence on PSF estimation (motion blur length and angle) were set accordingly. Finally, the restored image can be acquired through multiple iterations of the initial PSF estimate in the Fourier domain, with the initial value gained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion problem caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detail characteristics of the original image.
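Once the two parameters (blur length and angle) are estimated, the linear-motion PSF itself is easy to construct. This is a generic nearest-pixel rasterization sketch, not the paper's code; the support `size` is an assumption:

```python
import numpy as np

def motion_psf(length, angle_deg, size=15):
    """Build a normalized linear-motion point-spread function from a
    blur length (pixels) and blur angle (degrees), by rasterizing the
    motion segment through the kernel center."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    n = max(int(length), 1)
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, n):
        y = int(round(c + t * np.sin(theta)))
        x = int(round(c + t * np.cos(theta)))
        if 0 <= y < size and 0 <= x < size:
            psf[y, x] = 1.0
    return psf / psf.sum()
```

Such a PSF would serve as the initial value that the iterative Fourier-domain restoration refines.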

  17. Video compression of coronary angiograms based on discrete wavelet transform with block classification.

    PubMed

    Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P

    1996-01-01

    A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. The method can be generalized to any dynamic image sequence application sensitive to block artifacts.
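As a concrete (and deliberately minimal) instance of the wavelet transform used here in place of block DCT, one level of the 2-D Haar DWT can be written directly with array slicing; real codecs use longer filter banks:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar DWT for an even-sized image.
    Returns the (LL, LH, HL, HH) subbands: row averages/differences
    followed by column averages/differences."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row pairs: average
    d = (img[0::2, :] - img[1::2, :]) / 2   # row pairs: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH
```

Because the transform is not block-based, quantizing its coefficients does not produce the blocking artifacts of DCT coding.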

  18. Evaluation of potential internal target volume of liver tumors using cine-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akino, Yuichi, E-mail: akino@radonc.med.osaka-u.ac.jp; Oh, Ryoong-Jin; Masai, Norihisa

    2014-11-01

    Purpose: Four-dimensional computed tomography (4DCT) is widely used for evaluating moving tumors, including lung and liver cancers. For patients with unstable respiration, however, 4DCT may not visualize tumor motion properly. High-speed magnetic resonance imaging (MRI) sequences (cine-MRI) permit direct visualization of the respiratory motion of liver tumors without radiation exposure to patients. Here, the authors demonstrate a technique for evaluating internal target volume (ITV) with consideration of respiratory variation using cine-MRI. Methods: The authors retrospectively evaluated six patients who received stereotactic body radiotherapy (SBRT) for hepatocellular carcinoma. Before acquiring the planning CT, sagittal and coronal cine-MRI images were acquired for 30 s at a frame rate of 2 frames/s. Patient immobilization was conducted under the same conditions as for SBRT. Planning CT images were then acquired within 15 min of the cine-MRI acquisitions, followed by a 4DCT scan. To calculate tumor motion, the motion vectors between two continuous frames of the cine-MRI images were calculated for each frame using the pyramidal Lucas–Kanade method. The target contour was delineated on one frame, and each vertex of the contour was shifted and copied onto the following frame using neighboring motion vectors. 3D trajectory data were generated from the centroids of the contours on the sagittal and coronal images. To evaluate the accuracy of the tracking method, the motion of a clearly visible blood vessel was analyzed with the motion tracking and manual detection techniques. The target volume delineated on the 50% (end-exhale) phase of 4DCT was translated with the trajectory data, and the distribution of the occupancy probability of the target volume was calculated as the potential ITV (ITV_Potential). The concordance between ITV_Potential and the ITV estimated with 4DCT (ITV_4DCT) was evaluated using the Dice similarity coefficient (DSC).
    Results: The distance between the blood vessel positions determined with motion tracking and manual detection was analyzed; the mean and SD of the distance were less than 0.80 and 0.52 mm, respectively. The maximum ranges of tumor motion on cine-MRI were 2.4 ± 1.4 mm (range, 1.0–5.0 mm), 4.4 ± 3.3 mm (range, 0.8–9.4 mm), and 14.7 ± 5.9 mm (range, 7.4–23.4 mm) in the lateral, anterior–posterior, and superior–inferior directions, respectively. The ranges in the superior–inferior direction were larger than those estimated with 4DCT images for all patients. The volume of ITV_Potential was 160.3% ± 13.5% (range, 142.0%–179.2%) of ITV_4DCT. The maximum DSC values were observed when a cutoff value of 24.7% ± 4.0% (range, 20%–29%) was applied. Conclusions: The authors demonstrated a novel method of calculating the 3D motion and potential ITV of liver cancer using orthogonal cine-MRI. The method achieved accurate calculation of the respiratory motion of moving structures. Individual evaluation of the potential ITV will aid in improving respiration management and treatment planning.
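The Dice similarity coefficient used above to compare the two target volumes has a one-line definition, DSC = 2|A ∩ B| / (|A| + |B|), sketched here for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks/volumes:
    twice the overlap divided by the sum of the individual sizes.
    Ranges from 0 (disjoint) to 1 (identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```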

  19. Aerial video mosaicking using binary feature tracking

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2015-05-01

    Unmanned Aerial Vehicles are becoming an increasingly attractive platform for many applications, as their cost decreases and their capabilities increase. Creating detailed maps from aerial data requires fast and accurate video mosaicking methods. Traditional mosaicking techniques rely on inter-frame homography estimations that are cascaded through the video sequence. Computationally expensive keypoint matching algorithms are often used to determine the correspondence of keypoints between frames. This paper presents a video mosaicking method that uses an object tracking approach for matching keypoints between frames to improve both efficiency and robustness. The proposed tracking method matches local binary descriptors between frames and leverages the spatial locality of the keypoints to simplify the matching process. Our method is robust to cascaded errors by determining the homography between each frame and the ground plane rather than the prior frame. The frame-to-ground homography is calculated based on the relationship of each point's image coordinates and its estimated location on the ground plane. Robustness to moving objects is integrated into the homography estimation step through detecting anomalies in the motion of keypoints and eliminating the influence of outliers. The resulting mosaics are of high accuracy and can be computed in real time.

  20. Insensitivity of GNSS to geocenter motion through the network shift approach (Invited)

    NASA Astrophysics Data System (ADS)

    Rebischung, P.; Altamimi, Z.; Springer, T.

    2013-12-01

    As a satellite-based technique, GNSS should be sensitive to motions of the Earth's center of mass (CM) with respect to the Earth's crust. In theory, the weekly solutions of the Analysis Centers of the International GNSS Service (IGS ACs) should indeed have the "instantaneous" CM as their origin, and the net translations between the weekly AC frames and a secular frame such as ITRF2008 should thus approximate the non-linear motion of CM with respect to the Earth's center of figure. However, the comparison of the AC translation time series with each other, with SLR geocenter estimates or with geophysical models reveals that this way of observing geocenter motion with GNSS currently gives unreliable results. We addressed the problem of observing geocenter motion with GNSS through this network shift approach from the perspective of collinearity (or multicollinearity) among the parameters of a least-squares regression. A collinearity diagnosis, based on the notion of variance inflation factor, was therefore developed and allows handling several peculiarities of the GNSS geocenter determination problem. Its application reveals that the determination of all three components of geocenter motion with GNSS suffers from serious collinearity issues, at a level comparable to that of determining the terrestrial scale simultaneously with the GNSS satellite phase center offsets. We show that the inability of current GNSS, as opposed to Satellite Laser Ranging (SLR), to properly sense geocenter motion is mostly explained by the estimation, in the GNSS case, of epoch-wise station and satellite clock offsets simultaneously with tropospheric parameters. The empirical satellite accelerations, as estimated by most IGS ACs, slightly amplify the collinearity of the Z geocenter coordinate, but their role remains secondary.

  1. Estimating network effect in geocenter motion: Theory

    NASA Astrophysics Data System (ADS)

    Zannat, Umma Jamila; Tregoning, Paul

    2017-10-01

    Geophysical models and their interpretations of several processes of interest, such as sea level rise, postseismic relaxation, and glacial isostatic adjustment, are intertwined with the need to realize the International Terrestrial Reference Frame. However, this realization needs to take into account the geocenter motion, that is, the motion of the center of figure of the Earth surface, due to, for example, deformation of the surface by earthquakes or hydrological loading effects. Usually, there is also a discrepancy, known as the network effect, between the theoretically convenient center of figure and the physically accessible center of network frames, because of unavoidable factors such as uneven station distribution, lack of stations in the oceans, disparity in the coverage between the two hemispheres, and the existence of tectonically deforming zones. Here we develop a method to estimate the magnitude of the network effect, that is, the error introduced by the incomplete sampling of the Earth surface, in measuring the geocenter motion, for a network of space geodetic stations of a fixed size N. For this purpose, we use, as our proposed estimate, the standard deviations of the changes in Helmert parameters measured by a random network of the same size N. We show that our estimate scales as 1/√N and give an explicit formula for it in terms of the vector spherical harmonics expansion of the displacement field. In a complementary paper we apply this formalism to coseismic displacements and elastic deformations due to surface water movements.
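The 1/√N scaling of the network effect can be checked with a toy Monte Carlo experiment: sample random station networks of size N on the sphere, average a degree-1 displacement pattern over each network, and look at the spread. The setup below (a pure sin(latitude) field, uniform station sampling) is an illustrative assumption, far simpler than the paper's vector spherical harmonics treatment:

```python
import numpy as np

def network_translation_std(field, n_stations, trials=400, seed=0):
    """Spread, over many random networks of size N, of the network
    average of a displacement field; a proxy for the network effect
    on a Helmert translation parameter."""
    rng = np.random.default_rng(seed)
    means = []
    for _ in range(trials):
        lat = np.arcsin(rng.uniform(-1, 1, n_stations))  # uniform on sphere
        lon = rng.uniform(0, 2 * np.pi, n_stations)
        means.append(field(lat, lon).mean())
    return np.std(means)

# a degree-1 (north-south) pattern, the kind that drives geocenter error
field = lambda lat, lon: np.sin(lat)
s10 = network_translation_std(field, 10)
s40 = network_translation_std(field, 40)
```

Quadrupling N should roughly halve the spread, matching the 1/√N law.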

  2. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In such a way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important for small multi-rotorcraft UAVs because of their low endurance due to short battery life. Images can be stored on board with either still-image or video data compression. Still-image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms which fail when the motion vectors are significantly long and the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed to refine the motion field estimated from the metadata alone. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system and thereby maximize encoder performance. Experiments are performed on both simulated and real-world video sequences.
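The metadata-only global motion prediction can be sketched under a flat-ground, nadir-looking pinhole assumption: a horizontal platform translation t at height h maps to an image shift of roughly f·t/h pixels. All names and parameters here are illustrative, not the paper's:

```python
def global_motion_from_metadata(t_xy, altitude_m, focal_px):
    """Predict the dominant inter-frame pixel shift of a downward-looking
    camera from INS metadata alone: translation (tx, ty) in meters at
    altitude h maps to approximately (f*tx/h, f*ty/h) pixels."""
    tx, ty = t_xy
    return focal_px * tx / altitude_m, focal_px * ty / altitude_m
```

This coarse prediction would seed the encoder's motion search; the low-complexity image analysis then refines it and feeds the correction back to the navigation estimates.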

  3. High-Frame-Rate Speckle-Tracking Echocardiography.

    PubMed

    Joos, Philippe; Poree, Jonathan; Liebgott, Herve; Vray, Didier; Baudet, Mathilde; Faurie, Julia; Tournoux, Francois; Cloutier, Guy; Nicolas, Barbara; Garcia, Damien

    2018-05-01

    Conventional echocardiography is the leading modality for noninvasive cardiac imaging. It has recently been shown that high-frame-rate echocardiography using diverging waves could improve cardiac assessment. The spatial resolution and contrast associated with this method are commonly improved by coherent compounding of steered beams. However, owing to fast tissue velocities in the myocardium, the summation process of successive diverging waves can lead to destructive interferences if motion compensation (MoCo) is not considered. Coherent compounding methods based on MoCo have demonstrated their potential to provide high-contrast B-mode cardiac images. Ultrafast speckle-tracking echocardiography (STE) based on common speckle-tracking algorithms could substantially benefit from this original approach. In this paper, we applied STE to high-frame-rate B-mode images obtained with a specific MoCo technique to quantify the 2-D motion and tissue velocities of the left ventricle. The method was first validated in vitro and then evaluated in vivo in the four-chamber view of 10 volunteers. High-contrast high-resolution B-mode images were constructed at 500 frames/s. The sequences were generated with a Verasonics scanner and a 2.5-MHz phased array. The 2-D motion was estimated with standard cross correlation combined with three different subpixel adjustment techniques. The estimated in vitro velocity vectors derived from STE were consistent with the expected values, with normalized errors ranging from 4% to 12% in the radial direction and from 10% to 20% in the cross-range direction. Global longitudinal strain of the left ventricle was also obtained from STE in 10 subjects and compared to the results provided by a clinical scanner: group means were not statistically different (P value = 0.33). The in vitro and in vivo results showed that MoCo enables preservation of the myocardial speckles and in turn allows high-frame-rate STE.
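The displacement estimation described above (cross correlation plus subpixel adjustment) can be sketched in one dimension; this is a generic normalized-cross-correlation search with parabolic peak interpolation, offered as an illustration rather than the authors' exact pipeline.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma)**2 for x in a) * sum((y - mb)**2 for y in b)) ** 0.5
    return num / den if den else 0.0

def estimate_shift(ref, mov, max_lag):
    """Integer-lag NCC search followed by parabolic subpixel refinement.

    Returns s such that mov[i] ~ ref[i - s]."""
    scores = {}
    for s in range(-max_lag, max_lag + 1):
        if s >= 0:
            a, b = mov[s:], ref[:len(ref) - s]
        else:
            a, b = mov[:s], ref[-s:]
        scores[s] = ncc(a, b)
    s0 = max(scores, key=scores.get)
    if -max_lag < s0 < max_lag:          # parabolic fit around the peak
        c0, c1, c2 = scores[s0 - 1], scores[s0], scores[s0 + 1]
        denom = c0 - 2 * c1 + c2
        if denom != 0:
            s0 += 0.5 * (c0 - c2) / denom
    return s0
```

For 2-D speckle tracking the same idea is applied to image blocks, with the subpixel fit performed independently along each axis.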

  4. Alignment of cryo-EM movies of individual particles by optimization of image translations.

    PubMed

    Rubinstein, John L; Brubaker, Marcus A

    2015-11-01

    Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual <1 MDa protein particle trajectories to be estimated, but requires rolling averages to be calculated from frames and fits linear trajectories for particles. Here we describe an algorithm that allows for individual <1 MDa particle images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM. Copyright © 2015 Elsevier Inc. All rights reserved.
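As a toy illustration of the paper's single objective function - the correlation of each shifted frame with the sum of the shifted frames, minus a penalty on rapid shift changes - the sketch below uses integer circular shifts and brute-force search in place of the paper's analytic-derivative optimizer:

```python
from itertools import product

def circshift(x, s):
    """Circularly shift list x to the right by s samples."""
    n = len(x)
    s %= n
    return x[n - s:] + x[:n - s]

def objective(frames, shifts, lam):
    """Correlation of each realigned frame with the sum of realigned frames,
    penalizing rapid shift changes between adjacent frames."""
    realigned = [circshift(f, -s) for f, s in zip(frames, shifts)]
    total = [sum(col) for col in zip(*realigned)]
    corr = sum(sum(a * b for a, b in zip(f, total)) for f in realigned)
    penalty = lam * sum((shifts[i + 1] - shifts[i]) ** 2
                        for i in range(len(shifts) - 1))
    return corr - penalty

def align(frames, max_shift=2, lam=0.01):
    """Brute-force maximization of the objective (first frame fixed at 0)."""
    best, best_val = None, float("-inf")
    for rest in product(range(-max_shift, max_shift + 1),
                        repeat=len(frames) - 1):
        shifts = (0,) + rest
        val = objective(frames, shifts, lam)
        if val > best_val:
            best, best_val = shifts, val
    return best
```

Fixing the first frame's shift to zero removes the global-translation ambiguity: shifting all frames by the same amount leaves both the correlation and the smoothness penalty unchanged.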

  5. Non-linear motions in reprocessed GPS station position time series

    NASA Astrophysics Data System (ADS)

    Rudenko, Sergei; Gendt, Gerd

    2010-05-01

    Global Positioning System (GPS) data from about 400 globally distributed stations, spanning 1998 to 2007, were reprocessed using the GFZ Potsdam EPOS (Earth Parameter and Orbit System) software within the International GNSS Service (IGS) Tide Gauge Benchmark Monitoring (TIGA) Pilot Project and the IGS Data Reprocessing Campaign, with the purpose of determining weekly precise coordinates of GPS stations located at or near tide gauges. Vertical motions of these stations are used to correct the vertical motions of tide gauges for local motions and to tie tide gauge measurements to the geocentric reference frame. Other estimated parameters include daily values of the Earth rotation parameters and their rates, as well as satellite antenna offsets. The derived solution, GT1, is based on an absolute phase center variation model, ITRF2005 as the a priori reference frame, and other new models. The solution also contributed to ITRF2008. The time series of station positions are analyzed to identify non-linear motions caused by different effects. The paper presents the time series of GPS station coordinates and investigates apparent non-linear motions and their influence on GPS station height rates.

  6. Global Plate Motions Relative to the Hotspots since 48 Ma B.P. from Simultaneous Inversion of Hotspot Tracks in the Pacific, Indian, and Atlantic Oceans Constrained to Consistency with Known Relative Plate Motions

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Koivisto, E. A. L.

    2016-12-01

    A fundamental problem of global tectonics and paleomagnetism is determining what part of apparent polar wander is due to plate motion and what part is due to true polar wander. One approach for separating these is available if global hotspots can be used as a reference frame approximately fixed with respect to the deep mantle. Some other workers have used a hotspot reference based only on tracks in the Atlantic and Indian Oceans, and some have used reference frames with moving hotspots and many adjustable parameters. In sharp contrast to the assumptions made in these other works, our recent results demonstrate that there is no significant motion between the Pacific and Indo-Atlantic hotspots since 48 Ma B.P. (lower bound of zero and upper bound of 8-13 mm/yr [Koivisto et al., 2014]). Corrected methodologies combined with cumulative improvements in the age progression along the hotspot tracks, the geomagnetic reversal time scale, and relative plate reconstructions lead to significantly lower rates of motion between hotspots than found in prior studies. Building on our prior results, here we present a globally self-consistent estimate of plate motions relative to the hotspots for the past 48 million years from inversions to fit simultaneously the tracks of the Hawaiian, Louisville, Tristan da Cunha, Réunion, and Iceland hotspots constrained to consistency with known relative plate motions. Each finite rotation is estimated for an age corresponding to a key magnetic anomaly used in plate reconstructions. The new set of plate reconstructions presented here provides a firm basis for estimating absolute plate motions for the past 48 million years and, in particular, can be used to separate paleomagnetically determined apparent polar wander into the part due to plate motion and the part due to true polar wander. Implications for true polar wander since the age of the Hawaiian-Emperor Bend will be discussed.

  7. Intrinsic frame transport for a model of nematic liquid crystal

    NASA Astrophysics Data System (ADS)

    Cozzini, S.; Rull, L. F.; Ciccotti, G.; Paolini, G. V.

    1997-02-01

    We present a computer simulation study of the dynamical properties of a nematic liquid crystal model. The diffusional motion of the nematic director is taken into account in our calculations in order to give a proper estimate of the transport coefficients. Unlike other groups, we do not attempt to stabilize the director through rigid constraints or applied external fields. We instead define an intrinsic frame which moves along with the director at each step of the simulation. The transport coefficients computed in the intrinsic frame are then compared against the ones calculated in the fixed laboratory frame, to show the inadequacy of the latter for systems with fewer than 500 molecules. Using this general scheme on the Gay-Berne liquid crystal model, we demonstrate the natural motion of the director and attempt to quantify its intrinsic time scale and size dependence. Through extended simulations of systems of different size we calculate the diffusion and viscosity coefficients of this model and compare our results with values previously obtained with a fixed director.

  8. Bounded Kalman filter method for motion-robust, non-contact heart rate estimation

    PubMed Central

    Prakash, Sakthi Kumar Arul; Tucker, Conrad S.

    2018-01-01

    The authors of this work present a real-time measurement of heart rate across different lighting conditions and motion categories. This is an advancement over existing remote Photo Plethysmography (rPPG) methods that require a static, controlled environment for heart rate detection, making them impractical for real-world scenarios wherein a patient may be in motion, or remotely connected to a healthcare provider through telehealth technologies. The algorithm aims to minimize motion artifacts such as blurring and noise due to head movements (uniform, random) by employing i) a blur identification and denoising algorithm for each frame and ii) a bounded Kalman filter technique for motion estimation and feature tracking. A case study is presented that demonstrates the feasibility of the algorithm in non-contact estimation of the pulse rate of subjects performing everyday head and body movements. The method in this paper outperforms state of the art rPPG methods in heart rate detection, as revealed by the benchmarked results. PMID:29552419
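A minimal scalar sketch of a bounded Kalman update - clamping the innovation so that motion artifacts and outliers cannot drag the state estimate - might look as follows; the random-walk model, gains, and bound are illustrative assumptions, not the authors' tracker:

```python
class BoundedKalman1D:
    """Random-walk Kalman filter whose innovation is clamped to +/- bound."""

    def __init__(self, x0, p0=1.0, q=0.01, r=0.1, bound=1.0):
        self.x, self.p = x0, p0      # state estimate and its variance
        self.q, self.r = q, r        # process and measurement noise
        self.bound = bound           # maximum allowed innovation

    def update(self, z):
        self.p += self.q                                  # predict step
        innov = z - self.x
        innov = max(-self.bound, min(self.bound, innov))  # bound outliers
        k = self.p / (self.p + self.r)                    # Kalman gain
        self.x += k * innov
        self.p *= (1.0 - k)
        return self.x
```

Feeding the filter measurements like [10.1, 9.9, 100.0, 10.2] leaves the estimate near 10, because the spurious jump to 100 is clamped to the bound before the update.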

  9. Adaptive correlation filter-based video stabilization without accumulative global motion estimation

    NASA Astrophysics Data System (ADS)

    Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil

    2014-12-01

    We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information from past input frames and can track the original stabilization position. Because of this stabilization model, the proposed method does not need accumulative global motion estimation and can recover the original position even if interframe motion estimation fails. It can also intelligently handle damaged or interrupted video sequences. Moreover, because it is simple and well suited to parallel schemes, we implemented it with ease on a commercial field-programmable gate array and on a graphics processing unit board with compute unified device architecture (CUDA). Experimental results show that the proposed approach is both fast and robust.

  10. Noninvasive Thermometry Assisted by a Dual Function Ultrasound Transducer for Mild Hyperthermia

    PubMed Central

    Lai, Chun-Yen; Kruse, Dustin E.; Caskey, Charles F.; Stephens, Douglas N.; Sutcliffe, Patrick L.; Ferrara, Katherine W.

    2010-01-01

    Mild hyperthermia is increasingly important for the activation of temperature-sensitive drug delivery vehicles. Noninvasive ultrasound thermometry based on a 2-D speckle tracking algorithm was examined in this study. Here, a commercial ultrasound scanner, a customized co-linear array transducer, and a controlling PC system were used to generate mild hyperthermia. Because the co-linear array transducer is capable of both therapy and imaging at widely separated frequencies, RF image frames were acquired during therapeutic insonation and then exported for off-line analysis. For in vivo studies in a mouse model, before temperature estimation, motion correction was applied between a reference RF frame and subsequent RF frames. Both in vitro and in vivo experiments were examined; in the in vitro and in vivo studies, the average temperature error had a standard deviation of 0.7°C and 0.8°C, respectively. The application of motion correction improved the accuracy of temperature estimation, where the error range was −1.9 to 4.5°C without correction compared with −1.1 to 1.0°C following correction. This study demonstrates the feasibility of combining therapy and monitoring using a commercial system. In the future, real-time temperature estimation will be incorporated into this system. PMID:21156363

  11. Multiple-camera/motion stereoscopy for range estimation in helicopter flight

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.

    1993-01-01

    Aiding the pilot during low-altitude helicopter flight by detecting obstacles and planning obstacle-free flight paths is desirable, as it improves safety and reduces pilot workload. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success in solving this problem using an image sequence from a single moving camera. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
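The geometric core of motion/stereo ranging can be reduced to a toy pinhole model: a feature's image coordinate before and after a known lateral camera translation gives depth from disparity. This simplification (standing in for the paper's extended Kalman filter) shows where range information comes from, and why it vanishes as the disparity goes to zero:

```python
def pixel_u(X, Z, focal):
    """Horizontal image coordinate of a point at lateral offset X, depth Z."""
    return focal * X / Z

def depth_from_motion(u1, u2, focal, baseline):
    """Depth from the disparity induced by a lateral camera move `baseline`."""
    disparity = u1 - u2
    if disparity == 0:
        raise ValueError("no disparity: no range information available")
    return focal * baseline / disparity

# a point 2 m to the side and 10 m ahead; the camera moves 0.5 m laterally
u1 = pixel_u(2.0, 10.0, focal=500.0)
u2 = pixel_u(2.0 - 0.5, 10.0, focal=500.0)
Z = depth_from_motion(u1, u2, focal=500.0, baseline=0.5)
```

The degenerate case (disparity approaching zero) corresponds to the loss of range information near the direction of motion that the multiple-camera arrangement is designed to avoid.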

  12. A distributed automatic target recognition system using multiple low resolution sensors

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj

    2008-04-01

    In this paper, we propose a multi-agent system which uses swarming techniques to perform high-accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can cooperatively share information from low-resolution images of different looks and use this information to perform high-accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV) systems-based approach is proposed which integrates the processing capabilities, combines detection reporting with live video exchange, and provides swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene, and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bit shifts and additions, yet achieves a 16X pixel resolution enhancement and is moreover parallelizable. We develop advanced, adaptive particle-filtering-based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the images received from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. To address this, a robust dynamic homography estimation algorithm is proposed, using homography decomposition and ground plane surface estimation.

  13. Overall properties of the Gaia DR1 reference frame

    NASA Astrophysics Data System (ADS)

    Liu, N.; Zhu, Z.; Liu, J.-C.; Ding, C.-Y.

    2017-03-01

    Aims: The first Gaia data release (Gaia DR1) provides 2191 ICRF2 sources with their positions in the auxiliary quasar solution and five astrometric parameters - positions, parallaxes, and proper motions - for stars in common between the Tycho-2 catalogue and Gaia in the joint Tycho-Gaia astrometric solution (TGAS). We aim to analyze the overall properties of the Gaia DR1 reference frame. Methods: We compare quasar positions of the auxiliary quasar solution with ICRF2 sources using different samples and evaluate the influence of the Galactic aberration effect on the Gaia DR1 reference frame over the J2000.0-J2015.0 period. Then we estimate the global rotation between the TGAS and Tycho-2 proper motion systems to investigate the properties of the Gaia DR1 reference frame. Finally, a Galactic kinematics analysis using the K-M giant proper motions is performed to understand the properties of the Gaia DR1 reference frame. Results: The positional comparison between the auxiliary quasar solution and ICRF2 shows negligible orientation and validates the declination bias of -0.1 mas in Gaia quasar positions with respect to ICRF2. The Galactic aberration effect is estimated to cause an offset of 0.01 mas in the Z-axis direction of the Gaia DR1 reference frame. The global rotation between the TGAS and Tycho-2 proper motion systems, obtained with different samples, is much smaller than the claimed value of 0.24 mas/yr. For the Galactic kinematics analysis of the TGAS K-M giants, we find possible non-zero Galactic rotation components beyond the classical Oort constants: the rigid part ω_YG = -0.38 ± 0.15 mas/yr and the differential part ω′_YG = -0.29 ± 0.19 mas/yr around the Y_G axis of Galactic coordinates, which indicates possible residual rotation in the Gaia DR1 reference frame or problems in the current Galactic kinematical model.
Conclusions: The Gaia DR1 reference frame is well aligned with ICRF2, and the possible influence of the Galactic aberration effect should be taken into consideration for the future Gaia-ICRF link. The cause of the rather small global rotation between the TGAS and Tycho-2 proper motion systems is unclear and needs further investigation. The possible residual rotation in the Gaia DR1 reference frame inferred from the Galactic kinematic analysis should be noted and examined in future data releases.

  14. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted more attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the corresponding depth of non-key-frames through key-frames, is more desirable owing to its advantage of balancing labor cost and 3D effects. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and reduce the depth propagation errors caused by occlusion. The potential key-frames are localized in terms of clustered color variation and motion intensity. The distance of the key-frame interval is also taken into account to keep the accumulated propagation errors under control and guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the depth maps of non-key-frames are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom in/out effects, a bi-directional depth propagation scheme is adopted in which a non-key-frame is interpolated from two adjacent key-frames. The experimental results show that the proposed scheme performs better than an existing 2D-to-3D scheme with a fixed key-frame interval.
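One way to picture adaptive key-frame placement is a running score that accumulates per-frame change (e.g., color variation plus motion intensity) and starts a new key-frame when the accumulation crosses a threshold or a maximum interval is reached. The scoring and thresholds below are illustrative assumptions, not the paper's exact criterion:

```python
def select_keyframes(change_scores, threshold=1.0, max_gap=4):
    """Place a key-frame when accumulated change exceeds `threshold`
    or when `max_gap` frames have passed since the last key-frame."""
    keys, acc = [0], 0.0
    for i in range(1, len(change_scores)):
        acc += change_scores[i]
        if acc >= threshold or i - keys[-1] >= max_gap:
            keys.append(i)
            acc = 0.0
    return keys
```

A scene change (large score) forces a key-frame immediately, while the `max_gap` rule bounds how far propagation errors can accumulate between key-frames.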

  15. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover areas where no reliable motion can be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for occlusion areas based on an analysis of their surrounding motion distribution. As a result, frames interpolated using the proposed scheme have clearer structure edges, and ghost artifacts are greatly reduced. Experimental results show that our interpolated results have better visual quality than those of other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.
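A drastically simplified relative of correlation-based motion-vector processing is the vector-median test: flag a vector as unreliable when it disagrees with the median of its neighbours, and replace it with that median. The paper's reliability classification is richer; this 1-D sketch only conveys the neighbourhood-consistency idea:

```python
def median(vals):
    """Median of a small list (middle element of the sorted values)."""
    s = sorted(vals)
    return s[len(s) // 2]

def repair_motion_vectors(mvs, max_dev=2.0):
    """Replace each 1-D motion vector that deviates from its neighbourhood
    median by more than `max_dev` with that median."""
    out = list(mvs)
    for i in range(1, len(mvs) - 1):
        m = median([mvs[i - 1], mvs[i], mvs[i + 1]])
        if abs(mvs[i] - m) > max_dev:
            out[i] = m
    return out
```

In 2-D the same test uses a spatial window of neighbouring block vectors, and the correction can be applied iteratively from the most to the least reliable vectors.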

  16. EVA Robotic Assistant Project: Platform Attitude Prediction

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin M.

    2003-01-01

    The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful, it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster-than-walking speed outdoors, but it has no suspension. Its wheels with inflated rubber tires are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The motion of the stereo camera pair mounted on the robot that results from this motion as the robot drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement.
This has been accomplished in two ways: first, a standalone head stabilizer has been implemented, and second, the estimates have been used to influence the search algorithm of the stereo tracker. Studies of the image motion of a tracked object indicate that image motion is suppressed while the robot is crossing rough terrain. This work expands the range of speed and surface roughness over which the robot should be able to track and follow a field geologist and accept arm-gesture commands from the geologist.
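As an illustration of fusing angular-rate sensors with accelerometers, here is a complementary filter - a simpler stand-in for the project's Kalman filter that shows the same blend of short-term gyro integration and long-term accelerometer correction (the rates, gains, and sensor values are assumed):

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Blend integrated gyro rate (good short-term) with the accelerometer
    tilt angle (good long-term) to estimate attitude over time."""
    angle = accel_angles[0]          # initialize from the accelerometer
    history = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        history.append(angle)
    return history

# a stationary base: true tilt 10 deg, gyro has a 0.5 deg/s bias
est = complementary_filter([0.5] * 500, [10.0] * 500)
```

With a biased gyro alone the integrated angle would drift without bound; the small accelerometer weight (1 - alpha) pins the estimate near the true 10 deg, at the cost of a small steady-state offset.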

  17. A Method of Time-Intensity Curve Calculation for Vascular Perfusion of Uterine Fibroids Based on Subtraction Imaging with Motion Correction

    NASA Astrophysics Data System (ADS)

    Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming

    2016-12-01

    The time-intensity curve (TIC) from a contrast-enhanced ultrasound (CEUS) image sequence of uterine fibroids provides important parameter information for qualitative and quantitative evaluation of the efficacy of treatments such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the process of CEUS imaging, and this reduces the accuracy of TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS video recording was decoded into frame images based on the recording frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between two frames based on a warp technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray level of all pixels in the PDOVP of each image was determined, and this was taken as the TIC of the CEUS image sequence. Both the correlation coefficient and the mutual information of the results obtained with the proposed method were larger than those obtained with the original method. PDOVP extraction results improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of the TIC became less pronounced and that calculation accuracy improved after motion correction. The proposed method can effectively overcome the influence of motion, mainly caused by respiration, and allows precise calculation of the TIC.
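Once motion correction and subtraction are done, the TIC reduces to the mean subtracted intensity over the perfusion region, one value per frame. A minimal sketch with frames as flat pixel lists (the optical-flow warping is assumed already applied; the enhancement threshold is an illustrative assumption):

```python
def extract_pdovp(frames, baseline, enhance_thresh=10.0):
    """Positional distribution of vascular perfusion: pixels whose maximum
    enhancement over the baseline frame exceeds a threshold."""
    n = len(baseline)
    return [i for i in range(n)
            if max(f[i] - baseline[i] for f in frames) > enhance_thresh]

def time_intensity_curve(frames, baseline):
    """Mean subtracted intensity over the PDOVP, one value per frame."""
    mask = extract_pdovp(frames, baseline)
    if not mask:
        return [0.0] * len(frames)
    return [sum(f[i] - baseline[i] for i in mask) / len(mask)
            for f in frames]
```

Restricting the average to the PDOVP (rather than the whole image) is what keeps the curve from being diluted by non-perfused tissue.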

  18. Incompressible Deformation Estimation Algorithm (IDEA) from Tagged MR Images

    PubMed Central

    Liu, Xiaofeng; Abd-Elmoniem, Khaled Z.; Stone, Maureen; Murano, Emi Z.; Zhuo, Jiachen; Gullapalli, Rao P.; Prince, Jerry L.

    2013-01-01

    Measuring the three-dimensional motion of muscular tissues, e.g., the heart or the tongue, using magnetic resonance (MR) tagging is typically carried out by interpolating the two-dimensional motion information measured on orthogonal stacks of images. The incompressibility of muscle tissue is an important constraint on the reconstructed motion field and can significantly help to counter the sparsity and incompleteness of the available motion information. Previous methods utilizing this fact produced incompressible motions with limited accuracy. In this paper, we present an incompressible deformation estimation algorithm (IDEA) that reconstructs a dense representation of the three-dimensional displacement field from tagged MR images and the estimated motion field is incompressible to high precision. At each imaged time frame, the tagged images are first processed to determine components of the displacement vector at each pixel relative to the reference time. IDEA then applies a smoothing, divergence-free, vector spline to interpolate velocity fields at intermediate discrete times such that the collection of velocity fields integrate over time to match the observed displacement components. Through this process, IDEA yields a dense estimate of a three-dimensional displacement field that matches our observations and also corresponds to an incompressible motion. The method was validated with both numerical simulation and in vivo human experiments on the heart and the tongue. PMID:21937342
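The incompressibility constraint central to IDEA means the interpolated velocity fields must be divergence-free. As a sketch, here is a finite-difference divergence check on a 2-D field, evaluated on a rigid-rotation field that is divergence-free by construction; this is a verification utility, not the divergence-free vector spline itself:

```python
def divergence(u, v, h=1.0):
    """Central-difference divergence du/dx + dv/dy of a 2-D vector field
    sampled on a grid (lists of rows); returns interior values only."""
    ny, nx = len(u), len(u[0])
    div = []
    for j in range(1, ny - 1):
        row = []
        for i in range(1, nx - 1):
            dudx = (u[j][i + 1] - u[j][i - 1]) / (2 * h)
            dvdy = (v[j + 1][i] - v[j - 1][i]) / (2 * h)
            row.append(dudx + dvdy)
        div.append(row)
    return div

# rigid rotation field (u, v) = (-y, x) is divergence-free
n = 8
u = [[-float(j) for i in range(n)] for j in range(n)]
v = [[float(i) for i in range(n)] for j in range(n)]
d = divergence(u, v)
```

In three dimensions the same check (with a dw/dz term) can quantify how closely a reconstructed displacement field honors tissue incompressibility.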

  19. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.

  20. Phase Helps Find Geometrically Optimal Gaits

    NASA Astrophysics Data System (ADS)

    Revzen, Shai; Hatton, Ross

    Geometric motion planning describes motions of animals and machines governed by ġ = g A(q) q̇, where the connection A(·) relates the shape q and shape velocity q̇ to the body-frame velocity g⁻¹ġ ∈ se(3). Measuring the entire connection over a multidimensional q is often infeasible with current experimental methods. We show how using a phase estimator can make tractable measuring the local structure of the connection surrounding a periodic motion q(φ) driven by a phase φ ∈ S¹. This approach reduces the complexity of the estimation problem by a factor of dim q. The results suggest that phase estimation can be combined with geometric optimization into an iterative gait optimization algorithm usable on experimental systems, or alternatively, to allow the geometric optimality of an observed gait to be detected. ARO W911NF-14-1-0573, NSF 1462555.

  1. MR-assisted PET motion correction in simultaneous PET/MRI studies of dementia subjects.

    PubMed

    Chen, Kevin T; Salcedo, Stephanie; Chonde, Daniel B; Izquierdo-Garcia, David; Levine, Michael A; Price, Julie C; Dickerson, Bradford C; Catana, Ciprian

    2018-03-08

    Subject motion in positron emission tomography (PET) studies leads to image blurring and artifacts; simultaneously acquired magnetic resonance imaging (MRI) data provides a means for motion correction (MC) in integrated PET/MRI scanners. To assess the effect of realistic head motion and MR-based MC on static [18F]-fluorodeoxyglucose (FDG) PET images in dementia patients. Observational study. Thirty dementia subjects were recruited. 3T hybrid PET/MR scanner where EPI-based and T1-weighted sequences were acquired simultaneously with the PET data. Head motion parameters estimated from high temporal resolution MR volumes were used for PET MC. The MR-based MC method was compared to PET frame-based MC methods in which motion parameters were estimated by coregistering 5-minute frames before and after accounting for the attenuation-emission mismatch. The relative changes in standardized uptake value ratios (SUVRs) between the PET volumes processed with the various MC methods, without MC, and the PET volumes with simulated motion were compared in relevant brain regions. The absolute value of the regional SUVR relative change was assessed with pairwise paired t-tests testing at the P = 0.05 level, comparing the values obtained through different MR-based MC processing methods as well as across different motion groups. The intraregion voxelwise variability of regional SUVRs obtained through different MR-based MC processing methods was also assessed with pairwise paired t-tests testing at the P = 0.05 level. MC had a greater impact on PET data quantification in subjects with larger amplitude motion (higher than 18% in the medial orbitofrontal cortex) and greater changes were generally observed for the MR-based MC method compared to the frame-based methods. Furthermore, a mean relative change of ∼4% was observed after MC even at the group level, suggesting the importance of routinely applying this correction.
The intraregion voxelwise variability of regional SUVRs was also decreased using MR-based MC. All comparisons were significant at the P = 0.05 level. Incorporating temporally correlated MR data to account for intraframe motion has a positive impact on FDG PET image quality and data quantification in dementia patients. Level of Evidence: 3. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  2. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled in synchronization with a free-vibration-type actuator vibrating at hundreds of hertz with a large amplitude, so that motion blur is significantly reduced in free-viewpoint high-frame-rate video shooting of fast-moving objects while drawing out the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that the proposed method reduces image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  3. Joint estimation of subject motion and tracer kinetic parameters of dynamic PET data in an EM framework

    NASA Astrophysics Data System (ADS)

    Jiao, Jieqing; Salinas, Cristian A.; Searle, Graham E.; Gunn, Roger N.; Schnabel, Julia A.

    2012-02-01

    Dynamic Positron Emission Tomography is a powerful tool for quantitative imaging of in vivo biological processes. The long scan durations necessitate motion correction, to maintain the validity of the dynamic measurements, which can be particularly challenging due to the low signal-to-noise ratio (SNR) and spatial resolution, as well as the complex tracer behaviour in the dynamic PET data. In this paper we develop a novel automated expectation-maximisation image registration framework that incorporates temporal tracer kinetic information to correct for inter-frame subject motion during dynamic PET scans. We employ the Zubal human brain phantom to simulate dynamic PET data using SORTEO (a Monte Carlo-based simulator), in order to validate the proposed method for its ability to recover imposed rigid motion. We have conducted a range of simulations using different noise levels, and corrupted the data with a range of rigid motion artefacts. The performance of our motion correction method is compared with pairwise registration using normalised mutual information as a voxel similarity measure (an approach conventionally used to correct for dynamic PET inter-frame motion based solely on intensity information). To quantify registration accuracy, we calculate the target registration error across the images. The results show that our new dynamic image registration method based on tracer kinetics yields better realignment of the simulated datasets, halving the target registration error when compared to the conventional method at small motion levels, as well as yielding smaller residuals in translation and rotation parameters. We also show that our new method is less affected by the low signal in the first few frames, which the conventional method based on normalised mutual information fails to realign.
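The validation metric mentioned above, the target registration error, can be sketched as follows; this is a generic illustration in terms of 4×4 homogeneous rigid transforms (the function name and matrix convention are assumptions, not the authors' code):

```python
import numpy as np

def target_registration_error(points, T_true, T_est):
    """Mean Euclidean distance between target points mapped by the true
    and the estimated rigid transforms (4x4 homogeneous matrices)."""
    p = np.hstack([points, np.ones((len(points), 1))]).T  # 4 x N homogeneous
    diff = (T_true @ p - T_est @ p)[:3]                   # residual per point
    return np.linalg.norm(diff, axis=0).mean()
```

A registration that is off by a pure 1 mm translation, for instance, yields a TRE of exactly 1 mm at every target point.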

  4. New test of general relativity - Measurement of de Sitter geodetic precession rate for lunar perigee

    NASA Technical Reports Server (NTRS)

    Bertotti, Bruno; Ciufolini, Ignazio; Bender, Peter L.

    1987-01-01

    According to general relativity, the calculated rate of motion of lunar perigee should include a contribution of 19.2 msec/yr from geodetic precession. It is shown that existing analyses of lunar-laser-ranging data confirm the general-relativistic rate for geodetic precession with respect to the planetary dynamical frame. In addition, the comparison of earth-rotation results from lunar laser ranging and from VLBI shows that the relative drift of the planetary dynamical frame and the extragalactic VLBI reference frame is small. The estimated accuracy is about 10 percent.

  5. A rate-constrained fast full-search algorithm based on block sum pyramid.

    PubMed

    Song, Byung Cheol; Chun, Kang-Wook; Ra, Jong Beom

    2005-03-01

    This paper presents a fast full-search algorithm (FSA) for rate-constrained motion estimation. The proposed algorithm, which is based on the block sum pyramid frame structure, successively eliminates unnecessary search positions according to a rate-constrained criterion. The algorithm provides estimation performance identical to that of a conventional FSA with a rate constraint, while achieving a considerable reduction in computation.
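The elimination step can be illustrated with a minimal sketch, assuming a sum-of-absolute-differences (SAD) matching criterion, a toy motion-vector rate model, and a Lagrangian cost J = SAD + λ·R; the pyramid's partial block sums give a lower bound on the SAD, so any candidate whose bound already exceeds the current best rate-constrained cost can be skipped without changing the result:

```python
import numpy as np

def block_sum_pyramid(block):
    """Levels of 2x2 partial sums; the coarsest level is the total block sum."""
    levels = [block.astype(np.int64)]
    while levels[-1].shape[0] > 1:
        b = levels[-1]
        levels.append(b[0::2, 0::2] + b[1::2, 0::2] + b[0::2, 1::2] + b[1::2, 1::2])
    return levels[::-1]  # coarsest first

def rate(mv):
    """Toy motion-vector rate model (an assumption, not the paper's codec rate)."""
    return abs(mv[0]) + abs(mv[1])

def rc_full_search(cur, ref, x, y, N=8, srange=4, lam=2.0):
    """Rate-constrained full search over (dx, dy), pruned level by level:
    at each pyramid level, sum |partial-sum differences| <= true SAD."""
    cur_pyr = block_sum_pyramid(cur[y:y+N, x:x+N])
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            yy, xx = y + dy, x + dx
            if not (0 <= yy <= ref.shape[0] - N and 0 <= xx <= ref.shape[1] - N):
                continue
            lam_r = lam * rate((dx, dy))
            cand_pyr = block_sum_pyramid(ref[yy:yy+N, xx:xx+N])
            pruned = False
            for cl, rl in zip(cur_pyr, cand_pyr):      # coarse -> fine
                if np.abs(cl - rl).sum() + lam_r >= best_cost:
                    pruned = True
                    break
            if pruned:
                continue
            cost = np.abs(cur_pyr[-1] - cand_pyr[-1]).sum() + lam_r  # true SAD + lam*R
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```

Because the finest pyramid level is the block itself, the pruned search returns exactly the same motion vector and cost as an exhaustive rate-constrained full search.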

  6. Lung tumor tracking in fluoroscopic video based on optical flow

    PubMed Central

    Xu, Qianyi; Hamilton, Russell J.; Schowengerdt, Robert A.; Alexander, Brian; Jiang, Steve B.

    2008-01-01

    Respiratory gating and tumor tracking for dynamic multileaf collimator delivery require accurate and real-time localization of the lung tumor position during treatment. Deriving tumor position from external surrogates such as abdominal surface motion may have large uncertainties due to the intra- and interfraction variations of the correlation between the external surrogates and internal tumor motion. Implanted fiducial markers can be used to track tumors fluoroscopically in real time with sufficient accuracy; however, implanting fiducials bronchoscopically may not be a practical procedure. In this work, a method is presented to track the lung tumor mass or relevant anatomic features projected in fluoroscopic images, without implanted fiducial markers, based on an optical flow algorithm. The algorithm generates the centroid position of the tracked target and ignores shape changes of the tumor mass shadow. The tracking starts with a segmented tumor projection in an initial image frame. Then, the optical flow between this and all incoming frames acquired during treatment delivery is computed as an initial estimate of the tumor centroid displacement. The tumor contour in the initial frame is transferred to the incoming frames based on the average of the motion vectors, and its positions in the incoming frames are determined by fine-tuning the contour positions using a template matching algorithm with a small search range. The tracking results were validated by comparison with clinician-determined contours on each frame. The position difference in 95% of the frames was found to be less than 1.4 pixels (∼0.7 mm) in the best case and 2.8 pixels (∼1.4 mm) in the worst case for the five patients studied. PMID:19175094
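The two-stage idea (shift the contour by the mean optical-flow vector, then fine-tune with template matching over a small search range) can be sketched as below; the SSD matching criterion and all names are illustrative assumptions, and a real implementation would first compute dense optical flow between the frames:

```python
import numpy as np

def refine_by_template_match(prev_frame, next_frame, contour_mask, mean_flow, search=2):
    """Shift the tracked region by the rounded mean optical-flow vector (dy, dx),
    then fine-tune the displacement with an SSD template match in a small window."""
    ys, xs = np.nonzero(contour_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    template = prev_frame[y0:y1, x0:x1].astype(float)
    dy0, dx0 = int(round(mean_flow[0])), int(round(mean_flow[1]))
    best = (np.inf, dy0, dx0)
    for dy in range(dy0 - search, dy0 + search + 1):
        for dx in range(dx0 - search, dx0 + search + 1):
            patch = next_frame[y0+dy:y1+dy, x0+dx:x1+dx]
            if patch.shape != template.shape:   # skip out-of-bounds candidates
                continue
            ssd = np.sum((patch.astype(float) - template) ** 2)
            if ssd < best[0]:
                best = (ssd, dy, dx)
    return best[1], best[2]  # refined (dy, dx) displacement
```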

  7. Lung tumor tracking in fluoroscopic video based on optical flow.

    PubMed

    Xu, Qianyi; Hamilton, Russell J; Schowengerdt, Robert A; Alexander, Brian; Jiang, Steve B

    2008-12-01

    Respiratory gating and tumor tracking for dynamic multileaf collimator delivery require accurate and real-time localization of the lung tumor position during treatment. Deriving tumor position from external surrogates such as abdominal surface motion may have large uncertainties due to the intra- and interfraction variations of the correlation between the external surrogates and internal tumor motion. Implanted fiducial markers can be used to track tumors fluoroscopically in real time with sufficient accuracy; however, implanting fiducials bronchoscopically may not be a practical procedure. In this work, a method is presented to track the lung tumor mass or relevant anatomic features projected in fluoroscopic images, without implanted fiducial markers, based on an optical flow algorithm. The algorithm generates the centroid position of the tracked target and ignores shape changes of the tumor mass shadow. The tracking starts with a segmented tumor projection in an initial image frame. Then, the optical flow between this and all incoming frames acquired during treatment delivery is computed as an initial estimate of the tumor centroid displacement. The tumor contour in the initial frame is transferred to the incoming frames based on the average of the motion vectors, and its positions in the incoming frames are determined by fine-tuning the contour positions using a template matching algorithm with a small search range. The tracking results were validated by comparison with clinician-determined contours on each frame. The position difference in 95% of the frames was found to be less than 1.4 pixels (approximately 0.7 mm) in the best case and 2.8 pixels (approximately 1.4 mm) in the worst case for the five patients studied.

  8. Vision-based stress estimation model for steel frame structures with rigid links

    NASA Astrophysics Data System (ADS)

    Park, Hyo Seon; Park, Jun Su; Oh, Byung Kwan

    2017-07-01

    This paper presents a stress estimation model for the safety evaluation of steel frame structures with rigid links using a vision-based monitoring system. In this model, the deformed shape of a structure under external loads is estimated via displacements measured by a motion capture system (MCS), which is a non-contact displacement measurement device. During the estimation of the deformed shape, the effective lengths of the rigid link ranges in the frame structure are identified. The radius of curvature of the structural member to be monitored is calculated using the estimated deformed shape and is employed to estimate stress. Using an MCS in the presented model, the safety of a structure can be assessed without attaching strain gauges. In addition, because the stress is directly extracted from the radius of curvature obtained from the measured deformed shape, information on the loadings and boundary conditions of the structure is not required. Furthermore, the model, which includes the identification of the effective lengths of the rigid links, can consider the influences of the stiffness of the connection and support on the deformation in the stress estimation. To verify the applicability of the presented model, static loading tests for a steel frame specimen were conducted. By comparing the stress estimated by the model with the measured stress, the validity of the model was confirmed.
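The final step, stress from the measured deformed shape, can be given a minimal numeric sketch: assuming small deflections (so that curvature reduces to the second derivative of displacement) and Euler-Bernoulli bending (extreme-fiber stress σ = E·c·κ), with the modulus and section values below being placeholder assumptions:

```python
import numpy as np

E = 205e9   # Young's modulus of steel [Pa] (placeholder assumption)
c = 0.15    # distance from neutral axis to extreme fiber [m] (placeholder assumption)

def curvature_from_displacements(x, w):
    """Curvature of the deformed shape from evenly spaced displacement
    measurements via central finite differences (small-deflection assumption:
    kappa ~ w''). Returns curvature at the interior measurement points."""
    dx = x[1] - x[0]
    return (w[:-2] - 2 * w[1:-1] + w[2:]) / dx**2

def bending_stress(kappa):
    """Euler-Bernoulli bending: stress at the extreme fiber."""
    return E * c * kappa
```

For a member bent to a constant radius of curvature R = 500 m, for example, the extreme-fiber stress is E·c/R = 205e9 × 0.15 / 500 ≈ 61.5 MPa.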

  9. The Current Status and Tendency of China Millimeter Coordinate Frame Implementation and Maintenance

    NASA Astrophysics Data System (ADS)

    Cheng, P.; Cheng, Y.; Bei, J.

    2017-12-01

    China Geodetic Coordinate System 2000 (CGCS2000) was first officially declared the national standard coordinate system on July 1, 2008. This reference frame was defined in the ITRF97 frame at epoch 2000.0 and included 2600 GPS geodetic control points. The paper discusses differences between CGCS2000 and later ITRF versions, such as ITRF2014, in terms of technical implementation and maintenance. With the development of the BeiDou navigation satellite system (BDS), especially its third generation, which will provide global signal coverage, and with progress in space geodetic technology, it is becoming possible to establish a global millimeter-level reference frame based on space geodetic techniques including BDS. Millimeter-level reference frame implementation concerns two factors: 1) estimation of geocenter motion variation, and 2) modeling of nonlinear site motion. In this paper, geocenter inversion methods are discussed and the results derived from various techniques are compared. Our nonlinear site motion modeling focuses on the singular spectrum analysis method, which has apparent advantages over modeling of geophysical effects. The work presented in this paper is expected to serve as a reference for future CGCS2000 maintenance.

  10. Spherical Pendulum Small Oscillations for Slewing Crane Motion

    PubMed Central

    Perig, Alexander V.; Stadnik, Alexander N.; Deriglazov, Alexander I.

    2014-01-01

    The present paper focuses on the Lagrange mechanics-based description of small oscillations of a spherical pendulum with a uniformly rotating suspension center. The analytical solution of the natural frequencies' problem has been derived for the case of uniform rotation of a crane boom. The payload paths have been found in the inertial reference frame fixed on earth and in the noninertial reference frame, which is connected with the rotating crane boom. The numerical amplitude-frequency characteristics of the relative payload motion have been found. The mechanical interpretation of the terms in Lagrange equations has been outlined. The analytical expression and numerical estimation for cable tension force have been proposed. The numerical computational results, which correlate very accurately with the experimental observations, have been shown. PMID:24526891

  11. Feasibility of pulse wave velocity estimation from low frame rate US sequences in vivo

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; Bruce, Matthew; Hippke, Michelle; Schwartz, Alan; O'Donnell, Matthew

    2017-03-01

    The pulse wave velocity (PWV) is considered one of the most important clinical parameters to evaluate CV risk, vascular adaptation, etc. There has been substantial work attempting to measure the PWV in peripheral vessels using ultrasound (US). This paper presents a fully automatic algorithm for PWV estimation from the human carotid using US sequences acquired with a Logic E9 scanner (modified for RF data capture) and a 9L probe. Our algorithm samples the pressure wave in time by tracking wall displacements over the sequence, and estimates the PWV by calculating the temporal shift between two sampled waves at two distinct locations. Several recent studies have utilized similar ideas along with speckle tracking tools and high frame rate (above 1 kHz) sequences to estimate the PWV. To explore PWV estimation in a more typical clinical setting, we used focused-beam scanning, which yields relatively low frame rates and small fields of view (e.g., 200 Hz for a 16.7 mm field of view). For our application, a 200 Hz frame rate is low. In particular, the sub-frame temporal accuracy required for PWV estimation between locations 16.7 mm apart ranges from 0.82 of a frame for 4 m/s to 0.33 for 10 m/s. When the distance is further reduced (to 0.28 mm between two beams), the required sub-frame precision is in parts per thousand (ppt) of the frame (5 ppt for 10 m/s). As such, the contributions of our algorithm and this paper are: 1. Ability to work with a low frame rate (∼200 Hz) and a decreased lateral field of view. 2. Fully automatic segmentation of the wall intima (using raw RF images). 3. Collaborative speckle tracking of 2D axial and lateral carotid wall motion. 4. Outlier-robust PWV calculation from multiple votes using RANSAC. 5. Algorithm evaluation on volunteers of different ages and health conditions.
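The core estimate (the temporal shift between two sampled waves, resolved to sub-frame precision) can be sketched via cross-correlation with parabolic peak interpolation; this is a generic illustration, not the authors' collaborative speckle-tracking pipeline:

```python
import numpy as np

def subframe_delay(w1, w2, fs):
    """Delay of waveform w2 relative to w1, in seconds, from the peak of their
    cross-correlation, refined with parabolic sub-sample interpolation."""
    xc = np.correlate(w2 - w2.mean(), w1 - w1.mean(), mode='full')
    k = np.argmax(xc)
    delta = 0.0
    if 0 < k < len(xc) - 1:
        denom = xc[k-1] - 2 * xc[k] + xc[k+1]
        if denom != 0:
            delta = 0.5 * (xc[k-1] - xc[k+1]) / denom  # sub-frame refinement
    lag = (k - (len(w1) - 1)) + delta
    return lag / fs

def pwv(w1, w2, distance_m, fs):
    """PWV = wall-to-wall distance divided by the wave transit time."""
    return distance_m / subframe_delay(w1, w2, fs)
```

With a 16.7 mm beam separation at 200 Hz, a 4 m/s wave arrives roughly 0.8 frames later at the distal beam, which is why the sub-frame refinement step is essential.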

  12. 2-tier in-plane motion correction and out-of-plane motion filtering for contrast-enhanced ultrasound.

    PubMed

    Ta, Casey N; Eghtedari, Mohammad; Mattrey, Robert F; Kono, Yuko; Kummel, Andrew C

    2014-11-01

    Contrast-enhanced ultrasound (CEUS) cines of focal liver lesions (FLLs) can be quantitatively analyzed to measure tumor perfusion on a pixel-by-pixel basis for diagnostic indication. However, CEUS cines acquired freehand and during free breathing cause nonuniform in-plane and out-of-plane motion from frame to frame. These motions create fluctuations in the time-intensity curves (TICs), reducing the accuracy of quantitative measurements. Out-of-plane motion cannot be corrected by image registration in 2-dimensional CEUS and degrades the quality of in-plane motion correction (IPMC). A 2-tier IPMC strategy and adaptive out-of-plane motion filter (OPMF) are proposed to provide a stable correction of nonuniform motion to reduce the impact of motion on quantitative analyses. A total of 22 cines of FLLs were imaged with dual B-mode and contrast specific imaging to acquire a 3-minute TIC. B-mode images were analyzed for motion, and the motion correction was applied to both B-mode and contrast images. For IPMC, the main reference frame was automatically selected for each cine, and subreference frames were selected in each respiratory cycle and sequentially registered toward the main reference frame. All other frames were sequentially registered toward the local subreference frame. Four OPMFs were developed and tested: subsample normalized correlation (NC), subsample sum of absolute differences, mean frame NC, and histogram. The frames that were most dissimilar to the OPMF reference frame using 1 of the 4 above criteria in each respiratory cycle were adaptively removed by thresholding against the low-pass filter of the similarity curve. Out-of-plane motion filter was quantitatively evaluated by an out-of-plane motion metric (OPMM) that measured normalized variance in the high-pass filtered TIC within the tumor region-of-interest with low OPMM being the goal. 
Results for IPMC and OPMF were qualitatively evaluated by 2 blinded observers who ranked the motion in the cines before and after various combinations of motion correction steps. Quantitative measurements showed that 2-tier IPMC and OPMF improved imaging stability. With IPMC, the NC B-mode metric increased from 0.504 ± 0.149 to 0.585 ± 0.145 over all cines (P < 0.001). Two-tier IPMC also produced better fits on the contrast-specific TIC than industry standard IPMC techniques did (P < 0.02). In-plane motion correction and OPMF were shown to improve goodness of fit for pixel-by-pixel analysis (P < 0.001). Out-of-plane motion filter reduced variance in the contrast-specific signal as shown by a median decrease of 49.8% in the OPMM. Two-tier IPMC and OPMF were also shown to qualitatively reduce motion. Observers consistently ranked cines with IPMC higher than the same cine before IPMC (P < 0.001) as well as ranked cines with OPMF higher than when they were uncorrected. The 2-tier sequential IPMC and adaptive OPMF significantly reduced motion in 3-minute CEUS cines of FLLs, thereby overcoming the challenges of drift and irregular breathing motion in long cines. The 2-tier IPMC strategy provided stable motion correction tolerant of out-of-plane motion throughout the cine by sequentially registering subreference frames that bypassed the motion cycles, thereby overcoming the lack of a nearly stationary reference point in long cines. Out-of-plane motion filter reduced apparent motion by adaptively removing frames imaged off-plane from the automatically selected OPMF reference frame, thereby tolerating nonuniform breathing motion. Selection of the best OPMF by minimizing OPMM effectively reduced motion under a wide variety of motion patterns applicable to clinical CEUS. These semiautomated processes only required user input for region-of-interest selection and can improve the accuracy of quantitative perfusion measurements.

  13. Common and Innovative Visuals: A sparsity modeling framework for video.

    PubMed

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depict the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework as CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.

  14. A multistage motion vector processing method for motion-compensated frame interpolation.

    PubMed

    Huang, Ai- Mei; Nguyen, Truong Q

    2008-05-01

    In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and is capable of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter, to avoid selecting identical unreliable vectors. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
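A constrained vector median filter of the kind described can be sketched as follows; this is a generic formulation in which the reliability mask and the L1 distance stand in for the paper's residual-energy analysis:

```python
import numpy as np

def constrained_vector_median(candidates, reliable):
    """Vector median filter restricted to reliable candidates: among the motion
    vectors flagged reliable, return the one minimizing the total L1 distance
    to all candidate vectors (falling back to all candidates if none qualify)."""
    cands = np.asarray(candidates, dtype=float)
    pool = [i for i, r in enumerate(reliable) if r]
    if not pool:
        pool = list(range(len(cands)))
    best_i = min(pool, key=lambda i: np.abs(cands[i] - cands).sum())
    return tuple(candidates[best_i])
```

The constraint matters when an outlier vector would otherwise win: the unreliable candidate is excluded from the selection pool but still contributes to the distances, so the chosen vector remains representative of the neighborhood.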

  15. Frame junction vibration transmission with a modified frame deformation model.

    PubMed

    Moore, J A

    1990-12-01

    A previous paper dealt with vibration transmission through junctions of connected frame members where the allowed frame deformations included bending, torsion, and longitudinal motions [J.A. Moore, J. Acoust. Soc. Am. 88, 2766-2776 (1990)]. In helicopter and aircraft structures the skin panels can constitute a high impedance connection along the length of the frames that effectively prohibits in-plane motion at the elevation of the skin panels. This has the effect of coupling in-plane bending and torsional motions within the frame. This paper discusses the transmission behavior through frame junctions that accounts for the in-plane constraint in idealized form by assuming that the attached skin panels completely prohibit in-plane motion in the frames. Also, transverse shear deformation is accounted for in describing the relatively deep web frame constructions common in aircraft structures. Longitudinal motion in the frames is not included in the model. Transmission coefficient predictions again show the importance of out-of-plane bending deformation to the transmission of vibratory energy in an aircraft structure. Comparisons are shown with measured vibration transmission data along the framing in the overhead of a helicopter airframe, with good agreement. The frame junction description has been implemented within a general purpose statistical energy analysis (SEA) computer code in modeling the entire airframe structure including skin panels.

  16. Motion tracking in the liver: Validation of a method based on 4D ultrasound using a nonrigid registration technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijayan, Sinara, E-mail: sinara.vijayan@ntnu.no; Klein, Stefan; Hofstad, Erlend Fagertun

    Purpose: Treatments like radiotherapy and focused ultrasound in the abdomen require accurate motion tracking, in order to optimize dosage delivery to the target and minimize damage to critical structures and healthy tissues around the target. 4D ultrasound is a promising modality for motion tracking during such treatments. In this study, the authors evaluate the accuracy of motion tracking in the liver based on deformable registration of 4D ultrasound images. Methods: The offline analysis was performed using a nonrigid registration algorithm that was specifically designed for motion estimation from dynamic imaging data. The method registers the entire 4D image data sequence in a groupwise optimization fashion, thus avoiding a bias toward a specifically chosen reference time point. Three healthy volunteers were scanned over several breathing cycles (12 s) from three different positions and angles on the abdomen; a total of nine 4D scans for the three volunteers. Well-defined anatomic landmarks were manually annotated in all 96 time frames for assessment of the automatic algorithm. The error of the automatic motion estimation method was compared with interobserver variability. The authors also performed experiments to investigate the influence of parameters defining the deformation field flexibility and evaluated how well the method performed with a lower temporal resolution in order to establish the minimum frame rate required for accurate motion estimation. Results: The registration method estimated liver motion with an error of 1 mm (75% percentile over all datasets), which was lower than the interobserver variability of 1.4 mm. The results were only slightly dependent on the degrees of freedom of the deformation model. The registration error increased to 2.8 mm with an eight times lower temporal resolution. Conclusions: The authors conclude that the methodology was able to accurately track the motion of the liver in the 4D ultrasound data. 
The authors believe that the method has potential in interventions on moving abdominal organs such as MR or ultrasound guided focused ultrasound therapy and radiotherapy, provided the method can be made to run in real time. The data and the annotations used for this study are made publicly available for those who would like to test other methods on 4D liver ultrasound data.

  17. Simultaneous two-view epipolar geometry estimation and motion segmentation by 4D tensor voting.

    PubMed

    Tong, Wai-Shun; Tang, Chi-Keung; Medioni, Gérard

    2004-09-01

    We address the problem of simultaneous two-view epipolar geometry estimation and motion segmentation from nonstatic scenes. Given a set of noisy image pairs containing matches of n objects, we propose an unconventional, efficient, and robust method, 4D tensor voting, for estimating the unknown n epipolar geometries, and segmenting the static and motion matching pairs into n independent motions. By considering the 4D isotropic and orthogonal joint image space, only two tensor voting passes are needed, and a very high noise to signal ratio (up to five) can be tolerated. Epipolar geometries corresponding to multiple, rigid motions are extracted in succession. Only two uncalibrated frames are needed, and no simplifying assumption (such as an affine camera model or a homographic model between images) other than the pin-hole camera model is made. Our novel approach consists of propagating a local geometric smoothness constraint in the 4D joint image space, followed by global consistency enforcement for extracting the fundamental matrices corresponding to independent motions. We have performed extensive experiments to compare our method with some representative algorithms and show that better performance on nonstatic scenes is achieved. Results on challenging data sets are presented.

  18. Myocardial strain estimation from CT: towards computer-aided diagnosis on infarction identification

    NASA Astrophysics Data System (ADS)

    Wong, Ken C. L.; Tee, Michael; Chen, Marcus; Bluemke, David A.; Summers, Ronald M.; Yao, Jianhua

    2015-03-01

    Regional myocardial strains have the potential for early quantification and detection of cardiac dysfunctions. Although image modalities such as tagged and strain-encoded MRI can provide motion information of the myocardium, they are uncommon in clinical routine. In contrast, cardiac CT images are usually available, but they only provide motion information at salient features such as the cardiac boundaries. To estimate myocardial strains from a CT image sequence, we adopted a cardiac biomechanical model with hyperelastic material properties to relate the motion on the cardiac boundaries to the myocardial deformation. The frame-to-frame displacements of the cardiac boundaries are obtained using B-spline deformable image registration based on mutual information, which are enforced as boundary conditions to the biomechanical model. The system equation is solved by the finite element method to provide the dense displacement field of the myocardium, and the regional values of the three principal strains and the six strains in cylindrical coordinates are computed in terms of the American Heart Association nomenclature. To study the potential of the estimated regional strains on identifying myocardial infarction, experiments were performed on cardiac CT image sequences of ten canines with artificially induced myocardial infarctions. The leave-one-subject-out cross validations show that, by using the optimal strain magnitude thresholds computed from ROC curves, the radial strain and the first principal strain have the best performance.
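Once the finite element solve yields a dense displacement field, principal strains follow from the local deformation gradient. A minimal sketch using the Green-Lagrange strain tensor, a standard finite-strain measure consistent with hyperelastic models (the paper's exact strain measure is not specified here):

```python
import numpy as np

def principal_strains(F):
    """Principal strains from a 3x3 deformation gradient F: eigenvalues of the
    Green-Lagrange strain tensor E = (F^T F - I) / 2, returned descending."""
    E = 0.5 * (F.T @ F - np.eye(3))
    return np.sort(np.linalg.eigvalsh(E))[::-1]
```

For a pure stretch F = diag(1.1, 1.0, 0.9), for instance, the principal strains are (1.1² − 1)/2 = 0.105, 0, and (0.9² − 1)/2 = −0.095.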

  19. Fourier-based integration of quasi-periodic gait accelerations for drift-free displacement estimation using inertial sensors.

    PubMed

    Sabatini, Angelo Maria; Ligorio, Gabriele; Mannini, Andrea

    2015-11-23

    In biomechanical studies Optical Motion Capture Systems (OMCS) are considered the gold standard for determining the orientation and the position (pose) of an object in a global reference frame. However, the use of OMCS can be difficult, which has prompted research on alternative sensing technologies, such as body-worn inertial sensors. We developed a drift-free method to estimate the three-dimensional (3D) displacement of a body part during cyclical motions using body-worn inertial sensors. We performed the Fourier analysis of the stride-by-stride estimates of the linear acceleration, which were obtained by transposing the specific forces measured by the tri-axial accelerometer into the global frame using a quaternion-based orientation estimation algorithm and detecting when each stride began using a gait-segmentation algorithm. The time integration was performed analytically using the Fourier series coefficients; the inverse Fourier series was then taken for reconstructing the displacement over each single stride. The displacement traces were concatenated and spline-interpolated to obtain the entire trace. The method was applied to estimate the motion of the lower trunk of healthy subjects that walked on a treadmill and it was validated using OMCS reference 3D displacement data; different approaches were tested for transposing the measured specific force into the global frame, segmenting the gait and performing time integration (numerically and analytically). The widths of the limits of agreement were computed between each tested method and the OMCS reference method for each anatomical direction: Medio-Lateral (ML), VerTical (VT) and Antero-Posterior (AP); using the proposed method, it was observed that the vertical component of displacement (VT) was within ±4 mm (±1.96 standard deviation) of OMCS data and each component of horizontal displacement (ML and AP) was within ±9 mm of OMCS data. 
Fourier harmonic analysis was applied to model stride-by-stride linear accelerations during walking and to perform their analytical integration. Our results showed that analytical integration based on Fourier series coefficients was a useful approach to accurately estimate 3D displacement from noisy acceleration data.
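The analytical integration can be sketched for a uniformly sampled, zero-mean periodic acceleration: each harmonic a_k·e^{iω_k t} double-integrates to −a_k/ω_k²·e^{iω_k t}, which avoids the drift that accumulates in numerical double integration. The following is an FFT-based illustration of that idea, not the authors' stride-segmented pipeline:

```python
import numpy as np

def fourier_double_integrate(acc, fs):
    """Analytically double-integrate a zero-mean periodic acceleration signal:
    divide each Fourier harmonic by (i*w_k)^2 = -w_k^2, then invert the FFT."""
    n = len(acc)
    A = np.fft.rfft(acc)
    w = 2 * np.pi * np.fft.rfftfreq(n, d=1.0 / fs)
    D = np.zeros_like(A)
    nz = w > 0                      # skip DC: mean displacement is unobservable
    D[nz] = -A[nz] / w[nz] ** 2     # double integration per harmonic
    return np.fft.irfft(D, n)
```

Because the integration constant per harmonic is exact, a pure sinusoidal acceleration integrates back to its sinusoidal displacement with no drift term.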

  20. Head motion evaluation and correction for PET scans with 18F-FDG in the Japanese Alzheimer's disease neuroimaging initiative (J-ADNI) multi-center study.

    PubMed

    Ikari, Yasuhiko; Nishio, Tomoyuki; Makishi, Yoko; Miya, Yukari; Ito, Kengo; Koeppe, Robert A; Senda, Michio

    2012-08-01

    Head motion during 30-min (six 5-min frames) brain PET scans starting 30 min post-injection of FDG was evaluated, together with the effect of post hoc motion correction between frames, in the J-ADNI multicenter study carried out in 24 PET centers on a total of 172 subjects consisting of 81 normal subjects, 55 with mild cognitive impairment (MCI) and 36 with mild Alzheimer's disease (AD). Based on the magnitude of the between-frame co-registration parameters, the scans were classified into six levels (A-F) of motion degree. The effect of motion and its correction was evaluated using the between-frame variation of the regional FDG uptake values on ROIs placed over cerebral cortical areas. Although AD patients tended to present larger motion (motion level E or F in 22 % of the subjects) than MCI (3 %) and normal (4 %) subjects, non-negligible motion was observed in a small number of subjects in the latter groups as well. The between-frame coefficient of variation (SD/mean) was 0.5 % in the frontal, 0.6 % in the parietal and 1.8 % in the posterior cingulate ROI for the scans of motion level A. The respective values were 1.5, 1.4, and 3.6 % for the scans of motion level F, but were reduced by the motion correction to 0.5, 0.4 and 0.8 %, respectively. The motion correction changed the ROI value for the posterior cingulate cortex by 11.6 % in the case of the severest motion. Substantial head motion occurs in a fraction of subjects in a multicenter setup which includes PET centers lacking sufficient experience in imaging demented patients. A simple frame-by-frame co-registration technique that can be applied to any PET camera model is effective in correcting for motion and improving quantitative capability.
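The evaluation metric used here, the between-frame coefficient of variation of an ROI value, is simply SD/mean across the dynamic frames. A small sketch with hypothetical uptake values (we use the sample SD; the numbers are illustrative, not from the study):

```python
import numpy as np

def between_frame_cov(roi_uptake):
    """Between-frame coefficient of variation (SD/mean) of an ROI value
    measured on each dynamic frame, reported in percent."""
    u = np.asarray(roi_uptake, dtype=float)
    return 100.0 * u.std(ddof=1) / u.mean()

# Hypothetical frontal-ROI uptake values for six 5-min frames
frames = [1.02, 1.01, 1.00, 0.99, 1.01, 0.97]
cov = between_frame_cov(frames)
```

A stable scan yields a CoV well under 1 %, matching the magnitudes reported for low-motion scans.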

  1. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    PubMed Central

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-01-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading to up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378
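As a much-reduced illustration of the reconstruction core, the sketch below runs a plain MLEM iteration on a toy system matrix. The actual SMEIR pipeline adds ordered subsets, the TV penalty, and DVF-based deformation of the image between phases; none of that is modeled here:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Plain MLEM update: x <- x / (A^T 1) * A^T (y / (A x)).
    OSEM-TV as used in SMEIR adds ordered subsets, a total-variation
    penalty, and motion-deformation of the image; this is the EM core only."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens
    return x

# Toy 2-pixel, 3-ray system with noiseless, consistent data:
# MLEM should converge to the true activity.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x = mlem(A, y)
```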

  2. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    NASA Astrophysics Data System (ADS)

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading to up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.

  3. A kind of graded sub-pixel motion estimation algorithm combining time-domain characteristics with frequency-domain phase correlation

    NASA Astrophysics Data System (ADS)

    Xie, Bing; Duan, Zhemin; Chen, Yu

    2017-11-01

    Navigation based on scene matching can assist a UAV in achieving autonomous navigation and other missions. However, aerial multi-frame images captured by a UAV in a complex flight environment are easily affected by jitter, noise and exposure variation, which leads to image blur, deformation and other issues, and reduces the detection rate of targets in the regions of interest. Aiming at this problem, we propose a graded sub-pixel motion estimation algorithm combining time-domain characteristics with frequency-domain phase correlation. Experimental results prove the validity and accuracy of the proposed algorithm.
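The frequency-domain stage of such an approach is classical phase correlation. A minimal integer-pixel sketch follows (the sub-pixel "graded" refinement and the time-domain stage are not modeled; names are ours):

```python
import numpy as np

def phase_correlation(f, g):
    """Estimate the integer-pixel translation of g relative to f from the
    peak of the normalized cross-power spectrum."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    R = np.conj(F) * G
    R /= np.abs(R) + 1e-12                       # keep phase information only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map the peak location to a signed shift
    if dy > f.shape[0] // 2:
        dy -= f.shape[0]
    if dx > f.shape[1] // 2:
        dx -= f.shape[1]
    return dy, dx

# g is f circularly shifted by (3, -5); phase correlation recovers the shift
rng = np.random.default_rng(0)
f = rng.random((64, 64))
g = np.roll(f, (3, -5), axis=(0, 1))
shift = phase_correlation(f, g)
```

Sub-pixel accuracy is typically obtained afterwards by interpolating around the correlation peak.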

  4. Discriminability limits in spatio-temporal stereo block matching.

    PubMed

    Jain, Ankit K; Nguyen, Truong Q

    2014-05-01

    Disparity estimation is a fundamental task in stereo imaging and is a well-studied problem. Recently, methods have been adapted to the video domain where motion is used as a matching criterion to help disambiguate spatially similar candidates. In this paper, we analyze the validity of the underlying assumptions of spatio-temporal disparity estimation, and determine the extent to which motion aids the matching process. By analyzing the error signal for spatio-temporal block matching under the sum of squared differences criterion and treating motion as a stochastic process, we determine the probability of a false match as a function of image features, motion distribution, image noise, and number of frames in the spatio-temporal patch. This performance quantification provides insight into when spatio-temporal matching is most beneficial in terms of the scene and motion, and can be used as a guide to select parameters for stereo matching algorithms. We validate our results through simulation and experiments on stereo video.
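The spatio-temporal matching cost analyzed in the paper can be sketched in a reduced form: the SSD is summed over both the spatial block and the temporal stack of frames, so temporally consistent candidates are favored. All names and sizes below are illustrative:

```python
import numpy as np

def best_disparity(left, right, y, x, block=5, max_d=16):
    """Pick the disparity minimizing the sum of squared differences (SSD)
    over a spatio-temporal block: left/right are (T, H, W) frame stacks,
    and the cost sums over the spatial block and all T frames."""
    h = block // 2
    ref = left[:, y - h:y + h + 1, x - h:x + h + 1]
    costs = []
    for d in range(max_d + 1):
        cand = right[:, y - h:y + h + 1, x - d - h:x - d + h + 1]
        costs.append(((ref - cand) ** 2).sum())
    return int(np.argmin(costs))

# Synthetic stereo stack: a point at x in the left view appears at x-4 in the right
rng = np.random.default_rng(1)
left = rng.random((3, 32, 32))
right = np.roll(left, -4, axis=2)
d = best_disparity(left, right, y=16, x=20)
```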

  5. WE-G-BRD-08: Motion Analysis for Rectal Cancer: Implications for Adaptive Radiotherapy On the MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J; Asselen, B van; Burbach, M

    2015-06-15

    Purpose: The purpose of this study is to find the optimal trade-off between adaptation interval and margin reduction and to define the implications of motion for rectal cancer boost radiotherapy on an MR-linac. Methods: Daily MRI scans were acquired of 16 patients, diagnosed with rectal cancer, prior to each radiotherapy fraction in one week (N=76). Each scan session consisted of T2-weighted and three 2D sagittal cine-MRI, at the begin (t=0 min), middle (t=9:30 min) and end (t=18:00 min) of the scan session, for 1 minute at 2 Hz temporal resolution. Tumor and clinical target volume (CTV) were delineated on each T2-weighted scan and transferred to each cine-MRI. The start frame of the begin scan was used as reference and registered to frames at time-points 15, 30 and 60 seconds, 9:30 and 18:00 minutes and 1, 2, 3 and 4 days later. Per time-point, motion of delineated voxels was evaluated using the deformation vector fields of the registrations and the 95th percentile distance (dist95%) was calculated as a measure of motion. Per time-point, the distance that includes 90% of all cases was taken as an estimate of the required planning target volume (PTV) margin. Results: The highest motion reduction is observed going from 9:30 minutes to 60 seconds. We observe a reduction in margin estimates from 10.6 to 2.7 mm and 16.1 to 4.6 mm for tumor and CTV, respectively, when adapting every 60 seconds compared to not adapting treatment: a 75% and 71% reduction, respectively. Further reduction in adaptation time-interval yields only marginal motion reduction. For adaptation intervals longer than 18:00 minutes, only small motion reductions are observed. Conclusion: The optimal adaptation interval for adaptive rectal cancer (boost) treatments on an MR-linac is 60 seconds. This results in substantially smaller PTV-margin estimates. Adaptation intervals of 18:00 minutes and longer show little improvement in motion reduction.
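The dist95% measure is computed directly from the deformation vector fields of the registration. A small sketch with a hypothetical DVF sample (the mixture of small and large displacements is made up for illustration):

```python
import numpy as np

def dist95(dvf):
    """95th percentile of per-voxel displacement magnitudes; dvf is an
    (N, 3) array of deformation vectors for the delineated voxels."""
    mag = np.linalg.norm(dvf, axis=1)
    return np.percentile(mag, 95)

# Hypothetical DVF (in mm): most voxels move little, a small tail moves more
rng = np.random.default_rng(2)
dvf = np.concatenate([rng.normal(0, 1.0, (950, 3)),
                      rng.normal(0, 4.0, (50, 3))])
d95 = dist95(dvf)
```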

  6. Impact of seasonal and postglacial surface displacement on global reference frames

    NASA Astrophysics Data System (ADS)

    Krásná, Hana; Böhm, Johannes; King, Matt; Memin, Anthony; Shabala, Stanislav; Watson, Christopher

    2014-05-01

    The calculation of actual station positions requires several corrections which are partly recommended by the International Earth Rotation and Reference Systems Service (IERS) Conventions (e.g., solid Earth tides and ocean tidal loading) as well as other corrections, e.g. accounting for hydrology and atmospheric loading. To investigate the pattern of omitted non-linear seasonal motion we estimated empirical harmonic models for selected stations within a global solution of suitable Very Long Baseline Interferometry (VLBI) sessions as well as mean annual models by stacking yearly time series of station positions. To validate these models we compare them to displacement series obtained from the Gravity Recovery and Climate Experiment (GRACE) data and to hydrology corrections determined from global models. Furthermore, we assess the impact of the seasonal station motions on the celestial reference frame as well as on Earth orientation parameters derived from real and also artificial VLBI observations. In the second part of the presentation we apply vertical rates of the ICE-5G_VM2_2012 vertical land movement grid to vertical station velocities. We assess the impact of postglacial uplift on the variability in the scale given different sampling of the postglacial signal in time and hence on the uncertainty in the scale rate of the estimated terrestrial reference frame.
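Estimating an empirical harmonic model for a station coordinate series amounts to a least-squares fit of annual and semi-annual terms plus an offset and a rate. A minimal sketch with synthetic data (the series and parameters are invented, not from the study):

```python
import numpy as np

def fit_seasonal(t, pos):
    """Least-squares fit of offset + trend + annual + semi-annual harmonics
    to a station-coordinate time series; t is in years."""
    w = 2 * np.pi
    A = np.column_stack([np.ones_like(t), t,
                         np.cos(w * t), np.sin(w * t),
                         np.cos(2 * w * t), np.sin(2 * w * t)])
    coef, *_ = np.linalg.lstsq(A, pos, rcond=None)
    return coef   # [offset, rate, cos_annual, sin_annual, cos_semi, sin_semi]

# Synthetic vertical series: 1.5 mm/yr uplift plus a 3 mm annual cycle
t = np.linspace(0, 10, 500)
pos = 1.5 * t + 3.0 * np.cos(2 * np.pi * t)
coef = fit_seasonal(t, pos)
```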

  7. The role of spatial memory and frames of reference in the precision of angular path integration.

    PubMed

    Arthur, Joeanna C; Philbeck, John W; Kleene, Nicholas J; Chichka, David

    2012-09-01

    Angular path integration refers to the ability to maintain an estimate of self-location after a rotational displacement by integrating internally-generated (idiothetic) self-motion signals over time. Previous work has found that non-sensory inputs, namely spatial memory, can play a powerful role in angular path integration (Arthur et al., 2007, 2009). Here we investigated the conditions under which spatial memory facilitates angular path integration. We hypothesized that the benefit of spatial memory is particularly likely in spatial updating tasks in which one's self-location estimate is referenced to external space. To test this idea, we administered passive, non-visual body rotations (ranging 40°-140°) about the yaw axis and asked participants to use verbal reports or open-loop manual pointing to indicate the magnitude of the rotation. Prior to some trials, previews of the surrounding environment were given. We found that when participants adopted an egocentric frame of reference, the previously-observed benefit of previews on within-subject response precision was not manifested, regardless of whether remembered spatial frameworks were derived from vision or spatial language. We conclude that the powerful effect of spatial memory is dependent on one's frame of reference during self-motion updating. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    NASA Astrophysics Data System (ADS)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

    This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.
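The fusion step, a weighted combination of the pose parameters produced by the two trackers, can be illustrated with a simple inverse-variance weighting. This is a sketch of the general idea only; the paper's iterative weighting scheme, and all numbers and sigmas below, are not taken from it:

```python
import numpy as np

def fuse_poses(p1, s1, p2, s2):
    """Inverse-variance weighted combination of two pose-parameter vectors
    (e.g., one from a contour tracker, one from an ICP-style tracker);
    s1, s2 are per-parameter standard deviations."""
    w1, w2 = 1.0 / np.square(s1), 1.0 / np.square(s2)
    return (w1 * p1 + w2 * p2) / (w1 + w2)

# Hypothetical 6-DOF pose vectors [x, y, z, roll, pitch, yaw]
p_a = np.array([0.10, 0.20, 0.90, 0.01, 0.02, 0.00])
p_b = np.array([0.12, 0.18, 0.80, 0.03, 0.00, 0.02])
sigma_a = np.full(6, 0.01)    # the more reliable estimate
sigma_b = np.full(6, 0.03)
pose = fuse_poses(p_a, sigma_a, p_b, sigma_b)
```

The fused pose is pulled toward the lower-variance estimate, which is the point of weighting the combination.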

  9. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging.

    PubMed

    Hunter, Chad R R N; Klein, Ran; Beanlands, Rob S; deKemp, Robert A

    2016-04-01

    Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET-CT misalignment. A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. 
Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET-CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.

  10. Tracking Gravity Probe B gyroscope polhode motion

    NASA Technical Reports Server (NTRS)

    Keiser, George M.; Parkinson, Bradford W.; Cohen, Clark E.

    1990-01-01

    The superconducting Gravity Probe B spacecraft is being developed to measure two untested predictions of Einstein's theory of general relativity by using orbiting gyroscopes; each gyroscope rotor possesses an intrinsic magnetic field which rotates with the rotor and is fixed with respect to the rotor body frame. In this paper, the path of the rotor spin axes is tracked using this trapped magnetic flux as a reference. Both the rotor motion and the magnetic field shape are estimated simultaneously, employing the higher order components of the magnetic field shape.

  11. Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.

    PubMed

    Quesada, Luis; León, Alejandro J

    2012-10-01

    Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three degrees of freedom motion tracking system that needs no knowledge of the target object and that only requires a single low-budget camera of the kind installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, given that it is opaque, evenly colored, sufficiently contrasting with the background in each frame, and that it does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal does not impose additional constraints, and therefore allows a market-wide implementation of applications that require the estimation of the three position degrees of freedom of an object.

  12. In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope.

    PubMed

    Vienola, Kari V; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A; de Boer, Johannes F

    2018-02-01

    Retinal motion detection with an accuracy of 0.77 arcmin, corresponding to 3.7 µm on the retina, is demonstrated with a novel digital micromirror device (DMD)-based ophthalmoscope. By generating a confocal image as a reference, eye motion could be measured from consecutively measured subsampled frames. The subsampled frames provide 7.7-millisecond snapshots of the retina without motion artifacts between the image points of the subsampled frame, distributed over the full field of view. An ophthalmoscope pattern projection speed of 130 Hz enabled a motion detection bandwidth of 65 Hz. A model eye with a scanning mirror was built to test the performance of the motion detection algorithm. Furthermore, an in vivo motion trace was obtained from a healthy volunteer. The obtained eye motion trace clearly shows the three main types of fixational eye movements. Lastly, the obtained eye motion trace was used to correct for the eye motion in consecutively obtained subsampled frames to produce an averaged confocal image corrected for motion artefacts.

  13. In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope

    PubMed Central

    Vienola, Kari V.; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A.; de Boer, Johannes F.

    2018-01-01

    Retinal motion detection with an accuracy of 0.77 arcmin, corresponding to 3.7 µm on the retina, is demonstrated with a novel digital micromirror device (DMD)-based ophthalmoscope. By generating a confocal image as a reference, eye motion could be measured from consecutively measured subsampled frames. The subsampled frames provide 7.7-millisecond snapshots of the retina without motion artifacts between the image points of the subsampled frame, distributed over the full field of view. An ophthalmoscope pattern projection speed of 130 Hz enabled a motion detection bandwidth of 65 Hz. A model eye with a scanning mirror was built to test the performance of the motion detection algorithm. Furthermore, an in vivo motion trace was obtained from a healthy volunteer. The obtained eye motion trace clearly shows the three main types of fixational eye movements. Lastly, the obtained eye motion trace was used to correct for the eye motion in consecutively obtained subsampled frames to produce an averaged confocal image corrected for motion artefacts. PMID:29552396

  14. A method of immediate detection of objects with a near-zero apparent motion in series of CCD-frames

    NASA Astrophysics Data System (ADS)

    Savanevych, V. E.; Khlamov, S. V.; Vavilova, I. B.; Briukhovetskyi, A. B.; Pohorelov, A. V.; Mkrtichian, D. E.; Kudak, V. I.; Pakuliak, L. K.; Dikov, E. N.; Melnik, R. G.; Vlasenko, V. P.; Reichart, D. E.

    2018-01-01

    The paper deals with a computational method for the detection of solar system objects (SSOs) whose inter-frame shifts in series of CCD-frames during the observation are commensurate with the errors in measuring their positions. These objects have velocities of apparent motion between CCD-frames not exceeding three rms errors (3σ) of measurements of their positions. About 15% of objects have a near-zero apparent motion in CCD-frames, including objects beyond Jupiter's orbit as well as asteroids heading straight for the Earth. The proposed method for detection of an object's near-zero apparent motion in series of CCD-frames is based on the Fisher f-criterion instead of the traditional decision rules based on the maximum likelihood criterion. We analyzed the quality indicators of detection of near-zero apparent motion applying statistical and in situ modeling techniques, in terms of the conditional probability of true detection of objects with a near-zero apparent motion. The efficiency of the method, implemented as a plugin for the Collection Light Technology (CoLiTec) software for automated asteroid and comet detection, has been demonstrated. Among the objects discovered with this plugin was the sungrazing comet C/2012 S1 (ISON). Within 26 min of observation, the comet's image moved by three pixels in a series of four CCD-frames (the velocity of its apparent motion at the moment of discovery was 0.8 pixels per CCD-frame; the image size on the frame was about five pixels). Subsequent verification on observations of asteroids with a near-zero apparent motion conducted with small telescopes has confirmed the efficiency of the method even in bad conditions (strong backlight from the full Moon). We therefore recommend applying the proposed method to series of observations with four or more frames.
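The idea of testing near-zero apparent motion against measurement error can be illustrated with a textbook F-test comparing a static-position model to a uniform-motion model. This is a simplified stand-in for the paper's Fisher f-criterion, and the frame times and positions below are invented:

```python
import numpy as np

def motion_f_statistic(t, x):
    """F-statistic comparing a constant-position model against a linear
    (uniform-motion) model for measured positions x at frame times t;
    a large value favors real apparent motion over measurement noise."""
    n = len(x)
    rss0 = np.sum((x - x.mean()) ** 2)              # constant-position model
    A = np.column_stack([np.ones(n), t])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    rss1 = np.sum((x - A @ coef) ** 2)              # uniform-motion model
    return (rss0 - rss1) / (rss1 / (n - 2))         # F with (1, n-2) dof

# Four frames: a 0.8 px/frame drift versus pure 0.1 px-level measurement noise
t = np.array([0.0, 1.0, 2.0, 3.0])
noise = np.array([0.05, -0.03, 0.02, -0.04])
f_move = motion_f_statistic(t, 0.8 * t + noise)
f_stat = motion_f_statistic(t, noise)
```

The moving object yields a large F value while the static one stays near 1, which is the basis for the detection decision.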

  15. Architecture design of motion estimation for ITU-T H.263

    NASA Astrophysics Data System (ADS)

    Ku, Chung-Wei; Lin, Gong-Sheng; Chen, Liang-Gee; Lee, Yung-Ping

    1997-01-01

    Digitized video and audio systems have become the trend in multimedia, because they provide great quality and processing flexibility. However, since a huge amount of information is involved while the bandwidth is limited, data compression plays an important role in such systems. For example, a 176 x 144 monochrome sequence at a 10 frames/sec frame rate requires a bandwidth of about 2 Mbps, which wastes channel resources and limits the applications. MPEG (Moving Picture Experts Group) standardized the video codec scheme, achieving a high compression ratio while providing good quality. MPEG-1 is used for a frame size of about 352 x 240 at 30 frames per second, and MPEG-2 provides scalability and can be applied to scenes with higher definition, such as HDTV (high-definition television). On the other hand, some applications are concerned with very low bit rates, such as videophone and video-conferencing. Because the channel bandwidth is very limited in the telephone network, a very high compression ratio is required. ITU-T announced the H.263 video coding standard to meet these requirements [8]. According to the simulation results of TMN-5 [22], it outperforms H.261 with little overhead in complexity. Since wireless communication is the trend of the near future, low-power design of the video codec is an important issue for portable visual telephones. Motion estimation is the most computation-consuming part of the whole video codec: about 60% of the encoder's computation is spent on it. Several architectures have been proposed for efficient processing of block-matching algorithms. In this paper, in order to meet the requirements of H.263 and the expectation of low power consumption, a modified sandwich architecture based on [21] is proposed. Based on a parallel-processing philosophy, low power is expected, and the generation of either one motion vector or four motion vectors with half-pixel accuracy is achieved concurrently. In addition, we present our solution for handling the other additional modes of H.263 with the proposed architecture.

  16. Video Analysis of Rolling Cylinders

    ERIC Educational Resources Information Center

    Phommarach, S.; Wattanakasiwich, P.; Johnston, I.

    2012-01-01

    In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s[superscript -1], and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined…

  17. GENERAL RELATIVITY DERIVATION OF BEAM REST-FRAME HAMILTONIAN.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WEI,J.

    2001-06-18

    Analysis of particle interaction in the laboratory frame of storage rings is often complicated by the fact that particle motion is relativistic and that the reference particle trajectory is curved. The rest frame of the reference particle is a convenient coordinate system to work with, within which particle motion is non-relativistic. We have derived the equations of motion in the beam rest frame from the general relativity formalism, and have successfully applied them to the analysis of crystalline beams [1].

  18. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle and robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then performed to remove the outliers caused by the mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.

  19. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    PubMed Central

    Ci, Wenyan; Huang, Yingping

    2016-01-01

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle and robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. Circle matching is then performed to remove the outliers caused by the mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508
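RANSAC-based outlier rejection of the kind used in such pipelines can be sketched in a reduced setting, fitting a pure 2D translation between matched feature sets instead of the full 6-DoF motion model (all data below are synthetic):

```python
import numpy as np

def ransac_translation(p, q, n_iter=200, thresh=1.0, rng=None):
    """RANSAC for a 2D translation between matched point sets p -> q:
    hypothesize a model from one correspondence, count inliers within
    `thresh`, keep the best model, then refine on the consensus set.
    Real stereo visual odometry fits a 6-DoF motion model instead."""
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(p), bool)
    for _ in range(n_iter):
        i = rng.integers(len(p))
        t = q[i] - p[i]                               # minimal-sample hypothesis
        inliers = np.linalg.norm(q - (p + t), axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (q[best_inliers] - p[best_inliers]).mean(axis=0)   # refine on inliers
    return t, best_inliers

# 80 inlier matches shifted by (5, -2) plus 20 gross mismatches
rng = np.random.default_rng(3)
p = rng.random((100, 2)) * 100
q = p + np.array([5.0, -2.0]) + rng.normal(0, 0.2, (100, 2))
q[80:] += rng.uniform(20, 60, (20, 2))                # corrupt the last 20 matches
t, inliers = ransac_translation(p, q, rng=rng)
```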

  20. A Robust Method for Stereo Visual Odometry Based on Multiple Euclidean Distance Constraint and RANSAC Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation and largely influences the precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on the Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images and the Brute Force (BF) matcher is used to find the correspondences between the two images for space intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image at the next epoch is matched against the current left image, the EDC and RANSAC are iteratively performed. Even after these steps, some mismatched points occasionally remain, so RANSAC is applied a third time to eliminate the effects of those outliers in the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
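
The EDC step can be illustrated with a toy sketch (one reading of the constraint, not the authors' implementation): rigid motion preserves the pairwise distances between scene points, so a correspondence whose distances to the other matched points change between frames is flagged as a mismatch. The function name `edc_filter`, the voting rule, and the tolerance are invented for the example.

```python
import math

def edc_filter(pts_a, pts_b, tol=0.5):
    """Flag correspondences violating the Euclidean Distance Constraint.

    Under rigid motion, |d(a_i, a_j) - d(b_i, b_j)| stays small for
    inlier pairs. Each point is scored by how many pairwise distances it
    preserves; points below a majority of possible votes are rejected.
    """
    n = len(pts_a)
    votes = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if abs(math.dist(pts_a[i], pts_a[j]) - math.dist(pts_b[i], pts_b[j])) < tol:
                votes[i] += 1
                votes[j] += 1
    # keep points that preserve distance with at least half the others
    return [k for k in range(n) if votes[k] >= (n - 1) / 2]
```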

  1. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix, which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
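
The estimator choice mentioned above (Kalman filter vs. batch WLS) can be illustrated with the smallest possible Kalman filter: a scalar, constant-state filter that refines one a priori attitude angle from per-frame registration measurements. This is a generic textbook sketch with invented noise variances, not the registration system's actual multi-state filter.

```python
def kalman_1d(z_seq, x0, p0, q, r):
    """Minimal scalar Kalman filter with a constant-state model.

    x0/p0 are the a priori estimate and its variance, q the process
    noise variance, r the measurement noise variance; each z in z_seq
    is one per-frame measurement of the same quantity.
    """
    x, p = x0, p0
    for z in z_seq:
        p = p + q               # predict (state assumed constant)
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update with the measurement residual
        p = (1 - k) * p         # a posteriori variance
    return x, p
```

A batch WLS solution over the same measurements would give a comparable estimate; the filter's advantage is that it runs in near real time, frame by frame.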

  2. Groupwise registration of cardiac perfusion MRI sequences using normalized mutual information in high dimension

    NASA Astrophysics Data System (ADS)

    Hamrouni, Sameh; Rougon, Nicolas; Prêteux, Françoise

    2011-03-01

    In perfusion MRI (p-MRI) exams, short-axis (SA) image sequences are captured at multiple slice levels along the long-axis of the heart during the transit of a vascular contrast agent (Gd-DTPA) through the cardiac chambers and muscle. Compensating cardio-thoracic motions is a requirement for enabling computer-aided quantitative assessment of myocardial ischaemia from contrast-enhanced p-MRI sequences. The classical paradigm consists of registering each sequence frame on a reference image using some intensity-based matching criterion. In this paper, we introduce a novel unsupervised method for the spatio-temporal groupwise registration of cardiac p-MRI exams based on normalized mutual information (NMI) between high-dimensional feature distributions. Here, local contrast enhancement curves are used as a dense set of spatio-temporal features, and statistically matched through variational optimization to a target feature distribution derived from a registered reference template. The hard issue of probability density estimation in high-dimensional state spaces is bypassed by using consistent geometric entropy estimators, allowing NMI to be computed directly from feature samples. Specifically, a computationally efficient k-th nearest neighbor (kNN) estimation framework is retained, leading to closed-form expressions for the gradient flow of NMI over finite- and infinite-dimensional motion spaces. This approach is applied to the groupwise alignment of cardiac p-MRI exams using a Free-Form Deformation (FFD) model for cardio-thoracic motions. Experiments on simulated and natural datasets demonstrate its accuracy and robustness for registering p-MRI exams comprising more than 30 frames.

  3. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. 
Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
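
The core trade-off, noise falling with frame averaging while a moving target smears, can be reproduced with a toy 1-D simulation. All numbers here (array size, target amplitude, noise level) are invented for illustration.

```python
import random

def average_frames(frames):
    """Pixel-wise average of equal-length 1-D frames, as in cine EPID
    frame averaging: background noise drops roughly as 1/sqrt(N), but a
    target that moves between frames is smeared along its path and its
    peak intensity is diluted."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

rng = random.Random(1)
frames = []
for t in range(4):
    frame = [rng.gauss(0.0, 1.0) for _ in range(16)]  # background noise
    frame[4 + t] += 10.0                              # target moving 1 px/frame
    frames.append(frame)

averaged = average_frames(frames)
```

In any single frame the target stands out at close to full amplitude; in the 4-frame average its peak is roughly quartered and spread over four pixels, which is the blurring that degrades autotracking at low effective frame rates.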

  4. Needle detection in ultrasound using the spectral properties of the displacement field: a feasibility study

    NASA Astrophysics Data System (ADS)

    Beigi, Parmida; Salcudean, Tim; Rohling, Robert; Lessoway, Victoria A.; Ng, Gary C.

    2015-03-01

    This paper presents a new needle detection technique for ultrasound guided interventions based on the spectral properties of small displacements arising from hand tremour or intentional motion. In a block-based approach, the displacement map is computed for each block of interest versus a reference frame, using an optical flow technique. To compute the flow parameters, the Lucas-Kanade approach is used in a multiresolution and regularized form. A least-squares fit is used to estimate the flow parameters from the overdetermined system of spatial and temporal gradients. Lateral and axial components of the displacement are obtained for each block of interest at consecutive frames. Magnitude-squared spectral coherency is derived between the median displacements of the reference block and each block of interest, to determine the spectral correlation. In vivo images were obtained from the tissue near the abdominal aorta to capture extreme intrinsic body motion, and insertion images were captured from a tissue-mimicking agar phantom. According to the analysis, both the involuntary and intentional movement of the needle produces coherent displacement with respect to a reference window near the insertion site. Intrinsic body motion also produces coherent displacement with respect to a reference window in the tissue; however, the coherency spectra of intrinsic and needle motion are spectrally distinguishable. Blocks with high spectral coherency at high frequencies are selected, yielding an initial estimate of a channel for the needle trajectory. The needle trajectory is then detected from the locally thresholded absolute displacement map within the initial estimate. Experimental results show RMS localization accuracies of 1.0 mm, 0.7 mm, and 0.5 mm for hand tremour, vibrational, and rotational needle movements, respectively.
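
The least-squares flow step described above can be written down directly from its normal equations. This is a generic single-block Lucas-Kanade solve (without the paper's multiresolution pyramid or regularization), and the gradient lists are invented inputs.

```python
def lk_flow(ix, iy, it):
    """Solve the Lucas-Kanade normal equations for one block.

    ix, iy, it hold the spatial and temporal intensity gradients of the
    pixels in the block; the overdetermined system [ix iy].(u, v) = -it
    is solved in least squares via the 2x2 normal equations.
    """
    sxx = sum(a * a for a in ix)
    sxy = sum(a * b for a, b in zip(ix, iy))
    syy = sum(b * b for b in iy)
    sxt = sum(a * c for a, c in zip(ix, it))
    syt = sum(b * c for b, c in zip(iy, it))
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        raise ValueError("gradient matrix is singular (aperture problem)")
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v
```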

  5. Hierarchical motion organization in random dot configurations

    NASA Technical Reports Server (NTRS)

    Bertamini, M.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)

    2000-01-01

    Motion organization has two aspects: the extraction of a (moving) frame of reference and the hierarchical organization of moving elements within the reference frame. Using a relative-motion discrimination task, the authors found large differences between types of motion (translation, divergence, and rotation) in the degree to which each can serve as a moving frame of reference. Translation and divergence are superior to rotation. There are, however, situations in which rotation can serve as a reference frame. This is due to the presence of a second factor, structural invariants (SIs). SIs are spatial relationships persisting among the elements within a configuration, such as collinearity among points or one point coinciding with the center of rotation for another (invariant radius). The combined effect of these two factors, motion type and SIs, influences perceptual motion organization.

  6. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology

    NASA Astrophysics Data System (ADS)

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-01

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks by combining a respiratory motion correction approach with temporal regularization in a single reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung regions, respectively, for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). 
Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.

  7. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology.

    PubMed

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-13

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks by combining a respiratory motion correction approach with temporal regularization in a single reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung regions, respectively, for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). 
Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.
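
The temporal-basis idea in the two records above can be shown with a minimal fit: a voxel time-activity curve expressed as a linear combination of temporal basis functions. In the paper the bases themselves are re-estimated at each reconstruction iteration; in this sketch they are fixed, and `fit_tac`, the bases, and the curve are all invented for illustration.

```python
def fit_tac(tac, bases):
    """Least-squares coefficients of a time-activity curve on fixed
    temporal basis functions, via the normal equations and Gaussian
    elimination (pure Python, no pivoting: fine for this tiny demo)."""
    m, nt = len(bases), len(tac)
    a = [[sum(bases[i][t] * bases[j][t] for t in range(nt)) for j in range(m)]
         for i in range(m)]
    b = [sum(bases[i][t] * tac[t] for t in range(nt)) for i in range(m)]
    for col in range(m):                     # forward elimination
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m                       # back substitution
    for r in reversed(range(m)):
        coeffs[r] = (b[r] - sum(a[r][c] * coeffs[c]
                                for c in range(r + 1, m))) / a[r][r]
    return coeffs
```

Regularizing the 4-D reconstruction then amounts to storing a few coefficients per voxel instead of an independent value per frame, which is where the noise reduction comes from.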

  8. Optical Flow Estimation for Flame Detection in Videos

    PubMed Central

    Mueller, Martin; Karasev, Peter; Kolesov, Ivan; Tannenbaum, Allen

    2014-01-01

    Computational vision-based flame detection has drawn significant attention in the past decade with camera surveillance systems becoming ubiquitous. Whereas many discriminating features, such as color, shape, texture, etc., have been employed in the literature, this paper proposes a set of motion features based on motion estimators. The key idea consists of exploiting the difference between the turbulent, fast, fire motion, and the structured, rigid motion of other objects. Since classical optical flow methods do not model the characteristics of fire motion (e.g., non-smoothness of motion, non-constancy of intensity), two optical flow methods are specifically designed for the fire detection task: optimal mass transport models fire with dynamic texture, while a data-driven optical flow scheme models saturated flames. Then, characteristic features related to the flow magnitudes and directions are computed from the flow fields to discriminate between fire and non-fire motion. The proposed features are tested on a large video database to demonstrate their practical usefulness. Moreover, a novel evaluation method is proposed by fire simulations that allow for a controlled environment to analyze parameter influences, such as flame saturation, spatial resolution, frame rate, and random noise. PMID:23613042
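
The magnitude/direction features described above can be sketched generically: turbulent fire motion has incoherent flow directions, while rigid object motion is directionally coherent. This is a hedged sketch in the spirit of the paper, not its actual feature set; `flow_features` and the coherence score are invented for the example.

```python
import math

def flow_features(flow):
    """Summary features of a flow field (list of (u, v) vectors):
    mean magnitude, magnitude variance, and directional coherence
    (length of the mean unit vector; ~1 for rigid translation,
    ~0 for turbulent, fire-like motion)."""
    mags = [math.hypot(u, v) for u, v in flow]
    n = len(flow)
    mean_mag = sum(mags) / n
    var_mag = sum((m - mean_mag) ** 2 for m in mags) / n
    cx = sum(u / m for (u, _), m in zip(flow, mags)) / n
    cy = sum(v / m for (_, v), m in zip(flow, mags)) / n
    return mean_mag, var_mag, math.hypot(cx, cy)
```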

  9. A computational model for reference-frame synthesis with applications to motion perception.

    PubMed

    Clarke, Aaron M; Öğmen, Haluk; Herzog, Michael H

    2016-09-01

    As discovered by the Gestaltists, in particular by Duncker, we often perceive motion to be within a non-retinotopic reference frame. For example, the motion of a reflector on a bicycle appears to be circular, whereas it traces out a cycloidal path with respect to external world coordinates. The reflector motion appears to be circular because the human brain subtracts the horizontal motion of the bicycle from the reflector motion. The bicycle serves as a reference frame for the reflector motion. Here, we present a general mathematical framework, based on vector fields, to explain non-retinotopic motion processing. Using four types of non-retinotopic motion paradigms, we show how the theory works in detail. For example, we show how non-retinotopic motion in the Ternus-Pikler display can be computed. Copyright © 2015 Elsevier Ltd. All rights reserved.
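
Duncker's bicycle-reflector example can be checked numerically: in world (retinotopic) coordinates a point on the wheel rim traces a cycloid, and subtracting the hub's translation (the moving reference frame) leaves a pure circle. The function name and sampling below are invented for the demonstration.

```python
import math

def reflector_path(n=8, r=1.0):
    """World-frame (cycloid) and bicycle-frame (circle) coordinates of a
    reflector on a wheel of radius r rolling without slipping."""
    world, relative = [], []
    for i in range(n):
        t = 2 * math.pi * i / n
        wx = r * (t - math.sin(t))     # cycloid in world coordinates
        wy = r * (1 - math.cos(t))
        world.append((wx, wy))
        # subtract the hub's motion (r*t, r): a circle about the hub
        relative.append((wx - r * t, wy - r))
    return world, relative
```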

  10. Model-based registration of multi-rigid-body for augmented reality

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro

    2009-02-01

    Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of a visual marker. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods assume a single rigid-body model. In this research, we propose a novel model-based registration method for multi-rigid-body objects. For each frame, the 3-D models of each rigid part of the object are first rendered according to the motion and transformation estimated from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then simultaneously calculated from distances between the edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.

  11. Computation of fluid and particle motion from a time-sequenced image pair: a global outlier identification approach.

    PubMed

    Ray, Nilanjan

    2011-10-01

    Fluid motion estimation from time-sequenced images is a significant image analysis task. Its application is widespread in experimental fluidics research and many related areas like biomedical engineering and atmospheric sciences. In this paper, we present a novel flow computation framework to estimate the flow velocity vectors from two consecutive image frames. In an energy minimization-based flow computation, we propose a novel data fidelity term, which: 1) can accommodate various measures, such as cross-correlation or sum of absolute or squared differences of pixel intensities between image patches; 2) has a global mechanism to control the adverse effect of outliers arising from motion discontinuities or proximity to image borders; and 3) can go hand-in-hand with various spatial smoothness terms. Further, the proposed data term and related regularization schemes are both applicable to dense and sparse flow vector estimations. We validate these claims by numerical experiments on benchmark flow data sets. © 2011 IEEE
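
The need for outlier control in a data fidelity term can be shown in its crudest form: capping each pixel residual so that gross outliers (e.g., pixels straddling a motion discontinuity) contribute a bounded amount instead of dominating the energy. This is an illustrative stand-in, not the paper's actual global mechanism.

```python
def ssd(a, b):
    """Plain sum of squared differences: one gross outlier dominates."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def robust_ssd(a, b, cutoff):
    """SSD with each per-pixel squared residual capped at `cutoff`,
    bounding the influence of any single outlying pixel."""
    return sum(min((x - y) ** 2, cutoff) for x, y in zip(a, b))
```

Comparing the two on a patch pair with one corrupted pixel makes the difference obvious: the plain SSD is dominated by the outlier, while the capped version stays comparable to the inlier cost.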

  12. Pitch body orientation influences the perception of self-motion direction induced by optic flow.

    PubMed

    Bourrelly, A; Vercher, J-L; Bringoux, L

    2010-10-04

    We studied the effect of static pitch body tilts on the perception of self-motion direction induced by a visual stimulus. Subjects were seated in front of a screen on which was projected a 3D cluster of moving dots visually simulating a forward motion of the observer with upward or downward directional biases (relative to the true earth-horizontal direction). The subjects were tilted at various angles relative to gravity and were asked to estimate the direction of the perceived motion (nose-up, as during take-off, or nose-down, as during landing). The data showed that body orientation proportionally affected the amount of error in the reported perceived direction (by 40% of body tilt magnitude in a range of ±20°), and these errors were systematically in the direction of body tilt. As a consequence, the same visual stimulus was interpreted differently depending on body orientation. While the subjects were required to perform the task in a geocentric reference frame (i.e., relative to a gravity-related direction), they were clearly influenced by egocentric references. These results suggest that the perception of self-motion is not elaborated within an exclusive reference frame (either egocentric or geocentric) but rather results from the combined influence of both. © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Design of Visco-Elastic Dampers for RC Frame for Site-Specific Earthquake

    NASA Astrophysics Data System (ADS)

    Kamatchi, P.; Rama Raju, K.; Ravisankar, K.; Iyer, Nagesh R.

    2016-12-01

    A number of Reinforced Concrete (RC) framed buildings were damaged in Ahmedabad, India, located about 240 km from the epicenter, during the January 2001 moment-magnitude (Mw) 7.6 Bhuj earthquake. In the present study, two-dimensional nonlinear time-history dynamic analyses of a typical 13-storey frame assumed to be located at Ahmedabad are carried out with rock-level and surface-level site-specific ground motions for a scenario earthquake of Mw 7.6 from Bhuj. Artificial ground motions are generated using an extended finite-source stochastic model with seismological parameters reported in the literature for the 2001 Bhuj earthquake. Surface-level ground motions are obtained for a typical soil profile of 100 m depth reported in the literature through one-dimensional equivalent linear wave propagation analyses. From the analyses, failure of the frame is observed for surface-level ground motions, which indicates that, in addition to the inadequacy of the cross sections and reinforcement of the RC members of the frame chosen, the rich energy content of the surface-level ground motion near the fundamental period of the frame also contributed to the failure. As a retrofitting measure, five Visco-Elastic Dampers (VED) in chevron bracing are added to the frame. For the frame considered in the present study, provision of VED is found to be effective in mitigating damage for the soil site considered.

  14. Motion-based nearest vector metric for reference frame selection in the perception of motion.

    PubMed

    Agaoglu, Mehmet N; Clarke, Aaron M; Herzog, Michael H; Ögmen, Haluk

    2016-05-01

    We investigated how the visual system selects a reference frame for the perception of motion. Two concentric arcs underwent circular motion around the center of the display, where observers fixated. The outer (target) arc's angular velocity profile was modulated by a sine wave midflight whereas the inner (reference) arc moved at a constant angular speed. The task was to report whether the target reversed its direction of motion at any point during its motion. We investigated the effects of spatial and figural factors by systematically varying the radial and angular distances between the arcs, and their relative sizes. We found that the effectiveness of the reference frame decreases with increasing radial- and angular-distance measures. Drastic changes in the relative sizes of the arcs did not influence motion reversal thresholds, suggesting no influence of stimulus form on perceived motion. We also investigated the effect of common velocity by introducing velocity fluctuations to the reference arc as well. We found no effect of whether or not a reference frame has a constant motion. We examined several form- and motion-based metrics, which could potentially unify our findings. We found that a motion-based nearest vector metric can fully account for all the data reported here. These findings suggest that the selection of reference frames for motion processing does not result from a winner-take-all process, but instead, can be explained by a field whose strength decreases with the distance between the nearest motion vectors regardless of the form of the moving objects.
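
The winning metric above can be written down directly: reference-frame strength falls off with the distance between the nearest pair of motion vectors of the two objects, regardless of object form. Only the nearest-vector distance comes from the text; the specific decay function below is an invented placeholder (the paper only states that strength decreases with distance).

```python
import math

def nearest_vector_distance(vecs_a, vecs_b):
    """Distance between the closest pair of motion vectors (each given
    by its spatial position) belonging to two moving objects."""
    return min(math.dist(p, q) for p in vecs_a for q in vecs_b)

def frame_strength(vecs_a, vecs_b, scale=1.0):
    """Candidate reference-frame field strength, assumed here to decay
    as 1/(1 + d/scale) with the nearest-vector distance d."""
    return 1.0 / (1.0 + nearest_vector_distance(vecs_a, vecs_b) / scale)
```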

  15. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, S; Rottmann, J; Berbeco, R

    2014-06-01

    Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking. 
Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.

  16. Post-Newtonian Reference Frames for Advanced Theory of the Lunar Motion and a New Generation of Lunar Laser Ranging

    NASA Astrophysics Data System (ADS)

    Xie, Yi; Kopeikin, Sergei (Department of Physics and Astronomy, University of Missouri, USA)

    2010-08-01

    We overview a set of post-Newtonian reference frames for a comprehensive study of the orbital dynamics and rotational motion of Moon and Earth by means of lunar laser ranging (LLR). We employ a scalar-tensor theory of gravity depending on two post-Newtonian parameters, β and γ, and utilize the relativistic resolutions on reference frames adopted by the International Astronomical Union (IAU) in 2000. We assume that the solar system is isolated and space-time is asymptotically flat at infinity. The primary reference frame covers the entire space-time, has its origin at the solar-system barycenter (SSB) and spatial axes stretching up to infinity. The SSB frame is not rotating with respect to a set of distant quasars that are forming the International Celestial Reference Frame (ICRF). The secondary reference frame has its origin at the Earth-Moon barycenter (EMB). The EMB frame is locally inertial and is not rotating dynamically, in the sense that the equation of motion of a test particle moving with respect to the EMB frame does not contain the Coriolis and centripetal forces. Two other local frames, geocentric (GRF) and selenocentric (SRF), have their origins at the centers of mass of Earth and Moon respectively and do not rotate dynamically. Each local frame is subject to the geodetic precession both with respect to other local frames and with respect to the ICRF because of their relative motion with respect to each other. The theoretical advantage of the dynamically non-rotating local frames is a simpler mathematical description. Each local frame can be aligned with the axes of the ICRF after applying the matrix of the relativistic precession. 
The set of one global and three local frames is introduced in order to fully decouple the relative motion of Moon with respect to Earth from the orbital motion of the Earth-Moon barycenter, as well as to connect the coordinate description of the lunar motion, an observer on Earth, and a retro-reflector on Moon to directly measurable quantities such as the proper time and the round-trip laser-light distance. We solve the gravity field equations and find the metric tensor and the scalar field in all frames; the description includes the post-Newtonian multipole moments of the gravitational field of Earth and Moon. We also derive the post-Newtonian coordinate transformations between the frames and analyze the residual gauge freedom.

  17. Pose and motion recovery from feature correspondences and a digital terrain map.

    PubMed

    Lerner, Ronen; Rivlin, Ehud; Rotstein, Héctor P

    2006-09-01

    A novel algorithm for pose and motion estimation using corresponding features and a Digital Terrain Map is proposed. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables the elimination of the ambiguity present in vision-based algorithms for motion recovery. As a consequence, the absolute position and orientation of a camera can be recovered with respect to the external reference frame. In order to do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. Explicit reconstruction of the 3D world is not required. When considering a number of feature points, the resulting constraints can be solved using nonlinear optimization in terms of position, orientation, and motion. Such a procedure requires an initial guess of these parameters, which can be obtained from dead-reckoning or any other source. The feasibility of the algorithm is established through extensive experimentation. Performance is compared with a state-of-the-art alternative algorithm, which intermediately reconstructs the 3D structure and then registers it to the DTM. A clear advantage for the novel algorithm is demonstrated in a variety of scenarios.

  18. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, Chad R. R. N.; Kemp, Robert A. de, E-mail: RAdeKemp@ottawaheart.ca; Klein, Ran

Purpose: Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET–CT misalignment. Methods: A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. Results: In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. 
Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET–CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Conclusions: Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET–CTAC misalignment artifacts.

  19. Enhancing ejection fraction measurement through 4D respiratory motion compensation in cardiac PET imaging

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Wang, Xinhui; Gao, Xiangzhen; Segars, W. Paul; Lodge, Martin A.; Rahmim, Arman

    2017-06-01

ECG gated cardiac PET imaging measures functional parameters such as left ventricle (LV) ejection fraction (EF), providing diagnostic and prognostic information for management of patients with coronary artery disease (CAD). Respiratory motion degrades spatial resolution and affects the accuracy in measuring the LV volumes for EF calculation. The goal of this study is to systematically investigate the effect of respiratory motion correction on the estimation of end-diastolic volume (EDV), end-systolic volume (ESV), and EF, especially on the separation of normal and abnormal EFs. We developed a respiratory motion incorporated 4D PET image reconstruction technique which uses all gated-frame data to acquire a motion-suppressed image. Using the standard XCAT phantom and two individual-specific volunteer XCAT phantoms, we simulated dual-gated myocardial perfusion imaging data for normally and abnormally beating hearts. With and without respiratory motion correction, we measured the EDV, ESV, and EF from the cardiac-gated reconstructed images. For all the phantoms, the estimated volumes increased and the biases were significantly reduced with motion correction compared with those without. Furthermore, the improvement of ESV measurement in the abnormally beating heart led to better separation of normal and abnormal EFs. The simulation study demonstrated the significant effect of respiratory motion correction on cardiac imaging data with motion amplitude as small as 0.7 cm. The larger the motion amplitude, the greater the improvement that respiratory motion correction brought to the EF measurement. Using data-driven respiratory gating, we also demonstrated the effect of respiratory motion correction on estimating the above functional parameters from list mode patient data. Respiratory motion correction has been shown to improve the accuracy of EF measurement in clinical cardiac PET imaging.
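The EF reported here is a simple ratio of the gated volumes, EF = (EDV - ESV) / EDV. A minimal sketch of the computation (the volumes below are hypothetical illustrative values, not measurements from the study):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left-ventricular ejection fraction: the fraction of the
    end-diastolic volume expelled each beat, EF = (EDV - ESV) / EDV."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return (edv_ml - esv_ml) / edv_ml

# hypothetical normally beating heart: EDV 120 ml, ESV 50 ml
print(round(ejection_fraction(120.0, 50.0), 3))  # -> 0.583
```

Respiratory blurring that inflates the apparent ESV directly lowers the computed EF, which is why motion correction sharpens the normal/abnormal separation.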

  20. Fixing a Reference Frame to a Moving and Deforming Continent

    NASA Astrophysics Data System (ADS)

    Blewitt, G.; Kreemer, C.; Hammond, W. C.

    2016-12-01

The U.S. National Spatial Reference System will be modernized in 2022. A foundational component will be a geocentric reference frame fixed to the North America tectonic plate. Here we address challenges of fixing a reference frame to a moving and deforming continent. Scientific applications motivate fixing the frame with a scale consistent with the SI system, an origin that coincides with the Earth system's center of mass, and with axes attached to the rigidly rotating interior of the North America plate. Realizing the scale and origin is now achieved to < 0.5 mm/yr by combining space-geodetic techniques (SLR, VLBI, GPS, and DORIS) in the global system, ITRS. To realize the no-net rotation condition, the complexity of plate boundary deformation demands that we only select GPS stations far from plate boundaries. Another problem is that velocity uncertainties in models of glacial isostatic adjustment (GIA) are significant compared to uncertainties in observed velocities. GIA models generally agree that far-field horizontal velocities tend to be directed toward/away from Hudson Bay, depending on mantle viscosity, with uncertain sign and magnitude of velocity. Also in the far field, strain rates tend to be small beyond the peripheral bulge (near the US-Canada border). Thus the Earth's crust in the US east of the Rockies may appear to be rigid, even if this region moves relative to plate motion. This can affect Euler vector estimation, with implications (pros and cons) for scientific interpretation. Our previous approach [ref. 1] in defining the NA12 frame was to select a core set of 30 stations east of the Rockies and south of the U.S.-Canada border that satisfy strict criteria on position time series quality. The resulting horizontal velocities have an RMS of 0.3 mm/yr, quantifying a combination of plate rigidity and accuracy. However, this does not rule out possible common-mode motion arising from GIA. 
For the development of the new frame NA16, we consider approaches to this problem. We also apply new techniques including the MIDAS robust velocity estimator [ref. 2] and "GPS Imaging" of vertical motions and strain rates (Fig. 1), which together could assist in better defining "stable North America".
[1] Blewitt et al. (2013). J. Geodyn. 72, 11-24, doi:10.1016/j.jog.2013.08.004
[2] Blewitt et al. (2016). JGR 121, doi:10.1002/2015JB012552
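Euler vector estimation as discussed above rests on the rigid-plate relation v = ω × r: a plate's rotation vector predicts each station's horizontal velocity. A minimal sketch of that relation (the Euler vector below is a hypothetical illustration, not an NA12/NA16 estimate):

```python
import numpy as np

def plate_velocity(lat_deg: float, lon_deg: float, omega) -> np.ndarray:
    """Velocity of a point on a rigid plate, v = omega x r.
    omega: Euler (rotation) vector in rad/Myr, geocentric Cartesian axes.
    Returns the velocity in mm/yr (spherical-Earth approximation)."""
    R = 6.371e9  # mean Earth radius in mm
    lat, lon = np.radians([lat_deg, lon_deg])
    r = R * np.array([np.cos(lat) * np.cos(lon),
                      np.cos(lat) * np.sin(lon),
                      np.sin(lat)])
    return np.cross(omega, r) / 1.0e6  # mm/Myr -> mm/yr

# hypothetical rotation about the z-axis at 1e-3 rad/Myr:
# a point on the equator moves eastward at ~6.4 mm/yr
print(plate_velocity(0.0, 0.0, np.array([0.0, 0.0, 1.0e-3])))
```

A no-net-rotation frame for "stable North America" amounts to choosing ω so that residuals v_observed - ω × r are minimized over the selected core stations.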

  1. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    NASA Astrophysics Data System (ADS)

    McClelland, Jamie R.; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O' Connell, Dylan; Low, Daniel A.; Kaza, Evangelia; Collins, David J.; Leach, Martin O.; Hawkes, David J.

    2017-06-01

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.
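The conventional two-step approach that this paper unifies ends with fitting a correspondence model to per-frame motion estimates. A toy sketch of that fitting step, assuming a linear correspondence model and synthetic noise-free data (this is the classical second step, not the paper's single joint optimization):

```python
import numpy as np

# Linear correspondence model: motion parameters m = C @ s, where s are
# the surrogate signals for one frame. Fit C by least squares over frames.
rng = np.random.default_rng(0)
T, n_surr, n_motion = 50, 2, 3           # frames, surrogates, motion params
S = rng.standard_normal((T, n_surr))     # surrogate signals per frame
C_true = np.array([[1.0, 0.5],
                   [0.0, 2.0],
                   [-1.0, 0.3]])         # hypothetical ground-truth model
M = S @ C_true.T                         # motion parameters per frame
X, *_ = np.linalg.lstsq(S, M, rcond=None)  # solves S @ X ~= M, X = C.T
C_fit = X.T
print(np.allclose(C_fit, C_true))        # -> True (noise-free toy data)
```

In the unified framework, the registration parameters and a model of this kind are instead estimated together, which is what makes partial data (single slices, projections, k-space) usable.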

  2. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images.

    PubMed

    McClelland, Jamie R; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; Connell, Dylan O'; Low, Daniel A; Kaza, Evangelia; Collins, David J; Leach, Martin O; Hawkes, David J

    2017-06-07

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of 'partial' imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.

  3. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    PubMed Central

    McClelland, Jamie R; Modat, Marc; Arridge, Simon; Grimes, Helen; D’Souza, Derek; Thomas, David; Connell, Dylan O’; Low, Daniel A; Kaza, Evangelia; Collins, David J; Leach, Martin O; Hawkes, David J

    2017-01-01

Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated. PMID:28195833

  4. The reference frame for encoding and retention of motion depends on stimulus set size.

    PubMed

    Huynh, Duong; Tripathy, Srimant P; Bedell, Harold E; Öğmen, Haluk

    2017-04-01

    The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J; Nguyen, D; O’Brien, R

Purpose: The kilovoltage intrafraction monitoring (KIM) scheme has been successfully used to simultaneously monitor 3D tumor motion during radiotherapy. Recently, an iterative closest point (ICP) algorithm was implemented in KIM to also measure rotations about three axes, enabling real-time tracking of tumor motion in six degrees-of-freedom (DoF). This study aims to evaluate the accuracy of the six DoF motion estimates of KIM by comparing them with the corresponding motion (i) measured by the Calypso; and (ii) derived from kV/MV triangulation. Methods: (i) Various motions (static and dynamic) were applied to a CIRS phantom with three embedded electromagnetic transponders (Calypso Medical) using a 5D motion platform (HexaMotion) and a rotating treatment couch while both KIM and Calypso were used to concurrently track the phantom motion in six DoF. (ii) KIM was also used to retrospectively estimate six DoF motion from continuous sets of kV projections of a prostate, implanted with three gold fiducial markers (2 patients with 80 fractions in total), acquired during the treatment. Corresponding motion was obtained from kV/MV triangulation using a closed form least squares method based on three markers’ positions. Only the frames where all three markers were present were used in the analysis. The mean differences between the corresponding motion estimates were calculated for each DoF. Results: Experimental results showed that the mean absolute differences in six DoF phantom motion measured by Calypso and KIM were within 1.1° and 0.7 mm. The six DoF prostate tumor motion derived from kV/MV triangulation agreed well with the KIM estimates, with mean (s.d.) differences of up to 0.2° (1.36°) and 0.2 (0.25) mm for rotation and translation, respectively. Conclusion: These results suggest that KIM can provide accurate six DoF intrafraction tumor motion estimates during radiotherapy.
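Recovering a six DoF rigid transform from matched marker positions has a standard closed-form least-squares solution (Kabsch/Procrustes). The sketch below is that textbook solution, for illustration only; it is not the KIM ICP implementation or the authors' closed-form kV/MV method:

```python
import numpy as np

def rigid_transform(P, Q):
    """Closed-form least-squares rigid transform (Kabsch/Procrustes):
    returns rotation R (3x3) and translation t (3x1) with Q ~= R @ P + t.
    P, Q: (3, N) arrays of corresponding 3D points, e.g. matched
    fiducial-marker positions in two frames."""
    cP = P.mean(axis=1, keepdims=True)
    cQ = Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

With exact correspondences from three non-collinear markers, R and t (three rotations plus three translations, the six DoF) are recovered to machine precision.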

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zawisza, I; Yan, H; Yin, F

Purpose: To assure that tumor motion is within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods are useful in providing real-time tumor/surrogate motion, but no future information is available. In order to anticipate future tumor/surrogate motion and track target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) a training component containing the primary data, from the first frame to the beginning of the input subsequence; (b) an input subsequence component of the surrogate signal used as input to the prediction algorithm; (c) an output subsequence component, the remaining signal, used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component which best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the parts that follow each best-matched subsequence and combining them, with the assigned weighting factors, to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between prediction and known output. Results: Respiratory motion data were collected for 20 patients using the RPM system. The output subsequence is the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was 100 (∼3 seconds) frames prior to the output subsequence. Based on the analysis of correlation coefficient between predicted and known output subsequence, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for equal-weighting and relative-weighting strategies, respectively. 
Conclusion: Preliminary results indicate that the prediction algorithm is effective in estimating surrogate motion multiple steps in advance. The relative-weighting method shows better prediction accuracy than the equal-weighting method. More parameters of this algorithm are under investigation.
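Steps (1)-(3) above can be sketched as a nearest-subsequence predictor. The window lengths follow the abstract (100-sample input, 50-sample output); the Euclidean match criterion, equal weighting, and number of matches k are illustrative assumptions:

```python
import numpy as np

def predict_ahead(signal, n_in=100, n_out=50, k=3):
    """Template-matching multi-step prediction sketch: (1) find the k
    training subsequences closest to the most recent n_in samples,
    (2) weight them equally, (3) average the n_out samples that
    followed each match. Assumes len(signal) > 2*n_in + n_out."""
    query = signal[-n_in:]
    train = signal[:-n_in]
    # every length-n_in training window whose following n_out samples exist
    n_win = len(train) - n_in - n_out + 1
    dists = np.array([np.linalg.norm(train[i:i + n_in] - query)
                      for i in range(n_win)])
    best = np.argsort(dists)[:k]
    # collect the samples that followed each best match and average them
    return np.mean([train[i + n_in:i + n_in + n_out] for i in best], axis=0)
```

On a periodic signal such as quiet respiration, the best-matched windows are near-repeats of the current breath, so their continuations predict roughly the next 2 seconds.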

  7. Underwater image mosaicking and visual odometry

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott

    2017-05-01

This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU) - an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e. featureless) nature of video data obtained underwater, which could make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame to frame image transformation, registration and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
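The velocity recovery from (i)-(iii) reduces to scaling the frame-to-frame image shift by the ground sampling distance, which for a downward-looking camera follows from altitude and focal length. A minimal sketch with hypothetical camera parameters (not values from the paper):

```python
def ground_velocity(shift_px: float, height_m: float,
                    focal_px: float, fps: float) -> float:
    """Convert a frame-to-frame image translation (pixels) into ground
    speed, assuming a downward-looking camera at known altitude:
    ground sampling distance = height / focal length (in pixel units)."""
    gsd_m = height_m / focal_px      # metres on the seabed per pixel
    return shift_px * gsd_m * fps    # pixels/frame * m/pixel * frames/s

# hypothetical numbers: 4 px shift per frame, 3 m altitude,
# 800 px focal length, 10 frames per second
print(round(ground_velocity(4.0, 3.0, 800.0, 10.0), 4))  # -> 0.15 m/s
```

In the paper's pipeline the per-frame shift would come from the registration/mosaicking transform rather than being given directly.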

  8. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of kinematics of the mast. If the motion between the consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window and the best correlation results indicate the appropriate match. 
The program could be a core for building application programs for systems that require coordination of vision and robotic motion.

  9. Motion-based nonuniformity correction in DoFP polarimeters

    NASA Astrophysics Data System (ADS)

    Kumar, Rakesh; Tyo, J. Scott; Ratliff, Bradley M.

    2007-09-01

    Division of Focal Plane polarimeters (DoFP) operate by integrating an array of micropolarizer elements with a focal plane array. These devices have been investigated for over a decade, and example systems have been built in all regions of the optical spectrum. DoFP devices have the distinct advantage that they are mechanically rugged, inherently temporally synchronized, and optically aligned. They have the concomitant disadvantage that each pixel in the FPA has a different instantaneous field of view (IFOV), meaning that the polarization component measurements that go into estimating the Stokes vector across the image come from four different points in the field. In addition to IFOV errors, microgrid camera systems operating in the LWIR have the additional problem that FPA nonuniformity (NU) noise can be quite severe. The spatial differencing nature of a DoFP system exacerbates the residual NU noise that is remaining after calibration, and is often the largest source of false polarization signatures away from regions where IFOV error dominates. We have recently presented a scene based algorithm that uses frame-to-frame motion to compensate for NU noise in unpolarized IR imagers. In this paper, we have extended that algorithm so that it can be used to compensate for NU noise on a DoFP polarimeter. Furthermore, the additional information provided by the scene motion can be used to significantly reduce the IFOV error. We have found a reduction of IFOV error by a factor of 10 if the scene motion is known exactly. Performance is reduced when the motion must be estimated from the scene, but still shows a marked improvement over static DoFP images.
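For context, the baseline that the scene-based algorithm improves on is calibration-based nonuniformity correction, which assumes a per-pixel linear response raw = gain·scene + offset and estimates both terms from two uniform (flat-field) references. A toy sketch with synthetic data (the scene-based method in the paper instead estimates residual NU from frame-to-frame motion):

```python
import numpy as np

# Synthetic focal plane with per-pixel fixed-pattern gain and offset.
rng = np.random.default_rng(1)
scene = rng.uniform(20.0, 30.0, (8, 8))           # true irradiance
gain = 1.0 + 0.05 * rng.standard_normal((8, 8))   # fixed-pattern gain
offset = 0.5 * rng.standard_normal((8, 8))        # fixed-pattern offset
raw = gain * scene + offset                       # what the FPA reports

# Two-point calibration against uniform references at levels T1 < T2.
T1, T2 = 10.0, 40.0
r1 = gain * T1 + offset
r2 = gain * T2 + offset
g_est = (r2 - r1) / (T2 - T1)                     # recovered gain map
o_est = r1 - g_est * T1                           # recovered offset map
corrected = (raw - o_est) / g_est
print(np.allclose(corrected, scene))              # -> True
```

In a DoFP polarimeter the stakes are higher than in a conventional imager: the Stokes estimates difference neighbouring pixels, so any residual per-pixel error left after such a calibration is amplified into false polarization signatures.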

  10. A unified analysis of crustal motion in Southern California, 1970-2004: The SCEC crustal motion map

    NASA Astrophysics Data System (ADS)

    Shen, Z.-K.; King, R. W.; Agnew, D. C.; Wang, M.; Herring, T. A.; Dong, D.; Fang, P.

    2011-11-01

To determine crustal motions in and around southern California, we have processed and combined trilateration data collected from 1970 to 1992, VLBI data from 1979 to 1992, and GPS data from 1986 to 2004: a long temporal coverage required in part by the occurrence of several large earthquakes in this region. From a series of solutions for station positions, we have estimated interseismic velocities, coseismic displacements, and postseismic motions. Within the region from 31°N to 38°N and east to 114°W, the final product includes estimated horizontal velocities for 1009 GPS, 190 trilateration, and 16 VLBI points, with ties between some of these used to stabilize the solution. All motions are relative to the Stable North American Reference Frame (SNARF) as realized through the velocities of 20 GPS stations. This provides a relatively dense set of horizontal velocity estimates, with well-tested errors, for the past quarter century over the plate boundary from 31°N to 36.5°N. These velocities agree well with those from the Plate Boundary Observatory, which apply to a later time period. We also estimated vertical velocities, 533 of which have errors below 2 mm/yr. Most of these velocities are less than 1 mm/yr, but they show 2-4 mm/yr subsidence in the Ventura and Los Angeles basins and in the Salton Trough. Our analysis also included estimates of coseismic and postseismic motions related to the 1992 Landers, 1994 Northridge, 1999 Hector Mine, and 2003 San Simeon earthquakes. Postseismic motions increase logarithmically over time with a time constant of about 10 days, and generally mimic the direction and relative amplitude of the coseismic offsets.
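The logarithmic postseismic growth described above is conventionally written d(t) = A ln(1 + t/τ). A minimal sketch using the ~10 day time constant from the abstract (the amplitude is a hypothetical value, not an estimate from the study):

```python
import numpy as np

def postseismic_displacement(t_days, amplitude_mm, tau_days=10.0):
    """Logarithmic postseismic decay model d(t) = A * ln(1 + t / tau).
    tau_days defaults to the ~10-day time constant reported above;
    the amplitude A is site- and event-dependent."""
    return amplitude_mm * np.log1p(np.asarray(t_days) / tau_days)

# hypothetical amplitude of 5 mm: displacement after one time constant
print(round(float(postseismic_displacement(10.0, 5.0)), 3))  # -> 3.466
```

With this form the motion grows rapidly in the first weeks after the event and then flattens, which is why separating it from the interseismic velocity matters for a multi-decade combination like this one.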

  11. Using structural damage statistics to derive macroseismic intensity within the Kathmandu valley for the 2015 M7.8 Gorkha, Nepal earthquake

    NASA Astrophysics Data System (ADS)

    McGowan, S. M.; Jaiswal, K. S.; Wald, D. J.

    2017-09-01

    We make and analyze structural damage observations from within the Kathmandu valley following the 2015 M7.8 Gorkha, Nepal earthquake to derive macroseismic intensities at several locations including some located near ground motion recording sites. The macroseismic intensity estimates supplement the limited strong ground motion data in order to characterize the damage statistics. This augmentation allows for direct comparisons between ground motion amplitudes and structural damage characteristics and ultimately produces a more constrained ground shaking hazard map for the Gorkha earthquake. For systematic assessments, we focused on damage to three specific building categories: (a) low/mid-rise reinforced concrete frames with infill brick walls, (b) unreinforced brick masonry bearing walls with reinforced concrete slabs, and (c) unreinforced brick masonry bearing walls with partial timber framing. Evaluating dozens of photos of each construction type, assigning each building in the study sample to a European Macroseismic Scale (EMS)-98 Vulnerability Class based upon its structural characteristics, and then individually assigning an EMS-98 Damage Grade to each building allows a statistically derived estimate of macroseismic intensity for each of nine study areas in and around the Kathmandu valley. This analysis concludes that EMS-98 macroseismic intensities for the study areas from the Gorkha mainshock typically were in the VII-IX range. The intensity assignment process described is more rigorous than the informal approach of assigning intensities based upon anecdotal media or first-person accounts of felt-reports, shaking, and their interpretation of damage. Detailed EMS-98 macroseismic assessments in urban areas are critical for quantifying relations between shaking and damage as well as for calibrating loss estimates. We show that the macroseismic assignments made herein result in fatality estimates consistent with the overall and district-wide reported values.

  12. Using structural damage statistics to derive macroseismic intensity within the Kathmandu valley for the 2015 M7.8 Gorkha, Nepal earthquake

    USGS Publications Warehouse

    McGowan, Sean; Jaiswal, Kishor; Wald, David J.

    2017-01-01

    We make and analyze structural damage observations from within the Kathmandu valley following the 2015 M7.8 Gorkha, Nepal earthquake to derive macroseismic intensities at several locations including some located near ground motion recording sites. The macroseismic intensity estimates supplement the limited strong ground motion data in order to characterize the damage statistics. This augmentation allows for direct comparisons between ground motion amplitudes and structural damage characteristics and ultimately produces a more constrained ground shaking hazard map for the Gorkha earthquake. For systematic assessments, we focused on damage to three specific building categories: (a) low/mid-rise reinforced concrete frames with infill brick walls, (b) unreinforced brick masonry bearing walls with reinforced concrete slabs, and (c) unreinforced brick masonry bearing walls with partial timber framing. Evaluating dozens of photos of each construction type, assigning each building in the study sample to a European Macroseismic Scale (EMS)-98 Vulnerability Class based upon its structural characteristics, and then individually assigning an EMS-98 Damage Grade to each building allows a statistically derived estimate of macroseismic intensity for each of nine study areas in and around the Kathmandu valley. This analysis concludes that EMS-98 macroseismic intensities for the study areas from the Gorkha mainshock typically were in the VII–IX range. The intensity assignment process described is more rigorous than the informal approach of assigning intensities based upon anecdotal media or first-person accounts of felt-reports, shaking, and their interpretation of damage. Detailed EMS-98 macroseismic assessments in urban areas are critical for quantifying relations between shaking and damage as well as for calibrating loss estimates. We show that the macroseismic assignments made herein result in fatality estimates consistent with the overall and district-wide reported values.

  13. Semi-automatic motion compensation of contrast-enhanced ultrasound images from abdominal organs for perfusion analysis.

    PubMed

    Schäfer, Sebastian; Nylund, Kim; Sævik, Fredrik; Engjom, Trond; Mézl, Martin; Jiřík, Radovan; Dimcevski, Georg; Gilja, Odd Helge; Tönnies, Klaus

    2015-08-01

This paper presents a system for correcting motion influences in time-dependent 2D contrast-enhanced ultrasound (CEUS) images to assess tissue perfusion characteristics. The system consists of a semi-automatic frame selection method to find images with out-of-plane motion as well as a method for automatic motion compensation. Translational and non-rigid motion compensation is applied by introducing a temporal continuity assumption. A study consisting of 40 clinical datasets was conducted to compare the perfusion with simulated perfusion using pharmacokinetic modeling. Overall, the proposed approach decreased the mean average difference between the measured perfusion and the pharmacokinetic model estimation. It was non-inferior to a manual approach for three of four patient cohorts and reduced the analysis time by 41% compared to manual processing.

  14. Unsupervised motion-based object segmentation refined by color

    NASA Astrophysics Data System (ADS)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low complexity solution. For still images, several approaches exist based on colour, but these lack in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation with many segments covering each single physical object. Other colour segmentation approaches exist which somehow limit the number of segments to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real world object segmentation, because real world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because of the lack of sufficient resolution of efficient motion estimators, like the 3DRS block matcher, the resulting segmentation is not at pixel resolution, but at block resolution. Existing pixel resolution motion estimators are more sensitive to noise, suffer more from aperture problems or have less correspondence to the true motion of objects when compared to block-based approaches or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. 
On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance a block is unique and thus decrease the chance of the wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge, few methods exist which adopt this approach. One example is the mesh-based method of [meshrefine]. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, the method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects. NEW METHOD As mentioned above we start with motion segmentation and refine the edges of this segmentation with a pixel resolution colour segmentation method afterwards. There are several reasons for this approach: + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous. + This approach restricts the computationally expensive pixel resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
+ The motion cue alone is often enough to reliably distinguish objects from one another and the background. To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known for its ability to estimate motion vectors which closely resemble the true motion. BLOCK-BASED MOTION SEGMENTATION As mentioned above we start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known K-means segmentation method [K-means]. Several other methods (e.g. [kmeansc]) adapt K-means for connectedness by adding a weighted shape-error. This adds the additional difficulty of finding the correct weights for the shape-parameters. Also, these methods often bias one particular pre-defined shape. The presented method, which we call K-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation method to such a degree that it allows the method to use least squares for the robust fitting of affine motion models for each segment. Contrary to [parmkm], the segmentation step still operates on vectors instead of model parameters. To make sure the segmentation is temporally consistent, the segmentation of the previous frame is used as the initialisation for every new frame. We also present a scheme which makes the algorithm independent of the initially chosen number of segments. COLOUR-BASED INTRA-BLOCK SEGMENTATION The block resolution motion-based segmentation forms the starting point for the pixel resolution segmentation. The pixel resolution segmentation is obtained from the block resolution segmentation by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in either one of two neighbouring blocks that belong to different clusters.
This assumption allows us to do the pixel resolution segmentation on each pair of such neighbouring blocks separately. Because of the local nature of the segmentation, it largely avoids problems with heterogeneously coloured areas. Because no new segments are introduced in this step, it also does not suffer from oversegmentation problems. The presented method has no problems with bifurcations. For the pixel resolution segmentation itself we reclassify pixels such that we optimize an error norm which favours similarly coloured regions and straight edges. SEGMENTATION MEASURE To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we decided to define a ground truth output which we find desirable for a given input. We define the measure for the segmentation quality as being how different the segmentation is from the ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately. Also, it allows us to evaluate which parts of a frame suffer from oversegmentation or undersegmentation. The proposed algorithm has been tested on several typical sequences. CONCLUSIONS In this abstract we presented a new video segmentation method which performs well in the segmentation of multiple independently moving foreground objects from each other and the background. It combines the strong points of both colour and motion segmentation in the way we expected. One of the weak points is that the segmentation method suffers from undersegmentation when adjacent objects display similar motion. In sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.
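The edge-constrained clustering idea behind K-regions can be sketched as follows: blocks are clustered on their motion vectors K-means-style, but only blocks that border a different segment may be reassigned, which encourages connected segments. This toy version uses segment-mean motion instead of the affine motion models described above; the function name and the fixed-sweep iteration scheme are illustrative assumptions.

```python
import numpy as np

def k_regions(mv, labels, iters=10):
    """Toy sketch of edge-constrained motion clustering: a block may
    only switch to the label of a 4-neighbour in a different segment
    whose segment-mean motion fits the block better."""
    H, W, _ = mv.shape
    labels = labels.copy()
    for _ in range(iters):
        # mean motion vector of each current segment
        means = {l: mv[labels == l].mean(axis=0) for l in np.unique(labels)}
        changed = False
        for y in range(H):
            for x in range(W):
                best = labels[y, x]
                best_err = np.sum((mv[y, x] - means[best]) ** 2)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        l = labels[ny, nx]
                        if l != labels[y, x]:  # boundary block only
                            err = np.sum((mv[y, x] - means[l]) ** 2)
                            if err < best_err:
                                best, best_err = l, err
                if best != labels[y, x]:
                    labels[y, x] = best
                    changed = True
        if not changed:
            break
    return labels
```

Because interior blocks never change label, segments can only grow or shrink at their edges, which is the connectedness property the abstract describes.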

  15. Reducing misfocus-related motion artefacts in laser speckle contrast imaging.

    PubMed

    Ringuette, Dene; Sigal, Iliya; Gad, Raanan; Levi, Ofer

    2015-01-01

Laser Speckle Contrast Imaging (LSCI) is a flexible, easy-to-implement technique for measuring blood flow speeds in vivo. In order to obtain reliable quantitative data from LSCI, the object must remain in the focal plane of the imaging system for the duration of the measurement session. However, since LSCI suffers from inherent frame-to-frame noise, it often requires a moving average filter to produce quantitative results. This frame-to-frame noise also makes the implementation of a rapid autofocus system challenging. In this work, we demonstrate an autofocus method and system based on a novel measure of misfocus which serves as an accurate and noise-robust feedback mechanism. This measure of misfocus is shown to enable the localization of best focus with sub-depth-of-field sensitivity, yielding more accurate estimates of blood flow speeds and blood vessel diameters.
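The LSCI statistic itself is standard: local speckle contrast is the ratio of the standard deviation to the mean intensity over a small window, and faster flow blurs the speckle and lowers the contrast. A minimal (unoptimized) sketch follows; the window size and function name are chosen for illustration, not taken from the paper.

```python
import numpy as np

def speckle_contrast(img, w=7):
    """Local speckle contrast K = sigma / mean over a w x w window,
    the standard LSCI statistic (flow speed is inversely related to K)."""
    img = img.astype(float)
    pad = w // 2
    H, W = img.shape
    K = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            win = img[max(0, y - pad):y + pad + 1,
                      max(0, x - pad):x + pad + 1]
            K[y, x] = win.std() / (win.mean() + 1e-12)  # avoid divide-by-zero
    return K
```

In practice the same computation is done with a separable uniform filter for speed; the nested-loop version above just makes the definition explicit.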

  16. Terrain shape estimation from optical flow, using Kalman filtering

    NASA Astrophysics Data System (ADS)

    Hoff, William A.; Sklair, Cheryl W.

    1990-01-01

    As one moves through a static environment, the visual world as projected on the retina seems to flow past. This apparent motion, called optical flow, can be an important source of depth perception for autonomous robots. An important application is in planetary exploration -the landing vehicle must find a safe landing site in rugged terrain, and an autonomous rover must be able to navigate safely through this terrain. In this paper, we describe a solution to this problem. Image edge points are tracked between frames of a motion sequence, and the range to the points is calculated from the displacement of the edge points and the known motion of the camera. Kalman filtering is used to incrementally improve the range estimates to those points, and provide an estimate of the uncertainty in each range. Errors in camera motion and image point measurement can also be modelled with Kalman filtering. A surface is then interpolated to these points, providing a complete map from which hazards such as steeply sloping areas can be detected. Using the method of extended Kalman filtering, our approach allows arbitrary camera motion. Preliminary results of an implementation are presented, and show that the resulting range accuracy is on the order of 1-2% of the range.
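The incremental range refinement can be illustrated with a scalar Kalman filter fusing repeated noisy range measurements of a static point; the system in the paper is an extended Kalman filter over camera motion and image measurements, so this is only a sketch of the update step, with all names and values assumed.

```python
def kalman_range(z_list, r_var, q_var=0.0, x0=10.0, p0=100.0):
    """Scalar Kalman filter fusing repeated noisy range measurements
    of a static point; the variance p shrinks as evidence accumulates."""
    x, p = x0, p0
    history = []
    for z in z_list:
        p = p + q_var            # predict (static state, optional process noise)
        k = p / (p + r_var)      # Kalman gain
        x = x + k * (z - x)      # update range estimate
        p = (1 - k) * p          # update uncertainty
        history.append((x, p))
    return x, p, history
```

The shrinking variance p is exactly the "estimate of the uncertainty in each range" that the interpolation step can weight by.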

  17. WE-AB-204-09: Respiratory Motion Correction in 4D-PET by Simultaneous Motion Estimation and Image Reconstruction (SMEIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalantari, F; Wang, J; Li, T

    2015-06-15

Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) reconstruction algorithms do not make full use of all projection statistics; and 2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, originally developed for cone beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as the initial estimate. A motion model update is then performed to obtain an optimal set of DVFs between the pmc-PET and the other phases by matching the forward projection of the deformed pmc-PET to the measured projections of those phases. Using the updated DVFs, OSEM-TV image reconstruction is repeated and new DVFs are estimated from the updated images. A 4D XCAT phantom with a typical FDG biodistribution and a 10-mm-diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: Image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more than 5-fold overestimation of the tumor size and a 54% underestimation of the tumor-to-lung contrast ratio. This error was reduced to 37% and 20% for post-reconstruction registration methods and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved since all projection data are combined to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness control parameters in the DVF estimation step.
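The OSEM-TV reconstruction named above is an accelerated, regularized variant of the basic MLEM multiplicative update for a Poisson emission model. A hedged toy illustration of that underlying update, not the authors' implementation (the function name and arguments are assumptions):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Plain MLEM for a linear emission model y ~ Poisson(A x):
    x <- x * (A^T (y / A x)) / (A^T 1). OSEM splits the projections
    into subsets to accelerate this, and TV adds regularization."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        ratio = y / np.maximum(proj, 1e-12)  # measured / estimated
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The multiplicative form keeps the image nonnegative automatically, which is why EM-style updates dominate emission tomography.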

  18. Spatial correlation-based side information refinement for distributed video coding

    NASA Astrophysics Data System (ADS)

    Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin

    2013-12-01

    Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefitted from significant progresses lately, notably in terms of achievable rate-distortion performances. However, a significant performance gap still remains when compared to prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the video sequence temporal correlation properties during the generation of side information (SI). In fact, the decoder side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed, which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as long as these groups are being decoded, thus providing more accurate SI for the next groups. It is shown that the proposed progressive SIR method leads to significant improvements over the Discover DVC codec as well as other SIR schemes recently introduced in the literature.

  19. Mid-Ventilation Concept for Mobile Pulmonary Tumors: Internal Tumor Trajectory Versus Selective Reconstruction of Four-Dimensional Computed Tomography Frames Based on External Breathing Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guckenberger, Matthias; Wilbert, Juergen; Krieger, Thomas

    2009-06-01

Purpose: To evaluate the accuracy of direct reconstruction of mid-ventilation and peak-phase four-dimensional (4D) computed tomography (CT) frames based on the external breathing signal. Methods and Materials: For 11 patients with 15 pulmonary targets, a respiration-correlated CT study (4D CT) was acquired for treatment planning. After retrospective time-based sorting of raw projection data and reconstruction of eight CT frames equally distributed over the breathing cycle, the mean tumor position (P_mean), the mid-ventilation frame, and breathing motion were evaluated based on the internal tumor trajectory. Analysis of the external breathing signal (pressure sensor around the abdomen) with amplitude-based sorting of projections was performed for direct reconstruction of the mid-ventilation frame and frames at the peak phases of the breathing cycle. Results: On the basis of the eight 4D CT frames equally spaced in time, tumor motion was largest in the craniocaudal direction, with 12 +/- 7 mm on average. Tumor motion between the two frames reconstructed at peak phases was not different in the craniocaudal and anterior-posterior directions but was systematically smaller in the left-right direction by 1 mm on average. The three-dimensional distance between P_mean and the tumor position in the mid-ventilation frame based on the internal tumor trajectory was 1.2 +/- 1 mm. Reconstruction of the mid-ventilation frame at the mean amplitude position of the external breathing signal resulted in tumor positions 2.0 +/- 1.1 mm distant from P_mean. Breathing-induced motion artifacts in mid-ventilation frames caused negligible changes in tumor volume and shape. Conclusions: Direct reconstruction of the mid-ventilation frame and frames at peak phases based on the external breathing signal was reliable. This makes the reconstruction of only three 4D CT frames sufficient for application of the mid-ventilation technique in clinical practice.

  20. The mantle flow field beneath western North America.

    PubMed

    Silver, P G; Holt, W E

    2002-02-08

    Although motions at the surface of tectonic plates are well determined, the accompanying horizontal mantle flow is not. We have combined observations of surface deformation and upper mantle seismic anisotropy to estimate this flow field for western North America. We find that the mantle velocity is 5.5 +/- 1.5 centimeters per year due east in a hot spot reference frame, nearly opposite to the direction of North American plate motion (west-southwest). The flow is only weakly coupled to the motion of the surface plate, producing a small drag force. This flow field is probably due to heterogeneity in mantle density associated with the former Farallon oceanic plate beneath North America.

  1. Blood pool and tissue phase patient motion effects on 82rubidium PET myocardial blood flow quantification.

    PubMed

    Lee, Benjamin C; Moody, Jonathan B; Poitrasson-Rivière, Alexis; Melvin, Amanda C; Weinberg, Richard L; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2018-03-23

Patient motion can lead to misalignment of left ventricular volumes of interest and subsequently inaccurate quantification of myocardial blood flow (MBF) and flow reserve (MFR) from dynamic PET myocardial perfusion images. We aimed to identify the prevalence of patient motion in both blood and tissue phases and analyze the effects of this motion on MBF and MFR estimates. We selected 225 consecutive patients who underwent dynamic stress/rest rubidium-82 chloride (82Rb) PET imaging. Dynamic image series were iteratively reconstructed with 5- to 10-second frame durations over the first 2 minutes for the blood phase and 10 to 80 seconds for the tissue phase. Motion shifts were assessed by 3 physician readers from the dynamic series and analyzed for frequency, magnitude, time, and direction of motion. The effects of this motion isolated in time, direction, and magnitude on global and regional MBF and MFR estimates were evaluated. Flow estimates derived from the motion-corrected images were used as the error references. Mild to moderate motion (5-15 mm) was most prominent in the blood phase in 63% and 44% of the stress and rest studies, respectively. This motion was observed with frequencies of 75% in the septal and inferior directions for stress and 44% in the septal direction for rest. Images with blood phase isolated motion had mean global MBF and MFR errors of 2%-5%. Isolating blood phase motion in the inferior direction resulted in mean MBF and MFR errors of 29%-44% in the RCA territory. Flow errors due to tissue phase isolated motion were within 1%. Patient motion was most prevalent in the blood phase, and MBF and MFR errors increased most substantially with motion in the inferior direction. Motion correction focused on these motions is needed to reduce MBF and MFR errors.

  2. The Influence of the Terrestrial Reference Frame on Studies of Sea Level Change

    NASA Astrophysics Data System (ADS)

    Nerem, R. S.; Bar-Sever, Y. E.; Haines, B. J.; Desai, S.; Heflin, M. B.

    2015-12-01

    The terrestrial reference frame (TRF) provides the foundation for the accurate monitoring of sea level using both ground-based (tide gauges) and space-based (satellite altimetry) techniques. For the latter, tide gauges are also used to monitor drifts in the satellite instruments over time. The accuracy of the terrestrial reference frame (TRF) is thus a critical component for both types of sea level measurements. The TRF is central to the formation of geocentric sea-surface height (SSH) measurements from satellite altimeter data. The computed satellite orbits are linked to a particular TRF via the assumed locations of the ground-based tracking systems. The manner in which TRF errors are expressed in the orbit solution (and thus SSH) is not straightforward, and depends on the models of the forces underlying the satellite's motion. We discuss this relationship, and provide examples of the systematic TRF-induced errors in the altimeter derived sea-level record. The TRF is also crucial to the interpretation of tide-gauge measurements, as it enables the separation of vertical land motion from volumetric changes in the water level. TRF errors affect tide gauge measurements through GNSS estimates of the vertical land motion at each tide gauge. This talk will discuss the current accuracy of the TRF and how errors in the TRF impact both satellite altimeter and tide gauge sea level measurements. We will also discuss simulations of how the proposed Geodetic Reference Antenna in SPace (GRASP) satellite mission could reduce these errors and revolutionize how reference frames are computed in general.

  3. Space geodesy validation of the global lithospheric flow

    NASA Astrophysics Data System (ADS)

    Crespi, M.; Cuffaro, M.; Doglioni, C.; Giannone, F.; Riguzzi, F.

    2007-02-01

Space geodesy data are used to verify whether plates move chaotically or rather follow a sort of tectonic mainstream. While independent lines of geological evidence support the existence of a global ordered flow of plate motions that is westerly polarized, the Terrestrial Reference Frame (TRF) presents limitations in describing absolute plate motions relative to the mantle. For these reasons we jointly estimated a new plate motions model and three different solutions of net lithospheric rotation. Considering the six major plate boundaries and variable source depths of the main Pacific hotspots, we adapted the TRF plate kinematics by global space geodesy to absolute plate motions models with respect to the mantle. All three reconstructions confirm (i) the tectonic mainstream and (ii) the net rotation of the lithosphere. We still do not know the precise trend of this tectonic flow and the velocity of the differential rotation. However, our results show that assuming faster Pacific motions, as the asthenospheric source of the hotspots would allow, the best lithospheric net rotation estimate is 13.4 +/- 0.7 cm yr-1. This superfast solution seems in contradiction with present knowledge on the lithosphere decoupling, but it matches remarkably better with the geological constraints than those retrieved with slower Pacific motion and net rotation estimates. Assuming faster Pacific motion, it is shown that all plates move orderly 'westward' along the tectonic mainstream at different velocities and the equator of the lithospheric net rotation lies inside the corresponding tectonic mainstream latitude band (~ +/-7°), defined by the 1σ confidence intervals.

  4. Automated multiple target detection and tracking in UAV videos

    NASA Astrophysics Data System (ADS)

    Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie

    2010-04-01

    In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
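The background-subtraction step described above can be sketched with a running-average background model: each pixel far from the slowly updated background is marked as foreground motion. The update rate and threshold below are illustrative assumptions, not values from the paper (which also stabilizes the video by projective registration first).

```python
import numpy as np

def detect_motion(frames, alpha=0.1, thresh=25):
    """Running-average background subtraction: pixels far from the
    slowly-updated background model are marked as foreground motion."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        mask = np.abs(f - bg) > thresh      # foreground = large deviation
        bg = (1 - alpha) * bg + alpha * f   # slow background update
        masks.append(mask)
    return masks
```

Each foreground mask would then be grouped into blobs and handed to the per-target Kalman trackers.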

  5. High-speed cinematography of muscle contraction.

    PubMed

    HAUPT, R E; WALL, D M

    1962-07-13

    Motion pictures of the "twitch" of an excised frog gastrocnemius muscle taken at rates of 6000 frames per second provide a means of very accurately timing the phases. The extreme "slow motion" reveals surface phenomena not observable by other techniques. Evidence of "active relaxation" is suggested by results of frame-by-frame analysis.

  6. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

A validation facility in use at the NASA Ames Research Center is described, aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera six-degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  7. Parallel search for conjunctions with stimuli in apparent motion.

    PubMed

    Casco, C; Ganis, G

    1999-01-01

    A series of experiments was conducted to determine whether apparent motion tends to follow the similarity rule (i.e. is attribute-specific) and to investigate the underlying mechanism. Stimulus duration thresholds were measured during a two-alternative forced-choice task in which observers detected either the location or the motion direction of target groups defined by the conjunction of size and orientation. Target element positions were randomly chosen within a nominally defined rectangular subregion of the display (target region). The target region was presented either statically (followed by a 250 ms duration mask) or dynamically, displaced by a small distance (18 min of arc) from frame to frame. In the motion display, the position of both target and background elements was changed randomly from frame to frame within the respective areas to abolish spatial correspondence over time. Stimulus duration thresholds were lower in the motion than in the static task, indicating that target detection in the dynamic condition does not rely on the explicit identification of target elements in each static frame. Increasing the distractor-to-target ratio was found to reduce detectability in the static, but not in the motion task. This indicates that the perceptual segregation of the target is effortless and parallel with motion but not with static displays. The pattern of results holds regardless of the task or search paradigm employed. The detectability in the motion condition can be improved by increasing the number of frames and/or by reducing the width of the target area. Furthermore, parallel search in the dynamic condition can be conducted with both short-range and long-range motion stimuli. Finally, apparent motion of conjunctions is insufficient on its own to support location decision and is disrupted by random visual noise. 
Overall, these findings show that (i) the mechanism underlying apparent motion is attribute-specific; (ii) the motion system mediates temporal integration of feature conjunctions before they are identified by the static system; and (iii) target detectability in these stimuli relies upon a nonattentive, cooperative, directionally selective motion mechanism that responds to high-level attributes (conjunction of size and orientation).

  8. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to wear markers in a known pattern on a rigid tool attached to the head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This speeds up reconstruction by eliminating unnecessary subdivision of frames.

  9. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
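The observer-motion estimation via image registration can be illustrated with phase correlation, a standard FFT-based way to recover a global integer-pixel translation between frames; the paper's registration and Kalman filtering pipeline is more involved, so treat this as a sketch under assumed names.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer-pixel global translation (dy, dx) such
    that b is approximately a cyclically shifted by (dy, dx), via the
    normalized cross-power spectrum."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = B * np.conj(A)
    R /= np.maximum(np.abs(R), 1e-12)       # keep phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Subtracting the recovered global shift from each frame leaves only the residual target motion, which is what the per-target tracking phase operates on.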

  10. Crab Pulsar Astrometry and Spin-Velocity Alignment

    NASA Astrophysics Data System (ADS)

    Romani, Roger W.; Ng, C.-Y.

    2009-01-01

The proper motion of the Crab pulsar and its orientation with respect to the PWN symmetry axis are interesting for testing models of neutron star birth kicks. A number of authors have measured the Crab's motion using archival HST images. The most detailed study, by Kaplan et al. (2008), compares a wide range of WFPC and ACS images to obtain an accurate proper motion measurement. However, they concluded that a kick comparison is fundamentally limited by the uncertainty in the progenitor's motion. Here we report on new HST images matched to 1994 and 1995 data frames, providing an independent proper motion measurement with a more than 13-year time base and minimal systematic errors. The new observations also allow us to estimate the systematic errors due to CCD saturation. Our preliminary result indicates a proper motion consistent with Kaplan et al.'s finding. We discuss a model for the progenitor's motion, suggesting that the pulsar spin is much closer to alignment than previously suspected.

  11. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions, including 2-minute motion trials (2MT) and 12-minute multiple dynamic phase motion trials (12MDP). Absolute accuracy was assessed by comparing the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial Inertial frame estimation for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also, and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the sustained conditions of motion of the Gimbal table suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. 
However, improvements in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their full potential in capturing clinical outcomes. PMID:25811838

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, W; Hrycushko, B; Yan, Y

    Purpose: Traditional external beam radiotherapy for cervical cancer requires setup by external skin marks. In order to improve treatment accuracy and reduce planning margins for more conformal therapy, it is essential to monitor tumor positions interfractionally and intrafractionally. We demonstrate the feasibility of monitoring cervical tumor motion online using EPID imaging from the Beam's Eye View. Methods: Prior to treatment, 1–2 cylindrical radio-opaque markers were implanted into the inferior aspect of the cervix tumor. During external beam treatments on a Varian 2100C with 4-field 3D plans, treatment beam images were acquired continuously by an EPID. A Matlab program was developed to locate the internal markers on MV images. Based on the 2D marker positions obtained from different treatment fields, their 3D positions were estimated for every treatment fraction. Results: 398 images were acquired during different treatment fractions of three cervical cancer patients. Markers were successfully located on every image frame at an analysis speed of about 1 second per frame. Intrafraction motion was evaluated by comparing marker positions relative to the position on the first image frame. The maximum intrafraction motion of the markers was 1.6 mm. Interfraction motion was evaluated by comparing 3D marker positions at different treatment fractions. The maximum interfraction motion was up to 10 mm. Careful comparison showed that this was due to patient positioning, since the bony structures shifted with the markers. Conclusion: This method provides a cost-free and simple solution for online tumor tracking in cervical cancer treatment, since it is feasible to acquire and export EPID images with fast analysis in real time. This method does not need any extra equipment or deliver extra dose to patients. 
The online tumor motion information will be very useful to reduce planning margins and improve treatment accuracy, which is particularly important for SBRT treatment with long delivery time.
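
The 3D estimation step from 2D projections in different treatment fields can be illustrated under an idealized parallel-beam assumption (a hypothetical sketch; the actual method must account for beam divergence and the real gantry geometry):

```python
import numpy as np

# Hypothetical sketch: recover a marker's 3D position from its 2D
# positions in two orthogonal treatment fields, assuming an idealized
# parallel-beam geometry. An AP field (gantry 0°) sees (x, z); a lateral
# field (gantry 90°) sees (y, z); the shared z coordinate is averaged
# to reduce detection noise.
def marker_3d(ap_uv, lat_uv):
    x = ap_uv[0]
    y = lat_uv[0]
    z = 0.5 * (ap_uv[1] + lat_uv[1])
    return np.array([x, y, z])

p = marker_3d(ap_uv=(2.0, -1.2), lat_uv=(0.5, -1.0))
print(p)
```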

  13. TH-EF-BRB-08: Robotic Motion Compensation for Radiation Therapy: A 6DOF Phantom Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belcher, AH; Liu, X; Wiersma, R

    Purpose: The high accuracy of frame-based stereotactic radiosurgery (SRS), which uses a rigid frame fixed to the patient’s skull, is offset by the potential drawbacks of poor patient compliance and clinical workflow restrictions. Recent research into frameless SRS has so far resulted in reduced accuracy. In this study, we investigate the use of a novel 6 degree-of-freedom (6DOF) robotic head motion cancellation system that continuously detects and compensates for patient head motions during an SRS delivery. This approach has the potential to reduce invasiveness while still achieving accuracies better than or equal to traditional frame-based SRS. Methods: A 6DOF parallel kinematics robotic stage was constructed and controlled using an inverse kinematics-based motion compensation algorithm. A 6DOF stereoscopic infrared (IR) marker tracking system was used to monitor real-time motions at sub-millimeter and sub-degree levels. A novel 6DOF calibration technique was first applied to properly orient the camera coordinate frame to match that of the LINAC and robotic control frames. Simulated head motions were measured by the system, and the robotic stage responded to these 6DOF motions automatically, returning the reflective marker coordinate frame to its original position. Results: After the motions were introduced to the system in the phantom-based study, the robotic stage automatically and rapidly returned the phantom to the LINAC isocenter. When errors exceeded the compensation lower threshold of 0.25 mm or 0.25 degrees, the system registered the 6DOF error and generated a cancellation trajectory. The system responded in less than 0.5 seconds and returned all axes to less than 0.1 mm and 0.1 degrees after the 6DOF compensation was performed. Conclusion: The 6DOF real-time motion cancellation system was found to be effective at compensating for translational and rotational motions to current SRS requirements. 
This system can improve frameless SRS by automatically returning patients to isocenter with high 6DOF accuracy.
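
The compensation-trigger logic described in the Results can be sketched as follows (names and structure are illustrative assumptions, not the authors' control code):

```python
# Illustrative sketch of the compensation trigger: a correction is
# commanded only when some 6DOF error component exceeds the lower
# threshold (0.25 mm translation / 0.25 degree rotation).
THRESH_MM, THRESH_DEG = 0.25, 0.25

def needs_compensation(error6):
    """error6 = (dx, dy, dz, rx, ry, rz) in mm and degrees."""
    trans, rot = error6[:3], error6[3:]
    return any(abs(t) > THRESH_MM for t in trans) or \
           any(abs(r) > THRESH_DEG for r in rot)

print(needs_compensation((0.1, 0.2, 0.0, 0.1, 0.0, 0.0)))  # False
print(needs_compensation((0.3, 0.0, 0.0, 0.0, 0.0, 0.0)))  # True
```

When the trigger fires, the system would generate a cancellation trajectory returning the marker frame to isocenter; that inverse-kinematics step is not sketched here.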

  14. Uncertainty of the 20th century sea-level rise due to vertical land motion errors

    NASA Astrophysics Data System (ADS)

    Santamaría-Gómez, Alvaro; Gravelle, Médéric; Dangendorf, Sönke; Marcos, Marta; Spada, Giorgio; Wöppelmann, Guy

    2017-09-01

    Assessing the vertical land motion (VLM) at tide gauges (TG) is crucial to understanding global and regional mean sea-level changes (SLC) over the last century. However, estimating VLM with accuracy better than a few tenths of a millimeter per year is not a trivial undertaking and many factors, including the reference frame uncertainty, must be considered. Using a novel reconstruction approach and updated geodetic VLM corrections, we found the terrestrial reference frame and the estimated VLM uncertainty may contribute to the global SLC rate error by ±0.2 mm yr⁻¹. In addition, a spurious global SLC acceleration may be introduced up to ±4.8 × 10⁻³ mm yr⁻². Regional SLC rate and acceleration errors may be inflated by a factor of 3 compared to the global. The difference of VLM from two independent Glacio-Isostatic Adjustment models introduces global SLC rate and acceleration biases at the level of ±0.1 mm yr⁻¹ and 2.8 × 10⁻³ mm yr⁻², increasing up to 0.5 mm yr⁻¹ and 9 × 10⁻³ mm yr⁻² for the regional SLC. Errors in VLM corrections need to be budgeted when considering past and future SLC scenarios.

  15. Optimal integer resolution for attitude determination using global positioning system signals

    NASA Technical Reports Server (NTRS)

    Crassidis, John L.; Markley, F. Landis; Lightsey, E. Glenn

    1998-01-01

    In this paper, a new motion-based algorithm for GPS integer ambiguity resolution is derived. The first step of this algorithm converts the reference sightline vectors into body frame vectors. This is accomplished by an optimal vectorized transformation of the phase difference measurements. The result of this transformation leads to the conversion of the integer ambiguities to vectorized biases. This essentially converts the problem to the familiar magnetometer-bias determination problem, for which an optimal and efficient solution exists. Also, the formulation in this paper is re-derived to provide a sequential estimate, so that a suitable stopping condition can be found during the vehicle motion. The advantages of the new algorithm include: it does not require an a-priori estimate of the vehicle's attitude; it provides an inherent integrity check using a covariance-type expression; and it can sequentially estimate the ambiguities during the vehicle motion. The only disadvantage of the new algorithm is that it requires at least three non-coplanar baselines. The performance of the new algorithm is tested on a dynamic hardware simulator.

  16. Relative effects of posture and activity on human height estimation from surveillance footage.

    PubMed

    Ramstrand, Nerrolyn; Ramstrand, Simon; Brolund, Per; Norell, Kristin; Bergström, Peter

    2011-10-10

    Height estimations based on security camera footage are often requested by law enforcement authorities. While valid and reliable techniques have been established to determine vertical distances from video frames, there is a discrepancy between a person's true static height and their height as measured when assuming different postures or when in motion (e.g., walking). The aim of the research presented in this report was to accurately record the height of subjects as they performed a variety of activities typically observed in security camera footage and compare results to height recorded using a standard height measuring device. Forty-six able bodied adults participated in this study and were recorded using a 3D motion analysis system while performing eight different tasks. Height measurements captured using the 3D motion analysis system were compared to static height measurements in order to determine relative differences. It is anticipated that results presented in this report can be used by forensic image analysis experts as a basis for correcting height estimations of people captured on surveillance footage. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  17. Efficient region-based approach for blotch detection in archived video using texture information

    NASA Astrophysics Data System (ADS)

    Yous, Hamza; Serir, Amina

    2017-03-01

    We propose a method for blotch detection in archived videos by modeling their spatiotemporal properties. We introduce an adaptive spatiotemporal segmentation to extract candidate regions that can be classified as blotches. Then, the similarity between the preselected regions and their corresponding motion-compensated regions in the adjacent frames is assessed by means of motion trajectory estimation and textural information analysis. Perceived ground truth based on just noticeable contrast is employed for the evaluation of our approach against the state-of-the-art, and the reported results show a better performance for our approach.

  18. An interdimensional correlation framework for real-time estimation of six degree of freedom target motion using a single x-ray imager during radiotherapy

    NASA Astrophysics Data System (ADS)

    Nguyen, D. T.; Bertholet, J.; Kim, J.-H.; O'Brien, R.; Booth, J. T.; Poulsen, P. R.; Keall, P. J.

    2018-01-01

    Increasing evidence suggests that intrafraction tumour motion monitoring needs to include both 3D translations and 3D rotations. Presently, methods to estimate the rotation motion require the 3D translation of the target to be known first. However, ideally, translation and rotation should be estimated concurrently. We present the first method to directly estimate six-degree-of-freedom (6DoF) motion from the target’s projection on a single rotating x-ray imager in real-time. This novel method is based on the linear correlations between the superior-inferior translations and the motion in the other five degrees-of-freedom. The accuracy of the method was evaluated in silico with 81 liver tumour motion traces from 19 patients with three implanted markers. The ground-truth motion was estimated using the current gold standard method, where each marker’s 3D position was first estimated using a Gaussian probability method, and the 6DoF motion was then estimated from the 3D positions using an iterative method. The 3D position of each marker was projected onto a gantry-mounted imager with an imaging rate of 11 Hz. After an initial 110° gantry rotation (200 images), a correlation model between the superior-inferior translations and the five other DoFs was built using a least squares method. The correlation model was then updated after each subsequent frame to estimate 6DoF motion in real-time. The proposed algorithm had an accuracy (± precision) of -0.03 ± 0.32 mm, -0.01 ± 0.13 mm and 0.03 ± 0.52 mm for translations in the left-right (LR), superior-inferior (SI) and anterior-posterior (AP) directions, respectively; and 0.07 ± 1.18°, 0.07 ± 1.00° and 0.06 ± 1.32° for rotations around the LR, SI and AP axes, respectively, on the dataset. The first method to directly estimate real-time 6DoF target motion from segmented marker positions on a 2D imager was devised. 
The algorithm was evaluated using 81 motion traces from 19 liver patients and was found to have sub-mm and sub-degree accuracy.
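
The correlation-model idea — fitting each DoF as a linear function of the SI translation by least squares — can be sketched with synthetic data (illustrative only; the actual model is built from ~200 projection images and updated after every frame):

```python
import numpy as np

# Sketch of the interdimensional correlation model: fit DoF_i ≈ a*SI + b
# by least squares on past samples, then predict that DoF from the
# directly observed SI translation. Synthetic motion data; the slope 0.4
# and intercept 0.2 are arbitrary illustration values.
rng = np.random.default_rng(0)
si = rng.uniform(-5, 5, 200)                            # SI translation (mm)
ap = 0.4 * si + 0.2 + 0.05 * rng.standard_normal(200)   # correlated AP motion

A = np.column_stack([si, np.ones_like(si)])             # design matrix [SI, 1]
coef, *_ = np.linalg.lstsq(A, ap, rcond=None)
a, b = coef

ap_pred = a * 3.0 + b                                   # predict AP for SI = 3 mm
print(round(a, 2), round(b, 2))                         # ≈ 0.4 0.2
```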

  19. Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.

    PubMed

    Zheng, Ran; Yao, Chuanwei; Jin, Hai; Zhu, Lei; Zhang, Qin; Deng, Wei

    2015-01-01

    Surveillance video service (SVS) is one of the most important services provided in a smart city. Efficient surveillance video analysis techniques are essential to the utilization of SVS. Key frame extraction is a simple yet effective technique to achieve this goal. In surveillance video applications, key frames are typically used to summarize important video content, so it is essential to extract them accurately and efficiently. A novel approach is proposed to extract key frames from traffic surveillance videos based on GPUs (graphics processing units) to ensure high efficiency and accuracy. For the determination of key frames, motion is a particularly salient feature for presenting actions or events, especially in surveillance videos. The motion feature is extracted on the GPU to reduce running time. It is also smoothed to reduce noise, and the frames with local maxima of motion information are selected as the final key frames. The experimental results show that this approach can extract key frames more accurately and efficiently than several other methods.
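
The smoothing and local-maxima selection step can be sketched in a few lines (a CPU-only illustration; the paper extracts the motion feature on the GPU, and the window size here is an assumption):

```python
import numpy as np

# Minimal sketch of the key-frame selection step: smooth a per-frame
# motion magnitude signal with a moving average, then keep frames at
# local maxima of the smoothed signal.
def key_frames(motion, win=3):
    kernel = np.ones(win) / win
    smooth = np.convolve(motion, kernel, mode="same")   # moving-average smoothing
    return [i for i in range(1, len(smooth) - 1)
            if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]

motion = np.array([0.1, 0.3, 0.9, 0.2, 0.1, 0.8, 1.0, 0.4, 0.1])
print(key_frames(motion))  # -> [2, 6]
```

The two selected indices correspond to the two bursts of motion in the synthetic signal, which is the behavior the approach relies on to summarize events.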

  20. Multimodal integration in rostral fastigial nucleus provides an estimate of body movement

    PubMed Central

    Brooks, Jessica X.; Cullen, Kathleen E.

    2012-01-01

    The ability to accurately control posture and perceive self motion and spatial orientation requires knowledge of both the motion of the head and body. However, while the vestibular sensors and nuclei directly encode head motion, no sensors directly encode body motion. Instead, the convergence of vestibular and neck proprioceptive inputs during self-motion is generally believed to underlie the ability to compute body motion. Here, we provide evidence that the brain explicitly computes an internal estimate of body motion at the level of single cerebellar neurons. Neuronal responses were recorded from the rostral fastigial nucleus, the most medial of the deep cerebellar nuclei, during whole-body, body-under-head, and head-on-body rotations. We found that approximately half of the neurons encoded the motion of the body-in-space, while the other half encoded the motion of the head-in-space in a manner similar to neurons in the vestibular nuclei. Notably, neurons encoding body motion responded to both vestibular and proprioceptive stimulation (accordingly termed bimodal neurons). In contrast, neurons encoding head motion were only sensitive to vestibular inputs (accordingly termed unimodal neurons). Comparison of the proprioceptive and vestibular responses of bimodal neurons further revealed similar tuning in response to changes in head-on-body position. We propose that the similarity in nonlinear processing of vestibular and proprioceptive signals underlies the accurate computation of body motion. Furthermore, the same neurons that encode body motion (i.e., bimodal neurons) most likely encode vestibular signals in a body referenced coordinate frame, since the integration of proprioceptive and vestibular information is required for both computations. PMID:19710303

  1. Impact of quasar proper motions on the alignment between the International Celestial Reference Frame and the Gaia reference frame

    NASA Astrophysics Data System (ADS)

    Liu, J.-C.; Malkin, Z.; Zhu, Z.

    2018-03-01

    The International Celestial Reference Frame (ICRF) is currently realized by very long baseline interferometry (VLBI) observations of extragalactic sources under the zero proper motion assumption, while Gaia will observe proper motions of these distant and faint objects to an accuracy of tens of microarcseconds per year. This paper investigates the difference between VLBI and Gaia quasar proper motions and aims to understand the impact of quasar proper motions on the alignment of the ICRF and the Gaia reference frame. We use the latest time series data of source coordinates from the International VLBI Service analysis centres operated at Goddard Space Flight Center (GSF2017) and Paris Observatory (OPA2017), as well as the Gaia auxiliary quasar solution containing 2191 high-probability optical counterparts of the ICRF2 sources. The linear proper motions in right ascension and declination of VLBI sources are derived by least-squares fits, while the proper motions for Gaia sources are simulated taking into account the acceleration of the Solar system barycentre and realistic uncertainties depending on the source brightness. The individual and global features of source proper motions in the GSF2017 and OPA2017 VLBI data are found to be inconsistent, which may result from differences in VLBI observations, data reduction and analysis. A comparison of the VLBI and Gaia proper motions shows that the accuracies of the components of rotation and glide between the two systems are 2–4 μas yr⁻¹ based on about 600 common sources. For the future alignment of the ICRF and Gaia reference frames at different wavelengths, the proper motions of quasars must necessarily be considered.
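
Deriving a linear proper motion from a coordinate time series by least squares can be sketched as follows (synthetic epochs and offsets, not the IVS analysis setup; the 12 μas/yr slope is an arbitrary illustration value):

```python
import numpy as np

# Sketch of a least-squares linear proper motion fit to a coordinate
# time series: offsets (μas) vs epoch (yr), with small synthetic
# residuals standing in for measurement noise.
t = np.array([2000.0, 2003.5, 2007.0, 2010.5, 2014.0, 2017.5])
dra = 12.0 * (t - t[0]) + np.array([3.0, -2.0, 1.0, -1.0, 2.0, -3.0])

# Fit dra = mu * (t - t0) + c; polyfit returns slope first.
mu, c = np.polyfit(t - t[0], dra, 1)
print(round(mu, 1))  # proper motion in μas/yr, ≈ 12
```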

  2. Whisking mechanics and active sensing

    PubMed Central

    Bush, Nicholas E; Solla, Sara A

    2017-01-01

    We describe recent advances in quantifying the three-dimensional (3D) geometry and mechanics of whisking. Careful delineation of relevant 3D reference frames reveals important geometric and mechanical distinctions between the localization problem (‘where’ is an object) and the feature extraction problem (‘what’ is an object). Head-centered and resting-whisker reference frames lend themselves to quantifying temporal and kinematic cues used for object localization. The whisking-centered reference frame lends itself to quantifying the contact mechanics likely associated with feature extraction. We offer the ‘windowed sampling’ hypothesis for active sensing: that rats can estimate an object’s spatial features by integrating mechanical information across whiskers during brief (25–60 ms) windows of ‘haptic enclosure’ with the whiskers, a motion that resembles a hand grasp. PMID:27632212

  3. Whisking mechanics and active sensing.

    PubMed

    Bush, Nicholas E; Solla, Sara A; Hartmann, Mitra Jz

    2016-10-01

    We describe recent advances in quantifying the three-dimensional (3D) geometry and mechanics of whisking. Careful delineation of relevant 3D reference frames reveals important geometric and mechanical distinctions between the localization problem ('where' is an object) and the feature extraction problem ('what' is an object). Head-centered and resting-whisker reference frames lend themselves to quantifying temporal and kinematic cues used for object localization. The whisking-centered reference frame lends itself to quantifying the contact mechanics likely associated with feature extraction. We offer the 'windowed sampling' hypothesis for active sensing: that rats can estimate an object's spatial features by integrating mechanical information across whiskers during brief (25–60 ms) windows of 'haptic enclosure' with the whiskers, a motion that resembles a hand grasp. Copyright © 2016. Published by Elsevier Ltd.

  4. Object Scene Flow

    NASA Astrophysics Data System (ADS)

    Menze, Moritz; Heipke, Christian; Geiger, Andreas

    2018-06-01

    This work investigates the estimation of dense three-dimensional motion fields, commonly referred to as scene flow. While great progress has been made in recent years, large displacements and adverse imaging conditions as observed in natural outdoor environments are still very challenging for current approaches to reconstruction and motion estimation. In this paper, we propose a unified random field model which reasons jointly about 3D scene flow as well as the location, shape and motion of vehicles in the observed scene. We formulate the problem as the task of decomposing the scene into a small number of rigidly moving objects sharing the same motion parameters. Thus, our formulation effectively introduces long-range spatial dependencies which commonly employed local rigidity priors are lacking. Our inference algorithm then estimates the association of image segments and object hypotheses together with their three-dimensional shape and motion. We demonstrate the potential of the proposed approach by introducing a novel challenging scene flow benchmark which allows for a thorough comparison of the proposed scene flow approach with respect to various baseline models. In contrast to previous benchmarks, our evaluation is the first to provide stereo and optical flow ground truth for dynamic real-world urban scenes at large scale. Our experiments reveal that rigid motion segmentation can be utilized as an effective regularizer for the scene flow problem, improving upon existing two-frame scene flow methods. At the same time, our method yields plausible object segmentations without requiring an explicitly trained recognition model for a specific object class.

  5. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging.

    PubMed

    Jiang, J; Hall, T J

    2007-07-07

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast, which provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is column-based (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to row-based (a row is defined by a line of data perpendicular to the ultrasound beam direction). Displacement estimates from adjacent columns then provide good guidance for motion tracking in a significantly reduced search region, lowering computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms and in vivo tissue data, suggest that high contrast strain images can be consistently obtained at frame rates (10 frames s⁻¹) that exceed those of our previous methods.
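
The guided-search idea — seeding each column's search with its neighbor's displacement estimate — can be illustrated in 1D (a toy sketch using normalized cross-correlation; the actual algorithm operates on 2D RF echo data, and all names and parameters here are assumptions):

```python
import numpy as np

# Toy illustration of guided search in column-based speckle tracking:
# the displacement found for an adjacent column seeds a small search
# window for the current column, instead of an exhaustive search.
def track(ref, cur, block, guess, radius=2):
    """Find the shift of `cur` that best matches ref[block:block+8] near `guess`."""
    b = ref[block:block + 8]
    best, best_score = guess, -np.inf
    for d in range(guess - radius, guess + radius + 1):
        c = cur[block + d:block + d + 8]
        # Pearson correlation between the reference block and the candidate
        score = np.dot(b - b.mean(), c - c.mean()) / (np.std(b) * np.std(c) * len(b))
        if score > best_score:
            best, best_score = d, score
    return best

rng = np.random.default_rng(1)
ref = rng.standard_normal(64)          # speckle-like reference signal
cur = np.roll(ref, 3)                  # post-deformation signal, shifted by 3
print(track(ref, cur, block=20, guess=2))  # -> 3
```

Because each column searches only a few candidate shifts around its neighbor's estimate, columns can be processed in parallel waves propagating outward from the ROI center.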

  6. Gravity in the Brain as a Reference for Space and Time Perception.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka

    2015-01-01

    Moving and interacting with the environment require a reference for orientation and a scale for calibration in space and time. There is a wide variety of environmental clues and calibrated frames at different locales, but the reference of gravity is ubiquitous on Earth. The pull of gravity on static objects provides a plummet which, together with the horizontal plane, defines a three-dimensional Cartesian frame for visual images. On the other hand, the gravitational acceleration of falling objects can provide a time-stamp on events, because the motion duration of an object accelerated by gravity over a given path is fixed. Indeed, since ancient times, man has been using plumb bobs for spatial surveying, and water clocks or pendulum clocks for time keeping. Here we review behavioral evidence in favor of the hypothesis that the brain is endowed with mechanisms that exploit the presence of gravity to estimate the spatial orientation and the passage of time. Several visual and non-visual (vestibular, haptic, visceral) cues are merged to estimate the orientation of the visual vertical. However, the relative weight of each cue is not fixed, but depends on the specific task. Next, we show that an internal model of the effects of gravity is combined with multisensory signals to time the interception of falling objects, to time the passage through spatial landmarks during virtual navigation, to assess the duration of a gravitational motion, and to judge the naturalness of periodic motion under gravity.

  7. Frame rate required for speckle tracking echocardiography: A quantitative clinical study with open-source, vendor-independent software.

    PubMed

    Negoita, Madalina; Zolgharni, Massoud; Dadkho, Elham; Pernigo, Matteo; Mielewczik, Michael; Cole, Graham D; Dhutia, Niti M; Francis, Darrel P

    2016-09-01

    To determine the optimal frame rate at which reliable heart wall velocities can be assessed by speckle tracking. Assessing left ventricular function with speckle tracking is useful in patient diagnosis but requires a temporal resolution that can follow myocardial motion. In this study we investigated the effect of different frame rates on the accuracy of speckle tracking results, highlighting the temporal resolution where reliable results can be obtained. 27 patients were scanned at two different frame rates at their resting heart rate. From all acquired loops, lower temporal resolution image sequences were generated by dropping frames, decreasing the frame rate by up to 10-fold. Tissue velocities were estimated by automated speckle tracking. Above 40 frames/s the peak velocity was reliably measured. When the frame rate was lower, the inter-frame interval containing the instant of highest velocity also contained lower velocities, and therefore the average velocity in that interval was an underestimate of the clinically desired instantaneous maximum velocity. The higher the frame rate, the more accurately maximum velocities are identified by speckle tracking; above 40 frames/s, however, further increases in frame rate yield little additional gain in measured peak velocity. We provide in an online supplement the vendor-independent software we used for automatic speckle-tracked velocity assessment to help others working in this field. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
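
The underestimation mechanism — averaging over an inter-frame interval that contains the brief velocity peak — can be demonstrated numerically (a synthetic Gaussian velocity pulse, not patient data; peak height and width are arbitrary illustration values):

```python
import numpy as np

# Why low frame rates underestimate peak velocity: each frame reports
# the mean velocity over its inter-frame interval, which smears a brief
# peak. A Gaussian pulse (peak 10 cm/s, sigma 30 ms) stands in for a
# myocardial velocity trace.
def measured_peak(frame_rate_hz, t_peak=0.2, sigma=0.03, v_max=10.0):
    dt = 1.0 / frame_rate_hz
    means = []
    for t0 in np.arange(0.0, 0.4, dt):
        ts = np.linspace(t0, t0 + dt, 100)              # dense sub-sampling
        v = v_max * np.exp(-((ts - t_peak) ** 2) / (2 * sigma ** 2))
        means.append(v.mean())                          # interval-averaged velocity
    return max(means)

for fps in (80, 40, 10):
    print(fps, round(measured_peak(fps), 2))
```

The measured peak falls well short of the true 10 cm/s at 10 frames/s but approaches it at higher rates, mirroring the trend reported above.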

  8. Probe Oscillation Shear Elastography (PROSE): A High Frame-Rate Method for Two-Dimensional Ultrasound Shear Wave Elastography.

    PubMed

    Mellema, Daniel C; Song, Pengfei; Kinnick, Randall R; Urban, Matthew W; Greenleaf, James F; Manduca, Armando; Chen, Shigao

    2016-09-01

    Ultrasound shear wave elastography (SWE) utilizes the propagation of induced shear waves to characterize the shear modulus of soft tissue. Many methods rely on an acoustic radiation force (ARF) "push beam" to generate shear waves. However, specialized hardware is required to generate the push beams, and the thermal stress that is placed upon the ultrasound system, transducer, and tissue by the push beams currently limits the frame-rate to about 1 Hz. These constraints have limited the implementation of ARF to high-end clinical systems. This paper presents Probe Oscillation Shear Elastography (PROSE) as an alternative method to measure tissue elasticity. PROSE generates shear waves using a harmonic mechanical vibration of an ultrasound transducer, while simultaneously detecting motion with the same transducer under pulse-echo mode. Motion of the transducer during detection produces a "strain-like" compression artifact that is coupled with the observed shear waves. A novel symmetric sampling scheme is proposed such that pulse-echo detection events are acquired when the ultrasound transducer returns to the same physical position, allowing the shear waves to be decoupled from the compression artifact. Full field-of-view (FOV) two-dimensional (2D) shear wave speed images were obtained by applying a local frequency estimation (LFE) technique, capable of generating a 2D map from a single frame of shear wave motion. The shear wave imaging frame rate of PROSE is comparable to the vibration frequency, which can be an order of magnitude higher than ARF based techniques. PROSE was able to produce smooth and accurate shear wave images from three homogeneous phantoms with different moduli, with an effective frame rate of 300 Hz. 
An inclusion phantom study showed that increased vibration frequencies improved the accuracy of inclusion imaging, and allowed targets as small as 6.5 mm to be resolved with good contrast (contrast-to-noise ratio ≥ 19 dB) between the target and background.

  9. Probe Oscillation Shear Elastography (PROSE): A High Frame-Rate Method for Two-Dimensional Ultrasound Shear Wave Elastography

    PubMed Central

    Mellema, Daniel C.; Song, Pengfei; Kinnick, Randall R.; Urban, Matthew W.; Greenleaf, James F.; Manduca, Armando; Chen, Shigao

    2017-01-01

    Ultrasound shear wave elastography (SWE) utilizes the propagation of induced shear waves to characterize the shear modulus of soft tissue. Many methods rely on an acoustic radiation force (ARF) “push beam” to generate shear waves. However, specialized hardware is required to generate the push beams, and the thermal stress that is placed upon the ultrasound system, transducer, and tissue by the push beams currently limits the frame-rate to about 1 Hz. These constraints have limited the implementation of ARF to high-end clinical systems. This paper presents Probe Oscillation Shear Elastography (PROSE) as an alternative method to measure tissue elasticity. PROSE generates shear waves using a harmonic mechanical vibration of an ultrasound transducer, while simultaneously detecting motion with the same transducer under pulse-echo mode. Motion of the transducer during detection produces a “strain-like” compression artifact that is coupled with the observed shear waves. A novel symmetric sampling scheme is proposed such that pulse-echo detection events are acquired when the ultrasound transducer returns to the same physical position, allowing the shear waves to be decoupled from the compression artifact. Full field-of-view (FOV) two-dimensional (2D) shear wave speed images were obtained by applying a local frequency estimation (LFE) technique, capable of generating a 2D map from a single frame of shear wave motion. The shear wave imaging frame rate of PROSE is comparable to the vibration frequency, which can be an order of magnitude higher than ARF based techniques. PROSE was able to produce smooth and accurate shear wave images from three homogeneous phantoms with different moduli, with an effective frame rate of 300 Hz. 
An inclusion phantom study showed that increased vibration frequencies improved the accuracy of inclusion imaging, and allowed targets as small as 6.5 mm to be resolved with good contrast (contrast-to-noise ratio ≥19 dB) between the target and background. PMID:27076352
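The core of the 2D map generation is the local frequency estimation (LFE) step: at a known vibration frequency f, the local shear wave speed follows from the local wavelength via c = f·λ. A minimal 1-D sketch of the idea, with zero-crossing spacing standing in for the paper's filter-bank LFE and all numbers invented for illustration:

```python
import math

def local_wave_speed(profile, dx, f_vib):
    """Toy 1-D analogue of local frequency estimation (LFE):
    estimate the local wavelength from successive zero crossings
    of a shear-wave snapshot, then convert to speed via c = f * lambda."""
    # Find zero-crossing positions by linear interpolation.
    zc = []
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if a == 0 or a * b < 0:
            frac = a / (a - b) if a != b else 0.0
            zc.append((i + frac) * dx)
    # Adjacent zero crossings are half a wavelength apart.
    speeds = []
    for p, q in zip(zc, zc[1:]):
        lam = 2.0 * (q - p)
        speeds.append(f_vib * lam)
    return speeds

# Synthetic snapshot: a 300 Hz wave travelling at 3 m/s (wavelength 1 cm).
f, c = 300.0, 3.0
dx = 0.0002  # 0.2 mm grid
x = [i * dx for i in range(400)]
wave = [math.sin(2 * math.pi * f / c * xi) for xi in x]
speeds = local_wave_speed(wave, dx, f)
print(speeds[:3])
```

The real LFE works in 2D on noisy data, so it uses lognormal filter banks rather than zero crossings; the c = f·λ conversion is the common core.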

  10. On Inertial Body Tracking in the Presence of Model Calibration Errors

    PubMed Central

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-01-01

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. 
I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU-centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266
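As a toy illustration of the "inertial data as control input" motion model, the sketch below runs a one-dimensional EKF in which the gyro rate propagates a single orientation angle and a noisy absolute heading (standing in for magnetometer information) corrects it. The constants and signals are invented for the example; this is far simpler than the paper's 6-DoF free-segments filters:

```python
import math, random

def ekf_orientation(gyro, heading_meas, dt, q=1e-4, r=0.04):
    """Minimal 1-D EKF sketch: predict with the gyro rate as control
    input, update with a noisy absolute heading measurement."""
    theta, P = 0.0, 1.0
    out = []
    for w, z in zip(gyro, heading_meas):
        # Predict: integrate the angular rate (control input).
        theta += w * dt
        P += q
        # Update with the absolute heading measurement.
        K = P / (P + r)
        theta += K * (z - theta)
        P *= (1 - K)
        out.append(theta)
    return out

random.seed(0)
dt, n = 0.01, 500
true = [0.5 * math.sin(0.5 * i * dt) for i in range(n)]
gyro = [(true[i] - true[i - 1]) / dt + random.gauss(0, 0.05) if i else 0.0
        for i in range(n)]
meas = [t + random.gauss(0, 0.2) for t in true]
est = ekf_orientation(gyro, meas, dt)
err = sum(abs(e - t) for e, t in zip(est, true)) / n
print(round(err, 3))
```

The fused estimate tracks the true angle far better than the raw heading measurements alone, which is the basic payoff of the sensor-fusion methods being compared.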

  11. Reconstruction of a time-averaged midposition CT scan for radiotherapy planning of lung cancer patients using deformable registration.

    PubMed

    Wolthaus, J W H; Sonke, J J; van Herk, M; Damen, E M F

    2008-09-01

Lower lobe lung tumors move with amplitudes of up to 2 cm due to respiration. To reduce respiration imaging artifacts in planning CT scans, 4D imaging techniques are used. Currently, we use a single (midventilation) frame of the 4D data set for clinical delineation of structures and radiotherapy planning. A single frame, however, often contains artifacts due to breathing irregularities, and is noisier than a conventional CT scan since the exposure per frame is lower. Moreover, the tumor may be displaced from the mean tumor position due to hysteresis. The aim of this work is to develop a framework for the acquisition of a good quality scan representing all scanned anatomy in the mean position by averaging transformed (deformed) CT frames, i.e., canceling out motion. A nonrigid registration method is necessary since motion varies over the lung. 4D and inspiration breath-hold (BH) CT scans were acquired for 13 patients. An iterative multiscale motion estimation technique was applied to the 4D CT scan, similar to optical flow but using image phase (gray-value transitions from bright to dark and vice versa) instead. From the derived (4D) deformation vector field (DVF), the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position. A 3D midposition (MidP) CT scan was then obtained by (arithmetic or median) averaging of the deformed 4D CT scan. Image registration accuracy, tumor shape deviation with respect to the BH CT scan, and noise were determined to evaluate the image fidelity of the MidP CT scan and the performance of the technique. Accuracy of the used deformable image registration method was comparable to established automated locally rigid registration and to manual landmark registration (average difference to both methods < 0.5 mm for all directions) for the tumor region. 
From visual assessment, the registration was good for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than by any of the 4D CT frames (including MidV; the reduction of "shape differences" was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represent those of the BH CT scan better than the MidV CT scan does and, therefore, the MidP CT scan was found to be appropriate for treatment planning.
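The MidP construction can be sketched in one dimension: compute the mean displacement over the respiratory cycle, deform every phase to that mean position, then average the deformed frames to cancel motion and reduce noise. Everything below (boxcar "tumor", integer displacements) is an invented toy, not the paper's 3D phase-based registration:

```python
def shift(signal, d):
    """Linearly interpolated shift of a 1-D 'frame' by d samples."""
    n = len(signal)
    out = []
    for i in range(n):
        x = i - d
        j = int(x // 1)
        t = x - j
        a = signal[min(max(j, 0), n - 1)]
        b = signal[min(max(j + 1, 0), n - 1)]
        out.append((1 - t) * a + t * b)
    return out

def midposition(frames, disp):
    """Toy 1-D analogue of the MidP reconstruction: deform every
    respiratory phase to the time-averaged (mean) position, then
    average the deformed frames. disp[k] is the displacement of
    phase k from a reference position."""
    mean_d = sum(disp) / len(disp)
    deformed = [shift(f, mean_d - d) for f, d in zip(frames, disp)]
    n = len(frames[0])
    return [sum(f[i] for f in deformed) / len(deformed) for i in range(n)]

# A 'tumor' (boxcar) breathing with displacements of up to 4 samples.
base = [1.0 if 20 <= i < 30 else 0.0 for i in range(60)]
disp = [0, 2, 4, 2, 0, -2, -4, -2]
frames = [shift(base, d) for d in disp]
midp = midposition(frames, disp)
```

With the motion cancelled, averaging N frames also reduces uncorrelated noise by roughly 1/sqrt(N), which is why the MidP scan is less noisy than any single 4D frame.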

  12. 4D ML reconstruction as a tool for volumetric PET-based treatment verification in ion beam radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Bernardi, E., E-mail: elisabetta.debernardi@unimib.it; Ricotti, R.; Riboldi, M.

    2016-02-15

Purpose: An innovative strategy to improve the sensitivity of positron emission tomography (PET)-based treatment verification in ion beam radiotherapy is proposed. Methods: Low counting statistics PET images acquired during or shortly after the treatment (Measured PET) and a Monte Carlo estimate of the same PET images derived from the treatment plan (Expected PET) are considered as two frames of a 4D dataset. A 4D maximum likelihood reconstruction strategy was adapted to iteratively estimate the annihilation events distribution in a reference frame and the deformation motion fields that map it in the Expected PET and Measured PET frames. The outputs generated by the proposed strategy are as follows: (1) an estimate of the Measured PET with an image quality comparable to the Expected PET and (2) an estimate of the motion field mapping Expected PET to Measured PET. The details of the algorithm are presented and the strategy is preliminarily tested on analytically simulated datasets. Results: The algorithm demonstrates (1) robustness against noise, even in the worst conditions where 1.5 × 10^4 true coincidences and a random fraction of 73% are simulated; (2) a proper sensitivity to different kinds and grades of mismatch ranging between 1 and 10 mm; (3) robustness against bias due to incorrect washout modeling in the Monte Carlo simulation up to 1/3 of the original signal amplitude; and (4) an ability to describe the mismatch even in the presence of complex annihilation distributions such as those induced by two perpendicular superimposed ion fields. Conclusions: The promising results obtained in this work suggest the applicability of the method as a quantification tool for PET-based treatment verification in ion beam radiotherapy. An extensive assessment of the proposed strategy on real treatment verification data is planned.
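The 4D strategy builds on the standard maximum-likelihood EM (MLEM) update, extended in the paper with motion fields across frames. The static core can be sketched on a toy 3-detector, 2-voxel system with noise-free data (the motion-field estimation is omitted here):

```python
def mlem(A, y, n_iter=50):
    """Minimal maximum-likelihood EM reconstruction sketch.
    A[i][j]: system matrix (detector i, voxel j); y[i]: measured counts.
    Update: lam <- lam * backproject(y / forward(lam)) / sensitivity."""
    m, n = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # A^T 1
    lam = [1.0] * n
    for _ in range(n_iter):
        proj = [sum(A[i][j] * lam[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        lam = [lam[j] * back[j] / sens[j] for j in range(n)]
    return lam

# Tiny toy system: 3 detectors viewing 2 voxels, noise-free counts.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
truth = [4.0, 2.0]
y = [sum(a * t for a, t in zip(row, truth)) for row in A]
print([round(v, 2) for v in mlem(A, y)])
```

With consistent data the iteration converges to the true activity; in the paper's 4D variant the forward model additionally warps the reference-frame activity into each of the two "frames" (Expected PET and Measured PET).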

  13. Key frame extraction based on spatiotemporal motion trajectory

    NASA Astrophysics Data System (ADS)

    Zhang, Yunzuo; Tao, Ran; Zhang, Feng

    2015-05-01

Spatiotemporal motion trajectory can accurately reflect the changes of motion state. Motivated by this observation, this letter proposes a method for key frame extraction based on the motion trajectory on the spatiotemporal slice. Different from the well-known motion-related methods, the proposed method utilizes the inflexions of the motion trajectories of all the moving objects on the spatiotemporal slice. Experimental results show that the proposed method performs comparably to state-of-the-art methods based on motion energy or acceleration on single-object videos, while achieving better performance on multi-object videos.
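The inflexion idea can be sketched in one dimension: select frames where the second difference of a motion trajectory changes sign, i.e. where the motion state turns. The trajectory and the convention of keeping the boundary frames are invented for illustration:

```python
import math

def key_frames(trajectory):
    """Sketch of inflexion-based key-frame selection: pick frames where
    the second difference (discrete acceleration) of a 1-D motion
    trajectory changes sign."""
    # Second differences approximate the curvature of the trajectory;
    # acc[k] corresponds to trajectory index k + 1.
    acc = [trajectory[i - 1] - 2 * trajectory[i] + trajectory[i + 1]
           for i in range(1, len(trajectory) - 1)]
    keys = [0]
    for i in range(1, len(acc)):
        if acc[i - 1] * acc[i] < 0:  # sign change -> inflexion
            keys.append(i + 1)
    keys.append(len(trajectory) - 1)
    return keys

# Sinusoidal motion: inflexions sit at the zero crossings of the position.
traj = [math.sin(0.2 * i) for i in range(64)]
print(key_frames(traj))
```

On a real spatiotemporal slice the trajectory is 2D and there is one trajectory per moving object; the union of their inflexion frames gives the key-frame set.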

  14. Modeling of video compression effects on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Preece, Bradley; Espinola, Richard L.

    2009-05-01

The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation in task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for the NVESD target acquisition performance model suite.
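The noise-extraction step can be sketched in one dimension: apply the estimated equivalent blur to the uncompressed signal and treat its difference from the compressed signal as residual noise. The sigma value and the signals below are invented for illustration:

```python
import math

def gaussian_blur(signal, sigma):
    """Apply the estimated 'equivalent Gaussian MTF' as a spatial blur."""
    radius = int(3 * sigma)
    kernel = [math.exp(-0.5 * (k / sigma) ** 2)
              for k in range(-radius, radius + 1)]
    s = sum(kernel)
    kernel = [k / s for k in kernel]
    n = len(signal)
    return [sum(kernel[j + radius] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-radius, radius + 1)) for i in range(n)]

def residual_noise(uncompressed, compressed, sigma):
    """Sketch of the paper's noise model: whatever the equivalent blur
    does NOT explain is treated as additive (spatio-temporal) noise,
    i.e. noise = compressed - blur(uncompressed)."""
    blurred = gaussian_blur(uncompressed, sigma)
    return [c - b for c, b in zip(compressed, blurred)]

# Ideal case: if compression were a pure blur, the residual noise is zero.
edge = [0.0] * 20 + [1.0] * 20
compressed = gaussian_blur(edge, 1.5)
noise = residual_noise(edge, compressed, 1.5)
print(max(abs(v) for v in noise))
```

With a real codec the residual is nonzero, and its statistics feed the 3-D noise input of the performance model.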

  15. Self-aligning biaxial load frame

    DOEpatents

    Ward, M.B.; Epstein, J.S.; Lloyd, W.R.

    1994-01-18

A self-aligning biaxial loading apparatus for use in testing the strength of specimens while maintaining a constant specimen centroid during the loading operation. The self-aligning biaxial loading apparatus consists of a load frame and two load assemblies for imparting two independent perpendicular forces upon a test specimen. The constant test specimen centroid is maintained by providing elements for linear motion of the load frame relative to a fixed cross head, and by alignment and linear motion elements of one load assembly relative to the load frame. 3 figures.

  16. Self-aligning biaxial load frame

    DOEpatents

    Ward, Michael B.; Epstein, Jonathan S.; Lloyd, W. Randolph

    1994-01-01

A self-aligning biaxial loading apparatus for use in testing the strength of specimens while maintaining a constant specimen centroid during the loading operation. The self-aligning biaxial loading apparatus consists of a load frame and two load assemblies for imparting two independent perpendicular forces upon a test specimen. The constant test specimen centroid is maintained by providing elements for linear motion of the load frame relative to a fixed crosshead, and by alignment and linear motion elements of one load assembly relative to the load frame.

  17. Low bandwidth eye tracker for scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Harvey, Zachary G.; Dubra, Alfredo; Cahill, Nathan D.; Lopez Alarcon, Sonia

    2012-02-01

The incorporation of adaptive optics to scanning ophthalmoscopes (AOSOs) has allowed for in vivo, noninvasive imaging of the human rod and cone photoreceptor mosaics. Light safety restrictions and power limitations of the current low-coherence light sources available for imaging result in each individual raw image having a low signal-to-noise ratio (SNR). To date, the only approach used to increase the SNR has been to collect a large number of raw images (N > 50), to register them to remove the distortions due to involuntary eye motion, and then to average them. The large amplitude of involuntary eye motion with respect to the AOSO field of view (FOV) dictates that an even larger number of images be collected at each retinal location to ensure adequate SNR over the feature of interest. Compensating for eye motion during image acquisition to keep the feature of interest within the FOV could reduce the number of raw frames required per retinal feature, thereby significantly reducing the imaging time, storage requirements, and post-processing time and, more importantly, the subject's exposure to light. In this paper, we present a particular implementation of an AOSO, termed the adaptive optics scanning light ophthalmoscope (AOSLO), equipped with a simple eye tracking system capable of compensating for eye drift by estimating the eye motion from the raw frames and by using a tip-tilt mirror to compensate for it in a closed loop. Multiple control strategies were evaluated to minimize the image distortion introduced by the tracker itself. Also, linear, quadratic and Kalman filter motion prediction algorithms were implemented and tested using both simulated motion (sinusoidal motion with varying frequencies) and human subjects. The residual displacement of the retinal features was used to compare the performance of the different correction strategies and prediction methods.

  18. Optimal estimation of diffusion coefficients from single-particle trajectories

    NASA Astrophysics Data System (ADS)

    Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik

    2014-02-01

How does one optimally determine the diffusion coefficient of a diffusing particle from a single time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given some long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of bias caused by substrate fluctuations in CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycosylase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
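The CVE is compact enough to state directly. A sketch of the published 1-D estimator (ignoring the motion-blur correction), with a simulated trajectory and localization-noise level that are purely illustrative:

```python
import random

def cve_diffusion(x, dt):
    """Covariance-based estimator (CVE) of the diffusion coefficient
    from a single trajectory x sampled at interval dt:
        D = <dx_n^2> / (2 dt) + <dx_n dx_{n+1}> / dt
    The covariance term cancels the bias that localization error adds
    to naive MSD-based estimates."""
    dx = [b - a for a, b in zip(x, x[1:])]
    n = len(dx)
    msd = sum(d * d for d in dx) / n
    cov = sum(dx[i] * dx[i + 1] for i in range(n - 1)) / (n - 1)
    return msd / (2 * dt) + cov / dt

random.seed(1)
dt, D, sigma = 0.01, 1.0, 0.05  # true D and localization error (illustrative)
pos, traj = 0.0, []
for _ in range(100000):
    pos += random.gauss(0, (2 * D * dt) ** 0.5)  # Brownian step
    traj.append(pos + random.gauss(0, sigma))    # noisy readout
print(round(cve_diffusion(traj, dt), 2))
```

Note that a naive MSD estimate, <dx²>/(2·dt), would be inflated here by σ²/dt = 0.25 on top of the true D = 1; the covariance term removes exactly that bias on average.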

  19. High-Frame-Rate Doppler Ultrasound Using a Repeated Transmit Sequence

    PubMed Central

    Podkowa, Anthony S.; Oelze, Michael L.; Ketterling, Jeffrey A.

    2018-01-01

The maximum detectable velocity of high-frame-rate color flow Doppler ultrasound is limited by the imaging frame rate when using coherent compounding techniques. Traditionally, high-quality ultrasonic images are produced at a high frame rate via coherent compounding of steered plane wave reconstructions. However, this compounding operation results in an effective downsampling of the slow-time signal, thereby artificially reducing the frame rate. To alleviate this effect, a new transmit sequence is introduced where each transmit angle is repeated in succession. This transmit sequence allows for direct comparison between low-resolution, pre-compounded frames at a short time interval in ways that are resistant to sidelobe motion. Use of this transmit sequence increases the maximum detectable velocity by a scale factor of the transmit sequence length. The performance of this new transmit sequence was evaluated using a rotating cylindrical phantom and compared with traditional methods using a 15-MHz linear array transducer. Axial velocity estimates were recorded for a range of ±300 mm/s and compared to the known ground truth. Using these new techniques, the root mean square error was reduced from over 400 mm/s to below 50 mm/s in the high-velocity regime compared to traditional techniques. The standard deviation of the velocity estimate in the same velocity range was reduced from 250 mm/s to 30 mm/s. This result demonstrates the viability of the repeated transmit sequence methods in detecting and quantifying high-velocity flow. PMID:29910966
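The scaling of the maximum detectable velocity can be illustrated with the standard Nyquist relation v_max = c / (4·f0·T_slow): repeating each angle makes adjacent identical-angle frames one pulse interval apart instead of one compounded frame apart. The PRF and angle count below are invented for illustration (only the 15 MHz center frequency appears in the abstract):

```python
def max_velocity(c, f0, t_slow):
    """Nyquist-limited axial velocity for a slow-time sampling interval
    t_slow: v_max = c / (4 * f0 * t_slow)."""
    return c / (4.0 * f0 * t_slow)

c, f0 = 1540.0, 15e6          # speed of sound (m/s), center frequency (Hz)
prf, n_angles = 20e3, 5       # assumed example numbers
t_compound = n_angles / prf   # conventional: one sample per compounded frame
t_repeated = 1.0 / prf        # repeated sequence: identical angles adjacent
print(max_velocity(c, f0, t_compound), max_velocity(c, f0, t_repeated))
```

The ratio of the two limits equals the number of compounding angles, which is the "scale factor of the transmit sequence length" stated in the abstract.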

  20. Multi-geodetic characterization of the seasonal signal at the CERGA geodetic reference, France

    NASA Astrophysics Data System (ADS)

    Memin, A.; Viswanathan, V.; Fienga, A.; Santamaría-Gómez, A.; Boy, J. P.

    2016-12-01

Crustal deformations due to surface-mass loading account for a significant part of the variability in geodetic time series. A better understanding of the loading signal observed by geodetic techniques would help improve terrestrial reference frame (TRF) realizations. Yet, discrepancies between crustal motion estimates from models of surface-mass loading and observations are still so large that no model is currently recommended by the IERS for reducing the data. We investigate the discrepancy observed in the seasonal variations of the CERGA station, in the south of France. We characterize the seasonal motions of the reference geodetic station CERGA from GNSS, SLR and LLR. We compare the station motion observed with GNSS and SLR and we estimate changes in the station-to-Moon distance using an improved processing strategy. We investigate the consistency between these geodetic techniques and compare the observed station motion with that estimated using models of surface-mass change. In that regard, we compute atmospheric loading effects using surface pressure fields from ECMWF, assuming an ocean response according to the classical inverted barometer (IB) assumption, considered to be valid for periods typically exceeding a week. We also use general circulation ocean models (ECCO and GLORYS) forced by wind, heat and fresh-water fluxes. The continental water storage is described using the GLDAS/Noah and MERRA-Land models. Using the surface-mass models, we estimate the amplitude of the seasonal vertical motion of the CERGA station to range between 5 and 10 mm, with a maximum reached in August, mostly due to hydrology. The horizontal seasonal motion of the station may reach up to 3 mm. Such a station motion should induce a change in the distance to the Moon reaching up to 10 mm, large enough to be detected in LLR time series and compared to GNSS- and SLR-derived motion.

  1. Frequency-locked pulse sequencer for high-frame-rate monochromatic tissue motion imaging.

    PubMed

    Azar, Reza Zahiri; Baghani, Ali; Salcudean, Septimiu E; Rohling, Robert

    2011-04-01

To overcome the inherent low frame rate of conventional ultrasound, we have previously presented a system that can be implemented on conventional ultrasound scanners for high-frame-rate imaging of monochromatic tissue motion. The system employs a sector subdivision technique in the sequencer to increase the acquisition rate. To eliminate the delays introduced during data acquisition, a motion phase correction algorithm has also been introduced to create in-phase displacement images. Previous experimental results from tissue-mimicking phantoms showed that the system can achieve effective frame rates of up to a few kilohertz on conventional ultrasound systems. In this short communication, we present a new pulse sequencing strategy that facilitates high-frame-rate imaging of monochromatic motion such that the acquired echo signals are inherently in-phase. The sequencer uses the knowledge of the excitation frequency to synchronize the acquisition of the entire imaging plane to that of an external exciter. This sequencing approach eliminates any need for synchronization or phase correction and has applications in tissue elastography, which we demonstrate with tissue-mimicking phantoms. © 2011 IEEE

  2. Radiometrically accurate scene-based nonuniformity correction for array sensors.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2003-10-01

A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, in some cases fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
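The calibration-transport principle can be sketched in one dimension: if frame 2 views the same scene as frame 1 shifted by one detector, then frame2[i] - frame1[i-1] equals the bias difference bias[i] - bias[i-1], and chaining these differences from an absolutely calibrated edge detector recovers every interior bias. This is a toy analogue under an assumed unit shift, not the paper's 2-D algebraic algorithm:

```python
import random

def transport_bias(frame1, frame2, bias0):
    """1-D sketch of algebraic scene-based NUC: chain per-detector bias
    differences from a calibrated edge detector (bias0) inward."""
    bias = [bias0]
    for i in range(1, len(frame1)):
        # Same scene sample seen by detectors i (frame 2) and i-1 (frame 1).
        bias.append(bias[i - 1] + (frame2[i] - frame1[i - 1]))
    return bias

random.seed(2)
n = 16
scene = [random.uniform(0, 100) for _ in range(n + 1)]
true_bias = [random.uniform(-5, 5) for _ in range(n)]
frame1 = [scene[i] + true_bias[i] for i in range(n)]      # scene at time 1
frame2 = [scene[i - 1] + true_bias[i] for i in range(n)]  # scene shifted by 1
est = transport_bias(frame1, frame2, true_bias[0])
```

In the full algorithm the shift is an arbitrary estimated global motion and the transport runs over the 2-D array, but the cancellation of the scene term is the same.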

  3. Celestial reference frames and the gauge freedom in the post-Newtonian mechanics of the Earth-Moon system

    NASA Astrophysics Data System (ADS)

    Kopeikin, Sergei; Xie, Yi

    2010-11-01

We introduce the Jacobi coordinates adapted to the advanced theoretical analysis of the relativistic Celestial Mechanics of the Earth-Moon system. Theoretical derivation utilizes the relativistic resolutions on reference frames adopted by the International Astronomical Union (IAU) in 2000. The resolutions assume that the Solar System is isolated and space-time is asymptotically flat at infinity and the primary reference frame covers the entire space-time, has its origin at the Solar System barycenter (SSB) with spatial axes stretching up to infinity. The SSB frame is not rotating with respect to a set of distant quasars that are assumed to be at rest on the sky forming the International Celestial Reference Frame (ICRF). The second reference frame has its origin at the Earth-Moon barycenter (EMB). The EMB frame is locally inertial and is not rotating dynamically in the sense that the equation of motion of a test particle moving with respect to the EMB frame does not contain the Coriolis and centripetal forces. Two other local frames—geocentric and selenocentric—have their origins at the center of mass of Earth and Moon respectively and do not rotate dynamically. Each local frame is subject to the geodetic precession both with respect to other local frames and with respect to the ICRF because of their relative motion with respect to each other. Theoretical advantage of the dynamically non-rotating local frames is in a simpler mathematical description of the metric tensor and relative equations of motion of the Moon with respect to Earth. Each local frame can be converted to a kinematically non-rotating one after alignment with the axes of ICRF by applying the matrix of the relativistic precession as recommended by the IAU resolutions. The set of one global and three local frames is introduced in order to decouple physical effects of gravity from the gauge-dependent effects in the equations of relative motion of the Moon with respect to Earth.

  4. Coordinates of Human Visual and Inertial Heading Perception.

    PubMed

    Crane, Benjamin Thomas

    2015-01-01

Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing, but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and the coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.

  5. Coordinates of Human Visual and Inertial Heading Perception

    PubMed Central

    Crane, Benjamin Thomas

    2015-01-01

Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing, but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and the coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results. PMID:26267865
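One plausible form of a two-parameter decoder in the spirit of the PVD model is sketched below: headings are read out in a frame rotated by an offset (e.g. a gaze shift), with the lateral component weighted relative to fore-aft motion. This exact parameterization is an assumption for illustration, not taken from the paper:

```python
import math

def pvd_heading(theta_deg, lat_gain, offset_deg):
    """Hypothetical two-degree-of-freedom heading readout: decode in a
    frame rotated by offset_deg, weighting lateral motion by lat_gain
    relative to fore-aft motion, then rotate back."""
    t = math.radians(theta_deg - offset_deg)
    perceived = math.degrees(math.atan2(lat_gain * math.sin(t), math.cos(t)))
    return perceived + offset_deg

# With unit lateral gain and no offset the readout is veridical;
# a lateral gain above 1 pulls oblique headings toward lateral directions.
print(pvd_heading(45, 1.0, 0), pvd_heading(45, 1.5, 0))
```

Fitting the two parameters per condition (as the abstract describes) then quantifies how much of a gaze shift transfers into the perceived heading for visual versus inertial stimuli.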

  6. Horizontal crustal motion in the central and eastern Mediterranean inferred from Satellite Laser Ranging measurements

    NASA Technical Reports Server (NTRS)

    Smith, David E.; Kolenkiewicz, Ron; Robbins, John W.; Dunn, Peter J.; Torrence, Mark H.

    1994-01-01

Four campaigns to acquire Satellite Laser Ranging (SLR) measurements at sites in the Mediterranean region have been completed. These measurements to the LAGEOS satellite, made largely by mobile systems, cover a time span beginning in November 1985 and ending in June 1993. The range data from 18 sites in the central and eastern Mediterranean have been simultaneously analyzed with data acquired by the remainder of the global laser tracking network. Estimates of horizontal motion were placed into a regional, northern Europe-fixed, kinematic reference frame. Uncertainties are on the order of 5 mm/yr for sites having at least four occupations by mobile systems and approach 1 mm/yr for permanently located sites with long histories of tracking. The resulting relative motions between sites in the Aegean exhibit a broadly distributed pattern of radial extension, but at rates that are about 50% larger than those implied from studies of seismic strain rates based on seismicity of magnitude 6 or greater across the region. The motions estimated for sites in Turkey exhibit velocity components associated with the westward motion of the Anatolian Block relative to Eurasia. These results provide a present-day 'snapshot' of ongoing deformational processes as experienced by the locations occupied by SLR systems.

  7. Estimated SLR station position and network frame sensitivity to time-varying gravity

    NASA Astrophysics Data System (ADS)

    Zelensky, Nikita P.; Lemoine, Frank G.; Chinn, Douglas S.; Melachroinos, Stavros; Beckley, Brian D.; Beall, Jennifer Wiser; Bordyugov, Oleg

    2014-06-01

This paper evaluates the sensitivity of ITRF2008-based satellite laser ranging (SLR) station positions estimated weekly using LAGEOS-1/2 data from 1993 to 2012 to non-tidal time-varying gravity (TVG). Two primary methods for modeling TVG from degree-2 are employed. The operational approach applies an annual GRACE-derived field, and IERS-recommended linear rates for five coefficients. The experimental approach uses low-order/degree coefficients estimated weekly from SLR and DORIS processing of up to 11 satellites (tvg4x4). This study shows that the LAGEOS-1/2 orbits and the weekly station solutions are sensitive to more detailed modeling of TVG than prescribed in the current IERS standards. Over 1993-2012, tvg4x4 improves SLR residuals by 18% and shows 10% RMS improvement in station stability. Tests suggest that the improved stability of the tvg4x4 POD solution frame may help clarify geophysical signals present in the estimated station position time series. The signals include linear and seasonal station motion, and motion of the TRF origin, particularly in Z. The effect on both POD and the station solutions becomes increasingly evident starting in 2006. Over 2008-2012, the tvg4x4 series improves SLR residuals by 29%. Use of the GRGS RL02 series shows similar improvement in POD. Using tvg4x4, secular changes in the TRF origin Z component double over the last decade; although not conclusive, this is consistent with the increased geocenter rate expected from continental ice melt. The test results indicate that accurate modeling of TVG is necessary to improve station position estimation using SLR data.

  8. A method of intentional movement estimation of oblique small-UAV videos stabilized based on homography model

    NASA Astrophysics Data System (ADS)

    Guo, Shiyi; Mai, Ying; Zhao, Hongying; Gao, Pengqi

    2013-05-01

The airborne video streams of small UAVs are commonly plagued with distracting jitter and shaking, disorienting rotations, noisy and distorted images, and other unwanted movements. These problems collectively make it very difficult for observers to obtain useful information from the video. Due to the small payload of small UAVs, improving image quality by means of electronic image stabilization is a priority. But when a small UAV makes a turn, its flight characteristics cause the video to become oblique, which creates considerable difficulty for electronic image stabilization. The homography model performs well for oblique image motion estimation, but it makes intentional motion estimation much harder. In this paper, we therefore focus on stabilizing video while the small UAV is banking and turning. We assume the small UAV flies along an arc of fixed turning radius. Accordingly, after a series of experimental analyses of the flight characteristics and turning paths of small UAVs, we present a new method to estimate the intentional motion, in which the path of the frame center is used to fit the video's moving track. Meanwhile, dynamic mosaicking of the image sequence is performed to make up for the limited field of view. Finally, the proposed algorithm was implemented and validated on actual airborne videos. The results show that the proposed method is effective for stabilizing oblique video from small UAVs.
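The fixed-turning-radius assumption suggests fitting a circular arc to the frame-center path and treating motion along that arc as intentional. A minimal sketch using the Kasa algebraic circle fit (the specific fit choice and the sample points are assumptions for illustration):

```python
import math

def fit_turning_arc(xs, ys):
    """Kasa algebraic circle fit: least squares on
    x^2 + y^2 + D*x + E*y + F = 0. Returns (cx, cy, R)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    z = [x * x + y * y for x, y in zip(xs, ys)]
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(n)]]
    b = [-sum(zi * x for zi, x in zip(z, xs)),
         -sum(zi * y for zi, y in zip(z, ys)),
         -sum(z)]
    # Gaussian elimination with partial pivoting (3x3 system).
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 3):
                M[r][k] -= f * M[c][k]
            b[r] -= f * b[c]
    sol = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        sol[r] = (b[r] - sum(M[r][k] * sol[k] for k in range(r + 1, 3))) / M[r][r]
    D, E, F = sol
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, (cx * cx + cy * cy - F) ** 0.5

# Frame centers sampled along a turn of radius 10 about (5, -3).
pts = [(5 + 10 * math.cos(t / 10), -3 + 10 * math.sin(t / 10))
       for t in range(20)]
cx, cy, R = fit_turning_arc([p[0] for p in pts], [p[1] for p in pts])
```

Once the arc is known, the intentional camera path is the smooth progression along it, and the per-frame deviation from that path is the jitter to be compensated.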

  9. TH-EF-BRA-03: Assessment of Data-Driven Respiratory Motion-Compensation Methods for 4D-CBCT Image Registration and Reconstruction Using Clinical Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riblett, MJ; Weiss, E; Hugo, GD

    Purpose: To evaluate the performance of a 4D-CBCT registration and reconstruction method that corrects for respiratory motion and enhances image quality under clinically relevant conditions. Methods: Building on previous work, which tested feasibility of a motion-compensation workflow using image datasets superior to clinical acquisitions, this study assesses workflow performance under clinical conditions in terms of image quality improvement. Evaluated workflows utilized a combination of groupwise deformable image registration (DIR) and image reconstruction. Four-dimensional cone beam CT (4D-CBCT) FDK reconstructions were registered to either mean or respiratory phase reference frame images to model respiratory motion. The resulting 4D transformation was used to deform projection data during the FDK backprojection operation to create a motion-compensated reconstruction. To simulate clinically realistic conditions, superior quality projection datasets were sampled using a phase-binned striding method. Tissue interface sharpness (TIS) was defined as the slope of a sigmoid curve fit to the lung-diaphragm boundary or to the carina tissue-airway boundary when no diaphragm was discernable. Image quality improvement was assessed in 19 clinical cases by evaluating mitigation of view-aliasing artifacts, tissue interface sharpness recovery, and noise reduction. Results: For clinical datasets, evaluated average TIS recovery relative to base 4D-CBCT reconstructions was observed to be 87% using fixed-frame registration alone; 87% using fixed-frame with motion-compensated reconstruction; 92% using mean-frame registration alone; and 90% using mean-frame with motion-compensated reconstruction. Soft tissue noise was reduced on average by 43% and 44% for the fixed-frame registration and registration with motion-compensation methods, respectively, and by 40% and 42% for the corresponding mean-frame methods. 
Considerable reductions in view aliasing artifacts were observed for each method. Conclusion: Data-driven groupwise registration and motion-compensated reconstruction have the potential to improve the quality of 4D-CBCT images acquired under clinical conditions. For clinical image datasets, the addition of motion compensation after groupwise registration visibly reduced artifact impact. This work was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA166119. Hugo and Weiss hold a research agreement with Philips Healthcare and license agreement with Varian Medical Systems. Weiss receives royalties from UpToDate. Christensen receives funds from Roger Koch to support research.
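    The tissue interface sharpness metric defined above (slope of a sigmoid fit to an intensity profile across the boundary) can be sketched as follows. The sigmoid is linearized via the logit transform so the fit needs only a straight-line solve; a nonlinear least-squares fit (e.g. `scipy.optimize.curve_fit`) would be the more usual choice, and all numeric values here are illustrative assumptions.

    ```python
    import numpy as np

    def tissue_interface_sharpness(x, intensity):
        """Estimate TIS as the midpoint slope of a sigmoid fit to an intensity
        profile taken across a tissue interface. The sigmoid model
            I(x) = lo + (hi - lo) / (1 + exp(-k (x - x0)))
        is linearized: logit((I - lo)/(hi - lo)) = k (x - x0), so k follows
        from a straight-line fit (dependency-free sketch, not the paper's fit).
        """
        lo, hi = intensity.min(), intensity.max()
        frac = (intensity - lo) / (hi - lo)
        mask = (frac > 0.02) & (frac < 0.98)   # keep well-conditioned samples
        k, _ = np.polyfit(x[mask], np.log(frac[mask] / (1 - frac[mask])), 1)
        return abs(k) * (hi - lo) / 4.0        # slope of the sigmoid at its midpoint
    ```

    The returned value is the analytic slope of the fitted sigmoid at its inflection point, (hi - lo) * k / 4, which is what "tissue interface sharpness" measures: a sharper boundary gives a steeper fitted sigmoid.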

  10. Reference frames, gauge transformations and gravitomagnetism in the post-Newtonian theory of the lunar motion

    NASA Astrophysics Data System (ADS)

    Xie, Yi; Kopeikin, Sergei

    2010-01-01

    We construct a set of reference frames for description of the orbital and rotational motion of the Moon. We use a scalar-tensor theory of gravity depending on two parameters of the parametrized post-Newtonian (PPN) formalism and utilize the concepts of the relativistic resolutions on reference frames adopted by the International Astronomical Union in 2000. We assume that the solar system is isolated and space-time is asymptotically flat. The primary reference frame has the origin at the solar-system barycenter (SSB) and spatial axes are going to infinity. The SSB frame is not rotating with respect to distant quasars. The secondary reference frame has the origin at the Earth-Moon barycenter (EMB). The EMB frame is local with its spatial axes spreading out to the orbits of Venus and Mars and not rotating dynamically in the sense that both the Coriolis and centripetal forces acting on a free-falling test particle, moving with respect to the EMB frame, are excluded. Two other local frames, the geocentric (GRF) and the selenocentric (SRF) frames, have the origin at the center of mass of the Earth and the Moon, respectively. They are both introduced in order to connect the coordinate description of the lunar motion, an observer on the Earth, and a retro-reflector on the Moon to the observable quantities, which are the proper time and the laser-ranging distance. We solve the gravity field equations and find the metric tensor and the scalar field in all frames. We also derive the post-Newtonian coordinate transformations between the frames and analyze the residual gauge freedom of the solutions of the field equations. We discuss the gravitomagnetic effects in the barycentric equations of the motion of the Moon and argue that they are beyond the current accuracy of lunar laser ranging (LLR) observations.

  11. Event-Based Stereo Depth Estimation Using Belief Propagation.

    PubMed

    Xie, Zhen; Chen, Shengyong; Orchard, Garrick

    2017-01-01

    Compared to standard frame-based cameras, biologically-inspired event-based sensors capture visual information with low latency and minimal redundancy. These event-based sensors are also far less prone to motion blur than traditional cameras, and still operate effectively in high dynamic range scenes. However, classical frame-based algorithms are typically not suitable for event-based data, and new processing algorithms are required. This paper focuses on the problem of depth estimation from a stereo pair of event-based sensors. A fully event-based stereo depth estimation algorithm which relies on message passing is proposed. The algorithm not only considers the properties of a single event but also uses a Markov Random Field (MRF) to capture the constraints between nearby events, such as disparity uniqueness and depth continuity. The method is tested on five different scenes and compared to other state-of-the-art event-based stereo matching methods. The results show that the method detects more stereo matches than other methods, with each match having a higher accuracy. The method can operate in an event-driven manner, where depths are reported for individual events as they are received, or the network can be queried at any time to generate a sparse depth frame which represents the current state of the network.
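    The flavor of min-sum message passing with a depth-continuity prior can be sketched on a chain of neighboring events, where message passing is exact (equivalent to Viterbi dynamic programming). The paper's MRF is more general (loopy, with a uniqueness term); the costs below are made-up illustrations.

    ```python
    import numpy as np

    def chain_bp_disparity(unary, lam):
        """Min-sum message passing on a chain of events.

        unary: (N, D) matching cost of assigning each of D disparities to
        each of N neighboring events; lam: weight of the |d_i - d_{i+1}|
        smoothness term encouraging depth continuity between nearby events.
        """
        N, D = unary.shape
        d = np.arange(D)
        pair = lam * np.abs(d[:, None] - d[None, :])  # (D, D) smoothness costs
        # Forward messages: cost-to-go arriving at each node from the left.
        msg = np.zeros((N, D))
        for i in range(1, N):
            msg[i] = np.min((unary[i - 1] + msg[i - 1])[:, None] + pair, axis=0)
        # Backward messages: same from the right.
        back = np.zeros((N, D))
        for i in range(N - 2, -1, -1):
            back[i] = np.min((unary[i + 1] + back[i + 1])[:, None] + pair, axis=0)
        beliefs = unary + msg + back  # exact min-marginals on a chain
        return beliefs.argmin(axis=1)  # MAP disparity per event
    ```

    With lam = 0 each event keeps its locally best disparity; a positive lam lets confident neighbors pull an outlier event back onto a continuous depth surface.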

  12. Natural motion of the optic nerve head revealed by high speed phase-sensitive OCT

    NASA Astrophysics Data System (ADS)

    OHara, Keith; Schmoll, Tilman; Vass, Clemens; Leitgeb, Rainer A.

    2013-03-01

    We use phase-sensitive optical coherence tomography (OCT) to measure the deformation of the optic nerve head during the pulse cycle, motivated by the possibility that these deformations might be indicative of the progression of glaucoma. A spectral-domain OCT system acquired 100k A-scans per second, with measurements from a pulse-oximeter recorded simultaneously, correlating OCT data to the subject's pulse. Data acquisition lasted for 2 seconds, to cover at least two pulse cycles. A frame-rate of 200-400 B-scans per second results in a sufficient degree of correlated speckle between successive frames that the phase-differences between frames can be extracted. Bulk motion of the entire eye changes the phase by several full cycles between frames, but this does not severely hinder extracting the smaller phase-changes due to differential motion within a frame. The central cup moves about 5 μm/s relative to the retinal-pigment-epithelium edge, with tissue adjacent to blood vessels showing larger motion.
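    The standard phase-sensitive (Doppler) OCT relation converts the interframe phase difference into an axial velocity. The wavelength, refractive index, and frame interval below are illustrative defaults, not values taken from this record.

    ```python
    import numpy as np

    def axial_velocity(delta_phi, wavelength=840e-9, n=1.38, frame_interval=1 / 300):
        """Axial tissue velocity (m/s) from the phase difference (rad) between
        successive frames in phase-sensitive OCT:

            v = wavelength * delta_phi / (4 * pi * n * T)

        wavelength: center wavelength (m); n: tissue refractive index;
        frame_interval T: time between frames (s). All defaults are
        illustrative assumptions.
        """
        return wavelength * delta_phi / (4 * np.pi * n * frame_interval)
    ```

    A phase difference of pi between frames at 300 B-scans/s then corresponds to roughly 46 μm/s of axial motion, an order of magnitude consistent with the ~5 μm/s cup motion reported above being well within the measurable range.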

  13. Motion-Compensated Compression of Dynamic Voxelized Point Clouds.

    PubMed

    De Queiroz, Ricardo L; Chou, Philip A

    2017-05-24

    Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
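    The per-block mode decision "optimized in a rate-distortion sense" is a Lagrangian comparison. A minimal sketch, with the distortion/rate inputs standing in for whatever block metric and entropy-coder bit counts the actual coder produces:

    ```python
    def choose_mode(d_intra, r_intra, d_inter, r_inter, lam):
        """Rate-distortion mode decision for one block of voxels.

        d_intra / d_inter: distortion (e.g. squared geometry + color error)
        of the intra-coded block vs. the motion-compensated block from the
        previous frame; r_intra / r_inter: bits each mode would spend;
        lam: Lagrange multiplier trading rate against distortion.
        The mode with the smaller Lagrangian cost J = D + lam * R wins.
        """
        j_intra = d_intra + lam * r_intra
        j_inter = d_inter + lam * r_inter
        return ("intra", j_intra) if j_intra <= j_inter else ("inter", j_inter)
    ```

    At low rates (large effective lam) the cheap motion-compensated mode wins unless its prediction is poor, which is exactly how the coder trades rate for geometry accuracy.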

  14. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Hall, T. J.

    2007-07-01

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is a column-based algorithm (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to a row-based algorithm (a row is defined by a line of data perpendicular to the ultrasound beam direction). Then, displacement estimates from adjacent columns provide good guidance for motion tracking in a significantly reduced search region to reduce computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows® system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained with frame rates (10 frames/s) that exceed those of our previous methods.
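    The column-based guided search can be sketched as block matching along one A-line, where an adjacent column's displacement estimate centers a much smaller search window. This is an illustrative Python sketch, not the authors' optimized ANSI C++ implementation; block and search sizes are assumptions.

    ```python
    import numpy as np

    def track_column(pre, post, block=16, search=2, guide=None):
        """Track integer axial displacement along one column (A-line).

        pre, post: 1D arrays from corresponding A-lines of two frames.
        For each block of samples, find the lag maximizing normalized
        cross-correlation. `guide` holds displacement estimates from an
        adjacent, already-tracked column; centering a small window on them
        is what makes the column-based algorithm cheap and parallelizable.
        """
        n = len(pre) // block
        disp = np.zeros(n, dtype=int)
        for i in range(n):
            s = i * block
            ref = pre[s:s + block] - pre[s:s + block].mean()
            center = int(guide[i]) if guide is not None else 0
            best, best_ncc = center, -np.inf
            for lag in range(center - search, center + search + 1):
                lo = s + lag
                if lo < 0 or lo + block > len(post):
                    continue  # candidate window falls outside the frame
                cand = post[lo:lo + block] - post[lo:lo + block].mean()
                denom = np.sqrt((ref ** 2).sum() * (cand ** 2).sum())
                if denom > 0:
                    ncc = np.dot(ref, cand) / denom
                    if ncc > best_ncc:
                        best_ncc, best = ncc, lag
            disp[i] = best
        return disp
    ```

    Columns can then be processed in parallel, propagating outward from a seed column near the ROI center, each new column guided by its already-tracked neighbor.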

  15. The right frame of reference makes it simple: an example of introductory mechanics supported by video analysis of motion

    NASA Astrophysics Data System (ADS)

    Klein, P.; Gröber, S.; Kuhn, J.; Fleischhauer, A.; Müller, A.

    2015-01-01

    The selection and application of coordinate systems is an important issue in physics. However, considering different frames of reference in a given problem sometimes seems unintuitive and is difficult for students. We present a concrete problem of projectile motion which vividly demonstrates the value of considering different frames of reference. We use this example to explore the effectiveness of video-based motion analysis (VBMA) as an instructional technique at university level in enhancing students’ understanding of the abstract concept of coordinate systems. A pilot study with 47 undergraduate students indicates that VBMA instruction improves conceptual understanding of this issue.
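    The pedagogical point can be reproduced numerically: in a frame co-moving with the projectile's constant horizontal velocity, the parabola collapses to one-dimensional free fall. A small sketch (the launch parameters are arbitrary, not from the paper):

    ```python
    import numpy as np

    # Projectile launched with speed v0 at angle theta, seen from two frames.
    g, v0, theta = 9.81, 20.0, np.radians(40)
    t = np.linspace(0, 2 * v0 * np.sin(theta) / g, 50)  # up to landing time

    # Ground frame: the familiar parabola.
    x_ground = v0 * np.cos(theta) * t
    y = v0 * np.sin(theta) * t - 0.5 * g * t ** 2

    # Frame moving horizontally at vx = v0*cos(theta): purely vertical motion.
    x_moving = x_ground - v0 * np.cos(theta) * t
    ```

    In the moving frame the horizontal coordinate is identically zero, so the "right" frame reduces a two-dimensional problem to the one-dimensional free fall students already know.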

  16. Robust intravascular optical coherence elastography driven by acoustic radiation pressure

    NASA Astrophysics Data System (ADS)

    van Soest, Gijs; Bouchard, Richard R.; Mastik, Frits; de Jong, Nico; van der Steen, Anton F. W.

    2007-07-01

    High strain spots in the vessel wall indicate the presence of vulnerable plaques. The majority of acute cardiovascular events are preceded by rupture of such a plaque in a coronary artery. Intracoronary optical coherence tomography (OCT) can be extended, in principle, to an elastography technique, mapping the strain in the vascular wall. However, the susceptibility of OCT to frame-to-frame decorrelation, caused by tissue and catheter motion, inhibits reliable tissue displacement tracking and has to date obstructed the development of OCT-based intravascular elastography. We introduce a new technique for intravascular optical coherence elastography, which is robust against motion artifacts. Using acoustic radiation force, we apply a pressure to deform the tissue synchronously with the line scan rate of the OCT instrument. Radial tissue displacement can be tracked based on the correlation between adjacent lines, instead of subsequent frames in conventional elastography. The viability of the method is demonstrated with a simulation study. The root mean square (rms) error of the displacement estimate is 0.55 μm, and the rms error of the strain is 0.6%. It is shown that high-strain spots in the vessel wall, such as observed at the sites of vulnerable atherosclerotic lesions, can be detected with the technique. Experiments to realize this new elastographic method are presented. Simultaneous optical and ultrasonic pulse-echo tracking demonstrate that the material can be put in a high-frequency oscillatory motion with an amplitude of several micrometers, more than sufficient for accurate tracking with OCT. The resulting data are used to optimize the acoustic pushing sequence and geometry.

  17. Gravity matters: Motion perceptions modified by direction and body position.

    PubMed

    Claassen, Jens; Bardins, Stanislavs; Spiegel, Rainer; Strupp, Michael; Kalla, Roger

    2016-07-01

    Motion coherence thresholds are consistently higher at lower velocities. In this study we analysed the influence of the position and direction of moving objects on their perception and thereby the influence of gravity. This paradigm allows a differentiation to be made between coherent and randomly moving objects in an upright and a reclining position with a horizontal or vertical axis of motion. 18 young healthy participants were examined in this coherent threshold paradigm. Motion coherence thresholds were significantly lower when position and motion were congruent with gravity independent of motion velocity (p=0.024). In the other conditions higher motion coherence thresholds (MCT) were found at lower velocities and vice versa (p<0.001). This result confirms previous studies with higher MCT at lower velocity but is in contrast to studies concerning perception of virtual turns and optokinetic nystagmus, in which differences of perception were due to different directions irrespective of body position, i.e. perception took place in an egocentric reference frame. Since the observed differences occurred in an upright position only, perception of coherent motion in this study is defined by an earth-centered reference frame rather than by an ego-centric frame. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Markerless EPID image guided dynamic multi-leaf collimator tracking for lung tumors

    NASA Astrophysics Data System (ADS)

    Rottmann, J.; Keall, P.; Berbeco, R.

    2013-06-01

    Compensation of target motion during the delivery of radiotherapy has the potential to improve treatment accuracy, dose conformity and sparing of healthy tissue. We implement an online image guided therapy system based on soft tissue localization (STiL) of the target from electronic portal images and treatment aperture adaptation with a dynamic multi-leaf collimator (DMLC). The treatment aperture is moved synchronously and in real time with the tumor during the entire breathing cycle. The system is implemented and tested on a Varian TX clinical linear accelerator featuring an AS-1000 electronic portal imaging device (EPID) acquiring images at a frame rate of 12.86 Hz throughout the treatment. A position update cycle for the treatment aperture consists of four steps: in the first step, at time t = t0, a frame is grabbed; in the second step the frame is processed with the STiL algorithm to get the tumor position at t = t0; in the third step the tumor position at t = t0 + δt is predicted to overcome system latencies; and in the fourth step, the DMLC control software calculates the required leaf motions and applies them at time t = t0 + δt. The prediction model is trained before the start of the treatment with data representing the tumor motion. We analyze the system latency with a dynamic chest phantom (4D motion phantom, Washington University). We estimate the average planar position deviation between target and treatment aperture in a clinical setting by driving the phantom with several lung tumor trajectories (recorded from fiducial tracking during radiotherapy delivery to the lung). DMLC tracking for lung stereotactic body radiation therapy without fiducial markers was successfully demonstrated. The inherent system latency is found to be δt = (230 ± 11) ms for an MV portal image acquisition frame rate of 12.86 Hz. The root mean square deviation between tumor and aperture position is smaller than 1 mm. 
We demonstrate the feasibility of real-time markerless DMLC tracking with a standard LINAC-mounted EPID.
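    The third step above, predicting the tumor position one system latency (δt ≈ 230 ms) ahead, can be sketched with a simple linear extrapolation over recent position estimates. This is a hedged stand-in for the trained prediction model described in the record, which the abstract does not specify.

    ```python
    import numpy as np

    def predict_position(times, positions, latency=0.230, fit_window=5):
        """Predict the tumor position one system latency ahead.

        Fits a least-squares line to the last `fit_window` position estimates
        (sampled at the EPID frame rate) and extrapolates it by `latency`
        seconds. A simple illustrative predictor, not the paper's model.
        """
        t = np.asarray(times[-fit_window:], dtype=float)
        p = np.asarray(positions[-fit_window:], dtype=float)
        slope, intercept = np.polyfit(t, p, 1)  # line p = slope * t + intercept
        return slope * (t[-1] + latency) + intercept
    ```

    At a 12.86 Hz frame rate a 230 ms latency spans roughly three frames, so even this short linear fit bridges several missed updates for smooth breathing traces; more realistic predictors handle the quasi-periodic nature of respiratory motion.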

  19. An Indoor Slam Method Based on Kinect and Multi-Feature Extended Information Filter

    NASA Astrophysics Data System (ADS)

    Chang, M.; Kang, Z.

    2017-09-01

    Based on the ORB-SLAM framework, in this paper the transformation parameters between adjacent Kinect image frames are computed using ORB keypoints, from which the a-priori information matrix and information vector are calculated; the motion update of the multi-feature extended information filter is then realized. From the point cloud data formed by the depth image, the ICP algorithm is used to extract point features of the scene and build an observation model, while calculating the a-posteriori information matrix and information vector and weakening the influence of error accumulation in the positioning process. Furthermore, this paper applies the ORB-SLAM framework to realize autonomous real-time positioning in an unknown indoor environment. Finally, Lidar was used to acquire scene data in order to evaluate the positioning accuracy of the method put forward in this paper.
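    The motion and observation updates of an information filter can be sketched for the linear case (the extended filter linearizes about the current estimate but has the same structure). This is a generic textbook sketch, not the paper's multi-feature implementation.

    ```python
    import numpy as np

    def eif_motion_update(Lam, eta, F, Q):
        """Motion (prediction) step in information form.

        Lam, eta: information matrix and vector (Lam = Sigma^-1,
        eta = Lam @ mu); motion model x' = F x + noise with covariance Q.
        The prediction is easiest via a round trip through moment form.
        """
        Sigma = np.linalg.inv(Lam)       # recover covariance
        mu = Sigma @ eta                 # recover mean
        Sigma_p = F @ Sigma @ F.T + Q    # standard covariance prediction
        Lam_p = np.linalg.inv(Sigma_p)   # back to information form
        return Lam_p, Lam_p @ (F @ mu)

    def eif_measurement_update(Lam, eta, H, R, z):
        """Measurement step: purely additive in information form, which is
        what makes information filters attractive for SLAM back ends."""
        Rinv = np.linalg.inv(R)
        return Lam + H.T @ Rinv @ H, eta + H.T @ Rinv @ z
    ```

    The additive measurement step is where ICP-derived point-feature observations would enter as an a-posteriori information contribution, while the motion step carries the interframe ORB transformation.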

  20. Contribution to defining a geodetic reference frame for Africa (AFREF): Geodynamics implications

    NASA Astrophysics Data System (ADS)

    Saria, Elifuraha E.

    African Reference Frame (AFREF) is the proposed regional three-dimensional standard frame, which will be used to reference positions and velocities for geodetic sites in Africa and its surroundings. This frame will play a crucial role in scientific applications, for example plate motion and crustal deformation studies, and also in mapping when it involves, for example, national boundary surveying, remote sensing, GIS, engineering projects and other development programs in Africa. To contribute to the definition of a geodetic reference frame for Africa and provide the first continent-wide position/velocity solution for Africa, we processed and analyzed 16 years of GPS and 17 years of DORIS data at 133 GPS and 9 DORIS continuously operating geodetic sites in Africa and its surroundings to describe the present-day kinematics of the Nubian and Somalian plates and constrain relative motions across the East African Rift. We use the resulting horizontal velocities to determine the level of rigidity of Nubia, update a plate motion model for the East African Rift, and revise the counterclockwise rotation of the Victoria plate and clockwise rotation of the Rovuma plate with respect to Nubia. The vertical velocities range from -2 to +2 mm/yr, close to their uncertainties, with no clear geographical pattern. This study provides the first continent-wide position/velocity solution for Africa, expressed in the International Terrestrial Reference Frame (ITRF2008), a contribution to the upcoming African Reference Frame (AFREF). In the next step we used the substantial increase in geologic, geophysical and geodetic data in Africa to improve our understanding of the rift geometry and the block kinematics of the EAR. We determined the best-fit fault structure of the rift in terms of locking depth and dip angle and use a block modeling approach where observed velocities are described as the contribution of rigid block rotation and strain accumulation on locked faults. 
Our results show a better fit with three sub-plates (Victoria, Rovuma and Lwandle) between the major plates Nubia and Somalia. We show that the earthquake slip vectors provide information that is consistent with the GPS velocities and significantly help reduce the uncertainties in plate angular velocity estimates. However, we find that 3.16 Myr average spreading rates along the Southwest Indian Ridge (SWIR) from the MORVEL model are systematically faster than GPS-derived motions across that ridge, possibly reflecting the need to revise the MORVEL outward displacement correction. In the final step, we attempt to understand the hydrological loading in Africa, which may affect our geodetic estimates, particularly the uplift rates. In this work, we analyze 10 years (2002 - 2012) of continuous GPS measurements in Africa, and compare them with the modeled hydrological loading deformation inferred from the Gravity Recovery and Climate Experiment (GRACE) at the same GPS locations and for the same time period. We estimated hydrological loading deformation based on the Equivalent Water Height (EWH) derived from the 10-day interval reprocessed GRACE solution, second release (RL02). We take into account, in both GPS and GRACE, the systematic errors from atmospheric pressure and non-tidal ocean loading effects, model the Earth as perfectly elastic, and compute the deformation using appropriate Green's functions. We analyze the strength of association between the observations (GPS) and the model (GRACE) in terms of annual amplitude and phase as well as the original time series. We find a good correlation mainly in regions associated with strong seasonal hydrological variations. To improve the correlation between the two solutions, we subtract the GRACE-derived vertical displacement from the GPS-observed time series and determine the variance reduction. Our solution shows the average variance between the model and the observations reduced to ~40%. (Abstract shortened by UMI.)

  1. A programmable display layer for virtual reality system architectures.

    PubMed

    Smit, Ferdi Alexander; van Liere, Robert; Froehlich, Bernd

    2010-01-01

    Display systems typically operate at a minimum rate of 60 Hz. However, existing VR-architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame. This causes a number of undesirable perceptual artifacts. We describe an architecture that provides a programmable display layer (PDL) in order to generate updated display frames. This replaces the default display behavior of repeating application frames until an update is available. We will show three benefits of the architecture that are typical for VR. First, smooth motion is provided by generating intermediate display frames by per-pixel depth-image warping using 3D motion fields. Smooth motion eliminates various perceptual artifacts due to judder. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images. To evaluate the architecture, we compare image quality and latency to that of a classic level-of-detail approach.
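    The idea of synthesizing intermediate display frames from a per-pixel motion field can be sketched with a plain backward warp. The paper's PDL does depth-image warping with 3D motion fields on the GPU; the nearest-neighbour 2D version below is only a simplified illustration of the frame-generation step.

    ```python
    import numpy as np

    def warp_frame(frame, flow):
        """Generate an intermediate display frame by per-pixel warping.

        frame: (H, W) image from the last application update;
        flow: (H, W, 2) per-pixel displacement (dy, dx) predicted for the
        intermediate time. Backward warping: each output pixel samples the
        source location it moved from, clamped at the image border.
        """
        H, W = frame.shape
        ys, xs = np.mgrid[0:H, 0:W]
        src_y = np.clip(np.rint(ys - flow[..., 0]).astype(int), 0, H - 1)
        src_x = np.clip(np.rint(xs - flow[..., 1]).astype(int), 0, W - 1)
        return frame[src_y, src_x]
    ```

    Repeating the last application frame (the default behavior the PDL replaces) corresponds to a zero flow field; a nonzero field shifts content toward its predicted intermediate position, which is what removes judder.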

  2. Proposed patient motion monitoring system using feature point tracking with a web camera.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

    Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and an in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate motions for each feature point by analysing two consecutive frames. The image processing software employs a color scheme where the defined feature points are blue under stable (no movement) conditions and turn red along with a warning message and an audio signal (beeping alarm) for large patient movements. The initial position of the marker was used by the program to determine the marker positions in all the frames. The software generates a text file containing the calculated motion for each frame and saves the video as a compressed audio video interleave (AVI) file. In summary, we proposed a patient motion monitoring system using a web camera, which is simple and convenient to set up, to increase the safety of treatment delivery.
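    The core of the Lucas-Kanade step used here (the authors use the pyramidal variant, e.g. OpenCV's `calcOpticalFlowPyrLK`) solves a small least-squares system built from image gradients inside a window around each feature point. A single-level NumPy sketch, with a synthetic blob standing in for a tracked marker:

    ```python
    import numpy as np

    def lucas_kanade(prev, curr, y, x, win=7):
        """Single-level Lucas-Kanade displacement at a feature point (y, x).

        Solves the least-squares system A [u, v]^T = -b, where A stacks the
        spatial gradients (Ix, Iy) and b the temporal difference It over a
        win x win window. The pyramidal method repeats this coarse-to-fine.
        """
        h = win // 2
        Iy, Ix = np.gradient(prev.astype(float))  # gradients along y, then x
        It = curr.astype(float) - prev.astype(float)
        sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
        A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
        b = It[sl].ravel()
        (u, v), *_ = np.linalg.lstsq(A, -b, rcond=None)
        return u, v  # displacement in x and y between the two frames
    ```

    In the monitoring system, each feature point's displacement per frame pair is compared against a threshold; exceeding it triggers the red color change and the audible alarm.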

  3. Direct determination of geocenter motion by combining SLR, VLBI, GNSS, and DORIS time series

    NASA Astrophysics Data System (ADS)

    Wu, X.; Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Gross, R. S.; Heflin, M. B.; Jiang, Y.; Parker, J. W.

    2013-12-01

    The longest-wavelength surface mass transport includes three degree-one spherical harmonic components involving hemispherical mass exchanges. The mass load causes geocenter motion between the center-of-mass of the total Earth system (CM) and the center-of-figure of the solid Earth surface (CF), and deforms the solid Earth. Estimation of the degree-1 surface mass changes through CM-CF and degree-1 deformation signatures from space geodetic techniques can thus complement GRACE's time-variable gravity data to form a complete change spectrum up to a high resolution. Currently, SLR is considered the most accurate technique for direct geocenter motion determination. By tracking satellite motion from ground stations, SLR determines the motion between CM and the geometric center of its ground network (CN). This motion is then used to approximate CM-CF and subsequently for deriving degree-1 mass changes. However, the SLR network is very sparse and uneven in global distribution. The average number of operational tracking stations is about 20 in recent years. The poor network geometry can have a large CN-CF motion and is not ideal for the determination of CM-CF motion and degree-1 mass changes. We recently realized an experimental Terrestrial Reference Frame (TRF) through station time series using the Kalman filter and the RTS smoother. The TRF has its origin defined at nearly instantaneous CM using weekly SLR measurement time series. VLBI, GNSS and DORIS time series are combined weekly with those of SLR and tied to the geocentric (CM) reference frame through local tie measurements and co-motion constraints on co-located geodetic stations. The unified geocentric time series of the four geodetic techniques provide a much better network geometry for direct geodetic determination of geocenter motion. Results from this direct approach using a 90-station network compare favorably with those obtained from joint inversions of GPS/GRACE data and ocean bottom pressure models. 
We will also show that a previously identified discrepancy in X-component between direct SLR orbit-tracking and inverse determined geocenter motions is largely reconciled with the new unified network.

  4. Dynamic tracking of prosthetic valve motion and deformation from bi-plane x-ray views: feasibility study

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Wagner, Martin; Raval, Amish N.; Speidel, Michael A.

    2016-03-01

    Transcatheter aortic valve replacement (TAVR) requires navigation and deployment of a prosthetic valve within the aortic annulus under fluoroscopic guidance. To support improved device visualization in this procedure, this study investigates the feasibility of frame-by-frame 3D reconstruction of a moving and expanding prosthetic valve structure from simultaneous bi-plane x-ray views. In the proposed method, a dynamic 3D model of the valve is used in a 2D/3D registration framework to obtain a reconstruction of the valve. For each frame, valve model parameters describing position, orientation, expansion state, and deformation are iteratively adjusted until forward projections of the model match both bi-plane views. Simulated bi-plane imaging of a valve at different signal-difference-to-noise ratio (SDNR) levels was performed to test the approach. 20 image sequences with 50 frames of valve deployment were simulated at each SDNR. The simulation achieved a target registration error (TRE) of the estimated valve model of 0.93 +/- 2.6 mm (mean +/- S.D.) for the lowest SDNR of 2. For higher SDNRs (5 to 50) a TRE of 0.04 +/- 0.23 mm was achieved. A tabletop phantom study was then conducted using a TAVR valve. The dynamic 3D model was constructed from high resolution CT scans and a simple expansion model. TRE was 1.22 +/- 0.35 mm for expansion states varying from undeployed to fully deployed, and for moderate amounts of inter-frame motion. Results indicate that it is feasible to use bi-plane imaging to recover the 3D structure of deformable catheter devices.

  5. Dynamic tracking of prosthetic valve motion and deformation from bi-plane x-ray views: feasibility study.

    PubMed

    Hatt, Charles R; Wagner, Martin; Raval, Amish N; Speidel, Michael A

    2016-01-01

    Transcatheter aortic valve replacement (TAVR) requires navigation and deployment of a prosthetic valve within the aortic annulus under fluoroscopic guidance. To support improved device visualization in this procedure, this study investigates the feasibility of frame-by-frame 3D reconstruction of a moving and expanding prosthetic valve structure from simultaneous bi-plane x-ray views. In the proposed method, a dynamic 3D model of the valve is used in a 2D/3D registration framework to obtain a reconstruction of the valve. For each frame, valve model parameters describing position, orientation, expansion state, and deformation are iteratively adjusted until forward projections of the model match both bi-plane views. Simulated bi-plane imaging of a valve at different signal-difference-to-noise ratio (SDNR) levels was performed to test the approach. 20 image sequences with 50 frames of valve deployment were simulated at each SDNR. The simulation achieved a target registration error (TRE) of the estimated valve model of 0.93 ± 2.6 mm (mean ± S.D.) for the lowest SDNR of 2. For higher SDNRs (5 to 50) a TRE of 0.04 mm ± 0.23 mm was achieved. A tabletop phantom study was then conducted using a TAVR valve. The dynamic 3D model was constructed from high resolution CT scans and a simple expansion model. TRE was 1.22 ± 0.35 mm for expansion states varying from undeployed to fully deployed, and for moderate amounts of inter-frame motion. Results indicate that it is feasible to use bi-plane imaging to recover the 3D structure of deformable catheter devices.

  6. Impact of Glacial Isostatic Adjustment on North America Plate Specific Terrestrial Reference Frame

    NASA Astrophysics Data System (ADS)

    Herring, Thomas; Melbourne, Tim; Murray, Mark; Floyd, Mike; Szeliga, Walter; King, Robert; Phillips, David; Puskas, Christine

    2017-04-01

    We examine the impact of incorporating glacial isostatic adjustment (GIA) models in determining the Euler poles for plate-specific terrestrial reference frames. We will specifically examine the impact of GIA models on the realization of a North America reference frame. We use a combination of the velocity fields determined by the Geodesy Advancing Geosciences and EarthScope (GAGE) Facility, which analyzes GPS data from the Plate Boundary Observatory (PBO) and other geodetic quality GPS sites in North America, and from the ITRF2014 re-analysis. Initial analysis of the GAGE velocity field shows reduced root-mean-square (RMS) scatter of velocity estimate residuals when the North America Euler pole is estimated including the ICE-6G GIA model. The reduction in the north-south direction is from 0.69 mm/yr to 0.52 mm/yr, in the east-west direction from 0.34 mm/yr to 0.30 mm/yr and in height from 0.93 mm/yr to 0.72 mm/yr. The reduction in the height RMS is not surprising since contemporary geodetic height velocity estimates are used in developing the ICE-6G model. Contemporary horizontal motions are not used in the GIA model development, and the reduction in horizontal RMS provides a partial validation of the model. There is no reduction in the horizontal velocity residuals when the ICE-5G model is used. Although removing the ICE-6G model before fitting an Euler pole for the North American plate reduces the RMS of the residuals, the pattern of residuals is still systematic, suggesting that a spherically symmetric viscosity model might not be adequate for accurate modeling of the horizontal motions associated with GIA in North America. This presentation will focus on the prospects and impacts of incorporating GIA models in plate-specific Euler poles, with emphasis on North America.
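    The residuals discussed above are site velocities minus the rigid-plate prediction of a fitted Euler pole, v = ω × r. A minimal sketch of that prediction; the pole location and rate below are placeholders, not the North America pole from this record.

    ```python
    import numpy as np

    R_EARTH = 6.371e6  # mean Earth radius, m

    def plate_velocity(lat, lon, pole_lat, pole_lon, omega_deg_per_myr):
        """Velocity (m/yr, ECEF components) predicted at a site by a rigid
        plate rotating about an Euler pole: v = omega x r.

        lat/lon: site coordinates (deg); pole_lat/pole_lon: Euler pole (deg);
        omega_deg_per_myr: rotation rate in degrees per million years.
        """
        def unit(latd, lond):
            la, lo = np.radians([latd, lond])
            return np.array([np.cos(la) * np.cos(lo),
                             np.cos(la) * np.sin(lo),
                             np.sin(la)])
        omega = unit(pole_lat, pole_lon) * np.radians(omega_deg_per_myr) / 1e6  # rad/yr
        r = unit(lat, lon) * R_EARTH
        return np.cross(omega, r)
    ```

    Fitting the pole means choosing (pole_lat, pole_lon, rate) to minimize the RMS of observed minus predicted horizontal velocities; subtracting a GIA model from the observations before this fit is what reduces the quoted residual scatter.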

  7. Orbital motions of astronomical bodies and their centre of mass from different reference frames: a conceptual step between the geocentric and heliocentric models

    NASA Astrophysics Data System (ADS)

    Guerra, André G. C.; Simeão Carvalho, Paulo

    2016-09-01

    The motion of astronomical bodies and of the centre of mass of the system is not always well understood by students. One of the difficulties is the conceptual change of reference frame, the same one that held back acceptance of the Heliocentric model over the Geocentric one. To address this question, the notions of centre of mass, equations of motion (and their numerical solution for a system of multiple bodies), and change of frame of reference are introduced. The discussion is based on conceptual and real-world examples using the solar system. Consequently, through the use of simple ‘do it yourself’ methods and basic equations, students can debate complex motions and gain a wider and potentially more effective understanding of physics.
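    The centre-of-mass idea behind this article can be demonstrated numerically: with zero total momentum, the centre of mass of a gravitating two-body system stays fixed while both bodies orbit around it. A minimal sketch with approximate Sun- and Jupiter-like values (all numbers illustrative):

```python
import numpy as np

G = 6.674e-11                                   # gravitational constant
# hypothetical two-body system with Sun- and Jupiter-like values (SI units)
m = np.array([1.989e30, 1.898e27])              # masses, kg
r = np.array([[0.0, 0.0], [7.785e11, 0.0]])     # positions, m
v = np.array([[0.0, 0.0], [0.0, 1.306e4]])      # velocities, m/s
v[0] = -m[1] * v[1] / m[0]                      # make total momentum zero

def centre_of_mass(m, r):
    return (m[:, None] * r).sum(axis=0) / m.sum()

cm0 = centre_of_mass(m, r)
dt = 86400.0                                    # one-day time step
for _ in range(365):                            # symplectic Euler, one year
    d = r[1] - r[0]
    a = G * d / np.linalg.norm(d) ** 3          # acceleration per unit mass
    v[0] += m[1] * a * dt                       # body 0 pulled towards body 1
    v[1] -= m[0] * a * dt                       # and vice versa
    r += v * dt
cm1 = centre_of_mass(m, r)
# with zero total momentum the centre of mass stays put (up to rounding)
```

    Evaluating the same trajectories in a frame attached to either body then makes the Geocentric/Heliocentric change of reference frame concrete.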

  8. SU-F-303-11: Implementation and Applications of Rapid, SIFT-Based Cine MR Image Binning and Region Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazur, T; Wang, Y; Fischer-Valuck, B

    2015-06-15

    Purpose: To develop a novel and rapid, SIFT-based algorithm for assessing feature motion on cine MR images acquired during MRI-guided radiotherapy treatments. In particular, we apply SIFT descriptors toward both partitioning cine images into respiratory states and tracking regions across frames. Methods: Among a training set of images acquired during a fraction, we densely assign SIFT descriptors to pixels within the images. We cluster these descriptors across all frames in order to produce a dictionary of trackable features. Associating the best-matching descriptors at every frame among the training images to these features, we construct motion traces for the features. We use these traces to define respiratory bins for sorting images in order to facilitate robust pixel-by-pixel tracking. Instead of applying conventional methods for identifying pixel correspondences across frames, we utilize a recently-developed algorithm that derives correspondences via a matching objective for SIFT descriptors. Results: We apply these methods to a collection of lung, abdominal, and breast patients. We evaluate the procedure for respiratory binning using target sites exhibiting high-amplitude motion among 20 lung and abdominal patients. In particular, we investigate whether these methods yield minimal variation between images within a bin by perturbing the resulting image distributions among bins. Moreover, we compare the motion between averaged images across respiratory states to 4DCT data for these patients. We evaluate the algorithm for obtaining pixel correspondences between frames by tracking contours among a set of breast patients. As an initial case, we track easily-identifiable edges of lumpectomy cavities that show minimal motion over treatment. Conclusions: These SIFT-based methods reliably extract motion information from cine MR images acquired during patient treatments. While we performed our analysis retrospectively, the algorithm lends itself to prospective motion assessment. Applications of these methods include motion assessment, identifying treatment windows for gating, and determining optimal margins for treatment.
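    The respiratory-binning step can be illustrated with a toy stand-in: reduce the tracked feature trace to a 1D amplitude signal and split the frames at quantiles so each bin receives a similar number of frames. The actual abstract clusters dense SIFT descriptors first; the trace below is synthetic:

```python
import numpy as np

def amplitude_bins(trace, n_bins=4):
    """Assign each frame to a respiratory bin by motion amplitude.

    A simplified stand-in for the binning step: the tracked feature trace is
    reduced to a 1D amplitude signal and split at quantiles, so each bin
    receives a similar number of frames."""
    edges = np.quantile(trace, np.linspace(0.0, 1.0, n_bins + 1))
    edges[-1] += 1e-9                       # make the top edge inclusive
    return np.clip(np.digitize(trace, edges) - 1, 0, n_bins - 1)

# synthetic breathing trace: five 5-second cycles sampled at 4 Hz
t = np.linspace(0.0, 25.0, 100)
trace = 10.0 * np.sin(2 * np.pi * t / 5.0)  # mm, hypothetical amplitude
bins = amplitude_bins(trace, n_bins=4)
```

    Amplitude binning (rather than phase binning) is one common design choice; the within-bin image variation test described in the abstract would then compare frames sharing a bin label.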

  9. Motion representation of the long fingers: a proposal for the definitions of new anatomical frames.

    PubMed

    Coupier, Jérôme; Moiseev, Fédor; Feipel, Véronique; Rooze, Marcel; Van Sint Jan, Serge

    2014-04-11

    Despite the availability of the International Society of Biomechanics (ISB) recommendations for the orientation of anatomical frames, no consensus exists about motion representations related to finger kinematics. This paper proposes novel anatomical frames for motion representation of the phalangeal segments of the long fingers. A three-dimensional model of a human forefinger was acquired from a non-pathological fresh-frozen hand. Medical imaging was used to collect phalangeal discrete positions. Data processing was performed using a customized software interface ("lhpFusionBox") to create a specimen-specific model and to reconstruct the discrete motion path. Five examiners virtually palpated two sets of landmarks. These markers were then used to build anatomical frames following two methods: a reference method following ISB recommendations and a newly-developed method based on the mean helical axis (HA). Motion representations were obtained and compared between examiners. Virtual palpation precision was around 1 mm, which is comparable to results from the literature. The comparison of the two methods showed that the helical axis method seemed more reproducible between examiners, especially for secondary, or accessory, motions. Computed root-mean-square distances comparing the methods showed that the ISB method displayed a variability 10 times higher than the HA method. The HA method seems suitable for finger motion representation using discrete positions from medical imaging. Further investigation is required before the methodology can be used with continuous tracking of markers set on the subject's hand. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Geocenter Motion Derived from GNSS and SLR Tracking Data of LEO

    NASA Astrophysics Data System (ADS)

    Li, Y. S.; Ning, F. S.; Tseng, K. H.; Tseng, T. P.; Wu, J. M.; Chen, K. L.

    2017-12-01

    Space geodesy techniques can monitor global variations with high precision and large coverage through satellites. Geocenter motion (GM) describes the difference of the CF (Center of Figure) with respect to the CM (Center of Mass of the Earth system) due to re-distribution and deformation of mass in the Earth system. Because satellites orbit around the CM, tracking data between ground stations and satellites are tied to the CM, and geocenter motion is therefore related to the realization of the ITRF (International Terrestrial Reference Frame) origin. In this study, GPS (Global Positioning System) observation data of the IGS (International GNSS Service) and SLR (Satellite Laser Ranging) tracking data are applied to estimate the coordinates of observing sites on the Earth's surface. The GPS observing sites are distributed deliberately and globally on a 15° × 15° grid. Meanwhile, two different global ocean tide models are applied. The model used in ITRF comparison and combination is a parameter transformation, a mathematical formula that transforms between the ITRF and the CM system; following the parameter transformation, the geocenter motion can be determined. The FORMOSAT-7/COSMIC-2 (F7C2) mission is a constellation of LEO (Low-Earth-Orbit) satellites, scheduled for launch in 2018. Besides the observing system for meteorology, ionosphere, and climate, the F7C2 satellites will be equipped with an LRR (Laser Ranging Retroreflector). This work is a pilot survey of the application of LEO SLR data in Taiwan.
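    The "parameter transformation" referred to here is commonly a 7-parameter (Helmert) similarity transformation; in its small-angle form, the translation part between a CM-realized and a CF-realized frame is the geocenter offset. A sketch with synthetic station coordinates and made-up mm-level parameters:

```python
import numpy as np

def helmert_7param(x_from, x_to):
    """Small-angle 7-parameter (Helmert) transformation by least squares:
    x_to = x_from + T + D * x_from + r x x_from.
    Returns [Tx, Ty, Tz, D, rx, ry, rz]; the translation T between a
    CM-realized and a CF-realized frame is the geocenter offset."""
    rows, rhs = [], []
    for p, q in zip(x_from, x_to):
        x, y, z = p
        rows.append(np.array([[1.0, 0.0, 0.0, x, 0.0, z, -y],
                              [0.0, 1.0, 0.0, y, -z, 0.0, x],
                              [0.0, 0.0, 1.0, z, y, -x, 0.0]]))
        rhs.append(q - p)
    sol, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return sol

# synthetic check: stations on the sphere, made-up mm-level parameters
rng = np.random.default_rng(1)
u = rng.normal(size=(20, 3))
x1 = 6.371e6 * u / np.linalg.norm(u, axis=1, keepdims=True)
T_true = np.array([0.005, -0.003, 0.002])             # metres
D_true, r_true = 2e-9, np.array([1e-9, -2e-9, 3e-9])  # scale, radians
x2 = x1 + T_true + D_true * x1 + np.cross(r_true, x1)
params = helmert_7param(x1, x2)
```

    In practice the two coordinate sets would be an SLR/GPS-derived solution and an ITRF realization, estimated epoch by epoch to produce a geocenter-motion time series.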

  11. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
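    The quoted transport rate can be cross-checked from the film geometry, assuming the standard 0.1870-inch perforation pitch of 65 mm stock (an assumption; the abstract gives only the perforation count and frame rate):

```python
# cross-check of the quoted film transport rate; the 0.1870-inch
# perforation pitch of 65 mm stock is an assumption (standard KS pitch)
perf_pitch_in = 0.1870          # inches of film per perforation
perfs_per_frame = 15            # horizontal 15-perf pull across
fps = 24                        # standard frame rate
inches_per_second = perf_pitch_in * perfs_per_frame * fps
feet_per_minute = inches_per_second * 60 / 12
# feet_per_minute ≈ 336.6, consistent with the quoted 336 feet per minute
```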

  12. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion commonly appears in video content that is captured and compressed for various applications, including cloud gaming and vehicle and aerial monitoring. Existing approaches based on an eight-parameter homography motion model cannot handle it efficiently, due either to low prediction accuracy or to excessive bit-rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit-rate savings can be achieved (at equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed bit-rate savings ranging from 3.7% to 9.1%.
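    The per-block prediction in a scheme like this follows the plane-induced homography: with intrinsics K, relative camera motion (R, t), and a plane nᵀX = d in the reference camera frame, corresponding pixels are related by H = K(R + t nᵀ/d)K⁻¹ (sign conventions vary with the plane parameterization). A minimal numerical check with made-up camera parameters:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the plane n.X = d (reference-camera frame),
    for a second camera at X2 = R @ X1 + t, in pixel coordinates."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

# made-up intrinsics, small camera motion, and a fronto-parallel plane
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
th = 0.01
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([0.1, 0.0, 0.02])
n, d = np.array([0.0, 0.0, 1.0]), 5.0       # plane z = 5
H = plane_homography(K, R, t, n, d)

# a 3D point on the plane projects consistently through H
X1 = np.array([1.0, -0.5, 5.0])
x1 = K @ X1; x1 /= x1[2]
x2 = K @ (R @ X1 + t); x2 /= x2[2]
x2_h = H @ x1; x2_h /= x2_h[2]
```

    This is why coding R, t, K once per frame and only three plane parameters per block can out-perform sending a full eight-parameter homography per region.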

  13. Scene-based nonuniformity correction with video sequences and registration.

    PubMed

    Hardie, R C; Hayat, M M; Armstrong, E; Yasuda, B

    2000-03-10

    We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters. These can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results, to illustrate the performance of the algorithm, include the use of visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
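    The curve-fitting step can be sketched for a single detector with a linear (gain/offset) response model; the "true" scene values here are simulated directly rather than obtained by registration and averaging as in the algorithm:

```python
import numpy as np

def fit_detector_response(scene_vals, observed):
    """Least-squares fit of observed = gain * scene + offset for one detector."""
    A = np.vstack([scene_vals, np.ones_like(scene_vals)]).T
    (gain, offset), *_ = np.linalg.lstsq(A, observed, rcond=None)
    return gain, offset

def correct(observed, gain, offset):
    """Invert the fitted response to remove the fixed-pattern nonuniformity."""
    return (observed - offset) / gain

rng = np.random.default_rng(2)
z = rng.uniform(100.0, 1000.0, size=50)            # scene-value estimates
y = 1.07 * z - 12.5 + rng.normal(0.0, 0.5, 50)     # simulated detector output
gain, offset = fit_detector_response(z, y)
z_hat = correct(y, gain, offset)
```

    Repeating this fit independently for every detector in the array yields the per-pixel correction map.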

  14. Plate Motions, Regional Deformation, and Time-Variation of Plate Motions

    NASA Technical Reports Server (NTRS)

    Gordon, R. G.

    1998-01-01

    The significant results obtained with support of this grant include the following: (1) Using VLBI data in combination with other geodetic, geophysical, and geological data to bound the present rotation of the Colorado Plateau, and to evaluate its implications for the kinematics and seismogenic potential of the western half of the conterminous U.S. (2) Determining realistic estimates of uncertainties for VLBI data and then applying the data and uncertainties to obtain an upper bound on the integral of deformation within the "stable interior" of the North American and other plates, and thus to place an upper bound on the seismogenic potential within these regions. (3) Combining VLBI data with other geodetic, geophysical, and geologic data to estimate the motion of coastal California in a frame of reference attached to the Sierra Nevada-Great Valley microplate. This analysis has provided new insights into the kinematic boundary conditions that may control or at least strongly influence the locations of asperities that rupture in great earthquakes along the San Andreas transform system. (4) Determining a global tectonic model from VLBI geodetic data that combines the estimation of plate angular velocities with individual site linear velocities where tectonically appropriate. And (5) investigating some of the outstanding problems defined by the work leading to global plate motion model NUVEL-1. These problems, such as the motion between the Pacific and North American plates and between west Africa and east Africa, are focused on regions where the seismogenic potential may be greater than implied by published plate tectonic models.

  15. Estimating geocenter motion and barystatic sea-level variability from GRACE observations with explicit consideration of self-attraction and loading effects

    NASA Astrophysics Data System (ADS)

    Bergmann-Wolf, I.; Dobslaw, H.

    2015-12-01

    Estimating global barystatic sea-level variations from monthly mean gravity fields delivered by the Gravity Recovery and Climate Experiment (GRACE) satellite mission requires additional information about geocenter motion. These variations are not available directly due to the mission implementation in the CM-frame and are represented by the degree-1 terms of the spherical harmonics expansion. Global degree-1 estimates can be determined with the method of Swenson et al. (2008) from ocean mass variability, the geometry of the global land-sea distribution, and GRACE data of higher degrees and orders. Consequently, a recursive relation between the derivation of ocean mass variations from GRACE data and the introduction of geocenter motion into GRACE data exists. In this contribution, we will present a recent improvement to the processing strategy described in Bergmann-Wolf et al. (2014) by introducing a non-homogeneous distribution of global ocean mass variations in the geocenter motion determination strategy, which is due to the effects of loading and self-attraction induced by mass redistributions at the surface. A comparison of different GRACE-based oceanographic products (barystatic signal for both the global oceans and individual basins; barotropic transport variations of major ocean currents) with degree-1 terms estimated with a homogeneous and non-homogeneous ocean mass representation will be discussed, and differences in noise levels in most recent GRACE solutions from GFZ (RL05a), CSR, and JPL (both RL05) and their consequences for the application of this method will be discussed.

  16. Earth Rotation Parameter Solutions using BDS and GPS Data from MEGX Network

    NASA Astrophysics Data System (ADS)

    Xu, Tianhe; Yu, Sumei; Li, Jiajing; He, Kaifei

    2014-05-01

    Earth rotation parameters (ERPs) are necessary to achieve the mutual transformation between the celestial reference frame and the Earth-fixed reference frame. They are very important for satellite precise orbit determination (POD) and high-precision space navigation and positioning. In this paper, the determination of ERPs, including polar motion (PM), polar motion rate (PMR), and length of day (LOD), is presented using BDS and GPS data of June 2013 from the MEGX network based on least-squares (LS) estimation with constraint conditions. BDS and GPS data from 16 co-located stations of the MEGX network are used for the first time to estimate the ERPs. The results show that the RMSs of the x and y component errors of PM and PM rate are about 0.9 mas, 1.0 mas, 0.2 mas/d, and 0.3 mas/d, respectively, using BDS data. The RMS of LOD is about 0.03 ms/d using BDS data. The RMSs of the x and y component errors of PM and PM rate are about 0.2 mas and 0.2 mas/d, respectively, using GPS data. The RMS of LOD is about 0.02 ms/d using GPS data. The optimal relative weight is determined by variance component estimation when combining BDS and GPS data. The accuracy improvement from adding BDS data is between 8% and 20% for PM and PM rate. There is no obvious improvement in LOD when BDS data are involved. System biases between BDS and GPS are also resolved per station; they are very stable from day to day, with an average accuracy of about 20 cm. Keywords: Earth rotation parameter; International GNSS Service; polar motion; length of day; least squares with constraint conditions. Acknowledgments: This work was supported by the Natural Science Foundation of China (41174008) and the Foundation for the Author of National Excellent Doctoral Dissertation of China (2007B51).

  17. Video stereolization: combining motion analysis with user interaction.

    PubMed

    Liao, Miao; Gao, Jizhou; Yang, Ruigang; Gong, Minglun

    2012-07-01

    We present a semiautomatic system that converts conventional videos into stereoscopic videos by combining motion analysis with user interaction, aiming to transfer as much labeling work as possible from the user to the computer. In addition to the widely used structure from motion (SFM) techniques, we develop two new methods that analyze the optical flow to provide additional qualitative depth constraints. They remove the camera movement restriction imposed by SFM so that general motions can be used in scene depth estimation, the central problem in mono-to-stereo conversion. With these algorithms, the user's labeling task is significantly simplified. We further develop a quadratic programming approach to incorporate both quantitative depth and qualitative depth (such as that from user scribbling) to recover dense depth maps for all frames, from which stereoscopic views can be synthesized. In addition to visual results, we present user study results showing that our approach is more intuitive and less labor intensive, while producing 3D effects comparable to those from current state-of-the-art interactive algorithms.

  18. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly-used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With its powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieve good performance.

  19. The Controllable Ball Joint Mechanism

    NASA Astrophysics Data System (ADS)

    Tung, Yung Cheng; Chieng, Wei-Hua; Ho, Shrwai

    A controllable ball joint mechanism with three rotational degrees of freedom is proposed in this paper. The mechanism is composed of three bevel gears, one of which rotates with respect to a fixed frame while the others rotate with respect to individual floating frames. The output is the resultant of the differential motions produced by the motors that rotate the bevel gears at the fixed frame and the floating frames. The mechanism is capable of large rotations, and the structure is potentially compact. The necessary inverse and forward kinematic analyses, as well as the derivation of kinematic singularities, are provided according to the kinematically equivalent structure described in this paper.

  20. Covariant Uniform Acceleration

    NASA Astrophysics Data System (ADS)

    Friedman, Yaakov; Scarr, Tzvi

    2013-04-01

    We derive a 4D covariant Relativistic Dynamics Equation. This equation canonically extends the 3D relativistic dynamics equation F = dp/dt, where F is the 3D force and p = m0γv is the 3D relativistic momentum. The standard 4D equation is only partially covariant. To achieve full Lorentz covariance, we replace the four-force F by a rank 2 antisymmetric tensor acting on the four-velocity. By taking this tensor to be constant, we obtain a covariant definition of uniformly accelerated motion. This solves a problem of Einstein and Planck. We compute explicit solutions for uniformly accelerated motion. The solutions are divided into four Lorentz-invariant types: null, linear, rotational, and general. For null acceleration, the worldline is cubic in the time. Linear acceleration covariantly extends 1D hyperbolic motion, while rotational acceleration covariantly extends pure rotational motion. We use Generalized Fermi-Walker transport to construct a uniformly accelerated family of inertial frames which are instantaneously comoving to a uniformly accelerated observer. We explain the connection between our approach and that of Mashhoon. We show that our solutions of uniformly accelerated motion have constant acceleration in the comoving frame. Assuming the Weak Hypothesis of Locality, we obtain local spacetime transformations from a uniformly accelerated frame K' to an inertial frame K. The spacetime transformations between two uniformly accelerated frames with the same acceleration are Lorentz. We compute the metric at an arbitrary point of a uniformly accelerated frame. We obtain velocity and acceleration transformations from a uniformly accelerated system K' to an inertial frame K. We introduce the 4D velocity, an adaptation of Horwitz and Piron's notion of "off-shell". We derive the general formula for the time dilation between accelerated clocks. We obtain a formula for the angular velocity of a uniformly accelerated object. Every rest point of K' is uniformly accelerated, and its acceleration is a function of the observer's acceleration and its position. We obtain an interpretation of the Lorentz-Abraham-Dirac equation as an acceleration transformation from K' to K.
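    The linear (1D hyperbolic) case can be checked numerically: the worldline x(τ) = (c²/a)(cosh(aτ/c) − 1), t(τ) = (c/a) sinh(aτ/c) has a four-velocity of constant norm c and a four-acceleration of constant magnitude a, consistent with constant acceleration in the comoving frame. A sketch in natural units (c = 1, with an arbitrary proper acceleration):

```python
import numpy as np

c, a = 1.0, 0.5                     # natural units; proper acceleration a
tau = np.linspace(0.0, 5.0, 1000)   # proper time
t = (c / a) * np.sinh(a * tau / c)  # coordinate time along the worldline
x = (c**2 / a) * (np.cosh(a * tau / c) - 1.0)

dstep = tau[1] - tau[0]
ut, ux = np.gradient(t, dstep), np.gradient(x, dstep)    # four-velocity
at, ax = np.gradient(ut, dstep), np.gradient(ux, dstep)  # four-acceleration
norm_u = ut**2 - ux**2              # should be c^2 = 1 everywhere
norm_a = ax**2 - at**2              # should be a^2 = 0.25 everywhere
```

    Analytically, u = (cosh(aτ), sinh(aτ)) and the four-acceleration is a·(sinh(aτ), cosh(aτ)), so both invariants are constant along the worldline.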

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, M; Yuan, Y; Lo, Y

    Purpose: To develop a novel strategy to extract the lung tumor motion from cone beam CT (CBCT) projections by an active contour model with interpolated respiration learned from diaphragm motion. Methods: Tumor tracking on CBCT projections was accomplished with templates derived from the planning CT (pCT). There are three major steps in the proposed algorithm: 1) The pCT was modified to form two CT sets: a tumor-removed pCT and a tumor-only pCT; the respective digitally reconstructed radiographs, DRRtr and DRRto, following the same geometry as the CBCT projections, were generated correspondingly. 2) The DRRtr was rigidly registered with the CBCT projections on a frame-by-frame basis. Difference images between the CBCT projections and the registered DRRtr were generated, in which tumor visibility was appreciably enhanced. 3) An active contour method was applied to track the tumor motion on the tumor-enhanced projections, with DRRto as templates to initialize the tracking, while the respiratory motion was compensated for by interpolating the diaphragm motion estimated by our novel constrained linear regression approach. CBCT and pCT from five patients undergoing stereotactic body radiotherapy were included, in addition to scans from a Quasar phantom programmed with known motion. Manual tumor tracking was performed on the CBCT projections and compared to the automatic tracking to evaluate the algorithm's accuracy. Results: The phantom study showed that the error between the automatic tracking and the ground truth was within 0.2 mm. For the patients, the discrepancy between the calculation and the manual tracking was between 1.4 and 2.2 mm depending on the location and shape of the lung tumor. Similar patterns were observed in the frequency domain. Conclusion: The new algorithm demonstrated the feasibility of tracking the lung tumor from noisy CBCT projections, providing a potential solution to better motion management for lung radiation therapy.

  2. Estimating geocenter motion and barystatic sea-level variability from GRACE observations with explicit consideration of self-attraction and loading effects

    NASA Astrophysics Data System (ADS)

    Bergmann-Wolf, Inga; Dobslaw, Henryk

    2016-04-01

    Estimating global barystatic sea-level variations from monthly mean gravity fields delivered by the Gravity Recovery and Climate Experiment (GRACE) satellite mission requires additional information about geocenter motion. These variations are not available directly due to the mission implementation in the CM-frame and are represented by the degree-1 terms of the spherical harmonics expansion. Global degree-1 estimates can be determined with the method of Swenson et al. (2008) from ocean mass variability, the geometry of the global land-sea distribution, and GRACE data of higher degrees and orders. Consequently, a recursive relation between the derivation of ocean mass variations from GRACE data and the introduction of geocenter motion into GRACE data exists. In this contribution, we will present a recent improvement to the processing strategy described in Bergmann-Wolf et al. (2014) by introducing a non-homogeneous distribution of global ocean mass variations in the geocenter motion determination strategy, which is due to the effects of loading and self-attraction induced by mass redistributions at the surface. A comparison of different GRACE-based oceanographic products (barystatic signal for both the global oceans and individual basins; barotropic transport variations of major ocean currents) with degree-1 terms estimated with a homogeneous and non-homogeneous ocean mass representation will be discussed, and differences in noise levels in most recent GRACE solutions from GFZ (RL05a), CSR, and JPL (both RL05) and their consequences for the application of this method will be discussed. Swenson, S., D. Chambers and J. Wahr (2008), Estimating geocenter variations from a combination of GRACE and ocean model output, J. Geophys. Res., 113, B08410 Bergmann-Wolf, I., L. Zhang and H. Dobslaw (2014), Global Eustatic Sea-Level Variations for the Approximation of Geocenter Motion from GRACE, J. Geod. Sci., 4, 37-48

  3. Robust automatic line scratch detection in films.

    PubMed

    Newson, Alasdair; Almansa, Andrés; Gousseau, Yann; Pérez, Patrick

    2014-03-01

    Line scratch detection in old films is a particularly challenging problem due to the variable spatiotemporal characteristics of this defect. Some of the main problems include sensitivity to noise and texture, and false detections due to thin vertical structures belonging to the scene. We propose a robust and automatic algorithm for frame-by-frame line scratch detection in old films, as well as a temporal algorithm for the filtering of false detections. In the frame-by-frame algorithm, we relax some of the hypotheses used in previous algorithms in order to detect a wider variety of scratches. This step's robustness and lack of external parameters are ensured by the combined use of an a contrario methodology and local statistical estimation. In this manner, over-detection in textured or cluttered areas is greatly reduced. The temporal filtering algorithm eliminates false detections due to thin vertical structures by exploiting the coherence of their motion with that of the underlying scene. Experiments demonstrate the ability of the resulting detection procedure to deal with difficult situations, in particular in the presence of noise, texture, and slanted or partial scratches. Comparisons show significant advantages over previous work.
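    The a contrario criterion can be sketched as a number-of-false-alarms (NFA) test on a binomial tail: a candidate scratch segment is accepted only if the expected number of equally good detections in pure noise is below a threshold. The exact statistic in the paper differs in detail; the numbers below are illustrative:

```python
from math import comb

def nfa(n_tests, l, k, p):
    """Expected number of false alarms: n_tests times the binomial tail
    P[B(l, p) >= k], i.e. the chance that pure noise produces k or more
    scratch-like pixels in a column segment of length l."""
    tail = sum(comb(l, i) * p**i * (1.0 - p)**(l - i) for i in range(k, l + 1))
    return n_tests * tail

# a 100-pixel segment where 90 pixels pass the local scratch criterion is
# overwhelmingly unlikely under noise (p = 0.5), even with 1e5 candidate
# segments tested, so it is declared meaningful (NFA < 1)
assert nfa(1e5, 100, 90, 0.5) < 1.0
# 55 of 100 is unremarkable under noise and is rejected
assert nfa(1e5, 100, 55, 0.5) > 1.0
```

    Thresholding the NFA at 1 is what removes the external detection parameters: the only tuning is the (interpretable) expected number of false alarms per frame.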

  4. The effect of regularization in motion compensated PET image reconstruction: a realistic numerical 4D simulation study.

    PubMed

    Tsoumpas, C; Polycarpou, I; Thielemans, K; Buerger, C; King, A P; Schaeffter, T; Marsden, P K

    2013-03-21

    Following continuous improvement in PET spatial resolution, respiratory motion correction has become an important task. Two of the most common approaches that utilize all detected PET events to motion-correct PET data are the reconstruct-transform-average method (RTA) and motion-compensated image reconstruction (MCIR). In RTA, separate images are reconstructed for each respiratory frame, subsequently transformed to one reference frame and finally averaged to produce a motion-corrected image. In MCIR, the projection data from all frames are reconstructed by including motion information in the system matrix so that a motion-corrected image is reconstructed directly. Previous theoretical analyses have explained why MCIR is expected to outperform RTA. It has been suggested that MCIR creates less noise than RTA because the images for each separate respiratory frame will be severely affected by noise. However, recent investigations have shown that in the unregularized case RTA images can have fewer noise artefacts, while MCIR images are more quantitatively accurate but have the common salt-and-pepper noise. In this paper, we perform a realistic numerical 4D simulation study to compare the advantages gained by including regularization within reconstruction for RTA and MCIR, in particular using the median-root-prior incorporated in the ordered subsets maximum a posteriori one-step-late algorithm. In this investigation we have demonstrated that MCIR with proper regularization parameters reconstructs lesions with less bias and root mean square error and similar CNR and standard deviation to regularized RTA. This finding is reproducible for a variety of noise levels (25, 50, 100 million counts), lesion sizes (8 mm, 14 mm diameter) and iterations. Nevertheless, regularized RTA can also be a practical solution for motion compensation as a proper level of regularization reduces both bias and mean square error.

  5. Cinematic Characterization of Convected Coherent Structures Within a Continuous Flow Z-Pinch

    NASA Astrophysics Data System (ADS)

    Underwood, Thomas; Rodriguez, Jesse; Loebner, Keith; Cappelli, Mark

    2017-10-01

    In this study, two separate diagnostics are applied to a plasma jet produced from a coaxial accelerator with characteristic velocities exceeding 10⁵ m/s and timescales of 10 μs. In the first of these, an ultra-high frame rate CMOS camera coupled to a Z-type laser Schlieren apparatus is used to obtain flow-field refractometry data for the continuous flow Z-pinch formed within the plasma deflagration jet. The 10 MHz frame rate for 256 consecutive frames provides high temporal resolution, enabling turbulent fluctuations and plasma instabilities to be visualized over the course of a single pulse. The unique advantage of this diagnostic is its ability to simultaneously resolve both structural and temporal evolution of instabilities and density gradients within the flow. To allow for a more meaningful statistical analysis of the resulting wave motion, a multiple B-dot probe array was constructed and calibrated to operate over a broadband frequency range up to 100 MHz. The resulting probe measurements are incorporated into a wavelet analysis to uncover the dispersion relation of recorded wave motion and furthermore uncover instability growth rates. Finally these results are compared with theoretical growth rate estimates to identify underlying physics. This work is supported by the U.S. Department of Energy Stewardship Science Academic Program in addition to the National Defense Science Engineering Graduate Fellowship.

  6. Output-only modal dynamic identification of frames by a refined FDD algorithm at seismic input and high damping

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio

    2016-02-01

    The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification is attempted successfully on given seismic input, taken as base excitation, including both strong motion data and single and multiple input ground motions. Unlike earlier attempts that investigated the role of seismic response signals in the Time Domain, this paper carries out the identification analysis in the Frequency Domain. Results turn out very consistent with the target values, with quite limited errors in the modal estimates, including for the damping ratios, which range from the order of 1% to 10%. Both seismic excitation and high damping values, which prove critical even for well-spaced modes, violate traditional FDD assumptions; that the algorithm still succeeds demonstrates its consistency. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames under seismic input is feasible, even at concomitant high damping.
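
    At the core of any FDD variant is a singular value decomposition of the output cross-spectral density matrix at each frequency line; peaks in the first singular value locate the modes. A generic sketch of that core step (not the authors' refined rFDD, whose specific strategies are not reproduced here; the 5 Hz test signal is invented for illustration):

```python
import numpy as np
from scipy.signal import csd

def fdd_first_singular_values(signals, fs, nperseg=256):
    """Core of Frequency Domain Decomposition (FDD): assemble the output
    cross-spectral density (CSD) matrix G(f) from all response channels,
    then take its first singular value at each frequency line.  Peaks of
    s1(f) indicate modal frequencies."""
    n = signals.shape[0]
    f, _ = csd(signals[0], signals[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            _, G[:, i, j] = csd(signals[i], signals[j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
    return f, s1

# two noisy output channels dominated by a single 5 Hz mode
fs = 100.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 5.0 * t)
signals = np.vstack([mode + 0.1 * rng.standard_normal(t.size),
                     0.8 * mode + 0.1 * rng.standard_normal(t.size)])
f, s1 = fdd_first_singular_values(signals, fs)  # s1 peaks near 5 Hz
```

    The paper's contribution lies precisely in what this sketch omits: making the peak-picking and damping estimation reliable under seismic (non-white) excitation and high damping.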

  7. Evaluation of Event-Based Algorithms for Optical Flow with Ground-Truth from Inertial Measurement Sensor

    PubMed Central

    Rueckauer, Bodo; Delbruck, Tobi

    2016-01-01

    In this study we compare nine optical flow algorithms that locally measure the flow normal to edges, in terms of accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: in the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground truth against which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives, and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real time on dense natural input recorded by a DAVIS camera. PMID:27199639
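
    The gyro-based ground truth exploits the standard result that, for a camera undergoing pure rotation, optical flow is independent of scene depth. A minimal sketch under a pinhole model (the focal length and pixel coordinates below are illustrative assumptions, not the DAVIS calibration used in the paper):

```python
def rotational_flow(x, y, wx, wy, wz, f):
    """Optical flow (u, v) at image point (x, y) induced by pure camera
    rotation (wx, wy, wz) [rad/s] about the three camera axes, for a
    pinhole camera with focal length f [px].  The translational terms of
    the flow equations vanish, so the result is depth-independent."""
    u = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    v = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return u, v

# at the principal point, rotation about the optical axis induces no flow
u0, v0 = rotational_flow(0.0, 0.0, 0.0, 0.0, 1.0, 100.0)  # -> (0.0, 0.0)
```

    Evaluating this field at each event's pixel location, with (wx, wy, wz) read from the IMU, gives the reference flow to which the event-based estimates are compared.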

  8. On event-based optical flow detection

    PubMed Central

    Brosch, Tobias; Tschechne, Stephan; Neumann, Heiko

    2015-01-01

    Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low-energy, high dynamic range, and sparse sensing. This stands in contrast to whole image frame-wise acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection ranging from gradient-based methods over plane-fitting to filter-based methods and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane-like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion related activations. PMID:25941470

  9. Estimation of cyclic interstory drift capacity of steel framed structures and future applications for seismic design.

    PubMed

    Bojórquez, Edén; Reyes-Salazar, Alfredo; Ruiz, Sonia E; Terán-Gilmore, Amador

    2014-01-01

    Several studies have been devoted to calibrate damage indices for steel and reinforced concrete members with the purpose of overcoming some of the shortcomings of the parameters currently used during seismic design. Nevertheless, there is a challenge to study and calibrate the use of such indices for the practical structural evaluation of complex structures. In this paper, an energy-based damage model for multidegree-of-freedom (MDOF) steel framed structures that accounts explicitly for the effects of cumulative plastic deformation demands is used to estimate the cyclic drift capacity of steel structures. To achieve this, seismic hazard curves are used to discuss the limitations of the maximum interstory drift demand as a performance parameter to achieve adequate damage control. Then the concept of cyclic drift capacity, which incorporates information of the influence of cumulative plastic deformation demands, is introduced as an alternative for future applications of seismic design of structures subjected to long duration ground motions.

  10. Estimation of Cyclic Interstory Drift Capacity of Steel Framed Structures and Future Applications for Seismic Design

    PubMed Central

    Bojórquez, Edén; Reyes-Salazar, Alfredo; Ruiz, Sonia E.; Terán-Gilmore, Amador

    2014-01-01

    Several studies have been devoted to calibrate damage indices for steel and reinforced concrete members with the purpose of overcoming some of the shortcomings of the parameters currently used during seismic design. Nevertheless, there is a challenge to study and calibrate the use of such indices for the practical structural evaluation of complex structures. In this paper, an energy-based damage model for multidegree-of-freedom (MDOF) steel framed structures that accounts explicitly for the effects of cumulative plastic deformation demands is used to estimate the cyclic drift capacity of steel structures. To achieve this, seismic hazard curves are used to discuss the limitations of the maximum interstory drift demand as a performance parameter to achieve adequate damage control. Then the concept of cyclic drift capacity, which incorporates information of the influence of cumulative plastic deformation demands, is introduced as an alternative for future applications of seismic design of structures subjected to long duration ground motions. PMID:25089288

  11. Linking HIPPARCOS to the Extragalactic Reference Frame, Part 5 of 6, Newc, Cycle 2, Continuation of 2565-HIGH

    NASA Astrophysics Data System (ADS)

    Hemenway, Paul

    1991-07-01

    Determination of a non-rotating Reference Frame is crucial to progress in many areas, including: Galactic motions, local (Oort's A and B) and global (R0) parameters derived from them, solar system motion discrepancies (Planet X); and, in conjunction with the VLBI radio reference frame, the registration of radio and optical images at an accuracy well below the resolution limit of HST images (0.06 arcsec). The goal of the Program is to tie the HIPPARCOS and Extragalactic Reference Frames together at the 0.0005 arcsec and 0.0005 arcsec/year level. The HST data will allow a determination of the brightness distribution in the stellar and extragalactic objects observed, and time-dependent changes therein, at the 0.001 arcsec/year level. The Program requires targets distributed over the whole sky to define a rigid Reference Frame. GTO observations will provide initial first-epoch data and preliminary proper motions. The observations will consist of relative positions of Extragalactic objects (EGOs) and HIPPARCOS stars, measured with the FGSs.

  12. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking.

    PubMed

    Lin, Zhicheng; He, Sheng

    2012-10-25

    Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.

  13. Integration time for the perception of depth from motion parallax.

    PubMed

    Nawrot, Mark; Stroyan, Keith

    2012-04-15

    The perception of depth from relative motion is believed to be a slow process that "builds-up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f≈dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the inherent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth-sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both retinal image motion and the stimulus window motion for pursuit (mean range=16.6-33.2 ms). This was found for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit or the eye was stationary. A large high-contrast masking stimulus disrupted depth-discrimination for stimulus presentations less than 70-75 ms in both pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed. 
We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio for a selection of points on a complicated stimulus. Copyright © 2012 Elsevier Ltd. All rights reserved.
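
    The abstract's model, d/f ≈ dθ/dα, with dθ/dα taken as the ratio of the retinal-motion and pursuit rates, reduces to a one-line depth estimate. The numbers below are illustrative, not values from the study:

```python
def relative_depth(dtheta_dt, dalpha_dt, fixation_distance):
    """Estimate relative object depth d from the model d/f ~= dtheta/dalpha,
    where the ratio is computed from the retinal image motion rate
    (dtheta/dt) and the pursuit eye movement rate (dalpha/dt)."""
    return fixation_distance * (dtheta_dt / dalpha_dt)

# e.g. 0.5 deg/s retinal motion with 5 deg/s pursuit at 1 m fixation
d = relative_depth(0.5, 5.0, 1.0)  # -> 0.1 m of depth relative to fixation
```

    Because the estimate is a ratio of two measured rates, any error in recovering dθ or dα within the short integration window propagates directly into d, which is why the minimum integration time matters.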

  14. Spherical Coordinate Systems for Streamlining Suited Mobility Analysis

    NASA Technical Reports Server (NTRS)

    Benson, Elizabeth; Cowley, Matthew; Harvill, Lauren; Rajulu, Sudhakar

    2015-01-01

    Introduction: When describing human motion, biomechanists generally report joint angles in terms of Euler angle rotation sequences. However, there are known limitations in using this method to describe complex motions such as the shoulder joint during a baseball pitch. Euler angle notation uses a series of three rotations about an axis where each rotation is dependent upon the preceding rotation. As such, the Euler angles need to be regarded as a set to get accurate angle information. Unfortunately, it is often difficult to visualize and understand these complex motion representations. It has been shown that using a spherical coordinate system allows Anthropometry and Biomechanics Facility (ABF) personnel to increase their ability to transmit important human mobility data to engineers, in a format that is readily understandable and directly translatable to their design efforts. Objectives: The goal of this project was to use innovative analysis and visualization techniques to aid in the examination and comprehension of complex motions. Methods: This project consisted of a series of small sub-projects, meant to validate and verify a new method before it was implemented in the ABF's data analysis practices. A mechanical test rig was built and tracked in 3D using an optical motion capture system. Its position and orientation were reported in both Euler and spherical reference systems. In the second phase of the project, the ABF estimated the error inherent in a spherical coordinate system, and evaluated how this error would vary within the reference frame. This stage also involved expanding a kinematic model of the shoulder to include the rest of the joints of the body. The third stage of the project involved creating visualization methods to assist in interpreting motion in a spherical frame. These visualization methods will be incorporated in a tool to evaluate a database of suited mobility data, which is currently in development. 
Results: Initial results demonstrated that a spherical coordinate system is helpful in describing and visualizing the motion of a space suit. The system is particularly useful in describing the motion of the shoulder, where multiple degrees of freedom can lead to very complex motion paths.
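
    As a sketch of the representation change the ABF adopted, a Cartesian marker position can be re-expressed in spherical coordinates (physics convention: radius, polar angle, azimuth). This is a generic conversion for illustration, not the ABF's actual analysis tool:

```python
import math

def cartesian_to_spherical(x, y, z):
    """Convert a joint/marker position to spherical coordinates:
    r (radius), theta (polar angle from +z), phi (azimuth in the x-y plane).
    Unlike an Euler angle sequence, the (theta, phi) pair describes a
    pointing direction without order-dependent, chained rotations."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

# a point one unit out along +y and one unit up along +z
r, theta, phi = cartesian_to_spherical(0.0, 1.0, 1.0)
```

    Plotting (theta, phi) traces of a limb segment on a sphere is what makes complex shoulder paths easier to visualize than three coupled Euler angles.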

  15. A new calibration methodology for thorax and upper limbs motion capture in children using magneto and inertial sensors.

    PubMed

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-09

    Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size and weight and the wireless connectivity meet the requirement of minimal obtrusiveness and give scientists the possibility to analyze children's motion in daily life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg-Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.

  16. In vivo quantification of motion in liver parenchyma and its application in schistosomiasis tissue characterization

    NASA Astrophysics Data System (ADS)

    Badawi, Ahmed M.; Hashem, Ahmed M.; Youssef, Abou-Bakr M.; Abdel-Wahab, Mohamed F.

    1995-03-01

    Schistosomiasis is a major health problem in Egypt; despite an active control program, it is estimated to affect about one-third of the population. Deposition of less functional fibrous tissue in the liver is the major contributory factor to the hepatic pathology. Fibrous tissue consists of a complex array of connective matrix material and a variety of collagen isotypes. As a result of the increased stromal density (collagen content), the parenchyma becomes more echogenic and less elastic (harder). In this study we investigated the effect of cardiac mechanical impulses from the heart and aorta on the kinetics of the liver parenchyma. Under conditions of controlled patient movement and suspended respiration, a 30-frames-per-second cineloop of 588 × 512 ultrasound images (32 pels per cm) is captured from an ATL ultrasound machine and then digitized. The image acquisition is triggered by the R wave of the patient's ECG. The motion, which takes the form of a forced oscillation in the liver parenchyma, is quantified by tracking a small box (20-30 pels) in 16 directions across all 30 successive frames. The tracking was done using block-matching techniques (the maximum correlation between boxes in the time and frequency domains, and the minimum sum of absolute differences (SAD) between boxes). The motion is quantified for many regions at different positions within the liver parenchyma for 80 cases with variable degrees of schistosomiasis, cirrhotic livers, and normal livers. The velocity of the tissue is calculated from the displacement (quantified motion), the time between frames, and the scan time of the ultrasound scanner. We found that the motion in liver parenchyma is small, on the order of a very few millimeters, and that the attenuation of the mechanical wave over one ECG cycle is higher in the schistosomal and cirrhotic livers than in the normal ones. 
Finally, quantification of motion in liver parenchyma due to cardiac impulses, under controlled limb movement and respiration, may be of value in the characterization of schistosomiasis (elasticity-based, not scattering-based). This value could be used together with the wide variety of quantitative tissue characterization parameters for pathology differentiation, for differentiating subclasses of cirrhosis, and for determining the extent of bilharzial involvement.
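
    The minimum-SAD block matching described in this record can be sketched as follows; the window sizes and the exhaustive search below are illustrative simplifications of the study's 16-direction tracking:

```python
import numpy as np

def sad_block_match(prev, curr, top, left, size, search):
    """Find the displacement of a block between two frames by minimizing
    the sum of absolute differences (SAD) over a +/-search pixel window."""
    block = prev[top:top + size, left:left + size].astype(float)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue  # candidate window falls outside the frame
            cand = curr[y:y + size, x:x + size].astype(float)
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

# toy frames: a bright block moves down 1 px and right 2 px
prev = np.zeros((20, 20))
prev[5:10, 5:10] = 1.0
curr = np.roll(np.roll(prev, 1, axis=0), 2, axis=1)
dy, dx = sad_block_match(prev, curr, top=5, left=5, size=5, search=3)  # -> (1, 2)
```

    Dividing the recovered displacement by the inter-frame time (1/30 s here) yields the tissue velocity the study reports.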

  17. Characterizing ground motions that collapse steel special moment-resisting frames or make them unrepairable

    USGS Publications Warehouse

    Olsen, Anna H.; Heaton, Thomas H.; Hall, John F.

    2015-01-01

    This work applies 64,765 simulated seismic ground motions to four models each of 6- or 20-story, steel special moment-resisting frame buildings. We consider two vector intensity measures and categorize the building response as “collapsed,” “unrepairable,” or “repairable.” We then propose regression models to predict the building responses from the intensity measures. The best models for “collapse” or “unrepairable” use peak ground displacement and velocity as intensity measures, and the best models predicting peak interstory drift ratio, given that the frame model is “repairable,” use spectral acceleration and epsilon (ϵ) as intensity measures. The more flexible frame is always more likely than the stiffer frame to “collapse” or be “unrepairable.” A frame with fracture-prone welds is substantially more susceptible to “collapse” or “unrepairable” damage than the equivalent frame with sound welds. The 20-story frames with fracture-prone welds are more vulnerable to P-delta instability and have a much higher probability of collapse than do any of the 6-story frames.

  18. Posture-based processing in visual short-term memory for actions.

    PubMed

    Vicary, Staci A; Stevens, Catherine J

    2014-01-01

    Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.

  19. What the Human Brain Likes About Facial Motion

    PubMed Central

    Schultz, Johannes; Brockhaus, Matthias; Bülthoff, Heinrich H.; Pilz, Karin S.

    2013-01-01

    Facial motion carries essential information about other people's emotions and intentions. Most previous studies have suggested that facial motion is mainly processed in the superior temporal sulcus (STS), but several recent studies have also shown involvement of ventral temporal face-sensitive regions. Up to now, it is not known whether the increased response to facial motion is due to an increased amount of static information in the stimulus, to the deformation of the face over time, or to increased attentional demands. We presented nonrigidly moving faces and control stimuli to participants performing a demanding task unrelated to the face stimuli. We manipulated the amount of static information by using movies with different frame rates. The fluidity of the motion was manipulated by presenting movies with frames either in the order in which they were recorded or in scrambled order. Results confirm higher activation for moving compared with static faces in STS and under certain conditions in ventral temporal face-sensitive regions. Activation was maximal at a frame rate of 12.5 Hz and smaller for scrambled movies. These results indicate that both the amount of static information and the fluid facial motion per se are important factors for the processing of dynamic faces. PMID:22535907

  20. New Tests of the Fixed Hotspot Approximation

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.

    2005-05-01

    We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. 
The results disagree, in particular, with the recent extreme interpretation of Tarduno et al. [2003], who assume (1) that motion of the Indo-Atlantic hotspots relative to the spin axis can be ignored during the past 85 Myr, and (2) that the Hawaiian hotspot has been fixed relative to the spin axis since the age of the Hawaiian-Emperor bend. Our results indicate that both assumptions are false.

  1. WE-G-BRD-02: Characterizing Information Loss in a Sparse-Sampling-Based Dynamic MRI Sequence (k-T BLAST) for Lung Motion Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arai, T; Nofiele, J; Sawant, A

    2015-06-15

    Purpose: Rapid MRI is an attractive, non-ionizing tool for soft-tissue-based monitoring of respiratory motion in thoracic and abdominal radiotherapy. One big challenge is to achieve high temporal resolution while maintaining adequate spatial resolution. k-t BLAST, a sparse-sampling and reconstruction sequence based on a priori information, represents a potential solution. In this work, we investigated how much "true" motion information is lost as a priori information is progressively added for faster imaging. Methods: Lung tumor motions in the superior-inferior direction obtained from ten individuals were replayed on an in-house, MRI-compatible, programmable motion platform (50 Hz refresh and 100 micron precision). Six water-filled 1.5 ml tubes were placed on it as fiducial markers. Dynamic marker motion within a coronal slice (FOV: 32×32 cm², resolution: 0.67×0.67 mm², slice thickness: 5 mm) was collected on a 3.0 T body scanner (Ingenia, Philips). Balanced-FFE (TE/TR: 1.3 ms/2.5 ms, flip angle: 40 degrees) was used in conjunction with k-t BLAST. Each motion was repeated four times as four k-t acceleration factors 1, 2, 5, and 16 (corresponding frame rates 2.5, 4.7, 9.8, and 19.1 Hz, respectively) were compared. For each image set, one average motion trajectory was computed from the six marker displacements. Root mean square (RMS) error was used as the metric of spatial accuracy, comparing measured trajectories to the original data. Results: Tumor motion was approximately 10 mm. The mean (standard deviation) of respiratory rates over the ten patients was 0.28 (0.06) Hz. Cumulative distributions of tumor motion frequency spectra (0-25 Hz) obtained from the patients showed that 90% of motion fell at 3.88 Hz or less; therefore, the frame rate must be double that or higher for accurate monitoring. The RMS errors over patients for k-t factors of 1, 2, 5, and 16 were 0.10 (0.04), 0.17 (0.04), 0.21 (0.06), and 0.26 (0.06) mm, respectively. 
Conclusions: A k-t factor of 5 or higher can cover the high-frequency component of tumor respiratory motion, while the estimated spatial accuracy error was approximately 0.2 mm.
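
    The spatial-accuracy metric used here, RMS error between the measured and programmed trajectories, is a one-liner; a sketch with toy numbers (not the study's data):

```python
import numpy as np

def rms_error(measured, reference):
    """Root-mean-square error between a measured motion trajectory and
    the programmed (ground-truth) trajectory, both in the same units."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((measured - reference) ** 2)))

# toy trajectories in mm, sampled at matching time points
err = rms_error([0.0, 1.1, 2.2], [0.0, 1.0, 2.0])
```

    The study's trade-off is visible in this metric: higher k-t factors raise the frame rate (better temporal sampling) at the cost of a slowly growing RMS position error.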

  2. Frames of Reference in the Classroom

    ERIC Educational Resources Information Center

    Grossman, Joshua

    2012-01-01

    The classic film "Frames of Reference" effectively illustrates concepts involved with inertial and non-inertial reference frames. In it, Donald G. Ivey and Patterson Hume use the camera's perspective to allow the viewer to see motion in reference frames translating with a constant velocity, translating while accelerating, and rotating--all with…

  3. Event-by-Event Continuous Respiratory Motion Correction for Dynamic PET Imaging.

    PubMed

    Yu, Yunhan; Chan, Chung; Ma, Tianyu; Liu, Yaqiang; Gallezot, Jean-Dominique; Naganawa, Mika; Kelada, Olivia J; Germino, Mary; Sinusas, Albert J; Carson, Richard E; Liu, Chi

    2016-07-01

    Existing respiratory motion-correction methods are applied only to static PET imaging. We have previously developed an event-by-event respiratory motion-correction method with correlations between internal organ motion and external respiratory signals (INTEX). This method is uniquely appropriate for dynamic imaging because it corrects motion for each time point. In this study, we applied INTEX to human dynamic PET studies with various tracers and investigated the impact on kinetic parameter estimation. Three tracers were investigated in a study of 12 human subjects: a myocardial perfusion tracer, (82)Rb (n = 7); a pancreatic β-cell tracer, (18)F-FP(+)DTBZ (n = 4); and a tumor hypoxia tracer, (18)F-fluoromisonidazole ((18)F-FMISO) (n = 1). Both rest and stress studies were performed for (82)Rb. The Anzai belt system was used to record respiratory motion. Three-dimensional internal organ motion in high temporal resolution was calculated by INTEX to guide event-by-event respiratory motion correction of target organs in each dynamic frame. Time-activity curves of regions of interest drawn based on end-expiration PET images were obtained. For (82)Rb studies, K1 was obtained with a 1-tissue model using a left-ventricle input function. Rest-stress myocardial blood flow (MBF) and coronary flow reserve (CFR) were determined. For (18)F-FP(+)DTBZ studies, the total volume of distribution was estimated with arterial input functions using the multilinear analysis 1 method. For the (18)F-FMISO study, the net uptake rate Ki was obtained with a 2-tissue irreversible model using a left-ventricle input function. All parameters were compared with the values derived without motion correction. With INTEX, K1 and MBF increased by 10% ± 12% and 15% ± 19%, respectively, for (82)Rb stress studies. CFR increased by 19% ± 21%. For studies with motion amplitudes greater than 8 mm (n = 3), K1, MBF, and CFR increased by 20% ± 12%, 30% ± 20%, and 34% ± 23%, respectively. 
For (82)Rb rest studies, INTEX had minimal effect on parameter estimation. The total volume of distribution of (18)F-FP(+)DTBZ and Ki of (18)F-FMISO increased by 17% ± 6% and 20%, respectively. Respiratory motion can have a substantial impact on dynamic PET in the thorax and abdomen. The INTEX method using continuous external motion data substantially changed parameters in kinetic modeling. More accurate estimation is expected with INTEX. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
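
    The kinetic parameters reported here come from compartment modeling; for the 1-tissue model used for (82)Rb, the tissue curve obeys dC_T/dt = K1·Cp(t) − k2·C_T(t). A forward-simulation sketch of that model (the input function and rate constants below are invented for illustration, and a simple Euler step stands in for the fitting machinery):

```python
import numpy as np

def one_tissue_model(t, cp, K1, k2):
    """Simulate the tissue activity C_T(t) of the one-tissue compartment
    model dC_T/dt = K1*Cp(t) - k2*C_T(t) by explicit Euler integration."""
    ct = np.zeros_like(t, dtype=float)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (K1 * cp[i - 1] - k2 * ct[i - 1])
    return ct

t = np.linspace(0.0, 10.0, 1001)   # minutes
cp = np.exp(-0.5 * t)              # toy plasma input function
ct = one_tissue_model(t, cp, K1=0.8, k2=0.1)
```

    In practice K1 is estimated by fitting this model to the measured time-activity curve; respiratory motion blurs that curve frame by frame, which is how it biases K1 and hence MBF.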

  4. Modeling moving systems with RELAP5-3D

    DOE PAGES

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; ...

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occurs in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating craft during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft. However, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed. As a result, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.
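The non-inertial-frame treatment described above amounts to adding fictitious body forces to the fluid equations of motion. A minimal vector sketch of those terms (not RELAP5-3D code; the function name and argument convention are assumptions):

```python
import numpy as np

def noninertial_acceleration(a_inertial, omega, alpha, r, v):
    """Apparent acceleration of a fluid parcel in a rotating frame:
    a_frame = a_inertial - 2*omega x v        (Coriolis)
                        - omega x (omega x r) (centrifugal)
                        - alpha x r           (Euler, alpha = d(omega)/dt).
    All vectors are 3-element arrays expressed in the rotating frame."""
    coriolis = -2.0 * np.cross(omega, v)
    centrifugal = -np.cross(omega, np.cross(omega, r))
    euler = -np.cross(alpha, r)
    return a_inertial + coriolis + centrifugal + euler
```

For a parcel at rest on the x-axis of a frame spinning about z, only the outward centrifugal term survives; a parcel moving radially picks up the transverse Coriolis deflection.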

  5. Motion versus fixed distraction of the joint in the treatment of ankle osteoarthritis: a prospective randomized controlled trial.

    PubMed

    Saltzman, Charles L; Hillis, Stephen L; Stolley, Mary P; Anderson, Donald D; Amendola, Annunziato

    2012-06-06

    Initial reports have shown the efficacy of fixed distraction for the treatment of ankle osteoarthritis. We hypothesized that allowing ankle motion during distraction would result in significant improvements in outcomes compared with distraction without ankle motion. We conducted a prospective randomized controlled trial comparing the outcomes for patients with advanced ankle osteoarthritis who were managed with anterior osteophyte removal and either (1) fixed ankle distraction or (2) ankle distraction permitting joint motion. Thirty-six patients were randomized to treatment with either fixed distraction or distraction with motion. The patients were followed for twenty-four months after frame removal. The Ankle Osteoarthritis Scale (AOS) was the main outcome variable. Two years after frame removal, subjects in both groups showed significant improvement compared with the status before treatment (p < 0.02 for both groups). The motion-distraction group had significantly better AOS scores than the fixed-distraction group at twenty-six, fifty-two, and 104 weeks after frame removal (p < 0.01 at each time point). At 104 weeks, the motion-distraction group had an overall mean improvement of 56.6% in the AOS score, whereas the fixed-distraction group had a mean improvement of 22.9% (p < 0.01). Distraction improved the patient-reported outcomes of treatment of ankle osteoarthritis. Adding ankle motion to distraction showed an early and sustained beneficial effect on outcome.

  6. Global plate motion frames: Toward a unified model

    NASA Astrophysics Data System (ADS)

    Torsvik, Trond H.; Müller, R. Dietmar; van der Voo, Rob; Steinberger, Bernhard; Gaina, Carmen

    2008-09-01

    Plate tectonics constitutes our primary framework for understanding how the Earth works over geological timescales. High-resolution mapping of relative plate motions based on marine geophysical data has followed the discovery of geomagnetic reversals, mid-ocean ridges, transform faults, and seafloor spreading, cementing the plate tectonic paradigm. However, so-called "absolute plate motions," describing how the fragments of the outer shell of the Earth have moved relative to a reference system such as the Earth's mantle, are still poorly understood. Accurate absolute plate motion models are essential surface boundary conditions for mantle convection models as well as for understanding past ocean circulation and climate as continent-ocean distributions change with time. A fundamental problem with deciphering absolute plate motions is that the Earth's rotation axis and the averaged magnetic dipole axis are not necessarily fixed to the mantle reference system. Absolute plate motion models based on volcanic hot spot tracks are largely confined to the last 130 Ma and ideally would require knowledge about the motions within the convecting mantle. In contrast, models based on paleomagnetic data reflect plate motion relative to the magnetic dipole axis for most of Earth's history but cannot provide paleolongitudes because of the axial symmetry of the Earth's magnetic dipole field. We analyze four different reference frames (paleomagnetic, African fixed hot spot, African moving hot spot, and global moving hot spot), discuss their uncertainties, and develop a unifying approach for connecting a hot spot track system and a paleomagnetic absolute plate reference system into a "hybrid" model for the time period from the assembly of Pangea (˜320 Ma) to the present. 
For the last 100 Ma we use a moving hot spot reference frame that takes mantle convection into account, and we connect this to a pre-100 Ma global paleomagnetic frame adjusted 5° in longitude to smooth the reference frame transition. Using plate driving force arguments and the mapping of reconstructed large igneous provinces to core-mantle boundary topography, we argue that continental paleolongitudes can be constrained with reasonable confidence.
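Plate motions in any of the reference frames above are expressed as Euler rotations; the surface speed at a site is |ω × r|. A small illustrative helper (spherical Earth, hypothetical inputs; real reconstructions compose finite rotations over time):

```python
import numpy as np

def plate_velocity_mm_per_yr(site_lat, site_lon, pole_lat, pole_lon,
                             rate_deg_per_myr):
    """Surface speed at a site implied by an Euler pole, v = omega x r,
    on a spherical Earth of radius 6371 km. Angles in degrees."""
    R = 6371e3  # m

    def unit(lat, lon):
        la, lo = np.radians(lat), np.radians(lon)
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])

    omega = np.radians(rate_deg_per_myr) * unit(pole_lat, pole_lon)  # rad/Myr
    v = np.cross(omega, R * unit(site_lat, site_lon))                # m/Myr
    return np.linalg.norm(v) * 1e3 / 1e6                             # mm/yr
```

A rotation rate of 1°/Myr moves a site 90° from the pole at roughly 111 mm/yr, which sets the scale of the plate velocities discussed in these records.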

  7. Meshless Modeling of Deformable Shapes and their Motion

    PubMed Central

    Adams, Bart; Ovsjanikov, Maks; Wand, Michael; Seidel, Hans-Peter; Guibas, Leonidas J.

    2010-01-01

    We present a new framework for interactive shape deformation modeling and key frame interpolation based on a meshless finite element formulation. Starting from a coarse nodal sampling of an object’s volume, we formulate rigidity and volume preservation constraints that are enforced to yield realistic shape deformations at interactive frame rates. Additionally, by specifying key frame poses of the deforming shape and optimizing the nodal displacements while targeting smooth interpolated motion, our algorithm extends to a motion planning framework for deformable objects. This allows reconstructing smooth and plausible deformable shape trajectories in the presence of possibly moving obstacles. The presented results illustrate that our framework can handle complex shapes at interactive rates and hence is a valuable tool for animators to realistically and efficiently model and interpolate deforming 3D shapes. PMID:24839614

  8. The applicability of frame imaging from a spinning spacecraft. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Johnson, R. O.; Wallmark, G. N.

    1973-01-01

    A detailed study was made of frame-type imaging systems for use on board a spin stabilized spacecraft for outer planets applications. All types of frame imagers capable of performing this mission were considered, regardless of the current state of the art. Detailed sensor models of these systems were developed at the component level and used in the subsequent analyses. An overall assessment was then made of the various systems based upon results of a worst-case performance analysis, foreseeable technology problems, and the relative reliability and radiation tolerance of the systems. Special attention was directed at restraints imposed by image motion and the limited data transmission and storage capability of the spacecraft. Based upon this overall assessment, the most promising systems were selected and then examined in detail for a specified Jupiter orbiter mission. The relative merits of each selected system were then analyzed, and the system design characteristics were demonstrated using preliminary configurations, block diagrams, and tables of estimated weights, volumes and power consumption.

  9. Dynamic heart model for the mathematical cardiac torso (MCAT) phantom to represent the invariant total heart volume

    NASA Astrophysics Data System (ADS)

    Pretorius, P. H.; King, Michael A.; Tsui, Benjamin M.; LaCroix, Karen; Xia, Weishi

    1998-07-01

    This manuscript documents the alteration of the heart model of the MCAT phantom to better represent cardiac motion. The objective of the inclusion of motion was to develop a digital simulation of the heart such that the impact of cardiac motion on single photon emission computed tomography (SPECT) imaging could be assessed and methods of quantitating cardiac function could be investigated. The motion of the dynamic MCAT's heart is modeled by a 128 time frame volume curve. Eight time frames are averaged together to obtain a gated perfusion acquisition of 16 time frames and ensure motion within every time frame. The position of the MCAT heart was changed during contraction to rotate back and forth around the long axis through the center of the left ventricle (LV), using the end systolic time frame as the turning point. Simple respiratory motion was also introduced by changing the orientation of the heart model in a two-dimensional (2D) plane with every time frame. The averaging effect of respiratory motion in a specific time frame was modeled by randomly selecting multiple heart locations between two extreme orientations. Non-gated perfusion phantoms were also generated by averaging over all time frames. Maximal chamber volumes were selected to fit a profile of a normal healthy person. These volumes were changed during contraction of the ventricles such that the increase in volume in the atria compensated for the decrease in volume in the ventricles. The myocardium was modeled to represent shortening of muscle fibers during contraction, with the base of the ventricles moving towards a static apex. The apical region was modeled with moderate wall thinning present while myocardial mass was conserved. To test the applicability of the dynamic heart model, myocardial wall thickening was measured using maximum counts and full width half maximum measurements, and compared with published trends. 
An analytical 3D projector, with attenuation and detector response included, was used to generate radionuclide projection data sets. After reconstruction, a linear relationship was obtained between maximum myocardial counts and myocardium thickness, similar to published results. A numeric difference in values from different locations exists due to the different amounts of attenuation present. Similar results were obtained for FWHM measurements. Also, a hot apical region on the polar maps without attenuation compensation turns into an apical defect with attenuation compensation. The apical decrease was more prominent at end diastole (ED) than at end systole (ES) due to the change in the partial volume effect. Both of these agree with clinical trends. It is concluded that the dynamic MCAT (dMCAT) phantom can be used to study the influence of various physical parameters on radionuclide perfusion imaging.

  10. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking

    PubMed Central

    Lin, Zhicheng; He, Sheng

    2012-01-01

    Object identities (“what”) and their spatial locations (“where”) are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects (“files”) within the reference frame (“cabinet”) are orderly coded relative to the frame. PMID:23104817

  11. Automatic 3D registration of dynamic stress and rest (82)Rb and flurpiridaz F 18 myocardial perfusion PET data for patient motion detection and correction.

    PubMed

    Woo, Jonghye; Tamarappoo, Balaji; Dey, Damini; Nakazato, Ryo; Le Meunier, Ludovic; Ramesh, Amit; Lazewatsky, Joel; Germano, Guido; Berman, Daniel S; Slomka, Piotr J

    2011-11-01

    The authors aimed to develop an image-based registration scheme to detect and correct patient motion in stress and rest cardiac positron emission tomography (PET)/CT images. The patient motion correction was of primary interest and the effects of patient motion with the use of flurpiridaz F 18 and (82)Rb were demonstrated. The authors evaluated stress/rest PET myocardial perfusion imaging datasets in 30 patients (60 datasets in total, 21 male and 9 female) using a new perfusion agent (flurpiridaz F 18) (n = 16) and (82)Rb (n = 14), acquired on a Siemens Biograph-64 scanner in list mode. Stress and rest images were reconstructed into 4 ((82)Rb) or 10 (flurpiridaz F 18) dynamic frames (60 s each) using standard reconstruction (2D attenuation weighted ordered subsets expectation maximization). Patient motion correction was achieved by an image-based registration scheme optimizing a cost function using modified normalized cross-correlation that combined global and local features. For comparison, visual scoring of motion was performed on the scale of 0 to 2 (no motion, moderate motion, and large motion) by two experienced observers. The proposed registration technique had a 93% success rate in removing left ventricular motion, as visually assessed. The maximum detected motion extents for stress and rest were 5.2 mm and 4.9 mm for flurpiridaz F 18 perfusion and 3.0 mm and 4.3 mm for (82)Rb perfusion studies, respectively. Motion extents (maximum frame-to-frame displacement) obtained for stress and rest were (2.2 ± 1.1, 1.4 ± 0.7, 1.9 ± 1.3) mm and (2.0 ± 1.1, 1.2 ± 0.9, 1.9 ± 0.9) mm for flurpiridaz F 18 perfusion studies and (1.9 ± 0.7, 0.7 ± 0.6, 1.3 ± 0.6) mm and (2.0 ± 0.9, 0.6 ± 0.4, 1.2 ± 1.2) mm for (82)Rb perfusion studies, respectively. A visually detectable patient motion threshold was established to be ≥2.2 mm, corresponding to visual user scores of 1 and 2. 
After motion correction, the average increases in contrast-to-noise ratio (CNR) from all frames for larger than the motion threshold were 16.2% in stress flurpiridaz F 18 and 12.2% in rest flurpiridaz F 18 studies. The average increases in CNR were 4.6% in stress (82)Rb studies and 4.3% in rest (82)Rb studies. Fully automatic motion correction of dynamic PET frames can be performed accurately, potentially allowing improved image quantification of cardiac PET data.
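The contrast-to-noise ratio used above to quantify the benefit of correction can be computed in several ways; one common definition (an assumption here, the paper may use a different formula) is the signal-background mean difference divided by the background noise:

```python
import numpy as np

def contrast_to_noise(signal_roi, background_roi):
    """CNR = |mean(signal) - mean(background)| / std(background).
    One common definition; inputs are pixel values from two ROIs."""
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return abs(s.mean() - b.mean()) / b.std()
```

Motion blurs counts out of the myocardial ROI and inflates apparent background variability, so successful frame registration raises this ratio, consistent with the reported CNR gains.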

  12. Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions.

    PubMed

    Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao

    2015-09-01

    The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the attitude transforms between adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement of the attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
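The square-root-of-N behavior follows from averaging N attitude estimates after the gyro-measured rotations have mapped them to a common epoch. A toy scalar sketch (a real star sensor would compose quaternions or rotation matrices; all names and values here are hypothetical):

```python
import numpy as np

def acf_combine(frame_attitudes, gyro_rotations):
    """Map each frame's noisy attitude estimate back to the reference
    epoch by subtracting the gyro-measured rotation accumulated since
    that epoch, then average the aligned estimates."""
    aligned = np.asarray(frame_attitudes) - np.asarray(gyro_rotations)
    return float(aligned.mean())
```

With 100 correlated frames the random error of the combined estimate is roughly one tenth of a single frame's noise, mirroring the sqrt(N) improvement reported above.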

  13. Role of Alpha-Band Oscillations in Spatial Updating across Whole Body Motion

    PubMed Central

    Gutteling, Tjerk P.; Medendorp, W. P.

    2016-01-01

    When moving around in the world, we have to keep track of important locations in our surroundings. In this process, called spatial updating, we must estimate our body motion and correct representations of memorized spatial locations in accordance with this motion. While the behavioral characteristics of spatial updating across whole body motion have been studied in detail, its neural implementation lacks detailed study. Here we use electroencephalography (EEG) to distinguish various spectral components of this process. Subjects gazed at a central body-fixed point in otherwise complete darkness, while a target was briefly flashed, either left or right from this point. Subjects had to remember the location of this target as either moving along with the body or remaining fixed in the world while being translated sideways on a passive motion platform. After the motion, subjects had to indicate the remembered target location in the instructed reference frame using a mouse response. While the body motion, as detected by the vestibular system, should not affect the representation of body-fixed targets, it should interact with the representation of a world-centered target to update its location relative to the body. We show that the initial presentation of the visual target induced a reduction of alpha band power in contralateral parieto-occipital areas, which evolved to a sustained increase during the subsequent memory period. Motion of the body led to a reduction of alpha band power in central parietal areas extending to lateral parieto-temporal areas, irrespective of whether the targets had to be memorized relative to world or body. When updating a world-fixed target, its internal representation shifts hemispheres, only when subjects’ behavioral responses suggested an update across the body midline. 
Our results suggest that parietal cortex is involved in both self-motion estimation and the selective application of this motion information to maintaining target locations as fixed in the world or fixed to the body. PMID:27199882

  14. The Effect of Motion Analysis Activities in a Video-Based Laboratory in Students' Understanding of Position, Velocity and Frames of Reference

    ERIC Educational Resources Information Center

    Koleza, Eugenia; Pappas, John

    2008-01-01

    In this article, we present the results of a qualitative research project on the effect of motion analysis activities in a Video-Based Laboratory (VBL) on students' understanding of position, velocity and frames of reference. The participants in our research were 48 pre-service teachers enrolled in Education Departments with no previous strong…

  15. Seismic damage to structures in the Ms 6.5 Ludian earthquake

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Xie, Quancai; Dai, Boyang; Zhang, Haoyu; Chen, Hongfu

    2016-03-01

    On 3 August 2014, the Ludian earthquake struck northwest Yunnan Province with a surface wave magnitude of 6.5. This moderate earthquake unexpectedly caused high fatalities and great economic loss. Four strong motion stations were located in the areas with intensity V, VI, VII and IX, near the epicentre. The characteristics of the ground motion are discussed herein, including 1) ground motion was strong at periods of less than 1.4 s, which covered the natural vibration periods of a large number of structures; and 2) the energy release was concentrated geographically. Based on materials collected during emergency building inspections, the damage patterns of adobe, masonry, timber frame and reinforced concrete (RC) frame structures in areas with different intensities are summarised. Earthquake damage matrices of local buildings are also given for fragility evaluation and earthquake damage prediction. It is found that the collapse ratios of RC frame and confined masonry structures based on the new design code are significantly lower than those of non-seismically designed buildings. However, the RC frame structures still failed to achieve the 'strong column, weak beam' design target. Traditional timber frame structures with light infill walls showed good aseismic performance.

  16. Quantifying the effect of disruptions to temporal coherence on the intelligibility of compressed American Sign Language video

    NASA Astrophysics Data System (ADS)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2009-02-01

    Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded at the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and with 3 different levels of distortion. The subjective data is used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
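The block-level temporal-coherence measure described above can be sketched as the normalized correlation of co-located macroblocks in adjacent frames; blocks with low correlation flag motion-compensation errors. This is an illustrative implementation, not the authors' code:

```python
import numpy as np

def block_correlations(frame_a, frame_b, block=16):
    """Normalized correlation of co-located (block x block) macroblocks
    in two adjacent frames; values well below 1 flag disruptions of
    temporal coherence. Returns {(row, col): correlation}."""
    h, w = frame_a.shape
    scores = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = frame_a[y:y + block, x:x + block].astype(float).ravel()
            b = frame_b[y:y + block, x:x + block].astype(float).ravel()
            a -= a.mean()
            b -= b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            # Correlation is undefined for flat blocks; report 0 there.
            scores[(y, x)] = float(a @ b / denom) if denom else 0.0
    return scores
```

Identical frames score 1.0 in every block, while a corrupted macroblock drops toward 0, giving a per-block map of temporal artifacts.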

  17. (In)sensitivity of GNSS techniques to geocenter motion

    NASA Astrophysics Data System (ADS)

    Rebischung, Paul; Altamimi, Zuheir; Springer, Tim

    2013-04-01

    As a satellite-based technique, GNSS should be sensitive to motions of the Earth's center of mass (CM) with respect to the Earth's crust. In theory, the weekly solutions of the IGS Analysis Centers (ACs) should indeed have the "instantaneous" CM as their origin, and the net translations between the weekly AC frames and a secular frame such as ITRF2008 should thus approximate the non-linear motion of CM with respect to the Earth's center of figure. However, the comparison of the AC translation time series with each other, with SLR geocenter estimates or with geophysical models reveals that this way of observing geocenter motion with GNSS currently gives unreliable results. The fact that the origin of the weekly AC solutions should be CM stems from the satellite equations of motion, in which no degree-1 Stokes coefficients are included. It is therefore reasonable to think that any mis-modeling or uncertainty about the forces acting on GNSS satellites can potentially offset the network origin from CM. That is why defects in radiation pressure modeling have long been assumed to be the main origin of the GNSS geocenter errors. In particular, Meindl et al. (2012) incriminate the correlation between the Z component of the origin and the direct radiation pressure parameters D0. We review here the sensitivity of GNSS techniques to geocenter motion from a different perspective. Our approach consists in determining the signature of a geocenter error on GNSS observations, and seeing how and how well such an error can be compensated by all other usual GNSS parameters. (In other words, we look for the linear combinations of parameters which have the maximal partial correlations with each of the 3 components of the origin, and evaluate these maximal partial correlations.) 
Without setting up any empirical radiation pressure parameter, we obtain maximal partial correlations of 99.98 % for all 3 components of the origin: a geocenter error can almost perfectly be absorbed by the other GNSS parameters. Satellite clock offsets, if estimated epoch-wise, especially devastate the sensitivity of GNSS to geocenter motion. The numerous station-related parameters (station positions, station clock offsets, ZWDs and horizontal tropospheric gradients) do the rest of the job. The maximal partial correlations increase a bit more when the classic "ECOM" set of 5 radiation pressure parameters is set up for each satellite. But this increase is almost fully attributable to the once-per-revolution parameters BC & BS. In particular, we do not find the direct radiation pressure parameters D0 to play a predominant role in the GNSS geocenter determination problem.

  18. Estimating network effect in geocenter motion: Applications

    NASA Astrophysics Data System (ADS)

    Zannat, Umma Jamila; Tregoning, Paul

    2017-10-01

    The network effect is the error associated with the subsampling of the Earth surface by space geodetic networks. It is an obstacle toward the precise measurement of geocenter motion, that is, the relative motion between the center of mass of the Earth system and the center of figure of the Earth surface. In a complementary paper, we proposed a theoretical approach to estimate the magnitude of this effect from the displacement fields predicted by geophysical models. Here we evaluate the effectiveness of our estimate for two illustrative physical processes: coseismic displacements inducing instantaneous changes in the Helmert parameters and elastic deformation due to surface water movements causing secular drifts in those parameters. For the first, we consider simplified models of the 2004 Sumatra-Andaman and the 2011 Tōhoku-Oki earthquakes, and for the second, we use the observations of the Gravity Recovery and Climate Experiment, complemented by an ocean model. In both case studies, it is found that the magnitude of the network effect, even for a large global network, is often as large as the magnitude of the changes in the Helmert parameters themselves. However, we also show that our proposed modification to the definition of the center of network frame to include weights proportional to the area of the Earth surface that the stations represent can significantly reduce the network effect in most cases.
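The proposed area-weighted center-of-network definition can be illustrated with a weighted mean of station displacement vectors (a deliberately simplified sketch; the paper works with full Helmert parameter estimation, and the function name is hypothetical):

```python
import numpy as np

def network_translation(displacements, area_weights=None):
    """Origin shift implied by a station network: the (optionally
    area-weighted) mean of station displacement vectors (N x 3).
    Weighting each station by the surface area it represents mitigates
    the network effect for uneven station distributions."""
    d = np.asarray(displacements, dtype=float)
    if area_weights is None:
        return d.mean(axis=0)
    w = np.asarray(area_weights, dtype=float)
    return (w[:, None] * d).sum(axis=0) / w.sum()
```

When stations cluster in a deforming region, the unweighted mean over-represents that region; assigning the isolated stations weights proportional to the area they cover pulls the estimated translation back toward the true surface average.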

  19. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  20. Isla Guadalupe, Mexico (GUAX, SCIGN/PBO) a Relative Constraint for California Borderland and Northern Gulf of California Motions.

    NASA Astrophysics Data System (ADS)

    Gonzalez-Garcia, J. J.

    2004-12-01

    Using ITRF2000 as a common reference frame link, I analyzed survey mode and permanent GPS published results, together with SOPAC public data and results (http://sopac.ucsd.edu), in order to evaluate relative present day crustal deformation in California and northern Mexico. The crustal velocity field of Mexico (Marquez-Azua and DeMets, 2003), obtained from continuous GPS measurements conducted by Instituto Nacional de Geografia e Informatica (INEGI) for 1993-2001, was partially used. The preferred model for an instantaneous rigid motion between North-America and Pacific plates (NAPA) is obtained using results of Isla Guadalupe GPS surveys (1991-2002), giving a new constraint for Pacific plate (PA) motion (Gonzalez-Garcia et al., 2003). It produces an apparent reduction of 1 mm/yr in the absolute motion in the border zone between the PA and North-America (NA) plates in this region, as compared with other GPS models (e.g., Prawirodirdjo and Bock, 2004), and it is 3 mm/yr higher than NNR-NUVEL-1A. In the PA reference frame, the westernmost islands off San Francisco (FARB), Los Angeles (MIG1), and Ensenada (GUAX) give current residuals of 1.8, 1.7 and 0.9 mm/yr, respectively, with azimuths that are consistent with the local tectonic setting. In the NA reference frame, besides the confirmation of 2 mm/yr E-W extension for the southern Basin and Range province in northern Mexico, a present day deformation rate of 40.5 mm/yr between San Felipe, Baja California (SFBC) and Hermosillo, Sonora, is obtained. This rate agrees with an age of 6.3 to 6.7 Ma for the "initiation of a full sea-floor spreading" in the northern Gulf of California. SFBC has a 7 mm/yr motion in the PA reference frame, giving then a full NAPA theoretical absolute motion of 47.5 mm/yr. For Puerto Penasco, Sonora (PENA) there is a NAPA motion of 46.2 mm/yr and a residual of 1.2 mm/yr in the NA reference frame; this site is located only 75 km to the northeast of the Wagner basin center. 
For southern Isla Guadalupe (GUAX) there is 51.8 mm/yr in the NAPA reference frame. Finally full present day NAPA motion at the Alarcon Rise must be only 50.1 ±0.2 mm/yr in agreement with the lower limit of the NAPA "geological" model obtained by DeMets and Dixon (1999).

  1. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations used to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes the speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. 
Results revealed that the proposed method is the most accurate method to measure the motion and strain with an average median motion error of 0.42 mm and a median strain error of 2.0 ± 0.9%, 2.1 ± 1.3% and 7.1 ± 4.9% for circumferential, longitudinal and radial strain respectively. It also showed its capability to identify abnormal segments with reduced cardiac function and timing differences for the dyssynchrony cases. These results indicate that the proposed diffeomorphic speckle tracking method provides robust and accurate motion and strain estimation. Copyright © 2015. Published by Elsevier B.V.

  2. Automated quantification of lumbar vertebral kinematics from dynamic fluoroscopic sequences

    NASA Astrophysics Data System (ADS)

    Camp, Jon; Zhao, Kristin; Morel, Etienne; White, Dan; Magnuson, Dixon; Gay, Ralph; An, Kai-Nan; Robb, Richard

    2009-02-01

    We hypothesize that the vertebra-to-vertebra patterns of spinal flexion and extension motion of persons with lower back pain will differ from those of persons who are pain-free. Thus, it is our goal to measure the motion of individual lumbar vertebrae noninvasively from dynamic fluoroscopic sequences. Two-dimensional normalized mutual information-based image registration was used to track frame-to-frame motion. Software was developed that required the operator to identify each vertebra on the first frame of the sequence using a four-point "caliper" placed at the posterior and anterior edges of the inferior and superior end plates of the target vertebrae. The program then resolved the individual motions of each vertebra independently throughout the entire sequence. To validate the technique, 6 cadaveric lumbar spine specimens were potted in polymethylmethacrylate and instrumented with optoelectric sensors. The specimens were then placed in a custom dynamic spine simulator and moved through flexion-extension cycles while kinematic data and fluoroscopic sequences were simultaneously acquired. We found strong correlation between the absolute flexion-extension range of motion of each vertebra as recorded by the optoelectric system and as determined from the fluoroscopic sequence via registration. We conclude that this method is a viable way of noninvasively assessing two-dimensional vertebral motion.
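
    The record above scores frame-to-frame alignment with two-dimensional normalized mutual information. A minimal sketch of that similarity measure (not the authors' software; the bin count and names are ours):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b) from a joint intensity histogram.
    Ranges from ~1 (independent images) to 2 (identical up to an intensity map)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()

    def entropy(p):
        p = p[p > 0]                       # 0 * log(0) := 0
        return -np.sum(p * np.log2(p))

    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
nmi_self = normalized_mutual_information(img, img)                    # perfectly aligned
nmi_rand = normalized_mutual_information(img, rng.random((64, 64)))   # unrelated image
```

    A registration loop would shift one image over a search range and keep the displacement maximizing this score.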

  3. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method were found to be reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.

  4. Navier-Stokes predictions of pitch damping for axisymmetric shell using steady coning motion

    NASA Technical Reports Server (NTRS)

    Weinacht, Paul; Sturek, Walter B.; Schiff, Lewis B.

    1991-01-01

    Previous theoretical investigations have proposed that the side force and moment acting on a body of revolution in steady coning motion could be related to the pitch-damping force and moment. In the current research effort, this approach is applied to produce predictions of the pitch damping for axisymmetric shell. The flow fields about these projectiles undergoing steady coning motion are successfully computed using a parabolized Navier-Stokes computational approach which makes use of a rotating coordinate frame. The governing equations are modified to include the centrifugal and Coriolis force terms due to the rotating coordinate frame. From the computed flow field, the side moments due to coning motion, spinning motion, and combined spinning and coning motion are used to determine the pitch-damping coefficients. Computations are performed for two generic shell configurations, a secant-ogive-cylinder and a secant-ogive-cylinder-boattail.

  5. Fixing the reference frame for PPMXL proper motions using extragalactic sources

    DOE PAGES

    Grabowski, Kathleen; Carlin, Jeffrey L.; Newberg, Heidi Jo; ...

    2015-05-27

    In this study, we quantify and correct systematic errors in PPMXL proper motions using extragalactic sources from the first two LAMOST data releases and the Véron-Cetty & Véron Catalog of Quasars. Although the majority of the sources are from the Véron catalog, LAMOST makes important contributions in regions that are not well-sampled by previous catalogs, particularly at low Galactic latitudes and in the south Galactic cap. We show that quasars in PPMXL have measurable and significant proper motions, which reflect the systematic zero-point offsets present in the catalog. We confirm the global proper motion shifts seen by Wu et al., and additionally find smaller-scale fluctuations of the QSO-derived corrections to an absolute frame. Finally, we average the proper motions of 158,106 extragalactic objects in bins of 3° × 3° and present a table of proper motion corrections.
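
    The final step of this record, averaging extragalactic proper motions in 3° × 3° bins to estimate the local zero-point offset, can be sketched as follows (the data here are simulated, not the PPMXL/LAMOST sample):

```python
import numpy as np

# Hypothetical QSO sample: positions (deg) plus measured proper motions (mas/yr).
# True QSO proper motion is ~0, so the per-bin mean estimates the local
# catalog zero-point offset.
rng = np.random.default_rng(1)
n = 5000
ra = rng.uniform(0.0, 30.0, n)
dec = rng.uniform(-15.0, 15.0, n)
offset = 2.0                                # simulated systematic offset (mas/yr)
pmra = offset + rng.normal(0.0, 4.0, n)     # per-object noise dominates

# Average in 3 deg x 3 deg bins, as in the record.
ra_edges = np.arange(0.0, 33.0, 3.0)
dec_edges = np.arange(-15.0, 18.0, 3.0)
sums, _, _ = np.histogram2d(ra, dec, bins=[ra_edges, dec_edges], weights=pmra)
counts, _, _ = np.histogram2d(ra, dec, bins=[ra_edges, dec_edges])
correction = sums / np.maximum(counts, 1)   # mean pmRA offset per bin (mas/yr)
```

    Subtracting `correction` from catalog proper motions in each bin would tie them to the extragalactic (absolute) frame.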

  6. Bias to experience approaching motion in a three-dimensional virtual environment.

    PubMed

    Lewis, Clifford F; McBeath, Michael K

    2004-01-01

    We used two-frame apparent motion in a three-dimensional virtual environment to test whether observers had biases to experience approaching or receding motion in depth. Observers viewed a tunnel of tiles receding in depth that moved ambiguously either toward or away from them. We found that observers exhibited biases to experience approaching motion. The strength of the bias decreased when stimuli pointed away, but the size of the display screen had no effect. Tests with diamond-shaped tiles that varied in the degree of pointing asymmetry resulted in a linear trend in which the bias was strongest for stimuli pointing toward the viewer and weakest for stimuli pointing away. We show that the overall bias to experience approaching motion is consistent with a computational strategy of matching corresponding features between adjacent foreshortened stimuli in consecutive visual frames. We conclude that there are both adaptational and geometric reasons to favor the experience of approaching motion.

  7. Fast left ventricle tracking in CMR images using localized anatomical affine optical flow

    NASA Astrophysics Data System (ADS)

    Queirós, Sandro; Vilaça, João. L.; Morais, Pedro; Fonseca, Jaime C.; D'hooge, Jan; Barbosa, Daniel

    2015-03-01

    In daily cardiology practice, assessment of left ventricular (LV) global function using non-invasive imaging remains central for the diagnosis and follow-up of patients with cardiovascular diseases. Despite the different methodologies currently accessible for LV segmentation in cardiac magnetic resonance (CMR) images, a fast and complete LV delineation is still not widely available for routine use. In this study, a localized anatomically constrained affine optical flow method is proposed for fast and automatic LV tracking throughout the full cardiac cycle in short-axis CMR images. Starting from an automatically delineated LV in the end-diastolic frame, the endocardial and epicardial boundaries are propagated by estimating the motion between adjacent cardiac phases using optical flow. In order to reduce the computational burden, the motion is only estimated in an anatomical region of interest around the tracked boundaries and subsequently integrated into a local affine motion model. Such localized estimation enables the capture of complex motion patterns, while still being spatially consistent. The method was validated on 45 CMR datasets taken from the 2009 MICCAI LV segmentation challenge. The proposed approach proved to be robust and efficient, with an average distance error of 2.1 mm and a correlation with reference ejection fraction of 0.98 (1.9 +/- 4.5%). Moreover, it proved to be fast, taking 5 seconds for the tracking of a full 4D dataset (30 ms per image). Overall, a novel fast, robust and accurate LV tracking methodology was proposed, enabling accurate assessment of relevant global function cardiac indices, such as volumes and ejection fraction.

  8. Motion of a Point Mass in a Rotating Disc: A Quantitative Analysis of the Coriolis and Centrifugal Force

    NASA Astrophysics Data System (ADS)

    Haddout, Soufiane

    2016-06-01

    In Newtonian mechanics, non-inertial reference frames are handled by generalizing Newton's laws to arbitrary reference frames. While this approach simplifies some problems, it often provides little physical insight into the motion, in particular into the effects of the Coriolis force. The fictitious Coriolis force can be used by anyone in that frame of reference to explain why objects follow curved paths. In this paper, a mathematical solution based on differential equations in a non-inertial reference frame is used to study different types of motion in a rotating system. In addition, experimental data measured on a turntable device with a video camera in a mechanics laboratory were compared with the mathematical solution for the parabolically curved case by solving non-linear least-squares problems with the Levenberg-Marquardt and Gauss-Newton algorithms.
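
    The rotating-frame equations underlying such an analysis can be checked numerically: in the disc frame a force-free particle obeys a' = -2Ω × v' - Ω × (Ω × r'), and integrating this must reproduce the lab-frame straight line expressed in rotating coordinates. A sketch with our own parameter values (not the paper's turntable data):

```python
import numpy as np

omega = 1.5  # disc angular speed (rad/s), rotation about the vertical axis

def rotating_frame_accel(r, v):
    """Fictitious accelerations in the rotating frame (2-D, rotation about z):
    Coriolis -2*Omega x v' plus centrifugal -Omega x (Omega x r')."""
    coriolis = np.array([2.0 * omega * v[1], -2.0 * omega * v[0]])
    centrifugal = omega ** 2 * r
    return coriolis + centrifugal

def rk4_step(r, v, dt):
    def deriv(y):
        return np.concatenate([y[2:], rotating_frame_accel(y[:2], y[2:])])
    y = np.concatenate([r, v])
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * dt * k1)
    k3 = deriv(y + 0.5 * dt * k2)
    k4 = deriv(y + dt * k3)
    y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[:2], y[2:]

# Force-free particle: a straight line in the lab, a curved path on the disc.
r_lab0 = np.array([1.0, 0.0])
v_lab = np.array([0.0, 0.5])
r = r_lab0.copy()
v = v_lab - np.array([-omega * r[1], omega * r[0]])  # v' = v - Omega x r at t=0

dt, t_end = 1e-3, 2.0
for _ in range(int(round(t_end / dt))):
    r, v = rk4_step(r, v, dt)

# Analytic check: rotate the lab-frame position by -omega*t into the disc frame.
r_lab = r_lab0 + v_lab * t_end
c, s = np.cos(-omega * t_end), np.sin(-omega * t_end)
r_exact = np.array([c * r_lab[0] - s * r_lab[1], s * r_lab[0] + c * r_lab[1]])
err = float(np.linalg.norm(r - r_exact))
```

    The integrated rotating-frame trajectory agrees with the transformed straight-line motion to numerical precision.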

  9. Non-rigid multi-frame registration of cell nuclei in live cell fluorescence microscopy image data.

    PubMed

    Tektonidis, Marco; Kim, Il-Han; Chen, Yi-Chun M; Eils, Roland; Spector, David L; Rohr, Karl

    2015-01-01

    The analysis of the motion of subcellular particles in live cell microscopy images is essential for understanding biological processes within cells. For accurate quantification of the particle motion, compensation of the motion and deformation of the cell nucleus is required. We introduce a non-rigid multi-frame registration approach for live cell fluorescence microscopy image data. Compared to existing approaches using pairwise registration, our approach exploits information from multiple consecutive images simultaneously to improve the registration accuracy. We present three intensity-based variants of the multi-frame registration approach and we investigate two different temporal weighting schemes. The approach has been successfully applied to synthetic and live cell microscopy image sequences, and an experimental comparison with non-rigid pairwise registration has been carried out. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area.

    PubMed

    Fetsch, Christopher R; Wang, Sentao; Gu, Yong; Deangelis, Gregory C; Angelaki, Dora E

    2007-01-17

    Heading perception is a complex task that generally requires the integration of visual and vestibular cues. This sensory integration is complicated by the fact that these two modalities encode motion in distinct spatial reference frames (visual, eye-centered; vestibular, head-centered). Visual and vestibular heading signals converge in the primate dorsal subdivision of the medial superior temporal area (MSTd), a region thought to contribute to heading perception, but the reference frames of these signals remain unknown. We measured the heading tuning of MSTd neurons by presenting optic flow (visual condition), inertial motion (vestibular condition), or a congruent combination of both cues (combined condition). Static eye position was varied from trial to trial to determine the reference frame of tuning (eye-centered, head-centered, or intermediate). We found that tuning for optic flow was predominantly eye-centered, whereas tuning for inertial motion was intermediate but closer to head-centered. Reference frames in the two unimodal conditions were rarely matched in single neurons and uncorrelated across the population. Notably, reference frames in the combined condition varied as a function of the relative strength and spatial congruency of visual and vestibular tuning. This represents the first investigation of spatial reference frames in a naturalistic, multimodal condition in which cues may be integrated to improve perceptual performance. Our results compare favorably with the predictions of a recent neural network model that uses a recurrent architecture to perform optimal cue integration, suggesting that the brain could use a similar computational strategy to integrate sensory signals expressed in distinct frames of reference.

  11. Motion Versus Fixed Distraction of the Joint in the Treatment of Ankle Osteoarthritis

    PubMed Central

    Saltzman, Charles L.; Hillis, Stephen L.; Stolley, Mary P.; Anderson, Donald D.; Amendola, Annunziato

    2012-01-01

    Background: Initial reports have shown the efficacy of fixed distraction for the treatment of ankle osteoarthritis. We hypothesized that allowing ankle motion during distraction would result in significant improvements in outcomes compared with distraction without ankle motion. Methods: We conducted a prospective randomized controlled trial comparing the outcomes for patients with advanced ankle osteoarthritis who were managed with anterior osteophyte removal and either (1) fixed ankle distraction or (2) ankle distraction permitting joint motion. Thirty-six patients were randomized to treatment with either fixed distraction or distraction with motion. The patients were followed for twenty-four months after frame removal. The Ankle Osteoarthritis Scale (AOS) was the main outcome variable. Results: Two years after frame removal, subjects in both groups showed significant improvement compared with the status before treatment (p < 0.02 for both groups). The motion-distraction group had significantly better AOS scores than the fixed-distraction group at twenty-six, fifty-two, and 104 weeks after frame removal (p < 0.01 at each time point). At 104 weeks, the motion-distraction group had an overall mean improvement of 56.6% in the AOS score, whereas the fixed-distraction group had a mean improvement of 22.9% (p < 0.01). Conclusion: Distraction improved the patient-reported outcomes of treatment of ankle osteoarthritis. Adding ankle motion to distraction showed an early and sustained beneficial effect on outcome. Level of Evidence: Therapeutic Level I. See Instructions for Authors for a complete description of levels of evidence. PMID:22637202

  12. A Single Camera Motion Capture System for Human-Computer Interaction

    NASA Astrophysics Data System (ADS)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  13. Instantaneous progression reference frame for calculating pelvis rotations: Reliable and anatomically-meaningful results independent of the direction of movement.

    PubMed

    Kainz, Hans; Lloyd, David G; Walsh, Henry P J; Carty, Christopher P

    2016-05-01

    In motion analysis, pelvis angles are conventionally calculated as the rotations between the pelvis and laboratory reference frame. This approach assumes that the participant's motion is along the anterior-posterior laboratory reference frame axis. When this assumption is violated, interpretation of pelvis angles becomes problematic. In this paper a new approach for calculating pelvis angles based on the rotations between the pelvis and an instantaneous progression reference frame was introduced. At every time-point, the tangent to the trajectory of the midpoint of the pelvis projected into the horizontal plane of the laboratory reference frame was used to define the anterior-posterior axis of the instantaneous progression reference frame. This new approach combined with the rotation-obliquity-tilt rotation sequence was compared to the conventional approach using the rotation-obliquity-tilt and tilt-obliquity-rotation sequences. Four different movement tasks performed by eight healthy adults were analysed. The instantaneous progression reference frame approach was the only approach that showed reliable and anatomically meaningful results for all analysed movement tasks (mean root-mean-square differences below 5°, differences in pelvis angles at pre-defined gait events below 10°). Both rotation sequences combined with the conventional approach led to unreliable results as soon as the participant's motion was not along the anterior-posterior laboratory axis (mean root-mean-square differences up to 30°, differences in pelvis angles at pre-defined gait events up to 45°). The instantaneous progression reference frame approach enables the gait analysis community to analyse pelvis angles for movements that do not follow the anterior-posterior axis of the laboratory reference frame. Copyright © 2016 Elsevier B.V. All rights reserved.
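
    The construction described above, taking the tangent to the horizontal-plane trajectory of the pelvis midpoint as the anterior-posterior axis, can be sketched as follows (illustrative only; the paper's marker set and sign conventions are not reproduced):

```python
import numpy as np

def progression_heading(midpelvis_xyz):
    """Heading (rad) of the instantaneous progression frame at each time point:
    the anterior-posterior axis is the tangent to the pelvis-midpoint trajectory
    projected into the horizontal (x-y) plane of the laboratory frame."""
    xy = midpelvis_xyz[:, :2]          # project out the vertical component
    tangent = np.gradient(xy, axis=0)  # central-difference tangent
    return np.arctan2(tangent[:, 1], tangent[:, 0])

# Walking a quarter circle violates the fixed-lab-axis assumption, but the
# instantaneous heading simply tracks the direction of travel (t + pi/2 here).
t = np.linspace(0.0, np.pi / 2, 200)
path = np.stack([np.cos(t), np.sin(t), np.full_like(t, 0.9)], axis=1)
heading = progression_heading(path)
```

    Pelvis rotations would then be expressed relative to a frame whose forward axis has this heading at each sample, instead of a fixed laboratory axis.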

  14. Local motion compensation in image sequences degraded by atmospheric turbulence: a comparative analysis of optical flow vs. block matching methods

    NASA Astrophysics Data System (ADS)

    Huebner, Claudia S.

    2016-10-01

    As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion creating geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability to be employed for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
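
    Of the block matching methods compared in this record, Exhaustive Search is the simplest baseline; a minimal sketch (block size, search range and the synthetic test frames are our choices):

```python
import numpy as np

def block_match_es(ref, cur, block=8, search=4):
    """Exhaustive Search (ES) block matching: for each block of `ref`, find the
    integer displacement within +/- `search` px that minimizes the sum of
    absolute differences (SAD) in `cur`. Returns a (dy, dx) vector per block."""
    H, W = ref.shape
    vecs = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            patch = ref[y:y + block, x:x + block]
            best_sad, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue  # candidate block falls outside the frame
                    sad = float(np.abs(cur[yy:yy + block, xx:xx + block] - patch).sum())
                    if sad < best_sad:
                        best_sad, best_v = sad, (dy, dx)
            vecs[by, bx] = best_v
    return vecs

# Two synthetic frames related by a (2, 3) px shift.
rng = np.random.default_rng(2)
f0 = rng.random((32, 32))
f1 = np.roll(f0, (2, 3), axis=(0, 1))
mv = block_match_es(f0, f1)
```

    ARPS and correlation-based search reduce the candidate set that this inner loop visits; dense optical flow replaces the per-block integer vector with a sub-pixel vector per pixel.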

  15. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
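
    The HIP index itself is defined in the paper; as an illustration of an entropy-based patch heterogeneity score in the same spirit (this is a stand-in, not the published definition):

```python
import numpy as np

def mean_patch_entropy(img, patch=8, bins=16):
    """Mean Shannon entropy of intensities within non-overlapping patches:
    a simple entropy-based heterogeneity score per frame (illustrative
    stand-in for the HIP index, not its published formula)."""
    H, W = img.shape
    ents = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            hist, _ = np.histogram(img[y:y + patch, x:x + patch],
                                   bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            ents.append(-np.sum(p * np.log2(p)))
    return float(np.mean(ents))

rng = np.random.default_rng(3)
flat = np.full((64, 64), 0.5)   # homogeneous frame -> score 0
noisy = rng.random((64, 64))    # heterogeneous frame -> high score
```

    Evaluating such a score for every frame yields a per-sequence curve whose extrema are natural key-frame candidates.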

  16. Multisensor data fusion across time and space

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.

    2014-06-01

    Field measurement campaigns typically deploy numerous sensors having different sampling characteristics for spatial, temporal, and spectral domains. Data analysis and exploitation is made more difficult and time consuming as the sample data grids between sensors do not align. This report summarizes our recent effort to demonstrate feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially-available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as a basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new flow vector estimates using those from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base, including a high-speed visible camera and a coarser-resolution LWIR camera.
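
    The temporal upsampling step, warping the slower sensor's frame along flow vectors derived from the faster sensor, can be sketched in one dimension (a simplified stand-in; the reported system operates on 2-D imagery with pyramidal flow):

```python
import numpy as np

def flow_upsample_1d(frame0, flow, t):
    """Synthesize the frame at fractional time t in [0, 1] by backward-warping
    frame0 along the per-pixel displacement `flow` (linear interpolation).
    A 1-D stand-in for flow-based temporal upsampling of 2-D imagery."""
    x = np.arange(frame0.size, dtype=float)
    return np.interp(x - t * flow, x, frame0)

# A Gaussian blob displaced by +6 px between two frames of the slow sensor;
# the synthesized half-time frame should place it 3 px along the path.
x = np.arange(128, dtype=float)
frame0 = np.exp(-0.5 * ((x - 40.0) / 4.0) ** 2)
flow = np.full(128, 6.0)  # displacement field from frame0 toward frame1
mid = flow_upsample_1d(frame0, flow, 0.5)
```

    Repeating this for each intermediate time stamp of the fast sensor puts both streams on a common time base.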

  17. Motion compensation in digital subtraction angiography using graphics hardware.

    PubMed

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can often reduce the diagnostic value of this technique. Automated, fast and accurate motion compensation is therefore required. To cope with this requirement, we first examine a method explicitly designed to detect local motions in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to have mapped an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that computation with integer precision may already be sufficient.

  18. A Typological Approach to Translation of English and Chinese Motion Events

    ERIC Educational Resources Information Center

    Deng, Yu; Chen, Huifang

    2012-01-01

    English and Chinese are satellite-framed languages in which Manner is usually incorporated with Motion in the verb and Path is denoted by the satellite. Based on Talmy's theory of motion event and typology, the research probes into translation of English and Chinese motion events and finds that: (1) Translation of motion events in English and…

  19. Real-time circumferential mapping catheter tracking for motion compensation in atrial fibrillation ablation procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Bourier, Felix; Wimmer, Andreas; Koch, Martin; Kiraly, Atilla; Liao, Rui; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert

    2012-02-01

    Atrial fibrillation (AFib) has been identified as a major cause of stroke. Radiofrequency catheter ablation has become an increasingly important treatment option, especially when drug therapy fails. Navigation under X-ray can be enhanced by using augmented fluoroscopy. It renders overlay images from pre-operative 3-D data sets which are then fused with X-ray images to provide more details about the underlying soft-tissue anatomy. Unfortunately, these fluoroscopic overlay images are compromised by respiratory and cardiac motion. Various methods to deal with motion have been proposed. To meet clinical demands, they have to be fast. Methods providing a processing frame rate of 3 frames-per-second (fps) are considered suitable for interventional electrophysiology catheter procedures if an acquisition frame rate of 2 fps is used. Unfortunately, when working at a processing rate of 3 fps, the delay until the actual motion compensated image can be displayed is about 300 ms. More recent algorithms can achieve frame rates of up to 20 fps, which reduces the lag to 50 ms. By using a novel approach involving a 3-D catheter model, catheter segmentation and a distance transform, we can speed up motion compensation to 25 fps which results in a display delay of only 40 ms on a standard workstation for medical applications. Our method uses a constrained 2-D/3-D registration to perform catheter tracking, and it obtained a 2-D tracking error of 0.61 mm.
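
    The distance transform used to accelerate the 2-D/3-D registration above can be illustrated with a classic two-pass chamfer approximation (our implementation, not the authors'): once computed from a catheter segmentation, the distance to the nearest segmented pixel can be looked up in constant time for every projected model point.

```python
import numpy as np

def chamfer_distance_transform(mask):
    """Two-pass (3-4) chamfer distance transform: approximate distance of every
    pixel to the nearest foreground pixel of `mask`, in pixel units."""
    INF = 10 ** 6
    H, W = mask.shape
    d = np.where(mask, 0, INF).astype(np.int64)
    # forward raster pass (top-left to bottom-right)
    for y in range(H):
        for x in range(W):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 3)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x - 1] + 4)
                if x < W - 1:
                    d[y, x] = min(d[y, x], d[y - 1, x + 1] + 4)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 3)
    # backward raster pass (bottom-right to top-left)
    for y in range(H - 1, -1, -1):
        for x in range(W - 1, -1, -1):
            if y < H - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 3)
                if x < W - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x + 1] + 4)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y + 1, x - 1] + 4)
            if x < W - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 3)
    return d / 3.0  # normalize so a unit axial step costs 1 px

mask = np.zeros((16, 16), dtype=bool)
mask[8, 8] = True
dist = chamfer_distance_transform(mask)
```

    A registration cost can then sum `dist` over the pixels of the projected 3-D catheter model, which is far cheaper than a nearest-neighbor search per iteration.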

  20. An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel and fully unsupervised approach for foreground object co-localization and segmentation of unconstrained videos. We first compute both the actual edges and motion boundaries of the video frames, and then align them by their HOG feature maps. Then, by filling the occlusions generated by the aligned edges, we obtain more precise masks of the foreground object. Such motion-based masks can be derived as the motion-based likelihood. Moreover, the color-based likelihood is adopted for the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.

  1. Vision and dual IMU integrated attitude measurement system

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoting; Sun, Changku; Wang, Peng; Lu, Huang

    2018-01-01

    To determine the relative attitude between two space objects on a rocking base, an integrated system based on vision and dual IMUs (inertial measurement units) is built up. The measurement system fuses the attitude information from vision with the angular rates of the dual IMUs by an extended Kalman filter (EKF) to obtain the relative attitude. One IMU (master) is attached to the measured motion object and the other (slave) to the rocking base. As the output of an inertial sensor is relative to the inertial frame, the angular rate of the master IMU includes not only the motion of the measured object relative to the inertial frame but also that of the rocking base relative to the inertial frame, where the latter can be seen as redundant, harmful movement information for relative attitude measurement between the measured object and the rocking base. The slave IMU here assists in removing the motion of the rocking base relative to the inertial frame from the master IMU. The proposed integrated attitude measurement system is tested on a practical experimental platform, and experimental results with superior precision and reliability show the feasibility and effectiveness of the proposed system.
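
    The role of the slave IMU, removing the base motion from the master IMU's angular rate, reduces to a frame-aligned subtraction once the relative orientation is known (from the vision attitude). A hypothetical numeric sketch; the EKF fusion itself is not shown, and all rates below are invented:

```python
import numpy as np

def relative_rate(omega_master, omega_slave, R_ms):
    """Angular rate of the object with respect to the rocking base, expressed in
    master axes: the master IMU's inertial rate minus the slave (base) IMU's
    inertial rate re-expressed in master axes via the rotation R_ms."""
    return omega_master - R_ms @ omega_slave

# Hypothetical rates: base rocks about its x-axis at 0.2 rad/s; the object
# additionally spins at 0.5 rad/s about z. Axes assumed aligned (R_ms = I).
R_ms = np.eye(3)
omega_base = np.array([0.2, 0.0, 0.0])
omega_rel_true = np.array([0.0, 0.0, 0.5])
omega_master = R_ms @ omega_base + omega_rel_true  # angular rates add
rel = relative_rate(omega_master, omega_base, R_ms)
```

    In the full system, `rel` would drive the EKF propagation while the vision measurement corrects the accumulated attitude.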

  2. Is Nubia plate rigid? A geodetic study of the relative motion of different cratonic areas within Africa.

    NASA Astrophysics Data System (ADS)

    Njoroge, M. W.; Malservisi, R.; Hugentobler, U.; Mokhtari, M.; Voytenko, D.

    2014-12-01

    Plate rigidity is one of the main paradigms of plate tectonics and a fundamental assumption in the definition of a global reference frame such as ITRF. Although still far from optimal, the increased GPS instrumentation of the African region can allow us to understand how rigid one of the major plates can be. The presence of a diffuse band of seismicity, the Cameroon volcanic line, the Pan-African Kalahari orogenic belt and the East Africa Rift suggests the possibility of relative motion among the different regions within Nubia. The study focuses on the rigidity of the Nubia plate. We divide the plate into three regions: Western (West Africa craton plus Nigeria), Central (approximately the region of the Congo craton) and Southern (Kalahari craton plus South Africa), and we utilize the Euler vector formulation to study internal rigidity and eventual relative motion. Developing five different reference frames with different combinations of the 3 regions, we try to understand the presence of relative motion between the 3 cratons and thus the stability of the Nubia plate as a whole. All available GPS stations from the regions are used separately or combined in the creation of the reference frames. We utilize continuous stations with at least 2.5 years of data between 1994 and 2014. Given the small relative velocities, it is important to eliminate eventual biases in the analysis and to have a good estimation of the uncertainties of the observed velocities. For this reason we perform our analysis using both the Bernese and GIPSY-OASIS codes to generate time series for each station. Velocities and relative uncertainties are analyzed using the Allan variance of rates technique, taking colored noise into account. An analysis of the color of the noise as a function of latitude and climatic region is also performed for each time series. Preliminary results indicate a slight counterclockwise motion of the West Africa craton with respect to the South Africa Kalahari and the South Africa Kalahari-Congo cratons. In addition, a possible counterclockwise rotation of the South African Kalahari craton with respect to the Nubian plate, caused by southward propagation of the East Africa Rift, is compatible with the observations. However, the results are at the limit of statistical significance, and within the current velocity uncertainties the Nubia plate appears as a single rigid plate.
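
    The Euler vector formulation used in this record predicts the velocity of a site on a rigid plate as v = ω × r; deviations of observed GPS velocities from this prediction are what reveal internal deformation. A sketch with a hypothetical pole (not the study's estimates):

```python
import numpy as np

def plate_velocity_mm_per_yr(pole_lat, pole_lon, rate_deg_myr, site_lat, site_lon):
    """Horizontal speed predicted at a site by a plate Euler vector, v = omega x r,
    on a spherical Earth of radius 6371 km. Returns mm/yr."""
    R = 6371e3  # Earth radius, m

    def unit(lat, lon):
        la, lo = np.radians(lat), np.radians(lon)
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])

    omega = np.radians(rate_deg_myr) * unit(pole_lat, pole_lon)  # rad/Myr
    v = np.cross(omega, R * unit(site_lat, site_lon))            # m/Myr
    return float(np.linalg.norm(v)) * 1e-3                       # m/Myr -> mm/yr

# Hypothetical Euler pole at (50 N, 80 W) rotating 0.27 deg/Myr; site at (0, 20 E).
speed = plate_velocity_mm_per_yr(50.0, -80.0, 0.27, 0.0, 20.0)
```

    Fitting one such ω to each cratonic subset of stations and comparing the poles is the essence of the five-reference-frame test described above.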

  3. Multiframe video coding for improved performance over wireless channels.

    PubMed

    Budagavi, M; Gibson, J D

    2001-01-01

    We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder exploits the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained with the single-frame BMC (SF-BMC) approach used, for example, in the base-level H.263 codec. The MF-BMC approach also has an inherent ability to overcome some transmission errors and is thus more robust than the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base-level H.263 coder and are found to be more robust than the base-level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.
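
    The core MF-BMC idea, searching several previous frames and randomizing the choice of reference frame among near-optimal candidates, can be sketched as follows (block size, search radius, and the near-optimal tolerance are illustrative assumptions, not values from the paper):

```python
import numpy as np

def best_match(block, ref, top, left, radius=4):
    """Exhaustive SAD block-matching search in `ref` around (top, left)."""
    h, w = block.shape
    best = (np.inf, 0, 0)  # (sad, dy, dx)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + h, x:x + w] - block).sum()
            if sad < best[0]:
                best = (sad, dy, dx)
    return best

def mf_bmc(block, refs, top, left, rng):
    """Multi-frame BMC: search each previous frame, then pick at random
    among near-optimal candidates for error robustness."""
    cands = [(f,) + best_match(block, ref, top, left) for f, ref in enumerate(refs)]
    best_sad = min(c[1] for c in cands)
    near = [c for c in cands if c[1] <= 1.05 * best_sad + 1e-9]
    return near[rng.integers(len(near))]  # (frame, sad, dy, dx)
```

    Randomizing among equally good reference frames spreads the prediction dependency across frames, which is the mechanism the Markov chain analysis credits for limiting error propagation.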

  4. Sensory integration of a light touch reference in human standing balance.

    PubMed

    Assländer, Lorenz; Smith, Craig P; Reynolds, Raymond F

    2018-01-01

    In upright stance, light touch of a space-stationary touch reference reduces spontaneous sway. Moving the reference evokes sway responses which exhibit non-linear behavior that has been attributed to sensory reweighting. Reweighting refers to a change in the relative contribution of sensory cues signaling body sway in space and light touch cues signaling finger position with respect to the body. Here we test the hypothesis that the sensory fusion process involves a transformation of light touch signals into the same reference frame as other sensory inputs encoding body sway in space, or vice versa. Eight subjects lightly gripped a robotic manipulandum which moved in a circular arc around the ankle joint. A pseudo-randomized motion sequence with broad spectral characteristics was applied at three amplitudes. The stimulus was presented at two different heights and therefore different radial distances, which were matched in terms of angular motion. However, the higher stimulus evoked a significantly larger sway response, indicating that the response was not matched to stimulus angular motion. Instead, the body sway response was strongly related to the horizontal translation of the manipulandum. The results suggest that light touch is integrated as the horizontal distance between body COM and the finger. The data were well explained by a model with one feedback loop minimizing changes in horizontal COM-finger distance. The model further includes a second feedback loop estimating the horizontal finger motion and correcting the first loop when the touch reference is moving. The second loop includes the predicted transformation of sensory signals into the same reference frame and a non-linear threshold element that reproduces the non-linear sway responses, thus providing a mechanism that can explain reweighting.

  5. JTRF2014, the JPL Kalman filter and smoother realization of the International Terrestrial Reference System

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Chin, Toshio M.; Gross, Richard S.; Heflin, Michael B.; Parker, Jay W.; Soja, Benedikt S.; van Dam, Tonie; Wu, Xiaoping

    2017-10-01

    We present and discuss JTRF2014, the Terrestrial Reference Frame (TRF) that the Jet Propulsion Laboratory constructed by combining the space-geodetic inputs from very long baseline interferometry (VLBI), satellite laser ranging (SLR), Global Navigation Satellite Systems (GNSS), and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) submitted for the realization of ITRF2014. Determined through a Kalman filter and Rauch-Tung-Striebel smoother assimilating position observations, Earth orientation parameters, and local ties, JTRF2014 is a subsecular, time series-based TRF whose origin is at the quasi-instantaneous center of mass (CM) as sensed by SLR and whose scale is determined by the quasi-instantaneous VLBI and SLR scales. The dynamical evolution of the positions accounts for a secular motion term and annual and semiannual periodic modes. Site-dependent variances based on the analysis of loading displacements induced by mass redistributions of terrestrial fluids have been used to control the extent of random walk adopted in the combination. With differences in the amplitude of the annual signal within the range 0.5-0.8 mm, the JTRF2014-derived center of mass-to-center of network (CM-CN) vector is in remarkable agreement with the geocenter motion obtained via spectral inversion of GNSS, Gravity Recovery and Climate Experiment (GRACE) observations, and modeled ocean bottom pressure from Estimating the Circulation and Climate of the Ocean (ECCO). Comparisons of JTRF2014 to ITRF2014 suggest high-level consistency, with time derivatives of the Helmert transformation parameters connecting the two frames below 0.18 mm/yr and weighted root-mean-square differences of the polar motion (polar motion rate) on the order of 30 μas (17 μas/d).

  6. Sensory integration of a light touch reference in human standing balance

    PubMed Central

    Smith, Craig P.; Reynolds, Raymond F.

    2018-01-01

    In upright stance, light touch of a space-stationary touch reference reduces spontaneous sway. Moving the reference evokes sway responses which exhibit non-linear behavior that has been attributed to sensory reweighting. Reweighting refers to a change in the relative contribution of sensory cues signaling body sway in space and light touch cues signaling finger position with respect to the body. Here we test the hypothesis that the sensory fusion process involves a transformation of light touch signals into the same reference frame as other sensory inputs encoding body sway in space, or vice versa. Eight subjects lightly gripped a robotic manipulandum which moved in a circular arc around the ankle joint. A pseudo-randomized motion sequence with broad spectral characteristics was applied at three amplitudes. The stimulus was presented at two different heights and therefore different radial distances, which were matched in terms of angular motion. However, the higher stimulus evoked a significantly larger sway response, indicating that the response was not matched to stimulus angular motion. Instead, the body sway response was strongly related to the horizontal translation of the manipulandum. The results suggest that light touch is integrated as the horizontal distance between body COM and the finger. The data were well explained by a model with one feedback loop minimizing changes in horizontal COM-finger distance. The model further includes a second feedback loop estimating the horizontal finger motion and correcting the first loop when the touch reference is moving. The second loop includes the predicted transformation of sensory signals into the same reference frame and a non-linear threshold element that reproduces the non-linear sway responses, thus providing a mechanism that can explain reweighting. PMID:29874252

  7. Motion Detection in Ultrasound Image-Sequences Using Tensor Voting

    NASA Astrophysics Data System (ADS)

    Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka

    2008-05-01

    Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system adopting a combination of coded excitation and synthetic aperture focusing techniques. In our method, the frame rate of the system at a distance of 150 mm reaches 5000 frames/s. A sparse array and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce the artifacts, we have examined the application of tensor voting to the imaging method, which adopts both coded excitation and synthetic aperture techniques. In this study, the basis for applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated ultrasound three-dimensional image sequences.

  8. Response of high-rise and base-isolated buildings to a hypothetical M w 7.0 blind thrust earthquake

    USGS Publications Warehouse

    Heaton, T.H.; Hall, J.F.; Wald, D.J.; Halling, M.W.

    1995-01-01

    High-rise flexible-frame buildings are commonly considered to be resistant to shaking from the largest earthquakes. In addition, base isolation has become increasingly popular for critical buildings that should still function after an earthquake. How will these two types of buildings perform if a large earthquake occurs beneath a metropolitan area? To answer this question, we simulated the near-source ground motions of a Mw 7.0 thrust earthquake and then mathematically modeled the response of a 20-story steel-frame building and a 3-story base-isolated building. The synthesized ground motions were characterized by large displacement pulses (up to 2 meters) and large ground velocities. These ground motions caused large deformation and possible collapse of the frame building, and they required exceptional measures in the design of the base-isolated building if it was to remain functional.

  9. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation, and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow but also for improved performance. Generally, in optical flow computation, filtering is applied initially to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade, we determined the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach to estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
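
    The role of the Gaussian pre-filter can be illustrated with a minimal single-window Lucas-Kanade estimator in NumPy (a sketch of the textbook algorithm, not the authors' implementation; the σ value and the test pattern are arbitrary assumptions):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian pre-filter: the smoothing step discussed above."""
    r = int(3 * sigma)
    xs = np.arange(-r, r + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, tmp)

def lucas_kanade(im1, im2, sigma=1.0):
    """Single-window Lucas-Kanade: solve [Ix Iy] d = -It by least squares."""
    im1, im2 = gaussian_blur(im1, sigma), gaussian_blur(im2, sigma)
    Iy, Ix = np.gradient(im1)
    It = im2 - im1
    m = (slice(4, -4), slice(4, -4))  # drop borders affected by filter padding
    A = np.stack([Ix[m].ravel(), Iy[m].ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It[m].ravel(), rcond=None)
    return d  # (dx, dy) in pixels
```

    The pyramidal version applies the same solve coarse-to-fine; here a single window suffices to show how smoothing stabilizes the gradient-based normal equations.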

  10. Postglacial Rebound from VLBI Geodesy: On Establishing Vertical Reference

    NASA Technical Reports Server (NTRS)

    Argus, Donald F.

    1996-01-01

    Difficulty in establishing a reference frame fixed to the earth's interior complicates the measurement of the vertical (radial) motions of the surface. I propose that a useful reference frame for vertical motions is that found by minimizing differences between vertical motions observed with VLBI [Ma and Ryan] and predictions of postglacial rebound [Peltier]. The optimal translation of the geocenter is 1.7 mm/yr toward 36°N, 111°E when determined from the motions of 10 VLBI sites. This translation gives a better fit of observations to predictions than does the VLBI reference frame used by Ma and Ryan, but the improvement is statistically insignificant. The root mean square of the differences decreases 20% to 0.73 mm/yr and the correlation coefficient increases from 0.76 to 0.87. Postglacial rebound is evident in the uplift of points in Sweden and Ontario that were beneath the ancient ice sheets of Fennoscandia and Canada, and in the subsidence of points in the northeastern U.S., Germany, and Alaska that were around the periphery of the ancient ice sheets.
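
    The frame choice described here amounts to a small least-squares problem: find the geocenter translation rate t such that adding its radial projection t · n̂ at each site to the rebound predictions best matches the observed vertical rates. A hedged sketch on synthetic data (not the actual site motions):

```python
import numpy as np

def fit_geocenter_translation(nhat, v_obs, v_pred):
    """Least-squares translation rate t (3-vector) minimizing the misfit
    between observed and predicted vertical rates: v_obs ~ v_pred + nhat @ t."""
    resid = np.asarray(v_obs) - np.asarray(v_pred)
    t, *_ = np.linalg.lstsq(np.asarray(nhat), resid, rcond=None)
    return t
```

    With ten sites, as in the study, the 3-parameter fit is strongly overdetermined, which is why the statistical significance of the improvement can be assessed at all.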

  11. Measuring mandibular motions

    NASA Technical Reports Server (NTRS)

    Dimeff, J.; Rositano, S.; Taylor, R. C.

    1977-01-01

    Mandibular motion along three axes is measured by three motion transducers on a floating yoke that rests against the mandible. The system includes electronics to provide a variety of outputs for data display and processing. A head frame is strapped to the test subject's skull to provide a fixed point of reference for the transducers.

  12. GPS Imaging of Global Vertical Land Motion for Sea Level Studies

    NASA Astrophysics Data System (ADS)

    Hammond, W. C.; Blewitt, G.; Hamlington, B. D.

    2015-12-01

    Coastal vertical land motion contributes to the signal of local relative sea level change. Moreover, understanding global sea level change requires understanding local sea level rise at many locations around Earth. It is therefore essential to understand the regional secular vertical land motion attributable to mantle flow, tectonic deformation, glacial isostatic adjustment, postseismic viscoelastic relaxation, groundwater basin subsidence, elastic rebound from groundwater unloading, or other processes that can change the geocentric height of tide gauges anchored to the land. These changes can affect inferences of global sea level rise and should be taken into account in global projections. We present new results of GPS imaging of vertical land motion across most of Earth's continents, including its ice-free coastlines around North and South America, Europe, Australia, Japan, and parts of Africa and Indonesia. These images are based on data from many independent, open-access, globally distributed, continuously recording GPS networks comprising over 13,500 stations. The data are processed in our system to obtain solutions aligned to the International Terrestrial Reference Frame (ITRF08). To generate images of vertical rate we apply the Median Interannual Difference Adjusted for Skewness (MIDAS) algorithm to the vertical time series to obtain robust non-parametric estimates with realistic uncertainties. We estimate the vertical land motion at the locations of 1420 tide gauges using Delaunay-based geographic interpolation with an empirically derived distance weighting function and median spatial filtering. The resulting image is insensitive to outliers and steps in the GPS time series, omits short-wavelength features attributable to unstable stations or unrepresentative rates, and emphasizes long-wavelength mantle-driven vertical rates.
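
    At its heart, the MIDAS estimator is the median of slopes computed between data pairs separated by approximately one year, which cancels annual signals exactly and resists outliers and steps. A simplified sketch (the published algorithm adds pair-selection rules and scaled-MAD uncertainty estimates):

```python
import numpy as np

def midas_rate(t, y, tol=0.01):
    """Median of one-year-apart slopes: annual terms cancel and
    outliers or steps barely move the median."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    slopes = []
    for i in range(len(t)):
        j = int(np.argmin(np.abs(t - (t[i] + 1.0))))  # epoch nearest 1 yr later
        if j > i and abs(t[j] - t[i] - 1.0) < tol:
            slopes.append((y[j] - y[i]) / (t[j] - t[i]))
    return float(np.median(slopes))
```

    On a weekly series with a 3 mm/yr trend, a 2 mm annual cycle, and a gross outlier, the median of the one-year slopes still recovers the trend, which is the robustness property cited above.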

  13. Arterial Mechanical Motion Estimation Based on a Semi-Rigid Body Deformation Approach

    PubMed Central

    Guzman, Pablo; Hamarneh, Ghassan; Ros, Rafael; Ros, Eduardo

    2014-01-01

    Arterial motion estimation in ultrasound (US) sequences is a hard task due to noise and discontinuities in the signal derived from US artifacts. Characterizing the mechanical properties of the artery is a promising novel imaging technique for diagnosing various cardiovascular pathologies and a new way of obtaining relevant clinical information, such as determining the absence of the dicrotic peak or estimating the Augmentation Index (AIx), the arterial pressure, or the arterial stiffness. One of the advantages of US imaging is the non-invasive nature of the technique, unlike invasive techniques such as intravascular ultrasound (IVUS) or angiography, plus the relatively low cost of US units. In this paper, we propose a semi-rigid deformable method based on soft-body dynamics, realized by a hybrid motion approach based on cross-correlation and optical flow methods, to quantify the elasticity of the artery. We evaluate and compare the different techniques (for instance, optical flow methods) on which our approach is based. The goal of this comparative study is to identify the best model to use and the impact of the accuracy of these different stages on the proposed method. To this end, an exhaustive assessment was conducted to decide which model is the most appropriate for registering the variation of the arterial diameter over time. Our experiments involved a total of 1620 evaluations within nine simulated sequences of 84 frames each and the estimation of four error metrics. We conclude that our proposed approach obtains approximately 2.5 times higher accuracy than conventional state-of-the-art techniques. PMID:24871987
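
    The cross-correlation stage of such a hybrid tracker reduces to a one-line estimator: the frame-to-frame displacement of an echo line is the lag of the correlation peak. A 1-D sketch (the artery-wall signal here is a synthetic pulse, and only integer shifts are recovered; subpixel refinement would interpolate around the peak):

```python
import numpy as np

def xcorr_shift(sig, ref):
    """Integer displacement of `sig` relative to `ref` from the
    peak of their full (mean-removed) cross-correlation."""
    c = np.correlate(sig - sig.mean(), ref - ref.mean(), mode='full')
    return int(np.argmax(c)) - (len(ref) - 1)
```

    Tracking the two wall echoes this way over a sequence gives the diameter-versus-time curve from which elasticity indices such as the AIx are derived.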

  14. Development of a computerized intervertebral motion analysis of the cervical spine for clinical application.

    PubMed

    Piché, Mathieu; Benoît, Pierre; Lambert, Julie; Barrette, Virginie; Grondin, Emmanuelle; Martel, Julie; Paré, Amélie; Cardin, André

    2007-01-01

    The objective of this study was to develop a measurement method that could be implemented in chiropractic for the evaluation of angular and translational intervertebral motion of the cervical spine. Flexion-extension radiographs were digitized with a scanner at a 1:1 ratio and imported into software that allows segmental motion measurements. The measurements were obtained by selecting the most anteroinferior point and the most posteroinferior point of a vertebral body (anterior and posterior arch, respectively, for C1), with the origin of the reference frame set at the most posteroinferior point of the vertebral body below. The same procedure was performed for both the flexion and extension radiographs, and the coordinates of the 2 points were used to calculate the angular movement and the translation between the 2 vertebrae. This method provides a measure of intervertebral angular and translational movement. It uses a different reference frame for each joint instead of the same reference frame for all joints and thus provides a measure of motion in the plane of each articulation. The calculated values obtained are comparable to those of other studies on intervertebral motion and support further development to validate the method. The present study proposes a computerized procedure to evaluate intervertebral motion of the cervical spine. This procedure needs to be validated with a reliability study but could provide a valuable tool for doctors of chiropractic and further spinal research.
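
    The angular part of the measurement can be sketched from the digitized landmark coordinates: each vertebral body's orientation is the angle of the line joining its posteroinferior and anteroinferior points, and the angular motion is the change in that orientation between the flexion and extension films. A minimal illustration (the coordinates are hypothetical, and the per-joint reference frame described in the study is reduced here to a single angle difference):

```python
import numpy as np

def body_angle(p_post, p_ant):
    """Orientation (degrees) of a vertebral body from its digitized
    posteroinferior and anteroinferior points."""
    dx, dy = np.subtract(p_ant, p_post)
    return np.degrees(np.arctan2(dy, dx))

def angular_motion(flex_post, flex_ant, ext_post, ext_ant):
    """Angular intervertebral motion between flexion and extension films."""
    return body_angle(ext_post, ext_ant) - body_angle(flex_post, flex_ant)
```

    In the study's scheme the same computation is repeated with the origin placed on the vertebra below, so each joint's angle is expressed in the plane of its own articulation.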

  15. Efficient use of bit planes in the generation of motion stimuli

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Stone, Leland S.

    1988-01-01

    The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
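
    The quadrature trick at the heart of this technique is the identity cos(ωt)cos(kx) + sin(ωt)sin(kx) = cos(kx − ωt): counterphase-modulating the two stored phase components with lookup-table weights in temporal quadrature yields a single drifting grating. A sketch (continuous-valued here, ignoring the 1-bit halftoning step):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
sf = 4                       # spatial frequency, cycles per image
cos_phase = np.cos(sf * x)   # component stored in one set of bit planes
sin_phase = np.sin(sf * x)   # component stored in another set

def frame(t, tf=1.0, contrast=1.0):
    """One animation frame: the lookup table weights the two components
    by cos(wt) and sin(wt), producing a grating drifting at tf cycles/s."""
    w = 2 * np.pi * tf
    return contrast * (np.cos(w * t) * cos_phase + np.sin(w * t) * sin_phase)
```

    Because contrast and temporal frequency enter only through the per-frame weights, several such gratings stored in separate bit-plane pairs can be summed into a plaid with independently variable component speeds, exactly as described above.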

  16. KALREF—A Kalman filter and time series approach to the International Terrestrial Reference Frame realization

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoping; Abbondanza, Claudio; Altamimi, Zuheir; Chin, T. Mike; Collilieux, Xavier; Gross, Richard S.; Heflin, Michael B.; Jiang, Yan; Parker, Jay W.

    2015-05-01

    The current International Terrestrial Reference Frame is based on a piecewise linear site motion model and realized by reference epoch coordinates and velocities for a global set of stations. Although linear motions due to tectonic plates and glacial isostatic adjustment dominate geodetic signals, at today's millimeter precisions, nonlinear motions due to earthquakes, volcanic activity, ice mass losses, sea level rise, hydrological changes, and other processes become significant. Monitoring these (sometimes rapid) changes calls for consistent and precise realization of the terrestrial reference frame (TRF) quasi-instantaneously. Here, we use a Kalman filter and smoother approach to combine time series from four space geodetic techniques to realize an experimental TRF through weekly time series of geocentric coordinates. In addition to secular, periodic, and stochastic components for station coordinates, the Kalman filter state variables also include daily Earth orientation parameters and transformation parameters from the input data frames to the combined TRF. Local tie measurements among colocated stations are used at their known or nominal epochs of observation, with comotion constraints applied to almost all colocated stations. The filter/smoother approach unifies different geodetic time series in a single geocentric frame. Fragmented and multitechnique tracking records at colocation sites are bridged together to form longer and more coherent motion time series. While the time series approach to the TRF reflects the reality of a changing Earth more closely than the linear approximation model, the filter/smoother is computationally powerful and flexible enough to facilitate incorporation of other data types and more advanced characterization of the stochastic behavior of geodetic time series.
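
    The filter/smoother machinery can be illustrated on a single station coordinate: the state carries a position with a random-walk disturbance plus a secular rate, a forward Kalman filter assimilates the weekly positions, and a Rauch-Tung-Striebel backward pass smooths them. A 1-D sketch with synthetic data (the process and noise levels are illustrative, not values from KALREF):

```python
import numpy as np

def kalman_rts(y, dt=1.0, r_obs=0.01, q_rw=1e-6):
    """Forward Kalman filter + RTS smoother; state = [position, rate],
    random-walk disturbance on position, constant secular rate."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = np.diag([q_rw * dt, 0.0])
    R = np.array([[r_obs]])
    n = len(y)
    xf, Pf = np.zeros((n, 2)), np.zeros((n, 2, 2))
    xp, Pp = np.zeros((n, 2)), np.zeros((n, 2, 2))
    x, P = np.zeros(2), np.eye(2) * 1e4          # diffuse prior
    for k in range(n):
        if k > 0:                                # predict
            x, P = F @ x, F @ P @ F.T + Q
        xp[k], Pp[k] = x, P
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + (K @ (y[k] - H @ x)).ravel()     # measurement update
        P = (np.eye(2) - K @ H) @ P
        xf[k], Pf[k] = x, P
    xs = xf.copy()
    for k in range(n - 2, -1, -1):               # RTS backward sweep
        G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
    return xs
```

    The smoother uses all epochs for every estimate, which is why a time-series TRF can track nonlinear motion while still yielding a well-determined secular rate.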

  17. Motion compensation for fully 4D PET reconstruction using PET superset data

    NASA Astrophysics Data System (ADS)

    Verhaeghe, J.; Gravel, P.; Mio, R.; Fukasawa, R.; Rosa-Neto, P.; Soucy, J.-P.; Thompson, C. J.; Reader, A. J.

    2010-07-01

    Fully 4D PET image reconstruction is receiving increasing research interest due to its ability to significantly reduce spatiotemporal noise in dynamic PET imaging. However, thus far in the literature, the important issue of correcting for subject head motion has not been considered. Specifically, as a direct consequence of using temporally extensive basis functions, a single instance of movement propagates to impair the reconstruction of multiple time frames, even if no further movement occurs in those frames. Existing 3D motion compensation strategies have not yet been adapted to 4D reconstruction, and as such the benefits of 4D algorithms have not yet been reaped in a clinical setting where head movement undoubtedly occurs. This work addresses this need, developing a motion compensation method suitable for fully 4D reconstruction methods which exploits an optical tracking system to measure the head motion along with PET superset data to store the motion compensated data. List-mode events are histogrammed as PET superset data according to the measured motion, and a specially devised normalization scheme for motion compensated reconstruction from the superset data is required. This work proceeds to propose the corresponding time-dependent normalization modifications which are required for a major class of fully 4D image reconstruction algorithms (those which use linear combinations of temporal basis functions). Using realistically simulated as well as real high-resolution PET data from the HRRT, we demonstrate both the detrimental impact of subject head motion in fully 4D PET reconstruction and the efficacy of our proposed modifications to 4D algorithms. Benefits are shown both for the individual PET image frames as well as for parametric images of tracer uptake and volume of distribution for 18F-FDG obtained from Patlak analysis.
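
    A toy 1-D analogue of the superset idea: list-mode events are shifted back to a reference grid according to the measured per-frame motion, while a companion normalization array records how long each reference bin remained measurable, the role played by the time-dependent normalization the method requires. (Rigid integer shifts in one dimension stand in for the real 3-D rigid-body case; all numbers are illustrative.)

```python
import numpy as np

def histogram_superset(events, shifts, frame_len, nbins):
    """Histogram list-mode (time, position) events into a motion-corrected
    reference grid, plus a per-bin sensitive-time normalization."""
    counts = np.zeros(nbins)
    norm = np.zeros(nbins)
    for t, x in events:
        f = int(t // frame_len)          # which motion frame this event is in
        xr = x - shifts[f]               # undo the measured motion
        if 0 <= xr < nbins:
            counts[int(xr)] += 1
    for s in shifts:                     # time each reference bin was in view:
        lo, hi = max(0, -s), min(nbins, nbins - s)
        norm[int(lo):int(hi)] += frame_len
    return counts, norm
```

    Dividing counts by norm gives motion-corrected data whose sensitivity is consistent across bins that drifted in and out of the field of view, which is the correction the 4D temporal basis functions rely on.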

  18. Motion compensation for fully 4D PET reconstruction using PET superset data.

    PubMed

    Verhaeghe, J; Gravel, P; Mio, R; Fukasawa, R; Rosa-Neto, P; Soucy, J-P; Thompson, C J; Reader, A J

    2010-07-21

    Fully 4D PET image reconstruction is receiving increasing research interest due to its ability to significantly reduce spatiotemporal noise in dynamic PET imaging. However, thus far in the literature, the important issue of correcting for subject head motion has not been considered. Specifically, as a direct consequence of using temporally extensive basis functions, a single instance of movement propagates to impair the reconstruction of multiple time frames, even if no further movement occurs in those frames. Existing 3D motion compensation strategies have not yet been adapted to 4D reconstruction, and as such the benefits of 4D algorithms have not yet been reaped in a clinical setting where head movement undoubtedly occurs. This work addresses this need, developing a motion compensation method suitable for fully 4D reconstruction methods which exploits an optical tracking system to measure the head motion along with PET superset data to store the motion compensated data. List-mode events are histogrammed as PET superset data according to the measured motion, and a specially devised normalization scheme for motion compensated reconstruction from the superset data is required. This work proceeds to propose the corresponding time-dependent normalization modifications which are required for a major class of fully 4D image reconstruction algorithms (those which use linear combinations of temporal basis functions). Using realistically simulated as well as real high-resolution PET data from the HRRT, we demonstrate both the detrimental impact of subject head motion in fully 4D PET reconstruction and the efficacy of our proposed modifications to 4D algorithms. Benefits are shown both for the individual PET image frames as well as for parametric images of tracer uptake and volume of distribution for (18)F-FDG obtained from Patlak analysis.

  19. MO-G-18C-05: Real-Time Prediction in Free-Breathing Perfusion MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, H; Liu, W; Ruan, D

    Purpose: The aim is to minimize frame-wise difference errors caused by respiratory motion and eliminate the need for breath-holds in magnetic resonance imaging (MRI) sequences with long acquisitions and repeat times (TRs). The technique is being applied to perfusion MRI using arterial spin labeling (ASL). Methods: Respiratory motion prediction (RMP) using navigator echoes was implemented in ASL. A least-squares method was used to extract the respiratory motion information from the 1D navigator. A generalized artificial neural network (ANN) with three layers was developed to simultaneously predict 10 time points forward in time and correct for respiratory motion during MRI acquisition. During the training phase, the parameters of the ANN were optimized to minimize the aggregated prediction error based on acquired navigator data. During real-time prediction, the trained ANN was applied to the most recent estimated displacement trajectory to determine in real time the amount of spatial correction required. Results: The respiratory motion information extracted by the least-squares method accurately represents the navigator profiles, with a normalized chi-square value of 0.037 ± 0.015 across the training phase. During the 60-second training phase, the ANN successfully learned the respiratory motion pattern from the navigator training data. During real-time prediction, the ANN received displacement estimates and predicted the motion over a 1.0 s prediction window. The ANN prediction was able to provide corrections for different respiratory states (i.e., inhalation/exhalation) during real-time scanning with a mean absolute error of < 1.8 mm. Conclusion: A new technique enabling free-breathing acquisition during MRI is being developed. A generalized ANN has demonstrated its efficacy in predicting a continuum of motion profiles for volumetric imaging based on navigator inputs.
Future work will enhance the robustness of the ANN and verify its effectiveness with human subjects. Research supported by National Institutes of Health National Cancer Institute Grant R01 CA159471-01.
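
    The prediction stage can be imitated with a much simpler stand-in for the ANN: a ridge-regression map from the last few displacement samples to the next several, trained once on the navigator data and then applied in real time. (This linear sketch replaces the paper's three-layer network; the window lengths and regularization weight are arbitrary assumptions.)

```python
import numpy as np

def fit_predictor(disp, hist=10, horizon=10, lam=1e-3):
    """Learn W mapping the last `hist` samples to the next `horizon`
    samples: one multi-output ridge regression over sliding windows."""
    X, Y = [], []
    for i in range(hist, len(disp) - horizon + 1):
        X.append(disp[i - hist:i])
        Y.append(disp[i:i + horizon])
    X, Y = np.asarray(X), np.asarray(Y)
    return np.linalg.solve(X.T @ X + lam * np.eye(hist), X.T @ Y)

def predict(W, recent):
    """Real-time step: apply the trained map to the newest window."""
    return np.asarray(recent) @ W
```

    For quasi-periodic breathing traces even this linear predictor extrapolates well over a short horizon; the ANN's advantage lies in handling irregular breathing patterns.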

  20. SU-E-J-234: Application of a Breathing Motion Model to ViewRay Cine MR Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connell, D. P.; Thomas, D. H.; Dou, T. H.

    2015-06-15

    Purpose: A respiratory motion model previously used to generate breathing-gated CT images was used with cine MR images. Accuracy and predictive ability of the in-plane models were evaluated. Methods: Sagittal-plane cine MR images of a patient undergoing treatment on a ViewRay MRI/radiotherapy system were acquired before and during treatment. Images were acquired at 4 frames/second with 3.5 × 3.5 mm resolution and a slice thickness of 5 mm. The first cine frame was deformably registered to the following frames. The superior/inferior component of the tumor centroid position was used as a breathing surrogate. Deformation vectors and surrogate measurements were used to determine motion model parameters. Model error was evaluated, and subsequent treatment cines were predicted from breathing surrogate data. A simulated CT cine was created by generating breathing-gated volumetric images at 0.25 second intervals along the measured breathing trace, selecting a sagittal slice, and downsampling to the resolution of the MR cines. A motion model was built using the first half of the simulated cine data. Model accuracy and error in predicting the remaining frames of the cine were evaluated. Results: The mean difference between model-predicted and deformably registered lung tissue positions for the 28 second preview MR cine acquired before treatment was 0.81 ± 0.30 mm. The model was used to predict two minutes of the subsequent treatment cine with a mean accuracy of 1.59 ± 0.63 mm. Conclusion: In-plane motion models were built using MR cine images and evaluated for accuracy and ability to predict future respiratory motion from breathing surrogate measurements. Examination of long-term predictive ability is ongoing. The technique was applied to simulated CT cines for further validation, and the authors are currently investigating use of in-plane models to update pre-existing volumetric motion models used for generation of breathing-gated CT planning images.
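
    A surrogate-driven motion model of this kind can be reduced to its simplest linear form: per-voxel displacement = mean + coefficient × (surrogate − surrogate mean), fit by least squares over the training frames and then evaluated for new surrogate values. A sketch on synthetic data (the published models typically add higher-order surrogate terms and operate on full deformation vector fields):

```python
import numpy as np

def fit_motion_model(dvfs, surrogate):
    """Per-voxel linear fit of displacement against the breathing surrogate."""
    D = np.asarray(dvfs, float)        # shape (n_frames, n_voxels)
    s = np.asarray(surrogate, float)
    s0 = s.mean()
    sc = s - s0
    mean = D.mean(axis=0)
    w = sc @ (D - mean) / (sc @ sc)    # least-squares slope per voxel
    return mean, w, s0

def predict_dvf(mean, w, s0, s_new):
    """Predict the displacement field for a new surrogate value."""
    return mean + w * (s_new - s0)
```

    Predicting future cine frames then only requires the real-time surrogate (here, the tumor centroid's superior/inferior position), not a new registration.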

  1. Linear State-Space Representation of the Dynamics of Relative Motion, Based on Restricted Three Body Dynamics

    NASA Technical Reports Server (NTRS)

    Luquette, Richard J.; Sanner, Robert M.

    2004-01-01

    Precision Formation Flying is an enabling technology for a variety of proposed space-based observatories, including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), the associated MAXIM pathfinder mission, Stellar Imager (SI) and the Terrestrial Planet Finder (TPF). An essential element of the technology is the control algorithm, requiring a clear understanding of the dynamics of relative motion. This paper examines the dynamics of relative motion in the context of the Restricted Three Body Problem (RTBP). The natural dynamics of relative motion are presented in their full nonlinear form. Motivated by the desire to apply linear control methods, the dynamics equations are linearized and presented in state-space form. The stability properties are explored for regions in proximity to each of the libration points in the Earth/Moon-Sun rotating frame. The dynamics of relative motion are presented in both the inertial and rotating coordinate frames.

  2. Use of full-frame sensors for height estimation of volcanic clouds

    NASA Astrophysics Data System (ADS)

    Zakšek, Klemen; Schilling, Klaus; Tzschichholz, Tristan; Hort, Matthias

    2017-04-01

    The quality of ash dispersion prediction is limited by the lack of high-quality information on eruption source parameters. One of the most important ones is the Volcanic Cloud Top Height (VCTH). Because of well-known uncertainties of currently operational methods, photogrammetric methods can be used to improve VCTH estimates. But even photogrammetric methods have difficulties because appropriate data are lacking. Here we propose an application of full-frame sensors that are available on the new generation of small satellites. A full-frame sensor makes a 2D image in a fraction of a second and it does not require a satellite to move, as a typical push-broom sensor does. In addition, full-frame sensors usually provide a better spatial resolution than most operational satellite instruments, resulting in a shorter minimal distance between satellites to produce a suitable parallax. From such images, it is possible to reconstruct a volcanic plume in 3D using the Structure from Motion (SfM) methodology with the following workflow. 1) Convert images to grayscale and use a local adaptive Wallis filter to enhance texture in the images. 2) Use SfM software for sparse 3D reconstruction, which includes pose estimation of the cameras, feature detection, and feature matching. 3) Densify the 3D reconstruction, create a mesh, and optionally cover it with texture. 4) Use a 7-parameter similarity transformation (based on the satellite's orbit) to geolocate the results. The procedure has been tested with photos of the 2009 Sarychev Peak eruption taken by astronauts on the International Space Station (ISS) as part of the NASA Crew Earth Observations program. The estimated VCTH values are slightly larger than previously published estimates. The presented work is a pre-study for the forthcoming NetSat mission (planned launch at the end of 2017) and TOM mission (planned launch in 2019). 
These missions will provide VCTH based on simultaneous observations of clouds from different satellites: 4 (NetSat) and 3 (TOM) CubeSats flying in a string-of-pearls or cartwheel formation. Both missions will fly at an altitude of 600 km with a separation of 100 km between adjacent satellites.
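The parallax-to-height conversion underlying such formations can be sketched from simple geometry (two satellites at the same altitude H separated by baseline B, near-nadir viewing; similar triangles give parallax p = B·h/(H−h), so h = p·H/(B+p)). This is a textbook approximation, not a formula taken from the paper:

```python
def cloud_top_height(parallax_m, baseline_m, sat_altitude_m):
    """Cloud-top height from apparent ground displacement (parallax)
    between two same-altitude satellites: h = p * H / (B + p)."""
    return parallax_m * sat_altitude_m / (baseline_m + parallax_m)

# Using the mission geometry quoted above (600 km altitude, 100 km baseline),
# a 2 km parallax corresponds to a cloud top near 11.8 km.
h = cloud_top_height(2000.0, 100_000.0, 600_000.0)
```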

  3. Diaphragm motion quantification in megavoltage cone-beam CT projection images.

    PubMed

    Chen, Mingqing; Siochi, R Alfredo

    2010-05-01

    To quantify diaphragm motion in megavoltage (MV) cone-beam computed tomography (CBCT) projections. User-identified ipsilateral hemidiaphragm apex (IHDA) positions in two full exhale and inhale frames were used to create bounding rectangles in all other frames of a CBCT scan. The bounding rectangle was enlarged to create a region of interest (ROI). ROI pixels were associated with a cost function: the product of image gradients and a gradient direction matching function for an ideal hemidiaphragm determined from 40 training sets. A dynamic Hough transform (DHT) models a hemidiaphragm as a contour made of two parabola segments with a common vertex (the IHDA). The images within the ROIs are transformed into Hough space, where a contour's Hough value is the sum of the cost function over all contour pixels. Dynamic programming finds the optimal trajectory of the common vertex in Hough space subject to motion constraints between frames, and an active contour model further refines the result. Interpolated ray tracing converts the positions to room coordinates. Root-mean-square (RMS) distances between these positions and those resulting from an expert's identification of the IHDA were determined for 21 Siemens MV CBCT scans. Computation time on a 2.66 GHz CPU was 30 s. The average craniocaudal RMS error was 1.38 +/- 0.67 mm. While much larger errors occurred in a few near-sagittal frames on one patient's scans, adjustments to algorithm constraints corrected them. The DHT based algorithm can compute IHDA trajectories immediately prior to radiation therapy on a daily basis using localization MVCBCT projection data. This has potential for calibrating external motion surrogates against diaphragm motion.
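The dynamic-programming step, finding the optimal vertex trajectory subject to inter-frame motion constraints, can be sketched Viterbi-style. The 1-D candidate-position grid and scoring below are deliberate simplifications of the paper's Hough-space search:

```python
import numpy as np

def optimal_trajectory(cost, max_step):
    """Find the position sequence maximizing total score subject to
    |p[t] - p[t-1]| <= max_step (Viterbi-style dynamic programming).
    cost: (n_frames, n_positions) score of each candidate per frame."""
    n_frames, n_pos = cost.shape
    score = cost[0].copy()
    back = np.zeros((n_frames, n_pos), dtype=int)
    for t in range(1, n_frames):
        new = np.full(n_pos, -np.inf)
        for p in range(n_pos):
            lo, hi = max(0, p - max_step), min(n_pos, p + max_step + 1)
            q = lo + int(np.argmax(score[lo:hi]))  # best reachable predecessor
            new[p] = score[q] + cost[t, p]
            back[t, p] = q
        score = new
    # Backtrack from the best final position.
    path = [int(np.argmax(score))]
    for t in range(n_frames - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```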

  4. Effect of respiratory motion on internal radiation dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Tianwu; Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch; Geneva Neuroscience Center, Geneva University, Geneva CH-1205

    Purpose: Estimation of the radiation dose to internal organs is essential for the assessment of radiation risks and benefits to patients undergoing diagnostic and therapeutic nuclear medicine procedures including PET. Respiratory motion induces notable internal organ displacement, which influences the absorbed dose for external exposure to radiation. However, to the authors' knowledge, the effect of respiratory motion on internal radiation dosimetry has never been reported before. Methods: Thirteen computational models representing the adult male at different respiratory phases corresponding to the normal respiratory cycle were generated from the 4D dynamic XCAT phantom. Monte Carlo calculations were performed using the MCNP transport code to estimate the specific absorbed fractions (SAFs) of monoenergetic photons/electrons, the S-values of common positron-emitting radionuclides (C-11, N-13, O-15, F-18, Cu-64, Ga-68, Rb-82, Y-86, and I-124), and the absorbed dose of 18F-fluorodeoxyglucose (18F-FDG) in 28 target regions for both the static (average of dynamic frames) and dynamic phantoms. Results: The self-absorbed dose for most organs/tissues is only slightly influenced by respiratory motion. However, for the lung, the self-absorbed SAF is about 11.5% higher at the peak exhale phase than the peak inhale phase for photon energies above 50 keV. The cross-absorbed dose is obviously affected by respiratory motion for many combinations of source-target pairs. The cross-absorbed S-values for the heart contents irradiating the lung are about 7.5% higher in the peak exhale phase than the peak inhale phase for different positron-emitting radionuclides. For 18F-FDG, organ absorbed doses are less influenced by respiratory motion. Conclusions: Respiration-induced volume variations of the lungs and the repositioning of internal organs affect the self-absorbed dose of the lungs and cross-absorbed dose between organs in internal radiation dosimetry. 
The dynamic anatomical model provides more accurate internal radiation dosimetry estimates for the lungs and abdominal organs based on realistic modeling of respiratory motion. This work also contributes to a better understanding of model-induced uncertainties in internal radiation dosimetry.
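The S-values mentioned above follow the MIRD formalism: S(target←source) is a sum over a radionuclide's emissions of yield × energy × specific absorbed fraction. A minimal sketch, with a hypothetical function name and inputs (a per-emission yield/energy list and matching SAFs):

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV

def s_value_gy_per_decay(emissions, saf_per_kg):
    """MIRD-style S-value in Gy per decay.
    emissions: list of (yield per decay, energy in MeV);
    saf_per_kg: matching specific absorbed fractions (kg^-1)."""
    mev_per_kg = sum(y * E * phi for (y, E), phi in zip(emissions, saf_per_kg))
    return mev_per_kg * MEV_TO_J
```

The motion dependence reported in the abstract enters through the SAFs, which change as organ volumes and source-target distances vary over the respiratory cycle.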

  5. Resolving Fast, Confined Diffusion in Bacteria with Image Correlation Spectroscopy.

    PubMed

    Rowland, David J; Tuson, Hannah H; Biteen, Julie S

    2016-05-24

    By following single fluorescent molecules in a microscope, single-particle tracking (SPT) can measure diffusion and binding on the nanometer and millisecond scales. Still, although SPT can at its limits characterize the fastest biomolecules as they interact with subcellular environments, this measurement may require advanced illumination techniques such as stroboscopic illumination. Here, we address the challenge of measuring fast subcellular motion by instead analyzing single-molecule data with spatiotemporal image correlation spectroscopy (STICS) with a focus on measurements of confined motion. Our SPT and STICS analysis of simulations of the fast diffusion of confined molecules shows that image blur affects both STICS and SPT, and we find biased diffusion rate measurements for STICS analysis in the limits of fast diffusion and tight confinement due to fitting STICS correlation functions to a Gaussian approximation. However, we determine that with STICS, it is possible to correctly interpret the motion that blurs single-molecule images without advanced illumination techniques or fast cameras. In particular, we present a method to overcome the bias due to image blur by properly estimating the width of the correlation function by directly calculating the correlation function variance instead of using the typical Gaussian fitting procedure. Our simulation results are validated by applying the STICS method to experimental measurements of fast, confined motion: we measure the diffusion of cytosolic mMaple3 in living Escherichia coli cells at 25 frames/s under continuous illumination to illustrate the utility of STICS in an experimental parameter regime for which in-frame motion prevents SPT and tight confinement of fast diffusion precludes stroboscopic illumination. 
Overall, our application of STICS to freely diffusing cytosolic protein in small cells extends the utility of single-molecule experiments to the regime of fast confined diffusion without requiring advanced microscopy techniques. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
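The paper's bias fix, measuring the correlation-function width by a direct variance calculation rather than a Gaussian fit, can be sketched in 1-D; the clipping and normalization choices here are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def correlation_width_by_variance(corr, lags):
    """Estimate the width (sigma) of a spatial correlation function by
    directly computing its variance over lag, instead of fitting a
    Gaussian whose shape assumption biases fast/confined diffusion."""
    w = np.clip(corr, 0.0, None)      # treat the correlation as a weight
    w = w / w.sum()
    mean = np.sum(w * lags)
    var = np.sum(w * (lags - mean) ** 2)
    return np.sqrt(var)
```

For a truly Gaussian correlation the two approaches agree; the variance route stays unbiased when blur and confinement make the correlation non-Gaussian.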

  6. Five-dimensional motion compensation for respiratory and cardiac motion with cone-beam CT of the thorax region

    NASA Astrophysics Data System (ADS)

    Sauppe, Sebastian; Hahn, Andreas; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc

    2016-03-01

    We propose an adapted method of our previously published five-dimensional (5D) motion compensation (MoCo) algorithm, developed for micro-CT imaging of small animals, to provide for the first time motion artifact-free 5D cone-beam CT (CBCT) images from a conventional flat detector-based CBCT scan of clinical patients. Image quality of retrospectively respiratory- and cardiac-gated volumes from flat detector CBCT scans is degraded by severe sparse-projection artifacts. These artifacts further complicate motion estimation, as it is required for MoCo image reconstruction. For high quality 5D CBCT images at the same x-ray dose and the same number of projections as today's 3D CBCT, we developed a double MoCo approach based on motion vector fields (MVFs) for respiratory and cardiac motion. In a first step our already published four-dimensional (4D) artifact-specific cyclic motion-compensation (acMoCo) approach is applied to compensate for the respiratory patient motion. With this information a cyclic phase-gated deformable heart registration algorithm is applied to the respiratory motion-compensated 4D CBCT data, thus resulting in cardiac MVFs. We apply these MVFs on double-gated images, and thereby respiratory and cardiac motion-compensated 5D CBCT images are obtained. Our 5D MoCo approach was used to process patient data acquired with the TrueBeam 4D CBCT system (Varian Medical Systems). Our double MoCo approach turned out to be very efficient and removed nearly all streak artifacts by making use of 100% of the projection data for each reconstructed frame. The 5D MoCo patient data show fine details and no motion blurring, even in regions close to the heart where motion is fastest.

  7. New architecture for dynamic frame-skipping transcoder.

    PubMed

    Fung, Kai-Tat; Chan, Yui-Lam; Siu, Wan-Chi

    2002-01-01

    Transcoding is a key technique for reducing the bit rate of a previously compressed video signal. A high transcoding ratio may result in an unacceptable picture quality when the full frame rate of the incoming video bitstream is used. Frame skipping is often used as an efficient scheme to allocate more bits to the representative frames, so that an acceptable quality for each frame can be maintained. However, a skipped frame must still be decompressed completely, because it may serve as a reference frame for reconstructing nonskipped frames. The newly quantized discrete cosine transform (DCT) coefficients of the prediction errors need to be re-computed for the nonskipped frame with reference to the previous nonskipped frame; this can create undesirable complexity as well as introduce re-encoding errors. In this paper, we propose new algorithms and a novel architecture for frame-rate reduction to improve picture quality and to reduce complexity. The proposed architecture operates mainly in the DCT domain to achieve a transcoder with low complexity. With the direct addition of DCT coefficients and an error compensation feedback loop, re-encoding errors are reduced significantly. Furthermore, we propose a frame-rate control scheme which can dynamically adjust the number of skipped frames according to the incoming motion vectors and the re-encoding errors due to transcoding, such that the decoded sequence can have smooth motion as well as better transcoded pictures. Experimental results show that, as compared to the conventional transcoder, the new architecture for frame-skipping transcoder is more robust, produces fewer requantization errors, and has reduced computational complexity.
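The "direct addition of DCT coefficients" exploits linearity of the DCT: for zero-motion blocks, the residual of the next kept frame with respect to the last kept frame is simply the sum of the intermediate prediction residuals. A minimal sketch (the demonstration stays in the pixel domain; by linearity the identical addition holds on DCT coefficients, and motion-compensated blocks need the paper's extra compensation step):

```python
import numpy as np

def merge_skipped_residuals(residuals):
    """Sum the prediction residuals of skipped frames so the next kept
    frame references the last kept frame directly. Because the DCT is
    linear, this addition can be performed on DCT coefficients without
    a full decode/re-encode of the skipped frames."""
    return np.sum(residuals, axis=0)
```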

  8. ERP-Variations on Time Scales Between Hours and Months Derived From GNSS Observations

    NASA Astrophysics Data System (ADS)

    Weber, R.; Englich, S.; Mendes Cerveira, P.

    2007-05-01

    Current observations gained by the space geodetic techniques, especially VLBI, GPS and SLR, allow for the determination of Earth Rotation Parameters (ERPs - polar motion, UT1/LOD) with unprecedented accuracy and temporal resolution. This presentation focuses on contributions to ERP recovery provided by satellite navigation systems (primarily GPS). The IGS (International GNSS Service), for example, currently provides daily polar motion with an accuracy of less than 0.1 mas and LOD estimates with an accuracy of a few microseconds. To study more rapid variations in polar motion and LOD, we first established a high-resolution (hourly) ERP time series from GPS observation data of the IGS network covering the year 2005. The calculations were carried out by means of the Bernese GPS Software V5.0, considering observations from a subset of 113 fairly stable stations out of the IGS05 reference frame sites. From these ERP time series the amplitudes of the major diurnal and semidiurnal variations caused by ocean tides are estimated. After correcting the series for ocean tides, the remaining geodetically observed excitation is compared with variations of atmospheric excitation (AAM). To study the sensitivity of the estimates with respect to the applied mapping function, we applied both the widely used NMF (Niell Mapping Function) and the VMF1 (Vienna Mapping Function 1). In addition, based on computations covering two months in 2005, the potential improvement due to the use of additional GLONASS data will be discussed.
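Estimating diurnal and semidiurnal tidal amplitudes from an hourly ERP series is, at its core, a least-squares fit of sinusoids at known tidal periods. A sketch with a few illustrative periods (K1- and M2/S2-like; real analyses fit the full set of tidal constituents):

```python
import numpy as np

def fit_tidal_amplitudes(t_hours, series, periods=(23.93, 12.42, 12.00)):
    """Least-squares estimate of sinusoid amplitudes at the given periods
    (hours) from an evenly or unevenly sampled ERP series."""
    cols = [np.ones_like(t_hours)]
    for P in periods:
        w = 2 * np.pi / P
        cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, series, rcond=None)
    # Amplitude of each constituent from its cosine/sine coefficients.
    return {P: float(np.hypot(x[1 + 2*i], x[2 + 2*i]))
            for i, P in enumerate(periods)}
```

Note that nearby periods (e.g. 12.42 h vs. 12.00 h) are only separable if the series is long enough, which is why a full year of hourly ERPs is used in the study.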

  9. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

    An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
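The simple frame differencing stage named above can be sketched in a few lines (the threshold value and function interface are hypothetical):

```python
import numpy as np

def detect_motion(prev, curr, threshold):
    """Simple frame differencing: flag pixels whose absolute intensity
    change between consecutive frames exceeds a threshold. Returns the
    per-pixel motion mask and whether any motion was detected."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    mask = diff > threshold
    return mask, bool(mask.any())
```

As the abstract notes, such differencing alone is insufficient in cluttered scenes, which is why the tracked blobs are passed on to the expert system for identification.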

  10. High-speed adaptive optics line scan confocal retinal imaging for human eye

    PubMed Central

    Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Purpose Continuous and rapid eye movement causes significant intraframe distortion in adaptive optics high resolution retinal imaging. To minimize this artifact, we developed a high speed adaptive optics line scan confocal retinal imaging system. Methods A high speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate the wave aberration of the human eye’s optics. The spatial resolution and signal to noise ratio were assessed in a model eye and in the living human eye. The improvement of imaging fidelity was estimated by reduction of intra-frame distortion of retinal images acquired in the living human eyes with frame rates at 30 frames/second (FPS), 100 FPS, and 200 FPS. Results The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions We demonstrated the feasibility of acquiring high resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458

  11. High-speed adaptive optics line scan confocal retinal imaging for human eye.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Continuous and rapid eye movement causes significant intraframe distortion in adaptive optics high resolution retinal imaging. To minimize this artifact, we developed a high speed adaptive optics line scan confocal retinal imaging system. A high speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate the wave aberration of the human eye's optics. The spatial resolution and signal to noise ratio were assessed in a model eye and in the living human eye. The improvement of imaging fidelity was estimated by reduction of intra-frame distortion of retinal images acquired in the living human eyes with frame rates at 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.

  12. Geocenter Motion Derived from the JTRF2014 Combination

    NASA Astrophysics Data System (ADS)

    Abbondanza, C.; Chin, T. M.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; van Dam, T. M.; Wu, X.

    2016-12-01

    JTRF2014 represents the JPL Terrestrial Reference Frame (TRF) recently obtained as a result of the combination of the space-geodetic reprocessed inputs to the ITRF2014. Based upon a Kalman filter and smoother approach, JTRF2014 assimilates station positions and Earth-Orientation Parameters (EOPs) from GNSS, VLBI, SLR and DORIS and combines them through local tie measurements. JTRF2014 is, in essence, a time-series-based TRF. In the JTRF2014 the dynamical evolution of the station positions is formulated by introducing linear and seasonal terms (annual and semi-annual periodic modes). Non-secular and non-seasonal motions of the geodetic sites are included in the smoothed time series by properly defining the station position process noise, whose variance is characterized by analyzing station displacements induced by temporal changes of planetary fluid masses (atmosphere, oceans and continental surface water). With its station position time series output at a weekly resolution, JTRF2014 materializes a sub-secular frame whose origin is at the quasi-instantaneous Center of Mass (CM) as sensed by SLR. Both SLR and VLBI contribute to the scale of the combined frame. The sub-secular nature of the frame allows users to directly access the quasi-instantaneous geocenter and scale information. Unlike standard combined TRF products, which only give access to the secular component of the CM-CN motions, JTRF2014 is able to preserve, in addition to the long-term component, the seasonal, non-seasonal and non-secular components of the geocenter motion. In the JTRF2014 assimilation scheme, local tie measurements are used to transfer the geocenter information from SLR to the space-geodetic techniques which are either insensitive to CM (VLBI) or whose geocenter motion is poorly determined (GNSS and DORIS). Properly tied to the CM frame through local ties and co-motion constraints, GNSS, VLBI and DORIS contribute to improving the SLR network geometry. 
In this paper, the determination of the weekly (CM-CN) time series as inferred from the JTRF2014 combination will be presented. Comparisons with geocenter time series derived from global inversions of GPS, GRACE and ocean bottom pressure models show that the JTRF2014-derived geocenter compares favourably with the results of the inversion.

  13. Motion compensated shape error concealment.

    PubMed

    Schuster, Guido M; Katsaggelos, Aggelos K

    2006-02-01

    The introduction of Video Objects (VOs) is one of the innovations of MPEG-4. The alpha-plane of a VO defines its shape at a given instance in time and hence determines the boundary of its texture. In packet-based networks, shape, motion, and texture are subject to loss. While there has been considerable attention paid to the concealment of texture and motion errors, little has been done in the field of shape error concealment. In this paper we propose a post-processing shape error concealment technique that uses the motion compensated boundary information of the previously received alpha-plane. The proposed approach is based on matching received boundary segments in the current frame to the boundary in the previous frame. This matching is achieved by finding a maximally smooth motion vector field. After the current boundary segments are matched to the previous boundary, the missing boundary pieces are reconstructed by motion compensation. Experimental results demonstrating the performance of the proposed motion compensated shape error concealment method, and comparing it with the previously proposed weighted side matching method are presented.

  14. Frame sequences analysis technique of linear objects movement

    NASA Astrophysics Data System (ADS)

    Oshchepkova, V. Y.; Berg, I. A.; Shchepkin, D. V.; Kopylova, G. V.

    2017-12-01

    Obtaining data by noninvasive methods is often needed in many fields of science and engineering. This is achieved through video recording at various frame rates and in various light spectra. In doing so, quantitative analysis of the movement of the objects being studied becomes an important component of the research. This work discusses analysis of the motion of linear objects on the two-dimensional plane. The complexity of this problem increases when the frame contains numerous objects whose images may overlap. This study uses a sequence containing 30 frames at a resolution of 62 × 62 pixels and a frame rate of 2 Hz. It was required to determine the average velocity of the objects' motion. This velocity was found as an average over 8-12 objects, with an error of 15%. After processing, dependencies of the average velocity on the control parameters were found. The processing was performed in the software environment GMimPro, with subsequent approximation of the data obtained using the Hill equation.
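The average-velocity computation described above (frame-to-frame displacements of tracked objects at a 2 Hz frame rate) can be sketched as follows; the function name and pixel-scale handling are assumptions:

```python
import numpy as np

def mean_speed(positions, frame_rate_hz=2.0, pixel_size=1.0):
    """Mean frame-to-frame speed of one tracked object.
    positions: (n_frames, 2) centroid track in pixels; returns speed in
    (pixel_size units) per second at the given frame rate."""
    steps = np.diff(positions, axis=0)
    dists = np.hypot(steps[:, 0], steps[:, 1]) * pixel_size
    return float(np.mean(dists) * frame_rate_hz)
```

Averaging this quantity over the 8-12 tracked objects per condition gives the reported average velocity for each set of control parameters.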

  15. Denoising time-resolved microscopy image sequences with singular value thresholding.

    PubMed

    Furnival, Tom; Leary, Rowan K; Midgley, Paul A

    2017-07-01

    Time-resolved imaging in microscopy is important for the direct observation of a range of dynamic processes in both the physical and life sciences. However, the image sequences are often corrupted by noise, either as a result of high frame rates or a need to limit the radiation dose received by the sample. Here we exploit both spatial and temporal correlations using low-rank matrix recovery methods to denoise microscopy image sequences. We also make use of an unbiased risk estimator to address the issue of how much thresholding to apply in a robust and automated manner. The performance of the technique is demonstrated using simulated image sequences, as well as experimental scanning transmission electron microscopy data, where surface adatom motion and nanoparticle structural dynamics are recovered at rates of up to 32 frames per second. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
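The core low-rank step can be sketched as follows: stack the frames into a pixels-by-time (Casorati) matrix and soft-threshold its singular values. The fixed threshold below is a manual stand-in for the unbiased risk estimator the paper uses to automate the choice:

```python
import numpy as np

def svt_denoise(frames, tau):
    """Denoise an image sequence by singular value thresholding.
    Stacks frames as columns of a (pixels x time) matrix, soft-thresholds
    the singular values by tau, and reshapes back to frames; exploits
    spatiotemporal correlation via low-rank recovery."""
    X = np.stack([f.ravel() for f in frames], axis=1)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)                  # soft thresholding
    Y = (U * s) @ Vt
    return [Y[:, i].reshape(frames[0].shape) for i in range(len(frames))]
```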

  16. Object Tracking and Target Reacquisition Based on 3-D Range Data for Moving Vehicles

    PubMed Central

    Lee, Jehoon; Lankton, Shawn; Tannenbaum, Allen

    2013-01-01

    In this paper, we propose an approach for tracking an object of interest based on 3-D range data. We employ particle filtering and active contours to simultaneously estimate the global motion of the object and its local deformations. The proposed algorithm takes advantage of range information to deal with the challenging (but common) situation in which the tracked object disappears from the image domain entirely and reappears later. To cope with this problem, a method based on principal component analysis (PCA) of shape information is proposed. In the proposed method, if the target disappears from the frame, shape similarity energy is used to detect target candidates that match a template shape learned online from previously observed frames. Thus, we require no a priori knowledge of the target’s shape. Experimental results show the practical applicability and robustness of the proposed algorithm in realistic tracking scenarios. PMID:21486717

  17. User-assisted video segmentation system for visual communication

    NASA Astrophysics Data System (ADS)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility of the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, plus a point insertion process to provide the feature points for the next frame's tracking.

  18. ISS Squat and Deadlift Kinematics on the Advanced Resistive Exercise Device

    NASA Technical Reports Server (NTRS)

    Newby, N.; Caldwell, E.; Sibonga, J.; Ploutz-Snyder, L.

    2014-01-01

    Visual assessment of exercise form on the Advanced Resistive Exercise Device (ARED) on orbit is difficult due to the motion of the entire device on its Vibration Isolation System (VIS). The VIS allows for two degrees of device translational motion, and one degree of rotational motion. In order to minimize the forces that the VIS must damp in these planes of motion, the floor of the ARED moves as well during exercise to reduce changes in the center of mass of the system. To help trainers and other exercise personnel better assess squat and deadlift form, a tool was developed that removes the VIS motion and creates a stick figure video of the exerciser. Another goal of the study was to determine whether any useful kinematic information could be obtained from just a single camera. Finally, the use of these data may aid in the interpretation of QCT hip structure data in response to ARED exercises performed in-flight. After obtaining informed consent, four International Space Station (ISS) crewmembers participated in this investigation. Exercise was videotaped using a single camera positioned to view the side of the crewmember during exercise on the ARED. One crewmember wore reflective tape on the toe, heel, ankle, knee, hip, and shoulder joints. This technique was not available for the other three crewmembers, so joint locations were assessed and digitized frame-by-frame by lab personnel. A custom Matlab program was used to assign two-dimensional coordinates to the joint locations throughout exercise. A second custom Matlab program was used to scale the data, calculate joint angles, estimate the foot center of pressure (COP), approximate normal and shear loads, and create the VIS motion-corrected stick figure videos. Kinematics for the squat and deadlift vary considerably for the four crewmembers in this investigation. Some have very shallow knee and hip angles, and others have quite large ranges of motion at these joints. 
Joint angle analysis showed that crewmembers do not return to a normal upright stance during the squat, but remain somewhat bent at the hips. COP excursions were quite large during these exercises, covering the entire length of the base of support in most cases. Anterior-posterior shear was very pronounced at the bottom of the squat and deadlift, correlating with a COP shift to the toes at this part of the exercise. The stick figure videos, showing a feet-fixed reference frame, have made it visually much easier for exercise personnel and trainers to assess exercise kinematics. Not returning to a fully upright, hips-extended position during squat exercises could have implications for the amount of load that is transmitted axially along the skeleton. The estimated shear loads observed in these crewmembers, along with a concomitant reduction in normal force, may also affect bone loading. The increased shear is likely due to the surprisingly large deviations in COP. Since the footplate on ARED moves along an arced path, much of the squat and deadlift movement occurs on a tilted foot surface. This leads to COP movements away from the heel. The combination of observed kinematics and estimated kinetics makes squat and deadlift exercises on the ARED distinctly different from their ground-based counterparts. Conclusion: This investigation showed that some useful exercise information can be obtained at low cost, using a single video camera that is readily available on ISS. Squat and deadlift kinematics on the ISS ARED differ from ground-based ARED exercise. The amount of COP shift during these exercises sometimes approaches the limit of stability, leading to modifications in the kinematics. The COP movement and altered kinematics likely reduce the bone loading experienced during these exercises. Further, the stick figure videos may prove to be a useful tool in assisting trainers to identify exercise form and make suggestions for improvements.

  19. Stochastic filtering for damage identification through nonlinear structural finite element model updating

    NASA Astrophysics Data System (ADS)

    Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.

    2015-03-01

    This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be directly used for damage identification and further used for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square errors of the recorded responses are compared. In addition, the results suggest that the convergence of the estimates of the modeling parameters is smoother and faster when the UKF is utilized.
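
    As a much-simplified illustration of the filtering idea, the sketch below uses a scalar extended Kalman filter to recover one unknown time-invariant parameter of a nonlinear measurement model from noisy input-output data. The tanh response model and all numbers are stand-ins for the fiber-section FE formulation, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 1.5                       # "true" material parameter (hypothetical)
u = rng.uniform(-2, 2, 400)            # recorded input sequence
y = np.tanh(theta_true * u) + 0.01 * rng.standard_normal(u.size)  # recorded response

theta, P = 0.5, 1.0                    # initial estimate and covariance
Q, R = 1e-8, 0.01 ** 2                 # parameter random-walk and measurement noise
for ut, yt in zip(u, y):
    P += Q                             # time update: time-invariant parameter as a random walk
    h = np.tanh(theta * ut)            # predicted response
    H = ut * (1.0 - h ** 2)            # Jacobian dh/dtheta (EKF linearization)
    K = P * H / (H * P * H + R)        # Kalman gain
    theta += K * (yt - h)              # measurement update
    P *= (1.0 - K * H)
```

    The UKF replaces the Jacobian linearization with sigma-point propagation, which is what gives it the smoother convergence reported above.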

  20. A motion-tolerant approach for monitoring SpO2 and heart rate using photoplethysmography signal with dual frame length processing and multi-classifier fusion.

    PubMed

    Fan, Feiyi; Yan, Yuepeng; Tang, Yongzhong; Zhang, Hao

    2017-12-01

    Monitoring pulse oxygen saturation (SpO2) and heart rate (HR) using a photoplethysmography (PPG) signal contaminated by a motion artifact (MA) remains a difficult problem, especially when the oximeter is not equipped with a 3-axis accelerometer for adaptive noise cancellation. In this paper, we report a pioneering investigation on the impact of altering the frame length of Molgedey and Schuster independent component analysis (ICAMS) on performance, design a multi-classifier fusion strategy for selecting the PPG-correlated signal component, and propose a novel approach to extract SpO2 and HR readings from PPG signals contaminated by strong MA interference. The algorithm comprises multiple stages, including dual frame length ICAMS, a multi-classifier-based PPG correlated component selector, line spectral analysis, tree-based HR monitoring, and post-processing. Our approach is evaluated by multi-subject tests. The root mean square error (RMSE) is calculated for each trial. Three statistical metrics are selected as performance evaluation criteria: mean RMSE, median RMSE and the standard deviation (SD) of RMSE. The experimental results demonstrate that a shorter ICAMS analysis window probably results in better performance in SpO2 estimation. Notably, the designed multi-classifier signal component selector achieved satisfactory performance. The subject tests indicate that our algorithm outperforms other baseline methods regarding accuracy under most criteria. The proposed work can contribute to improving the performance of current pulse oximetry and personal wearable monitoring devices. Copyright © 2017 Elsevier Ltd. All rights reserved.
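
    Molgedey-Schuster ICA separates sources by jointly diagonalizing the zero-lag and a time-lagged covariance matrix; the analysis frame length sets how much data enters those covariance estimates. A toy numpy sketch on synthetic pulse and artifact components (the signals, mixing matrix, and lag are illustrative only, not the paper's configuration):

```python
import numpy as np

t = np.arange(2000) / 100.0                  # 20 s at 100 Hz
s1 = np.sin(2 * np.pi * 1.2 * t)             # pulse-like component (~72 bpm)
s2 = np.sign(np.sin(2 * np.pi * 0.37 * t))   # slow motion-artifact component
S = np.vstack([s1, s2])
X = np.array([[1.0, 0.6], [0.5, 1.0]]) @ S   # two mixed "sensor" channels

def icams(X, tau=25):
    """Molgedey-Schuster separation: whiten with the zero-lag covariance,
    then eigendecompose the symmetrized lag-tau covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    C0 = X @ X.T / X.shape[1]
    Xl, Xr = X[:, :-tau], X[:, tau:]
    Ct = (Xl @ Xr.T + Xr @ Xl.T) / (2 * Xl.shape[1])
    d, E = np.linalg.eigh(C0)
    W = E @ np.diag(d ** -0.5) @ E.T         # whitening matrix
    _, V = np.linalg.eigh(W @ Ct @ W.T)
    return V.T @ W @ X                       # estimated sources (order/sign arbitrary)

Y = icams(X)
```

    Separation works here because the two sources have distinct lag-tau autocorrelations; the component selector described above would then pick which recovered component is PPG-correlated.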

  1. Automated Selection Of Pictures In Sequences

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.; Shelton, Robert O.

    1995-01-01

    Method of automated selection of film or video motion-picture frames for storage or examination developed. Beneficial in situations in which quantity of visual information available exceeds amount stored or examined by humans in reasonable amount of time, and/or necessary to reduce large number of motion-picture frames to few conveying significantly different information in manner intermediate between movie and comic book or storyboard. For example, computerized vision system monitoring industrial process programmed to sound alarm when changes in scene exceed normal limits.
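
    A minimal version of such change-based frame selection can be sketched as a running difference against the last kept frame (the threshold and the synthetic sequence are arbitrary stand-ins):

```python
import numpy as np

def select_keyframes(frames, threshold=10.0):
    """Keep a frame only when its mean absolute difference from the last
    kept frame exceeds a change threshold, a simple stand-in for the
    scene-change measure described above."""
    kept = [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[kept[-1]].astype(float)).mean()
        if diff > threshold:
            kept.append(i)
    return kept

# Synthetic sequence: static scene, a sudden change at frame 5, static again
frames = [np.zeros((8, 8), np.uint8)] * 5 + [np.full((8, 8), 200, np.uint8)] * 5
keep = select_keyframes(frames)  # only the first frame and the change survive
```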

  2. Experimental Evaluation of the High-Speed Motion Vector Measurement by Combining Synthetic Aperture Array Processing with Constrained Least Square Method

    NASA Astrophysics Data System (ADS)

    Yokoyama, Ryouta; Yagi, Shin-ichi; Tamura, Kiyoshi; Sato, Masakazu

    2009-07-01

    Ultrahigh-speed dynamic elastography has promising potential in the clinical diagnosis and therapy of living soft tissues. In order to realize ultrahigh-speed motion tracking at over a thousand frames per second, synthetic aperture (SA) array signal processing technology must be introduced. Furthermore, the overall system must stand up to fine quantitative evaluation of the accuracy and variance of echo phase changes distributed across a tissue medium. Local phase changes caused by pulsed excitation of a tissue phantom were evaluated spatially with the proposed SA system, utilizing different virtual point sources generated by an array transducer to probe each component of the local tissue displacement vectors. The final results derived from the cross-correlation method (CCM) showed almost the same performance as the constrained least square method (LSM) extended to successive echo frames. These frames were reconstructed by SA processing after real-time acquisition triggered by the pulsed irradiation from a point source. The continuous behavior of the spatial motion vectors demonstrated the dynamic generation and travel of the pulsed shear wave, imaged at one thousand frames per second.
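
    The cross-correlation method (CCM) compared above estimates inter-frame displacement as the lag that maximizes the correlation between echo signals. A 1-D, integer-lag numpy sketch on a synthetic RF line (real implementations add sub-sample interpolation and 2-D search kernels):

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_shift = 256, 7
ref = rng.standard_normal(n)            # reference echo line
cur = np.roll(ref, true_shift)          # echo line after tissue motion

# Correlate a central window of the reference against shifted windows
lags = np.arange(-20, 21)
cc = [np.dot(ref[20:n - 20], cur[20 + k:n - 20 + k]) for k in lags]
est_shift = int(lags[np.argmax(cc)])    # lag of the correlation peak
```

    Fitting a parabola through the peak and its neighbors would give the sub-sample precision needed for the fine phase-change evaluation described above.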

  3. A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.

    PubMed

    Ratzlaff, Michael; Nawrot, Mark

    2016-09-01

    The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived as nearer in depth, while visual motion in the direction opposite to the pursuit is perceived as farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing this allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions. © The Author(s) 2016.

  4. Development of ultrasound/endoscopy PACS (picture archiving and communication system) and investigation of compression method for cine images

    NASA Astrophysics Data System (ADS)

    Osada, Masakazu; Tsukui, Hideki

    2002-09-01

    Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging because they produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 in resolution, with 24-bit color, and because the images must retain enough quality for clinical review. We have developed a PACS which is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and investigated a suitable compression method and compression rate for clinical image review. Results show that clinicians require the capability for frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in a single frame. To satisfy this requirement, we chose Motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between the original and 1:20 lossy compressed JPEG images, although the quality was still acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data amount and cost while maintaining quality for clinical review.
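
    The storage burden behind these numbers is easy to verify; the arithmetic below uses the stated resolution, color depth, and frame rate:

```python
# Raw data rate of a 640 x 480, 24-bit, 30 frame/s cine stream
width, height, bytes_per_pixel, fps = 640, 480, 3, 30
raw_bytes_per_sec = width * height * bytes_per_pixel * fps   # 27,648,000 B/s
raw_mb_per_sec = raw_bytes_per_sec / 1e6                     # ~27.6 MB/s

# Data rates after the acceptable Motion JPEG compression ratios
rate_1_10 = raw_mb_per_sec / 10   # ~2.76 MB/s at 1:10
rate_1_20 = raw_mb_per_sec / 20   # ~1.38 MB/s at 1:20
```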

  5. Time-of-flight depth image enhancement using variable integration time

    NASA Astrophysics Data System (ADS)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

    Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate a distance based on a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times, exploiting an image fusion scheme originally proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows that the proposed method can effectively remove the motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
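
    The joint-histogram calibration step can be sketched in a few lines: bin the short-integration depths, take the conditional mean of the long-integration depths in each bin, and interpolate the resulting transfer function. The linear bias and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000
d_long = rng.uniform(0.5, 3.0, n)        # depths from the long-integration frame (m)
# Hypothetical bias of the short-integration frame: gain, offset, extra noise
d_short = 1.02 * d_long + 0.05 + 0.01 * rng.standard_normal(n)

# Transfer function short -> long from the joint histogram (conditional means)
bins = np.linspace(0.5, 3.2, 28)
idx = np.digitize(d_short, bins)
centers = 0.5 * (bins[:-1] + bins[1:])
transfer = np.array([d_long[idx == i].mean() for i in range(1, len(bins))])

# Map short-integration depths onto the long-integration depth scale
corrected = np.interp(d_short, centers, transfer)
```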

  6. Vertical Crustal Motion Derived from Satellite Altimetry and Tide Gauges, and Comparisons with DORIS Measurements

    NASA Technical Reports Server (NTRS)

    Ray, R. D.; Beckley, B. D.; Lemoine, F. G.

    2010-01-01

    A somewhat unorthodox method for determining vertical crustal motion at a tide-gauge location is to difference the sea level time series with an equivalent time series determined from satellite altimetry. To the extent that both instruments measure an identical ocean signal, the difference will be dominated by vertical land motion at the gauge. We revisit this technique by analyzing sea level signals at 28 tide gauges that are colocated with DORIS geodetic stations. Comparisons of altimeter-gauge vertical rates with DORIS rates yield a median difference of 1.8 mm/yr and a weighted root-mean-square difference of 2.7 mm/yr. The latter suggests that our uncertainty estimates, which are primarily based on an assumed AR(1) noise process in all time series, underestimate the true errors. Several sources of additional error are discussed, including possible scale errors in the terrestrial reference frame, to which altimeter-gauge rates are mostly insensitive. One of our stations, Malé, Maldives, which has been the subject of some uninformed arguments about sea-level rise, is found to have almost no vertical motion, and thus is vulnerable to rising sea levels. Published by Elsevier Ltd. on behalf of COSPAR.
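
    The differencing idea can be sketched directly: the ocean signal common to both instruments cancels, and the residual trend is the vertical land motion. All series and rates below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0, 15, 1 / 12.0)          # 15 years of monthly samples (years)
ocean = 10 * np.sin(2 * np.pi * t)      # ocean signal seen by both instruments (mm)
rise = 3.0 * t                          # absolute sea-level rise (mm)
vlm_rate = -2.0                         # hypothetical land subsidence rate (mm/yr)

alt = ocean + rise + rng.standard_normal(t.size)                   # altimeter: geocentric sea level
gauge = ocean + rise - vlm_rate * t + rng.standard_normal(t.size)  # gauge: sea level relative to land
diff = alt - gauge                      # ocean signal cancels; land motion remains
rate = np.polyfit(t, diff, 1)[0]        # estimated vertical land motion (mm/yr)
```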

  7. Retrospective data-driven respiratory gating for PET/CT

    NASA Astrophysics Data System (ADS)

    Schleyer, Paul J.; O'Doherty, Michael J.; Barrington, Sally F.; Marsden, Paul K.

    2009-04-01

    Respiratory motion can adversely affect both PET and CT acquisitions. Respiratory gating allows an acquisition to be divided into a series of motion-reduced bins according to the respiratory signal, which is typically hardware acquired. In order that the effects of motion can potentially be corrected for, we have developed a novel, automatic, data-driven gating method which retrospectively derives the respiratory signal from the acquired PET and CT data. PET data are acquired in listmode and analysed in sinogram space, and CT data are acquired in cine mode and analysed in image space. Spectral analysis is used to identify regions within the CT and PET data which are subject to respiratory motion, and the variation of counts within these regions is used to estimate the respiratory signal. Amplitude binning is then used to create motion-reduced PET and CT frames. The method was demonstrated with four patient datasets acquired on a 4-slice PET/CT system. To assess the accuracy of the data-derived respiratory signal, a hardware-based signal was acquired for comparison. Data-driven gating was successfully performed on PET and CT datasets for all four patients. Gated images demonstrated respiratory motion throughout the bin sequences for all PET and CT series, and image analysis and direct comparison of the traces derived from the data-driven method with the hardware-acquired traces indicated accurate recovery of the respiratory signal.
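
    The two core steps, spectral identification of the respiratory signal and amplitude binning, can be sketched on a synthetic count series (the sampling rate, breathing rate, and bin count are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 4.0                                # sampling rate of the count series (Hz)
t = np.arange(0, 120, 1 / fs)           # 2 minutes
resp = np.sin(2 * np.pi * 0.25 * t)     # ~15 breaths/min respiratory signal
counts = 1000 + 80 * resp + 5 * rng.standard_normal(t.size)  # counts in a moving region

# Spectral analysis: find the dominant frequency in a plausible respiratory band
f = np.fft.rfftfreq(t.size, 1 / fs)
spec = np.abs(np.fft.rfft(counts - counts.mean()))
band = (f > 0.1) & (f < 0.5)
f_resp = f[band][np.argmax(spec[band])]

# Amplitude binning: divide time samples into equal-count amplitude bins
n_bins = 4
edges = np.quantile(counts, np.linspace(0, 1, n_bins + 1))
bin_id = np.digitize(counts, edges[1:-1])   # bin index 0..3 per time sample
```

    Each bin then gathers the list-mode events whose respiratory amplitude falls in its range, yielding the motion-reduced frames.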

  8. The effect of geocenter motion on Jason-2 orbits and the mean sea level

    NASA Astrophysics Data System (ADS)

    Melachroinos, S. A.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Bordyugov, O.

    2013-04-01

    We compute a series of Jason-2 GPS and SLR/DORIS-based orbits using ITRF2005 and the std0905 standards (Lemoine et al., 2010). Our GPS and SLR/DORIS orbit data sets span a period of 2 years from cycle 3 (July 2008) to cycle 74 (July 2010). We extract the Jason-2 orbit frame translational parameters per cycle by means of a Helmert transformation between a set of reference orbits and a set of test orbits. We compare the annual terms of these time-series to the annual terms of two different geocenter motion models where biases and trends have been removed. Subsequently, we include the annual terms of the modeled geocenter motion as a degree-1 loading displacement correction to the GPS and SLR/DORIS tracking network of the POD process. Although the annual geocenter motion correction would reflect a stationary signal in time under ideal conditions, the whole geocenter motion is a non-stationary process that includes secular trends. Our results suggest that our GSFC Jason-2 GPS-based orbits are closely tied to the center of mass (CM) of the Earth consistent with our current force modeling, whereas GSFC's SLR/DORIS-based orbits are tied to the origin of ITRF2005, which is the center of figure (CF) for sub-secular scales. We quantify the GPS and SLR/DORIS orbit centering and how this impacts the orbit radial error over the globe, which is assimilated into mean sea level (MSL) error, from the omission of the annual term of the geocenter correction. We find that for the SLR/DORIS std0905 orbits, currently used by the oceanographic community, neglect of the annual term of the geocenter motion correction alone results in a -4.67 ± 3.40 mm error in the Z-component of the orbit frame, which creates 1.06 ± 2.66 mm of systematic error in the MSL estimates, mainly due to the uneven distribution of the oceans between the Northern and Southern hemispheres.
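
    The translational part of the Helmert transformation used above has a particularly simple least-squares solution. A translation-only numpy sketch (a full 7-parameter Helmert also solves for rotation and scale; positions, noise, and the translation vector are invented, with the Z value chosen to echo the -4.67 mm figure):

```python
import numpy as np

rng = np.random.default_rng(6)
ref_pos = rng.uniform(-7e6, 7e6, (200, 3))        # reference orbit positions (m)
true_t = np.array([0.0012, -0.0005, -0.00467])    # hypothetical frame translation (m)
test_pos = ref_pos + true_t + 0.001 * rng.standard_normal((200, 3))  # test orbit, 1 mm noise

# For a translation-only Helmert transformation, the least-squares
# estimate of the translation is the mean coordinate difference:
t_hat = (test_pos - ref_pos).mean(axis=0)
```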

  9. The Effect of Geocenter Motion on Jason-2 Orbits and the Mean Sea Level

    NASA Technical Reports Server (NTRS)

    Melachroinos, S. A.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Bordyugov, O.

    2012-01-01

    We compute a series of Jason-2 GPS and SLR/DORIS-based orbits using ITRF2005 and the std0905 standards (Lemoine et al. 2010). Our GPS and SLR/DORIS orbit data sets span a period of 2 years from cycle 3 (July 2008) to cycle 74 (July 2010). We extract the Jason-2 orbit frame translational parameters per cycle by means of a Helmert transformation between a set of reference orbits and a set of test orbits. We compare the annual terms of these time-series to the annual terms of two different geocenter motion models where biases and trends have been removed. Subsequently, we include the annual terms of the modeled geocenter motion as a degree-1 loading displacement correction to the GPS and SLR/DORIS tracking network of the POD process. Although the annual geocenter motion correction would reflect a stationary signal in time under ideal conditions, the whole geocenter motion is a non-stationary process that includes secular trends. Our results suggest that our GSFC Jason-2 GPS-based orbits are closely tied to the center of mass (CM) of the Earth consistent with our current force modeling, whereas GSFC's SLR/DORIS-based orbits are tied to the origin of ITRF2005, which is the center of figure (CF) for sub-secular scales. We quantify the GPS and SLR/DORIS orbit centering and how this impacts the orbit radial error over the globe, which is assimilated into mean sea level (MSL) error, from the omission of the annual term of the geocenter correction. We find that for the SLR/DORIS std0905 orbits, currently used by the oceanographic community, neglect of the annual term of the geocenter motion correction alone results in a 4.67 plus or minus 3.40 mm error in the Z-component of the orbit frame, which creates 1.06 plus or minus 2.66 mm of systematic error in the MSL estimates, mainly due to the uneven distribution of the oceans between the Northern and Southern hemispheres.

  10. High-Speed Videography Overview

    NASA Astrophysics Data System (ADS)

    Miller, C. E.

    1989-02-01

    The field of high-speed videography (HSV) has continued to mature in recent years, due to the introduction of a mixture of new technology and extensions of existing technology. Recent low frame-rate innovations have the potential to dramatically expand the areas of information gathering and motion analysis at all frame rates. Progress at the zero frame-rate is bringing the battle of film versus video to the field of still photography. The pressure to push intermediate frame rates higher continues, although the maximum achievable frame rate has remained stable for several years. Higher maximum recording rates appear technologically practical, but economic factors impose severe limitations on development. The application of diverse photographic techniques to video-based systems is under-exploited. The basics of HSV apply to other fields, such as machine vision and robotics. Present motion analysis systems continue to function mainly as an instant-replay replacement for high-speed movie film cameras. The interrelationship among lighting, shuttering, and spatial resolution is examined.

  11. Methods for Expanding Rotary Wing Aircraft Health and Usage Monitoring Systems to the Rotating Frame through Real-time Rotor Blade Kinematics Estimation

    NASA Astrophysics Data System (ADS)

    Allred, Charles Jefferson

    Since the advent of Health and Usage Monitoring Systems (HUMS) in the early 1990s, there has been a steady decrease in the number of component-failure-related helicopter accidents. Additionally, measurable cost benefits due to improved maintenance practices based on HUMS data have led to a desire to expand HUMS from its traditional area of helicopter drive train monitoring. One of the areas of greatest interest for this expansion of HUMS is monitoring of the helicopter rotor head loads. Studies of rotor head loads and blade motions have primarily focused on wind tunnel testing with technology which would not be applicable for production helicopter HUMS deployment, or on measuring bending along the blade rather than where it is attached to the rotor head, the location through which all the helicopter loads pass. This dissertation details research into real-time methods of estimating rotor blade motion which could be applied across helicopter fleets as an expansion of current HUMS technology. First, there is a brief exploration of supporting technologies which will be crucial in enabling the expansion of HUMS from the fuselage of helicopters to the rotor head: wireless data transmission and energy harvesting. A brief overview of the commercially available low-power wireless technology selected for this research is presented. The development of a relatively high-powered energy harvester specific to the motion of helicopter rotor blades is presented and two different prototypes of the device are shown. Following the overview of supporting technologies, two novel methods of monitoring rotor blade motion in real time are developed. The first method employs linear displacement sensors embedded in the elastomer layers of a high-capacity laminate bearing of the type commonly used in fully articulated rotors throughout the helicopter industry. 
The configuration of these displacement sensors allows modeling of the sensing system as a robotic parallel mechanism, similar to a Stewart Platform. A calibration method for this device is developed and the improved orientation estimation results are shown. The second method is not specific to the fully articulated rotor head mounting geometry of the first method. Rather, it utilizes micro-electromechanical (MEMS) accelerometers and gyroscopes configured to measure the centrifugal acceleration and rotation rate induced through rotor head rotation differentially. By measuring these quantities differentially, other accelerations from the fuselage reference frame are removed from the measurement, resulting in acceleration and rate quantities that are impacted only by the angle of the sensors relative to the plane of rotation. By mounting these sensors strategically and symmetrically about the rotor blade root center of rotation, the orientation of the rotor blade can be estimated in real time.

  12. Estimating pixel variances in the scenes of staring sensors

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM

    2012-01-24

    A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
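
    The patented idea can be illustrated on a toy scene: compare the raw difference frame against a per-pixel error bound derived from spatial intensity gradients, so that sub-pixel camera jitter is not mistaken for real change. Everything below is a synthetic stand-in for the actual sensor data:

```python
import numpy as np

rng = np.random.default_rng(7)
scene = np.add.outer(np.arange(32.0), np.arange(32.0))  # smooth synthetic scene
ref = scene + 0.1 * rng.standard_normal(scene.shape)    # reference image frame
cur = np.roll(scene, 1, axis=0) + 0.1 * rng.standard_normal(scene.shape)  # current frame, 1-pixel jitter

raw_diff = cur - ref
# Spatial error estimate: a 1-pixel jitter can change a pixel's value by
# roughly the local intensity gradient magnitude
gy, gx = np.gradient(ref)
spatial_err = np.hypot(gx, gy)
# Flag only changes that jitter alone cannot explain
significant = np.abs(raw_diff) > 3 * spatial_err
```

    In this toy example only the first row, which np.roll wraps around and thereby genuinely changes, exceeds the bound; the jitter-induced differences elsewhere fall below it.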

  13. Actuation of an Inertia-Coupled Rimless Wheel Model across Level Ground

    NASA Astrophysics Data System (ADS)

    Weeks, Seth Caleb

    The inertia-coupled rimless wheel model is a passive dynamic walking device which is theoretically capable of achieving highly efficient motion with no energy losses. Under non-ideal circumstances, energy losses due to air drag require the use of actuation to maintain stable motions. The Actuated Inertia-coupled Rimless Wheel Across Flat Terrain (AIRWAFT) model actuates an inertia-coupled rimless wheel model across level ground to compensate for energy losses by applying hip torque between the frame and inertia wheel via a motor. Two methods of defining the open-loop actuation are presented. Position control prescribes the position of the drum relative to the frame. Torque control specifies the amount of torque between the frame and the drum. The performance of the model was evaluated with respect to changes in various geometrical and control parameters and initial conditions. This parameter study led to the discovery of a stable, periodic motion with a cost of transport of 0.33.
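
    Cost of transport is the input energy normalized by weight times distance traveled, a dimensionless efficiency measure; a quick check with hypothetical numbers (mass, distance, and energy are invented, not the model's values):

```python
# Cost of transport: COT = E_in / (m * g * d)
m, g, d = 10.0, 9.81, 5.0    # hypothetical mass (kg), gravity (m/s^2), distance (m)
E_in = 161.9                 # hypothetical actuator energy input (J)
cot = E_in / (m * g * d)     # ~0.33, comparable to the reported value
```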

  14. Computer-aided target tracking in motion analysis studies

    NASA Astrophysics Data System (ADS)

    Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.

    1990-08-01

    Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.
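
    A common core of such automated tracking is normalized cross-correlation of a target template against each frame. A brute-force numpy sketch (a real tracker would restrict the search to a window around the last known position; the frame and template are synthetic):

```python
import numpy as np

def track_target(frame, template):
    """Locate the template in the frame by normalized cross-correlation."""
    th, tw = template.shape
    tz = (template - template.mean()) / template.std()
    best_score, best_pos = -np.inf, None
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            sd = patch.std()
            if sd == 0:
                continue                     # flat patch: correlation undefined
            score = (tz * (patch - patch.mean()) / sd).mean()
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Synthetic frame with the target pasted at a known location
rng = np.random.default_rng(8)
frame = rng.standard_normal((40, 40))
template = rng.standard_normal((7, 7))
frame[12:19, 20:27] = template
pos = track_target(frame, template)
```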

  15. Multi-geodetic characterization of the seasonal signal at the CERGA geodetic reference station, France

    NASA Astrophysics Data System (ADS)

    Mémin, Anthony; Viswanathan, Vishnu; Fienga, Agnes; Santamaría-Gómez, Alvaro; Boy, Jean-Paul; Cavalié, Olivier; Deleflie, Florent; Exertier, Pierre; Bernard, Jean-Daniel; Hinderer, Jacques

    2017-04-01

    Crustal deformations due to surface-mass loading account for a significant part of the variability in geodetic time series. A perfect understanding of the loading signal observed by geodetic techniques should help in improving terrestrial reference frame (TRF) realizations. Yet, discrepancies between crustal motion estimates from models of surface-mass loading and observations are still so large that no model is currently recommended by the IERS for reducing the observations. We investigate the discrepancy observed in the seasonal variations of the position at the CERGA station, in the South of France. We characterize the seasonal motions of the reference geodetic station CERGA from GNSS, SLR, LLR and InSAR. We investigate the consistency between the station motions deduced from these geodetic techniques and compare the observed station motion with that estimated using models of surface-mass change. In that regard, we compute atmospheric loading effects using surface pressure fields from ECMWF, assuming an ocean response according to the classical inverted barometer (IB) assumption, considered to be valid for periods typically exceeding a week. We also use general circulation ocean models (ECCO and GLORYS) forced by wind, heat and fresh water fluxes. The continental water storage is described using the GLDAS/Noah and MERRA-land models. Using the surface-mass models, we estimate that the seasonal signal due to loading deformation at the CERGA station is about 8-9, 1-2 and 1-2 mm peak-to-peak in the Up, North and East components, respectively. There is a very good correlation between GPS observations and the non-tidal loading deformation predicted for atmosphere, ocean and hydrology, which is the main driver of the seasonal signal at CERGA. Despite large error bars, LLR observations agree reasonably well with GPS and non-tidal loading predictions in the Up component. Local deformation as observed by InSAR is very well correlated with GPS observations corrected for non-tidal loading. 
Finally, we estimate local mass changes using the absolute gravity measurement campaigns available at the station and the global models of surface-mass change. We compute the induced station motion that we compare with the local deformation observed by InSAR and GPS.
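
    The seasonal (annual) terms quoted above are typically estimated by least-squares fitting an annual sinusoid to each position component. A sketch with a synthetic Up series (the amplitude, noise level, and sampling are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(0, 6, 1 / 52.0)           # 6 years of weekly positions (years)
up = 4.2 * np.sin(2 * np.pi * t + 0.7) + 1.5 * rng.standard_normal(t.size)  # Up (mm)

# Least-squares fit of an annual sinusoid: up ~ a*sin(2*pi*t) + b*cos(2*pi*t) + c
A = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), np.ones_like(t)])
a, b, c = np.linalg.lstsq(A, up, rcond=None)[0]
peak_to_peak = 2 * np.hypot(a, b)       # peak-to-peak seasonal amplitude (mm)
```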

  16. LROC Investigation of Three Strategies for Reducing the Impact of Respiratory Motion on the Detection of Solitary Pulmonary Nodules in SPECT

    NASA Astrophysics Data System (ADS)

    Smyczynski, Mark S.; Gifford, Howard C.; Dey, Joyoni; Lehovich, Andre; McNamara, Joseph E.; Segars, W. Paul; King, Michael A.

    2016-02-01

    The objective of this investigation was to determine the effectiveness of three motion-reducing strategies in diminishing the degrading impact of respiratory motion on the detection of small solitary pulmonary nodules (SPNs) in single-photon emission computed tomographic (SPECT) imaging, in comparison to a standard clinical acquisition and the ideal case of imaging in the absence of respiratory motion. To do this, nonuniform rational B-spline cardiac-torso (NCAT) phantoms based on human-volunteer CT studies were generated spanning the respiratory cycle for a normal background distribution of Tc-99m NeoTect. Similarly, spherical phantoms of 1.0-cm diameter were generated to model small SPNs for each of the 150 uniquely located sites within the lungs, whose respiratory motion was based on the motion of normal structures in the volunteer CT studies. The SIMIND Monte Carlo program was used to produce SPECT projection data from these phantoms. Normal and single-lesion-containing SPECT projection sets with a clinically realistic Poisson noise level were created for the cases of 1) the end-expiration (EE) frame with all counts, 2) respiration-averaged motion with all counts, 3) one fourth of the 32 frames centered around EE (Quarter Binning), 4) one half of the 32 frames centered around EE (Half Binning), and 5) eight temporally binned frames spanning the respiratory cycle. Each of the sets of combined projection data was reconstructed with RBI-EM with system spatial-resolution compensation (RC). Based on the known motion for each of the 150 different lesions, the reconstructed volumes of respiratory bins were shifted so as to superimpose the locations of the SPN onto that in the first bin (Reconstruct and Shift). Five human observers performed localization receiver operating characteristic (LROC) studies of SPN detection. 
The observer results were analyzed for statistically significant differences in SPN detection accuracy among the three correction strategies, the standard acquisition, and the ideal case of the absence of respiratory motion. Our human-observer LROC study determined that the Quarter Binning and Half Binning strategies resulted in SPN detection accuracy statistically significantly below that of the standard clinical acquisition, whereas the Reconstruct and Shift strategy resulted in a detection accuracy not statistically significantly different from that of the ideal case. This investigation demonstrates that tumor detection based on acquisitions using fewer than all of the available counts may result in poorer detection despite limiting the motion of the lesion. The Reconstruct and Shift method results in tumor detection that is equivalent to ideal motion correction.

  17. Automatic generation of endocardial surface meshes with 1-to-1 correspondence from cine-MR images

    NASA Astrophysics Data System (ADS)

    Su, Yi; Teo, S.-K.; Lim, C. W.; Zhong, L.; Tan, R. S.

    2015-03-01

    In this work, we develop an automatic method to generate a set of 4D 1-to-1 corresponding surface meshes of the left ventricle (LV) endocardial surface which are motion registered over the whole cardiac cycle. These 4D meshes have 1-to-1 point correspondence over the entire set, and are suitable for advanced computational processing, such as shape analysis, motion analysis and finite element modelling. The inputs to the method are the set of 3D LV endocardial surface meshes of the different frames/phases of the cardiac cycle. Each of these meshes is reconstructed independently from border-delineated MR images, and they have no correspondence in terms of number of vertices/points and mesh connectivity. To generate point correspondence, the first frame of the LV mesh model is used as a template to be matched to the shape of the meshes in the subsequent phases. There are two stages in the mesh correspondence process: (1) a coarse matching phase, and (2) a fine matching phase. In the coarse matching phase, an initial rough matching between the template and the target is achieved using a radial basis function (RBF) morphing process. The feature points on the template and target meshes are automatically identified using a 16-segment nomenclature of the LV. In the fine matching phase, a progressive mesh projection process is used to conform the rough estimate to fit the exact shape of the target. In addition, an optimization-based smoothing process is used to achieve superior mesh quality and continuous point motion.
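
    The coarse RBF morphing stage can be sketched as scattered-data interpolation of the landmark displacements. A Gaussian kernel on a 2-D toy case stands in for the actual 3-D process (the kernel choice, eps, and landmark layout are assumptions, not the paper's settings):

```python
import numpy as np

def rbf_morph(points, src_landmarks, dst_landmarks, eps=1.0):
    """Warp points with a Gaussian RBF interpolant of the landmark
    displacements, so src landmarks map (almost) exactly onto dst."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    K = kernel(src_landmarks, src_landmarks)
    # Small regularization keeps the solve well-conditioned
    w = np.linalg.solve(K + 1e-9 * np.eye(len(K)), dst_landmarks - src_landmarks)
    return points + kernel(points, src_landmarks) @ w

# 2-D toy case: four landmarks displaced by a known translation
src = np.array([[0.0, 0.0], [1, 0], [0, 1], [1, 1]])
dst = src + np.array([0.2, -0.1])
warped = rbf_morph(np.array([[0.5, 0.5]]), src, dst)  # interior point follows the landmarks
```

    Interior (non-landmark) points move smoothly with the interpolated displacement field, which is the behavior the coarse matching phase relies on before the fine projection step.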

  18. Determination of recent horizontal crustal movements and deformations of African and Eurasian plates in western Mediterranean region using geodetic-GPS computations extended to 2006 (from 1997) related to NAFREF and AFREF frames.

    NASA Astrophysics Data System (ADS)

    Azzouzi, R.

    2009-04-01

    Determination of recent horizontal crustal movements and deformations of African and Eurasian plates in the western Mediterranean region using geodetic GPS computations extended to 2006 (from 1997), related to the NAFREF and AFREF frames. By: R. Azzouzi*, M. Ettarid*, El H. Semlali*, and A. Rimi+ * Filière de Formation en Topographie, Institut Agronomique et Vétérinaire Hassan II, B.P. 6202 Rabat-Instituts, MAROC + Département de la Physique du Globe, Université Mohammed V, Rabat, MAROC. This study focuses on the use of the spatial geodetic technique GPS for geodynamic purposes in the Western Mediterranean area generally and in Morocco in particular. It aims to exploit this technique first to determine geodetic coordinates at selected western Mediterranean sites, and then to detect and quantify movements across the boundary between the African and Eurasian crustal plates at well-chosen GPS-geodynamics sites. It also allows us to estimate the resulting crustal strain parameters. These parameters are linked to deformations of the terrestrial crust in the region and are associated with the tectonic stresses of the study area. The usefulness of repeated measurements of these elements, of the estimation of displacements, and of the determination of their temporal rates is indisputable. Indeed, seismo-tectonic studies provide a good knowledge of earthquake processes, their frequency, their amplitude, and even their prediction, in the world in general and in the Moroccan area especially. They also help guarantee greater security for the most important engineering projects, such as the construction of large works (dams, bridges, nuclear power plants), and serve as preliminary studies for the major joint Europe-Africa project across the Strait of Gibraltar. For our application, 23 GPS monitoring stations in the ITRF2000 reference frame were chosen on the Eurasian and African plates.
The sites are located around the Western Mediterranean and especially in Morocco. Exploiting the positions and dispersions of these stations within the 1997-2003 period, the motion and the types of interaction between the African and Eurasian tectonic plates can be estimated. Similarly, the crustal strain parameters of these sites will be computed. The occupation time at repeated-observation sites is at least 72 hours, and measurements are continuous at permanent stations. Precise ephemerides are used in the GPS computations, and post-processing is done using commercial and scientific software. The coordinates obtained for two consecutive epochs t0 and t within a period of 8 years will be used by programs established for this purpose to estimate crustal strain parameters as well as to evaluate the corresponding movements. Crustal strain parameters will even be determined at each site of the GPS-geodynamics network, which is of great interest for seismic investigations. This will allow a better knowledge of the substantial seismic activity of the surrounding zones, deduced by measuring the motions and their strain parameters using GPS. These estimations will contribute to earthquake prediction by monitoring strain accumulation and release in the active areas. Geodetically, the GPS-geodynamics sites computed in the ITRF frame can be used with similar ones from other African countries and with well-selected IGS and EUREF stations to determine the NAFREF and AFREF frames.

  19. Postglacial rebound from VLBI Geodesy: On Establishing Vertical Reference

    NASA Technical Reports Server (NTRS)

    Argus, Donald

    1996-01-01

    I propose that a useful reference frame for vertical motions is that found by minimizing differences between vertical motions observed with VLBI [Ma and Ryan, 1995] and predictions from postglacial rebound models [Peltier, 1995].

  20. Research opportunities in space motion sickness, phase 2

    NASA Technical Reports Server (NTRS)

    Talbot, J. M.

    1983-01-01

    Space and motion sickness, the current and projected NASA research program, and the conclusions and suggestions of the ad hoc Working Group are summarized. The frame of reference for the report is ground-based research.

  1. Precise Orbital and Geodetic Parameter Estimation using SLR Observations for ILRS AAC

    NASA Astrophysics Data System (ADS)

    Kim, Young-Rok; Park, Eunseo; Oh, Hyungjik Jay; Park, Sang-Young; Lim, Hyung-Chul; Park, Chandeok

    2013-12-01

    In this study, we present results of precise orbital and geodetic parameter estimation using satellite laser ranging (SLR) observations for the International Laser Ranging Service (ILRS) associate analysis center (AAC). Using normal point observations of LAGEOS-1, LAGEOS-2, ETALON-1, and ETALON-2 in the SLR consolidated laser ranging data format, the NASA/GSFC GEODYN II and SOLVE software programs were utilized for precise orbit determination (POD) and for finding solutions of a terrestrial reference frame (TRF) and Earth orientation parameters (EOPs). For POD, a weekly-based orbit determination strategy was employed to process SLR observations taken over 20 weeks in 2013. For the TRF and EOP solutions, a loosely constrained scheme was used to integrate the POD results of the four geodetic SLR satellites. The coordinates of 11 ILRS core sites were determined, and daily polar motion and polar motion rates were estimated. The root mean square (RMS) of the post-fit residuals was used for orbit quality assessment, and both the stability of the TRF and the precision of the EOPs, assessed by external comparison, were analyzed for verification of our solutions. The post-fit residuals show that the orbit RMS values of LAGEOS-1 and LAGEOS-2 are 1.20 and 1.12 cm, and those of ETALON-1 and ETALON-2 are 1.02 and 1.11 cm, respectively. The stability analysis of the TRF shows that the mean 3D stability of the coordinates of the 11 ILRS core sites is 7.0 mm. An external comparison with respect to the International Earth Rotation and Reference Systems Service (IERS) 08 C04 results shows that the standard deviations of polar motion XP and YP are 0.754 milliarcseconds (mas) and 0.576 mas, respectively. Our results of precise orbital and geodetic parameter estimation are reasonable and help advance research at the ILRS AAC.

  2. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    The image block matching algorithm based on motion vectors of correlative pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique which obtains information about the relative motion among frames of dynamic image sequences through digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. These parameters simultaneously contain the information of the vectors in the transverse and vertical directions within the image blocks, so better matching information can be obtained after performing the correlation operation in the oblique direction. An iterative weighted least squares method is used to eliminate block matching error, with weights related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image are obtained by weighted least squares from the estimates of blocks chosen evenly across the image; the shaking image can then be stabilized with the center of rotation and the global motion estimate. The algorithm can also run in real time by applying simulated annealing to the block matching search. An image processing system based on DSP was used to evaluate this algorithm. The core processor in the DSP system is a TI TMS320C6416, and a CCD camera with a definition of 720×576 pixels was chosen as the input video signal source. Experimental results show that the algorithm can be performed on the real-time processing system with accurate matching precision.
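    The fit of a global rotation and translation to per-block motion vectors can be sketched as a small-angle least-squares problem. The paper's method adds iterative weighting by rotational angle and oblique-projection matching; this unweighted core, with illustrative names, is an assumption for clarity only.

```python
import numpy as np

def global_motion(centers, vectors):
    """Estimate a small-angle global rotation (theta, rad) and translation
    (tx, ty) from per-block motion vectors via ordinary least squares.
    Model: vx = -theta*y + tx,  vy = theta*x + ty."""
    x, y = centers[:, 0], centers[:, 1]
    # Stack the two scalar equations per block into one linear system
    A = np.zeros((2 * len(x), 3))
    A[0::2, 0], A[0::2, 1] = -y, 1.0    # vx rows
    A[1::2, 0], A[1::2, 2] = x, 1.0     # vy rows
    b = np.empty(2 * len(x))
    b[0::2], b[1::2] = vectors[:, 0], vectors[:, 1]
    theta, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return theta, (tx, ty)
```

    An iteratively reweighted variant would recompute per-block weights from the residuals of this fit and repeat, which is how outlying block matches get suppressed.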

  3. Methods for motion correction evaluation using 18F-FDG human brain scans on a high-resolution PET scanner.

    PubMed

    Keller, Sune H; Sibomana, Merence; Olesen, Oline V; Svarer, Claus; Holm, Søren; Andersen, Flemming L; Højgaard, Liselotte

    2012-03-01

    Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using misaligned transmission for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning using an optical motion tracking system. Two scans with minor motion and 5 with major motion (as reported by the optical motion tracking system) were selected from (18)F-FDG scans acquired on a PET scanner. The motion was measured as the maximum displacement of the markers attached to the subject's head and was considered to be major if larger than 4 mm and minor if less than 2 mm. After allowing a 40- to 60-min uptake time after tracer injection, we acquired a 6-min transmission scan, followed by a 40-min emission list-mode scan. Each emission list-mode dataset was divided into 8 frames of 5 min. The reconstructed time-framed images were aligned to a selected reference frame using either EMT or the AIR (automated image registration) software. The following 3 QC methods were used to evaluate the EMT and AIR MC: a method using the ratio between 2 regions of interest with gray matter voxels (GM) and white matter voxels (WM), called GM/WM; mutual information; and cross correlation. The results of the 3 QC methods were in agreement with one another and with a visual subjective inspection of the image data. Before MC, the QC method measures varied significantly in scans with major motion and displayed limited variations on scans with minor motion. The variation was significantly reduced and measures improved after MC with AIR, whereas EMT MC performed less well. 
The 3 presented QC methods produced similar results and are useful for evaluating tracer-independent external-tracking motion-correction methods for human brain scans.
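    Two of the three QC measures, cross correlation and mutual information, can be computed from a pair of aligned frames as below. This is a generic sketch of the standard definitions, not the authors' code; the histogram bin count and normalization are assumptions.

```python
import numpy as np

def cross_correlation(a, b):
    """Normalized cross-correlation between two image volumes (1.0 = identical
    up to affine intensity scaling)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in nats) between two images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                      # joint probability
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0                           # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))
```

    Applied before and after motion correction, both measures should increase for each time frame relative to the reference frame when the correction succeeded, which matches how they are used for QC here.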

  4. CT fluoroscopy-guided robotically-assisted lung biopsy

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Fichtinger, Gabor; Taylor, Russell H.; Banovac, Filip; Cleary, Kevin

    2006-03-01

    Lung biopsy is a common interventional radiology procedure. One of the difficulties in performing the lung biopsy is that lesions move with respiration. This paper presents a new robotically assisted lung biopsy system for CT fluoroscopy that can automatically compensate for the respiratory motion during the intervention. The system consists of a needle placement robot to hold the needle on the CT scan plane, a radiolucent Z-frame for registration of the CT and robot coordinate systems, and a frame grabber to obtain the CT fluoroscopy image in real-time. The CT fluoroscopy images are used to noninvasively track the motion of a pulmonary lesion in real-time. The position of the lesion in the images is automatically determined by the image processing software and the motion of the robot is controlled to compensate for the lesion motion. The system was validated under CT fluoroscopy using a respiratory motion simulator. A swine study was also done to show the feasibility of the technique in a respiring animal.

  5. An experimental protocol for the definition of upper limb anatomical frames on children using magneto-inertial sensors.

    PubMed

    Ricci, L; Formica, D; Tamilia, E; Taffoni, F; Sparaci, L; Capirci, O; Guglielmelli, E

    2013-01-01

    Motion capture based on magneto-inertial sensors is a technology enabling data collection in unstructured environments, allowing "out of the lab" motion analysis. This technology is a good candidate for motion analysis of children thanks to its reduced weight and size, as well as the use of wireless communication, which has improved its wearability and reduced its obtrusiveness. A key issue in the application of such technology for motion analysis is its calibration, i.e. a process that allows mapping orientation information from each sensor to a physiological reference frame. To date, even if several calibration procedures are available for adults, no specific calibration procedures have been developed for children. This work addresses this specific issue by presenting a calibration procedure for motion capture of the thorax and upper limbs of healthy children. Reported results suggest performance comparable with similar studies on adults and emphasize some critical issues, opening the way to further improvements.

  6. Real-time motion analytics during brain MRI improve data quality and reduce costs.

    PubMed

    Dosenbach, Nico U F; Koller, Jonathan M; Earl, Eric A; Miranda-Dominguez, Oscar; Klein, Rachel L; Van, Andrew N; Snyder, Abraham Z; Nagel, Bonnie J; Nigg, Joel T; Nguyen, Annie L; Wesevich, Victoria; Greene, Deanna J; Fair, Damien A

    2017-11-01

    Head motion systematically distorts clinical and research MRI data. Motion artifacts have biased findings from many structural and functional brain MRI studies. An effective way to remove motion artifacts is to exclude MRI data frames affected by head motion. However, such post-hoc frame censoring can lead to data loss rates of 50% or more in our pediatric patient cohorts. Hence, many scanner operators collect additional 'buffer data', an expensive practice that, by itself, does not guarantee sufficient high-quality MRI data for a given participant. Therefore, we developed an easy-to-setup, easy-to-use Framewise Integrated Real-time MRI Monitoring (FIRMM) software suite that provides scanner operators with head motion analytics in real-time, allowing them to scan each subject until the desired amount of low-movement data has been collected. Our analyses show that using FIRMM to identify the ideal scan time for each person can reduce total brain MRI scan times and associated costs by 50% or more. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
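    FIRMM's exact motion metric is not given in this abstract; the standard framewise measure for deciding whether an fMRI frame should be censored is framewise displacement (FD, Power et al.-style), computed from the six rigid-body realignment parameters. The sketch below implements that common metric as an assumption, with rotations converted to millimeters of arc on a 50 mm sphere.

```python
import numpy as np

def framewise_displacement(params, radius=50.0):
    """Framewise displacement from per-frame rigid-body parameters:
    columns 0-2 are translations (mm), columns 3-5 rotations (rad).
    Returns one FD value per frame (first frame defined as 0)."""
    p = np.asarray(params, dtype=float).copy()
    p[:, 3:] *= radius                        # radians -> mm of arc length
    d = np.abs(np.diff(p, axis=0))            # frame-to-frame differences
    return np.concatenate([[0.0], d.sum(axis=1)])
```

    Real-time monitoring then reduces to thresholding this series (e.g. counting frames with FD below some cutoff) until enough low-movement data has accumulated; the cutoff value is site-specific and not specified here.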

  7. A refined Frequency Domain Decomposition tool for structural modal monitoring in earthquake engineering

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2017-07-01

    Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing the current modal properties of heavily damped buildings (a known identification challenge) under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context, use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all current strong ground motion modal parameters. At this stage, such an analysis tool may be employed for convenient application in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.
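    The core of a baseline (non-refined) FDD pass is a singular value decomposition of the cross-spectral density matrix at each frequency line; peaks of the first singular value mark candidate natural frequencies, and the corresponding singular vector approximates the mode shape. The sketch below shows only this baseline step, without the paper's Chebyshev filter bank or wavelet coupling, and its parameter choices are assumptions.

```python
import numpy as np
from scipy import signal

def fdd_first_singular_values(responses, fs, nperseg=1024):
    """Baseline FDD: build the cross-spectral density (CSD) matrix G(f) from
    multi-channel responses (shape: channels x samples), then return the
    frequency axis and the first singular value of G at each frequency."""
    n = responses.shape[0]
    f, _ = signal.csd(responses[0], responses[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            _, G[:, i, j] = signal.csd(responses[i], responses[j],
                                       fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
    return f, s1
```

    For heavily damped systems under non-stationary seismic input, these raw spectral peaks blur together, which is precisely what motivates the refined, filtered variant described in the abstract.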

  8. Slow Speed--Fast Motion: Time-Lapse Recordings in Physics Education

    ERIC Educational Resources Information Center

    Vollmer, Michael; Möllmann, Klaus-Peter

    2018-01-01

    Video analysis with a 30 Hz frame rate is the standard tool in physics education. The development of affordable high-speed cameras has extended the capabilities of the tool to much smaller time scales in the 1 ms range, using frame rates of typically up to 1000 frames s⁻¹, allowing us to study transient physics phenomena happening…

  9. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    NASA Astrophysics Data System (ADS)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme for high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. At the same time, however, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether the prediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine whether biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Experimental results showed that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in terms of encoding time, number of function calls, and memory access.

  10. Evaluation of ground motion scaling methods for analysis of structural systems

    USGS Publications Warehouse

    O'Donnell, A. P.; Beltsar, O.A.; Kurama, Y.C.; Kalkan, E.; Taflanidis, A.A.

    2011-01-01

    Ground motion selection and scaling comprises undoubtedly the most important component of any seismic risk assessment study that involves time-history analysis. Ironically, this is also the single parameter with the least guidance provided in current building codes, resulting in the use of mostly subjective choices in design. The relevant research to date has focused primarily on single-degree-of-freedom systems, with only a few studies using multi-degree-of-freedom systems. Furthermore, the previous research is based solely on numerical simulations, with no experimental data available for validation of the results. By contrast, the research effort described in this paper focuses on an experimental evaluation of selected ground motion scaling methods based on small-scale shake-table experiments of re-configurable linear-elastic and nonlinear multi-story building frame structure models. Ultimately, the experimental results will lead to the development of guidelines and procedures to achieve reliable demand estimates from nonlinear response history analysis in seismic design. In this paper, an overview of this research effort is discussed and preliminary results based on linear-elastic dynamic response are presented. © ASCE 2011.

  11. Estimation of coefficient of rolling friction by the evolvent pendulum method

    NASA Astrophysics Data System (ADS)

    Alaci, S.; Ciornei, F. C.; Ciogole, A.; Ciornei, M. C.

    2017-05-01

    The paper presents a method for finding the coefficient of rolling friction using an evolvent pendulum. The pendulum consists of a fixed cylindrical body and a mobile body presenting a plane surface in contact with a cylindrical surface. The mobile body is placed over the fixed one in an equilibrium state; after applying a small impulse, the mobile body oscillates. The motion of the body is video recorded, the movie is then analyzed frame by frame, and the decrease with time of the angular amplitude of the pendulum is found. The equation of motion is established for oscillations of the mobile body. This nonlinear differential equation of motion is integrated by the Runge-Kutta method. By imposing the same damping on the model's solution as observed experimentally, the value of the coefficient of rolling friction is obtained. The last part of the paper presents results for actual pairs of materials. The main advantage of the method is that the dimensions of the contact regions are small, of the order of a few millimeters, which substantially reduces the possibility of variation of the mechanical characteristics over the two surfaces.
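    The numerical step described above, Runge-Kutta integration of a nonlinear equation of motion with a rolling-friction term, can be sketched as follows. The actual evolvent-pendulum equation is not given in this abstract, so the oscillator below, with a constant friction torque opposing the motion (the Coulomb-type behaviour characteristic of rolling friction, producing linear amplitude decay), is an illustrative stand-in; `mu` and `w0` are hypothetical parameters.

```python
import numpy as np

def rk4(f, y0, t):
    """Classical 4th-order Runge-Kutta integration of y' = f(t, y)."""
    y = np.empty((len(t), len(y0)))
    y[0] = y0
    for k in range(len(t) - 1):
        h = t[k + 1] - t[k]
        k1 = f(t[k], y[k])
        k2 = f(t[k] + h / 2, y[k] + h / 2 * k1)
        k3 = f(t[k] + h / 2, y[k] + h / 2 * k2)
        k4 = f(t[k] + h, y[k] + h * k3)
        y[k + 1] = y[k] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

def friction_pendulum(mu, w0=5.0):
    """Oscillator theta'' = -w0^2 sin(theta) - mu*sign(theta'): a constant
    torque opposing the motion, the signature of rolling friction."""
    return lambda t, y: np.array([y[1], -w0**2 * np.sin(y[0]) - mu * np.sign(y[1])])
```

    Fitting would then adjust `mu` until the simulated amplitude decay matches the decay extracted from the video frames.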

  12. JPEG XS-based frame buffer compression inside HEVC for power-aware video compression

    NASA Astrophysics Data System (ADS)

    Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël.; Pellegrin, Pascal; Macq, Benoit

    2017-09-01

    With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms, further reducing the FB's bandwidth while inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (the JPEG XS Test Model). This paper describes the architecture of our HEVC with JPEG XS-based frame buffer compression and compares its performance to the HM encoder. Compared to previous works, our prototype provides significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.

  13. Piezoelectric step-motion actuator

    DOEpatents

    Mentesana, Charles P. [Leawood, KS]

    2006-10-10

    A step-motion actuator using piezoelectric material to launch a flight mass which, in turn, actuates a drive pawl to progressively engage and drive a toothed wheel or rod to accomplish stepped motion. Thus, the piezoelectric material converts electrical energy into kinetic energy of the mass, and the drive pawl and toothed wheel or rod convert the kinetic energy of the mass into the desired rotary or linear stepped motion. A compression frame may be secured about the piezoelectric element and adapted to pre-compress the piezoelectric material so as to reduce tensile loads thereon. A return spring may be used to return the mass to its resting position against the compression frame or piezoelectric material following launch. Alternative embodiments are possible, including a first alternative embodiment wherein two masses are launched in substantially different directions, and a second alternative embodiment wherein the mass is eliminated in favor of the piezoelectric material launching itself.

  14. Airborne Imagery Collections Barrow 2013

    DOE Data Explorer

    Cherry, Jessica; Crowder, Kerri

    2015-07-20

    The data here are orthomosaics, digital surface models (DSMs), and individual frames captured during low altitude airborne flights in 2013 at the Barrow Environmental Observatory. The orthomosaics, thermal IR mosaics, and DSMs were generated from the individual frames using Structure from Motion techniques.

  15. Time reversibility in the quantum frame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masot-Conde, Fátima

    2014-12-04

    Classical Mechanics and Electromagnetism, conventionally taken as time-reversible, share the same concept of motion (either of mass or charge) as the basis of the time reversibility in their own fields. This paper focuses on the relationship between mobile geometry and motion reversibility. The goal is to extrapolate the conclusions to the quantum frame, where matter and radiation behave just as elementary mobiles. The possibility that the asymmetry of Time (Time's arrow) is an effect of a fundamental quantum asymmetry of elementary particles turns out to be a consequence of the discussion.

  16. Analysis of free breathing motion using artifact reduced 4D CT image data

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Werner, Rene; Frenzel, Thorsten; Lu, Wei; Low, Daniel; Handels, Heinz

    2007-03-01

    The mobility of lung tumors during the respiratory cycle is a source of error in radiotherapy treatment planning. Spatiotemporal CT data sets can be used for studying the motion of lung tumors and inner organs during the breathing cycle. We present methods for the analysis of respiratory motion using 4D CT data in high temporal resolution. An optical flow based reconstruction method was used to generate artifact-reduced 4D CT data sets of lung cancer patients. The reconstructed 4D CT data sets were segmented and the respiratory motion of tumors and inner organs was analyzed. A non-linear registration algorithm is used to calculate the velocity field between consecutive time frames of the 4D data. The resulting velocity field is used to analyze trajectories of landmarks and surface points. By this technique, the maximum displacement of any surface point is calculated, and regions with large respiratory motion are marked. To describe the tumor mobility the motion of the lung tumor center in three orthogonal directions is displayed. Estimated 3D appearance probabilities visualize the movement of the tumor during the respiratory cycle in one static image. Furthermore, correlations between trajectories of the skin surface and the trajectory of the tumor center are determined and skin regions are identified which are suitable for prediction of the internal tumor motion. The results of the motion analysis indicate that the described methods are suitable to gain insight into the spatiotemporal behavior of anatomical and pathological structures during the respiratory cycle.
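    Propagating a landmark through the inter-frame velocity fields, as done above to obtain trajectories and maximum displacements, might look like the following sketch. The field layout (one 3-component voxel-displacement volume per frame transition) and the trilinear interpolation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def track_landmark(point, velocity_fields):
    """Propagate a landmark (z, y, x in voxels) through consecutive
    inter-frame velocity fields, each of shape (3, Z, Y, X) holding voxel
    displacements, using trilinear sampling. Returns the trajectory."""
    traj = [np.asarray(point, dtype=float)]
    for v in velocity_fields:
        p = traj[-1]
        dp = np.array([map_coordinates(v[c], p[:, None], order=1)[0]
                       for c in range(3)])
        traj.append(p + dp)
    return np.array(traj)

def max_displacement(traj):
    """Largest excursion of the landmark from its starting position."""
    return float(np.linalg.norm(traj - traj[0], axis=1).max())
```

    Repeating this for every surface point yields the per-point maximum-displacement maps used to mark regions with large respiratory motion.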

  17. Motion-aware temporal regularization for improved 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Mory, Cyril; Janssens, Guillaume; Rit, Simon

    2016-09-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp-Davis-Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.

  18. Impacts of GNSS position offsets on global frame stability

    NASA Astrophysics Data System (ADS)

    Griffiths, Jake; Ray, Jim

    2015-04-01

    Positional offsets appear in Global Navigation Satellite System (GNSS) time series for a variety of reasons. Antenna or radome changes are the most common cause for these discontinuities. Many others are from earthquakes, receiver changes, and different anthropogenic modifications at or near the stations. Some jumps appear for unknown or undocumented reasons. Accurate determination of station velocities, and therefore geophysical parameters and terrestrial reference frames, requires that positional offsets be correctly found and compensated. Williams (2003) found that undetected offsets introduce a random walk error component in individual station time series. The topic of detecting positional offsets has received considerable attention in recent years (e.g., Detection of Offsets in GPS Experiment; DOGEx), and most research groups using GNSS have adopted a mix of manual and automated methods for finding them. The removal of a positional offset from a time series is usually handled by estimating the average station position on both sides of the discontinuity. Except for large earthquake events, the velocity is usually assumed constant and continuous across the positional jump. This approach is sufficient in the absence of time-correlated errors. However, GNSS time series contain periodic and power-law (flicker) errors. In this paper, we evaluate the impact to individual station results and the overall stability of the global reference frame from adding increasing numbers of positional discontinuities. We use the International GNSS Service (IGS) weekly SINEX files, and iteratively insert positional offset parameters. Each iteration includes a restacking of the modified SINEX files using the CATREF software from Institut National de l'Information Géographique et Forestière (IGN). 
Comparisons of successive stacked solutions are used to assess the impacts on the time series of x-pole and y-pole offsets, along with changes in regularized position and secular velocity for stations with more than 2.5 years of data. Our preliminary results indicate that the change in polar motion scatter is logarithmic with increasing numbers of discontinuities. The best-fit natural logarithm to the changes in scatter for x-pole has R2 = 0.58; the fit for the y-pole series has R2 = 0.99. From these empirical functions, we find that polar motion scatter increases from zero when the total rate of discontinuities exceeds 0.2 (x-pole) and 1.3 (y-pole) per station, on average (the IGS has 0.65 per station). Thus, the presence of position offsets in GNSS station time series is likely already a contributor to IGS polar motion inaccuracy and global frame instability. Impacts to station position and velocity estimates depend on noise features found in that station's positional time series. For instance, larger changes in velocity occur for stations with shorter and noisier data spans. This is because an added discontinuity parameter for an individual station time series can induce changes in average position on both sides of the break. We will expand on these results, and consider remaining questions about the role of velocity discontinuities and the effects caused by non-core reference frame stations.
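    The logarithmic fits reported above can be reproduced generically as an ordinary least-squares fit of scatter = a·ln(n) + b together with its R²; this sketch assumes nothing beyond that model form, and the function name is illustrative.

```python
import numpy as np

def fit_log(n_disc, scatter):
    """Least-squares fit of scatter = a*ln(n) + b and the coefficient of
    determination R^2, for scatter vs. number of discontinuities n."""
    x = np.log(np.asarray(n_disc, dtype=float))
    y = np.asarray(scatter, dtype=float)
    a, b = np.polyfit(x, y, 1)           # linear fit in ln(n)
    pred = a * x + b
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot
```

    The "rate of discontinuities at which scatter increases from zero" quoted in the text is then the root of the fitted model, n = exp(-b/a).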

  19. A dynamic load estimation method for nonlinear structures with unscented Kalman filter

    NASA Astrophysics Data System (ADS)

    Guo, L. N.; Ding, Y.; Wang, Z.; Xu, G. S.; Wu, B.

    2018-02-01

    A force estimation method is proposed for hysteretic nonlinear structures. The equation of motion for the nonlinear structure is written in state space, and the state vector is augmented by the unknown time history of the external force. The unscented Kalman filter (UKF) is improved for force identification in state space, accounting for the ill-conditioning that can arise in the computation of square roots of the covariance matrix. The proposed method is first validated by a numerical simulation study of a 3-storey nonlinear hysteretic frame excited by a periodic force. Each storey is assumed to follow a nonlinear hysteretic model. The external force is identified, and measurement noise is considered in this case. Then a case of a seismically isolated building subjected to earthquake excitation and impact force is studied. The isolation layer behaves nonlinearly during the earthquake excitation. The impact force between the seismically isolated structure and the retaining wall is estimated with the proposed method. Uncertainties such as measurement noise, model error in storey stiffness and unexpected environmental disturbances are considered. A real-time substructure test of an isolated structure is conducted to verify the proposed method. In the experimental study, the linear main structure is taken as the numerical substructure, while one of the isolators with additional mass is taken as the nonlinear physical substructure. The force applied by the actuator on the physical substructure is identified and compared with the value measured by the force transducer. The method proposed in this paper is also validated by a shaking table test of a seismically isolated steel frame, in which the unknown ground-motion acceleration is identified by the proposed method.
Results from both numerical simulation and experimental studies indicate that the UKF-based force identification method can effectively identify external excitations for nonlinear structures, with accurate results even in the presence of measurement noise, model error and environmental disturbances.
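
As a rough illustration of the state-augmentation idea (not the authors' improved UKF), the sketch below estimates an unknown external force on a single-degree-of-freedom linear oscillator by appending the force to the state as a random walk. Displacement is measured to keep the toy system observable, and a small diagonal jitter guards the Cholesky square root against the ill-conditioning mentioned in the abstract. All parameter values are assumptions for this example.

```python
import numpy as np

m, c, k, dt = 1.0, 0.2, 5.0, 0.01  # assumed 1-DOF oscillator parameters

def f_trans(z):
    # State transition: z = [displacement, velocity, force]; the unknown
    # external force is appended to the state and modelled as a random walk.
    x, v, f = z
    a = (f - c*v - k*x) / m
    return np.array([x + dt*v, v + dt*a, f])

def h_meas(z):
    # Measurement model: displacement only.
    return z[:1]

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha**2*(n + kappa) - n
    # Jitter guards the square root against an ill-conditioned covariance.
    S = np.linalg.cholesky((n + lam)*cov + 1e-12*np.eye(n))
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    Wm = np.full(2*n + 1, 0.5/(n + lam))
    Wc = Wm.copy()
    Wm[0] = lam/(n + lam)
    Wc[0] = Wm[0] + 1.0 - alpha**2 + beta
    return pts, Wm, Wc

def ukf_step(mean, cov, y, Q, R):
    # Predict.
    pts, Wm, Wc = sigma_points(mean, cov)
    Xp = np.array([f_trans(p) for p in pts])
    mp = Wm @ Xp
    Pp = Q + sum(w*np.outer(d, d) for w, d in zip(Wc, Xp - mp))
    # Update.
    pts, Wm, Wc = sigma_points(mp, Pp)
    Yp = np.array([h_meas(p) for p in pts])
    my = Wm @ Yp
    Pyy = R + sum(w*np.outer(d, d) for w, d in zip(Wc, Yp - my))
    Pxy = sum(w*np.outer(dx, dy) for w, dx, dy in zip(Wc, pts - mp, Yp - my))
    K = Pxy @ np.linalg.inv(Pyy)
    mean = mp + K @ (y - my)
    cov = Pp - K @ Pyy @ K.T
    return mean, 0.5*(cov + cov.T)  # symmetrize for numerical safety
```

Running this filter against a simulated response driven by a constant force recovers both the force history and the state, even though the force is never measured directly.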

  20. Ambient-Light-Canceling Camera Using Subtraction of Frames

    NASA Technical Reports Server (NTRS)

    Morookian, John Michael

    2004-01-01

    The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period.
Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and by only the ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
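
The frame-pair subtraction at the heart of the ALCC can be illustrated in a few lines. This is a minimal sketch with hypothetical 8-bit frames; the actual camera would perform it per ROI pair at several hundred frames per second.

```python
import numpy as np

def ambient_cancel(frame_led_on, frame_led_off):
    # Subtract the background-only frame (LED off) from the
    # signal-plus-background frame (LED on). Work in a signed type so that
    # negative differences caused by noise clip to zero instead of wrapping.
    diff = frame_led_on.astype(np.int16) - frame_led_off.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```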

  1. Functionally interpretable local coordinate systems for the upper extremity using inertial & magnetic measurement systems.

    PubMed

    de Vries, W H K; Veeger, H E J; Cutti, A G; Baten, C; van der Helm, F C T

    2010-07-20

    Inertial Magnetic Measurement Systems (IMMS) are becoming increasingly popular because they allow measurements outside the motion laboratory. The latest models enable long-term, accurate measurement of segment motion in terms of joint angles, provided initial segment orientations can be determined accurately. The standard procedure for definition of segmental orientation is based on the measurement of positions of bony landmarks (BLM). However, IMMS do not deliver position information, so an alternative method to establish IMMS-based, anatomically understandable segment orientations is proposed. For five subjects, IMMS recordings were collected in a standard anatomical position for definition of static axes, and during a series of standardized motions for the estimation of kinematic axes of rotation. For all axes, the intra- and inter-individual dispersion was estimated. Subsequently, local coordinate systems (LCS) were constructed on the basis of the combination of IMMS axes with the lowest dispersion and compared with BLM-based LCS. The repeatability of the method appeared to be high; for every segment at least two axes could be determined with a dispersion of at most 3.8 degrees. Comparison of IMMS-based with BLM-based LCS yielded compatible results for the thorax, but less compatible results for the humerus, forearm and hand, where differences in orientation rose to 17.2 degrees. Although different from the 'gold standard' BLM-based LCS, IMMS-based LCS can be constructed repeatably, enabling the estimation of segment orientations outside the laboratory. A procedure for the definition of local reference frames using IMMS is proposed. Copyright © 2010 Elsevier Ltd. All rights reserved.
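
Once two sufficiently repeatable axes are available for a segment, completing a right-handed orthonormal local coordinate system is a pair of cross products. A minimal sketch; the axis roles and names are illustrative, not the paper's exact segment conventions.

```python
import numpy as np

def build_lcs(axis_primary, axis_secondary):
    # Form a right-handed orthonormal LCS from two measured axes: the
    # primary axis is kept exactly; the secondary only defines the plane.
    z = np.asarray(axis_primary, float)
    z = z / np.linalg.norm(z)
    x = np.cross(np.asarray(axis_secondary, float), z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])  # columns: LCS axes in the sensor frame
```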

  2. In vivo high-resolution structural imaging of large arteries in small rodents using two-photon laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Megens, Remco T. A.; Reitsma, Sietze; Prinzen, Lenneke; Oude Egbrink, Mirjam G. A.; Engels, Wim; Leenders, Peter J. A.; Brunenberg, Ellen J. L.; Reesink, Koen D.; Janssen, Ben J. A.; Ter Haar Romeny, Bart M.; Slaaf, Dick W.; van Zandvoort, Marc A. M. J.

    2010-01-01

    In vivo (molecular) imaging of the vessel wall of large arteries at subcellular resolution is crucial for unraveling vascular pathophysiology. We previously showed the applicability of two-photon laser scanning microscopy (TPLSM) in mounted arteries ex vivo. However, in vivo TPLSM has thus far suffered from in-frame and between-frame motion artifacts due to arterial movement with cardiac and respiratory activity. Now, motion artifacts are suppressed by accelerated image acquisition triggered on cardiac and respiratory activity. In vivo TPLSM is performed on rat renal and mouse carotid arteries, both surgically exposed and labeled fluorescently (cell nuclei, elastin, and collagen). The use of short acquisition times consistently limits in-frame motion artifacts. Additionally, triggered imaging reduces between-frame artifacts. Indeed, structures in the vessel wall (cell nuclei, elastic laminae) can be imaged at subcellular resolution. In mechanically damaged carotid arteries, even the subendothelial collagen sheet (~1 μm) is visualized using collagen-targeted quantum dots. We demonstrate stable in vivo imaging of large arteries at subcellular resolution using TPLSM triggered on cardiac and respiratory cycles. This creates great opportunities for studying (diseased) arteries in vivo or immediate validation of in vivo molecular imaging techniques such as magnetic resonance imaging (MRI), ultrasound, and positron emission tomography (PET).

  3. Covariant Structure of Models of Geophysical Fluid Motion

    NASA Astrophysics Data System (ADS)

    Dubos, Thomas

    2018-01-01

    Geophysical models approximate classical fluid motion in rotating frames. Even accurate approximations can have profound consequences, such as the loss of inertial frames. If geophysical fluid dynamics are not strictly equivalent to Newtonian hydrodynamics observed in a rotating frame, what kind of dynamics are they? We aim to clarify fundamental similarities and differences between relativistic, Newtonian, and geophysical hydrodynamics, using variational and covariant formulations as tools to shed the necessary light. A space-time variational principle for the motion of a perfect fluid is introduced. The geophysical action is interpreted as a synchronous limit of the relativistic action. The relativistic Levi-Civita connection also has a finite synchronous limit, which provides a connection with which to endow geophysical space-time, generalizing Cartan (1923). A covariant mass-momentum budget is obtained using covariance of the action and metric-preserving properties of the connection. Ultimately, geophysical models are found to differ from the standard compressible Euler model only by a specific choice of a metric-Coriolis-geopotential tensor akin to the relativistic space-time metric. Once this choice is made, the same covariant mass-momentum budget applies to Newtonian and all geophysical hydrodynamics, including those models lacking an inertial frame. Hence, it is argued that this mass-momentum budget provides an appropriate, common fundamental principle of dynamics. The postulate that Euclidean, inertial frames exist can then be regarded as part of the Newtonian theory of gravitation, which some models of geophysical hydrodynamics slightly violate.

  4. IMU: inertial sensing of vertical CoM movement.

    PubMed

    Esser, Patrick; Dawes, Helen; Collett, Johnny; Howells, Ken

    2009-07-22

    The purpose of this study was to use a quaternion rotation matrix in combination with an integration approach to transform translatory accelerations of the centre of mass (CoM) from an inertial measurement unit (IMU) during walking, from the object system onto the global frame. Second, this paper utilises double integration to determine the relative change in position of the CoM from the vertical acceleration data. Five participants were tested, in which an IMU consisting of accelerometers, gyroscopes and magnetometers was attached over the lower spine at the estimated centre of mass. Participants were asked to walk three times through a calibrated volume at their self-selected walking speed. Synchronized data were collected by an IMU and an optical motion capture system (OMCS); both measured at 100 Hz. Accelerations of the IMU were transposed onto the global frame using a quaternion rotation matrix. Translatory acceleration, speed and relative change in position from the IMU were compared with the derived data from the OMCS. Peak acceleration in the vertical axis showed no significant difference (p ≥ 0.05). The difference between peak and trough speed differed significantly (p < 0.05), but the relative peak-trough position between the IMU and OMCS did not show any significant difference (p ≥ 0.05). These results indicate that quaternions, in combination with Simpson's rule integration, can be used to transform translatory acceleration from the object frame to the global frame and thereby obtain the relative change in position, thus offering a solution for using accelerometers in accurate global frame kinematic gait analyses.
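
The two steps described (rotating object-frame accelerations into the global frame with a unit quaternion, then integrating twice for relative vertical position) can be sketched as follows. Trapezoidal integration stands in for the paper's Simpson's-rule scheme; this is an illustration, not the authors' implementation.

```python
import numpy as np

def quat_rotate(q, v):
    # Rotate vector v from the sensor (object) frame to the global frame
    # using the unit quaternion q = (w, x, y, z), i.e. v' = q v q*.
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
    return R @ v

def double_integrate(acc, dt):
    # Integrate acceleration twice (trapezoidal rule) to obtain the
    # relative change in vertical position, starting from rest.
    vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1])/2*dt)))
    pos = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1])/2*dt)))
    return pos
```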

  5. Dynamic dual-energy chest radiography: a potential tool for lung tissue motion monitoring and kinetic study

    PubMed Central

    Xu, Tong; Ducote, Justin L.; Wong, Jerry T.; Molloi, Sabee

    2011-01-01

    Dual-energy chest radiography has the potential to provide better diagnosis of lung disease by removing the bone signal from the image. Dynamic dual-energy radiography is now possible with the introduction of digital flat-panel detectors. The purpose of this study is to evaluate the feasibility of using dynamic dual-energy chest radiography for functional lung imaging and tumor motion assessment. The dual-energy system used in this study can acquire up to 15 frames of dual-energy images per second. A swine animal model was mechanically ventilated and imaged using the dual-energy system. Sequences of soft-tissue images were obtained using dual-energy subtraction. Time-subtracted soft-tissue images were shown to be able to provide information on regional ventilation. Motion tracking of a lung anatomic feature (a branch of the pulmonary artery) was performed based on an image cross-correlation algorithm. The tracking precision was found to be better than 1 mm. An adaptive correlation model was established between the tracked motion and an external surrogate signal (temperature within the tracheal tube). This model is used to predict lung feature motion using the continuous surrogate signal and low-frame-rate dual-energy images (0.1 to 3.0 frames/s). The average RMS error of the prediction was (1.1 ± 0.3) mm. Dynamic dual-energy imaging was shown to be potentially useful for functional lung imaging such as regional ventilation and kinetic studies. It can also be used for lung tumor motion assessment and prediction during radiation therapy. PMID:21285477

  6. Dynamic dual-energy chest radiography: a potential tool for lung tissue motion monitoring and kinetic study.

    PubMed

    Xu, Tong; Ducote, Justin L; Wong, Jerry T; Molloi, Sabee

    2011-02-21

    Dual-energy chest radiography has the potential to provide better diagnosis of lung disease by removing the bone signal from the image. Dynamic dual-energy radiography is now possible with the introduction of digital flat-panel detectors. The purpose of this study is to evaluate the feasibility of using dynamic dual-energy chest radiography for functional lung imaging and tumor motion assessment. The dual-energy system used in this study can acquire up to 15 frames of dual-energy images per second. A swine animal model was mechanically ventilated and imaged using the dual-energy system. Sequences of soft-tissue images were obtained using dual-energy subtraction. Time-subtracted soft-tissue images were shown to be able to provide information on regional ventilation. Motion tracking of a lung anatomic feature (a branch of the pulmonary artery) was performed based on an image cross-correlation algorithm. The tracking precision was found to be better than 1 mm. An adaptive correlation model was established between the tracked motion and an external surrogate signal (temperature within the tracheal tube). This model is used to predict lung feature motion using the continuous surrogate signal and low-frame-rate dual-energy images (0.1-3.0 frames per second). The average RMS error of the prediction was (1.1 ± 0.3) mm. Dynamic dual-energy imaging was shown to be potentially useful for functional lung imaging such as regional ventilation and kinetic studies. It can also be used for lung tumor motion assessment and prediction during radiation therapy.
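
The cross-correlation tracking step used in these two records can be sketched as an exhaustive normalized cross-correlation template search. This is a naive illustration of the principle; the papers' actual algorithm and any sub-pixel refinement are not reproduced.

```python
import numpy as np

def track_feature(frame, template):
    # Locate a template (e.g., a patch around a pulmonary-artery branch)
    # in a frame by normalized cross-correlation over all positions.
    # Returns the (row, col) of the best-matching window.
    th, tw = template.shape
    t = template - template.mean()
    best, pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r+th, c:c+tw]
            wz = w - w.mean()
            denom = np.sqrt((wz**2).sum() * (t**2).sum())
            score = (wz*t).sum()/denom if denom > 0 else -np.inf
            if score > best:
                best, pos = score, (r, c)
    return pos
```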

  7. SU-E-J-58: Comparison of Conformal Tracking Methods Using Initial, Adaptive and Preceding Image Frames for Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teo, P; Guo, K; Alayoubi, N

    Purpose: Accounting for tumor motion during radiation therapy is important to ensure that the tumor receives the prescribed dose. Increasing the field size to account for this motion exposes the surrounding healthy tissues to unnecessary radiation. In contrast to using motion-encompassing techniques to treat moving tumors, conformal radiation therapy (RT) uses a smaller field to track the tumor and adapts the beam aperture according to the motion detected. This work investigates and compares the performance of three markerless, EPID-based, optical flow methods to track tumor motion with conformal RT. Methods: Three techniques were used to track the motion of a 3D-printed lung tumor programmed to move according to tumor traces from seven lung cancer patients. These techniques utilized a multi-resolution optical flow algorithm as the core computation for image registration. The first method (DIR) registers the incoming images with an initial reference frame, the second method (RFSF) uses an adaptive reference frame, and the third method (CU) uses preceding image frames for registration. The patient traces and errors were evaluated for the seven patients. Results: The average position errors over all patient traces were 0.12 ± 0.33 mm, −0.05 ± 0.04 mm and −0.28 ± 0.44 mm for the CU, DIR and RFSF methods, respectively. The position errors within one standard deviation are distributed within 0.74 mm, 0.37 mm and 0.96 mm, respectively. The CU and RFSF algorithms are sensitive to the characteristics of the patient trace and produce a wider distribution of errors amongst patients. Although the mean error for the DIR method is negatively biased (−0.05 mm) for all patients, it has the narrowest distribution of position error, which can be corrected using an offset calibration. Conclusion: Three techniques of image registration and position update were studied. Direct comparison with an initial frame yields the best performance.
The authors would like to thank Dr. YeLin Suh for making the CyberKnife dataset available. Scholarship funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the CancerCare Manitoba Foundation is acknowledged.
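
The core of such optical-flow registration is solving for the displacement that best explains the intensity change between two frames. Below is a single-level Lucas-Kanade step restricted to a pure global translation, a toy sketch of the innermost computation of the multi-resolution algorithms compared above.

```python
import numpy as np

def lucas_kanade_shift(ref, cur):
    # One-level Lucas-Kanade estimate of a global translation between two
    # frames: solve the normal equations of  gx*dx + gy*dy + gt = 0
    # accumulated over all pixels.
    gy, gx = np.gradient(ref.astype(float))   # image gradients (rows, cols)
    gt = cur.astype(float) - ref.astype(float)  # temporal difference
    A = np.array([[(gx*gx).sum(), (gx*gy).sum()],
                  [(gx*gy).sum(), (gy*gy).sum()]])
    b = -np.array([(gx*gt).sum(), (gy*gt).sum()])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy  # shift of cur relative to ref, in pixels
```

A real tracker would apply this within a coarse-to-fine pyramid and iterate, warping between levels; the single linearized step above is only accurate for sub-pixel to small shifts.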

  8. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin

    Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and the low contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in either a voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: To handle the complex H&N structures and ensure automatic and robust upper airway motion tracking, the authors first built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and an implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame by frame, over the entire image sequence. The multifeature fitting energy, incorporating intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of the moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient.
The resulting average Dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. Conclusions: The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can handle complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on improving method reliability, analyzing patient motion patterns to provide more information for patient-specific prediction of structure displacements, and assessing motion effects on dosimetry for better H&N motion management in radiation therapy.

  9. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy.

    PubMed

    Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa

    2016-08-01

    For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and the low contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in either a voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. To handle the complex H&N structures and ensure automatic and robust upper airway motion tracking, the authors first built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and an implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame by frame, over the entire image sequence. The multifeature fitting energy, incorporating intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of the moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient.
The resulting average Dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can handle complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on improving method reliability, analyzing patient motion patterns to provide more information for patient-specific prediction of structure displacements, and assessing motion effects on dosimetry for better H&N motion management in radiation therapy.
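
The statistical-shape-model step described in these two records (PCA over training contours) can be sketched as follows. The landmark layout and the variance-retention threshold are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def build_shape_model(shapes, var_keep=0.95):
    # Statistical shape model via PCA. 'shapes' is (n_samples, 2*n_landmarks),
    # each row a training contour flattened as [x1, y1, x2, y2, ...].
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2
    k = int(np.searchsorted(np.cumsum(var)/var.sum(), var_keep)) + 1
    # Return the mean shape, the k dominant modes, and their variances.
    return mean, Vt[:k], var[:k]/(len(shapes) - 1)

def reconstruct(mean, modes, coeffs):
    # A new shape instance is the mean plus a weighted sum of modes.
    return mean + coeffs @ modes
```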

  10. Motion of glossy objects does not promote separation of lighting and surface colour

    PubMed Central

    2017-01-01

    The surface properties of an object, such as texture, glossiness or colour, provide important cues to its identity. However, the actual visual stimulus received by the eye is determined by both the properties of the object and the illumination. We tested whether operational colour constancy for glossy objects (the ability to distinguish changes in spectral reflectance of the object, from changes in the spectrum of the illumination) was affected by rotational motion of either the object or the light source. The different chromatic and geometric properties of the specular and diffuse reflections provide the basis for this discrimination, and we systematically varied specularity to control the available information. Observers viewed animations of isolated objects undergoing either lighting or surface-based spectral transformations accompanied by motion. By varying the axis of rotation, and surface patterning or geometry, we manipulated: (i) motion-related information about the scene, (ii) relative motion between the surface patterning and the specular reflection of the lighting, and (iii) image disruption caused by this motion. Despite large individual differences in performance with static stimuli, motion manipulations neither improved nor degraded performance. As motion significantly disrupts frame-by-frame low-level image statistics, we infer that operational constancy depends on a high-level scene interpretation, which is maintained in all conditions. PMID:29291113

  11. The effect of spatial orientation on detecting motion trajectories in noise.

    PubMed

    Pavan, Andrea; Casco, Clara; Mather, George; Bellacosa, Rosilari M; Cuturi, Luigi F; Campana, Gianluca

    2011-09-15

    A series of experiments investigated the extent to which the spatial orientation of a signal line affects discrimination of its trajectory from the random trajectories of background noise lines. The orientation of the signal line was either parallel (iso-) or orthogonal (ortho-) to its motion direction and it was identical in all respects to the noise (orientation, length and speed) except for its motion direction, rendering the signal line indistinguishable from the noise on a frame-to-frame basis. We found that discrimination of ortho-trajectories was generally better than iso-trajectories. Discrimination of ortho-trajectories was largely immune to the effects of spatial jitter in the trajectory, and to variations in step size and line-length. Discrimination of iso-trajectories was reliable provided that step-size was not too short and did not exceed line length, and that the trajectory was straight. The new result that trajectory discrimination in moving line elements is modulated by line orientation suggests that ortho- and iso-trajectory discrimination rely upon two distinct mechanisms: iso-motion discrimination involves a 'motion-streak' process that combines motion information with information about orientation parallel to the motion trajectory, while ortho-motion discrimination involves extended trajectory facilitation in a network of receptive fields with orthogonal orientation tuning. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. On a Simple Formulation of the Golf Ball Paradox

    ERIC Educational Resources Information Center

    Pujol, O.; Perez, J. Ph.

    2007-01-01

    The motion of a ball rolling without slipping on the lateral surface inside a fixed vertical cylinder is analysed in the Earth reference frame, which is assumed to be Galilean. Equations of motion are rapidly obtained and the golf ball paradox is understood: these equations describe a motion consisting of a vertical harmonic oscillation related…

  13. Determination of the Static Friction Coefficient from Circular Motion

    ERIC Educational Resources Information Center

    Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.

    2014-01-01

    This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera recording at 240 frames s⁻¹, and the…
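
The analysis behind such an exercise reduces to a one-line relation: at the angular speed where the coin first slips, static friction just supplies the required centripetal force, mu_s*m*g = m*omega^2*r. A minimal sketch (the numbers in the usage note are illustrative, not the paper's data):

```python
import math

def static_friction_coefficient(omega_slip, radius, g=9.81):
    # At the slipping threshold, friction supplies the centripetal force:
    # mu_s * m * g = m * omega^2 * r  =>  mu_s = omega^2 * r / g
    return omega_slip**2 * radius / g
```

For example, a coin at r = 0.10 m that slips at omega = 2π rad/s gives mu_s ≈ 0.40.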

  14. VizieR Online Data Catalog: PMA Catalogue (Akhmetov+, 2017)

    NASA Astrophysics Data System (ADS)

    Akhmetov, V. S.; Fedorov, P. N.; Velichko, A. B.; Shulga, V. M.

    2017-06-01

    The idea for creating the catalogue is very simple. The PMA catalogue has been derived from a combination of two catalogues, namely 2MASS and Gaia DR1. The difference of epochs of observations for these catalogues is approximately 15 yr. The positions of objects in the Gaia DR1 catalogue are referred to the reference frame, which is consistent with the ICRF to better than 0.1 mas for the J2015.0 epoch. The positions of objects in 2MASS are referred to the HCRF, which, as was shown in Kovalevsky et al. (1997A&A...323..620K), is aligned with the ICRF to within ±0.6 mas at the epoch 1991.25 and is non-rotating with respect to distant extragalactic objects to within ±0.25 mas/yr. By comparing the positions of the common objects contained in the catalogues, it is possible to determine their proper motions within their common range of stellar magnitudes by dividing the differences of positions by the time interval between the observations. Formally, proper motions derived in such a way are given in the ICRF system, because the positions of both Gaia DR1 stars and those of 2MASS objects (through Hipparcos/Tycho-2 stars) are given in the ICRF and cover the whole sphere without gaps. We designate them further in this paper as relative, with the aim of discriminating them from absolute ones, which refer to the reference frame defined by the positions of about 1.6 million galaxies from Gaia DR1. Individual errors of the proper motions of stars in the PMA Catalogue cannot be estimated from internal convergence, because direct positional errors are not given in 2MASS. Therefore we use some indirect methods to obtain estimates of the uncertainties of the proper motions. After elimination of the systematic errors, the root-mean-squared deviation of the coordinate differences of extended sources is about 200 mas, and the mean number of galaxies inside each pixel is about 1300, so we expect the error of the absolute calibration to be 0.35 mas/yr.
We compared the proper motions of common objects from PMA with those from the TGAS and UCAC4 catalogues. From the mean-square errors of the (PMA-TGAS) and (PMA-UCAC4) proper-motion differences in each pixel, the corresponding errors in PMA vary from 2 to 10 mas/yr depending on magnitude, consistent with the errors calculated above. In case of any problems or questions, please contact V.S. Akhmetov by e-mail (akhmetovvs(at)gmail.com or akhmetov(at)astron.kharkov.ua). (1 data file).
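The two-epoch computation described in this record (position differences divided by the epoch span, with the RA difference scaled by cos δ to give a great-circle displacement) can be sketched as follows; the coordinates and epoch span below are illustrative values, not PMA data:

```python
import numpy as np

def proper_motion(ra1_deg, dec1_deg, ra2_deg, dec2_deg, dt_yr):
    """Proper-motion components (mas/yr) from positions at two epochs
    separated by dt_yr years; the RA difference is scaled by cos(dec)
    so that both components are displacements on the sky."""
    dec_mid = np.deg2rad(0.5 * (dec1_deg + dec2_deg))
    dra = (ra2_deg - ra1_deg) * np.cos(dec_mid)    # degrees on the sky
    ddec = dec2_deg - dec1_deg
    deg_to_mas = 3.6e6                             # 1 deg = 3 600 000 mas
    return dra * deg_to_mas / dt_yr, ddec * deg_to_mas / dt_yr

# Illustrative 2MASS (1999.0) vs. Gaia DR1 (2015.0) position pair, dt = 16 yr
mu_ra, mu_dec = proper_motion(150.0, 30.0, 150.00002, 30.00001, 16.0)
```

The same division by the epoch span also propagates the positional errors, which is why the roughly 15-yr baseline keeps the PMA proper-motion errors at the mas/yr level.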

  15. Effect of general relativity on a near-Earth satellite in the geocentric and barycentric reference frames

    NASA Technical Reports Server (NTRS)

    Ries, J. C.; Huang, C.; Watkins, M. M.

    1988-01-01

    Whether one uses a solar-system barycentric frame or a geocentric frame when including the general theory of relativity in orbit determinations for near-Earth satellites, the results should be equivalent to some limiting accuracy. The purpose of this paper is to clarify the effects of relativity in each frame and to demonstrate their equivalence through the analysis of real laser-tracking data. A correction to the conventional barycentric equations of motion is shown to be required.

  16. Non-linearity of geocentre motion and its impact on the origin of the terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Dong, Danan; Qu, Weijing; Fang, Peng; Peng, Dongju

    2014-08-01

    The terrestrial reference frame is a cornerstone for modern geodesy and its applications for a wide range of Earth sciences. The underlying assumption for establishing a terrestrial reference frame is that the motion of the solid Earth's figure centre relative to the mass centre of the Earth system on a multidecadal timescale is linear. However, past international terrestrial reference frames (ITRFs) showed unexpected accelerated motion in their translation parameters. Based on this underlying assumption, the inconsistency of relative origin motions of the ITRFs has been attributed to data reduction imperfection. We investigated the impact of surface mass loading from atmosphere, ocean, snow, soil moisture, ice sheet, glacier and sea level from 1983 to 2008 on the geocentre variations. The resultant geocentre time-series display notable trend acceleration from 1998 onward, in particular in the z-component. This effect is primarily driven by the hydrological mass redistribution in the continents (soil moisture, snow, ice sheet and glacier). The acceleration is statistically significant at the 99 per cent confidence level as determined using the Mann-Kendall test, and it is highly correlated with the satellite laser ranging determined translation series. Our study, based on independent geophysical and hydrological models, demonstrates that, in addition to systematic errors from analysis procedures, the observed non-linearity of the Earth-system behaviour at interannual timescales is physically driven and is able to explain 42 per cent of the disparity between the origins of ITRF2000 and ITRF2005, as well as the high level of consistency between the ITRF2005 and ITRF2008 origins.

  17. Real-time synchronization of kinematic and video data for the comprehensive assessment of surgical skills.

    PubMed

    Dosis, Aristotelis; Bello, Fernando; Moorthy, Krishna; Munz, Yaron; Gillies, Duncan; Darzi, Ara

    2004-01-01

    Surgical dexterity in operating theatres has traditionally been assessed subjectively. Electromagnetic (EM) motion tracking systems such as the Imperial College Surgical Assessment Device (ICSAD) have been shown to produce valid and accurate objective measures of surgical skill. To allow for video integration we have modified the data acquisition and built it within the ROVIMAS analysis software. We then used ActiveX 9.0 DirectShow video capturing and the system clock as a time stamp for the synchronized concurrent acquisition of kinematic data and video frames. Interactive video/motion data browsing was implemented to allow the user to concentrate on frames exhibiting certain kinematic properties that could result in operative errors. We exploited video-data synchronization to calculate the camera visual hull by identifying all 3D vertices using the ICSAD electromagnetic sensors. We also concentrated on high velocity peaks as a means of identifying potential erroneous movements to be confirmed by studying the corresponding video frames. The outcome of the study clearly shows that the kinematic data are precisely synchronized with the video frames and that the velocity peaks correspond to large and sudden excursions of the instrument tip. We validated the camera visual hull by both video and geometrical kinematic analysis and we observed that graphs containing fewer sudden velocity peaks are less likely to have erroneous movements. This work presented further developments to the well-established ICSAD dexterity analysis system. Synchronized real-time motion and video acquisition provides a comprehensive assessment solution by combining quantitative motion analysis tools and qualitative targeted video scoring.

  18. Experimental evaluation of four ground-motion scaling methods for dynamic response-history analysis of nonlinear structures

    USGS Publications Warehouse

    O'Donnell, Andrew P.; Kurama, Yahya C.; Kalkan, Erol; Taflanidis, Alexandros A.

    2017-01-01

    This paper experimentally evaluates four methods to scale earthquake ground motions within an ensemble of records to minimize the statistical dispersion and maximize the accuracy in the dynamic peak roof drift demand and peak inter-story drift demand estimates from response-history analyses of nonlinear building structures. The scaling methods that are investigated are based on: (1) ASCE/SEI 7–10 guidelines; (2) spectral acceleration at the fundamental (first mode) period of the structure, Sa(T1); (3) maximum incremental velocity, MIV; and (4) modal pushover analysis. A total of 720 shake-table tests of four small-scale nonlinear building frame specimens with different static and dynamic characteristics are conducted. The peak displacement demands from full suites of 36 near-fault ground-motion records as well as from smaller “unbiased” and “biased” design subsets (bins) of ground motions are included. Out of the four scaling methods, ground motions scaled to the median MIV of the ensemble resulted in the smallest dispersion in the peak roof and inter-story drift demands. Scaling based on MIV also provided the most accurate median demands as compared with the “benchmark” demands for structures with greater nonlinearity; however, this accuracy was reduced for structures exhibiting reduced nonlinearity. The modal pushover-based scaling (MPS) procedure was the only method to conservatively overestimate the median drift demands.

  19. Bio-inspired optical rotation sensor

    NASA Astrophysics Data System (ADS)

    O'Carroll, David C.; Shoemaker, Patrick A.; Brinkworth, Russell S. A.

    2007-01-01

    Traditional approaches to calculating self-motion from visual information in artificial devices have generally relied on object identification and/or correlation of image sections between successive frames. Such calculations are computationally expensive, and real-time digital implementation requires powerful processors. In contrast, flies arrive at essentially the same outcome, the estimation of self-motion, in a much smaller package using vastly less power. Despite the potential advantages and a few notable successes, few neuromorphic analog VLSI devices based on biological vision have been employed in practical applications to date. This paper describes a hardware implementation in aVLSI of our recently developed adaptive model for motion detection. The chip integrates motion over a linear array of local motion processors to give a single voltage output. Although the device lacks on-chip photodetectors, it includes bias circuits to use currents from external photodiodes, and we have integrated it with a ring-array of 40 photodiodes to form a visual rotation sensor. The ring configuration reduces pattern noise and, combined with the pixel-wise adaptive characteristic of the underlying circuitry, permits a robust output that is proportional to image rotational velocity over a large range of speeds and is largely independent of either mean luminance or the spatial structure of the image viewed. In principle, such devices could be used as an element of a velocity-based servo to replace or augment inertial guidance systems in applications such as mUAVs.
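The local motion processors described above descend from the classic Hassenstein-Reichardt elementary motion detector, in which each pair of neighbouring inputs correlates one delayed and one undelayed signal and the opponent subtraction yields a direction-selective output. The sketch below is an illustrative textbook correlator over a ring of inputs, not the paper's adaptive model; the signal shapes and time constants are assumptions:

```python
import numpy as np

def reichardt_emd(signals, dt, tau):
    """Hassenstein-Reichardt detectors over a ring of inputs.
    signals: (T, N) array of photoreceptor samples; the ring wraps.
    A first-order low-pass serves as the delay; each adjacent pair
    contributes LP(s_i)*s_{i+1} - s_i*LP(s_{i+1}), summed over the ring
    (analogous to the chip's single integrated voltage output)."""
    alpha = dt / (tau + dt)                 # exponential low-pass coefficient
    delayed = np.zeros_like(signals)
    for t in range(1, signals.shape[0]):
        delayed[t] = delayed[t - 1] + alpha * (signals[t] - delayed[t - 1])
    left = np.roll(signals, -1, axis=1)     # neighbouring input (ring wraps)
    out = delayed * left - signals * np.roll(delayed, -1, axis=1)
    return out.sum(axis=1)                  # integrate over the ring

# A sinusoidal pattern rotating around the ring gives a mean output whose
# sign flips with the direction of rotation.
t = np.arange(2000)[:, None] * 1e-3
phi = np.arange(40)[None, :] * (2 * np.pi / 40)
cw = 1 + np.sin(2 * np.pi * 2 * t + phi)
ccw = 1 + np.sin(2 * np.pi * 2 * t - phi)
r_cw = reichardt_emd(cw, 1e-3, 0.02).mean()
r_ccw = reichardt_emd(ccw, 1e-3, 0.02).mean()
```

Summing opponent pairs around the ring is what cancels pattern noise: contributions that do not correspond to coherent rotation average out across the 40 detectors.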

  20. A new imaging technique on strength and phase of pulsatile tissue-motion in brightness-mode ultrasonogram

    NASA Astrophysics Data System (ADS)

    Fukuzawa, Masayuki; Yamada, Masayoshi; Nakamori, Nobuyuki; Kitsunezuka, Yoshiki

    2007-03-01

    A new imaging technique has been developed for observing both the strength and the phase of pulsatile tissue-motion in a movie of brightness-mode ultrasonogram. The pulsatile tissue-motion is determined by evaluating the heartbeat-frequency component of the Fourier transform of the pixel value as a function of time at each pixel in a movie of ultrasonogram (640x480 pixels/frame, 8 bit/pixel, 33 ms/frame) taken by a conventional ultrasonograph apparatus (ATL HDI5000). In order to visualize both the strength and the phase of the pulsatile tissue-motion, we propose a pulsatile-phase image, obtained by superimposing a color gradation proportional to the motion phase on the original ultrasonogram only at pixels where the motion strength exceeds a proper threshold. The pulsatile-phase image obtained from a cranial ultrasonogram of a normal neonate clearly reveals that the motion region agrees well with the anatomical shape and position of the middle cerebral artery and the corpus callosum. The motion phase fluctuates along the arteries, revealing local obstruction of blood flow. The pulsatile-phase images of neonates with asphyxia at birth reveal a decrease of the motion region and an increase of the phase fluctuation due to the weakness and local disturbance of blood flow, which is useful for pediatric diagnosis.
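The per-pixel evaluation of the heartbeat-frequency component can be sketched with a temporal FFT, as below. This is a minimal illustration of the idea, not the authors' implementation; the frame geometry, frame rate, heart rate, and threshold are assumed values:

```python
import numpy as np

def pulsatile_phase_image(movie, fps, heart_hz, strength_thresh):
    """Per-pixel strength and phase of the heartbeat-frequency component.
    movie: (T, H, W) array of pixel values over T frames.
    Returns (strength, phase, mask); phase is meaningful where mask is True."""
    T = movie.shape[0]
    spec = np.fft.rfft(movie, axis=0)           # temporal spectrum per pixel
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    k = np.argmin(np.abs(freqs - heart_hz))     # bin nearest the heart rate
    strength = np.abs(spec[k]) / T
    phase = np.angle(spec[k])
    mask = strength > strength_thresh           # threshold for overlay
    return strength, phase, mask

# Synthetic check: one pixel pulsing at 2 Hz, sampled at 30 fps for 300 frames
t = np.arange(300) / 30.0
movie = np.zeros((300, 4, 4))
movie[:, 1, 2] = 10 * np.cos(2 * np.pi * 2.0 * t)
s, p, m = pulsatile_phase_image(movie, 30.0, 2.0, 1.0)
```

A color gradation keyed to `phase` would then be overlaid on the original frame only where `mask` holds, as the record describes.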

  1. Real-time motion-based H.263+ frame rate control

    NASA Astrophysics Data System (ADS)

    Song, Hwangjun; Kim, JongWon; Kuo, C.-C. Jay

    1998-12-01

    Most existing H.263+ rate control algorithms, e.g. the one adopted in the test model of the near-term standard (TMN8), focus on macroblock-layer rate control and low latency under the assumptions of a constant frame rate and a constant bit rate (CBR) channel. These algorithms do not accommodate transmission bandwidth fluctuation efficiently, and the resulting video quality can be degraded. In this work, we propose a new H.263+ rate control scheme which supports the variable bit rate (VBR) channel through the adjustment of the encoding frame rate and the quantization parameter. A fast algorithm for encoding frame rate control, based on the inherent motion information within a sliding window of the underlying video, is developed to efficiently pursue a good tradeoff between spatial and temporal quality. The proposed rate control algorithm also takes the time-varying bandwidth characteristic of the Internet into account and is able to accommodate the change accordingly. Experimental results are provided to demonstrate the superior performance of the proposed scheme.

  2. Quantification of the relative contribution of the different right ventricular wall motion components to right ventricular ejection fraction: the ReVISION method.

    PubMed

    Lakatos, Bálint; Tősér, Zoltán; Tokodi, Márton; Doronina, Alexandra; Kosztin, Annamária; Muraru, Denisa; Badano, Luigi P; Kovács, Attila; Merkely, Béla

    2017-03-27

    Three major mechanisms contribute to right ventricular (RV) pump function: (i) shortening of the longitudinal axis with traction of the tricuspid annulus towards the apex; (ii) inward movement of the RV free wall; (iii) bulging of the interventricular septum into the RV and stretching of the free wall over the septum. The relative contribution of these mechanisms to RV pump function may change in different pathological conditions. Our aim was to develop a custom method to separately assess the extent of longitudinal, radial and anteroposterior displacement of the RV walls and to quantify their relative contribution to global RV ejection fraction using 3D data sets obtained by echocardiography. Accordingly, we decomposed the movement of the exported RV beutel wall in a vertex-based manner. The volumes of the beutels accounting for RV wall motion in only one direction (either longitudinal, radial, or anteroposterior) were calculated at each time frame using the signed tetrahedron method. The relative contribution of RV wall motion along the three directions to global RV ejection fraction was then calculated either as the ratio of the given direction's ejection fraction to global ejection fraction or as the frame-by-frame RV volume change (∆V/∆t) along the three motion directions. The ReVISION (Right VentrIcular Separate wall motIon quantificatiON) method may contribute to a better understanding of the pathophysiology of RV mechanical adaptations to different loading conditions and diseases.
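The signed tetrahedron method referenced above computes the volume enclosed by a closed triangular mesh by summing the signed volumes of tetrahedra formed by each face and the origin. A minimal sketch (not the ReVISION code; verified here on a unit cube rather than an RV surface):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed triangular mesh via the signed-tetrahedron
    method: each face (a, b, c) and the origin form a tetrahedron of signed
    volume a.(b x c)/6; the signed volumes sum to the enclosed volume for a
    consistently oriented surface."""
    v = np.asarray(vertices, dtype=float)
    a, b, c = v[faces[:, 0]], v[faces[:, 1]], v[faces[:, 2]]
    signed = np.einsum('ij,ij->i', a, np.cross(b, c)) / 6.0
    return abs(signed.sum())

# Unit cube: 8 vertices (index = 4x + 2y + z), 12 triangles -> volume 1
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
faces = np.array([
    [0, 1, 3], [0, 3, 2],   # x = 0 face
    [4, 7, 5], [4, 6, 7],   # x = 1 face
    [0, 4, 5], [0, 5, 1],   # y = 0 face
    [2, 3, 7], [2, 7, 6],   # y = 1 face
    [0, 2, 6], [0, 6, 4],   # z = 0 face
    [1, 5, 7], [1, 7, 3],   # z = 1 face
])
vol = mesh_volume(verts, faces)
```

Evaluating such a volume at every time frame for a beutel whose vertices move in only one direction gives that direction's ejection fraction, as the record describes.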

  3. A Novel Ship-Tracking Method for GF-4 Satellite Sequential Images.

    PubMed

    Yao, Libo; Liu, Yong; He, You

    2018-06-22

    The geostationary remote sensing satellite has the capability of wide scanning, persistent observation and operational response, and has tremendous potential for maritime target surveillance. The GF-4 satellite is the first geostationary orbit (GEO) optical remote sensing satellite with medium resolution in China. In this paper, a novel ship-tracking method for GF-4 satellite sequential imagery is proposed. The algorithm has three stages. First, a local visual saliency map based on the local peak signal-to-noise ratio (PSNR) is used to detect ships in a single frame of GF-4 satellite sequential images. Second, accurate positioning of each potential target is achieved by a dynamic correction using the rational polynomial coefficients (RPCs) and automatic identification system (AIS) data of ships. Finally, an improved multiple hypothesis tracking (MHT) algorithm with amplitude information is used to track ships by further removing false targets, and to estimate the ships’ motion parameters. The algorithm has been tested using GF-4 sequential images and AIS data. The results of the experiment demonstrate that the algorithm achieves good tracking performance on GF-4 satellite sequential images and estimates the motion information of ships accurately.
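A local-PSNR-style saliency map for small-target detection is commonly built by comparing each pixel's local peak against the local background statistics, in dB. The sketch below is one illustrative form of such a map, not the paper's exact definition; the window size and the choice of statistics are assumptions:

```python
import numpy as np

def local_psnr_map(img, win=7):
    """Per-pixel local contrast in dB: ratio of the squared excursion of the
    local peak above the local mean to the local variance. Bright, compact
    targets (ships) on sea clutter stand out as high values."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    r = win // 2
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            peak, mean, var = patch.max(), patch.mean(), patch.var()
            out[i, j] = 10.0 * np.log10((peak - mean) ** 2 / (var + 1e-12) + 1e-12)
    return out

# Implanted point target on Gaussian clutter is salient against the background
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (48, 48))
img[24, 24] += 20.0
sal = local_psnr_map(img)
```

Thresholding `sal` would yield the candidate detections that the later RPC/AIS correction and MHT stages refine.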

  4. A computational framework for simultaneous estimation of muscle and joint contact forces and body motion using optimization and surrogate modeling.

    PubMed

    Eskinazi, Ilan; Fregly, Benjamin J

    2018-04-01

    Concurrent estimation of muscle activations, joint contact forces, and joint kinematics by means of gradient-based optimization of musculoskeletal models is hindered by computationally expensive and non-smooth joint contact and muscle wrapping algorithms. We present a framework that simultaneously speeds up computation and removes sources of non-smoothness from muscle force optimizations using a combination of parallelization and surrogate modeling, with special emphasis on a novel method for modeling joint contact as a surrogate model of a static analysis. The approach allows one to efficiently introduce elastic joint contact models within static and dynamic optimizations of human motion. We demonstrate the approach by performing two optimizations, one static and one dynamic, using a pelvis-leg musculoskeletal model undergoing a gait cycle. We observed convergence on the order of seconds for a static optimization time frame and on the order of minutes for an entire dynamic optimization. The presented framework may facilitate model-based efforts to predict how planned surgical or rehabilitation interventions will affect post-treatment joint and muscle function. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.

  5. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    PubMed

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
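The bulk-tissue-motion correction described above, i.e. choosing among axial and lateral pixel shifts the squared-difference image whose pixel sum is minimal, can be sketched as below (an illustrative CPU version, not the GPU implementation; array sizes are assumptions):

```python
import numpy as np

def btm_corrected_angiogram(frame1, frame2, max_shift=2):
    """Squared difference of two sequential intensity frames with bulk-tissue
    motion (BTM) correction: try all axial/lateral integer shifts within
    +/- max_shift ((2*max_shift + 1)**2 = 25 candidates for max_shift = 2,
    matching the 25 images in the record) and keep the difference image with
    the minimum sum of pixel values."""
    best, best_sum = None, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame2, dy, axis=0), dx, axis=1)
            diff = (frame1 - shifted) ** 2
            s = diff.sum()
            if s < best_sum:
                best, best_sum = diff, s
    return best

# A static speckle pattern translated by (1, -1) pixels between frames is
# fully compensated, so the corrected angiogram is zero (no flow signal).
rng = np.random.default_rng(1)
f1 = rng.random((32, 32))
f2 = np.roll(np.roll(f1, 1, axis=0), -1, axis=1)
angio = btm_corrected_angiogram(f1, f2)
```

On the GPU each of the 25 candidate difference images can be computed in parallel, which is what lets the reported processing rate exceed the line-scan camera rate.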

  6. Motion-seeded object-based attention for dynamic visual imagery

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Kim, Kyungnam

    2017-05-01

    This paper describes a novel system that finds and segments "objects of interest" in dynamic imagery (video) by (1) processing each frame with an advanced motion algorithm that pulls out regions exhibiting anomalous motion, and (2) extracting the boundary of each object of interest using a biologically-inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out in a very short time, and can be used as a front-end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, which represents a significant improvement over detection using a baseline attention algorithm.

  7. Calculation of precision satellite orbits with nonsingular elements /VOP formulation/

    NASA Technical Reports Server (NTRS)

    Velez, C. E.; Cefola, P. J.; Long, A. C.; Nimitz, K. S.

    1974-01-01

    Review of some results obtained in an effort to develop efficient, high-precision trajectory computation processes for artificial satellites by optimum selection of the form of the equations of motion of the satellite and the numerical integration method. In particular, consideration is given to a Gaussian variation-of-parameter (VOP) formulation, expressed in terms of equinoctial orbital elements, which partially decouples the motion of the orbital frame from motion within the orbital frame. The performance of the resulting orbit generators is then compared with that of the popular classical Cowell/Gauss-Jackson formulation/integrator pair for two distinctly different orbit types, namely, the orbit of the ATS satellite at near-geosynchronous conditions and the near-circular orbit of the GEOS-C satellite at 1000 km.

  8. Reconstructing plate motion paths where plate tectonics doesn't strictly apply

    NASA Astrophysics Data System (ADS)

    Handy, M. R.; Ustaszewski, K.

    2012-04-01

    The classical approach to reconstructing plate motion invokes the assumption that plates are rigid and therefore that their motions can be described as Eulerian rotations on a spherical Earth. This essentially two-dimensional, map view of plate motion is generally valid for large-scale systems, but is not practicable for small-scale tectonic systems in which plates, or significant parts thereof, deform on time scales approaching the duration of their motion. Such "unplate-like" (non-rigid) behaviour is common in systems with a weak lithosphere, for example, in Mediterranean-type settings where (micro-)plates undergo distributed deformation several tens to hundreds of km away from their boundaries. The motion vector of such anomalous plates can be quantified by combining and comparing information from two independent sources: (1) Balanced cross sections that are arrayed across deformed zones (orogens, basins) and provide estimates of crustal shortening and/or extension. Plate motion is then derived by retrodeforming the balanced sections in a stepwise fashion from external to internal parts of mountain belts, then applying these estimates as successive retrotranslations of points on stable parts of the upper plate with respect to a chosen reference frame on the lower plate. This approach is contingent on using structural markers with tight age constraints, for example, depth-sensitive metamorphic mineral parageneses and syn-orogenic sediments with known paleogeographic provenance; (2) Geophysical images of 3D subcrustal structure, especially of the MOHO and the lithospheric mantle in the vicinity of the deformed zones. In the latter case, travel-time seismic tomography of velocity anomalies can be used to identify subducted lithospheric slabs that extend downwards from the zones of crustal shortening to the mantle transitional zone and beyond. 
Synthesizing information from these two sources yields plate motion paths whose validity can be tested by the degree of consistency between crustal shortening estimates and the amount of subducted lithosphere imaged at depth. This approach has several limitations: (1) shortening values in mountain belts are usually minimum estimates due to the erosion of deformational fronts and out-of-sequence thrusting that obscure or even eliminate zones of shortening. Also, subduction may occur without accretion of material to the upper plate; (2) sedimentary ages are often loosely bracketed and only high-retentivity isotopic systems yield ages near the age of mineral formation in metamorphic rocks; (3) images of seismic velocity anomalies are highly model-dependent and the anomalies themselves may have been partly lost to thermal erosion, especially in areas that have experienced heating, for example, beneath extensional basins. Thus, only a few orogens studied so far (e.g., the circum-Mediterranean belts) have the density of geological and geophysical data needed to constrain the translation of a sufficient number of reference points to obtain a reliable plate-motion vector. Nevertheless, this approach complements established methods for determining plate motion (plate-circuits using paleomagnetic information, ocean-floor magnetic lineaments) and provides a viable alternative where such paleomagnetic information is sparse or lacking.

  9. A Unified Global Reference Frame of Vertical Crustal Movements by Satellite Laser Ranging.

    PubMed

    Zhu, Xinhui; Wang, Ren; Sun, Fuping; Wang, Jinling

    2016-02-08

    Crustal movement is one of the main factors influencing the change of the Earth system, especially in its vertical direction, which affects people's daily life through the frequent occurrence of earthquakes, geological disasters, and so on. In order to better study and apply the vertical crustal movement, as well as its changes, the foundation and prerequisite are to devise and establish its reference frame; in particular, a unified global reference frame is required. Since SLR (satellite laser ranging) is one of the most accurate space techniques for monitoring geocentric motion and can directly measure the ground station's geocentric coordinates and velocities relative to the centre of the Earth's mass, we proposed to take the vertical velocity of the SLR technique in the ITRF2008 framework as the reference frame of vertical crustal motion, which we defined as the SLR vertical reference frame (SVRF). The systematic bias between other velocity fields and the SVRF was resolved by using the GPS (Global Positioning System) and VLBI (very long baseline interferometry) velocity observations, and the unity of other velocity fields and the SVRF was realized as well. The results show that it is feasible and suitable to take the SVRF as a reference frame, which has both geophysical meaning and geodetic observations, so we recommend taking the SLR vertical velocity under ITRF2008 as the global reference frame of vertical crustal movement.

  10. A Unified Global Reference Frame of Vertical Crustal Movements by Satellite Laser Ranging

    PubMed Central

    Zhu, Xinhui; Wang, Ren; Sun, Fuping; Wang, Jinling

    2016-01-01

    Crustal movement is one of the main factors influencing the change of the Earth system, especially in its vertical direction, which affects people’s daily life through the frequent occurrence of earthquakes, geological disasters, and so on. In order to better study and apply the vertical crustal movement, as well as its changes, the foundation and prerequisite are to devise and establish its reference frame; in particular, a unified global reference frame is required. Since SLR (satellite laser ranging) is one of the most accurate space techniques for monitoring geocentric motion and can directly measure the ground station’s geocentric coordinates and velocities relative to the centre of the Earth’s mass, we proposed to take the vertical velocity of the SLR technique in the ITRF2008 framework as the reference frame of vertical crustal motion, which we defined as the SLR vertical reference frame (SVRF). The systematic bias between other velocity fields and the SVRF was resolved by using the GPS (Global Positioning System) and VLBI (very long baseline interferometry) velocity observations, and the unity of other velocity fields and the SVRF was realized as well. The results show that it is feasible and suitable to take the SVRF as a reference frame, which has both geophysical meaning and geodetic observations, so we recommend taking the SLR vertical velocity under ITRF2008 as the global reference frame of vertical crustal movement. PMID:26867197

  11. Joint Video Stitching and Stabilization from Moving Cameras.

    PubMed

    Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef

    2016-09-08

    In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image stitching methods to shaky videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a space-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, producing features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted for the handling of scene parallax. Experimental results demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" has been developed for Adobe After Effects CC2015 to show the processed videos.

  12. Homage to Bob Brodkey at 85: ejections, sweeps and the genesis and extensions of quadrant analysis

    NASA Astrophysics Data System (ADS)

    Wallace, James

    2013-11-01

    Almost 50 years ago, Bob Brodkey and his student Corino conceived and carried out a visualization experiment for the very near-wall region of a turbulent pipe flow (JFM 37) that, together with the turbulent boundary layer visualization of Kline et al. (JFM 30), excited the turbulence community. Using a high-speed movie camera mounted on a lathe bed that recorded magnified images in a moving frame of reference, they observed the motions of small particles in the sub- and buffer-layers. Surprisingly, these motions were not nearly so locally random as was the general view of turbulence at the time. Rather, connected regions of the near-wall flow decelerated and then erupted away from the wall in what they called ``ejections.'' These decelerated motions were followed by larger-scale connected motions toward the wall from above that they called ``sweeps.'' Brodkey and Corino estimated that ejections accounted for 70% of the Reynolds shear stress at Re_d = 20,000 while only occurring about 18% of the time. Wallace et al. (JFM 54) attempted to quantify these visual observations by conceiving of and carrying out a quadrant analysis in a turbulent oil channel flow. This paper will trace this history and describe the expanding use of these ideas in turbulence research today.
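Quadrant analysis splits the instantaneous Reynolds shear stress u'v' by the signs of the velocity fluctuations: Q2 (u' < 0, v' > 0) events are ejections and Q4 (u' > 0, v' < 0) events are sweeps. A minimal sketch on synthetic fluctuation samples (illustrative values, not the oil-channel measurements):

```python
import numpy as np

def quadrant_analysis(u, v):
    """Classify fluctuation pairs (u', v') into quadrants and return, per
    quadrant, (fractional contribution to <u'v'>, occurrence fraction).
    Q2 = ejections, Q4 = sweeps."""
    up, vp = u - u.mean(), v - v.mean()
    uv = up * vp
    total = uv.sum()
    quads = {
        'Q1': (up > 0) & (vp > 0),   # outward interactions
        'Q2': (up < 0) & (vp > 0),   # ejections
        'Q3': (up < 0) & (vp < 0),   # inward interactions
        'Q4': (up > 0) & (vp < 0),   # sweeps
    }
    return {q: (uv[m].sum() / total, m.mean()) for q, m in quads.items()}

# Synthetic record in which ejections and sweeps carry all of <u'v'>
res = quadrant_analysis(np.array([-1.0, 1.0, -2.0, 2.0]),
                        np.array([1.0, -1.0, 2.0, -2.0]))
```

The Brodkey-Corino observation quoted above corresponds, in this language, to Q2 contributing about 70% of <u'v'> while occupying only about 18% of the record.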

  13. Children Learning to Explain Daily Celestial Motion: Understanding Astronomy across Moving Frames of Reference

    ERIC Educational Resources Information Center

    Plummer, Julia D.; Wasko, Kyle D.; Slagle, Cynthia

    2011-01-01

    This study investigated elementary students' explanations for the daily patterns of apparent motion of the Sun, Moon, and stars. Third-grade students were chosen for this study because this age level is at the lower end of when many US standards documents suggest students should learn to use the Earth's rotation to explain daily celestial motion.…

  14. Hopf Bifurcation in Viscous, Low Speed Flows About an Airfoil with Structural Coupling

    DTIC Science & Technology

    1993-03-01

    [Garbled extract of the report's front matter: contents entries 2.1 Equations of Motion, 2.2 Coordinate Transformation, 2.3 Aerodynamic ..., and nomenclature including f (apparent body forces applied in the noninertial a-frame) and explicit/implicit fourth-order numerical damping terms.] The equations describing the resulting airfoil motion are integrated in time using a fourth-order Runge-Kutta algorithm.

  15. Gaia Data Release 1. Astrometry: one billion positions, two million proper motions and parallaxes

    NASA Astrophysics Data System (ADS)

    Lindegren, L.; Lammers, U.; Bastian, U.; Hernández, J.; Klioner, S.; Hobbs, D.; Bombrun, A.; Michalik, D.; Ramos-Lerate, M.; Butkevich, A.; Comoretto, G.; Joliet, E.; Holl, B.; Hutton, A.; Parsons, P.; Steidelmüller, H.; Abbas, U.; Altmann, M.; Andrei, A.; Anton, S.; Bach, N.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Biermann, M.; Bouquillon, S.; Bourda, G.; Brüsemeister, T.; Bucciarelli, B.; Busonero, D.; Carlucci, T.; Castañeda, J.; Charlot, P.; Clotet, M.; Crosta, M.; Davidson, M.; de Felice, F.; Drimmel, R.; Fabricius, C.; Fienga, A.; Figueras, F.; Fraile, E.; Gai, M.; Garralda, N.; Geyer, R.; González-Vidal, J. J.; Guerra, R.; Hambly, N. C.; Hauser, M.; Jordan, S.; Lattanzi, M. G.; Lenhardt, H.; Liao, S.; Löffler, W.; McMillan, P. J.; Mignard, F.; Mora, A.; Morbidelli, R.; Portell, J.; Riva, A.; Sarasso, M.; Serraller, I.; Siddiqui, H.; Smart, R.; Spagna, A.; Stampa, U.; Steele, I.; Taris, F.; Torra, J.; van Reeven, W.; Vecchiato, A.; Zschocke, S.; de Bruijne, J.; Gracia, G.; Raison, F.; Lister, T.; Marchant, J.; Messineo, R.; Soffel, M.; Osorio, J.; de Torres, A.; O'Mullane, W.

    2016-11-01

    Context. Gaia Data Release 1 (DR1) contains astrometric results for more than 1 billion stars brighter than magnitude 20.7 based on observations collected by the Gaia satellite during the first 14 months of its operational phase. Aims: We give a brief overview of the astrometric content of the data release and of the model assumptions, data processing, and validation of the results. Methods: For stars in common with the Hipparcos and Tycho-2 catalogues, complete astrometric single-star solutions are obtained by incorporating positional information from the earlier catalogues. For other stars only their positions are obtained, essentially by neglecting their proper motions and parallaxes. The results are validated by an analysis of the residuals, through special validation runs, and by comparison with external data. Results: For about two million of the brighter stars (down to magnitude 11.5) we obtain positions, parallaxes, and proper motions to Hipparcos-type precision or better. For these stars, systematic errors depending for example on position and colour are at a level of ± 0.3 milliarcsecond (mas). For the remaining stars we obtain positions at epoch J2015.0 accurate to 10 mas. Positions and proper motions are given in a reference frame that is aligned with the International Celestial Reference Frame (ICRF) to better than 0.1 mas at epoch J2015.0, and non-rotating with respect to ICRF to within 0.03 mas yr-1. The Hipparcos reference frame is found to rotate with respect to the Gaia DR1 frame at a rate of 0.24 mas yr-1. Conclusions: Based on less than a quarter of the nominal mission length and on very provisional and incomplete calibrations, the quality and completeness of the astrometric data in Gaia DR1 are far from what is expected for the final mission products. The present results nevertheless represent a huge improvement in the available fundamental stellar data and practical definition of the optical reference frame.

  16. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

    High-speed photography, a rapid sequence of photographs that allows an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography, 16, 35, and 70 mm film and framing rates between 64 and 12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  17. Motion-compensated speckle tracking via particle filtering

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu

    2015-07-01

    Recently, an improved motion compensation method based on the sum of absolute differences (SAD) has been applied to the frame persistence used in conventional ultrasonic imaging because of its high accuracy and relative simplicity of implementation. However, high time consumption remains a significant drawback of this space-domain method. To develop a faster motion compensation method and to verify whether the conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial density of particles, the least number of iterations, and the optimal transition radius of the second iteration are analyzed from simulation results in order to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking speckle micromotion superposed on global motion throughout the region of interest (ROI). The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, and further improvement is necessary.
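
    As a point of comparison for the SAD baseline discussed above, a minimal exhaustive block-matching sketch in NumPy (function name and search parameters are hypothetical; the paper's particle-filter tracker is meant to replace exactly this kind of traversal search):

```python
import numpy as np

def sad_match(prev_frame, cur_frame, top_left, size, search=4):
    """Find the displacement of a speckle patch between two frames
    by exhaustively minimizing the sum of absolute differences (SAD)."""
    y, x = top_left
    h, w = size
    template = prev_frame[y:y + h, x:x + w].astype(float)
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > cur_frame.shape[0] or xx + w > cur_frame.shape[1]:
                continue  # candidate window falls outside the frame
            cand = cur_frame[yy:yy + h, xx:xx + w].astype(float)
            sad = np.abs(template - cand).sum()
            if sad < best:
                best, best_d = sad, (dy, dx)
    return best_d
```

    The cost grows quadratically with the search radius, which is the time-consumption drawback the particle-filter approach addresses by evaluating only a sparse set of candidate displacements.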

  18. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    PubMed

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
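
    The scale-drift correction from a known camera height can be sketched as follows. This is a simplified reading of the idea, with a hypothetical `correct_scale` helper and an assumed 1.7 m mounting height, not the paper's implementation:

```python
import numpy as np

def correct_scale(translations, est_cam_height, true_cam_height=1.7):
    """Monocular SFM recovers translation only up to scale. If the ground
    plane is estimated in the reconstruction's arbitrary units, the ratio
    of the known physical camera height to the estimated height fixes
    the metric scale of every translation."""
    s = true_cam_height / est_cam_height
    return [s * np.asarray(t, dtype=float) for t in translations]
```

    The paper's contribution is in making `est_cam_height` reliable per frame by fusing multiple ground-plane cues with adaptive covariances; the rescaling step itself is this simple.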

  19. Dissipation function and adaptive gradient reconstruction based smoke detection in video

    NASA Astrophysics Data System (ADS)

    Li, Bin; Zhang, Qiang; Shi, Chunlei

    2017-11-01

    A method for smoke detection in video is proposed. The camera monitoring the scene is assumed to be stationary. In the atmospheric scattering model, the dissipation function is the transmissivity between the background objects in the scene and the camera. The dark channel prior and a fast bilateral filter are used for estimating the dissipation function, which is a function only of the depth of field. Based on the dissipation function, the visual background extractor (ViBe) can detect smoke through its motion characteristics, as well as other moving targets. Since smoke has semi-transparent parts, the background covered by these parts can be recovered adaptively by solving the Poisson equation. The similarity between the recovered parts and the original background parts at the same position is calculated by normalized cross-correlation (NCC), with the original background values selected from the frame nearest to the current frame. The parts with high similarity are considered smoke parts.
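
    A minimal sketch of the dark channel prior that underlies the transmissivity estimate (plain nested loops for clarity; the `patch` size is a hypothetical parameter, and the paper's bilateral-filter refinement is omitted):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel prior: the per-pixel minimum over color channels,
    followed by a local minimum filter over a patch neighborhood.
    Haze-free regions have a dark channel near zero; haze or smoke
    raises it, which is what makes transmissivity estimation possible."""
    min_rgb = img.min(axis=2)               # minimum over R, G, B
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out
```

    A transmissivity estimate then typically follows t = 1 - omega * dark_channel(I / A) for atmospheric light A and a constant omega, though the exact form used in the paper is not reproduced here.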

  20. Shear wave speed and dispersion measurements using crawling wave chirps.

    PubMed

    Hah, Zaegyoo; Partin, Alexander; Parker, Kevin J

    2014-10-01

    This article demonstrates the measurement of shear wave speed and shear speed dispersion of biomaterials using a chirp signal that launches waves over a range of frequencies. A biomaterial is vibrated by two vibration sources that generate shear waves inside the medium, which is scanned by an ultrasound imaging system. Doppler processing of the acquired signal produces an image of the square of vibration amplitude that shows repetitive constructive and destructive interference patterns called "crawling waves." With a chirp vibration signal, successive Doppler frames are generated from different source frequencies. Collected frames generate a distinctive pattern which is used to calculate the shear speed and shear speed dispersion. A special reciprocal chirp is designed such that the equi-phase lines of a motion slice image are straight lines. Detailed analysis is provided to generate a closed-form solution for calculating the shear wave speed and the dispersion. Several phantoms and an ex vivo human liver sample are also scanned, and the estimation results are presented. © The Author(s) 2014.

  1. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods should be examined which utilize data other than the segment data used for LACIE. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  2. Attenuation correction in 4D-PET using a single-phase attenuation map and rigidity-adaptive deformable registration

    PubMed Central

    Kalantari, Faraz; Wang, Jing

    2017-01-01

    Purpose Four-dimensional positron emission tomography (4D-PET) imaging is a potential solution to the respiratory motion effect in the thoracic region. Computed tomography (CT)-based attenuation correction (AC) is an essential step toward quantitative imaging for PET. However, due to the temporal difference between 4D-PET and a single attenuation map from CT, typically available in routine clinical scanning, motion artifacts are observed in the attenuation-corrected PET images, leading to errors in tumor shape and uptake. We introduced a practical method to align single-phase CT with all other 4D-PET phases for AC. Methods A penalized non-rigid Demons registration between individual 4D-PET frames without AC provides the motion vectors to be used for warping the single-phase attenuation map. The non-rigid Demons registration was used to derive deformation vector fields (DVFs) between the PET frame matched with the CT phase and the other 4D-PET images. While attenuated PET images provide useful data for organ borders such as those of the lung and the liver, tumors cannot be distinguished from the background due to loss of contrast. To preserve the tumor shape in different phases, an ROI covering the tumor was excluded from the non-rigid transformation. Instead, the mean DVF of the central region of the tumor was assigned to all voxels in the ROI. This process mimics a rigid transformation of the tumor along with a non-rigid transformation of other organs. A 4D-XCAT phantom with spherical lung tumors, with diameters ranging from 10 to 40 mm, was used to evaluate the algorithm. The performance of the proposed hybrid method for attenuation map estimation was compared to 1) the Demons non-rigid registration only and 2) a single attenuation map, based on quantitative parameters in individual PET frames. Results Motion-related artifacts were significantly reduced in the attenuation-corrected 4D-PET images. 
When a single attenuation map was used for all individual PET frames, the normalized root mean square error (NRMSE) values in the tumor region were 49.3% (STD: 8.3%), 50.5% (STD: 9.3%), 51.8% (STD: 10.8%) and 51.5% (STD: 12.1%) for 10-mm, 20-mm, 30-mm and 40-mm tumors respectively. These errors were reduced to 11.9% (STD: 2.9%), 13.6% (STD: 3.9%), 13.8% (STD: 4.8%), and 16.7% (STD: 9.3%) by our proposed method for deforming the attenuation map. The relative errors in total lesion glycolysis (TLG) values were −0.25% (STD: 2.87%) and 3.19% (STD: 2.35%) for 30-mm and 40-mm tumors respectively with the proposed method. The corresponding values for the Demons method were 25.22% (STD: 14.79%) and 18.42% (STD: 7.06%). Our proposed hybrid method outperforms the Demons method, especially for larger tumors. For tumors smaller than 20 mm, non-rigid transformation could also provide quantitative results. Conclusion Although non-AC 4D-PET frames contain little anatomical information, they are still useful for estimating the DVFs needed to align the attenuation map for accurate AC. The proposed hybrid method can recover the AC-related artifacts and provide quantitative AC-PET images. PMID:27987223
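
    The tumor-handling step can be sketched as below: a hedged reading of the hybrid scheme, in which the deformation vectors inside the tumor ROI are overwritten with the mean vector of the tumor core (`hybrid_dvf`, `roi_mask`, and `core_mask` are hypothetical names):

```python
import numpy as np

def hybrid_dvf(dvf, roi_mask, core_mask):
    """Overwrite the deformation vectors inside the tumor ROI with the
    mean vector of the tumor core, so the tumor moves rigidly while the
    surrounding field remains non-rigid.

    dvf       -- (H, W, 3) deformation vector field
    roi_mask  -- (H, W) boolean mask of the ROI covering the tumor
    core_mask -- (H, W) boolean mask of the tumor's central region
    """
    out = dvf.copy()
    mean_vec = dvf[core_mask].mean(axis=0)  # average DVF over core voxels
    out[roi_mask] = mean_vec                # assign it to every ROI voxel
    return out
```

    This mirrors the described behavior: a rigid transformation of the tumor embedded in a non-rigid transformation of the other organs.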

  3. Tracking the hyoid bone in videofluoroscopic swallowing studies

    NASA Astrophysics Data System (ADS)

    Kellen, Patrick M.; Becker, Darci; Reinhardt, Joseph M.; van Daele, Douglas

    2008-03-01

    Difficulty swallowing, or dysphagia, has become a growing problem. Swallowing complications can lead to malnutrition, dehydration, respiratory infection, and even death. The current gold standard for analyzing and diagnosing dysphagia is the videofluoroscopic barium swallow study. In these studies, a fluoroscope is used to image the patient ingesting barium solutions of different volumes and viscosities. The hyoid bone anchors many key muscles involved in swallowing and plays a key role in the process. Abnormal hyoid bone motion during a swallow can indicate swallowing dysfunction. Currently in clinical settings, hyoid bone motion is assessed qualitatively, which can be subject to intra-rater and inter-rater bias. This paper presents a semi-automatic method for tracking the hyoid bone that makes quantitative analysis feasible. The user defines a template of the hyoid on one frame, and this template is tracked across subsequent frames. The matching phase is optimized by predicting the position of the template based on kinematics. An expert speech pathologist marked the position of the hyoid on each frame of ten studies to serve as the gold standard. Results from performing Bland-Altman analysis at a 95% confidence interval showed a bias of 0.0+/-0.08 pixels in x and -0.08+/-0.09 pixels in y between the manually-defined gold standard and the proposed method. The average Pearson's correlation between the gold standard and the proposed method was 0.987 in x and 0.980 in y. This paper also presents a method for automatically establishing a patient-centric coordinate system for the interpretation of hyoid motion. This coordinate system corrects for upper body patient motion during the study and identifies superior-inferior and anterior-posterior motion components. These tools make the use of quantitative hyoid motion analysis feasible in clinical and research settings.
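
    The kinematic prediction that seeds the template search can be as simple as a constant-velocity extrapolation; a sketch under that assumption (the abstract does not spell out the exact motion model used):

```python
def predict_position(p_prev, p_curr):
    """Constant-velocity prediction of the template's position in the
    next frame, used to center the matching search window."""
    vx = p_curr[0] - p_prev[0]  # per-frame velocity in x
    vy = p_curr[1] - p_prev[1]  # per-frame velocity in y
    return (p_curr[0] + vx, p_curr[1] + vy)
```

    Centering the search window on the predicted position shrinks the search region and reduces the chance of locking onto a nearby bony structure.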

  4. On the Definition of Aberration

    NASA Astrophysics Data System (ADS)

    Xu, Minghui; Wang, Guangli

    2014-12-01

    There was a groundbreaking step in the history of astronomy in 1728 when the effect of aberration was discovered by James Bradley (1693-1762). Recently, the solar acceleration has been determined from VLBI observations, via the variations it causes in the aberrational effect on extragalactic sources, with an uncertainty at the 0.5 mm s^-1 yr^-1 level. As a basic concept in astrometry with a nearly 300-year history, the definition of aberration, however, is still equivocal and discordant in the literature. It has been under continuing debate whether it depends on the relative motion between the observer and the observed source or only on the motion of the observer with respect to the frame of reference. In this paper, we review the debate and the inconsistency in the definition of aberration since the last century, and then discuss its definition in detail, which involves discussions of the planetary aberration, the stellar aberration, the proper motion of an object during the travel time of light from the object to the observer, and the way of selecting the reference frame to express and distinguish the motions of the source and the observer. Aberration is essentially caused by the transformation between coordinate systems, and is consequently quantified by the velocity of the observer with respect to the selected reference frame, independent of the motion of the source. Obviously, this nature is totally different from that of the definition given by the IAU WG NFA (Capitaine, 2007) in 2006, which is stated as, "the apparent angular displacement of the observed position of a celestial object from its geometric position, caused by the finite velocity of light in combination with the motions of the observer and of the observed object."
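
    For orientation, the standard special-relativistic aberration formula (textbook material, not quoted from the paper) depends only on the observer's velocity relative to the chosen frame, which is the paper's central point:

```latex
\cos\theta' = \frac{\cos\theta + \beta}{1 + \beta\cos\theta},
\qquad \beta = \frac{v}{c},
\qquad \Delta\theta = \theta - \theta' \approx \beta\sin\theta
\ \ \text{(first order)}.
```

    For the Earth's orbital speed of about 29.8 km/s, the first-order term reaches roughly 20.5 arcseconds, Bradley's constant of aberration; the velocity of the source appears nowhere in the formula.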

  5. The MPI Emotional Body Expressions Database for Narrative Scenarios

    PubMed Central

    Volkova, Ekaterina; de la Rosa, Stephan; Bülthoff, Heinrich H.; Mohler, Betty

    2014-01-01

    Emotion expression in human-human interaction takes place via various types of information, including body motion. Research on the perceptual-cognitive mechanisms underlying the processing of natural emotional body language can benefit greatly from datasets of natural emotional body expressions that facilitate stimulus manipulation and analysis. The existing databases have so far focused on few emotion categories which display predominantly prototypical, exaggerated emotion expressions. Moreover, many of these databases consist of video recordings which limit the ability to manipulate and analyse the physical properties of these stimuli. We present a new database consisting of a large set (over 1400) of natural emotional body expressions typical of monologues. To achieve close-to-natural emotional body expressions, amateur actors were narrating coherent stories while their body movements were recorded with motion capture technology. The resulting 3-dimensional motion data recorded at a high frame rate (120 frames per second) provides fine-grained information about body movements and allows the manipulation of movement on a body joint basis. For each expression it gives the positions and orientations in space of 23 body joints for every frame. We report the results of physical motion properties analysis and of an emotion categorisation study. The reactions of observers from the emotion categorisation study are included in the database. Moreover, we recorded the intended emotion expression for each motion sequence from the actor to allow for investigations regarding the link between intended and perceived emotions. The motion sequences along with the accompanying information are made available in a searchable MPI Emotional Body Expression Database. We hope that this database will enable researchers to study expression and perception of naturally occurring emotional body expressions in greater depth. PMID:25461382

  6. A Markerless 3D Computerized Motion Capture System Incorporating a Skeleton Model for Monkeys.

    PubMed

    Nakamura, Tomoya; Matsumoto, Jumpei; Nishimaru, Hiroshi; Bretas, Rafael Vieira; Takamura, Yusaku; Hori, Etsuro; Ono, Taketoshi; Nishijo, Hisao

    2016-01-01

    In this study, we propose a novel markerless motion capture system (MCS) for monkeys, in which 3D surface images of monkeys were reconstructed by integrating data from four depth cameras, and a skeleton model of the monkey was fitted onto 3D images of monkeys in each frame of the video. To validate the MCS, first, estimated 3D positions of body parts were compared between the 3D MCS-assisted estimation and manual estimation based on visual inspection when a monkey performed a shuttling behavior in which it had to avoid obstacles in various positions. The mean estimation errors of the positions of body parts (3-14 cm) and of head rotation (35-43°) between the 3D MCS-assisted and manual estimations were comparable to the errors between two different experimenters performing manual estimation. Furthermore, the MCS could identify specific monkey actions, and there were no false positive or false negative action detections compared with manual estimation. Second, to check the reproducibility of MCS-assisted estimation, the same analyses of the above experiments were repeated by a different user. The estimation errors of positions of most body parts between the two experimenters were significantly smaller in the MCS-assisted estimation than in the manual estimation. Third, effects of methamphetamine (MAP) administration on the spontaneous behaviors of four monkeys were analyzed using the MCS. MAP significantly increased head movements, tended to decrease locomotion speed, and had no significant effect on total path length. The results were comparable to previous human clinical data. Furthermore, estimated data following MAP injection (total path length, walking speed, and speed of head rotation) correlated significantly between the two experimenters in the MCS-assisted estimation (r = 0.863 to 0.999). 
The results suggest that the presented MCS in monkeys is useful in investigating neural mechanisms underlying various psychiatric disorders and developing pharmacological interventions.

  7. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficiently long time windows.
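
    The two downsampling schemes compared above can be sketched as follows; here 2x2 block averaging stands in for bilinear interpolation at this exact 2x scale (an assumption for illustration, not the authors' code):

```python
import numpy as np

def downsample(frame, factor=2, mode="zero-order"):
    """Reduce image resolution by an integer factor.
    'zero-order' keeps every factor-th pixel (zero-order hold);
    'bilinear' averages each factor x factor block, which at exactly
    this scale behaves like area/bilinear resampling."""
    if mode == "zero-order":
        return frame[::factor, ::factor]
    h, w = frame.shape
    return frame[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

    Zero-order decimation discards information (and can alias), while averaging acts as a mild low-pass filter; the finding above is that pulse-rate error is largely insensitive to either choice.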

  8. The Acceleration of the Barycenter of Solar System Obtained from VLBI Observations and Its Impact on the ICRS

    NASA Astrophysics Data System (ADS)

    Xu, M. H.

    2016-03-01

    Since 1998 January 1, instead of the traditional stellar reference system, the International Celestial Reference System (ICRS) has been realized by an ensemble of extragalactic radio sources that are located at hundreds of millions of light years away (if we accept their cosmological distances), so that the reference frame realized by extragalactic radio sources is assumed to be space-fixed. The acceleration of the barycenter of solar system (SSB), which is the origin of the ICRS, gives rise to a systematic variation in the directions of the observed radio sources. This phenomenon is called the secular aberration drift. As a result, the extragalactic reference frame fixed to the space provides a reference standard for detecting the secular aberration drift, and the acceleration of the barycenter with respect to the space can be determined from the observations of extragalactic radio sources. In this thesis, we aim to determine the acceleration of the SSB from astrometric and geodetic observations obtained by Very Long Baseline Interferometry (VLBI), which is a technique using the telescopes globally distributed on the Earth to observe a radio source simultaneously, and with the capacity of angular positioning for compact radio sources at the 10-milliarcsecond level. The method of the global solution, which allows the acceleration vector to be estimated as a global parameter in the data analysis, is developed. Through the formal error given by the solution, this method shows directly the VLBI observations' capability to constrain the acceleration of the SSB, and demonstrates the significance level of the result. In the next step, the impact of the acceleration on the ICRS is studied in order to obtain the correction of the celestial reference frame (CRF) orientation. This thesis begins with the basic background and the general framework of this work. 
A brief review of the realization of the CRF based on the kinematical and the dynamical methods is presented in Chapter 2, along with the definition of the CRF and its relationship with the inertial reference frame. Chapter 3 is divided into two parts. The first part describes various effects that modify the geometric direction of an object, especially the parallax, the aberration, and the proper motion. Then the derivative model and the principle of determination of the acceleration are introduced in the second part. The VLBI data analysis method, including VLBI data reduction (solving the ambiguity, identifying the clock break, and determining the ionospheric effect), theoretical delay model, parameterization, and datum definition, is discussed in detail in Chapter 4. The estimation of the acceleration from more than 30 years of VLBI observations and the results are then described in Chapter 5. The evaluation and robustness checks of our results using different solutions, and the comparison with results from another research group, are performed. The error sources for the estimation of the acceleration, such as the secular parallax caused by the velocity of the barycenter in space, are quantitatively studied by simulation and data analysis in Chapter 6. The two main impacts of the acceleration on the CRF, the apparent proper motion at the μas yr^-1 level and the global rotation of the CRF due to the non-uniform distribution of radio sources on the sky, are discussed in Chapter 7. The definition and the realization of the epoch CRF are presented as well. The future work concerning the explanation of the estimated acceleration and potential research on several main problems in modern astrometry are discussed in the last chapter.
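
    For orientation, the standard secular-aberration-drift model (stated here for context, not quoted from the thesis) predicts a dipolar apparent proper-motion field for a barycentric acceleration vector a and unit source direction u:

```latex
\vec{\mu} = \frac{1}{c}\left[\vec{a} - (\vec{a}\cdot\vec{u})\,\vec{u}\right],
```

    i.e. only the component of the acceleration transverse to the line of sight is observable. For a galactocentric acceleration of order 2 x 10^-10 m s^-2, the amplitude a/c corresponds to roughly 4-5 μas yr^-1, consistent with the magnitude discussed above.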

  9. Pose estimation and tracking of non-cooperative rocket bodies using Time-of-Flight cameras

    NASA Astrophysics Data System (ADS)

    Gómez Martínez, Harvey; Giorgi, Gabriele; Eissfeller, Bernd

    2017-10-01

    This paper presents a methodology for estimating the position and orientation of a rocket body in orbit - the target - undergoing a roto-translational motion, with respect to a chaser spacecraft, whose task is to match the target dynamics for a safe rendezvous. During the rendezvous maneuver the chaser employs a Time-of-Flight camera that acquires a point cloud of 3D coordinates mapping the sensed target surface. Once the system identifies the target, it initializes the chaser-to-target relative position and orientation. After initialization, a tracking procedure enables the system to sense the evolution of the target's pose between frames. The proposed algorithm is evaluated using simulated point clouds, generated with a CAD model of the Cosmos-3M upper stage and the PMD CamCube 3.0 camera specifications.
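
    Frame-to-frame pose tracking of a sensed point cloud typically rests on a least-squares rigid alignment. A sketch of the standard Kabsch/SVD solution under known correspondences follows (the paper's full pipeline, including correspondence search on the Time-of-Flight point cloud, is not reproduced):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ P @ R.T + t,
    for two (N, 3) point sets with known correspondences."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

    In an ICP-style tracker, this closed-form step alternates with nearest-neighbor correspondence updates until the pose converges between frames.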

  10. Siamese convolutional networks for tracking the spine motion

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Sui, Xiubao; Sun, Yicheng; Liu, Chengwei; Hu, Yong

    2017-09-01

    Deep learning models have demonstrated great success in various computer vision tasks such as image classification and object tracking. However, tracking the lumbar spine by digitalized video fluoroscopic imaging (DVFI), which can quantitatively analyze the motion mode of the spine to diagnose lumbar instability, has not yet been well developed due to the lack of a steady and robust tracking method. In this paper, we propose a novel visual tracking algorithm for lumbar vertebra motion based on a Siamese convolutional neural network (CNN) model. We train a fully convolutional neural network offline to learn generic image features. The network is trained to learn a similarity function that compares the labeled target in the first frame with the candidate patches in the current frame. The similarity function returns a high score if the two images depict the same object. Once learned, the similarity function is used to track a previously unseen object without any online adaptation. In the current frame, our tracker operates by evaluating candidate rotated patches sampled around the previous frame's target position and presents a rotated bounding box to locate the predicted target precisely. Results indicate that the proposed tracking method can detect the lumbar vertebra steadily and robustly. Especially for images with low contrast and cluttered background, the presented tracker can still achieve good tracking performance. Further, the proposed algorithm operates at high speed for real-time tracking.
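
    The learned similarity function returns a high score when the exemplar and a candidate patch depict the same object. As a stand-in that illustrates the interface (a hypothetical `ncc_score`, not the trained network), normalized cross-correlation scores a patch against the exemplar:

```python
import numpy as np

def ncc_score(template, patch):
    """Normalized cross-correlation in [-1, 1]: 1 for identical
    appearance, near 0 for unrelated patches. The Siamese network
    plays this role with learned, far more robust features."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
    return float((t * p).sum() / denom) if denom else 0.0
```

    At tracking time, every candidate (rotated) patch is scored against the exemplar and the highest-scoring one gives the new bounding box.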

  11. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
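
    The accuracy (RMSE) and precision (SD of the errors) figures quoted above can be computed as follows; a generic sketch of the metrics, not the APAS software's internals:

```python
import numpy as np

def accuracy_precision(measured, reference):
    """Accuracy as root-mean-square error against a reference
    measurement, precision as the standard deviation of the errors."""
    err = np.asarray(measured, float) - np.asarray(reference, float)
    rmse = float(np.sqrt((err ** 2).mean()))
    sd = float(err.std())
    return rmse, sd
```

    RMSE captures both bias and spread, while the SD isolates repeatability, which is why the paper reports the two numbers separately.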

  12. Waterfall notch-filtering for restoration of acoustic backscatter records from Admiralty Bay, Antarctica

    NASA Astrophysics Data System (ADS)

    Fonseca, Luciano; Hung, Edson Mintsu; Neto, Arthur Ayres; Magrani, Fábio José Guedes

    2018-06-01

    A series of multibeam sonar surveys were conducted from 2009 to 2013 around Admiralty Bay, Shetland Islands, Antarctica. These surveys provided a detailed bathymetric model that helped understand and characterize the bottom geology of this remote area. Unfortunately, the acoustic backscatter records registered during these bathymetric surveys were heavily contaminated with noise and motion artifacts. These artifacts persisted in the backscatter records despite the fact that the proper acquisition geometry and the necessary offsets and delays were applied during the survey and in post-processing. These noisy backscatter records were very difficult to interpret and to correlate with gravity-core samples acquired in the same area. In order to address this issue, a directional notch-filter was applied to the backscatter waterfall in the along-track direction. The proposed filter provided better estimates for the backscatter strength of each sample by considerably reducing residual motion artifacts. The restoration of individual samples was possible since the waterfall frame of reference preserves the acquisition geometry. Then, a remote seafloor characterization procedure based on an acoustic model inversion was applied to the restored backscatter samples, generating remote estimates of acoustic impedance. These remote estimates were compared to Multi Sensor Core Logger measurements of acoustic impedance obtained from gravity core samples. The remote estimates and the Core Logger measurements of acoustic impedance were comparable when the shallow seafloor was homogeneous. The proposed waterfall notch-filtering approach can be applied to any sonar record, provided that we know the system ping-rate and sampling frequency.
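
    A directional notch filter of this kind can be sketched as a narrow band rejection applied column-wise (along-track) in the frequency domain; the function and its parameters below are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def alongtrack_notch(waterfall, notch_freq, width=2):
    """Zero a narrow band of along-track spatial frequencies in a
    backscatter waterfall (rows = pings, columns = across-track
    samples), suppressing a periodic motion artifact while leaving
    the rest of the spectrum untouched."""
    F = np.fft.fft(waterfall, axis=0)      # FFT along-track only
    n = waterfall.shape[0]
    freqs = np.fft.fftfreq(n)
    mask = np.abs(np.abs(freqs) - notch_freq) < width / n
    F[mask, :] = 0.0                       # reject the narrow band
    return np.real(np.fft.ifft(F, axis=0))
```

    Because the waterfall preserves the acquisition geometry, each restored sample can then be fed to the model-inversion step described above.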

  13. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    NASA Astrophysics Data System (ADS)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of CPU resources, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one from an infrared (IR) camera and one from an electro-optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
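
    The homography-estimation step between consecutive frames can be sketched with the standard direct linear transform (DLT); the SIFT matching, RANSAC outlier rejection, and GPU acceleration described above are omitted:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform estimate of the 3x3 homography H mapping
    src points to dst points, from >= 4 non-degenerate correspondences.
    H is the null vector of the stacked constraint matrix, recovered
    as the last right singular vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the arbitrary scale
```

    In the mosaicking pipeline, H is then used to warp each new frame into the mosaic's reference plane before blending.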

  14. Restoration and analysis of amateur movies from the Kennedy assassination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breedlove, J.R.; Cannon, T.M.; Janney, D.H.

    1980-01-01

    Much of the evidence concerning the assassination of President Kennedy comes from amateur movies of the presidential motorcade. Two of the most revealing movies are those taken by the photographers Zapruder and Nix. Approximately 180 frames of the Zapruder film clearly show the general relation of persons in the presidential limousine. Many of the frames of interest were blurred by focus problems or by linear motion. The method of cepstral analysis was used to quantitatively measure the blur, followed by maximum a posteriori (MAP) restoration. Descriptions of these methods, complete with before-and-after examples from selected frames are given. The frames were then available for studies of facial expressions, hand motions, etc. Numerous allegations charge that multiple gunmen played a role in an assassination plot. Multispectral analyses, adapted from studies of satellite imagery, show no evidence of an alleged rifle in the Zapruder film. Lastly, frame-averaging is used to reduce the noise in the Nix movie prior to MAP restoration. The restoration of the reduced-noise average frame more clearly shows that at least one of the alleged gunmen is only the light-and-shadow pattern beneath the trees.
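    The cepstral step lends itself to a compact sketch. Below is a hypothetical 1-D NumPy illustration (the film analysis operates on 2-D frames): a linear motion blur of length L acts as a box filter whose spectral notches produce a sharp negative spike at quefrency L in the power cepstrum, so the blur length can be read off directly.

```python
import numpy as np

N, L = 256, 10
h = np.zeros(N)
h[:L] = 1.0 / L                          # length-L linear motion-blur kernel
log_mag = np.log(np.abs(np.fft.fft(h)) + 1e-12)
cepstrum = np.fft.ifft(log_mag).real     # real power-cepstrum of the kernel
lag = 1 + np.argmin(cepstrum[1:N // 2])  # deepest negative spike (skip lag 0)
print(lag)                               # recovers the blur length L = 10
```

Once the blur length (and, in 2-D, its direction) is known, it parameterizes the point-spread function that a MAP restoration would invert.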

  15. Einstein's Mirror

    ERIC Educational Resources Information Center

    Gjurchinovski, Aleksandar; Skeparovski, Aleksandar

    2008-01-01

    Reflection of light from a plane mirror in uniform rectilinear motion is a century-old problem, intimately related to the foundations of special relativity. The problem was first investigated by Einstein in his famous 1905 paper by using the Lorentz transformations to switch from the mirror's rest frame to the frame where the mirror moves at a…

  16. Bandwidth characteristics of multimedia data traffic on a local area network

    NASA Technical Reports Server (NTRS)

    Chuang, Shery L.; Doubek, Sharon; Haines, Richard F.

    1993-01-01

    Limited spacecraft communication links call for users to investigate the potential use of video compression and multimedia technologies to optimize bandwidth allocations. The objective was to determine the transmission characteristics of multimedia data - motion video, text or bitmap graphics, and files transmitted independently and simultaneously over an Ethernet local area network. Commercial desktop video teleconferencing hardware and software and Intel's proprietary Digital Video Interactive (DVI) video compression algorithm were used, and typical task scenarios were selected. The transmission time, packet size, number of packets, and network utilization of the data were recorded. Each data type - compressed motion video, text and/or bitmapped graphics, and a compressed image file - was first transmitted independently and its characteristics recorded. The results showed that an average bandwidth of 7.4 kilobits per second (kbps) was used to transmit graphics; an average bandwidth of 86.8 kbps was used to transmit an 18.9-kilobyte (kB) image file; a bandwidth of 728.9 kbps was used to transmit compressed motion video at 15 frames per second (fps); and a bandwidth of 75.9 kbps was used to transmit compressed motion video at 1.5 fps. Average packet sizes were 933 bytes for graphics, 498.5 bytes for the image file, 345.8 bytes for motion video at 15 fps, and 341.9 bytes for motion video at 1.5 fps. Simultaneous transmission of multimedia data types was also characterized. The multimedia packets used transmission bandwidths of 341.4 kbps and 105.8 kbps. Bandwidth utilization varied according to the frame rate (frames per second) setting for the transmission of motion video. Packet size did not vary significantly between the data types. When these characteristics are applied to Space Station Freedom (SSF), the packet sizes fall within the maximum specified by the Consultative Committee for Space Data Systems (CCSDS).
The uplink of imagery to SSF may be performed at minimal frame rates and/or within seconds of delay, depending on the user's allocated bandwidth. Further research to identify the acceptable delay interval and its impact on human performance is required. Additional studies in network performance using various video compression algorithms and integrated multimedia techniques are needed to determine the optimal design approach for utilizing SSF's data communications system.

  17. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version, system controls lateral motion on camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.

  18. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display.

    PubMed

    Wang, Qingcui; Bao, Ming; Chen, Lihan

    2014-01-01

    Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important cues used to fuse or segregate sound streams. However, the perceptual grouping was partially driven by the cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) could also be applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To find the answer to that question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound consisting of two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the differentiations of frequencies between two auditory frames, in addition to the spatiotemporal cues as in Experiment 1. Listeners were required to make two alternative forced choices (2AFC) to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by the perceptual decisions of the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout plays a lesser role in perceptual organization. These results could be accounted for by the 'peripheral channeling' theory.

  19. Thon rings from amorphous ice and implications of beam-induced Brownian motion in single particle electron cryo-microscopy.

    PubMed

    McMullan, G; Vinothkumar, K R; Henderson, R

    2015-11-01

    We have recorded dose-fractionated electron cryo-microscope images of thin films of pure flash-frozen amorphous ice and pre-irradiated amorphous carbon on a Falcon II direct electron detector using 300 keV electrons. We observe Thon rings [1] in both the power spectrum of the summed frames and the sum of power spectra from the individual frames. The Thon rings from amorphous carbon images are always more visible in the power spectrum of the summed frames whereas those of amorphous ice are more visible in the sum of power spectra from the individual frames. This difference indicates that while pre-irradiated carbon behaves like a solid during the exposure, amorphous ice behaves like a fluid with the individual water molecules undergoing beam-induced motion. Using the measured variation in the power spectra amplitude with number of electrons per image we deduce that water molecules are randomly displaced by a mean squared distance of ∼1.1 Å² for every incident 300 keV e⁻/Å². The induced motion leads to an optimal exposure with 300 keV electrons of 4.0 e⁻/Å² per image with which to observe Thon rings centred around the strong 3.7 Å scattering peak from amorphous ice. The beam-induced movement of the water molecules generates pseudo-Brownian motion of embedded macromolecules. The resulting blurring of single particle images contributes an additional term, on top of that from radiation damage, to the minimum achievable B-factor for macromolecular structure determination. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
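    The contrast between the two ways of combining frames can be reproduced in a toy 1-D NumPy model (an illustrative sketch, not the authors' analysis): a pattern whose phase is stable across frames survives summing the frames before taking the power spectrum, whereas a pattern that shifts randomly between frames decorrelates, and its signal is retained only by summing the per-frame power spectra.

```python
import numpy as np

rng = np.random.default_rng(3)
pattern = rng.standard_normal(256)

# "Solid": the pattern holds still; "fluid": it shifts randomly every frame.
frames_solid = [pattern + 0.1 * rng.standard_normal(256) for _ in range(16)]
frames_fluid = [np.roll(pattern, rng.integers(-20, 21))
                + 0.1 * rng.standard_normal(256) for _ in range(16)]

def power_spectrum(x):
    return np.abs(np.fft.rfft(x)) ** 2

ratios = {}
for label, frames in [("solid", frames_solid), ("fluid", frames_fluid)]:
    ps_of_sum = power_spectrum(np.sum(frames, axis=0))  # spectrum of summed frames
    sum_of_ps = np.sum([power_spectrum(f) for f in frames], axis=0)
    ratios[label] = ps_of_sum[1:].mean() / sum_of_ps[1:].mean()

# Coherent frames: power of the sum is roughly 16x the summed per-frame powers.
# Jiggling frames: the two combinations carry comparable power.
print(ratios)
```

The same logic, applied per resolution shell in 2-D, is what makes the summed-frame versus summed-spectra comparison diagnostic of beam-induced motion.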

  20. Soft tissue distraction using pentagonal frame for long-standing traumatic flexion deformity of interphalangeal joints.

    PubMed

    Nazerani, Shaharm; Keramati, Mohammad Reza; Vahedian, Jalal; Fereshtehnejad, Seyed-Mohammad

    2012-01-01

    Interphalangeal joint contracture is a challenging complication of hand trauma, which reduces the functional capacity of the entire hand. In this study we evaluated the results of soft tissue distraction with no collateral ligament transection or volar plate removal in comparison with the traditional operation of contracture release with partial ligament transection and volar plate removal. In this prospective study, a total of 40 patients in two equal groups (A and B) were studied. Patients suffering from chronic flexion contracture of abrasive traumatic nature were included. Group A were treated by soft tissue distraction using the pentagonal frame technique and in Group B the contracture release was followed by finger splinting. Analyzed data revealed a significant difference between the two groups for range of motion in the proximal interphalangeal joints (P < 0.05), while it was not significant in the distal interphalangeal joints (P > 0.05). There was not a significant difference in the degrees of flexion contracture between groups (P > 0.05). Regression analysis showed that using the pentagonal frame technique significantly increased the mean improvement in range of motion of the proximal interphalangeal joints (P < 0.001), while the higher the preoperative flexion contracture in the proximal interphalangeal joints, the lower the improvement achieved in their range of motion after intervention (P < 0.001). Soft tissue distraction using the pentagonal frame technique with gradual and continuous distraction of the collateral ligament and surrounding joint tissues, combined with skin Z-plasty, significantly improves the range of motion in patients with chronic traumatic flexion deformity of the proximal and/or distal interphalangeal joints.

  1. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimuli properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading visual Ternus frame and by one lagging visual Ternus frame (VAAV) or dominantly inserted by two Ternus visual frames (AVVA). Participants were required to respond which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with similar temporal configurations as in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support that statistical binding of temporal information and stimuli properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  2. Sparse matrix beamforming and image reconstruction for 2-D HIFU monitoring using harmonic motion imaging for focused ultrasound (HMIFU) with in vitro validation.

    PubMed

    Hou, Gary Y; Provost, Jean; Grondin, Julien; Wang, Shutao; Marquet, Fabrice; Bunting, Ethan; Konofagou, Elisa E

    2014-11-01

    Harmonic motion imaging for focused ultrasound (HMIFU) utilizes an amplitude-modulated HIFU beam to induce a localized focal oscillatory motion, which is estimated simultaneously with treatment. The objective of this study is to develop and show the feasibility of a novel fast beamforming algorithm for image reconstruction using GPU-based sparse-matrix operation with real-time feedback. In this study, the algorithm was implemented onto a fully integrated, clinically relevant HMIFU system. A single divergent transmit beam was used while fast beamforming was implemented using a GPU-based delay-and-sum method and a sparse-matrix operation. Axial HMI displacements were then estimated from the RF signals using a 1-D normalized cross-correlation method and streamed to a graphical user interface with frame rates up to 15 Hz, a 100-fold increase compared to conventional CPU-based processing. The real-time feedback rate does not require interrupting the HIFU treatment. Phantom experiments showed reproducible HMI images, and monitoring of 22 in vitro HIFU treatments using the new 2-D system demonstrated reproducible displacement imaging with a consistent average focal displacement decrease of 46.7±14.6% during lesion formation. Complementary focal temperature monitoring also indicated average rates of displacement increase and decrease with focal temperature of 0.84±1.15%/°C and 2.03±0.93%/°C, respectively. These results reinforce the HMIFU capability of estimating and monitoring stiffness-related changes in real time. Current ongoing studies include clinical translation of the presented system for monitoring of HIFU treatment for breast and pancreatic tumor applications.
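    The displacement-estimation step can be sketched compactly. Below is a hypothetical NumPy illustration of estimating an integer sample shift between two 1-D RF lines by maximizing the normalized cross-correlation; the deployed system works on beamformed RF data and adds subsample interpolation for micrometer-scale displacements.

```python
import numpy as np

def ncc_displacement(ref, shifted, max_lag):
    """Integer displacement that maximizes 1-D normalized cross-correlation."""
    best_lag, best_score = 0, -np.inf
    a = ref[max_lag:-max_lag]                  # central window of the reference
    for lag in range(-max_lag, max_lag + 1):
        b = shifted[max_lag + lag : len(shifted) - max_lag + lag]
        score = np.dot(a - a.mean(), b - b.mean()) / (
            len(a) * a.std() * b.std())        # Pearson correlation at this lag
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

rng = np.random.default_rng(0)
rf = rng.standard_normal(500)                  # synthetic RF line
moved = np.roll(rf, 7)                         # the same line, delayed 7 samples
est = ncc_displacement(rf, moved, max_lag=20)
print(est)                                     # → 7
```

Because each lag is scored independently, this search is the part that maps naturally onto GPU threads in a fast implementation.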

  3. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    PubMed Central

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2012-01-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards. PMID:22347787
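    The core of the evaluated BMA is easy to state in code. Here is a hypothetical single-threaded NumPy sketch of full-grid-search block matching with the SAD criterion on a synthetic shifted frame; the paper's contribution is mapping exactly this kind of search onto CUDA threads across multiple GPU cards.

```python
import numpy as np

def full_search_sad(block, frame, top, left, radius):
    """Return the (dy, dx) offset minimizing SAD over a full search grid."""
    h, w = block.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue                       # candidate block out of bounds
            # Cast to int so uint8 subtraction cannot wrap around.
            sad = np.abs(frame[y:y+h, x:x+w].astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(prev, (3, -2), axis=(0, 1))      # simulate motion of (dy, dx) = (3, -2)
block = prev[16:32, 16:32]                     # 16x16 block from the previous frame
dy, dx = full_search_sad(block, cur, 16, 16, radius=8)
print(dy, dx)                                  # → 3 -2
```

Every (dy, dx) candidate is independent of the others, which is what makes the full search embarrassingly parallel on a GPU.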

  4. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    PubMed

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.

  5. Preliminary results of characteristic seismic anisotropy beneath Sunda-Banda subduction-collision zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiyono, Samsul H., E-mail: samsul.wiyono@bmkg.go.id; Indonesia’s Agency for Meteorology Climatology and Geophysics, Jakarta 10610; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id

    2015-04-24

    Determining seismic anisotropy allows us to understand deformation processes that occurred in the past and present. In this study, we performed shear wave splitting analysis to characterize seismic anisotropy beneath the Sunda-Banda subduction-collision zone. About 1,610 XKS waveforms from INATEWS-BMKG networks have been analyzed. The measurements showed that the fast polarization direction is consistent with a trench-perpendicular orientation, although several stations presented different orientations. We also compared the fast polarization direction with absolute plate motion in the no-net-rotation and hotspot frames. Both absolute plate motion frames correlated strongly with the fast polarization direction. This strong correlation can be interpreted as indicating that the dominant anisotropy lies in the asthenosphere.

  6. Recent Progress in Understanding the Origin of the Hawaiian-Emperor Bend

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Morgan, J. P.

    2016-12-01

    Two main explanations have been proposed for the origin of the Hawaiian-Emperor Bend (HEB): (1) that it records a change in motion of the Pacific plate relative to a stationary Hawaiian plume [Morgan, 1971]; (2) that Pacific plate motion has been uniform but the HEB records a change from rapid (>40 mm/yr) southward motion of the Hawaiian plume, while the Emperor chain was formed, to a stationary plume while the Hawaiian chain was formed [Tarduno et al. 2003]. We summarize recent progress on this issue. Recent work invalidates prior studies that inferred significant rates of motion between hotspots since the time of the HEB. Nominal rates of motion are 2-6 mm/yr with a lower bound of zero and upper bounds of 8-13 mm/yr (95% c. l.) [Koivisto et al., 2014]. In this context, Hawaiian plume drift as great as 40 mm/yr before 50 Ma B.P. seems unlikely. Other recent work demonstrates the viability of using the orientation of seismic anisotropy in the upper mantle, combined with relative plate motions, to estimate absolute plate motions independently of hotspot tracks. Wang et al. [this meeting] show that the two reference frames agree with each other within their 95% confidence limits, thus lending credibility to both estimates. To infer motion of the Hawaiian hotspot relative to the mantle from paleomagnetic data one must ignore true polar wander (TPW), but TPW is too big to ignore and is occurring today—it is an important part of explaining the apparent polar wander of the Pacific and other plates. New evidence shows that the Hawaiian hotspot was fixed in latitude during formation of most, if not all, of the Emperor seamount chain [Seidman et al., this meeting], in contradiction to the southward motion found by Tarduno et al. [2003]. Revised timing and age-dating of the HEB (now 50 Ma; Clague [this meeting]) implies that the change in plate motion coincides with a change in Pacific-Farallon motion and other circum-Pacific tectonic events. 
Barkhausen et al. [2013] show that the Pacific-Farallon spreading rate doubled between 50 Ma and 40 Ma, coincident with the acceleration of the Pacific plate from the HEB to the Hawaiian trend and an increasing propagation rate along that trend. We conclude that current evidence still favors W. J. Morgan's original explanation for the HEB: that it records a change in Pacific plate motion relative to the deep mantle.

  7. Ultrahigh-frame CCD imagers

    NASA Astrophysics Data System (ADS)

    Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.

    2004-02-01

    This paper describes the architecture, process technology, and performance of a family of high burst rate CCDs. These imagers employ high speed, low lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10⁶ frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has a 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research and wind-tunnel-based gas dynamics research.

  8. Determination of regional Euler pole parameters for Eastern Austria

    NASA Astrophysics Data System (ADS)

    Umnig, Elke; Weber, Robert; Schartner, Matthias; Brueckl, Ewald

    2017-04-01

    The horizontal motion of lithospheric plates can be described as rotations around a rotation axis through the Earth's center. The two points where this axis intersects the surface of the Earth are called Euler poles. The rotation is expressed by the Euler parameters in terms of an angular velocity together with the latitude and longitude of the Euler pole. Euler parameters were calculated from GPS data for a study area in Eastern Austria. The observation network is located along the Mur-Mürz Valley and the Vienna Basin. This zone is part of the Vienna Transfer Fault, which is the major fault system between the Eastern Alps and the Carpathians. The project ALPAACT (seismological and geodetic monitoring of ALpine-PAnnonian ACtive Tectonics) investigated intra-plate tectonic movements within the Austrian part in order to estimate the seismic hazard. Precise site coordinate time series established from processing 5 years of GPS observations, spanning 2010.0 to 2015.0, are available for the regional network. Station velocities with respect to the global reference frame ITRF2008 have been computed for 23 sites. The common Euler vector was estimated on the basis of a subset of reliable site velocities, for stations directly located within the area of interest. In a further step a geokinematic interpretation shall be carried out; this requires site motions with respect to the Eurasian Plate. To obtain this motion field, different approaches are conceivable. In a simple approach, the mean ITRF2008 velocity of IGS site GRAZ can be adopted as the Eurasian rotational velocity. An improved alternative is to calculate site-specific velocity differences between the Euler rotation and the individual site velocities. In this poster presentation the Euler parameters, the residual motion field as well as first geokinematic interpretation results are presented.
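    The forward model behind such estimates is compact: the velocity predicted at a site is the cross product of the Euler rotation vector with the site's position vector. A hypothetical NumPy sketch follows, assuming a spherical Earth and a made-up pole and site chosen purely for illustration.

```python
import numpy as np

R_EARTH_MM = 6.371e9  # mean Earth radius in mm, so velocities come out in mm/yr

def euler_velocity(pole_lat, pole_lon, omega_deg_myr, lat, lon):
    """Horizontal velocity (east, north components in mm/yr) from an Euler pole."""
    def unit(lat_d, lon_d):
        la, lo = np.radians([lat_d, lon_d])
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])
    w = np.radians(omega_deg_myr) / 1e6 * unit(pole_lat, pole_lon)  # rad/yr
    r = R_EARTH_MM * unit(lat, lon)          # site position, Earth-centred Cartesian
    v = np.cross(w, r)                       # velocity in mm/yr, Cartesian
    la, lo = np.radians([lat, lon])
    east = np.array([-np.sin(lo), np.cos(lo), 0.0])
    north = np.array([-np.sin(la) * np.cos(lo),
                      -np.sin(la) * np.sin(lo),
                      np.cos(la)])
    return np.dot(v, east), np.dot(v, north)

# Illustrative case: a site on the equator, 90° east of a rotation of
# 0.25°/Myr about the north pole, moves purely eastward at ~27.8 mm/yr.
ve, vn = euler_velocity(90.0, 0.0, 0.25, 0.0, 90.0)
```

Inverting this relation over many sites (solving for the pole and rate from observed velocities) is the estimation problem the abstract describes.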

  9. Regularized estimation of Euler pole parameters

    NASA Astrophysics Data System (ADS)

    Aktuğ, Bahadir; Yildirim, Ömer

    2013-07-01

    Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.

  10. Sunglasses with thick temples and frame constrict temporal visual field.

    PubMed

    Denion, Eric; Dugué, Audrey Emmanuelle; Augy, Sylvain; Coffin-Pichonnet, Sophie; Mouriaux, Frédéric

    2013-12-01

    Our aim was to compare the impact of two types of sunglasses on visual field and glare: one ("thick sunglasses") with a thick plastic frame and wide temples and one ("thin sunglasses") with a thin metal frame and thin temples. Using the Goldmann perimeter, visual field surface areas (cm²) were calculated as projections on a 30-cm virtual cupola. A V4 test object was used, from seen to unseen, in 15 healthy volunteers in the primary position of gaze ("base visual field"), then allowing eye motion ("eye motion visual field") without glasses, then with "thin sunglasses," followed by "thick sunglasses." Visual field surface area differences greater than the 14% reproducibility error of the method and having a p < 0.05 were considered significant. A glare test was done using a surgical lighting system pointed at the eye(s) at different incidence angles. No significant "base visual field" or "eye motion visual field" surface area variations were noted when comparing tests done without glasses and with the "thin sunglasses." In contrast, a 22% "eye motion visual field" surface area decrease (p < 0.001) was noted when comparing tests done without glasses and with "thick sunglasses." This decrease was most severe in the temporal quadrant (-33%; p < 0.001). All subjects reported less lateral glare with the "thick sunglasses" than with the "thin sunglasses" (p < 0.001). The better protection from lateral glare offered by "thick sunglasses" is offset by the much poorer ability to use lateral space exploration; this results in a loss of most, if not all, of the additional visual field gained through eye motion.

  11. SMART USE OF COMPUTER-AIDED SPERM ANALYSIS (CASA) TO CHARACTERIZE SPERM MOTION

    EPA Science Inventory

    Computer-aided sperm analysis (CASA) has evolved over the past fifteen years to provide an objective, practical means of measuring and characterizing the velocity and pattern of sperm motion. CASA instruments use video frame-grabber boards to capture multiple images of spermato...

  12. Physics in a Bouncing Car.

    ERIC Educational Resources Information Center

    Bartlett, Albert A.

    1984-01-01

    Defines frame of reference for the analysis of motion in a moving car, discussing the interaction of the car body, the seat springs, and the passenger when the car goes over a bump. Provides a related, but more advanced, problem with the motion of cars involving angular acceleration. (JM)

  13. Analysis of Spark-Ignition Engine Knock as Seen in Photographs Taken at 200,000 Frames Per Second

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D; Olsen, H Lowell; Logan, Walter O, Jr; Osterstrom, Gordon E

    1946-01-01

    A motion picture of the development of knock in a spark-ignition engine is presented, which consists of 20 photographs taken at intervals of 5 microseconds, or at a rate of 200,000 photographs per second, with an equivalent wide-open exposure time of 6.4 microseconds for each photograph. A motion picture of a complete combustion process, including the development of knock, taken at the rate of 40,000 photographs per second is also presented to assist the reader in orienting the photographs of the knock development taken at 200,000 frames per second.

  14. Shaking video stabilization with content completion

    NASA Astrophysics Data System (ADS)

    Peng, Yi; Ye, Qixiang; Liu, Yanmei; Jiao, Jianbin

    2009-01-01

    A new stabilization algorithm to counterbalance the shaking motion in a video, based on the classical Kanade-Lucas-Tomasi (KLT) method, is presented in this paper. Feature points are evaluated with the law of large numbers and a clustering algorithm to reduce the side effect of moving foreground. Analysis of the change of motion direction is also carried out to detect the existence of shaking. For video clips with detected shaking, an affine transformation is performed to warp the current frame to the reference one. In addition, the missing content of a frame during the stabilization is completed with optical flow analysis and a mosaicking operation. Experiments on video clips demonstrate the effectiveness of the proposed algorithm.
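    The warping step reduces to fitting a 2x3 affine matrix to tracked feature points. A hypothetical NumPy sketch (the KLT tracking and the pixel-level frame warping themselves are omitted); with noiseless synthetic points the least-squares fit recovers the stabilizing transform exactly.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping points src -> dst."""
    A = np.hstack([src, np.ones((len(src), 1))])  # rows [x, y, 1]
    # Solve A @ M ≈ dst for M (3x2), return as the usual 2x3 matrix.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T

# Synthetic jitter: rotate and translate the tracked feature points, then
# recover the warp that maps the shaky frame back onto the reference frame.
rng = np.random.default_rng(2)
ref_pts = rng.uniform(0, 100, size=(40, 2))
theta = np.radians(1.5)                            # small camera rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])
shaky_pts = ref_pts @ R.T + np.array([3.0, -2.0])  # rotation plus translation
M = fit_affine(shaky_pts, ref_pts)                 # warp: shaky -> reference
```

In a full stabilizer, M would be fed to an image-warping routine to resample the shaky frame onto the reference grid before content completion fills the uncovered borders.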

  15. Use of hinged transarticular external fixation for adjunctive joint stabilization in dogs and cats: 14 cases (1999-2003).

    PubMed

    Jaeger, Gayle H; Wosar, Marc A; Marcellin-Little, Denis J; Lascelles, B Duncan X

    2005-08-15

    To describe placement of hinged transarticular external fixation (HTEF) frames and evaluate their ability to protect the primary repair of unstable joints while allowing joint mobility in dogs and cats. Retrospective study. 8 cats and 6 dogs. HTEF frames were composed of metal or epoxy connecting rods and a hinge. Measurements of range of motion of affected and contralateral joints and radiographs were made after fixator application and removal. 9 animals (4 cats and 5 dogs) had tarsal and 5 (4 cats and 1 dog) had stifle joint injuries. Treatment duration ranged from 45 to 100 days (median, 57 days). Ranges of motion in affected stifle and tarsal joints were 57% and 72% of control while HTEF was in place and 79% and 84% of control after frame removal. Complications were encountered in 3 cats and 2 dogs and included breakage of pins and connecting rods, hinge loosening, and failure at the hinge-epoxy interface. HTEF in animals with traumatic joint instability provided adjunctive joint stabilization during healing and protection of the primary repair and maintained joint motion during healing, resulting in early weight bearing of the affected limb.

  16. A trillion frames per second: the techniques and applications of light-in-flight photography.

    PubMed

    Faccio, Daniele; Velten, Andreas

    2018-06-14

    Cameras capable of capturing videos at a trillion frames per second allow us to freeze light in motion, a very counterintuitive capability when related to our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as three-dimensional imaging of scenes that are hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media and imaging of ultrafast optical processes such as laser ablation, supercontinuum and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high speed photography, with a particular focus on 'light in flight' imaging, i.e. applications where the key element is the imaging of light itself at frame rates that allow us to freeze its motion and therefore extract information that would otherwise be blurred out and lost. © 2018 IOP Publishing Ltd.

  17. High-Speed Video Analysis in a Conceptual Physics Class

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be captured easily at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
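
    The boost-phase acceleration described above can be recovered from frame-by-frame position data with a second central difference. The sketch below is illustrative only: the 240 fps rate, the constant 40 m/s² boost and the noise-free altitude track are assumptions, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical altitude samples from frame-by-frame video tracking
    # of a model rocket at an assumed high-speed rate of 240 fps.
    fps = 240.0
    dt = 1.0 / fps
    t = np.arange(0, 0.5, dt)
    a_true = 40.0                      # assumed constant boost acceleration, m/s^2
    y = 0.5 * a_true * t**2            # ideal altitude track, no noise

    # Second central difference: a[i] ~= (y[i+1] - 2 y[i] + y[i-1]) / dt^2
    a_est = (y[2:] - 2 * y[1:-1] + y[:-2]) / dt**2
    print(round(a_est.mean(), 2))      # → 40.0
    ```

    With real, noisy tracking data the second difference amplifies noise, so in practice one would smooth the track or fit a quadratic model before differentiating.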

  18. New dynamic variables for rotating spacecraft

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    This paper introduces two new seven-parameter representations for spacecraft attitude dynamics modeling. The seven parameters are the three components of the total system angular momentum in the spacecraft body frame; the three components of the angular momentum in the inertial reference frame; and an angle variable. These obey a single constraint as do parameterizations that include a quaternion; in this case the constraint is the equality of the sum of the squares of the angular momentum components in the two frames. The two representations are nonsingular if the system angular momentum is non-zero and obeys certain orientation constraints. The new parameterizations of the attitude matrix, the equations of motion, and the relation of the solution of these equations to Euler angles for torque-free motion are developed and analyzed. The superiority of the new parameterizations for numerical integration is shown in a specific example.
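
    The constraint described above, that the body-frame and inertial-frame angular momentum components have equal sums of squares, follows because the attitude matrix is a rotation and therefore preserves vector norms. A small numpy check of this fact (the axis, angle and momentum values are arbitrary test inputs, not from the paper):

    ```python
    import numpy as np

    def rotation_from_axis_angle(axis, angle):
        """Rodrigues' formula: rotation matrix for a given axis and angle."""
        axis = axis / np.linalg.norm(axis)
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    rng = np.random.default_rng(1)
    L_inertial = rng.normal(size=3)             # angular momentum, inertial frame
    R = rotation_from_axis_angle(rng.normal(size=3), 0.7)  # attitude matrix
    L_body = R @ L_inertial                     # same vector, body frame

    # The single constraint: equal sums of squares in the two frames.
    print(np.isclose(np.dot(L_body, L_body), np.dot(L_inertial, L_inertial)))  # → True
    ```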

  19. What constitutes an efficient reference frame for vision?

    PubMed Central

    Tadin, Duje; Lappin, Joseph S.; Blake, Randolph; Grossman, Emily D.

    2015-01-01

    Vision requires a reference frame. To what extent does this reference frame depend on the structure of the visual input, rather than just on retinal landmarks? This question is particularly relevant to the perception of dynamic scenes, when keeping track of external motion relative to the retina is difficult. We tested human subjects’ ability to discriminate the motion and temporal coherence of changing elements that were embedded in global patterns and whose perceptual organization was manipulated in a way that caused only minor changes to the retinal image. Coherence discriminations were always better when local elements were perceived to be organized as a global moving form than when they were perceived to be unorganized, individually moving entities. Our results indicate that perceived form influences the neural representation of its component features, and from this, we propose a new method for studying perceptual organization. PMID:12219092

  20. Statistical modeling of 4D respiratory lung motion using diffeomorphic image registration.

    PubMed

    Ehrhardt, Jan; Werner, René; Schmidt-Richberg, Alexander; Handels, Heinz

    2011-02-01

    Modeling of respiratory motion has become increasingly important in various applications of medical imaging (e.g., radiation therapy of lung cancer). Current modeling approaches are usually confined to intra-patient registration of 3D image data representing the individual patient's anatomy at different breathing phases. We propose an approach to generate a mean motion model of the lung based on thoracic 4D computed tomography (CT) data of different patients to extend the motion modeling capabilities. Our modeling process consists of three steps: an intra-subject registration to generate subject-specific motion models, the generation of an average shape and intensity atlas of the lung as anatomical reference frame, and the registration of the subject-specific motion models to the atlas in order to build a statistical 4D mean motion model (4D-MMM). Furthermore, we present methods to adapt the 4D mean motion model to a patient-specific lung geometry. In all steps, a symmetric diffeomorphic nonlinear intensity-based registration method was employed. The Log-Euclidean framework was used to compute statistics on the diffeomorphic transformations. The presented methods are then used to build a mean motion model of respiratory lung motion using thoracic 4D CT data sets of 17 patients. We evaluate the model by applying it for estimating respiratory motion of ten lung cancer patients. The prediction is evaluated with respect to landmark and tumor motion, and the quantitative analysis results in a mean target registration error (TRE) of 3.3 ±1.6 mm if lung dynamics are not impaired by large lung tumors or other lung disorders (e.g., emphysema). With regard to lung tumor motion, we show that prediction accuracy is independent of tumor size and tumor motion amplitude in the considered data set. However, tumors adhering to non-lung structures degrade local lung dynamics significantly and the model-based prediction accuracy is lower in these cases. 
The statistical respiratory motion model is capable of providing valuable prior knowledge in many fields of applications. We present two examples of possible applications in radiation therapy and image guided diagnosis.
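
    The TRE figure quoted above is, in essence, a mean landmark distance. A minimal sketch of how such a number is computed, with hypothetical landmark sets standing in for the observed and model-predicted positions:

    ```python
    import numpy as np

    def target_registration_error(predicted, observed):
        """Mean Euclidean distance between predicted and observed landmarks (mm)."""
        return np.linalg.norm(predicted - observed, axis=1).mean()

    # Hypothetical landmark sets (N x 3, in mm); the 2 mm noise level is an
    # assumption for illustration, not a result from the paper.
    rng = np.random.default_rng(2)
    observed = rng.uniform(0, 100, size=(50, 3))
    predicted = observed + rng.normal(0, 2.0, size=(50, 3))
    print(round(target_registration_error(predicted, observed), 1))
    ```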

  1. A novel teaching system for industrial robots.

    PubMed

    Lin, Hsien-I; Lin, Yu-Hsiang

    2014-03-27

    The most important tool for controlling an industrial robotic arm is a teach pendant, which controls the robotic arm movement in work spaces and accomplishes teaching tasks. A good teaching tool should be easy to operate and can complete teaching tasks rapidly and effortlessly. In this study, a new teaching system is proposed for enabling users to operate robotic arms and accomplish teaching tasks easily. The proposed teaching system consists of the teach pen, optical markers on the pen, a motion capture system, and the pen tip estimation algorithm. With the marker positions captured by the motion capture system, the pose of the teach pen is accurately calculated by the pen tip algorithm and used to control the robot tool frame. In addition, Fitts' Law is adopted to verify the usefulness of this new system, and the results show that the system provides high accuracy, excellent operation performance, and a stable error rate. In addition, the system maintains superior performance, even when users work on platforms with different inclination angles.
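
    The pen-pose step can be illustrated with a standard absolute-orientation (Kabsch) fit: recover the rigid transform from the pen-local marker layout to the captured marker positions, then map a fixed tip offset through it. Everything below (marker layout, tip offset, simulated pose) is a hypothetical stand-in for the paper's pen tip estimation algorithm, not its actual implementation.

    ```python
    import numpy as np

    def rigid_pose(model, observed):
        """Kabsch algorithm: best-fit rotation R and translation t such that
        observed ~= model @ R.T + t, from matched marker coordinates."""
        mc, oc = model.mean(0), observed.mean(0)
        H = (model - mc).T @ (observed - oc)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R = Vt.T @ np.diag([1, 1, d]) @ U.T
        t = oc - R @ mc
        return R, t

    # Hypothetical marker layout on the pen (pen-local coordinates, mm).
    markers_local = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [0, 0, 60]], float)
    tip_local = np.array([0.0, 0.0, -120.0])       # assumed tip offset, pen frame

    # A ground-truth pose simulating what the motion capture system reports.
    ang = np.deg2rad(25)
    R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                       [np.sin(ang),  np.cos(ang), 0],
                       [0, 0, 1]])
    t_true = np.array([100.0, 50.0, 20.0])
    markers_world = markers_local @ R_true.T + t_true

    R, t = rigid_pose(markers_local, markers_world)
    tip_world = R @ tip_local + t                  # pen tip in the capture frame
    print(np.allclose(tip_world, R_true @ tip_local + t_true))  # → True
    ```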

  2. A Novel Teaching System for Industrial Robots

    PubMed Central

    Lin, Hsien-I; Lin, Yu-Hsiang

    2014-01-01

    The most important tool for controlling an industrial robotic arm is a teach pendant, which controls the robotic arm movement in work spaces and accomplishes teaching tasks. A good teaching tool should be easy to operate and can complete teaching tasks rapidly and effortlessly. In this study, a new teaching system is proposed for enabling users to operate robotic arms and accomplish teaching tasks easily. The proposed teaching system consists of the teach pen, optical markers on the pen, a motion capture system, and the pen tip estimation algorithm. With the marker positions captured by the motion capture system, the pose of the teach pen is accurately calculated by the pen tip algorithm and used to control the robot tool frame. In addition, Fitts' Law is adopted to verify the usefulness of this new system, and the results show that the system provides high accuracy, excellent operation performance, and a stable error rate. In addition, the system maintains superior performance, even when users work on platforms with different inclination angles. PMID:24681669

  3. Composite ultrasound imaging apparatus and method

    DOEpatents

    Morimoto, Alan K.; Bow, Jr., Wallace J.; Strong, David Scott; Dickey, Fred M.

    1998-01-01

    An imaging apparatus and method for use in presenting composite two dimensional and three dimensional images from individual ultrasonic frames. A cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center. Motion compensation, rank value filtering, noise suppression and tissue classification are utilized to optimize the composite image.
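
    Rank value filtering, one of the steps listed above, replaces each pixel with the k-th smallest value in its neighborhood; with the middle rank it reduces to a median filter, which suppresses impulsive speckle. A small numpy sketch (the window size and toy frame are assumptions for illustration):

    ```python
    import numpy as np

    def rank_filter(img, size=3, rank=None):
        """Sliding-window rank-value filter; the middle rank gives a median
        filter, a common speckle suppression step for ultrasound frames."""
        pad = size // 2
        padded = np.pad(img, pad, mode='edge')
        h, w = img.shape
        out = np.empty_like(img, dtype=float)
        if rank is None:
            rank = (size * size) // 2          # middle rank = median
        for i in range(h):
            for j in range(w):
                window = padded[i:i + size, j:j + size].ravel()
                out[i, j] = np.sort(window)[rank]
        return out

    # A flat frame with one speckle spike: the median rank removes it.
    frame = np.full((5, 5), 10.0)
    frame[2, 2] = 255.0                        # impulsive speckle noise
    print(rank_filter(frame)[2, 2])            # → 10.0
    ```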

  4. Composite ultrasound imaging apparatus and method

    DOEpatents

    Morimoto, A.K.; Bow, W.J. Jr.; Strong, D.S.; Dickey, F.M.

    1998-09-15

    An imaging apparatus and method for use in presenting composite two dimensional and three dimensional images from individual ultrasonic frames. A cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center. Motion compensation, rank value filtering, noise suppression and tissue classification are utilized to optimize the composite image. 37 figs.

  5. Heterogeneous CPU-GPU moving targets detection for UAV video

    NASA Astrophysics Data System (ADS)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some monitoring platforms for motion detection, low-resolution stationary cameras have been replaced by moving HD cameras mounted on UAVs. Moving targets occupy only a small minority of the pixels in HD video taken by a UAV, and the background of each frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents running them at the full resolution of the frame. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough for real-time use.
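
    The frame-difference step can be sketched in a few lines of numpy. This assumes background registration has already aligned the two frames; the threshold value and the toy target are assumptions for illustration, not parameters from the paper.

    ```python
    import numpy as np

    def frame_difference_targets(prev, curr, threshold=25):
        """Binary mask of moving pixels from two registered frames.

        Assumes background motion has already been removed by registration,
        so the remaining large intensity differences are moving targets."""
        diff = np.abs(curr.astype(int) - prev.astype(int))
        return diff > threshold

    # Two synthetic registered frames: a small bright target moves 2 px right.
    prev = np.zeros((8, 8), dtype=np.uint8)
    curr = np.zeros((8, 8), dtype=np.uint8)
    prev[3:5, 2:4] = 200
    curr[3:5, 4:6] = 200

    mask = frame_difference_targets(prev, curr)
    print(mask.sum())                           # → 8 changed pixels
    ```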

  6. A head motion estimation algorithm for motion artifact correction in dental CT imaging

    NASA Astrophysics Data System (ADS)

    Hernandez, Daniel; Elsayed Eldib, Mohamed; Hegazy, Mohamed A. A.; Hye Cho, Myung; Cho, Min Hyoung; Lee, Soo Yeol

    2018-03-01

    A small head motion of the patient can compromise the image quality in a dental CT, in which a slow cone-beam scan is adopted. We introduce a retrospective head motion estimation method by which we can estimate the motion waveform from the projection images without employing any external motion monitoring devices. We compute the cross-correlation between every two successive projection images, which results in a sinusoid-like displacement curve over the projection view when there is no patient motion. However, the displacement curve deviates from the sinusoid-like form when patient motion occurs. We develop a method to estimate the motion waveform with a single parameter derived from the displacement curve with aid of image entropy minimization. To verify the motion estimation method, we use a lab-built micro-CT that can emulate major head motions during dental CT scans, such as tilting and nodding, in a controlled way. We find that the estimated motion waveform conforms well to the actual motion waveform. To further verify the motion estimation method, we correct the motion artifacts with the estimated motion waveform. After motion artifact correction, the corrected images look almost identical to the reference images, with structural similarity index values greater than 0.81 in the phantom and rat imaging studies.
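
    The displacement between successive projection images can be obtained from the peak of their cross-correlation, computed efficiently with FFTs. The sketch below recovers only integer circular shifts on synthetic images; it illustrates the cross-correlation step, not the authors' full entropy-minimization method.

    ```python
    import numpy as np

    def shift_between(a, b):
        """Integer shift s such that b == np.roll(a, s), found as the peak
        of the circular cross-correlation computed via FFTs."""
        corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peak indices to signed shifts.
        return tuple(int(p) if p <= s // 2 else int(p) - s
                     for p, s in zip(peak, corr.shape))

    # Two successive "projection images": the second is the first rolled by (3, -2).
    rng = np.random.default_rng(3)
    a = rng.random((64, 64))
    b = np.roll(a, (3, -2), axis=(0, 1))
    print(shift_between(a, b))                  # → (3, -2)
    ```

    Plotting such displacements over the projection views yields the sinusoid-like curve described above; deviations from it flag patient motion.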

  7. Spatial vision within egocentric and exocentric frames of reference

    NASA Technical Reports Server (NTRS)

    Howard, Ian P.

    1989-01-01

    The extent to which perceptual judgements within egocentric and exocentric frames of reference are subject to illusory disturbances and long term modifications is discussed. It is argued that well known spatial illusions, such as the oculogyral illusion and induced visual motion have usually been discussed without proper attention being paid to the frame of reference within which they occur, and that this has led to the construction of inadequate theories and inappropriate procedures for testing them.

  8. Quantification of lung tumor rotation with automated landmark extraction using orthogonal cine MRI images

    NASA Astrophysics Data System (ADS)

    Paganelli, Chiara; Lee, Danny; Greer, Peter B.; Baroni, Guido; Riboldi, Marco; Keall, Paul

    2015-09-01

    The quantification of tumor motion in sites affected by respiratory motion is of primary importance to improve treatment accuracy. To account for motion, different studies analyzed the translational component only, without focusing on the rotational component, which was quantified in a few studies on the prostate with implanted markers. The aim of our study was to propose a tool able to quantify lung tumor rotation without the use of internal markers, thus providing accurate motion detection close to critical structures such as the heart or liver. Specifically, we propose the use of an automatic feature extraction method in combination with the acquisition of fast orthogonal cine MRI images of nine lung patients. As a preliminary test, we evaluated the performance of the feature extraction method by applying it on regions of interest around (i) the diaphragm and (ii) the tumor and comparing the estimated motion with that obtained by (i) the extraction of the diaphragm profile and (ii) the segmentation of the tumor, respectively. The results confirmed the capability of the proposed method in quantifying tumor motion. Then, a point-based rigid registration was applied to the extracted tumor features between all frames to account for rotation. The median lung rotation values were -0.6 ± 2.3° and -1.5 ± 2.7° in the sagittal and coronal planes respectively, confirming the need to account for tumor rotation along with translation to improve radiotherapy treatment.

  9. Spherical Coordinate Systems for Streamlining Suited Mobility Analysis

    NASA Technical Reports Server (NTRS)

    Benson, Elizabeth; Cowley, Matthew S.; Harvill, Lauren; Rajulu, Sudhakar

    2014-01-01

    When describing human motion, biomechanists generally report joint angles in terms of Euler angle rotation sequences. However, there are known limitations in using this method to describe complex motions such as the shoulder joint during a baseball pitch. Euler angle notation uses a series of three rotations about an axis where each rotation is dependent upon the preceding rotation. As such, the Euler angles need to be regarded as a set to get accurate angle information. Unfortunately, it is often difficult to visualize and understand these complex motion representations. One of our key functions is to help design engineers understand how a human will perform with new designs and all too often traditional use of Euler rotations becomes as much of a hindrance as a help. It is believed that using a spherical coordinate system will allow ABF personnel to more quickly and easily transmit important mobility data to engineers, in a format that is readily understandable and directly translatable to their design efforts. Objectives: The goal of this project is to establish new analysis and visualization techniques to aid in the examination and comprehension of complex motions. Methods: This project consisted of a series of small sub-projects, meant to validate and verify the method before it was implemented in the ABF's data analysis practices. The first stage was a proof of concept, where a mechanical test rig was built and instrumented with an inclinometer, so that its angle from horizontal was known. The test rig was tracked in 3D using an optical motion capture system, and its position and orientation were reported in both Euler and spherical reference systems. The rig was meant to simulate flexion/extension, transverse rotation and abduction/adduction of the human shoulder, but without the variability inherent in human motion. 
In the second phase of the project, the ABF estimated the error inherent in a spherical coordinate system, and evaluated how this error would vary within the reference frame. This stage also involved expanding a kinematic model of the shoulder, to include the torso, knees, ankle, elbows, wrists and neck. Part of this update included adding a representation of 'roll' about an axis, for upper arm and lower leg rotations. The third stage of the project involved creating visualization methods to assist in interpreting motion in a spherical frame. This visualization method will be incorporated in a tool to evaluate a database of suited mobility data, which is currently in development.

  10. Motion of single wandering diblock-macromolecules directed by a PTFE nano-fence: real time SFM observations.

    PubMed

    Gallyamov, Marat O; Qin, Shuhui; Matyjaszewski, Krzysztof; Khokhlov, Alexei; Möller, Martin

    2009-07-21

    Using SFM we have observed a peculiar twisting motion of diblock macromolecules pre-collapsed in ethanol vapour during their subsequent spreading in water vapour. The intrinsic asymmetry of the diblock macromolecules has been considered to be the reason for such twisting. Further, friction-deposited PTFE nano-stripes have been employed as nano-trails with the purpose of inducing lateral directed motion of the asymmetric diblock macromolecules under cyclic impact from the changing vapour surroundings. Indeed, some of the macromolecules have demonstrated a certain tendency to orient along the PTFE stripes, and some of the oriented ones have moved occasionally in a directed manner along the trail. However, it has been difficult to reliably record such directed motion at the single molecule level due to some mobility of the PTFE nano-trails themselves in the changing vapour environment. In vapours, the PTFE stripes have demonstrated a distinct tendency towards conjunction. This tendency has manifested itself in efficient expelling of groups of the mobile brush-like molecules from the areas between two PTFE stripes joining in a zip-fastener manner. This different kind of vapour-induced cooperative macromolecular motion has been reliably observed as being directed. The PTFE nano-frame experiences some deformation when constraining the spreading macromolecules. We have estimated the possible force causing such deformation of the PTFE fence. The force has been found to be a few pN as calculated by a partial contribution from every single molecule of the constrained group.

  11. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) need reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is fast, as in vehicle movement, sport biomechanics, and the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system is developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from two to four machine vision cameras to acquire video sequences of object motion. All cameras work synchronously at frame rates up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system was used in a range of application fields and demonstrated high accuracy and a high level of automation.

  12. 4D motion modeling of the coronary arteries from CT images for robotic assisted minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Zhang, Dong Ping; Edwards, Eddie; Mei, Lin; Rueckert, Daniel

    2009-02-01

    In this paper, we present a novel approach for coronary artery motion modeling from cardiac computed tomography (CT) images. The aim of this work is to develop a 4D motion model of the coronaries for image guidance in robotic-assisted totally endoscopic coronary artery bypass (TECAB) surgery. To utilize the pre-operative cardiac images to guide the minimally invasive surgery, it is essential to have a 4D cardiac motion model to be registered with the stereo endoscopic images acquired intraoperatively using the da Vinci robotic system. In this paper, we are investigating the extraction of the coronary arteries and the modeling of their motion from a dynamic sequence of cardiac CT. We use a multi-scale vesselness filter to enhance vessels in the cardiac CT images. The centerlines of the arteries are extracted using a ridge traversal algorithm. Using this method the coronaries can be extracted in near real-time as only local information is used in vessel tracking. To compute the deformation of the coronaries due to cardiac motion, the motion is extracted from a dynamic sequence of cardiac CT. Each timeframe in this sequence is registered to the end-diastole timeframe of the sequence using a non-rigid registration algorithm based on free-form deformations. Once the images have been registered, a dynamic motion model of the coronaries can be obtained by applying the computed free-form deformations to the extracted coronary arteries. To validate the accuracy of the motion model we compare the actual position of the coronaries in each timeframe with the predicted position of the coronaries as estimated from the non-rigid registration. We expect that this motion model of coronaries can facilitate the planning of TECAB surgery, and through the registration with real-time endoscopic video images it can reduce the conversion rate from TECAB to conventional procedures.

  13. NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data.

    PubMed

    Pnevmatikakis, Eftychios A; Giovannucci, Andrea

    2017-11-01

    Motion correction is a challenging pre-processing problem that arises early in the analysis pipeline of calcium imaging data sequences. The motion artifacts in two-photon microscopy recordings can be non-rigid, arising from the finite time of raster scanning and non-uniform deformations of the brain medium. We introduce an algorithm for fast Non-Rigid Motion Correction (NoRMCorre) based on template matching. NoRMCorre operates by splitting the field of view (FOV) into overlapping spatial patches along all directions. The patches are registered at a sub-pixel resolution for rigid translation against a regularly updated template. The estimated alignments are subsequently up-sampled to create a smooth motion field for each frame that can efficiently approximate non-rigid artifacts in a piecewise-rigid manner. Existing approaches either do not scale well in terms of computational performance or are targeted to non-rigid artifacts arising just from the finite speed of raster scanning, and thus cannot correct for non-rigid motion observable in datasets from a large FOV. NoRMCorre can be run in an online mode, registering streaming data at speeds comparable to, or even faster than, real time. We evaluate its performance with simple yet intuitive metrics and compare against other non-rigid registration methods on simulated data and in vivo two-photon calcium imaging datasets. Open source Matlab and Python code is also made available. The proposed method and accompanying code can be useful for solving large scale image registration problems in calcium imaging, especially in the presence of non-rigid deformations. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
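
    The piecewise-rigid idea can be sketched by estimating one translation per patch against the template. The toy below uses non-overlapping patches and integer shifts for brevity, whereas NoRMCorre itself uses overlapping patches, sub-pixel registration and up-sampling of the shift field.

    ```python
    import numpy as np

    def patch_shifts(template, frame, patch=32):
        """Estimate one rigid translation per non-overlapping patch from the
        cross-correlation peak; a simplified sketch of piecewise-rigid
        motion estimation."""
        h, w = template.shape
        field = np.zeros((h // patch, w // patch, 2))
        for i in range(h // patch):
            for j in range(w // patch):
                sl = np.s_[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
                t, f = template[sl], frame[sl]
                corr = np.fft.ifft2(np.fft.fft2(f) * np.conj(np.fft.fft2(t))).real
                p = np.unravel_index(np.argmax(corr), corr.shape)
                field[i, j] = [s if s <= patch // 2 else s - patch for s in p]
        return field

    # Synthetic data: the whole frame is the template shifted by (2, 1),
    # so every patch should report the same displacement.
    rng = np.random.default_rng(4)
    template = rng.random((64, 64))
    frame = np.roll(template, (2, 1), axis=(0, 1))
    field = patch_shifts(template, frame)
    print(np.allclose(field, [2.0, 1.0]))       # → True
    ```

    In a non-rigid setting the per-patch shifts differ, and smoothing/up-sampling that shift field yields the dense motion field used for correction.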

  14. Automation of workplace lifting hazard assessment for musculoskeletal injury prevention.

    PubMed

    Spector, June T; Lieblich, Max; Bao, Stephen; McQuade, Kevin; Hughes, Margaret

    2014-01-01

    Existing methods for practically evaluating musculoskeletal exposures such as posture and repetition in workplace settings have limitations. We aimed to automate the estimation of parameters in the revised United States National Institute for Occupational Safety and Health (NIOSH) lifting equation, a standard manual observational tool used to evaluate back injury risk related to lifting in workplace settings, using depth camera (Microsoft Kinect) and skeleton algorithm technology. A large dataset (approximately 22,000 frames, derived from six subjects) of simultaneous lifting and other motions recorded in a laboratory setting using the Kinect (Microsoft Corporation, Redmond, Washington, United States) and a standard optical motion capture system (Qualysis, Qualysis Motion Capture Systems, Qualysis AB, Sweden) was assembled. Error-correction regression models were developed to improve the accuracy of NIOSH lifting equation parameters estimated from the Kinect skeleton. Kinect-Qualysis errors were modelled using gradient boosted regression trees with a Huber loss function. Models were trained on data from all but one subject and tested on the excluded subject. Finally, models were tested on three lifting trials performed by subjects not involved in the generation of the model-building dataset. Error-correction appears to produce estimates for NIOSH lifting equation parameters that are more accurate than those derived from the Microsoft Kinect algorithm alone. Our error-correction models substantially decreased the variance of parameter errors. In general, the Kinect underestimated parameters, and modelling reduced this bias, particularly for more biased estimates. Use of the raw Kinect skeleton model tended to result in falsely high safe recommended weight limits of loads, whereas error-corrected models gave more conservative, protective estimates. 
Our results suggest that it may be possible to produce reasonable estimates of posture and temporal elements of tasks such as task frequency in an automated fashion, although these findings should be confirmed in a larger study. Further work is needed to incorporate force assessments and address workplace feasibility challenges. We anticipate that this approach could ultimately be used to perform large-scale musculoskeletal exposure assessment not only for research but also to provide real-time feedback to workers and employers during work method improvement activities and employee training.
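
    The error-correction idea, learning a mapping from biased Kinect-derived estimates to reference values, can be sketched with ordinary least squares. The paper used gradient boosted regression trees with a Huber loss; plain linear regression on synthetic data is substituted here only to keep the sketch dependency-free.

    ```python
    import numpy as np

    # Hypothetical data: "reference" plays the role of the optical motion
    # capture ground truth, "kinect" a biased, noisy depth-camera estimate.
    rng = np.random.default_rng(5)
    reference = rng.uniform(20, 80, size=200)
    kinect = 0.85 * reference - 3.0 + rng.normal(0, 1.0, 200)

    # Fit reference ~= a * kinect + b on a training split, apply to held-out data.
    train, test = np.s_[:150], np.s_[150:]
    A = np.vstack([kinect[train], np.ones(150)]).T
    (a, b), *_ = np.linalg.lstsq(A, reference[train], rcond=None)
    corrected = a * kinect[test] + b

    raw_err = np.abs(kinect[test] - reference[test]).mean()
    corr_err = np.abs(corrected - reference[test]).mean()
    print(corr_err < raw_err)                  # correction reduces the error
    ```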

  15. Locations of stationary/periodic solutions in mean motion resonances according to the properties of dust grains

    NASA Astrophysics Data System (ADS)

    Pástor, P.

    2016-07-01

    The equations of secular evolution for dust grains in mean motion resonances with a planet are solved for stationary points. Non-gravitational effects caused by stellar radiation (the Poynting-Robertson effect and the stellar wind) are taken into account. The solutions are stationary in the semimajor axis, eccentricity and resonant angle, but allow the pericentre to advance. The semimajor axis of stationary solutions can be slightly shifted from the exact resonant value. The periodicity of the stationary solutions in a reference frame orbiting with the planet is proved analytically. The existence of periodic solutions in mean motion resonances means that analytical theory enables infinitely long capture times for dust particles. The stationary solutions are periodic motions to which the eccentricity asymptotically approaches and around which the libration occurs. Initial conditions corresponding to the stationary solutions are successfully found by numerically integrating the equation of motion. Numerically and analytically determined shifts of the semimajor axis from the exact resonance for the stationary solutions are in excellent agreement. The stationary solutions can be plotted by the locations of pericentres in the reference frame orbiting with the planet. The pericentres are distributed in space according to the properties of the dust particles.

  16. Compensator-based 6-DOF control for probe asteroid-orbital-frame hovering with actuator limitations

    NASA Astrophysics Data System (ADS)

    Liu, Xiaosong; Zhang, Peng; Liu, Keping; Li, Yuanchun

    2016-05-01

    This paper is concerned with 6-DOF control of a probe hovering in the orbital frame of an asteroid. Considering the requirements on the scientific instruments' pointing direction and orbital position in practical missions, coordinated control of the relative attitude and orbit between the probe and the target asteroid is imperative. A 6-DOF dynamic equation describing the relative translational and rotational motion of a probe in the asteroid's orbital frame is derived, taking the irregular gravitation, model and parameter uncertainties and external disturbances into account. An adaptive sliding mode controller is employed to guarantee the convergence of the state error, where the adaptation law is used to estimate the unknown upper bound of system uncertainty. Then the controller is improved to deal with the practical problem of actuator limitations by introducing a RBF neural network compensator, which is used to approximate the difference between the actual control with magnitude constraint and the designed nominal control law. The closed-loop system is proved to be asymptotically stable through the Lyapunov stability analysis. Numerical simulations are performed to compare the performances of the preceding designed control laws. Simulation results demonstrate the validity of the control scheme using the compensator-based adaptive sliding mode control law in the presence of actuator limitations, system uncertainty and external disturbance.

  17. Fuzzy Filtering Method for Color Videos Corrupted by Additive Noise

    PubMed Central

    Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Nino-de-Rivera, Luis

    2014-01-01

    A novel method for the denoising of color videos corrupted by additive noise is presented in this paper. The proposed technique consists of three principal filtering steps: spatial, spatiotemporal, and spatial postprocessing. In contrast to other state-of-the-art algorithms, during the first spatial step, the eight gradient values in different directions for pixels located in the vicinity of a central pixel, as well as the correlation between analogous pixels in the R, G, and B color bands, are taken into account. These gradient values indicate the level of contamination; the designed fuzzy rules are then used to preserve the image features (textures, edges, sharpness, chromatic properties, etc.). In the second step, two neighboring video frames are processed together. Possible local motions between neighboring frames are estimated using a block-matching procedure in eight directions to perform interframe filtering. In the final step, the edges and smoothed regions in the current frame are distinguished for final postprocessing filtering. Numerous simulation results confirm that this novel 3D fuzzy method performs better than other state-of-the-art techniques in terms of objective criteria (PSNR, MAE, NCD, and SSIM) as well as subjective perception via the human vision system on different color videos. PMID:24688428
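    The second step's interframe motion search can be sketched as block matching over the eight neighbouring directions plus no motion, scored by the sum of absolute differences (SAD). Block size, search radius, and the test frames here are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def block_motion(prev, cur, y, x, size=8):
    """Estimate the local motion of the size x size block of `cur`
    anchored at (y, x) by matching it against `prev` displaced in
    the eight neighbouring directions (plus zero), using SAD."""
    block = cur[y:y + size, x:x + size]
    best, best_sad = (0, 0), np.inf
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            cand = prev[y + dy:y + dy + size, x + dx:x + dx + size]
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best

rng = np.random.default_rng(0)
prev = rng.random((32, 32))
cur = np.roll(prev, 1, axis=0)            # scene shifted down by one pixel
motion = block_motion(prev, cur, 12, 12)  # -> (-1, 0)
```

The matched block in `prev` sits one row above the current block, so the estimated displacement is (-1, 0) with zero SAD.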

  18. Illusory motion reveals velocity matching, not foveation, drives smooth pursuit of large objects

    PubMed Central

    Ma, Zheng; Watamaniuk, Scott N. J.; Heinen, Stephen J.

    2017-01-01

    When small objects move in a scene, we keep them foveated with smooth pursuit eye movements. Although large objects such as people and animals are common, it is nonetheless unknown how we pursue them, since they cannot be foveated. It might be that the brain calculates an object's centroid, and then centers the eyes on it during pursuit as a foveation mechanism might. Alternatively, the brain merely matches the velocity by motion integration. We test these alternatives with an illusory motion stimulus that translates at a speed different from its retinal motion. The stimulus was a Gabor array that translated at a fixed velocity, with component Gabors that drifted with motion consistent or inconsistent with the translation. Velocity matching predicts different pursuit behaviors across drift conditions, while centroid matching predicts no difference. We also tested whether pursuit can segregate and ignore irrelevant local drifts when motion and centroid information are consistent by surrounding the Gabors with solid frames. Finally, observers judged the global translational speed of the Gabors to determine whether smooth pursuit and motion perception share mechanisms. We found that consistent Gabor motion enhanced pursuit gain while inconsistent, opposite motion diminished it, drawing the eyes away from the center of the stimulus and supporting a motion-based pursuit drive. Catch-up saccades tended to counter the position offset, directing the eyes opposite to the deviation caused by the pursuit gain change. Surrounding the Gabors with visible frames canceled both the gain increase and the compensatory saccades. Perceived speed was modulated analogously to pursuit gain. The results suggest that smooth pursuit of large stimuli depends on the magnitude of integrated retinal motion information, not its retinal location, and that the position system might be unnecessary for generating smooth velocity to large pursuit targets. PMID:29090315

  19. Proper Motion of the Compact, Nonthermal Radio Source in the Galactic Center, Sagittarius A*

    NASA Astrophysics Data System (ADS)

    Backer, D. C.; Sramek, R. A.

    1999-10-01

    Proper motions and radial velocities of luminous infrared stars in the Galactic center have provided strong evidence for a dark mass of 2.5×10^6 Msolar in the central 0.05 pc of the Galaxy. The leading hypothesis for this mass is a black hole. High angular resolution measurements at radio wavelengths find a compact radio source, Sagittarius (Sgr) A*, that is either the faint glow from a small amount of material accreting onto the hole with low radiative efficiency or a miniature active galactic nucleus (AGN) core-jet system. This paper provides a full report on the first program that has measured the apparent proper motion of Sgr A* with respect to a background extragalactic reference frame. Our current result is μl,* = -6.18 +/- 0.19 mas yr-1 and μb,* = -0.65 +/- 0.17 mas yr-1. The observations were obtained with the NRAO Very Large Array at 4.9 GHz over 16 yr. The proper motion of Sgr A* provides an estimate of its mass based on equipartition of kinetic energy between the hole and the surrounding stars. The measured motion is largest in galactic longitude. This component of the motion is consistent with the secular parallax that results from the rotation of the solar system about the Galactic center, which is a global measure of the difference between Oort's constants (A-B), with no additional peculiar motion of Sgr A*. The current uncertainty in Oort's galactic rotation constants limits the use of this component of the proper motion for a mass inference. In latitude, we find a small, and weakly significant, peculiar motion of Sgr A*, -19+/-7 km s-1 after correction for the motion of the solar system with respect to the local standard of rest. We consider sources of peculiar motion of Sgr A* ranging from unstable radio wave propagation through intervening turbulent plasma to the effects of asymmetric masses in the center. These fail to account for a significant peculiar motion. One can appeal to an m=1 dynamical instability that numerical simulations have revealed.
However, the measurement of a latitude peculiar proper motion of comparable magnitude and error but with opposite sign in the companion paper by Reid leads us to conclude at the present time that our errors may be underestimated and that the actual peculiar motion might therefore be closer to zero. Improving these measurements with further observations and resolving the differences between independent experiments will yield accuracies of a few km s-1 in both coordinates, providing both a black hole mass estimate and a definitive determination of Oort's galactic rotation constants on a global Galactic scale.
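    The secular-parallax interpretation can be checked with the standard conversion from proper motion to transverse speed, v [km/s] = 4.74 × μ [arcsec/yr] × d [pc]. The 8 kpc Galactic-center distance assumed below is not stated in the abstract; with it, the longitude motion corresponds to roughly the solar orbital speed Θ0 = (A-B)R0 ≈ 220 km/s, as the abstract argues.

```python
# Transverse speed implied by the longitude proper motion of Sgr A*.
# v [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
mu_l_mas = 6.18        # |mu_l,*| from the abstract, mas/yr
d_pc = 8000.0          # assumed distance to the Galactic center, pc
v_kms = 4.74 * (mu_l_mas / 1000.0) * d_pc   # ~234 km/s
```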

  20. Land motion estimates from GPS at tide gauges: a geophysical evaluation

    NASA Astrophysics Data System (ADS)

    Bouin, M. N.; Wöppelmann, G.

    2010-01-01

    Space geodesy applications have mainly been limited to horizontal deformations due to a number of restrictions on the vertical component accuracy. Monitoring vertical land motion is nonetheless of crucial interest in observations of long-term sea level change or postglacial rebound measurements. Here, we present a global vertical velocity field obtained with more than 200 permanent GPS stations, most of them colocated with tide gauges (TGs). We used a state-of-the-art, homogeneous processing strategy to ensure that the reference frame was stable throughout the observation period of almost 10 yr. We associate realistic uncertainties with our vertical rates, taking into account the time-correlated noise in the time-series. The results are compared with two independent geophysical vertical velocity fields: (1) vertical velocity estimates using long-term TG records and (2) postglacial model predictions from the ICE-5G (VM2) adjustment. The quantitative agreement of the GPS vertical velocities with the `internal estimates' of vertical displacements using the TG record is very good, with a mean difference of -0.13 +/- 1.64 mm yr-1 over more than 100 sites. For 84 per cent of the GPS stations considered, the vertical velocity is confirmed by the TG estimate to within 2 mm yr-1. The overall agreement with the glacial isostatic adjustment (GIA) model is good, with discrepancy patterns related either to a local misfit of the model or to active tectonics. For 72 per cent of the sites considered, the predictions of the GIA model agree with the GPS results to within two standard deviations. Most of the GPS velocities showing discrepancies with respect to the predictions of the GIA model are, however, consistent with previously published space geodesy results.
We, in turn, confirm the value of 1.8 +/- 0.5 mm yr-1 for the 20th-century average global sea level rise, and conclude that GPS is now a robust tool for monitoring vertical land motion, accurate to at least the 1 mm yr-1 level.
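    The comparison statistics quoted in the abstract (mean GPS-TG rate difference and the fraction of sites agreeing within 2 mm yr-1) are straightforward to compute; the sketch below uses synthetic placeholder rates, not the paper's data, and the 1.5 mm yr-1 scatter is an illustrative assumption.

```python
import numpy as np

def compare_rates(v_gps, v_tg, tol=2.0):
    """Mean difference, its scatter, and the fraction of sites whose
    GPS vertical rate agrees with the tide-gauge estimate to within
    `tol` mm/yr."""
    diff = np.asarray(v_gps) - np.asarray(v_tg)
    frac = np.mean(np.abs(diff) <= tol)
    return diff.mean(), diff.std(ddof=1), frac

# Synthetic illustration: 100 sites with ~1.5 mm/yr intertechnique scatter
rng = np.random.default_rng(1)
v_tg = rng.normal(0.0, 2.0, 100)              # tide-gauge rates, mm/yr
v_gps = v_tg + rng.normal(0.0, 1.5, 100)      # GPS rates with scatter
mean_d, std_d, frac = compare_rates(v_gps, v_tg)
```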
