Science.gov

Sample records for 3d motion primitives

  1. Generalized compliant motion primitive

    NASA Astrophysics Data System (ADS)

    Backes, Paul G.

    1994-08-01

    This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, which together produce telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.

  2. A Generalized-Compliant-Motion Primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1993-01-01

    Computer program bridges gap between planning and execution of compliant robotic motions developed and installed in control system of telerobot. Called "generalized-compliant-motion primitive," one of several task-execution-primitive computer programs, which receives commands from higher-level task-planning programs and executes commands by generating required trajectories and applying appropriate control laws. Program comprises four parts corresponding to nominal motion, compliant motion, ending motion, and monitoring. Written in C language.

  3. A primitive-based 3D object recognition system

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    An intermediate-level knowledge-based system for decomposing segmented data into three-dimensional primitives was developed to create an approximate three-dimensional description of the real world scene from a single two-dimensional perspective view. A knowledge-based approach was also developed for high-level primitive-based matching of three-dimensional objects. Both the intermediate-level decomposition and the high-level interpretation are based on the structural and relational matching; moreover, they are implemented in a frame-based environment.

  4. 3D Human Motion Editing and Synthesis: A Survey

    PubMed Central

    Wang, Xin; Chen, Qiudi; Wang, Wanliang

    2014-01-01

    The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395

  5. Rigid-motion-invariant classification of 3-D textures.

    PubMed

    Jain, Saurabh; Papadakis, Manos; Upadhyay, Sanat; Azencott, Robert

    2012-05-01

    This paper studies the problem of 3-D rigid-motion-invariant texture discrimination for discrete 3-D textures that are spatially homogeneous by modeling them as stationary Gaussian random fields. The latter property and our formulation of a 3-D rigid motion of a texture reduce the problem to the study of 3-D rotations of discrete textures. We formally develop the concept of 3-D texture rotations in the 3-D digital domain. We use this novel concept to define a "distance" between 3-D textures that remains invariant under all 3-D rigid motions of the texture. This concept of "distance" can be used for a monoscale or a multiscale 3-D rigid-motion-invariant testing of the statistical similarity of the 3-D textures. To compute the "distance" between any two rotations R(1) and R(2) of two given 3-D textures, we use the Kullback-Leibler divergence between 3-D Gaussian Markov random fields fitted to the rotated texture data. Then, the 3-D rigid-motion-invariant texture distance is the integral average, with respect to the Haar measure of the group SO(3), of all of these divergences when rotations R(1) and R(2) vary throughout SO(3). We also present an algorithm enabling the computation of the proposed 3-D rigid-motion-invariant texture distance as well as rules for 3-D rigid-motion-invariant texture discrimination/classification and experimental results demonstrating the capabilities of the proposed 3-D rigid-motion texture discrimination rules when applied in a multiscale setting, even on very general 3-D texture models.
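
    A minimal numerical sketch of the "distance" construction described above, under simplifying assumptions: the rotated textures are summarized by zero-mean Gaussian covariance matrices, and a Monte Carlo sample of rotations replaces the exact Haar-measure integral over SO(3). The callables cov_rotated_1 and cov_rotated_2 are hypothetical placeholders for the paper's step of fitting a 3D Gaussian Markov random field to the rotated texture data.

    ```python
    import numpy as np
    from scipy.stats import special_ortho_group

    def kl_zero_mean_gaussians(S1, S2):
        """KL divergence D( N(0, S1) || N(0, S2) ) between zero-mean Gaussians."""
        k = S1.shape[0]
        S2_inv = np.linalg.inv(S2)
        return 0.5 * (np.trace(S2_inv @ S1) - k
                      + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

    def rotation_invariant_distance(cov_rotated_1, cov_rotated_2, n=200, seed=0):
        """Monte Carlo stand-in for averaging KL divergences over SO(3) x SO(3).

        cov_rotated_i(R) returns the covariance of the Gaussian model fitted to
        texture i after rotating it by R (a placeholder for GMRF fitting).
        """
        R1s = special_ortho_group.rvs(3, size=n, random_state=seed)
        R2s = special_ortho_group.rvs(3, size=n, random_state=seed + 1)
        divs = [kl_zero_mean_gaussians(cov_rotated_1(R1), cov_rotated_2(R2))
                for R1, R2 in zip(R1s, R2s)]
        return float(np.mean(divs))
    ```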

  6. Motion estimation in the 3-D Gabor domain.

    PubMed

    Feng, Mu; Reed, Todd R

    2007-08-01

    Motion estimation methods can be broadly classified as being spatiotemporal or frequency domain in nature. The Gabor representation is an analysis framework providing localized frequency information. When applied to image sequences, the 3-D Gabor representation displays spatiotemporal/spatiotemporal-frequency (st/stf) information, enabling the application of robust frequency domain methods with adjustable spatiotemporal resolution. In this work, the 3-D Gabor representation is applied to motion analysis. We demonstrate that piecewise uniform translational motion can be estimated by using a uniform translation motion model in the st/stf domain. The resulting motion estimation method exhibits both good spatiotemporal resolution and substantial noise resistance compared to existing spatiotemporal methods. To form the basis of this model, we derive the signature of the translational motion in the 3-D Gabor domain. Finally, to obtain higher spatiotemporal resolution for more complex motions, a dense motion field estimation method is developed to find a motion estimate for every pixel in the sequence.

  7. Measuring the 3D motion space of the human ankle.

    PubMed

    Xiao, Jinzhuang; Zhang, Yunchao; Zhao, Shuai; Wang, Hongrui

    2017-07-20

    The 3D motion space of the human ankle is an important area of study in medicine: it can provide significant information for establishing more reasonable rehabilitation procedures and standards of ankle injury care. This study aims to measure the 3D motion space of the human ankle and to quantify it mathematically. A motion capture system was used to capture the 3D coordinates of points marked on the foot, and these coordinates were converted into rotation angles using trigonometric functions and vector operations. The mathematical expression of each participant's 3D ankle motion space was then obtained by screening, arranging, and fitting the converted data. Statistical analysis showed that, in terms of 3D motion space, the right foot is more flexible than the left foot and the female foot is more flexible than the male foot. The adduction and abduction rotation ranges are affected by the plantarflexion or dorsiflexion rotation angles; this relationship can be expressed mathematically, which is significant in the study of the ankle joint.
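
    The coordinate-to-angle conversion step lends itself to a small illustration. The sketch below computes the angle at a joint from three 3D marker positions using dot products; the marker placement and angle convention are hypothetical and not the protocol of this study.

    ```python
    import numpy as np

    def joint_angle(p_proximal, p_joint, p_distal):
        """Angle (degrees) between the two segments meeting at a joint,
        computed from 3D marker coordinates via the dot product."""
        u = np.asarray(p_proximal, float) - np.asarray(p_joint, float)
        v = np.asarray(p_distal, float) - np.asarray(p_joint, float)
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    # Made-up shank, ankle, and toe marker coordinates (metres)
    print(joint_angle([0.0, 0.0, 0.40], [0.0, 0.0, 0.0], [0.15, 0.0, -0.05]))
    ```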

  8. Motion analysis using 3D high-resolution frequency analysis.

    PubMed

    Ueda, Takaaki; Fujii, Kenta; Hirobayashi, Shigeki; Yoshizawa, Toshio; Misawa, Tadanobu

    2013-08-01

    The spatiotemporal spectra of a video that contains a moving object form a plane in the 3D frequency domain. This plane, which is described as the theoretical motion plane, reflects the velocity of the moving objects, which is calculated from the slope. However, if the resolution of the frequency analysis method is not high enough to obtain actual spectra from the object signal, the spatiotemporal spectra disperse away from the theoretical motion plane. In this paper, we propose a high-resolution frequency analysis method, described as 3D nonharmonic analysis (NHA), which is only weakly influenced by the analysis window. In addition, we estimate the motion vectors of objects in a video using the plane-clustering method, in conjunction with the least-squares method, for 3D NHA spatiotemporal spectra. We experimentally verify the accuracy of the 3D NHA and its usefulness for a sequence containing complex motions, such as cross-over motion, through comparison with 3D fast Fourier transform. The experimental results show that increasing the frequency resolution contributes to high-accuracy estimation of a motion plane.
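
    For reference, the "theoretical motion plane" follows directly from the Fourier shift property; the statement below is the standard identity rather than a formula quoted from the paper. A pattern translating with constant velocity (v_x, v_y),

    $$ f(x, y, t) = f_0(x - v_x t,\; y - v_y t), $$

    has a spatiotemporal spectrum that vanishes except on the plane

    $$ \omega_t + v_x\,\omega_x + v_y\,\omega_y = 0, $$

    so the velocity can be read off from the slope of the plane fitted to the measured spectra.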

  9. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly, and stereoscopic movies are now commonplace. Stereoscopic systems date back to around 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. Symptoms such as eye fatigue and 3D sickness are concerns when viewing 3D films for a prolonged period of time; it is therefore important to consider the safety of viewing virtual 3D content. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence, and used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. Time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, such knowledge is useful for the reduction and/or prevention of VIMS, and empirical data on motion sickness should be accumulated to contribute to the development of relevant fields in science and technology.

  10. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole-tongue motion. Experimental results show that use of the combined information improves motion estimation near the tongue surface, a region previously reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown.

  11. Discerning nonrigid 3D shapes from motion cues

    PubMed Central

    Jain, Anshul; Zaidi, Qasim

    2011-01-01

    Many organisms and objects deform nonrigidly when moving, requiring perceivers to separate shape changes from object motions. Surprisingly, the abilities of observers to correctly infer nonrigid volumetric shapes from motion cues have not been measured, and structure from motion models predominantly use variants of rigidity assumptions. We show that observers are equally sensitive at discriminating cross-sections of flexing and rigid cylinders based on motion cues, when the cylinders are rotated simultaneously around the vertical and depth axes. A computational model based on motion perspective (i.e., assuming perceived depth is inversely proportional to local velocity) predicted the psychometric curves better than shape from motion factorization models using shape or trajectory basis functions. Asymmetric percepts of symmetric cylinders, arising because of asymmetric velocity profiles, provided additional evidence for the dominant role of relative velocity in shape perception. Finally, we show that inexperienced observers are generally incapable of using motion cues to detect inflation/deflation of rigid and flexing cylinders, but this handicap can be overcome with practice for both nonrigid and rigid shapes. The empirical and computational results of this study argue against the use of rigidity assumptions in extracting 3D shape from motion and for the primacy of motion deformations computed from motion shears. PMID:21205884

  12. Characterisation of walking loads by 3D inertial motion tracking

    NASA Astrophysics Data System (ADS)

    Van Nimmen, K.; Lombaert, G.; Jonkers, I.; De Roeck, G.; Van den Broeck, P.

    2014-09-01

    The present contribution analyses the walking behaviour of pedestrians in situ by 3D inertial motion tracking. The technique is first tested in laboratory experiments with simultaneous registration of the ground reaction forces. The registered motion of the pedestrian allows for the identification of stride-to-stride variations, which are usually disregarded in the simulation of walking forces. Subsequently, motion tracking is used to register the walking behaviour of (groups of) pedestrians during in situ measurements on a footbridge. The calibrated numerical model of the structure and the information gathered using the motion tracking system enable detailed simulation of the step-by-step pedestrian-induced vibrations. Accounting for the in situ identified walking variability of the test subjects leads to a significantly improved agreement between the measured and the simulated structural response.

  13. Nonstationary 3D motion of an elastic spherical shell

    NASA Astrophysics Data System (ADS)

    Tarlakovskii, D. V.; Fedotenkov, G. V.

    2015-03-01

    A 3D model of motion of a thin elastic spherical Timoshenko shell under the action of arbitrarily distributed nonstationary pressure is considered. An approach for splitting the system of equations of 3D motion of the shell is proposed. The integral representations of the solution with kernels in the form of influence functions, which can be determined analytically by using series expansions in the eigenfunctions and the Laplace transform, are constructed. An algorithm for solving the problem on the action of nonstationary normal pressure on the shell is constructed and implemented. The obtained results find practical use in aircraft and rocket construction and in many other industrial fields where thin-walled shell structural members under nonstationary working conditions are widely used.

  14. Retrospective 3D motion correction using spherical navigator echoes.

    PubMed

    Johnson, Patricia M; Liu, Junmin; Wade, Trevor; Tavallaei, Mohammad Ali; Drangova, Maria

    2016-11-01

    To develop and evaluate a rapid spherical navigator echo (SNAV) motion correction technique, then apply it for retrospective correction of brain images. The pre-rotated, template matching SNAV method (preRot-SNAV) was developed in combination with a novel hybrid baseline strategy, which includes acquired and interpolated templates. Specifically, the SNAV templates are only rotated around the X- and Y-axes; for each rotated SNAV, simulated baseline templates that mimic object rotation about the Z-axis were interpolated. The new method was first evaluated with phantom experiments. Then, a customized SNAV-interleaved gradient echo sequence was used to image three volunteers performing directed head motion. The SNAV motion measurements were used to retrospectively correct the brain images. Experiments were performed using a 3.0 T whole-body MRI scanner and both single and 8-channel head coils. Phantom rotations and translations measured using the hybrid baselines agreed to within 0.9° and 1 mm of those measured with the original preRot-SNAV method. Retrospective motion correction of in vivo images using the hybrid preRot-SNAV effectively corrected for head rotations up to 4° and translations up to 4 mm. The presented hybrid approach enables the acquisition of pre-rotated baseline templates in as little as 2.5 s, and results in accurate measurement of rotations and translations. Retrospective 3D motion correction successfully reduced motion artifacts in vivo.

  15. 3D Guided Wave Motion Analysis on Laminated Composites

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Ultrasonic guided waves have proved useful for structural health monitoring (SHM) and nondestructive evaluation (NDE) due to their ability to propagate long distances with less energy loss compared to bulk waves and due to their sensitivity to small defects in the structure. Analysis of actively transmitted ultrasonic signals has long been used to detect and assess damage. However, there remain many challenging tasks for guided wave based SHM due to the complexity involved with propagating guided waves, especially in the case of composite materials. The multimodal nature of the ultrasonic guided waves complicates the related damage analysis. This paper presents results from parallel 3D elastodynamic finite integration technique (EFIT) simulations used to acquire 3D wave motion in the subject laminated carbon fiber reinforced polymer composites. The acquired 3D wave motion is then analyzed by frequency-wavenumber analysis to study the wave propagation and interaction in the composite laminate. The frequency-wavenumber analysis enables the study of individual modes and visualization of mode conversion. Delamination damage has been incorporated into the EFIT model to generate "damaged" data. The potential for damage detection in laminated composites is discussed at the end.

  16. Ground Motion and Variability from 3-D Deterministic Broadband Simulations

    NASA Astrophysics Data System (ADS)

    Withers, Kyle Brett

    The accuracy of earthquake source descriptions is a major limitation in high-frequency (> 1 Hz) deterministic ground motion prediction, which is critical for performance-based design by building engineers. With the recent addition of realistic fault topography in 3D simulations of earthquake source models, ground motion can be deterministically calculated more realistically up to higher frequencies. We first introduce a technique to model frequency-dependent attenuation and compare its impact on strong ground motions recorded for the 2008 Chino Hills earthquake. Then, we model dynamic rupture propagation for both a generic strike-slip event and blind thrust scenario earthquakes matching the fault geometry of the 1994 Mw 6.7 Northridge earthquake along rough faults up to 8 Hz. We incorporate frequency-dependent attenuation via a power law above a reference frequency, of the form Q_0 f^n, with high accuracy down to Q values of 15, and include nonlinear effects via Drucker-Prager plasticity. We model the region surrounding the fault with and without small-scale medium complexity in both a 1D layered model characteristic of southern California rock and a 3D medium extracted from the SCEC CVMSi.426 including a near-surface geotechnical layer. We find that the spectral accelerations from our models are within 1-2 interevent standard deviations of recent ground motion prediction equations (GMPEs) and compare well with recordings from strong ground motion stations at both short and long periods. At periods shorter than 1 second, Q(f) is needed to match the decay of spectral acceleration with distance from the fault seen in the GMPEs. We find that the similarity between the intraevent variability of our simulations and observations increases when small-scale heterogeneity and plasticity are included, which is extremely important because uncertainty in ground motion estimates dominates the overall uncertainty in seismic risk. In addition to GMPEs, we compare with simple
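
    The attenuation law mentioned above can be written out explicitly. A conventional form is shown below, where f_0 denotes the reference frequency (the symbol is an assumption here, not a value taken from the thesis):

    $$ Q(f) = \begin{cases} Q_0, & f \le f_0, \\ Q_0\,(f/f_0)^{n}, & f > f_0, \end{cases} $$

    so that for n > 0 the attenuation, which scales as 1/Q, weakens with increasing frequency.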

  17. Modelling of image-catheter motion for 3-D IVUS.

    PubMed

    Rosales, Misael; Radeva, Petia; Rodriguez-Leor, Oriol; Gil, Debora

    2009-02-01

    Three-dimensional intravascular ultrasound (IVUS) allows visualization and volumetric measurement of coronary lesions through an exploration of the cross sections and longitudinal views of arteries. However, the visualization and subsequent morpho-geometric measurements in IVUS longitudinal cuts are subject to distortion caused by periodic image/vessel motion around the IVUS catheter. ECG-gating and image-gated approaches are usually proposed to overcome the image motion artifact, at the cost of slowing the pullback acquisition or disregarding part of the IVUS data. In this paper, we argue that the image motion is due to 3-D vessel geometry as well as cardiac dynamics, and propose a dynamic model based on the tracking of an elliptical vessel approximation to recover the rigid transformation and align IVUS images without losing any IVUS data. We report an extensive validation with synthetic simulated data and in vivo IVUS sequences of 30 patients, achieving an average reduction of the image artifact of 97% in synthetic data and 79% in real data. Our study shows that IVUS alignment improves longitudinal analysis of the IVUS data and is a necessary step towards accurate reconstruction and volumetric measurements of 3-D IVUS.

  18. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)

  19. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  20. The Visual Priming of Motion-Defined 3D Objects.

    PubMed

    Jiang, Xiong; Jiang, Yang; Parasuraman, Raja

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a "cloudy" SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a "cloudy" SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus--but not a static image or a semantic stimulus--that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed.

  1. The Visual Priming of Motion-Defined 3D Objects

    PubMed Central

    Jiang, Xiong; Jiang, Yang

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus—but not a static image or a semantic stimulus—that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed. PMID:26658496

  2. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and a robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, which is work currently in progress, is described along with preliminary results and the problems encountered.

  3. 3D motion analysis of keratin filaments in living cells

    NASA Astrophysics Data System (ADS)

    Herberich, Gerlind; Windoffer, Reinhard; Leube, Rudolf; Aach, Til

    2010-03-01

    We present a novel and efficient approach for 3D motion estimation of keratin intermediate filaments in vitro. Keratin filaments are elastic cables forming a complex scaffolding within epithelial cells. To understand the mechanisms of filament formation and network organisation under physiological and pathological conditions, quantitative measurements of dynamic network alterations are essential. Therefore we acquired time-lapse series of 3D images using a confocal laser scanning microscope. Based on these image series, we show that a dense vector field can be computed such that the displacements from one frame to the next can be determined. Our method is based on a two-step registration process: First, a rigid pre-registration is applied in order to compensate for possible global cell movement. This step enables the subsequent nonrigid registration to capture only the sought local deformations of the filaments. As the transformation model of the deformable registration algorithm is based on Free Form Deformations, it is well suited for modeling filament network dynamics. The optimization is performed using efficient linear programming techniques such that the huge amount of image data of a time series can be efficiently processed. The evaluation of our results illustrates the potential of our approach.

  4. Collective Motion of Mammalian Cell Cohorts in 3D

    PubMed Central

    Sharma, Yasha; Vargas, Diego A.; Pegoraro, Adrian F.; Lepzelter, David; Weitz, David A.; Zaman, Muhammad H

    2016-01-01

    Collective cell migration is ubiquitous in biology, from development to cancer; it occurs in complex systems comprised of heterogeneous cell types, signals and matrices, and requires large scale regulation in space and time. Understanding how cells achieve organized collective motility is crucial to addressing cellular and tissue function and disease progression. While current two-dimensional model systems recapitulate the dynamic properties of collective cell migration, quantitative three-dimensional equivalent model systems have proved elusive. To establish such a model system, we study cell collectives by tracking individuals within cell cohorts embedded in three dimensional collagen scaffolding. We develop a custom algorithm to quantify the temporal and spatial heterogeneity of motion in cell cohorts during motility events. In the absence of external driving agents, we show that these cohorts rotate in short bursts, <2 hours, and translate for up to 6 hours. We observe, track, and analyze three dimensional motion of cell cohorts composed of 3–31 cells, and pave a path toward understanding cell collectives in 3D as a complex emergent system. PMID:26549557

  5. Differentially Constrained Motion Planning with State Lattice Motion Primitives

    DTIC Science & Technology

    2012-02-01

    Thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Robotics, The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, February 2012. Author: Mihail N. Pivtoraiko. Thesis committee: Alonzo Kelly (chair), Matt Mason, Tony Stentz, and Steve LaValle (University of Illinois).

  6. A role of 3-D surface-from-motion cues in motion-induced blindness.

    PubMed

    Rosenthal, Orna; Davies, Martin; Aimola Davies, Anne M; Humphreys, Glyn W

    2013-01-01

    Motion-induced blindness (MIB), the illusory disappearance of local targets against a moving mask, has been attributed to both low-level stimulus-based effects and high-level processes, involving selection between local and more global stimulus contexts. Prior work shows that MIB is modulated by binocular disparity-based depth-ordering cues. We assessed whether the depth effect is specific to disparity by studying how monocular 3-D surface from motion affects MIB. Monocular kinetic depth cues were used to create a global 3-D hourglass with concave and convex surfaces. MIB increased for stationary targets on the convex relative to the concave area, extending the role of 3-D cues. Interestingly, this convexity effect was limited to the left visual field--replicating spatial anisotropies in MIB. The data indicate a causal role of general 3-D surface coding in MIB, consistent with MIB being affected by high-level, visual representations.

  7. Naturalistic arm movements during obstacle avoidance in 3D and the identification of movement primitives.

    PubMed

    Grimme, Britta; Lipinski, John; Schöner, Gregor

    2012-10-01

    By studying human movement in the laboratory, a number of regularities and invariants such as planarity and the principle of isochrony have been discovered. The theoretical idea has gained traction that movement may be generated from a limited set of movement primitives that would encode these invariants. In this study, we ask if invariants and movement primitives capture naturalistic human movement. Participants moved objects to target locations while avoiding obstacles using unconstrained arm movements in three dimensions. Two experiments manipulated the spatial layout of targets, obstacles, and the locations in the transport movement where an obstacle was encountered. We found that all movement trajectories were planar, with the inclination of the movement plane reflecting the obstacle constraint. The timing of the movement was consistent with both global isochrony (same movement time for variable path lengths) and local isochrony (same movement time for two components of the obstacle avoidance movement). The identified movement primitives of transport (movement from start to target position) and lift (movement perpendicular to transport within the movement plane) varied independently with obstacle conditions. Their scaling accounted for the observed double peak structure of movement speed. Overall, the observed naturalistic movement was astoundingly regular. Its decomposition into primitives suggests simple mechanisms for movement generation.

  8. Reliability of 3D upper limb motion analysis in children with obstetric brachial plexus palsy.

    PubMed

    Mahon, Judy; Malone, Ailish; Kiernan, Damien; Meldrum, Dara

    2017-03-01

    Kinematics, measured by 3D upper limb motion analysis (3D-ULMA), can potentially increase understanding of movement patterns by quantifying individual joint contributions. Reliability in children with obstetric brachial plexus palsy (OBPP) has not been established.

  9. Faceless identification: a model for person identification using the 3D shape and 3D motion as cues

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Li, Haibo

    1999-02-01

    Person identification by using biometric methods based on image sequences, or still images, often requires a controllable and cooperative environment during the image capturing stage. In the forensic case the situation is more likely to be the opposite. In this work we propose a method that makes use of the anthropometry of the human body and human actions as cues for identification. Image sequences from surveillance systems are used, which can be seen as monocular image sequences. A 3D deformable wireframe body model is used as a platform to handle the non-rigid information of the 3D shape and 3D motion of the human body from the image sequence. A recursive method for estimating global motion and local shape variations is presented, using two recursive feedback systems.

  10. Determining 3-D motion and structure from image sequences

    NASA Technical Reports Server (NTRS)

    Huang, T. S.

    1982-01-01

    A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which motion and structure parameters are determined by solving a set of eight linear equations and a singular value decomposition of a 3x3 matrix. It is shown that the solution thus obtained is unique.
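
    A compact numpy sketch in the spirit of the classical eight-point construction the abstract describes (Huang's exact formulation may differ): eight or more correspondences give a homogeneous linear system whose solution is the essential matrix, and an SVD of that 3x3 matrix yields a rotation and a translation direction. Normalized image coordinates are assumed, and the cheirality check that selects among the four (R, t) candidates is omitted.

    ```python
    import numpy as np

    def essential_from_points(x1, x2):
        """Estimate the essential matrix E from N >= 8 correspondences.
        x1, x2: (N, 2) normalized image coordinates satisfying x2_h^T E x1_h = 0."""
        x1h = np.hstack([x1, np.ones((len(x1), 1))])
        x2h = np.hstack([x2, np.ones((len(x2), 1))])
        # One linear equation per correspondence in the 9 entries of E.
        A = np.array([np.outer(p2, p1).ravel() for p1, p2 in zip(x1h, x2h)])
        _, _, Vt = np.linalg.svd(A)
        E = Vt[-1].reshape(3, 3)
        U, _, Vt = np.linalg.svd(E)          # enforce the rank-2 constraint
        return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

    def decompose_essential(E):
        """One candidate rotation R and translation direction t (up to scale)."""
        U, _, Vt = np.linalg.svd(E)
        W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
        R = U @ W @ Vt
        if np.linalg.det(R) < 0:             # keep a proper rotation
            R = -R
        return R, U[:, 2]
    ```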

  11. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy.

    PubMed

    De Silva, Tharindu; Fenster, Aaron; Cool, Derek W; Gardi, Lori; Romagnoli, Cesare; Samarabandu, Jagath; Ward, Aaron D

    2013-02-01

    Three-dimensional (3D) transrectal ultrasound (TRUS)-guided systems have been developed to improve targeting accuracy during prostate biopsy. However, prostate motion during the procedure is a potential source of error that can cause target misalignments. The authors present an image-based registration technique to compensate for prostate motion by registering the live two-dimensional (2D) TRUS images acquired during the biopsy procedure to a preacquired 3D TRUS image. The registration must be performed both accurately and quickly in order to be useful during the clinical procedure. The authors implemented an intensity-based 2D-3D rigid registration algorithm optimizing the normalized cross-correlation (NCC) metric using Powell's method. The 2D TRUS images acquired during the procedure prior to biopsy gun firing were registered to the baseline 3D TRUS image acquired at the beginning of the procedure. The accuracy was measured by calculating the target registration error (TRE) using manually identified fiducials within the prostate; these fiducials were used for validation only and were not provided as inputs to the registration algorithm. The authors also evaluated the accuracy when the registrations were performed continuously throughout the biopsy by acquiring and registering live 2D TRUS images every second, measuring the improvement in accuracy obtained by continuously compensating for motion during the procedure. To further validate the method using a more challenging data set, registrations were performed using 3D TRUS images acquired while intentionally exerting different levels of ultrasound probe pressure, in order to measure the performance of the algorithm when the prostate tissue was intentionally deformed. In this data set, biopsy scenarios were simulated by extracting 2D frames from the 3D TRUS images and registering them to the baseline 3D image. A graphics processing unit (GPU)-based implementation was used to improve the
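
    A skeletal illustration of intensity-based rigid registration driven by normalized cross-correlation and Powell's method, in the spirit of the approach summarized above. For brevity the 2D-from-3D resampling is reduced to extracting one axis-parallel slice of the rigidly transformed volume; the slice geometry, interpolation details, and GPU implementation of the actual algorithm are not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import affine_transform
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def ncc(a, b):
        """Normalized cross-correlation of two equally shaped images."""
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float(np.mean(a * b))

    def resample_slice(volume, params, slice_index):
        """Rigidly transform the volume (3 Euler angles in degrees plus 3 voxel
        translations) and return one axis-parallel slice."""
        R = Rotation.from_euler("xyz", params[:3], degrees=True).as_matrix()
        center = (np.array(volume.shape) - 1) / 2.0
        offset = center - R @ center - params[3:]
        return affine_transform(volume, R, offset=offset, order=1)[slice_index]

    def register_2d_to_3d(live_2d, volume_3d, slice_index, x0=np.zeros(6)):
        """Rigid parameters maximizing NCC between a live 2D image and the
        corresponding slice of the transformed baseline volume."""
        cost = lambda p: -ncc(live_2d, resample_slice(volume_3d, p, slice_index))
        res = minimize(cost, x0, method="Powell", options={"xtol": 1e-3})
        return res.x, -res.fun
    ```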

  12. Rigid Body Motion in Stereo 3D Simulation

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2010-01-01

    This paper addresses the difficulties experienced by first-grade students studying rigid body motion at Sofia University. Most quantities describing the rigid body are in relations that the students find hard to visualize and understand. They also lose the notion of cause-result relations between vector quantities, such as the relation between…

  13. Tracking 3-D body motion for docking and robot control

    NASA Technical Reports Server (NTRS)

    Donath, M.; Sorensen, B.; Yang, G. B.; Starr, R.

    1987-01-01

    An advanced method of tracking three-dimensional motion of bodies has been developed. This system has the potential to dynamically characterize machine and other structural motion, even in the presence of structural flexibility, thus facilitating closed-loop structural motion control. The system's operation is based on the concept that the intersection of three planes defines a point. Three rotating planes of laser light, fixed and moving photovoltaic diode targets, and a pipelined architecture of analog and digital electronics are used to locate multiple targets, whose number is limited only by available computer memory. Data collection rates are a function of the laser scan rotation speed and are currently selectable up to 480 Hz. The tested performance on a preliminary prototype designed for 0.1-inch accuracy (for tracking human motion) at a 480 Hz data rate includes a worst-case resolution of 0.8 mm (0.03 inches), a repeatability of plus or minus 0.635 mm (plus or minus 0.025 inches), and an absolute accuracy of plus or minus 2.0 mm (plus or minus 0.08 inches) within an eight cubic meter volume, with all results applicable at the 95 percent level of confidence along each coordinate direction. The full six degrees of freedom of a body can be computed by attaching three or more target detectors to the body of interest.
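
    The geometric core of the method, that the intersection of three non-parallel planes defines a point, reduces to a 3x3 linear solve. A minimal sketch with made-up plane parameters standing in for the swept laser planes:

    ```python
    import numpy as np

    def plane_intersection(normals, offsets):
        """Point where three planes n_i . x = d_i meet (the normals must be
        linearly independent, i.e. no two planes parallel or coincident)."""
        N = np.asarray(normals, dtype=float)   # 3x3, one normal per row
        d = np.asarray(offsets, dtype=float)   # length-3 right-hand side
        return np.linalg.solve(N, d)

    # Example: three mutually perpendicular planes crossing at (1, 2, 3)
    print(plane_intersection([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [1, 2, 3]))
    ```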

  15. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869

  16. Lagrangian 3D tracking of fluorescent microscopic objects in motion.

    PubMed

    Darnige, T; Figueroa-Morales, N; Bohec, P; Lindner, A; Clément, E

    2017-05-01

    We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing that determines the displacement of an x-y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on refocusing of the fluorescent object, which determines the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of hertz. This device is particularly well adapted to obtaining trajectories of motile micro-organisms in microfluidic devices with or without flow.
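
    A schematic of the closed-loop x-y tracking idea (measure the offset of the object from a fixed point in the frame, then command a compensating stage move); the stage driver move_stage, the gain, and the pixel size are placeholders, and the real instrument additionally refocuses in z with a piezo mover.

    ```python
    import numpy as np

    def centroid_offset(image, target=None):
        """Offset (dy, dx) of the intensity-weighted centroid from a target point."""
        ys, xs = np.indices(image.shape)
        total = image.sum() + 1e-12
        c = np.array([(ys * image).sum(), (xs * image).sum()]) / total
        if target is None:
            target = (np.array(image.shape) - 1) / 2.0   # frame centre
        return c - target

    def tracking_step(image, move_stage, gain=0.5, pixel_size_um=0.1):
        """One loop iteration: proportional correction of the mechanical stage."""
        dy, dx = centroid_offset(image)
        move_stage(dx * pixel_size_um * gain, dy * pixel_size_um * gain)
    ```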

  17. Simple 3-D stimulus for motion parallax and its simulation.

    PubMed

    Ono, Hiroshi; Chornenkyy, Yevgen; D'Amour, Sarah

    2013-01-01

    Simulation of a given stimulus situation should produce the same perception as the original. Rogers et al (2009 Perception 38 907-911) simulated Wheeler's (1982, PhD thesis, Rutgers University, NJ) motion parallax stimulus and obtained quite different perceptions. Wheeler's observers were unable to reliably report the correct direction of depth, whereas Rogers's were. With three experiments we explored the possible reasons for the discrepancy. Our results suggest that Rogers was able to see depth from the simulation partly due to his experience seeing depth with random dot surfaces.

  18. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively.

  19. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
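
    The per-grid filtering step can be illustrated with a toy constant-velocity Kalman update for a single grid cell; the state is planar position and velocity, and the noise levels are arbitrary placeholders (the polar-grid projection, data association, and spatial smoothing described above are not shown).

    ```python
    import numpy as np

    def kalman_cv_step(x, P, z, dt, q=0.5, r=0.2):
        """One predict/update cycle of a constant-velocity Kalman filter.
        x: state [px, py, vx, vy]; P: 4x4 covariance; z: measured position."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)
        Q = q * np.eye(4)                     # process noise (placeholder)
        R = r * np.eye(2)                     # measurement noise (placeholder)
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                   # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P
    ```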

  20. Nonrigid Autofocus Motion Correction for Coronary MR Angiography with a 3D Cones Trajectory

    PubMed Central

    Ingle, R. Reeve; Wu, Holden H.; Addy, Nii Okai; Cheng, Joseph Y.; Yang, Phillip C.; Hu, Bob S.; Nishimura, Dwight G.

    2014-01-01

    Purpose: To implement a nonrigid autofocus motion correction technique to improve respiratory motion correction of free-breathing whole-heart coronary magnetic resonance angiography (CMRA) acquisitions using an image-navigated 3D cones sequence. Methods: 2D image navigators acquired every heartbeat are used to measure superior-inferior, anterior-posterior, and right-left translation of the heart during a free-breathing CMRA scan using a 3D cones readout trajectory. Various tidal respiratory motion patterns are modeled by independently scaling the three measured displacement trajectories. These scaled motion trajectories are used for 3D translational compensation of the acquired data, and a bank of motion-compensated images is reconstructed. From this bank, a gradient entropy focusing metric is used to generate a nonrigid motion-corrected image on a pixel-by-pixel basis. The performance of the autofocus motion correction technique is compared with rigid-body translational correction and no correction in phantom, volunteer, and patient studies. Results: Nonrigid autofocus motion correction yields improved image quality compared to rigid-body-corrected images and uncorrected images. Quantitative vessel sharpness measurements indicate superiority of the proposed technique in 14 out of 15 coronary segments from three patient and two volunteer studies. Conclusion: The proposed technique corrects nonrigid motion artifacts in free-breathing 3D cones acquisitions, improving image quality compared to rigid-body motion correction. PMID:24006292
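
    The gradient-entropy focusing metric mentioned above can be written in a few lines. This sketch follows a common definition (entropy of the normalized gradient-magnitude distribution, computed per candidate image rather than pixel-by-pixel), which may differ in detail from the paper's implementation; sharper, less motion-corrupted images give lower values, so autofocus keeps the minimum.

    ```python
    import numpy as np

    def gradient_entropy(image, eps=1e-12):
        """Entropy of the normalized gradient magnitudes of an image."""
        gy, gx = np.gradient(np.asarray(image, dtype=float))
        g = np.sqrt(gx ** 2 + gy ** 2)
        p = g / (g.sum() + eps)               # treat magnitudes as a distribution
        return float(-(p * np.log(p + eps)).sum())

    def pick_sharpest(candidates):
        """From a bank of motion-compensated candidates, keep the sharpest."""
        return min(candidates, key=gradient_entropy)
    ```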

  1. Motion-Corrected 3D Sonic Anemometer for Tethersondes and Other Moving Platforms

    NASA Technical Reports Server (NTRS)

    Bognar, John

    2012-01-01

    To date, it has not been possible to use 3D sonic anemometers on tethersondes or similar atmospheric research platforms because of the motion of the supporting platform. A tethersonde module including both a 3D sonic anemometer and associated motion correction sensors has been developed, enabling motion-corrected 3D winds to be measured from a moving platform such as a tethersonde. Blimps and other similar lifting systems are used to support tethersondes, meteorological devices that fly on the tether of a blimp or similar platform. To date, tethersondes have been limited to making basic meteorological measurements (pressure, temperature, humidity, and wind speed and direction). The motion of the tethersonde has precluded the addition of 3D sonic anemometers, which can be used for high-speed flux measurements, thereby limiting what has been achieved to date with tethersondes. The tethersonde modules fly on a tether that can be constantly moving and swaying, which would introduce enormous error into the output of an uncorrected 3D sonic anemometer. The required motion correction must be implemented in a low-weight, low-cost manner to be suitable for this application. Until now, flux measurements using 3D sonic anemometers could only be made if the anemometer was located on a rigid, fixed platform such as a tower, limiting the areas in which they could be set up and used. The purpose of the innovation was to enable precise 3D wind and flux measurements to be made using tethersondes. In brief, a 3D accelerometer and a 3D gyroscope were added to a tethersonde module along with a 3D sonic anemometer. This combination allowed the necessary package motions to be measured, which were then mathematically combined with the measured winds to yield motion-corrected 3D winds. At the time of this reporting, no tethersonde has been able to make any wind measurement other than a basic wind speed and direction measurement. The addition of a 3D sonic

  2. 3D Motion Modeling and Reconstruction of Left Ventricle Wall in Cardiac MRI.

    PubMed

    Yang, Dong; Wu, Pengxiang; Tan, Chaowei; Pohl, Kilian M; Axel, Leon; Metaxas, Dimitris

    2017-06-01

    The analysis of left ventricle (LV) wall motion is a critical step for understanding cardiac functioning mechanisms and clinical diagnosis of ventricular diseases. We present a novel approach for 3D motion modeling and analysis of LV wall in cardiac magnetic resonance imaging (MRI). First, a fully convolutional network (FCN) is deployed to initialize myocardium contours in 2D MR slices. Then, we propose an image registration algorithm to align MR slices in space and minimize the undesirable motion artifacts from inconsistent respiration. Finally, a 3D deformable model is applied to recover the shape and motion of myocardium wall. Utilizing the proposed approach, we can visually analyze 3D LV wall motion, evaluate cardiac global function, and diagnose ventricular diseases.

  3. Bayesian motion estimation accounts for a surprising bias in 3D vision

    PubMed Central

    Welchman, Andrew E.; Lam, Judith M.; Bülthoff, Heinrich H.

    2008-01-01

    Determining the approach of a moving object is a vital survival skill that depends on the brain combining information about lateral translation and motion-in-depth. Given the importance of sensing motion for obstacle avoidance, it is surprising that humans make errors, reporting an object will miss them when it is on a collision course with their head. Here we provide evidence that biases observed when participants estimate movement in depth result from the brain's use of a “prior” favoring slow velocity. We formulate a Bayesian model for computing 3D motion using independently estimated parameters for the shape of the visual system's slow velocity prior. We demonstrate the success of this model in accounting for human behavior in separate experiments that assess both sensitivity and bias in 3D motion estimation. Our results show that a surprising perceptual error in 3D motion perception reflects the importance of prior probabilities when estimating environmental properties. PMID:18697948

  4. Bayesian motion estimation accounts for a surprising bias in 3D vision.

    PubMed

    Welchman, Andrew E; Lam, Judith M; Bülthoff, Heinrich H

    2008-08-19

    Determining the approach of a moving object is a vital survival skill that depends on the brain combining information about lateral translation and motion-in-depth. Given the importance of sensing motion for obstacle avoidance, it is surprising that humans make errors, reporting an object will miss them when it is on a collision course with their head. Here we provide evidence that biases observed when participants estimate movement in depth result from the brain's use of a "prior" favoring slow velocity. We formulate a Bayesian model for computing 3D motion using independently estimated parameters for the shape of the visual system's slow velocity prior. We demonstrate the success of this model in accounting for human behavior in separate experiments that assess both sensitivity and bias in 3D motion estimation. Our results show that a surprising perceptual error in 3D motion perception reflects the importance of prior probabilities when estimating environmental properties.
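
    A one-dimensional toy version of the Bayesian estimator invoked above: with a Gaussian likelihood centred on the measured velocity and a zero-mean Gaussian "slow" prior, the posterior mean shrinks the estimate toward zero, and noisier measurements are shrunk more. The 1D reduction and the numbers are illustrative only, not the model fitted in the paper.

    ```python
    def map_velocity(v_measured, sigma_likelihood, sigma_prior):
        """Posterior mean with likelihood N(v_measured, s_l^2) and slow-velocity
        prior N(0, s_p^2): a shrinkage of the measurement toward zero."""
        w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_likelihood ** 2)
        return w * v_measured

    # Larger measurement noise -> stronger bias toward slow (small) velocities.
    for noise in (0.1, 0.5, 1.0):
        print(noise, map_velocity(v_measured=2.0, sigma_likelihood=noise, sigma_prior=1.0))
    ```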

  5. Identifying and modeling motion primitives for the hydromedusae Sarsia tubulosa and Aequorea victoria.

    PubMed

    Sledge, Isaac; Krieg, Michael; Lipinski, Doug; Mohseni, Kamran

    2015-10-23

    The movements of organisms can be thought of as aggregations of motion primitives: motion segments containing one or more significant actions. Here, we present a means to identify and characterize motion primitives from recorded movement data. We address these problems by assuming that the motion sequences can be characterized as a series of dynamical-system-based pattern generators. By adopting a nonparametric, Bayesian formalism for learning and simplifying these pattern generators, we arrive at a purely data-driven model to automatically identify breakpoints in the movement sequences. We apply this model to swimming sequences from two hydromedusae. The first hydromedusa is the prolate Sarsia tubulosa, for which we obtain five motion primitives that correspond to bell cavity pressurization, jet formation, jetting, cavity fluid refill, and coasting. The second hydromedusa is the oblate Aequorea victoria, for which we obtain five motion primitives that correspond to bell compression, vortex separation, cavity fluid refill, vortex formation, and coasting. Our experimental results indicate that the breakpoints between primitives are correlated with transitions in the bell geometry, vortex formation and shedding, and changes in derived dynamical quantities. These dynamical quantities include pressure, power, drag, and thrust. Such findings suggest that dynamical information is inherently present in the observed motions.
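
    As a loose illustration of treating movement as a sequence of linear pattern generators, the sketch below fits a first-order linear dynamical model in a sliding window and flags frames where the one-step prediction error jumps as candidate breakpoints. This is a simplification under assumed window and threshold choices, not the nonparametric Bayesian model used in the paper.

      import numpy as np

      def breakpoint_scores(x, window=20):
          """x: (T, d) kinematic time series. Returns a per-frame novelty score."""
          T = len(x)
          scores = np.zeros(T)
          for t in range(window, T - 1):
              past, future = x[t - window:t], x[t - window + 1:t + 1]
              A, *_ = np.linalg.lstsq(past, future, rcond=None)   # local dynamics x_{k+1} ~ x_k A
              scores[t] = np.linalg.norm(x[t + 1] - x[t] @ A)     # one-step prediction error
          return scores

      # Frames where the score exceeds, say, its mean plus two standard deviations
      # are candidate transitions between motion primitives.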

  6. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two cues considered in this system. The system basically consists of three processes: the low-level process to extract image features, the middle-level process to establish correspondence in the stereo (spatial) and motion (temporal) modalities, and the high-level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.
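
    The last step, recovering 3D coordinates of matched corner points, can be illustrated with standard linear (DLT) triangulation from two camera projection matrices. This is a generic sketch under the usual pinhole-camera assumptions, not necessarily the estimator used in the report.

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          """P1, P2: (3, 4) camera projection matrices; x1, x2: matched 2D points (pixels)."""
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)        # homogeneous least squares
          X = Vt[-1]
          return X[:3] / X[3]                # 3D point in the world frame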

  7. Tracking 3D Picometer-Scale Motions of Single Nanoparticles with High-Energy Electron Probes

    PubMed Central

    Ogawa, Naoki; Hoshisashi, Kentaro; Sekiguchi, Hiroshi; Ichiyanagi, Kouhei; Matsushita, Yufuku; Hirohata, Yasuhisa; Suzuki, Seiichi; Ishikawa, Akira; Sasaki, Yuji C.

    2013-01-01

    We observed the high-speed anisotropic motion of an individual gold nanoparticle in 3D at the picometer scale using a high-energy electron probe. Diffracted electron tracking (DET) using the electron back-scattered diffraction (EBSD) patterns of labeled nanoparticles under wet-SEM allowed us to measure the time-resolved 3D motion of individual nanoparticles in aqueous conditions with very high accuracy. The highly precise DET data corresponded to 3D anisotropic log-normal Gaussian distributions over time at the millisecond scale. PMID:23868465

  8. 3D surface perception from motion involves a temporal–parietal network

    PubMed Central

    Beer, Anton L.; Watanabe, Takeo; Ni, Rui; Sasaki, Yuka; Andersen, George J.

    2010-01-01

    Previous research has suggested that three-dimensional (3D) structure-from-motion (SFM) perception in humans involves several motion-sensitive occipital and parietal brain areas. By contrast, SFM perception in nonhuman primates seems to involve the temporal lobe including areas MT, MST and FST. The present functional magnetic resonance imaging study compared several motion-sensitive regions of interest including the superior temporal sulcus (STS) while human observers viewed horizontally moving dots that defined either a 3D corrugated surface or a 3D random volume. Low-level stimulus features such as dot density and velocity vectors as well as attention were tightly controlled. Consistent with previous research we found that 3D corrugated surfaces elicited stronger responses than random motion in occipital and parietal brain areas including area V3A, the ventral and dorsal intraparietal sulcus, the lateral occipital sulcus and the fusiform gyrus. Additionally, 3D corrugated surfaces elicited stronger activity in area MT and the STS but not in area MST. Brain activity in the STS but not in area MT correlated with interindividual differences in 3D surface perception. Our findings suggest that area MT is involved in the analysis of optic flow patterns such as speed gradients and that the STS in humans plays a greater role in the analysis of 3D SFM than previously thought. PMID:19674088

  9. Model-based risk assessment for motion effects in 3D radiotherapy of lung tumors

    NASA Astrophysics Data System (ADS)

    Werner, René; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Handels, Heinz

    2012-02-01

    Although 4D CT imaging becomes available in an increasing number of radiotherapy facilities, 3D imaging and planning is still standard in current clinical practice. In particular for lung tumors, respiratory motion is a known source of uncertainty and should be accounted for during radiotherapy planning, which is difficult when only a 3D planning CT is available. In this contribution, we propose applying a statistical lung motion model to predict patients' motion patterns and to estimate dosimetric motion effects in lung tumor radiotherapy if only 3D images are available. Being generated based on 4D CT images of patients with unimpaired lung motion, the model tends to overestimate lung tumor motion. It therefore promises conservative risk assessment regarding tumor dose coverage. This is exemplarily evaluated using treatment plans of lung tumor patients with different tumor motion patterns and for two treatment modalities (conventional 3D conformal radiotherapy and step-and-shoot intensity-modulated radiotherapy). For the test cases, 4D CT images are available. Thus, a standard registration-based 4D dose calculation is also performed, which serves as a reference to judge the plausibility of the model-based 4D dose calculation. It will be shown that, if combined with an additional simple patient-specific breathing surrogate measurement (here: spirometry), the model-based dose calculation provides reasonable risk assessment of respiratory motion effects.

  10. On the integrability of the motion of 3D-Swinging Atwood machine and related problems

    NASA Astrophysics Data System (ADS)

    Elmandouh, A. A.

    2016-03-01

    In the present article, we study the problem of the motion of the 3D-Swinging Atwood machine. A new integrable case for this problem is announced. We also point out a new integrable case describing the motion of a heavy particle on a tilted cone.

  11. Geometric uncertainty of 2D projection imaging in monitoring 3D tumor motion.

    PubMed

    Suh, Yelin; Dieterich, Sonja; Keall, Paul J

    2007-06-21

    The purpose of this study was to investigate the accuracy of two-dimensional (2D) projection imaging methods in three-dimensional (3D) tumor motion monitoring. Many commercial linear accelerator types have projection imaging capabilities, and tumor motion monitoring is useful for motion-inclusive, respiratory-gated or tumor-tracking strategies. Since 2D projection imaging is limited in its ability to resolve the motion along the imaging beam axis, there is unresolved motion when monitoring 3D tumor motion. From the 3D tumor motion data of 160 treatment fractions for 46 thoracic and abdominal cancer patients, the unresolved motion due to the geometric limitation of 2D projection imaging was calculated as the displacement along the imaging beam axis for different beam angles and time intervals. The geometric uncertainty in monitoring 3D motion caused by the unresolved motion of 2D imaging was quantified using the root-mean-square (rms) metric. The geometric uncertainty showed interfractional and intrafractional variation. Patient-to-patient variation was much more significant than variation across different time intervals. For the patient cohort studied, as the time intervals increase, the rms, minimum and maximum values of the rms uncertainty show decreasing tendencies for the lung patients but increasing tendencies for the liver and retroperitoneal patients, which could be attributed to patient relaxation. The geometric uncertainty was smaller for coplanar treatments than for non-coplanar treatments, as superior-inferior (SI) tumor motion, the predominant motion from patient respiration, could always be resolved for coplanar treatments. The overall rms of the rms uncertainty was 0.13 cm for all treatment fractions and 0.18 cm for the treatment fractions whose average breathing peak-trough ranges were more than 0.5 cm. The geometric uncertainty for 2D imaging varies depending on the tumor site, tumor motion range, time interval and beam angle as well as between patients, between fractions and within a
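
    The geometric limitation discussed here can be stated compactly: the component of a 3D displacement along the imaging beam axis is invisible to a single 2D projection, and the paper summarizes it with an rms metric. A minimal sketch (variable names are illustrative):

      import numpy as np

      def unresolved_rms(displacements, beam_dir):
          """displacements: (N, 3) tumor motion samples; beam_dir: (3,) imaging beam axis.
          Returns the rms of the motion component that 2D projection imaging cannot resolve."""
          beam_dir = beam_dir / np.linalg.norm(beam_dir)
          unresolved = displacements @ beam_dir          # projection onto the beam axis
          return np.sqrt(np.mean(unresolved ** 2))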

  12. Local motion-compensated method for high-quality 3D coronary artery reconstruction.

    PubMed

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-12-01

    The 3D reconstruction of the coronary artery from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was first reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that a high-quality 3D reconstruction could be obtained and that the result was comparable to the state-of-the-art method.

  13. Local motion-compensated method for high-quality 3D coronary artery reconstruction

    PubMed Central

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-01-01

    The 3D reconstruction of the coronary artery from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was first reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that a high-quality 3D reconstruction could be obtained and that the result was comparable to the state-of-the-art method. PMID:28018741

  14. Analyzing Non-circular Motions in Spiral Galaxies Through 3D Spectroscopy

    NASA Astrophysics Data System (ADS)

    Fuentes-Carrera, I.; Rosado, M.; Amram, P.

    3D spectroscopic techniques allow the assessment of different types of motions in extended objects. In the case of spiral galaxies, these techniques allow us to trace not only the (almost) circular motion of the ionized gas, but also the motions arising from the presence of structures such as bars, spiral arms and tidal features. We present an analysis of non-circular motions in spiral galaxies in interacting pairs using scanning Fabry-Perot interferometry of emission lines. We show how this analysis can be helpful to differentiate circular from non-circular motions in the kinematical analysis of this type of galaxies.

  15. Verification of real sensor motion for a high-dynamic 3D measurement inspection system

    NASA Astrophysics Data System (ADS)

    Breitbarth, Andreas; Correns, Martin; Zimmermann, Manuel; Zhang, Chen; Rosenberger, Maik; Schambach, Jörg; Notni, Gunther

    2017-06-01

    Inline three-dimensional measurements are a growing part of optical inspection. Given increasing production capacities and economic constraints, dynamic measurements under motion are unavoidable. When a sequence of different patterns is used, as is generally done in fringe projection systems, relative movements of the measurement object with respect to the 3D sensor between the images of one pattern sequence have to be compensated. For the application of fully automated optical inspection of circuit boards on an assembly line, knowledge of the relative speed of movement between the measurement object and the 3D sensor system should be used inside the motion-compensation algorithms. Optimally, this relative speed is constant over the whole measurement process and consists of only one motion direction, to avoid sensor vibrations. The quantitative evaluation of these two assumptions and the impact of the resulting errors on 3D accuracy are the subject of the research project described in this paper. For our experiments we use a glass etalon with non-transparent circles and transmitted light. Focusing on the circle borders, this is one of the most reliable ways to determine subpixel positions, using a set of search rays; the intersection point of all rays characterizes the center of each circle. Based on these circle centers, determined with a precision of approximately 1/50 pixel, the motion vector between two images can be calculated and compared with the input motion vector. Overall, the results are used to optimize the weight distribution of the 3D sensor head and reduce non-uniform vibrations. The result is a dynamic 3D measurement system with a motion-vector error of about 4 µm. Based on this outcome, simulations yield a 3D standard deviation at planar object regions of 6 µm; the same system yields a 3D standard deviation of 9 µm without the optimization of the weight distribution.
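
    A much-simplified stand-in for the circle-center step is sketched below: instead of the ray-based border search described in the abstract, it estimates each circle center as an intensity-weighted centroid and takes the difference between two frames as the measured motion vector. The centroid method and all names are illustrative assumptions, not the system's actual algorithm.

      import numpy as np

      def centroid(img):
          """Sub-pixel centroid of a dark circle imaged in transmitted light."""
          w = img.max() - img.astype(float)              # weight dark (circle) pixels
          ys, xs = np.indices(img.shape)
          return np.array([(w * xs).sum() / w.sum(), (w * ys).sum() / w.sum()])

      def measured_motion_vector(img_t0, img_t1):
          return centroid(img_t1) - centroid(img_t0)     # compare with the input motion vector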

  16. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  17. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models.

    PubMed

    Dhou, S; Hurwitz, M; Mishra, P; Cai, W; Rottmann, J; Li, R; Williams, C; Wagar, M; Berbeco, R; Ionascu, D; Lewis, J H

    2015-05-07

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.

  18. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.

  19. The effect of motion on IMRT - looking at interplay with 3D measurements

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Yan, H.; Oldham, M.; Juang, T.; Adamovics, J.; Yin, F. F.

    2013-06-01

    Clinical recommendations to address tumor motion management have been derived from studies dealing with simulations and 2D measurements. 3D measurements may provide more insight and possibly alter the current motion management guidelines. This study provides an initial look at true 3D measurements involving leaf-motion deliveries, using a motion phantom and the PRESAGE/DLOS dosimetry system. An IMRT plan and a VMAT plan were delivered to the phantom and analyzed by means of DVHs to determine whether the expansion of treatment volumes based on known imaging motion adequately covers the target. The DVHs confirmed that for these deliveries the expansion volumes were adequate to treat the intended target, although further studies should be conducted to allow for differences in parameters that could alter the results, such as delivery dose and breathing rate.

  20. Spherical navigator echoes for full 3D rigid body motion measurement in MRI

    NASA Astrophysics Data System (ADS)

    Welch, Edward B.; Manduca, Armando; Grimm, Roger; Ward, Heidi; Jack, Clifford R.

    2001-07-01

    We are developing a 3-D spherical navigator (SNAV) echo technique for MRI that can measure rigid body motion in all six degrees of freedom simultaneously, in a single echo, by sampling a spherical shell in k-space. MRI pulse sequences were developed to acquire varying amounts of data on such a shell. 3-D rotations of an imaged object simply rotate the data on this shell, and can be detected by registration of magnitude values. 3-D translations add phase shifts to the data on the shell, and can be detected with a weighted least squares fit to the phase differences at corresponding points. Data collected with a computer controlled motion phantom with known rotational and translational motions was used to evaluate the technique. The accuracy and precision of the technique depend on the sampling density, with roughly 1000 sample points necessary for accurate detection to within the error limits of the motion phantom. This number of samples can be captured in a single SNAV echo with a 3-D helical spiral trajectory. Motion detection in MRI with spherical navigator echoes is thus feasible and practical. Accurate motion measurements about all three axes, suitable for retrospective or prospective correction, can be obtained in a single pulse sequence.
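
    The translation part of the method can be illustrated with a short least-squares fit: a translation t multiplies the k-space data by a linear phase, so the phase difference at each sampled point k on the shell is approximately -2*pi*(k . t). The sketch below assumes unwrapped phase differences and a particular Fourier sign convention, and is only an illustration of the weighted fit described above.

      import numpy as np

      def estimate_translation(k_points, phase_diff, weights=None):
          """k_points: (N, 3) sample positions on the spherical shell (cycles/mm);
          phase_diff: (N,) unwrapped phase differences (radians) versus the reference shell."""
          A = -2.0 * np.pi * k_points                    # model: phase_diff = -2*pi * k . t
          if weights is not None:
              A, phase_diff = A * weights[:, None], phase_diff * weights
          t, *_ = np.linalg.lstsq(A, phase_diff, rcond=None)
          return t                                       # (3,) translation in mm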

  1. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as the motion-sensing remote controllers of the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from cumbersome game controllers. Moreover, for communication between humans and computers, video-based HCI is crucial, since it is intuitive, accessible, and inexpensive. On the other hand, extracting semantic low-level features from video of human motion is still a major challenge: the achievable accuracy depends strongly on each subject's characteristics and on environmental noise. More recently, 3D motion-capture data have been used for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports games, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions of specific performances (e.g., a golf swing or walking). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which a column corresponds to a sub-body part and a row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting the corresponding columns. Unlike low-level feature values computed from video of human motion, the 3D motion-capture data matrix does not contain pixel values but is closer to human-level semantics.
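
    As the passage notes, each column of the capture matrix corresponds to a sub-body-part channel and each row to a time frame, so a sub-body motion is obtained simply by slicing columns. A small illustration (the matrix size and column layout are hypothetical):

      import numpy as np

      motion = np.random.rand(300, 62)      # 300 captured frames x 62 joint-angle channels
      columns = {"right_arm": slice(20, 26), "left_leg": slice(44, 50)}   # hypothetical layout
      right_arm_motion = motion[:, columns["right_arm"]]                  # (300, 6) trajectory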

  3. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed using fluoroscopic image guidance, is becoming increasingly important. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 ± 0.4 mm and an average 3-D tracking error of 0.8 ± 0.5 mm. These results demonstrate that model-based motion compensation based on 2-D/3-D registration is both feasible and accurate.

  4. Handling Motion-Blur in 3D Tracking and Rendering for Augmented Reality.

    PubMed

    Park, Youngmin; Lepetit, Vincent; Woo, Woontack

    2012-09-01

    The contribution of this paper is twofold. First, we show how to extend the ESM algorithm to handle motion blur in 3D object tracking. ESM is a powerful algorithm for template-matching-based tracking, but it can fail under motion blur. We introduce an image formation model that explicitly considers the possibility of blur, which results in a generalization of the original ESM algorithm. This allows the tracker to converge faster, more accurately, and more robustly, even under large amounts of blur. Our second contribution is an efficient method for rendering the virtual objects under the estimated motion blur. It renders two images of the object under 3D perspective and warps them to create many intermediate images. By fusing these images we obtain a final image of the virtual objects blurred consistently with the captured image. Because warping is much faster than 3D rendering, we can create realistically blurred images at very low computational cost.
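
    The rendering idea, producing a blurred appearance by fusing many warped intermediates rather than re-rendering the scene, can be illustrated with a translation-only simplification. The actual method warps two perspective renderings under the estimated 3D blur; treat the following as a toy version with illustrative names.

      import numpy as np
      from scipy.ndimage import shift

      def blur_by_fusion(frame, displacement, n_steps=16):
          """Average copies of `frame` shifted along the estimated in-exposure displacement."""
          steps = np.linspace(0.0, 1.0, n_steps)
          warped = [shift(frame, s * np.asarray(displacement), order=1) for s in steps]
          return np.mean(warped, axis=0)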

  5. Recursive estimation of 3D motion and surface structure from local affine flow parameters.

    PubMed

    Calway, Andrew

    2005-04-01

    A recursive structure from motion algorithm based on optical flow measurements taken from an image sequence is described. It provides estimates of surface normals in addition to 3D motion and depth. The measurements are affine motion parameters which approximate the local flow fields associated with near-planar surface patches in the scene. These are integrated over time to give estimates of the 3D parameters using an extended Kalman filter. This also estimates the camera focal length and, so, the 3D estimates are metric. The use of parametric measurements means that the algorithm is computationally less demanding than previous optical flow approaches and the recursive filter builds in a degree of noise robustness. Results of experiments on synthetic and real image sequences demonstrate that the algorithm performs well.

  6. Workbench for 3D target detection and recognition from airborne motion stereo and ladar imagery

    NASA Astrophysics Data System (ADS)

    Roy, Simon; Se, Stephen; Kotamraju, Vinay; Maheux, Jean; Nadeau, Christian; Larochelle, Vincent; Fournier, Jonathan

    2010-04-01

    3D imagery has a well-known potential for improving situational awareness and battlespace visualization by providing enhanced knowledge of uncooperative targets. This potential arises from the numerous advantages that 3D imagery has to offer over traditional 2D imagery, thereby increasing the accuracy of automatic target detection (ATD) and recognition (ATR). Despite advancements in both 3D sensing and 3D data exploitation, 3D imagery has yet to demonstrate a true operational gain, partly due to the processing burden of the massive dataloads generated by modern sensors. In this context, this paper describes the current status of a workbench designed for the study of 3D ATD/ATR. Among the project goals is the comparative assessment of algorithms and 3D sensing technologies given various scenarios. The workbench is comprised of three components: a database, a toolbox, and a simulation environment. The database stores, manages, and edits input data of various types such as point clouds, video, still imagery frames, CAD models and metadata. The toolbox features data processing modules, including range data manipulation, surface mesh generation, texture mapping, and a shape-from-motion module to extract a 3D target representation from video frames or from a sequence of still imagery. The simulation environment includes synthetic point cloud generation, 3D ATD/ATR algorithm prototyping environment and performance metrics for comparative assessment. In this paper, the workbench components are described and preliminary results are presented. Ladar, video and still imagery datasets collected during airborne trials are also detailed.

  7. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to control intuitively the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.

  8. Depth representation of moving 3-D objects in apparent-motion path.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2008-01-01

    Apparent motion is perceived when two objects are presented alternately at different positions. The internal representations of apparently moving objects are formed in an apparent-motion path which lacks physical inputs. We investigated the depth information contained in the representation of 3-D moving objects in an apparent-motion path. We examined how probe objects, briefly placed in the motion path, affected the perceived smoothness of apparent motion. The probe objects were either 3-D objects, defined by shading or by disparity (convex/concave), or 2-D (flat) objects, while the moving objects were convex or concave objects. We found that flat probe objects induced significantly smoother motion perception than concave probe objects only in the case of the convex moving objects. However, convex probe objects did not lead to smoother motion as the flat objects did, although the convex probe objects contained the same depth information as the moving objects. Moreover, the difference between probe objects was reduced when the moving objects were concave. These counterintuitive results were consistent across conditions in which both depth cues were used. The results suggest that the internal representations contain incomplete depth information that is intermediate between that of 2-D and 3-D objects.

  9. 3D coronary motion tracking in swine models with MR tracking catheters.

    PubMed

    Schmidt, Ehud J; Yoneyama, Ryuichi; Dumoulin, Charles L; Darrow, Robert D; Klein, Eric; Kiruluta, Andrew J M; Hayase, Motoya

    2009-01-01

    The aim was to develop MR-tracked catheters to delineate the three-dimensional motion of coronary arteries at high spatial and temporal resolution. Catheters with three tracking microcoils were placed into nine swine. During breath-holds, electrocardiographic (ECG)-synchronized 3D motion was measured at varying vessel depths. 3D motion was measured in American Heart Association left anterior descending (LAD) segments 6-7, left circumflex (LCX) segments 11-15, and right coronary artery (RCA) segments 2-3, at 60-115 beats/min heart rates. Similar-length cardiac cycles were averaged. Intercoil cross-correlation identified early systolic phase (ES) and determined segment motion delay. Translational and rotational motion, as a function of cardiac phase, is shown, with directionality and amplitude varying along the vessel length. Rotation (peak-to-peak solid-angle RCA approximately 0.10, LAD approximately 0.06, LCX approximately 0.18 radian) occurs primarily during fast translational motion and increases distally. LCX displacement increases with heart rate by 18%. Phantom simulations of motion effects on high-resolution images, using RCA results, show artifacts due to translation and rotation. Magnetic resonance imaging (MRI) tracking catheters quantify motion at 20 fps and 1 mm³ resolution at multiple vessel depths, exceeding that available with other techniques. Imaging artifacts due to rotation are demonstrated. Motion-tracking catheters may provide physiological information during interventions and improve imaging spatial resolution.

  10. 3D motion picture recording by parallel phase-shifting digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Tahara, Tatsuki; Xia, Peng; Kakue, Takashi; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2013-12-01

    Three-dimensional (3-D) motion-picture recording by parallel phase-shifting digital holographic microscopy, which is capable of instantaneous 3-D recording of dynamic phenomena in a microscopic field of view, is presented. Parallel phase-shifting digital holography is a scheme to record multiple phase-shifted holograms with a single-shot exposure, and to achieve 3-D motion-picture recording of objects with high accuracy and a wide 3-D area, based on space-division multiplexing of phase-shifted holograms. Parallel phase-shifting digital holographic microscopy is implemented by an optical interferometer and an image sensor on which a polarization-detection function is introduced pixel by pixel. In this work, we constructed a parallel phase-shifting digital holographic microscope for recording high-speed dynamic phenomena, and then motions of biological objects in water were recorded at more than 10,000 frames per second, which is the fastest among the previous reports on 3-D imaging of biological objects.
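
    The phase-retrieval step behind such systems is commonly the standard N-step phase-shifting relation; for four holograms I1..I4 demultiplexed from the single exposure with reference phase shifts of 0, pi/2, pi and 3*pi/2, the object phase follows from a single arctangent. This is the textbook four-step formula, offered as an illustration rather than the exact processing chain of the cited microscope.

      import numpy as np

      def four_step_phase(I1, I2, I3, I4):
          """Object-wave phase from four holograms with 0, pi/2, pi, 3*pi/2 reference shifts."""
          return np.arctan2(I4 - I2, I1 - I3)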

  11. Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties.

    PubMed

    Ieng, Sio-Hoi; Carneiro, João; Benosman, Ryad B

    2016-01-01

    State-of-the-art scene flow estimation techniques are based on projections of the 3D motion onto the image, using luminance, sampled at the frame rate of the cameras, as the principal source of information. We introduce in this paper a purely time-based approach to estimate the flow from 3D point clouds, primarily output by neuromorphic event-based stereo camera rigs, or by any existing 3D depth sensor even if it neither provides nor uses luminance. This method formulates the scene flow problem by applying a local piecewise regularization of the scene flow. The formulation provides a unifying framework to estimate scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time using a decomposition into its subspaces. This method naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point-cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented.

  12. Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties

    PubMed Central

    Ieng, Sio-Hoi; Carneiro, João; Benosman, Ryad B.

    2017-01-01

    State-of-the-art scene flow estimation techniques are based on projections of the 3D motion onto the image, using luminance, sampled at the frame rate of the cameras, as the principal source of information. We introduce in this paper a purely time-based approach to estimate the flow from 3D point clouds, primarily output by neuromorphic event-based stereo camera rigs, or by any existing 3D depth sensor even if it neither provides nor uses luminance. This method formulates the scene flow problem by applying a local piecewise regularization of the scene flow. The formulation provides a unifying framework to estimate scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time using a decomposition into its subspaces. This method naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point-cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented. PMID:28220057
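
    One way to read the space-time idea is that, within a small neighborhood, points trace approximately straight lines in (x, y, z, t), so a constant 3D velocity can be fitted to the timestamps in that neighborhood. The sketch below is this simple least-squares reading, not the 4D subspace decomposition used in the paper; names are illustrative.

      import numpy as np

      def local_velocity(points, times):
          """points: (N, 3) positions in a small spatio-temporal neighborhood; times: (N,).
          Fits p(t) = p0 + v * t and returns the velocity v."""
          t = times - times.mean()
          p = points - points.mean(axis=0)
          return (t @ p) / (t @ t)                       # per-axis slope of position vs. time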

  13. Motion corrected LV quantification based on 3D modelling for improved functional assessment in cardiac MRI

    NASA Astrophysics Data System (ADS)

    Liew, Y. M.; McLaughlin, R. A.; Chan, B. T.; Aziz, Y. F. Abdul; Chee, K. H.; Ung, N. M.; Tan, L. K.; Lai, K. W.; Ng, S.; Lim, E.

    2015-04-01

    Cine MRI is a clinical reference standard for the quantitative assessment of cardiac function, but reproducibility is confounded by motion artefacts. We explore the feasibility of a motion corrected 3D left ventricle (LV) quantification method, incorporating multislice image registration into the 3D model reconstruction, to improve reproducibility of 3D LV functional quantification. Multi-breath-hold short-axis and radial long-axis images were acquired from 10 patients and 10 healthy subjects. The proposed framework reduced misalignment between slices to subpixel accuracy (2.88 to 1.21 mm), and improved interstudy reproducibility for 5 important clinical functional measures, i.e. end-diastolic volume, end-systolic volume, ejection fraction, myocardial mass and 3D-sphericity index, as reflected in a reduction in the sample size required to detect statistically significant cardiac changes: a reduction of 21-66%. Our investigation on the optimum registration parameters, including both cardiac time frames and number of long-axis (LA) slices, suggested that a single time frame is adequate for motion correction whereas integrating more LA slices can improve registration and model reconstruction accuracy for improved functional quantification especially on datasets with severe motion artefacts.

  14. Tanner and Garneau train on IMAX 3D motion picture camera

    NASA Image and Video Library

    2000-06-13

    JSC2000-04743 (13 June 2000) --- Astronauts Marc Garneau (left) and Joseph R. Tanner, STS-97 mission specialists, familiarize themselves with an IMAX 3D motion picture camera during a training session in the Flight Operations Facility at the Johnson Space Center (JSC). Garneau represents the Canadian Space Agency (CSA).

  15. Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals.

    PubMed

    Joo, Sung Jun; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C

    2016-10-19

    Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no "cross-cue" adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways.

  16. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    NASA Astrophysics Data System (ADS)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  17. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
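
    The pre-beam motion model can be pictured as a principal component analysis of the respiratory-correlated deformation vector fields, whose low-dimensional weights are later fitted to the fast 2D data. The sketch below is a generic version of that idea; the array layout, number of components, and plain least-squares fit are assumptions for illustration, not the paper's exact pipeline.

      import numpy as np

      def build_motion_model(dvfs, n_components=2):
          """dvfs: (n_phases, n_values) respiratory-correlated DVFs, one row per phase."""
          mean = dvfs.mean(axis=0)
          _, _, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
          return mean, Vt[:n_components]                 # mean motion and principal modes

      def estimate_full_dvf(mean, modes, observed, idx):
          """observed: DVF values measured on a subset idx of entries (e.g. one 2D cine slice)."""
          A = modes[:, idx].T                            # (n_obs, n_components)
          w, *_ = np.linalg.lstsq(A, observed - mean[idx], rcond=None)
          return mean + w @ modes                        # full 3D DVF at this time point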

  18. Effects of 3D random correlated velocity perturbations on predicted ground motions

    USGS Publications Warehouse

    Hartzell, S.; Harmsen, S.; Frankel, A.

    2010-01-01

    Three-dimensional, finite-difference simulations of a realistic finite-fault rupture on the southern Hayward fault are used to evaluate the effects of random, correlated velocity perturbations on predicted ground motions. Velocity perturbations are added to a three-dimensional (3D) regional seismic velocity model of the San Francisco Bay Area using a 3D von Karman random medium. Velocity correlation lengths of 5 and 10 km and standard deviations in the velocity of 5% and 10% are considered. The results show that significant deviations in predicted ground velocities are seen in the calculated frequency range (≤1 Hz) for standard deviations in velocity of 5% to 10%. These results have implications for the practical limits on the accuracy of scenario ground-motion calculations and on retrieval of source parameters using higher-frequency, strong-motion data.
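
    A compact way to generate such perturbations is to filter white noise with a von Karman-like amplitude spectrum and rescale to the desired standard deviation. The exponent, grid size and correlation length below are assumptions for illustration; in the study the perturbations are embedded in a regional 3D velocity model rather than generated stand-alone.

      import numpy as np

      def von_karman_perturbation(n=64, dx=0.5, corr_len=5.0, std=0.05, hurst=0.5, seed=0):
          """Random 3D field (n x n x n, grid spacing dx km) with ~von Karman correlation."""
          rng = np.random.default_rng(seed)
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
          kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
          k2 = kx**2 + ky**2 + kz**2
          amp = (1.0 + k2 * corr_len**2) ** (-(hurst + 1.5) / 2.0)   # sqrt of 3D power spectrum
          field = np.fft.ifftn(amp * np.fft.fftn(rng.standard_normal((n, n, n)))).real
          return field * (std / field.std())             # e.g. 5% fractional velocity perturbation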

  19. Structured light 3D tracking system for measuring motions in PET brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Jørgensen, Morten R.; Paulsen, Rasmus R.; Højgaard, Liselotte; Roed, Bjarne; Larsen, Rasmus

    2010-02-01

    Patient motion during scanning deteriorates image quality, especially for high-resolution PET scanners. A new 3D head-tracking system for motion correction in high-resolution PET brain imaging is proposed and demonstrated. A prototype tracking system based on structured light, with a DLP projector and a CCD camera, is set up on a model of the High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo-vision procedure in which the projector is treated as a camera. Additionally, the surface reconstructions are corrected for the non-linear projector output prior to image capture. The results are convincing and represent a first step toward a fully automated tracking system for measuring head motions in PET imaging.

  20. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate capture of the spatial motion of an object is necessary for a wide variety of applications, including industry and science, virtual reality and movies, medicine and sports. For most applications, the reliability and accuracy of the acquired data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four machine-vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of the 3D measurements. A set of algorithms, both for detecting, identifying, and tracking similar targets and for markerless object motion capture, has been developed and tested. The results of the algorithms' evaluation show high robustness and high reliability for various motion analysis tasks in technical and biomechanics applications.

  1. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Astrophysics Data System (ADS)

    Nandhakumar, N.; Smith, Philip W.

    1993-12-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  2. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.
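
    The per-pixel acquisition delay that creates the distortion also suggests the correction: once a motion estimate is available, each range sample is moved back by the displacement accrued between its own sample time and a common reference time. A translation-only sketch of that final correction step (the method described above also recovers and removes rotation):

      import numpy as np

      def undistort_scan(points, sample_times, velocity, t_ref=0.0):
          """points: (N, 3) range samples; sample_times: (N,) per-pixel acquisition times;
          velocity: (3,) estimated object velocity during the scan."""
          return points - np.outer(sample_times - t_ref, velocity)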

  3. Nonlinear Synchronization for Automatic Learning of 3D Pose Variability in Human Motion Sequences

    NASA Astrophysics Data System (ADS)

    Mozerov, M.; Rius, I.; Roca, X.; González, J.

    2009-12-01

    A dense matching algorithm that solves the problem of synchronizing prerecorded human motion sequences, which show different speeds and accelerations, is proposed. The approach is based on minimization of MRF energy and solves the problem by using Dynamic Programming. Additionally, an optimal sequence is automatically selected from the input dataset to be a time-scale pattern for all other sequences. The paper utilizes an action specific model which automatically learns the variability of 3D human postures observed in a set of training sequences. The model is trained using the public CMU motion capture dataset for the walking action, and a mean walking performance is automatically learnt. Additionally, statistics about the observed variability of the postures and motion direction are also computed at each time step. The synchronized motion sequences are used to learn a model of human motion for action recognition and full-body tracking purposes.
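
    The dynamic-programming synchronization step is, in spirit, a dense time alignment between each sequence and the selected time-scale pattern. A plain dynamic-time-warping sketch conveys the idea; the paper's MRF energy and learned pose model are richer than this.

      import numpy as np

      def dtw_cost(seq_a, seq_b):
          """seq_a: (n, d), seq_b: (m, d) pose sequences. Returns the accumulated cost matrix."""
          n, m = len(seq_a), len(seq_b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[1:, 1:]        # backtracking from the last entry gives the frame correspondence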

  4. 3D motion adapted gating (3D MAG): a new navigator technique for accelerated acquisition of free breathing navigator gated 3D coronary MR-angiography.

    PubMed

    Hackenbroch, M; Nehrke, K; Gieseke, J; Meyer, C; Tiemann, K; Litt, H; Dewald, O; Naehle, C P; Schild, H; Sommer, T

    2005-08-01

    This study aimed to evaluate the influence of a new navigator technique (3D MAG) on navigator efficiency, total acquisition time, image quality and diagnostic accuracy. Fifty-six patients with suspected coronary artery disease underwent free-breathing navigator-gated coronary MRA (Intera, Philips Medical Systems, 1.5 T, spatial resolution 0.9 × 0.9 × 3 mm³) with and without 3D MAG. Evaluation of both sequences included: 1) navigator scan efficiency, 2) total acquisition time, 3) assessment of image quality and 4) detection of stenoses >50%. Average navigator efficiencies of the LCA and RCA were 43 ± 12% and 42 ± 12% with and 36 ± 16% and 35 ± 16% without 3D MAG (P<0.01). Scan time was reduced from 12 min 7 s without to 8 min 55 s with 3D MAG for the LCA and from 12 min 19 s to 9 min 7 s with 3D MAG for the RCA (P<0.01). The average scores of image quality of the coronary MRAs with and without 3D MAG were 3.5 ± 0.79 and 3.46 ± 0.84 (P>0.05). There was no significant difference in the sensitivity and specificity in the detection of coronary artery stenoses between coronary MRAs with and without 3D MAG (P>0.05). 3D MAG provides accelerated acquisition of navigator-gated coronary MRA by about 19% while maintaining image quality and diagnostic accuracy.

  5. Spectrum analysis of motion parallax in a 3D cluttered scene and application to egomotion.

    PubMed

    Mann, Richard; Langer, Michael S

    2005-09-01

    Previous methods for estimating observer motion in a rigid 3D scene assume that image velocities can be measured at isolated points. When the observer is moving through a cluttered 3D scene such as a forest, however, pointwise measurements of image velocity are more challenging to obtain because multiple depths, and hence multiple velocities, are present in most local image regions. We introduce a method for estimating egomotion that avoids pointwise image velocity estimation as a first step. In its place, the direction of motion parallax in local image regions is estimated, using a spectrum-based method, and these directions are then combined to directly estimate 3D observer motion. There are two advantages to this approach. First, the method can be applied to a wide range of 3D cluttered scenes, including those for which pointwise image velocities cannot be measured because only normal velocity information is available. Second, the egomotion estimates can be used as a posterior constraint on estimating pointwise image velocities, since known egomotion parameters constrain the candidate image velocities at each point to a one-dimensional rather than a two-dimensional space.

  6. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling where we generate a personalized 3D organ model from high quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessel and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we resolve the candidate that best matches the 3D US images according to vessel centerline and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to respiration phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D positions are then inferred based on respiration phase. Testing our method on real patient data, we have found the accuracy of 3D position is within 3.79mm and processing time is 5.4ms during tracking.

  7. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns.

    PubMed

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-01-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.
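    The brightness-compensated flow idea can be sketched as a local least-squares problem in which an additive brightness-change term is estimated alongside the 3-D displacement. This is an illustrative simplification, not the authors' algorithm; the window size and the additive brightness model are assumptions.

```python
import numpy as np

def local_flow_with_brightness(vol_t0, vol_t1, center, half=2):
    """Estimate a 3-D displacement (u, v, w) plus an additive brightness-change
    term b in a small window around `center`, from two image volumes.
    Solves  Ix*u + Iy*v + Iz*w + It = b  in the least-squares sense."""
    Iz, Iy, Ix = np.gradient(vol_t0)      # spatial gradients (z, y, x axis order)
    It = vol_t1 - vol_t0                  # temporal difference
    z, y, x = center
    sl = (slice(z - half, z + half + 1),
          slice(y - half, y + half + 1),
          slice(x - half, x + half + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel(), Iz[sl].ravel(),
                  -np.ones(Ix[sl].size)], axis=1)
    rhs = -It[sl].ravel()
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    u, v, w, brightness_change = params
    return (u, v, w), brightness_change

# toy volumes: a Gaussian blob shifted by one voxel in x and dimmed by 10%
zz, yy, xx = np.meshgrid(np.arange(32), np.arange(32), np.arange(32), indexing="ij")
blob = lambda cz, cy, cx: np.exp(-((zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2) / 20.0)
flow, db = local_flow_with_brightness(blob(16, 16, 16), 0.9 * blob(16, 16, 17), (16, 16, 16))
```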

  8. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns

    NASA Astrophysics Data System (ADS)

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-09-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.

  9. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a method for measuring the three-dimensional (3D) movement of the forearm and upper arm during the baseball pitching motion using inertial sensors, without requiring careful attention to sensor placement. Although high-accuracy measurement of sports motion is currently achieved with optical motion capture systems, these have disadvantages such as the need for camera calibration and restrictions on the measurement location. In contrast, the proposed method for 3D measurement of pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm coincides with that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. Experimental results for the measurement of pitching motion show that the trajectories of the shoulder, elbow and wrist estimated by the proposed method are highly correlated with those from the motion capture system, with an estimation error of about 10%.
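    The drift-correction idea (forcing the integrated trajectory to agree with a known final state) can be sketched as follows: double-integrate the acceleration, then linearly redistribute the terminal position error over time. The linear error redistribution and variable names are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def integrate_with_endpoint_fix(accel, dt, p0, v0, p_final):
    """Integrate acceleration (N, 3) sampled at interval dt into a position
    trajectory, then remove drift by linearly redistributing the end-point
    error so the last sample coincides with the known final position."""
    n = len(accel)
    vel = v0 + np.cumsum(accel, axis=0) * dt
    pos = p0 + np.cumsum(vel, axis=0) * dt
    drift = pos[-1] - p_final                 # accumulated integration error
    ramp = np.linspace(0.0, 1.0, n)[:, None]  # 0 at start, 1 at end
    return pos - ramp * drift                 # corrected trajectory

# toy example: constant acceleration with a small bias that the fix removes
dt = 0.01
true_acc = np.tile([0.0, 0.0, 1.0], (200, 1))
measured = true_acc + 0.05                    # biased accelerometer
true_final = 0.5 * np.array([0.0, 0.0, 1.0]) * (200 * dt) ** 2
traj = integrate_with_endpoint_fix(measured, dt, np.zeros(3), np.zeros(3), true_final)
```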

  10. Bias Field Inconsistency Correction of Motion-Scattered Multislice MRI for Improved 3D Image Reconstruction

    PubMed Central

    Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin

    2012-01-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561

  11. Bias field inconsistency correction of motion-scattered multislice MRI for improved 3D image reconstruction.

    PubMed

    Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin

    2011-09-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.

  12. The role of stereopsis, motion parallax, perspective and angle polarity in perceiving 3-D shape.

    PubMed

    Sherman, Aleksandra; Papathomas, Thomas V; Jain, Anshul; Keane, Brian P

    2012-01-01

    We studied how stimulus attributes (angle polarity and perspective) and data-driven signals (motion parallax and binocular disparity) affect recovery of 3-D shape. We used physical stimuli, which consisted of two congruent trapezoids forming a dihedral angle. To study the effects of the stimulus attributes, we used 2 × 2 combinations of convex/concave angles and proper/reverse perspective cues. To study the effects of binocular disparity and motion parallax, we used 2 × 2 combinations of monocular/binocular viewing with moving/stationary observers. The task was to report the depth of the right vertical edge relative to a fixation point positioned at a different depth. In Experiment 1 observers also had the option of reporting that the right vertical edge and fixation point were at the same depth. However, in Experiment 2, observers were only given two response options: is the right vertical edge in front of/behind the fixation point? We found that across all stimulus configurations, perspective is a stronger cue than angle polarity in recovering 3-D shape; we also confirm the bias to perceive convex compared to concave angles. In terms of data-driven signals, binocular disparity recovered 3-D shape better than motion parallax. Interestingly, motion parallax improved performance for monocular viewing but not for binocular viewing.

  13. Incorporating 3D body motions into large-sized freeform surface conceptual design.

    PubMed

    Qin, Shengfeng; Wright, David K; Kang, Jingsheng; Prieto, P A

    2005-01-01

    Large-sized free-form surface design presents some challenges in practice. Especially at the conceptual design stage, sculpting physical models is still essential for surface development, because CAD models are less intuitive for designers to design and modify them. These sculpted physical models can be then scanned and converted into CAD models. However, if the physical models are too big, designers may have problems in finding a suitable position to conduct their operations or simply the models are difficult to be scanned in. We investigated a novel surface modelling approach by utilising a 3D motion capture system. For designing a large-sized surface, a network of splines is initially set up. Artists or designers wearing motion marks on their hands can then change shapes of the splines with their hands. Literarily they can move their body freely to any positions to perform their tasks. They can also move their hands in 3D free space to detail surface characteristics by their gestures. All their design motions are recorded in the motion capturing system and transferred into 3D curves and surfaces correspondingly. This paper reports this novel surface design method associated with some case studies.

  14. 4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Berbeco, Ross; Lewis, John

    2015-03-01

    A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) on the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For physical phantom datasets, the average tumor localization error (TLE) and (95th percentile) in two datasets were 0.95 (2.2) mm. For digital phantoms assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time compared to planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, while they were 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones, respectively. 4DCBCT-based models were shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time. Thus, generating 3D fluoroscopic images based on 4DCBCT-based motion models can capture both inter- and intra- fraction anatomical changes during treatment.
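    A toy sketch of the PCA motion-model construction on displacement vector fields (DVFs): stack the flattened DVFs of the breathing phases, extract the leading principal components, and reconstruct a new DVF from a small coefficient vector. Array shapes and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_pca_motion_model(dvfs, n_components=2):
    """dvfs: array (n_phases, nz, ny, nx, 3) of displacement vector fields,
    each registered to a common reference phase. Returns the mean DVF, the
    leading principal-component basis DVFs and the original field shape."""
    n = dvfs.shape[0]
    flat = dvfs.reshape(n, -1)
    mean = flat.mean(axis=0)
    centered = flat - mean
    # SVD of the (small) phase-by-voxel matrix gives the PCA basis vectors
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    return mean, basis, dvfs.shape[1:]

def dvf_from_coefficients(mean, basis, shape, coeffs):
    """Reconstruct a DVF as mean + sum_k coeffs[k] * basis[k]."""
    return (mean + coeffs @ basis).reshape(shape)

# toy model from 10 random 'phases' on a tiny grid
rng = np.random.default_rng(0)
dvfs = rng.normal(size=(10, 8, 8, 8, 3))
mean, basis, shape = build_pca_motion_model(dvfs)
new_dvf = dvf_from_coefficients(mean, basis, shape, coeffs=np.array([0.5, -0.2]))
```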

  15. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-06-21

    Remote monitoring of Parkinson's Disease (PD) patients with inertial sensors is a relevant method for a better assessment of symptoms. We present a new approach to symptom quantification based on motion data: automatic Unified Parkinson Disease Rating Scale (UPDRS) classification combined with an animated 3D avatar that gives the neurologist the impression of having the patient present in person. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings, as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors on 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recordings of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated on the UPDRS. The mean agreement between the video-based ratings (a) and the automatically classified UPDRS (b) was 0.48, and between the video-based ratings and the 3D avatar (c) it was 0.47. The 3D avatar is as suitable as video recordings for assessing the UPDRS on the examined task and will be further developed by the research team.


  16. Edge preserving motion estimation with occlusions correction for assisted 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Pohl, Petr; Sirotenko, Michael; Tolstaya, Ekaterina; Bucha, Victor

    2014-02-01

    In this article we propose high-quality motion estimation based on a variational optical flow formulation with a non-local regularization term. To improve motion in occlusion areas we introduce occlusion motion inpainting based on 3-frame motion clustering. The variational formulation of optical flow has proved very successful; however, global optimization of the cost function can be time consuming. To achieve acceptable computation times we adapted an algorithm that optimizes the convex cost function in a coarse-to-fine pyramid strategy and is suitable for implementation on modern GPU hardware. We also introduced two simplifications of the cost function that significantly decrease computation time with an acceptable decrease in quality. For motion-clustering-based inpainting in occlusion areas we introduce an effective method of occlusion-aware joint 3-frame motion clustering using the RANSAC algorithm. Occlusion areas are inpainted with the motion model taken from the cluster that shows consistency in the opposite direction. We tested our algorithm on the Middlebury optical flow benchmark, where we scored around 20th position while being one of the fastest methods near the top. We also successfully used this algorithm in a semi-automatic 2D to 3D conversion tool for spatio-temporal background inpainting, automatic adaptive key frame detection and key point tracking.
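    The occlusion-handling step relies on grouping correspondences by their dominant motion; a minimal RANSAC sketch for finding one such motion cluster under an affine model is shown below. The affine model, thresholds and names are illustrative assumptions, not the authors' exact clustering procedure.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src (N, 2) to dst (N, 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])      # (N, 3)
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2)
    return params

def ransac_motion_cluster(src, dst, n_iter=200, thresh=1.0, rng=None):
    """Find the dominant affine motion among point correspondences with RANSAC;
    returns the model and a boolean inlier mask (one motion 'cluster')."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        model = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ model
        inliers = np.linalg.norm(pred - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = fit_affine(src[inliers], dst[inliers]), inliers
    return best_model, best_inliers

# 80% of points follow one translation, 20% are outliers (e.g. occluded pixels)
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(100, 2))
dst = src + np.array([3.0, -2.0])
dst[:20] += rng.uniform(-30, 30, size=(20, 2))
model, inliers = ransac_motion_cluster(src, dst, rng=rng)
```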

  17. On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.

    PubMed

    Shao, Zhanpeng; Li, Youfu

    2016-02-01

    Motion trajectories tracked from the motions of human, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segment of noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure the trajectory similarity to find similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they can capture the motion cues in trajectory matching and sign recognition satisfactorily.
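    One plausible form of a distance integral invariant is sketched below: for each trajectory sample, a Gaussian-kernel-weighted integral of its distances to the other samples, evaluated at a chosen scale. This is illustrative only and not necessarily the authors' exact definition; the normalization and kernel are assumptions.

```python
import numpy as np

def distance_integral_invariant(traj, sigma):
    """For each sample p(t) of a 3-D trajectory (N, 3), compute a kernel-weighted
    integral of its distances to all other samples,
        I(t) = sum_s ||p(t) - p(s)|| * K_sigma(t - s) / sum_s K_sigma(t - s),
    using a Gaussian kernel of scale `sigma` (in samples)."""
    n = len(traj)
    idx = np.arange(n)
    dist = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)  # (N, N)
    kernel = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2.0 * sigma ** 2))
    return (dist * kernel).sum(axis=1) / kernel.sum(axis=1)

# a helix sampled at 200 points, described at a coarse and a fine scale
t = np.linspace(0, 4 * np.pi, 200)
helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
coarse = distance_integral_invariant(helix, sigma=20)
fine = distance_integral_invariant(helix, sigma=5)
```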

  18. Structure-From-Motion in 3D Space Using 2D Lidars

    PubMed Central

    Choi, Dong-Geol; Bok, Yunsu; Kim, Jun-Sik; Shim, Inwook; Kweon, In So

    2017-01-01

    This paper presents a novel structure-from-motion methodology using 2D lidars (Light Detection And Ranging). In 3D space, 2D lidars do not provide sufficient information for pose estimation. For this reason, additional sensors have been used along with the lidar measurement. In this paper, we use a sensor system that consists of only 2D lidars, without any additional sensors. We propose a new method of estimating both the 6D pose of the system and the surrounding 3D structures. We compute the pose of the system using line segments of scan data and their corresponding planes. After discarding the outliers, both the pose and the 3D structures are refined via nonlinear optimization. Experiments with both synthetic and real data show the accuracy and robustness of the proposed method. PMID:28165372

  19. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, the non-cooperative targets may have strong maneuverability, which tends to cause time-variant Doppler modulation and imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimations of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target of uniformly accelerated rotation, the rotational modulation in echo is formulated as chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat the multi-channel ISAR image formation as a joint-sparsity constraint optimization. Then, a modified orthogonal matching pursuit (OMP) algorithm is employed to solve the optimization problem to produce high-resolution range-Doppler (RD) images and chirp parameter estimation. The 3D target geometry and the motion estimations are followed by using the acquired RD images and chirp parameters. Herein, a joint estimation approach of 3D geometry and rotation motion is presented to realize outlier removing and error reduction. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data are performed to confirm the effectiveness of the proposed algorithm.

  20. A motion- and sound-activated, 3D-printed, chalcogenide-based triboelectric nanogenerator.

    PubMed

    Kanik, Mehmet; Say, Mehmet Girayhan; Daglar, Bihter; Yavuz, Ahmet Faruk; Dolas, Muhammet Halit; El-Ashry, Mostafa M; Bayindir, Mehmet

    2015-04-08

    A multilayered triboelectric nanogenerator (MULTENG) that can be actuated by acoustic waves, vibration of a moving car, and tapping motion is built using a 3D-printing technique. The MULTENG can generate an open-circuit voltage of up to 396 V and a short-circuit current of up to 1.62 mA, and can power 38 LEDs. The layers of the triboelectric generator are made of polyetherimide nanopillars and chalcogenide core-shell nanofibers.

  1. Event-by-event respiratory motion correction for PET with 3D internal-1D external motion correlation.

    PubMed

    Chan, Chung; Jin, Xiao; Fung, Edward K; Naganawa, Mika; Mulnix, Tim; Carson, Richard E; Liu, Chi

    2013-11-01

    Respiratory motion during PET∕CT imaging can cause substantial image blurring and underestimation of tracer concentration for both static and dynamic studies. In this study, the authors developed an event-by-event respiratory motion correction method that used three-dimensional internal-one-dimensional external motion correlation (INTEX3D) in listmode reconstruction. The authors aim to fully correct for organ/tumor-specific rigid motion caused by respiration using all detected events to eliminate both intraframe and interframe motion, and investigate the quantitative improvement in static and dynamic imaging. The positional translation of an internal organ or tumor during respiration was first determined from the reconstructions of multiple phase-gated images. A level set (active contour) method was used to segment the targeted internal organs/tumors whose centroids were determined. The mean displacement of the external respiratory signal acquired by the Anzai system that corresponded to each phase-gated frame was determined. Three linear correlations between the 1D Anzai mean displacements and the 3D centroids of the internal organ/tumor were established. The 3D internal motion signal with high temporal resolution was then generated by applying each of the three correlation functions to the entire Anzai trace (40 Hz) to guide event-by-event motion correction in listmode reconstruction. The reference location was determined as the location where CT images were acquired to facilitate phase-matched attenuation correction and anatomical-based postfiltering. The proposed method was evaluated with a NEMA phantom driven by a QUASAR respiratory motion platform, and human studies with two tracers: pancreatic beta cell tracer [(18)F]FP(+)DTBZ and tumor hypoxia tracer [(18)F]fluoromisonidazole (FMISO). An anatomical-based postreconstruction filter was applied to the motion-corrected images to reduce noise while preserving quantitative accuracy and organ boundaries in the
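    The internal-external correlation step can be sketched as three per-axis linear fits between the gated external displacement and the internal centroid, which are then applied to the dense external trace. Names and the toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_internal_external_model(external_phase_means, centroids):
    """Fit three linear relations (one per axis) between the mean external
    displacement of each gated phase (n_phases,) and the 3-D centroid of the
    organ/tumor in that phase (n_phases, 3). Returns slope and intercept per axis."""
    slopes, intercepts = np.empty(3), np.empty(3)
    for axis in range(3):
        slopes[axis], intercepts[axis] = np.polyfit(external_phase_means,
                                                    centroids[:, axis], deg=1)
    return slopes, intercepts

def internal_motion_from_trace(external_trace, slopes, intercepts):
    """Apply the fitted correlations to the full external trace to get a
    high-temporal-resolution 3-D internal motion signal (n_samples, 3)."""
    return external_trace[:, None] * slopes[None, :] + intercepts[None, :]

# toy data: 8 gated phases and a dense external respiratory trace
phase_means = np.linspace(0.0, 1.0, 8)
centroids = np.stack([10 * phase_means, 2 * phase_means + 1, -5 * phase_means], axis=1)
slopes, intercepts = fit_internal_external_model(phase_means, centroids)
dense_internal = internal_motion_from_trace(np.sin(np.linspace(0, 6.28, 400)), slopes, intercepts)
```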

  2. Ground motion simulations in Marmara (Turkey) region from 3D finite difference method

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Douglas, John

    2016-04-01

    In the framework of the European project MARSite (2012-2016), one of the main contributions from our research team was to provide ground-motion simulations for the Marmara region for various earthquake source scenarios. We adopted a 3D finite difference code, taking into account the 3D structure around the Sea of Marmara (including the bathymetry) and the sea layer. We simulated two moderate earthquakes (about Mw4.5) and found that the 3D structure significantly improves the waveforms compared to the 1D layer model. Simulations were carried out for different earthquakes (moderate point sources and large finite sources) in order to provide shake maps (Aochi and Ulrich, BSSA, 2015), to study the variability of ground-motion parameters (Douglas & Aochi, BSSA, 2016), and to provide synthetic seismograms for blind inversion tests (Diao et al., GJI, 2016). The results are also planned to be integrated into broadband ground-motion simulations, tsunami generation and simulations of triggered landslides (in progress by different partners). The simulations are freely shared among the partners via the internet, and visualizations of the results are disseminated on the project's homepage. All these simulations should be seen as a reference for this region, as they are based on the latest knowledge obtained during the MARSite project, although refinement and validation of the model parameters and simulations remain a continuing research task relying on ongoing observations. The numerical code used, the models and the simulations are available on demand.

  3. 3D dosimetric validation of motion compensation concepts in radiotherapy using an anthropomorphic dynamic lung phantom

    NASA Astrophysics Data System (ADS)

    Mann, P.; Witte, M.; Moser, T.; Lang, C.; Runz, A.; Johnen, W.; Berger, M.; Biederer, J.; Karger, C. P.

    2017-01-01

    In this study, we developed a new setup for the validation of clinical workflows in adaptive radiation therapy, which combines a dynamic ex vivo porcine lung phantom and three-dimensional (3D) polymer gel dosimetry. The phantom consists of an artificial PMMA-thorax and contains a post mortem explanted porcine lung to which arbitrary breathing patterns can be applied. A lung tumor was simulated using the PAGAT (polyacrylamide gelatin gel fabricated at atmospheric conditions) dosimetry gel, which was evaluated in three dimensions by magnetic resonance imaging (MRI). To avoid bias by reaction with oxygen and other materials, the gel was collocated inside a BAREX™ container. For calibration purposes, the same containers with eight gel samples were irradiated with doses from 0 to 7 Gy. To test the technical feasibility of the system, a small spherical dose distribution located completely within the gel volume was planned. Dose delivery was performed under static and dynamic conditions of the phantom with and without motion compensation by beam gating. To verify clinical target definition and motion compensation concepts, the entire gel volume was homogeneously irradiated applying adequate margins in case of the static phantom and an additional internal target volume in case of dynamically operated phantom without and with gated beam delivery. MR-evaluation of the gel samples and comparison of the resulting 3D dose distribution with the planned dose distribution revealed a good agreement for the static phantom. In case of the dynamically operated phantom without motion compensation, agreement was very poor while additional application of motion compensation techniques restored the good agreement between measured and planned dose. From these experiments it was concluded that the set up with the dynamic and anthropomorphic lung phantom together with 3D-gel dosimetry provides a valuable and versatile tool for geometrical and dosimetrical validation of motion compensated

  4. 3D dosimetric validation of motion compensation concepts in radiotherapy using an anthropomorphic dynamic lung phantom.

    PubMed

    Mann, P; Witte, M; Moser, T; Lang, C; Runz, A; Johnen, W; Berger, M; Biederer, J; Karger, C P

    2017-01-21

    In this study, we developed a new setup for the validation of clinical workflows in adaptive radiation therapy, which combines a dynamic ex vivo porcine lung phantom and three-dimensional (3D) polymer gel dosimetry. The phantom consists of an artificial PMMA-thorax and contains a post mortem explanted porcine lung to which arbitrary breathing patterns can be applied. A lung tumor was simulated using the PAGAT (polyacrylamide gelatin gel fabricated at atmospheric conditions) dosimetry gel, which was evaluated in three dimensions by magnetic resonance imaging (MRI). To avoid bias by reaction with oxygen and other materials, the gel was collocated inside a BAREX(™) container. For calibration purposes, the same containers with eight gel samples were irradiated with doses from 0 to 7 Gy. To test the technical feasibility of the system, a small spherical dose distribution located completely within the gel volume was planned. Dose delivery was performed under static and dynamic conditions of the phantom with and without motion compensation by beam gating. To verify clinical target definition and motion compensation concepts, the entire gel volume was homogeneously irradiated applying adequate margins in case of the static phantom and an additional internal target volume in case of dynamically operated phantom without and with gated beam delivery. MR-evaluation of the gel samples and comparison of the resulting 3D dose distribution with the planned dose distribution revealed a good agreement for the static phantom. In case of the dynamically operated phantom without motion compensation, agreement was very poor while additional application of motion compensation techniques restored the good agreement between measured and planned dose. From these experiments it was concluded that the set up with the dynamic and anthropomorphic lung phantom together with 3D-gel dosimetry provides a valuable and versatile tool for geometrical and dosimetrical validation of motion compensated

  5. Robust ego-motion estimation and 3-D model refinement using surface parallax.

    PubMed

    Agrawal, Amit; Chellappa, Rama

    2006-05-01

    We present an iterative algorithm for robustly estimating the ego-motion and refining and updating a coarse depth map using parametric surface parallax models and brightness derivatives extracted from an image pair. Given a coarse depth map acquired by a range-finder or extracted from a digital elevation map (DEM), ego-motion is estimated by combining a global ego-motion constraint and a local brightness constancy constraint. Using the estimated camera motion and the available depth estimate, motion of the three-dimensional (3-D) points is compensated. We utilize the fact that the resulting surface parallax field is an epipolar field, and knowing its direction from the previous motion estimates, estimate its magnitude and use it to refine the depth map estimate. The parallax magnitude is estimated using a constant parallax model (CPM) which assumes a smooth parallax field and a depth based parallax model (DBPM), which models the parallax magnitude using the given depth map. We obtain confidence measures for determining the accuracy of the estimated depth values which are used to remove regions with potentially incorrect depth estimates for robustly estimating ego-motion in subsequent iterations. Experimental results using both synthetic and real data (both indoor and outdoor sequences) illustrate the effectiveness of the proposed algorithm.

  6. Real-time 3D motion tracking for small animal brain PET

    NASA Astrophysics Data System (ADS)

    Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.

    2008-05-01

    High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.

  7. Ultra-high voltage electron microscopy of primitive algae illuminates 3D ultrastructures of the first photosynthetic eukaryote

    PubMed Central

    Takahashi, Toshiyuki; Nishida, Tomoki; Saito, Chieko; Yasuda, Hidehiro; Nozaki, Hisayoshi

    2015-01-01

    A heterotrophic organism 1–2 billion years ago enslaved a cyanobacterium to become the first photosynthetic eukaryote, and has diverged globally. The primary phototrophs, glaucophytes, are thought to retain ancestral features of the first photosynthetic eukaryote, but examining the protoplast ultrastructure has previously been problematic in the coccoid glaucophyte Glaucocystis due to its thick cell wall. Here, we examined the three-dimensional (3D) ultrastructure in two divergent species of Glaucocystis using ultra-high voltage electron microscopy. Three-dimensional modelling of Glaucocystis cells using electron tomography clearly showed that numerous, leaflet-like flattened vesicles are distributed throughout the protoplast periphery just underneath a single-layered plasma membrane. This 3D feature is essentially identical to that of another glaucophyte genus Cyanophora, as well as the secondary phototrophs in Alveolata. Thus, the common ancestor of glaucophytes and/or the first photosynthetic eukaryote may have shown similar 3D structures. PMID:26439276

  8. 3D-wall motion tracking: a new tool for myocardial contractility analysis.

    PubMed

    Perez de Isla, Leopoldo; Montes, Cesar; Monzón, Tania; Herrero, José; Saltijeral, Adriana; Balcones, David Vivas; de Agustin, Alberto; Nuñez-Gil, Ivan; Fernández-Golfín, Covadonga; Almería, Carlos; Rodrigo, José Luis; Marcos-Alberca, Pedro; Macaya, Carlos; Zamorano, Jose

    2010-10-16

    BACKGROUND: Left-ventricular ejection fraction (LVEF), the most frequently used parameter to evaluate left ventricular (LV) systolic function, depends not only on LV contractility, but also on different variables such as pre-load and after-load. Three-dimensional wall motion tracking echocardiography (3D-WMT) is a new technique that provides information regarding different new parameters of LV systolic function. Our aim was to evaluate whether the new 3D-WMT-derived LV systolic function parameters are less dependent on load conditions than LVEF. METHODS: In order to modify the load conditions to study the dependence of the different LV systolic function parameters on them, a group of renal failure patients under chronic hemodialysis treatment was selected. The echocardiographic studies, including the 3D-WMT analysis, were performed immediately before and immediately after the hemodialysis session. RESULTS: Thirty-one consecutive patients were enrolled (mean age 65.5 ± 17.0 years; 74.2% men). There was a statistically significant change between predialysis and postdialysis in pre-load and after-load conditions (E/E′ ratio and systolic blood pressure) and in the LV end-diastolic volume and LVEF. Nevertheless, the findings did not show any significant change before and after dialysis in the 3D-WMT-derived parameters. CONCLUSIONS: LV 3D-wall motion tracking-derived systolic function parameters are less dependent on load conditions than LVEF. They might measure myocardial contractility in a more direct way than LVEF. Thus, hypothetically, they might be useful to detect early and subtle contractility impairments in a wide number of cardiac patients and they could help to optimize the clinical management of such patients.

  9. Motion Rehab AVE 3D: A VR-based exergame for post-stroke rehabilitation.

    PubMed

    Trombetta, Mateus; Bazzanello Henrique, Patrícia Paula; Brum, Manoela Rogofski; Colussi, Eliane Lucia; De Marchi, Ana Carolina Bertoletti; Rieder, Rafael

    2017-11-01

    Research on games for post-stroke rehabilitation has been increasing, focusing on upper limb, lower limb and balance situations and showing good experiences and results. With this in mind, this paper presents Motion Rehab AVE 3D, a serious game for rehabilitation of patients with mild stroke. The aim is to offer a new technology to assist traditional therapy and motivate the patient to carry out his/her rehabilitation program under health professional supervision. The game was developed with the Unity game engine, supporting the Kinect motion-sensing input device and display devices such as 3D Smart TVs and the Oculus Rift. It comprises six activities covering exercises in a three-dimensional space: flexion, abduction, shoulder adduction, horizontal shoulder adduction and abduction, elbow extension, wrist extension, knee flexion, and hip flexion and abduction. Motion Rehab AVE 3D also reports hits and errors so that the physiotherapist can evaluate the patient's progress. A pilot study with 10 healthy participants (61-75 years old) tested one of the game levels. They experienced the 3D user interface in third person. Our initial goal was to map a basic and comfortable equipment setup for later adoption. All the participants (100%) rated the interaction process as interesting and engaging for their age group, indicating good acceptance. Our evaluation showed that the game could be a useful tool to motivate patients during rehabilitation sessions. The next step is to evaluate its effectiveness for stroke patients, in order to verify whether the interface and game exercises contribute to progress in motor rehabilitation treatment.

  10. [In vivo study on the body motion during the Shi's cervical reduction technique with 3D motion capture].

    PubMed

    Wang, Hui-hao; Zhang, Min; Niu, Wen-xin; Shen, Xu-zhe; Zhan, Hong-sheng

    2015-10-01

    The clinical effect of the Shi's cervical reduction technique for cervical spondylosis and related disorders has been confirmed; however, there have been few in vivo studies of the practitioner's body motion during the manipulation. This study summarizes the motion patterns and characteristics of the practitioner's right shoulder, elbow, knee and ankle joints during the operation, using data acquired and analyzed with a 3D motion capture system. Markers were placed on the head, trunk, left and right acromion, medial and lateral elbow and wrist, lateral upper arm and forearm, anterior and posterior superior iliac spines, greater trochanter, femoral and tibial tubercles, medial and lateral knee and ankle, fibular head, first, second and fifth metatarsal heads, heel, and lateral thigh, calf and tibia of one manipulation practitioner, and the subject received a complete cycle of the cervical "Jin Chu Cao and Gu Cuo Feng" manipulation, which was repeated five times. The movement trajectories of the practitioner's four operating joints were captured, recorded, calculated and analyzed. The movement trajectories of the four joints were consistent, while the elbow joint showed the greatest dispersion. The 3D movements of the shoulder and elbow were more pronounced than those of the other two joints, whereas flexion and extension of the knee were significantly greater than its rotation and lateral bending. The flexibility of the upper limb joints and the stability of the lower limb joints are important guarantees for the Shi's cervical reduction technique, and flexion and extension of the right knee facilitate the force exerted by the upper limb. The 3D model built with the motion capture system provides a new approach for teaching the manipulation and for further basic biomechanical research.

  11. Respiratory motion correction in 3-D PET data with advanced optical flow algorithms.

    PubMed

    Dawood, Mohammad; Buther, Florian; Jiang, Xiaoyi; Schafers, Klaus P

    2008-08-01

    The problem of motion is well known in positron emission tomography (PET) studies. The PET images are formed over an elongated period of time. As the patients cannot hold breath during the PET acquisition, spatial blurring and motion artifacts are the natural result. These may lead to wrong quantification of the radioactive uptake. We present a solution to this problem by respiratory-gating the PET data and correcting the PET images for motion with optical flow algorithms. The algorithm is based on the combined local and global optical flow algorithm with modifications to allow for discontinuity preservation across organ boundaries and for application to 3-D volume sets. The superiority of the algorithm over previous work is demonstrated on software phantom and real patient data.

  12. Inferred motion perception of light sources in 3D scenes is color-blind.

    PubMed

    Gerhard, Holly E; Maloney, Laurence T

    2013-01-01

    In everyday scenes, the illuminant can vary spatially in chromaticity and luminance, and change over time (e.g. sunset). Such variation generates dramatic image effects too complex for any contemporary machine vision system to overcome, yet human observers are remarkably successful at inferring object properties separately from lighting, an ability linked with estimation and tracking of light field parameters. Which information does the visual system use to infer light field dynamics? Here, we specifically ask whether color contributes to inferred light source motion. Observers viewed 3D surfaces illuminated by an out-of-view moving collimated source (sun) and a diffuse source (sky). In half of the trials, the two sources differed in chromaticity, thereby providing more information about motion direction. Observers discriminated light motion direction above chance, and only the least sensitive observer benefited slightly from the added color information, suggesting that color plays only a very minor role for inferring light field dynamics.

  13. Patient specific respiratory motion modeling using a limited number of 3D lung CT images.

    PubMed

    Cui, Xueli; Gao, Xin; Xia, Wei; Liu, Yangchuan; Liang, Zhiyuan

    2014-01-01

    To build a patient-specific respiratory motion model at a low dose, a novel method was proposed that uses a limited number of 3D lung CT volumes together with an external respiratory signal. 4D lung CT volumes were acquired for patients with in vitro labels placed on the upper abdominal surface, and the 3D coordinates of these labels were measured as external respiratory signals. A sequential correspondence between the 4D lung CT and the external respiratory signal was built using the distance correlation method, and a 3D displacement over time for every registration control point in the CT volumes was obtained by 4D lung CT deformable registration. A temporal fitting was performed for each control point's displacements and for the external respiratory signal in the anterior-posterior direction to obtain their fitting curves. Finally, a linear regression was used to fit the corresponding samples of the control point displacement fitting curves and the external respiratory signal fitting curve to complete the pulmonary respiration model. Compared to a B-spline-based method using the respiratory signal phase, the proposed method offers comparable modeling accuracy and target modeling error (TME) while requiring 70% fewer 3D lung CT volumes. When using a similar amount of 3D lung CT data, the mean TME of the proposed method is smaller than that of PCA (principal component analysis)-based methods. The results indicate that the proposed method strikes a balance between modeling accuracy and the number of 3D lung CT volumes.
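    Distance correlation, used above to establish the correspondence between the CT phases and the external signal, can be computed from scratch as sketched below (biased sample estimator). This is a generic statistic, not the paper's implementation; the toy signals are illustrative.

```python
import numpy as np

def distance_correlation(x, y):
    """Biased-sample distance correlation between two samples of equal length.
    x: (n,) or (n, p); y: (n,) or (n, q)."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)

    def centered_dist(z):
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    A, B = centered_dist(x), centered_dist(y)
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    if dvar_x * dvar_y == 0:
        return 0.0
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

# external 1-D respiratory signal vs. a 3-D control-point displacement
t = np.linspace(0, 4 * np.pi, 100)
external = np.sin(t)
displacement = np.stack([2 * np.sin(t), 0.5 * np.sin(t), np.zeros_like(t)], axis=1)
print(distance_correlation(external, displacement))  # close to 1.0
```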

  14. Spatiotemporal non-rigid image registration for 3D ultrasound cardiac motion estimation

    NASA Astrophysics Data System (ADS)

    Loeckx, D.; Ector, J.; Maes, F.; D'hooge, J.; Vandermeulen, D.; Voigt, J.-U.; Heidbüchel, H.; Suetens, P.

    2007-03-01

    We present a new method to evaluate 4D (3D + time) cardiac ultrasound data sets by nonrigid spatio-temporal image registration. First, a frame-to-frame registration is performed that yields a dense deformation field. The deformation field is used to calculate local spatiotemporal properties of the myocardium, such as the velocity, strain and strain rate. The field is also used to propagate particular points and surfaces, e.g. the endocardial surface, over the different frames. In this way, the 4D paths of these points are obtained, which can be used to calculate the velocity with which the wall moves and the evolution of the local surface area over time. The wall velocity is not angle-dependent as in classical Doppler imaging, since the 4D data allow the true 3D motion to be calculated. Similarly, all 3D myocardial strain components can be estimated. Combined, they yield local surface area or volume changes which can be color-coded as a measure of local contractility. A diagnostic method that strongly benefits from this technique is cardiac motion and deformation analysis, which is an important aid for quantifying the mechanical properties of the myocardium.
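    A minimal sketch of deriving a strain measure from a dense displacement field, assuming a Green-Lagrange strain tensor computed from the displacement gradient; the paper's actual strain definitions and coordinate conventions may differ.

```python
import numpy as np

def green_lagrange_strain(u, spacing=(1.0, 1.0, 1.0)):
    """Green-Lagrange strain tensor field from a 3-D displacement field
    u of shape (nz, ny, nx, 3). Returns an array of shape (nz, ny, nx, 3, 3)."""
    # du_c/d(z, y, x) for every displacement component c
    grads = [np.gradient(u[..., c], *spacing) for c in range(3)]
    # displacement-gradient tensor J[..., c, a] = du_c/dX_a at every voxel
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)
    F = np.eye(3) + J                       # deformation gradient
    return 0.5 * (np.swapaxes(F, -1, -2) @ F - np.eye(3))

# toy field: 5% uniform stretch along z gives a constant normal strain
nz, ny, nx = 16, 16, 16
zz = np.arange(nz, dtype=float)[:, None, None]
u = np.zeros((nz, ny, nx, 3))
u[..., 0] = 0.05 * zz                       # u_z = 0.05 * z
E = green_lagrange_strain(u)
# E[..., 0, 0] is approximately 0.05 + 0.5 * 0.05**2 everywhere
```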

  15. Scientific rotoscoping: a morphology-based method of 3-D motion analysis and visualization.

    PubMed

    Gatesy, Stephen M; Baier, David B; Jenkins, Farish A; Dial, Kenneth P

    2010-06-01

    Three-dimensional skeletal movement is often impossible to accurately quantify from external markers. X-ray imaging more directly visualizes moving bones, but extracting 3-D kinematic data is notoriously difficult from a single perspective. Stereophotogrammetry is extremely powerful if bi-planar fluoroscopy is available, yet implantation of three radio-opaque markers in each segment of interest may be impractical. Herein we introduce scientific rotoscoping (SR), a new method of motion analysis that uses articulated bone models to simultaneously animate and quantify moving skeletons without markers. The three-step process is described using examples from our work on pigeon flight and alligator walking. First, the experimental scene is reconstructed in 3-D using commercial animation software so that frames of undistorted fluoroscopic and standard video can be viewed in their correct spatial context through calibrated virtual cameras. Second, polygonal models of relevant bones are created from CT or laser scans and rearticulated into a hierarchical marionette controlled by virtual joints. Third, the marionette is registered to video images by adjusting each of its degrees of freedom over a sequence of frames. SR outputs high-resolution 3-D kinematic data for multiple, unmarked bones and anatomically accurate animations that can be rendered from any perspective. Rather than generating moving stick figures abstracted from the coordinates of independent surface points, SR is a morphology-based method of motion analysis deeply rooted in osteological and arthrological data.

  16. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing therefore the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patient with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with the ones estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to the reference methods, the relationship between the different patient groups was similar.

  17. Integration of 3D Structure from Disparity into Biological Motion Perception Independent of Depth Awareness

    PubMed Central

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers’ depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception. PMID:24586622

  18. Integration of 3D structure from disparity into biological motion perception independent of depth awareness.

    PubMed

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers' depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception.

  19. A Little Knowledge of Ground Motion: Explaining 3-D Physics-Based Modeling to Engineers

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2014-12-01

    Users of earthquake planning scenarios require the ground-motion map to be credible enough to justify costly planning efforts, but not all ground-motion maps are right for all uses. There are two common ways to create a map of ground motion for a hypothetical earthquake. One approach is to map the median shaking estimated by empirical attenuation relationships. The other uses 3-D physics-based modeling, in which one analyzes a mathematical model of the earth's crust near the fault rupture and calculates the generation and propagation of seismic waves from source to ground surface by first principles. The two approaches produce different-looking maps. The more-familiar median maps smooth out variability and correlation. Using them in a planning scenario can lead to a systematic underestimation of damage and loss, and could leave a community underprepared for realistic shaking. The 3-D maps show variability, including some very high values that can disconcert non-scientists. So when the USGS Science Application for Risk Reduction's (SAFRR) Haywired scenario project selected 3-D maps, it was necessary to explain to scenario users—especially engineers who often use median maps—the differences, advantages, and disadvantages of the two approaches. We used authority, empirical evidence, and theory to support our choice. We prefaced our explanation with SAFRR's policy of using the best available earth science, and cited the credentials of the maps' developers and the reputation of the journal in which they published the maps. We cited recorded examples from past earthquakes of extreme ground motions that are like those in the scenario map. We explained the maps on theoretical grounds as well, explaining well established causes of variability: directivity, basin effects, and source parameters. The largest mapped motions relate to potentially unfamiliar extreme-value theory, so we used analogies to human longevity and the average age of the oldest person in samples of

  20. 3D Motion Planning Algorithms for Steerable Needles Using Inverse Kinematics

    PubMed Central

    Duindam, Vincent; Xu, Jijie; Alterovitz, Ron; Sastry, Shankar; Goldberg, Ken

    2010-01-01

    Steerable needles can be used in medical applications to reach targets behind sensitive or impenetrable areas. The kinematics of a steerable needle are nonholonomic and, in 2D, equivalent to a Dubins car with constant radius of curvature. In 3D, the needle can be interpreted as an airplane with constant speed and pitch rate, zero yaw, and controllable roll angle. We present a constant-time motion planning algorithm for steerable needles based on explicit geometric inverse kinematics similar to the classic Paden-Kahan subproblems. Reachability and path competitivity are analyzed using analytic comparisons with shortest path solutions for the Dubins car (for 2D) and numerical simulations (for 3D). We also present an algorithm for local path adaptation using null-space results from redundant manipulator theory. Finally, we discuss several ways to use and extend the inverse kinematics solution to generate needle paths that avoid obstacles. PMID:21359051
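
    The paper's contribution is a closed-form inverse-kinematics planner; as a rough, illustrative companion (not the authors' code), the sketch below integrates the forward kinematics of the constant-curvature, roll-controlled needle model the abstract describes. The function names (`simulate_needle`, `twist_matrix`) and parameter values are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

def twist_matrix(v_lin, v_ang):
    """4x4 se(3) matrix from body-frame linear and angular velocity."""
    wx, wy, wz = v_ang
    W = np.array([[0, -wz,  wy],
                  [wz,  0, -wx],
                  [-wy, wx,  0]])
    xi = np.zeros((4, 4))
    xi[:3, :3] = W
    xi[:3, 3] = v_lin
    return xi

def simulate_needle(kappa, insertion_speed, roll_rate, duration, dt=0.01):
    """Integrate the bevel-tip needle pose; returns tip positions (N, 3).

    Assumed model: the needle advances along its local z-axis at constant
    speed, bends with constant curvature kappa about the local x-axis, and
    the roll rate reorients the bending plane.
    """
    T = np.eye(4)                      # needle tip pose in the world frame
    positions = [T[:3, 3].copy()]
    for _ in range(int(duration / dt)):
        v_lin = np.array([0.0, 0.0, insertion_speed])
        v_ang = np.array([kappa * insertion_speed, 0.0, roll_rate])
        T = T @ expm(twist_matrix(v_lin, v_ang) * dt)
        positions.append(T[:3, 3].copy())
    return np.array(positions)

# Example: insert 50 mm at curvature 0.02 /mm while slowly rolling.
path = simulate_needle(kappa=0.02, insertion_speed=10.0, roll_rate=0.5, duration=5.0)
print(path[-1])  # final tip position
```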

  1. 3D delivered dose assessment using a 4DCT-based motion model

    PubMed Central

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Seco, Joao; Mishra, Pankaj; Lewis, John H.

    2015-01-01

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images
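
    As an illustrative sketch only (not the authors' implementation, which optimizes the DVF weights so that the projection of the deformed volume matches each acquired kV/MV image), the snippet below shows the core idea of step (2) under a linearized assumption: the weights of the basis DVFs are recovered by regularized least squares from a measured projection residual. All names (`fit_dvf_weights`, `basis_projection_effects`) are hypothetical.

```python
import numpy as np

def fit_dvf_weights(basis_projection_effects, measured_residual, reg=1e-3):
    """Least-squares fit of motion-model weights.

    basis_projection_effects : (num_pixels, num_basis) matrix; column j is the
        (assumed linearized, precomputed) change in the 2D projection produced
        by applying basis DVF j to the reference volume.
    measured_residual : (num_pixels,) difference between the measured projection
        and the projection of the undeformed reference volume.
    Returns weights w such that the modeled DVF is sum_j w[j] * DVF_j.
    """
    A = basis_projection_effects
    # Tikhonov-regularized normal equations to stabilize the fit.
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ measured_residual)

# Toy example with 3 basis DVFs and a 1000-pixel projection.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 3))
true_w = np.array([0.8, -0.2, 0.1])
b = A @ true_w + 0.01 * rng.normal(size=1000)
print(fit_dvf_weights(A, b))   # close to true_w
```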

  2. 3D delivered dose assessment using a 4DCT-based motion model

    SciTech Connect

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Mishra, Pankaj E-mail: jhlewis@lroc.harvard.edu; Lewis, John H. E-mail: jhlewis@lroc.harvard.edu; Seco, Joao

    2015-06-15

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images

  3. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvements, coupled with decreasing cost, are opening them up to quantitative three-dimensional (3D) motion analysis for studying sport gestures and evaluating athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, these requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since both land and underwater cameras are mandatory; in particular, underwater camera calibration can limit the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frame rate: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure was implemented, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively. After estimating the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error of the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
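
    A minimal sketch of the rigid-bar validation step described above, assuming the two markers have already been reconstructed in 3D: it simply compares reconstructed inter-marker distances with the known bar length. The names and the 250 mm bar length are illustrative, not taken from the study.

```python
import numpy as np

def wand_length_errors(p1, p2, nominal_length_mm):
    """Errors of reconstructed inter-marker distances for a rigid bar.

    p1, p2 : (N, 3) arrays of reconstructed 3D positions (mm) of the two
             bar markers over N acquired poses.
    Returns per-pose absolute errors and their mean, the quantity used to
    validate calibration accuracy.
    """
    lengths = np.linalg.norm(p1 - p2, axis=1)
    errors = np.abs(lengths - nominal_length_mm)
    return errors, errors.mean()

# Toy data: a 250 mm bar reconstructed with ~1 mm noise per marker.
rng = np.random.default_rng(1)
a = rng.uniform(0, 1000, size=(50, 3))
direction = rng.normal(size=(50, 3))
direction /= np.linalg.norm(direction, axis=1, keepdims=True)
b = a + 250.0 * direction
errs, mean_err = wand_length_errors(a + rng.normal(0, 1, a.shape),
                                    b + rng.normal(0, 1, b.shape), 250.0)
print(round(mean_err, 2), "mm")
```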

  4. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvements, coupled with decreasing cost, are opening them up to quantitative three-dimensional (3D) motion analysis for studying sport gestures and evaluating athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, these requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since both land and underwater cameras are mandatory; in particular, underwater camera calibration can limit the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frame rate: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure was implemented, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively. After estimating the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error of the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  5. A Markerless 3D Computerized Motion Capture System Incorporating a Skeleton Model for Monkeys

    PubMed Central

    Nakamura, Tomoya; Matsumoto, Jumpei; Nishimaru, Hiroshi; Bretas, Rafael Vieira; Takamura, Yusaku; Hori, Etsuro; Ono, Taketoshi; Nishijo, Hisao

    2016-01-01

    In this study, we propose a novel markerless motion capture system (MCS) for monkeys, in which 3D surface images of monkeys were reconstructed by integrating data from four depth cameras, and a skeleton model of the monkey was fitted onto 3D images of monkeys in each frame of the video. To validate the MCS, first, estimated 3D positions of body parts were compared between the 3D MCS-assisted estimation and manual estimation based on visual inspection when a monkey performed a shuttling behavior in which it had to avoid obstacles in various positions. The mean estimation error of the positions of body parts (3–14 cm) and of head rotation (35–43°) between the 3D MCS-assisted and manual estimation were comparable to the errors between two different experimenters performing manual estimation. Furthermore, the MCS could identify specific monkey actions, and there was no false positive nor false negative detection of actions compared with those in manual estimation. Second, to check the reproducibility of MCS-assisted estimation, the same analyses of the above experiments were repeated by a different user. The estimation errors of positions of most body parts between the two experimenters were significantly smaller in the MCS-assisted estimation than in the manual estimation. Third, effects of methamphetamine (MAP) administration on the spontaneous behaviors of four monkeys were analyzed using the MCS. MAP significantly increased head movements, tended to decrease locomotion speed, and had no significant effect on total path length. The results were comparable to previous human clinical data. Furthermore, estimated data following MAP injection (total path length, walking speed, and speed of head rotation) correlated significantly between the two experimenters in the MCS-assisted estimation (r = 0.863 to 0.999). The results suggest that the presented MCS in monkeys is useful in investigating neural mechanisms underlying various psychiatric disorders and developing

  6. Validation of INSAT-3D atmospheric motion vectors for monsoon 2015

    NASA Astrophysics Data System (ADS)

    Sharma, Priti; Rani, S. Indira; Das Gupta, M.

    2016-05-01

    Atmospheric Motion Vectors (AMVs) over the Indian Ocean and surrounding region are one of the most important sources of tropospheric wind information assimilated in numerical weather prediction (NWP) systems. Earlier studies showed that the quality of AMVs from the Indian geostationary satellite Kalpana-1 was not comparable to that of other geostationary satellites over this region, and hence they were not used in NWP systems. The Indian satellite INSAT-3D was successfully launched on July 26, 2013 with an imaging system upgraded relative to that of the previous Indian satellite Kalpana-1. INSAT-3D has a middle infrared band (3.80 - 4.00 μm) capable of night-time imaging of low clouds and fog. Three consecutive images at 30-minute intervals are used to derive the AMVs. A new height assignment scheme (using the NWP first guess and replacing the old empirical GA method) along with a modified quality control scheme was implemented for deriving INSAT-3D AMVs. In this paper an attempt has been made to validate these AMVs against in-situ observations as well as against NCMRWF's NWP first guess for monsoon 2015. For validation purposes the AMVs are subdivided into three pressure layers: low (1000 - 700 hPa), middle (700 - 400 hPa), and high (400 - 100 hPa). Several statistics, such as the normalized root mean square vector difference and biases, have been computed over different latitudinal belts. The results show that the general mean monsoon circulation, along with all the transient monsoon systems, is well captured by INSAT-3D AMVs, and that the error statistics (e.g., RMSE) of INSAT-3D AMVs are now comparable to those of other geostationary satellites.
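
    A small illustrative sketch (not the operational NCMRWF code) of the kind of verification statistics mentioned above: speed bias, RMS vector difference, and its normalized form, computed from collocated AMV and reference winds. The function name and the toy collocation data are hypothetical.

```python
import numpy as np

def amv_verification_stats(u_amv, v_amv, u_ref, v_ref):
    """AMV verification statistics against collocated reference winds.

    Returns the speed bias, RMS vector difference (RMSVD), and RMSVD
    normalized by the mean reference wind speed.
    """
    spd_amv = np.hypot(u_amv, v_amv)
    spd_ref = np.hypot(u_ref, v_ref)
    speed_bias = np.mean(spd_amv - spd_ref)
    rmsvd = np.sqrt(np.mean((u_amv - u_ref) ** 2 + (v_amv - v_ref) ** 2))
    return speed_bias, rmsvd, rmsvd / np.mean(spd_ref)

# Toy example: 500 collocations.
rng = np.random.default_rng(2)
u_ref, v_ref = rng.normal(10, 8, 500), rng.normal(2, 6, 500)
u_amv = u_ref + rng.normal(0.5, 2.5, 500)
v_amv = v_ref + rng.normal(0.0, 2.5, 500)
print(amv_verification_stats(u_amv, v_amv, u_ref, v_ref))
```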

  7. Temporal diffeomorphic free-form deformation: application to motion and strain estimation from 3D echocardiography.

    PubMed

    De Craene, Mathieu; Piella, Gemma; Camara, Oscar; Duchateau, Nicolas; Silva, Etelvino; Doltra, Adelina; D'hooge, Jan; Brugada, Josep; Sitges, Marta; Frangi, Alejandro F

    2012-02-01

    This paper presents a new registration algorithm, called Temporal Diffeomorphic Free Form Deformation (TDFFD), and its application to motion and strain quantification from a sequence of 3D ultrasound (US) images. The originality of our approach resides in enforcing time consistency by representing the 4D velocity field as the sum of continuous spatiotemporal B-Spline kernels. The spatiotemporal displacement field is then recovered through forward Eulerian integration of the non-stationary velocity field. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement field. The energy functional considered in this paper weighs two terms: the image similarity and a regularization term. The image similarity metric is the sum of squared differences between the intensities of each frame and a reference one. Any frame in the sequence can be chosen as reference. The regularization term is based on the incompressibility of myocardial tissue. TDFFD was compared to pairwise 3D FFD and 3D+t FFD, both on displacement and velocity fields, on a set of synthetic 3D US images with different noise levels. TDFFD showed increased robustness to noise compared to these two state-of-the-art algorithms. TDFFD also proved to be more resistant to a reduced temporal resolution when decimating this synthetic sequence. Finally, this synthetic dataset was used to determine optimal settings of the TDFFD algorithm. Subsequently, TDFFD was applied to a database of cardiac 3D US images of the left ventricle acquired from 9 healthy volunteers and 13 patients treated by Cardiac Resynchronization Therapy (CRT). On healthy cases, uniform strain patterns were observed over all myocardial segments, as physiologically expected. On all CRT patients, the improvement in synchrony of regional longitudinal strain correlated with CRT clinical outcome as quantified by the reduction of end-systolic left ventricular volume at follow-up (6 and 12months), showing the potential
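
    To make the displacement-reconstruction step concrete, here is a minimal sketch of forward Eulerian integration of a time-varying velocity field, as used conceptually in TDFFD (where the velocity would be a sum of spatiotemporal B-spline kernels rather than the toy rigid rotation used here). This is an illustration, not the authors' implementation.

```python
import numpy as np

def integrate_displacement(velocity, points, t0, t1, n_steps=100):
    """Forward Eulerian integration of a time-varying velocity field.

    velocity : callable (points (N,3), t) -> (N,3) velocities.
    points   : (N, 3) material points at time t0.
    Returns the displacement of each point between t0 and t1.
    """
    x = points.astype(float).copy()
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        x = x + dt * velocity(x, t)
        t += dt
    return x - points

# Toy velocity field: rigid rotation about the z-axis at 0.5 rad/s.
def toy_velocity(pts, t):
    omega = 0.5
    return np.stack([-omega * pts[:, 1], omega * pts[:, 0],
                     np.zeros(len(pts))], axis=1)

pts = np.array([[10.0, 0.0, 0.0]])
print(integrate_displacement(toy_velocity, pts, 0.0, 1.0))
```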

  8. A Markerless 3D Computerized Motion Capture System Incorporating a Skeleton Model for Monkeys.

    PubMed

    Nakamura, Tomoya; Matsumoto, Jumpei; Nishimaru, Hiroshi; Bretas, Rafael Vieira; Takamura, Yusaku; Hori, Etsuro; Ono, Taketoshi; Nishijo, Hisao

    2016-01-01

    In this study, we propose a novel markerless motion capture system (MCS) for monkeys, in which 3D surface images of monkeys were reconstructed by integrating data from four depth cameras, and a skeleton model of the monkey was fitted onto 3D images of monkeys in each frame of the video. To validate the MCS, first, estimated 3D positions of body parts were compared between the 3D MCS-assisted estimation and manual estimation based on visual inspection when a monkey performed a shuttling behavior in which it had to avoid obstacles in various positions. The mean estimation error of the positions of body parts (3-14 cm) and of head rotation (35-43°) between the 3D MCS-assisted and manual estimation were comparable to the errors between two different experimenters performing manual estimation. Furthermore, the MCS could identify specific monkey actions, and there was no false positive nor false negative detection of actions compared with those in manual estimation. Second, to check the reproducibility of MCS-assisted estimation, the same analyses of the above experiments were repeated by a different user. The estimation errors of positions of most body parts between the two experimenters were significantly smaller in the MCS-assisted estimation than in the manual estimation. Third, effects of methamphetamine (MAP) administration on the spontaneous behaviors of four monkeys were analyzed using the MCS. MAP significantly increased head movements, tended to decrease locomotion speed, and had no significant effect on total path length. The results were comparable to previous human clinical data. Furthermore, estimated data following MAP injection (total path length, walking speed, and speed of head rotation) correlated significantly between the two experimenters in the MCS-assisted estimation (r = 0.863 to 0.999). The results suggest that the presented MCS in monkeys is useful in investigating neural mechanisms underlying various psychiatric disorders and developing

  9. A Soft Sensor-Based Three-Dimensional (3-D) Finger Motion Measurement System

    PubMed Central

    Park, Wookeun; Ro, Kyongkwan; Kim, Suin; Bae, Joonbum

    2017-01-01

    In this study, a soft sensor-based three-dimensional (3-D) finger motion measurement system is proposed. The sensors, made of the soft material Ecoflex, comprise embedded microchannels filled with a conductive liquid metal (EGaIn). The superior elasticity, light weight, and sensitivity of soft sensors allow them to be embedded in environments where conventional sensors cannot be. Complicated finger joints, such as the carpometacarpal (CMC) joint of the thumb, are modeled to specify the locations of the sensors. Algorithms to decouple the signals from the soft sensors are proposed to extract the pure flexion, extension, abduction, and adduction joint angles. The performance of the proposed system and algorithms is verified by comparison with a camera-based motion capture system. PMID:28241414

  10. Evaluation of a Gait Assessment Module Using 3D Motion Capture Technology

    PubMed Central

    Baskwill, Amanda J.; Belli, Patricia; Kelleher, Leila

    2017-01-01

    Background Gait analysis is the study of human locomotion. In massage therapy, this observation is part of an assessment process that informs treatment planning. Massage therapy students must apply the theory of gait assessment to simulated patients. At Humber College, the gait assessment module traditionally consists of a textbook reading and a three-hour, in-class session in which students perform gait assessment on each other. In 2015, Humber College acquired a three-dimensional motion capture system. Purpose The purpose was to evaluate the use of 3D motion capture in a gait assessment module compared to the traditional gait assessment module. Participants Semester 2 massage therapy students who were enrolled in Massage Theory 2 (n = 38). Research Design Quasi-experimental, wait-list comparison study. Intervention The intervention group participated in an in-class session with a Qualisys motion capture system. Main Outcome Measure(s) The outcomes included knowledge and application of gait assessment theory as measured by quizzes, and students’ satisfaction as measured through a questionnaire. Results There were no statistically significant differences in baseline and post-module knowledge between both groups (pre-module: p = .46; post-module: p = .63). There was also no difference between groups on the final application question (p = .13). The intervention group enjoyed the in-class session because they could visualize the content, whereas the comparison group enjoyed the interactivity of the session. The intervention group recommended adding the assessment of gait on their classmates to their experience. Both groups noted more time was needed for the gait assessment module. Conclusions Based on the results of this study, it is recommended that the gait assessment module combine both the traditional in-class session and the 3D motion capture system. PMID:28293329

  11. Evaluation of a Gait Assessment Module Using 3D Motion Capture Technology.

    PubMed

    Baskwill, Amanda J; Belli, Patricia; Kelleher, Leila

    2017-03-01

    Gait analysis is the study of human locomotion. In massage therapy, this observation is part of an assessment process that informs treatment planning. Massage therapy students must apply the theory of gait assessment to simulated patients. At Humber College, the gait assessment module traditionally consists of a textbook reading and a three-hour, in-class session in which students perform gait assessment on each other. In 2015, Humber College acquired a three-dimensional motion capture system. The purpose was to evaluate the use of 3D motion capture in a gait assessment module compared to the traditional gait assessment module. Semester 2 massage therapy students who were enrolled in Massage Theory 2 (n = 38). Quasi-experimental, wait-list comparison study. The intervention group participated in an in-class session with a Qualisys motion capture system. The outcomes included knowledge and application of gait assessment theory as measured by quizzes, and students' satisfaction as measured through a questionnaire. There were no statistically significant differences in baseline and post-module knowledge between both groups (pre-module: p = .46; post-module: p = .63). There was also no difference between groups on the final application question (p = .13). The intervention group enjoyed the in-class session because they could visualize the content, whereas the comparison group enjoyed the interactivity of the session. The intervention group recommended adding the assessment of gait on their classmates to their experience. Both groups noted more time was needed for the gait assessment module. Based on the results of this study, it is recommended that the gait assessment module combine both the traditional in-class session and the 3D motion capture system.

  12. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground-truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In a therapy guidance scenario, MR imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.
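
    A minimal sketch of the kind of comparison described above: a deformation field estimated from downsampled data is upsampled back to the original grid and compared voxel-wise with the reference field. It assumes both fields are already expressed in millimetres; the function and variable names are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

def dvf_endpoint_error(dvf_ref, dvf_coarse):
    """Mean 3D endpoint error between a reference displacement field and one
    estimated from spatially downsampled images.

    dvf_ref    : (X, Y, Z, 3) displacement field from the original-resolution data.
    dvf_coarse : (x, y, z, 3) displacement field from the downsampled data.
    """
    factors = [r / c for r, c in zip(dvf_ref.shape[:3], dvf_coarse.shape[:3])]
    upsampled = np.stack(
        [zoom(dvf_coarse[..., i], factors, order=1) for i in range(3)], axis=-1)
    return np.mean(np.linalg.norm(dvf_ref - upsampled, axis=-1))

# Toy example: a smooth synthetic DVF and its half-resolution version.
x = np.linspace(0, np.pi, 32)
ref = np.stack(np.meshgrid(np.sin(x), np.cos(x), np.sin(x), indexing="ij"), axis=-1)
coarse = ref[::2, ::2, ::2]
print(dvf_endpoint_error(ref, coarse))
```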

  13. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  14. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were also evaluated in a user study with test subjects. The results of the study revealed a high user preference for freehand interaction with the light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations of the proposed setup and adjustments to be addressed in future work. PMID:25875189

  15. Characterisation of dynamic couplings at lower limb residuum/socket interface using 3D motion capture.

    PubMed

    Tang, Jinghua; McGrath, Michael; Laszczak, Piotr; Jiang, Liudi; Bader, Dan L; Moser, David; Zahedi, Saeed

    2015-12-01

    Design and fitting of artificial limbs to lower limb amputees are largely based on the subjective judgement of the prosthetist. Understanding the science of three-dimensional (3D) dynamic coupling at the residuum/socket interface could potentially aid the design and fitting of the socket. A new method has been developed to characterise the 3D dynamic coupling at the residuum/socket interface using 3D motion capture based on a single case study of a trans-femoral amputee. The new model incorporated a Virtual Residuum Segment (VRS) and a Socket Segment (SS) which combined to form the residuum/socket interface. Angular and axial couplings between the two segments were subsequently determined. Results indicated a non-rigid angular coupling in excess of 10° in the quasi-sagittal plane and an axial coupling of between 21 and 35 mm. The corresponding angular couplings of less than 4° and 2° were estimated in the quasi-coronal and quasi-transverse plane, respectively. We propose that the combined experimental and analytical approach adopted in this case study could aid the iterative socket fitting process and could potentially lead to a new socket design.

  16. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations.
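
    The Dice similarity coefficient used for validation is straightforward to compute; a minimal, generic sketch (not tied to the authors' pipeline) follows.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary segmentations.

    DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: two overlapping masks in a small volume.
vol = np.zeros((20, 20, 20), dtype=bool)
auto, manual = vol.copy(), vol.copy()
auto[5:15, 5:15, 5:15] = True
manual[6:16, 5:15, 5:15] = True
print(round(dice_coefficient(auto, manual), 3))   # 0.9
```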

  17. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI

    PubMed Central

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2014-01-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697

  18. Interactive Motion Planning for Steerable Needles in 3D Environments with Obstacles

    PubMed Central

    Patil, Sachin; Alterovitz, Ron

    2011-01-01

    Bevel-tip steerable needles for minimally invasive medical procedures can be used to reach clinical targets that are behind sensitive or impenetrable areas and are inaccessible to straight, rigid needles. We present a fast algorithm that can compute motion plans for steerable needles to reach targets in complex, 3D environments with obstacles at interactive rates. The fast computation makes this method suitable for online control of the steerable needle based on 3D imaging feedback and allows physicians to interactively edit the planning environment in real-time by adding obstacle definitions as they are discovered or become relevant. We achieve this fast performance by using a Rapidly Exploring Random Tree (RRT) combined with a reachability-guided sampling heuristic to alleviate the sensitivity of the RRT planner to the choice of the distance metric. We also relax the constraint of constant-curvature needle trajectories by relying on duty-cycling to realize bounded-curvature needle trajectories. These characteristics enable us to achieve orders of magnitude speed-up compared to previous approaches; we compute steerable needle motion plans in under 1 second for challenging environments containing complex, polyhedral obstacles and narrow passages. PMID:22294214
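
    One detail worth making concrete is the duty-cycling mentioned above: by alternating pure insertion with insertion-plus-spinning, the effective curvature can be reduced below the needle's natural curvature. The sketch below uses the commonly cited linear approximation kappa_eff = (1 - alpha) * kappa_max; it is an illustration of that relationship, not the paper's planner.

```python
def duty_cycle_for_curvature(kappa_desired, kappa_max):
    """Spin duty cycle realizing an effective curvature below the needle's
    natural (maximum) curvature, under the linear approximation
        kappa_eff = (1 - alpha) * kappa_max,
    where alpha is the fraction of insertion spent spinning the needle.
    """
    if not 0 <= kappa_desired <= kappa_max:
        raise ValueError("desired curvature must lie in [0, kappa_max]")
    return 1.0 - kappa_desired / kappa_max

# Example: natural curvature 0.02 /mm, trajectory segment needs 0.012 /mm.
print(duty_cycle_for_curvature(0.012, 0.02))   # 0.4 -> spin 40% of the time
```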

  19. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking.

    PubMed

    Dettmer, Simon L; Keyser, Ulrich F; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
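
    As an illustration of the mean-squared-displacement analysis described above (not the authors' code), the sketch below computes an MSD-vs-lag curve for a 1D track and fits a diffusion coefficient assuming MSD(t) = 2Dt in one dimension.

```python
import numpy as np

def local_msd(track, max_lag):
    """Mean squared displacement vs lag time for a single 1D trajectory
    (e.g., positions along the channel axis)."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        diffs = track[lag:] - track[:-lag]
        msd[lag - 1] = np.mean(diffs ** 2)
    return msd

def diffusion_coefficient(msd, dt):
    """Fit D from MSD(t) = 2*D*t (one dimension) by regression through the origin."""
    lags = dt * np.arange(1, len(msd) + 1)
    return np.sum(lags * msd) / (2.0 * np.sum(lags ** 2))

# Toy example: simulated 1D Brownian track with D = 0.5 um^2/s, dt = 0.05 s.
rng = np.random.default_rng(3)
dt, D = 0.05, 0.5
track = np.cumsum(rng.normal(0, np.sqrt(2 * D * dt), size=5000))
print(diffusion_coefficient(local_msd(track, 10), dt))   # ~0.5
```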

  20. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  1. The ECM Moves during Primitive Streak Formation—Computation of ECM Versus Cellular Motion

    PubMed Central

    Zamir, Evan A; Rongish, Brenda J; Little, Charles D

    2008-01-01

    Galileo described the concept of motion relativity—motion with respect to a reference frame—in 1632. He noted that a person below deck would be unable to discern whether the boat was moving. Embryologists, while recognizing that embryonic tissues undergo large-scale deformations, have failed to account for relative motion when analyzing cell motility data. A century of scientific articles has advanced the concept that embryonic cells move (“migrate”) in an autonomous fashion such that, as time progresses, the cells and their progeny assemble an embryo. In sharp contrast, the motion of the surrounding extracellular matrix scaffold has been largely ignored/overlooked. We developed computational/optical methods that measure the extent embryonic cells move relative to the extracellular matrix. Our time-lapse data show that epiblastic cells largely move in concert with a sub-epiblastic extracellular matrix during stages 2 and 3 in primitive streak quail embryos. In other words, there is little cellular motion relative to the extracellular matrix scaffold—both components move together as a tissue. The extracellular matrix displacements exhibit bilateral vortical motion, convergence to the midline, and extension along the presumptive vertebral axis—all patterns previously attributed solely to cellular “migration.” Our time-resolved data pose new challenges for understanding how extracellular chemical (morphogen) gradients, widely hypothesized to guide cellular trajectories at early gastrulation stages, are maintained in this dynamic extracellular environment. We conclude that models describing primitive streak cellular guidance mechanisms must be able to account for sub-epiblastic extracellular matrix displacements.

  2. Feasibility Study for Ballet E-Learning: Automatic Composition System for Ballet "Enchainement" with Online 3D Motion Data Archive

    ERIC Educational Resources Information Center

    Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako

    2009-01-01

    This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…

  3. Feasibility Study for Ballet E-Learning: Automatic Composition System for Ballet "Enchainement" with Online 3D Motion Data Archive

    ERIC Educational Resources Information Center

    Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako

    2009-01-01

    This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…

  4. Multiview diffeomorphic registration: application to motion and strain estimation from 3D echocardiography.

    PubMed

    Piella, Gemma; De Craene, Mathieu; Butakoff, Constantine; Grau, Vicente; Yao, Cheng; Nedjati-Gilani, Shahrum; Penney, Graeme P; Frangi, Alejandro F

    2013-04-01

    This paper presents a new registration framework for quantifying myocardial motion and strain from the combination of multiple 3D ultrasound (US) sequences. The originality of our approach lies in the estimation of the transformation directly from the input multiple views rather than from a single view or a reconstructed compounded sequence. This allows us to exploit all spatiotemporal information available in the input views avoiding occlusions and image fusion errors that could lead to some inconsistencies in the motion quantification result. We propose a multiview diffeomorphic registration strategy that enforces smoothness and consistency in the spatiotemporal domain by modeling the 4D velocity field continuously in space and time. This 4D continuous representation considers 3D US sequences as a whole, therefore allowing to robustly cope with variations in heart rate resulting in different number of images acquired per cardiac cycle for different views. This contributes to the robustness gained by solving for a single transformation from all input sequences. The similarity metric takes into account the physics of US images and uses a weighting scheme to balance the contribution of the different views. It includes a comparison both between consecutive images and between a reference and each of the following images. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement fields. Registration and strain accuracy were evaluated on synthetic 3D US sequences with known ground truth. Experiments were also conducted on multiview 3D datasets of 8 volunteers and 1 patient treated by cardiac resynchronization therapy. Strain curves obtained from our multiview approach were compared to the single-view case, as well as with other multiview approaches. For healthy cases, the inclusion of several views improved the consistency of the strain curves and reduced the number of segments where a non-physiological strain pattern was

  5. Static versus dynamic kinematics in cyclists: A comparison of goniometer, inclinometer and 3D motion capture.

    PubMed

    Holliday, W; Fisher, J; Theo, R; Swart, J

    2017-07-27

    Kinematic measurements conducted during bike set-ups utilise either static or dynamic measures. There are currently limited data on the reliability of static and dynamic measures and no consensus on which method is optimal. The aim of the study was to assess the difference between static and dynamic measures of the ankle, knee, hip, shoulder and elbow. Nineteen subjects performed three separate trials for a 10-min duration at a fixed workload (70% of peak power output). Static measures were taken with a standard goniometer (GM) and an inclinometer (IM), and dynamic measures with three-dimensional motion capture (3DMC) using an eight-camera motion capture system. Static and dynamic joint angles were compared over the three trials to assess repeatability of the measurements and differences between static and dynamic values. There was a positive correlation between GM and IM measures for all joints. Only the knee, shoulder and elbow were positively correlated between GM and 3DMC, and IM and 3DMC. Although all three instruments were reliable, 3D motion analysis utilised different landmarks for most joints and produced different means. Changes in knee flexion angle from static to dynamic are attributable to changes in the positioning of the foot; controlling for this factor, the differences are negated. It was demonstrated that 3DMC is not interchangeable with GM and IM, and it is recommended that independent reference values for bicycle configuration be developed for 3DMC.
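
    For readers unfamiliar with how a joint angle is obtained from motion-capture data, here is a generic three-marker sketch (hip-knee-ankle for the knee); it is illustrative only and does not reproduce the marker set or joint conventions used in the study.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Included angle (degrees) at a joint defined by three 3D marker positions,
    e.g. hip-knee-ankle for knee flexion/extension."""
    u = proximal - joint
    v = distal - joint
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Example: hypothetical marker positions (mm) during a pedal stroke.
hip = np.array([0.0, 0.0, 900.0])
knee = np.array([150.0, 0.0, 500.0])
ankle = np.array([100.0, 0.0, 80.0])
print(round(joint_angle(hip, knee, ankle), 1))   # included knee angle in degrees
```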

  6. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis.

    PubMed

    Pfister, Alexandra; West, Alexandre M; Bronner, Shaw; Noah, Jack Adam

    2014-07-01

    Biomechanical analysis is a powerful tool in the evaluation of movement dysfunction in orthopaedic and neurologic populations. Three-dimensional (3D) motion capture systems are widely used, accurate systems, but are costly and not available in many clinical settings. The Microsoft Kinect™ has the potential to be used as an alternative low-cost motion analysis tool. The purpose of this study was to assess concurrent validity of the Kinect™ with Brekel Kinect software in comparison to Vicon Nexus during sagittal plane gait kinematics. Twenty healthy adults (nine male, 11 female) were tracked while walking and jogging at three velocities on a treadmill. Concurrent hip and knee peak flexion and extension and stride timing measurements were compared between Vicon and Kinect™. Although Kinect measurements were representative of normal gait, the Kinect™ generally under-estimated joint flexion and over-estimated extension. Kinect™ and Vicon hip angular displacement correlation was very low and error was large. Kinect™ knee measurements were somewhat better than hip, but were not consistent enough for clinical assessment. Correlation between Kinect™ and Vicon stride timing was high and error was fairly small. Variability in Kinect™ measurements was smallest at the slowest velocity. The Kinect™ has basic motion capture capabilities and with some minor adjustments will be an acceptable tool to measure stride timing, but sophisticated advances in software and hardware are necessary to improve Kinect™ sensitivity before it can be implemented for clinical use.

  7. Assessing the 3D accuracy of consumer grade distance camera measurement of respiratory motion

    NASA Astrophysics Data System (ADS)

    Samir, M.; Golkar, E.; Rahni, A. A. Abd

    2017-05-01

    Recently, range imagers or distance camera systems have garnered interest for measuring respiratory motion without using markers; the measured motion can then be used as a surrogate in diagnosis and treatment, for example in diagnostic imaging or radiotherapy. However, their use may have limitations, especially among lower-cost systems, whose accuracy decreases greatly with the distance of the patient from the camera. This matters because the motion amplitude of the anterior surface of the body in normal breathing is typically around 1 cm or less, which is at the limit of accuracy of these systems. The limitation is even more pertinent given that roughly 1 cm accuracy is desired over the whole imaged anterior surface, not just as an average distance measurement. We study this limitation in a low-cost system, the Microsoft Kinect™, using both version 1 and version 2 of the sensor. The 3D accuracy of both versions is compared with an alternative method of respiratory motion measurement, a respiratory belt, at a distance of around 1.35 m. This study can serve as a guide for the design and application of range imaging systems in the clinical setting.

  8. 3D cardiac motion reconstruction from CT data and tagged MRI.

    PubMed

    Wang, Xiaoxu; Mihalef, Viorel; Qian, Zhen; Voros, Szilard; Metaxas, Dimitris

    2012-01-01

    In this paper we present a novel method for left ventricle (LV) endocardial motion reconstruction using high-resolution CT data and tagged MRI. High-resolution CT data provide anatomic detail on the LV endocardial surface, such as the papillary muscles and trabeculae carneae, while tagged MRI provides better temporal resolution. The combination of these two imaging techniques gives a better understanding of left-ventricular motion. The high-resolution CT images are segmented with a mean shift method to generate the LV endocardial mesh. A meshless deformable model, built from the high-resolution endocardial surface extracted from the CT data, is fitted to the tagged MRI of the same cardiac phase. The 3D deformation of the myocardium is computed with Lagrangian dynamics and local Laplacian deformation. The segmented inner surface of the left ventricle was compared with a picture of the heart's inner surface and showed high agreement: the papillary muscles are attached to the inner surface at their roots, and the free wall of the left-ventricular inner surface is covered with trabeculae carneae. The deformation of the heart wall and the papillary muscles over the first half of the cardiac cycle is presented, and the motion reconstruction results closely match live heart video.

  9. 3D radial sampling and 3D affine transform-based respiratory motion correction technique for free-breathing whole-heart coronary MRA with 100% imaging efficiency.

    PubMed

    Bhat, Himanshu; Ge, Lan; Nielles-Vallespin, Sonia; Zuehlsdorff, Sven; Li, Debiao

    2011-05-01

    The navigator gating and slice tracking approach currently used for respiratory motion compensation during free-breathing coronary magnetic resonance angiography (MRA) has low imaging efficiency (typically 30-50%), resulting in long imaging times. In this work, a novel respiratory motion correction technique with 100% scan efficiency was developed for free-breathing whole-heart coronary MRA. The navigator signal was used as a reference respiratory signal to segment the data into six bins. 3D projection reconstruction k-space sampling was used for data acquisition and enabled reconstruction of low resolution images within each respiratory bin. The motion between bins was estimated by image registration with a 3D affine transform. The data from the different respiratory bins was retrospectively combined after motion correction to produce the final image. The proposed method was compared with a traditional navigator gating approach in nine healthy subjects. The proposed technique acquired whole-heart coronary MRA with 1.0 mm(3) isotropic spatial resolution in a scan time of 6.8 ± 0.9 min, compared with 16.2 ± 2.8 min for the navigator gating approach. The image quality scores, and length, diameter and sharpness of the right coronary artery (RCA), left anterior descending coronary artery (LAD), and left circumflex coronary artery (LCX) were similar for both approaches (P > 0.05 for all), but the proposed technique reduced scan time by a factor of 2.5. Copyright © 2011 Wiley-Liss, Inc.
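
    A minimal sketch of the respiratory-binning step described above: readouts are assigned to one of six bins according to the navigator amplitude. Quantile-based bin edges are assumed here for illustration; the paper's exact binning rule may differ.

```python
import numpy as np

def assign_respiratory_bins(navigator, n_bins=6):
    """Assign each readout to one of n_bins respiratory bins by navigator amplitude.

    Bins are defined so that each contains roughly the same number of readouts
    (amplitude binning by quantiles).
    """
    edges = np.quantile(navigator, np.linspace(0, 1, n_bins + 1))
    return np.clip(np.digitize(navigator, edges[1:-1]), 0, n_bins - 1)

# Toy navigator trace: ~0.25 Hz breathing sampled at 2 Hz for 7 minutes.
t = np.arange(0, 420, 0.5)
navigator = 10 * np.sin(2 * np.pi * 0.25 * t) \
            + np.random.default_rng(4).normal(0, 0.5, t.size)
bins = assign_respiratory_bins(navigator)
print(np.bincount(bins))   # number of readouts per respiratory bin
```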

  10. Automated 3D motion tracking using Gabor filter bank, robust point matching, and deformable models.

    PubMed

    Chen, Ting; Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2010-01-01

    Tagged magnetic resonance imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the robust point matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of 1) through-plane motion and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the moving least square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method
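
    To illustrate the first step (tag detection with a Gabor filter bank), here is a generic sketch that builds oriented Gabor kernels tuned to the tag spacing and takes the maximum response across orientations. It is a simplification of the local phase analysis used in the paper, with invented names and parameters.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(wavelength, orientation, sigma, size=21):
    """Real part of a 2D Gabor kernel tuned to a tag wavelength (pixels)
    and orientation (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_bank_response(image, wavelength, orientations, sigma=4.0):
    """Maximum magnitude response over a small bank of oriented Gabor filters;
    peaks of the combined response roughly indicate tag-line locations."""
    responses = [np.abs(convolve(image, gabor_kernel(wavelength, th, sigma)))
                 for th in orientations]
    return np.max(responses, axis=0)

# Toy tagged image: two orthogonal sinusoidal tag patterns with 8-pixel spacing.
y, x = np.mgrid[0:64, 0:64].astype(float)
image = np.cos(2 * np.pi * x / 8) + np.cos(2 * np.pi * y / 8)
resp = gabor_bank_response(image, wavelength=8, orientations=[0, np.pi / 2])
print(resp.shape)
```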

  11. Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models

    PubMed Central

    Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method

  12. Stopping is not an option: the evolution of unstoppable motion elements (primitives).

    PubMed

    Sosnik, Ronen; Chaim, Eliyahu; Flash, Tamar

    2015-08-01

    Stopping performance is known to depend on low-level motion features, such as movement velocity. It is not known, however, whether it is also subject to high-level motion constraints. Here, we report results of 15 subjects instructed to connect four target points depicted on a digitizing tablet and stop "as rapidly as possible" upon hearing a "stop" cue (tone). Four subjects connected target points with straight paths, whereas 11 subjects generated movements corresponding to coarticulation between adjacent movement components. For the noncoarticulating and coarticulating subjects, stopping performance was not correlated or only weakly correlated with motion velocity, respectively. The generation of a straight, point-to-point movement or a smooth, curved trajectory was not disturbed by the occurrence of a stop cue. Overall, the results indicate that stopping performance is subject to high-level motion constraints, such as the completion of a geometrical plan, and that globally planned movements, once started, must run to completion, providing evidence for the definition of a motion primitive as an unstoppable motion element.

  13. Improved Maneuvering Forces and Autopilot Modelling for the ShipMo3D Ship Motion Library

    DTIC Science & Technology

    2008-09-01

    Bhull-man26V = −(1/2) ρ L² Tmid Y′_r − Bhull-rad-stab26U (ωe = 0)   (23)
    Bhull-man62V = −(1/2) ρ L² Tmid N′_v − Bhull-rad-stab62U (ωe = 0)   (24)
    Bhull-man66V = ...
    ShipMo3D is an object-oriented library with associated user applications for predicting ship motions in calm

  14. Kinematic ground motion simulations on rough faults including effects of 3D stochastic velocity perturbations

    USGS Publications Warehouse

    Graves, Robert; Pitarka, Arben

    2016-01-01

    We describe a methodology for generating kinematic earthquake ruptures for use in 3D ground‐motion simulations over the 0–5 Hz frequency band. Our approach begins by specifying a spatially random slip distribution that has a roughly wavenumber‐squared fall‐off. Given a hypocenter, the rupture speed is specified to average about 75%–80% of the local shear wavespeed and the prescribed slip‐rate function has a Kostrov‐like shape with a fault‐averaged rise time that scales self‐similarly with the seismic moment. Both the rupture time and rise time include significant local perturbations across the fault surface specified by spatially random fields that are partially correlated with the underlying slip distribution. We represent velocity‐strengthening fault zones in the shallow (<5  km) and deep (>15  km) crust by decreasing rupture speed and increasing rise time in these regions. Additional refinements to this approach include the incorporation of geometric perturbations to the fault surface, 3D stochastic correlated perturbations to the P‐ and S‐wave velocity structure, and a damage zone surrounding the shallow fault surface characterized by a 30% reduction in seismic velocity. We demonstrate the approach using a suite of simulations for a hypothetical Mw 6.45 strike‐slip earthquake embedded in a generalized hard‐rock velocity structure. The simulation results are compared with the median predictions from the 2014 Next Generation Attenuation‐West2 Project ground‐motion prediction equations and show very good agreement over the frequency band 0.1–5 Hz for distances out to 25 km from the fault. Additionally, the newly added features act to reduce the coherency of the radiated higher frequency (f>1  Hz) ground motions, and homogenize radiation‐pattern effects in this same bandwidth, which move the simulations closer to the statistical characteristics of observed motions as illustrated by comparison with recordings from
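
    A spatially random field with a roughly wavenumber-squared fall-off, in the spirit of the slip distributions described above, can be generated by shaping white noise in the wavenumber domain. The sketch below is a minimal illustration; the grid size, corner wavenumber and normalization are arbitrary choices, not values from the paper.

        import numpy as np

        def random_slip(nx=256, nz=128, dx=0.5, kc=0.05, seed=1):
            """White noise filtered to a flat spectrum below the corner wavenumber kc and ~k^-2 above it."""
            rng = np.random.default_rng(seed)
            noise = rng.standard_normal((nz, nx))
            kx = np.fft.fftfreq(nx, d=dx)
            kz = np.fft.fftfreq(nz, d=dx)
            k = np.sqrt(kx[None, :]**2 + kz[:, None]**2)
            amp = 1.0 / (1.0 + (k / kc)**2)
            slip = np.fft.ifft2(np.fft.fft2(noise) * amp).real
            slip -= slip.min()                 # keep slip non-negative
            return slip / slip.mean()          # normalize mean slip to 1

        slip = random_slip()
        print(slip.shape, round(float(slip.mean()), 3))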

  15. Real-time 2D/3D registration for tumor motion tracking during radiotherapy

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Bloch, C.; Spoerk, J.; Pawiro, S. A.; Weber, C.; Figl, M.; Stock, M.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2012-02-01

    Organ motion during radiotherapy is one of the causes of uncertainty in dose delivery. To cope with this, the planned target volume (PTV) has to be larger than otherwise needed to guarantee full tumor irradiation. Existing methods deal with the problem by performing tumor tracking using implanted fiducial markers or magnetic sensors. In this work, we investigate the feasibility of using x-ray based real-time 2D/3D registration for non-invasive tumor motion tracking during radiotherapy. Our method uses purely intensity-based techniques, thus avoiding markers or fiducials. X-rays are acquired during treatment at a rate of 5.4 Hz. We iteratively compare each x-ray with a set of digitally reconstructed radiographs (DRRs) generated from the planning volume dataset, finding the optimal match between the x-ray and one of the DRRs. The DRRs are generated using a ray-casting algorithm, implemented with general-purpose computation on graphics hardware (GPGPU) programming techniques using CUDA for greater performance. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the PTV. The phantom motion is measured with an rms error of 2.1 mm and a mean registration time of 220 ms. For the patient data sets, a sinusoidal movement that clearly correlates with the breathing cycle is seen. Mean registration time is always under 105 ms, which is well suited for our purposes. These results demonstrate that real-time organ motion monitoring using image-based markerless registration is feasible.
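
    The core loop of an intensity-based 2D/3D match is to generate DRRs for candidate transformations and keep the one most similar to the live x-ray. The sketch below is illustrative only: it replaces the GPU ray-caster with a trivial parallel projection (a sum along one axis of a random test volume), restricts the search to in-plane shifts, and scores candidates with normalized cross-correlation.

        import numpy as np
        from scipy.ndimage import shift

        def drr(volume, dx=0.0, dy=0.0):
            """Toy DRR: parallel projection of a (translated) volume along its first axis."""
            return shift(volume, (0.0, dy, dx), order=1).sum(axis=0)

        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())

        def register(xray, volume, search=range(-5, 6)):
            """Exhaustive search over in-plane shifts; return the best (dx, dy)."""
            scores = {(dx, dy): ncc(xray, drr(volume, dx, dy)) for dx in search for dy in search}
            return max(scores, key=scores.get)

        vol = np.random.default_rng(0).random((32, 64, 64))
        live_xray = drr(vol, dx=3, dy=-2)        # simulated acquisition with a known offset
        print(register(live_xray, vol))          # expected to recover (3, -2)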

  16. Rupture dynamics and ground motion from 3-D rough-fault simulations

    NASA Astrophysics Data System (ADS)

    Shi, Zheqiang; Day, Steven M.

    2013-03-01

    We perform three-dimensional (3-D) numerical calculations of dynamic rupture along non-planar faults to study the effects of fault roughness on rupture propagation and resultant ground motion. The fault roughness model follows a self-similar fractal distribution over length scales spanning three orders of magnitude, from ~10² to ~10⁵ m. The fault is governed by a strongly rate-weakening friction, and the bulk material is subject to Drucker-Prager viscoplasticity. Fault roughness promotes the development of self-healing rupture pulses and a heterogeneous distribution of fault slip at the free surface and at depth. The inelastic deformation, generated by the large dynamic stress near rupture fronts, occurs in a narrow volume around the fault with heterogeneous thickness correlated to local roughness slopes. Inelastic deformation near the free surface, however, is induced by the stress waves originating from dynamic rupture at depth and spreads to large distances (>10 km) away from the fault. The present simulations model seismic wave excitation up to ~10 Hz with rupture lengths of ~100 km, permitting comparisons with empirical studies of ground-motion intensity measures of engineering interest. Characteristics of site-averaged synthetic response spectra, including the distance and period dependence of the median values, absolute level, and intra-event standard deviation, are comparable to appropriate empirical estimates throughout the period range 0.1-3.0 s. This class of model may provide a viable representation of the ground-motion excitation process over a wide frequency range in a large spatial domain, with potential applications to the numerical prediction of source- and path-specific effects on earthquake ground motion.

  17. DLP technology application: 3D head tracking and motion correction in medical brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Wilm, Jakob; Paulsen, Rasmus R.; Højgaard, Liselotte; Larsen, Rasmus

    2014-03-01

    In this paper we present a novel sensing system, robust Near-infrared Structured Light Scanning (NIRSL), for three-dimensional human model scanning applications. Human model scanning has long been a challenging task because of the variety of hair and clothing appearances and because of body motion. Previous structured light scanning methods typically emitted visible coded light patterns onto static and opaque objects to establish correspondence between a projector and a camera for triangulation. The success of these methods relies on scanning objects whose surfaces reflect visible light well, such as plaster or light-colored cloth. For human model scanning, by contrast, conventional methods suffer from a low signal-to-noise ratio caused by the low contrast of visible light over the human body. The proposed robust NIRSL, implemented with near-infrared light, is capable of recovering dark surfaces, such as hair, dark jeans and black shoes, under visible illumination. Moreover, a successful structured light scan relies on the assumption that the subject is static during scanning, an assumption that is difficult to maintain for a human subject because of body motion. The proposed sensing system, by utilizing the new near-infrared-capable high-speed LightCrafter DLP projector, is robust to motion and provides an accurate, high-resolution three-dimensional point cloud, making our system more efficient and robust for human model reconstruction. Experimental results demonstrate that our system is effective and efficient at scanning real human models with various dark hair, jeans and shoes, is robust to human body motion, and produces an accurate, high-resolution 3D point cloud.

  18. 3D tracking the Brownian motion of colloidal particles using digital holographic microscopy and joint reconstruction.

    PubMed

    Verrier, Nicolas; Fournier, Corinne; Fournel, Thierry

    2015-06-01

    In-line digital holography is a valuable tool for sizing, locating, and tracking micro- or nano-objects in a volume. When a parametric imaging model is available, inverse problem approaches provide a straightforward estimate of the object parameters by fitting data with the model, thereby allowing accurate reconstruction. As recently proposed and demonstrated, combining pixel super-resolution techniques with inverse problem approaches improves the estimation of particle size and 3D position. Here, we demonstrate the accurate tracking of colloidal particles in Brownian motion. Particle size and 3D position are jointly optimized from video holograms acquired with a digital holographic microscopy setup based on a low-end microscope objective (×20, NA 0.5). Exploiting information redundancy makes it possible to characterize particles with a standard deviation of 15 nm in size and a theoretical resolution of 2×2×5 nm³ for position under an additive white Gaussian noise assumption.
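
    The inverse-problem idea above amounts to fitting a parametric image model to the recorded hologram. As a greatly simplified stand-in, the sketch below fits the 2D position, radius and amplitude of a Gaussian "particle signature" to noisy data by least squares; the actual holographic model and the joint pixel super-resolved reconstruction are far richer, and all values here are synthetic.

        import numpy as np
        from scipy.optimize import least_squares

        yy, xx = np.mgrid[0:64, 0:64].astype(float)

        def model(p):
            x0, y0, r, amp = p
            return amp * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * r**2))

        rng = np.random.default_rng(2)
        truth = (30.3, 22.7, 3.1, 1.0)
        data = model(truth) + 0.05 * rng.standard_normal(xx.shape)

        fit = least_squares(lambda p: (model(p) - data).ravel(), x0=[32, 32, 2, 0.5])
        print(np.round(fit.x, 2))   # should land close to (30.3, 22.7, 3.1, 1.0)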

  19. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images

    NASA Astrophysics Data System (ADS)

    Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao

    2016-11-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate a SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to less computational costs. At the same time the achieved scene completeness and geometric accuracy are comparable.
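
    The skeletal-network idea can be reproduced in miniature by building a weighted graph of image pairs (weights standing in for expected overlap) and extracting a maximum spanning tree from it; the hierarchical degree-bounded refinement and the 3-view guarantee described in the paper are not reproduced here. The edge weights below are made up, and networkx is assumed to be available.

        import networkx as nx

        # Hypothetical overlap scores between UAV images (higher = more shared coverage).
        pairs = [("img0", "img1", 0.9), ("img1", "img2", 0.8), ("img0", "img2", 0.4),
                 ("img2", "img3", 0.7), ("img1", "img3", 0.3), ("img3", "img4", 0.6)]

        tcn = nx.Graph()
        tcn.add_weighted_edges_from(pairs)

        # A maximum spanning tree keeps the strongest connections while staying connected.
        scn = nx.maximum_spanning_tree(tcn, weight="weight")
        print(sorted(scn.edges(data="weight")))
        # Tie-point matching would then be restricted to these edges rather than all pairs.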

  20. The representation of moving 3-D objects in apparent motion perception.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2009-08-01

    In the present research, we investigated the depth information contained in the representations of apparently moving 3-D objects. By conducting three experiments, we measured the magnitude of representational momentum (RM) as an index of the consistency of an object's representation. Experiment 1A revealed that RM magnitude was greater when shaded, convex, apparently moving objects shifted to a flat circle than when they shifted to a shaded, concave, hemisphere. The difference diminished when the apparently moving objects were concave hemispheres (Experiment 1B). Using luminance-polarized circles, Experiment 2 confirmed that these results were not due to the luminance information of shading. Experiment 3 demonstrated that RM magnitude was greater when convex apparently moving objects shifted to particular blurred convex hemispheres with low-pass filtering than when they shifted to concave hemispheres. These results suggest that the internal object's representation in apparent motion contains incomplete depth information intermediate between that of 2-D and 3-D objects, particularly with regard to convexity information with low-spatial-frequency components.

  1. Velocity and Density Models Incorporating the Cascadia Subduction Zone for 3D Earthquake Ground Motion Simulations

    USGS Publications Warehouse

    Stephenson, William J.

    2007-01-01

    INTRODUCTION In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2°N to 50°N latitude, and from about -122°W to -129°W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper-mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.

  2. Correlation between the respiratory waveform measured using a respiratory sensor and 3D tumor motion in gated radiotherapy

    SciTech Connect

    Tsunashima, Yoshikazu . E-mail: tsunashima@pmrc.tsukuba.ac.jp; Sakae, Takeji; Shioyama, Yoshiyuki; Kagei, Kenji; Terunuma, Toshiyuki; Nohtomi, Akihiro; Akine, Yasuyuki

    2004-11-01

    Purpose: The purpose of this study is to investigate the correlation between the respiratory waveform measured using a respiratory sensor and three-dimensional (3D) tumor motion. Methods and materials: A laser displacement sensor (LDS: KEYENCE LB-300) that measures distance using infrared light was used as the respiratory sensor. This was placed such that the focus was in an area around the patient's navel. When the distance from the LDS to the body surface changes as the patient breathes, the displacement is detected as a respiratory waveform. To obtain the 3D tumor motion, a biplane digital radiography unit was used. For the tumor in the lung, liver, and esophagus of 26 patients, the waveform was compared with the 3D tumor motion. The relationship between the respiratory waveform and the 3D tumor motion was analyzed by means of the Fourier transform and a cross-correlation function. Results: The respiratory waveform cycle agreed with that of the cranial-caudal and dorsal-ventral tumor motion. A phase shift observed between the respiratory waveform and the 3D tumor motion was principally in the range 0.0 to 0.3 s, regardless of the organ being measured, which means that the respiratory waveform does not always express the 3D tumor motion with fidelity. For this reason, the standard deviation of the tumor position in the expiration phase, as indicated by the respiratory waveform, was derived, which should be helpful in suggesting the internal margin required in the case of respiratory gated radiotherapy. Conclusion: Although obtained from only a few breathing cycles for each patient, the correlation between the respiratory waveform and the 3D tumor motion was evident in this study. If this relationship is analyzed carefully and an internal margin is applied, the accuracy and convenience of respiratory gated radiotherapy could be improved by use of the respiratory sensor. Thus, it is expected that this procedure will come into wider use.
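
    The phase-shift analysis described above can be reproduced in outline by cross-correlating the two signals and reading off the lag of the correlation peak. The sketch below does this for two synthetic sinusoids with a known 0.2 s offset; the sampling rate and waveforms are illustrative, not patient data.

        import numpy as np

        fs = 30.0                                       # sampling rate (Hz), illustrative
        t = np.arange(0, 20, 1 / fs)
        resp = np.sin(2 * np.pi * 0.25 * t)             # respiratory waveform, ~4 s period
        tumor = np.sin(2 * np.pi * 0.25 * (t - 0.2))    # tumor motion lagging by 0.2 s

        resp_z = (resp - resp.mean()) / resp.std()
        tumor_z = (tumor - tumor.mean()) / tumor.std()
        xcorr = np.correlate(tumor_z, resp_z, mode="full") / len(t)
        lags = np.arange(-len(t) + 1, len(t)) / fs
        print(f"estimated lag: {lags[np.argmax(xcorr)]:.2f} s")   # ~0.20 s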

  3. 3D landslide motion from a UAV-derived time-series of morphological attributes

    NASA Astrophysics Data System (ADS)

    Valasia Peppa, Maria; Mills, Jon Philip; Moore, Philip; Miller, Pauline; Chambers, Jon

    2017-04-01

    Landslides are recognised as dynamic and significantly hazardous phenomena. Time-series observations can improve the understanding of a landslide's complex behaviour and aid assessment of its geometry and kinematics. Conventional quantification of landslide motion involves the installation of survey markers into the ground at discrete locations and periodic observations over time. However, such surveying is labour intensive, provides limited spatial resolution, is occasionally hazardous for steep terrain, or even impossible for inaccessible mountainous areas. The emergence of mini unmanned aerial vehicles (UAVs) equipped with off-the-shelf compact cameras, alongside the structure-from-motion (SfM) photogrammetric pipeline and modern pixel-based matching approaches, has expedited the automatic generation of high resolution digital elevation models (DEMs). Moreover, cross-correlation functions applied to finely co-registered consecutive orthomosaics and/or DEMs have been widely used to determine the displacement of moving features in an automated way, resulting in high spatial resolution motion vectors. This research focuses on estimating the 3D displacement field of an active slow moving earth-slide earth-flow landslide located in Lias mudrocks of North Yorkshire, UK, with the ultimate aim of assessing landslide deformation patterns. The landslide extends approximately 290 m E-W and 230 m N-S, with an average slope of 12˚ and 50 m elevation difference from N-S. Cross-correlation functions were applied to an eighteen-month duration, UAV-derived, time-series of morphological attributes in order to determine motion vectors for subsequent landslide analysis. A self-calibrating bundle adjustment was firstly incorporated into the SfM pipeline and utilised to process imagery acquired using a Panasonic Lumix DMC-LX5 compact camera from a mini fixed-wing Quest 300 UAV, with 2 m wingspan and maximum 5 kg payload. Data from six field campaigns were used to generate a DEM time

  4. The role of perspective information in the recovery of 3D structure-from-motion.

    PubMed

    Eagle, R A; Hogervorst, M A

    1999-05-01

    When investigating the recovery of three-dimensional structure-from-motion (SFM), vision scientists often assume that scaled-orthographic projection, which removes effects due to depth variations across the object, is an adequate approximation to full perspective projection. This is so even though SFM judgements can, in principle, be improved by exploiting perspective projection of scenes on to the retina. In an experiment, pairs of rotating hinged planes (open books) were simulated on a computer monitor, under either perspective or orthographic projection, and human observers were asked to indicate which they perceived had the larger dihedral angle. For small displays (4.6 x 6.0 degrees) discrimination thresholds were found to be similar under the two conditions, but diverged for all larger stimuli. In particular, as stimulus size was increased, performance under orthographic projection declined and by a stimulus size of 32 x 41 degrees performance was at chance for all subjects. In contrast, thresholds decreased under perspective projection as stimulus size was increased. These results show that human observers can use the information gained from perspective projection to recover SFM and that scaled-orthographic projection becomes an unacceptable approximation even at quite modest stimulus sizes. A model of SFM that incorporates measurement errors on the retinal motions accounts for performance under both projection systems, suggesting that this early noise forms the primary limitation on 3D discrimination performance.

  5. Recording High Resolution 3D Lagrangian Motions In Marine Dinoflagellates using Digital Holographic Microscopic Cinematography

    NASA Astrophysics Data System (ADS)

    Sheng, J.; Malkiel, E.; Katz, J.; Place, A. R.; Belas, R.

    2006-11-01

    Detailed data on swimming behavior and locomotion for dense populations of dinoflagellates constitutes a key component to understanding cell migration, cell-cell interactions and predator-prey dynamics, all of which affect algae bloom dynamics. Due to the multi-dimensional nature of flagellated cell motions, spatial-temporal Lagrangian measurements of multiple cells in high concentration are very limited. Here we present detailed data on 3D Lagrangian motions for three marine dinoflagellates: Oxyrrhis marina, Karlodinium veneficum, and Pfiesteria piscicida, using digital holographic microscopic cinematography. The measurements are performed in a 5x5x25 mm cuvette with cell densities varying from 50,000 ~ 90,000 cells/ml. Approximately 200-500 cells are tracked simultaneously for 12 s at 60 fps in a sample volume of 1x1x5 mm at a spatial resolution of 0.4x0.4x2 μm. We fully resolve the longitudinal flagella (~200 nm) along with the Lagrangian trajectory of each organism. Species-dependent swimming behaviors are identified and categorized quantitatively by velocities, radii of curvature, and rotations of pitch. Statistics on locomotion, temporal & spatial scales, and diffusion rate show substantial differences between species. The scaling between turning radius and cell dimension can be explained by a distributed stokeslet model for a self-propelled body.

  6. Global Existence and Asymptotic Behavior of Affine Motion of 3D Ideal Fluids Surrounded by Vacuum

    NASA Astrophysics Data System (ADS)

    Sideris, Thomas C.

    2017-03-01

    The 3D compressible and incompressible Euler equations with a physical vacuum free boundary condition and affine initial conditions reduce to a globally solvable Hamiltonian system of ordinary differential equations for the deformation gradient in {GL^+(3, R)} . The evolution of the fluid domain is described by a family of ellipsoids whose diameter grows at a rate proportional to time. Upon rescaling to a fixed diameter, the asymptotic limit of the fluid ellipsoid is determined by a positive semi-definite quadratic form of rank r = 1, 2, or 3, corresponding to the asymptotic degeneration of the ellipsoid along 3-r of its principal axes. In the compressible case, the asymptotic limit has rank r = 3, and asymptotic completeness holds, when the adiabatic index {γ} satisfies {4/3 < γ < 2} . The number of possible degeneracies, 3-r, increases with the value of the adiabatic index {γ} . In the incompressible case, affine motion reduces to geodesic flow in {SL(3, R)} with the Euclidean metric. For incompressible affine swirling flow, there is a structural instability. Generically, when the vorticity is nonzero, the domains degenerate along only one axis, but the physical vacuum boundary condition fails over a finite time interval. The rescaled fluid domains of irrotational motion can collapse along two axes.

  7. Global Existence and Asymptotic Behavior of Affine Motion of 3D Ideal Fluids Surrounded by Vacuum

    NASA Astrophysics Data System (ADS)

    Sideris, Thomas C.

    2017-07-01

    The 3D compressible and incompressible Euler equations with a physical vacuum free boundary condition and affine initial conditions reduce to a globally solvable Hamiltonian system of ordinary differential equations for the deformation gradient in {GL^+(3, R)}. The evolution of the fluid domain is described by a family of ellipsoids whose diameter grows at a rate proportional to time. Upon rescaling to a fixed diameter, the asymptotic limit of the fluid ellipsoid is determined by a positive semi-definite quadratic form of rank r = 1, 2, or 3, corresponding to the asymptotic degeneration of the ellipsoid along 3- r of its principal axes. In the compressible case, the asymptotic limit has rank r = 3, and asymptotic completeness holds, when the adiabatic index {γ} satisfies {4/3 < γ < 2}. The number of possible degeneracies, 3- r, increases with the value of the adiabatic index {γ}. In the incompressible case, affine motion reduces to geodesic flow in {SL(3, R)} with the Euclidean metric. For incompressible affine swirling flow, there is a structural instability. Generically, when the vorticity is nonzero, the domains degenerate along only one axis, but the physical vacuum boundary condition fails over a finite time interval. The rescaled fluid domains of irrotational motion can collapse along two axes.

  8. Nonrigid Registration of 2-D and 3-D Dynamic Cell Nuclei Images for Improved Classification of Subcellular Particle Motion

    PubMed Central

    Kim, Il-Han; Chen, Yi-Chun M.; Spector, David L.; Eils, Roland; Rohr, Karl

    2012-01-01

    The observed motion of subcellular particles in fluorescence microscopy image sequences of live cells is generally a superposition of the motion and deformation of the cell and the motion of the particles. Decoupling the two types of movements to enable accurate classification of the particle motion requires the application of registration algorithms. We have developed an intensity-based approach for nonrigid registration of multi-channel microscopy image sequences of cell nuclei. First, based on 3-D synthetic images we demonstrate that cell nucleus deformations change the observed motion types of particles and that our approach allows the original motion to be recovered. Second, we have successfully applied our approach to register 2-D and 3-D real microscopy image sequences. A quantitative experimental comparison with previous approaches for nonrigid registration of cell microscopy images has also been performed. PMID:20840894

  9. Correlation between a 2D simple image analysis method and 3D bony motion during the pivot shift test.

    PubMed

    Arilla, Fabio V; Rahnemai-Azar, Amir Ata; Yacuzzi, Carlos; Guenther, Daniel; Engel, Benjamin S; Fu, Freddie H; Musahl, Volker; Debski, Richard E

    2016-12-01

    The pivot shift test is the most specific clinical test to detect anterior cruciate ligament injury. The purpose of this study was to determine the correlation between the 2D simple image analysis method and the 3D bony motion of the knee during the pivot shift test and assess the intra- and inter-examiner agreements. Three orthopedic surgeons performed three trials of the standardized pivot shift test in seven knees. Two devices were used to measure motion of the lateral knee compartment simultaneously: 1) 2D simple image analysis method: translation was determined using a tablet computer with custom motion tracking software that quantified movement of three markers attached to skin over bony landmarks; 2) 3D bony motion: electromagnetic tracking system was used to measure movement of the same bony landmarks. The 2D simple image analysis method demonstrated a good correlation with the 3D bony motion (Pearson correlation: 0.75, 0.76 and 0.79). The 3D bony translation increased by 2.7 to 3.5 times for every unit increase measured by the 2D simple image analysis method. The mean intra-class correlation coefficients for the three examiners were 0.6 and 0.75, respectively for 3D bony motion and 2D image analyses, while the inter-examiner agreement was 0.65 and 0.73, respectively. The 2D simple image analysis method results are related to 3D bony motion of the lateral knee compartment, even with skin artifact present. This technique is a non-invasive and repeatable tool to quantify the motion of the lateral knee compartment during the pivot shift test. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy using an US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate an US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm) but simple regularisation methods could be used to improve RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating with displacement thresholds of 2 mm and 5 mm exhibiting an RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach which also accounts for potential prostate rotations could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.
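
    A bare-bones version of normalised cross-correlation block matching for speckle tracking is sketched below for a single 3D block and a small integer search window; practical implementations add sub-voxel interpolation, regularisation across blocks and quality gating, and the volume here is synthetic.

        import numpy as np

        def ncc(a, b):
            a = a - a.mean()
            b = b - b.mean()
            return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def track_block(ref, cur, corner, size=8, search=3):
            """Find the integer displacement of one block between volumes `ref` and `cur`."""
            z, y, x = corner
            block = ref[z:z+size, y:y+size, x:x+size]
            best, best_d = -2.0, (0, 0, 0)
            for dz in range(-search, search + 1):
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        cand = cur[z+dz:z+dz+size, y+dy:y+dy+size, x+dx:x+dx+size]
                        if cand.shape != block.shape:
                            continue
                        score = ncc(block, cand)
                        if score > best:
                            best, best_d = score, (dz, dy, dx)
            return best_d

        ref = np.random.default_rng(3).random((32, 32, 32))
        cur = np.roll(ref, shift=(1, -2, 2), axis=(0, 1, 2))    # known rigid shift of the speckle
        print(track_block(ref, cur, corner=(12, 12, 12)))       # expect (1, -2, 2)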

  11. Are there side effects to watching 3D movies? A prospective crossover observational study on visually induced motion sickness.

    PubMed

    Solimini, Angelo G

    2013-01-01

    The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered on a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score > 15) were 54.8% of the total sample after the 3D movie compared to 14.1% of the total sample after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times the baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie and of a history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie and show time. Seeing 3D movies can increase ratings of nausea, oculomotor and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies which include examination of clinical signs in viewers are needed to reach conclusive evidence on the effects of 3D vision on spectators.

  12. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered on a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score > 15) were 54.8% of the total sample after the 3D movie compared to 14.1% of the total sample after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times the baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie and of a history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie and show time. Conclusions Seeing 3D movies can increase ratings of nausea, oculomotor and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies which include examination of clinical signs in viewers are needed to reach conclusive evidence on the effects of 3D vision on spectators. PMID:23418530

  13. 3D Modelling of Inaccessible Areas using UAV-based Aerial Photography and Structure from Motion

    NASA Astrophysics Data System (ADS)

    Obanawa, Hiroyuki; Hayakawa, Yuichi; Gomez, Christopher

    2014-05-01

    In hard-to-access areas, the collection of 3D point-clouds using TLS (Terrestrial Laser Scanner) can be very challenging, while the airborne equivalent would not give a correct account of subvertical features and concave geometries like caves. To solve this problem, the authors have experimented with an aerial-photography-based SfM (Structure from Motion) technique on a 'peninsular rock' surrounded on three sides by the sea on the Pacific coast of eastern Japan. The research was carried out using a UAS (Unmanned Aerial System) consisting of a commercial small UAV (Unmanned Aerial Vehicle) carrying a compact camera. The UAV is a DJI PHANTOM: the UAV has four rotors (quadcopter), a weight of 1000 g, a payload of 400 g and a maximum flight time of 15 minutes. The camera is a GoPro 'HERO3 Black Edition': resolution 12 million pixels; weight 74 g; and 0.5 sec. interval-shot. The 3D model has been constructed by digital photogrammetry using a commercial SfM software package, Agisoft PhotoScan Professional®, which can generate sparse and dense point-clouds, from which polygonal models and orthophotographs can be calculated. Using the 'flight-log' and/or GCPs (Ground Control Points), the software can generate a digital surface model. As a result, high-resolution aerial orthophotographs and a 3D model were obtained. The results have shown that it was possible to survey the sea cliff and the wave-cut bench, which are unobservable from the land side. In detail, we could observe the complexity of the sea cliff, which is nearly vertical as a whole while slightly overhanging its thinner base. The wave-cut bench is nearly flat and develops extensively at the base of the cliff. Although there is some evidence of small rockfalls at the upper part of the cliff, there is no evidence of very recent activity, because no fallen rock exists on the wave-cut bench. This system has several merits: firstly, lower cost than existing measuring methods such as manned-flight survey and aerial laser

  14. Direct Observation of Current-Induced Motion of a 3D Vortex Domain Wall in Cylindrical Nanowires.

    PubMed

    Ivanov, Yurii P; Chuvilin, Andrey; Lopatin, Sergei; Mohammed, Hanan; Kosel, Jurgen

    2017-05-24

    The current-induced dynamics of 3D magnetic vortex domain walls in cylindrical Co/Ni nanowires are revealed experimentally using Lorentz microscopy and theoretically using micromagnetic simulations. We demonstrate that a spin-polarized electric current can control the reversible motion of 3D vortex domain walls, which travel with a velocity of a few hundred meters per second. This finding is a key step in establishing fast, high-density memory devices based on vertical arrays of cylindrical magnetic nanowires.

  15. Infrared tomographic PIV and 3D motion tracking system applied to aquatic predator-prey interaction

    NASA Astrophysics Data System (ADS)

    Adhikari, Deepak; Longmire, Ellen K.

    2013-02-01

    Infrared tomographic PIV and 3D motion tracking are combined to measure evolving volumetric velocity fields and organism trajectories during aquatic predator-prey interactions. The technique was used to study zebrafish foraging on both non-evasive and evasive prey species. Measurement volumes of 22.5 mm × 10.5 mm × 12 mm were reconstructed from images captured on a set of four high-speed cameras. To obtain accurate fluid velocity vectors within each volume, fish were first masked out using an automated visual hull method. Fish and prey locations were identified independently from the same image sets and tracked separately within the measurement volume. Experiments demonstrated that fish were not influenced by the infrared laser illumination or the tracer particles. Results showed that the zebrafish used different strategies, suction and ram feeding, for successful capture of non-evasive and evasive prey, respectively. The two strategies yielded different variations in fluid velocity between the fish mouth and the prey. In general, the results suggest that the local flow field, the direction of prey locomotion with respect to the predator and the relative accelerations and speeds of the predator and prey may all be significant in determining predation success.

  16. Efficacy of computer-assisted, 3D motion-capture toothbrushing instruction.

    PubMed

    Kim, Kee-Deog; Jeong, Jin-Sun; Lee, Hae Na; Gu, Yu; Kim, Kyeong-Seop; Lee, Jeong-Whan; Park, Wonse

    2015-07-01

    The objective of this study was to compare the efficacy of computer-assisted TBI using a smart toothbrush (ST) and smart mirror (SM) in plaque control to that of conventional TBI. We evaluated the plaque removal efficacy of a ST comprising a computer-assisted, wirelessly linked, three-dimensional (3D) motion-capture, data-logging, and SM system in TBI. We also evaluated the efficacy of TBI with a ST and SM system by analyzing the reductions of the modified Quigley-Hein plaque index in 60 volunteers. These volunteers were separated randomly into two groups: conventional TBI (control group) and computer-assisted TBI (experimental group). The changes in the plaque indexes were recorded immediately, 1 week, 1 month, and 10 months after TBI. The patterns of decreases in the modified Quigley-Hein plaque indexes were similar in the two groups. Reductions of the plaque indexes of both groups in each time period were observed (P < 0.0001), and the effects of TBI did not differ between the two groups (P = 0.3803). All volunteers were sufficiently motivated in using this new system. The reported new, computer-assisted TBI system might be an alternative option in controlling dental plaque and maintaining oral hygiene. Individuals can be motivated by the new system; meanwhile, comparable effects of controlling dental plaque can be achieved.

  17. Combined aerial and terrestrial images for complete 3D documentation of Singosari Temple based on Structure from Motion algorithm

    NASA Astrophysics Data System (ADS)

    Hidayat, Husnul; Cahyono, A. B.

    2016-11-01

    Singosari temple is one of the cultural heritage buildings in East Java, Indonesia; it was built in the 1300s and restored in 1934-1937. Because of its history and importance, complete documentation of this temple is required. Nowadays, with the advent of low-cost UAVs, combining aerial photography with terrestrial photogrammetry gives more complete data for 3D documentation. This research aims to make a complete 3D model of this landmark from aerial and terrestrial photographs with the Structure from Motion algorithm. To establish correct scale, position, and orientation, the final 3D model was georeferenced with Ground Control Points in the UTM 49S coordinate system. The result shows that all facades, floor, and upper structures can be modeled completely in 3D. In terms of 3D coordinate accuracy, the Root Mean Square Errors (RMSEs) are RMSEx = 0.041 m, RMSEy = 0.031 m, and RMSEz = 0.049 m, which represent a 0.071 m displacement in 3D space. In addition, the mean difference of length measurements of the object is 0.057 m. With this accuracy, this method can be used to map the site up to 1:237 scale. Although the accuracy level is still in centimeters, the combined aerial and terrestrial photographs with the Structure from Motion algorithm can provide a complete and visually interesting 3D model.
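
    Assuming the 3D figure quoted above is the quadrature combination of the per-axis errors, the reported numbers are internally consistent: √(0.041² + 0.031² + 0.049²) ≈ 0.071 m.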

  18. Comparison of 2D and 3D modeled tumor motion estimation/prediction for dynamic tumor tracking during arc radiotherapy.

    PubMed

    Liu, Wu; Ma, Xiangyu; Yan, Huagang; Chen, Zhe; Nath, Ravinder; Li, Haiyun

    2017-03-06

    Many real-time imaging techniques have been developed to localize the target in 3D space or in 2D beam's eye view (BEV) plane for intrafraction motion tracking in radiation therapy. With tracking system latency, 3D-modeled method is expected to be more accurate even in terms of 2D BEV tracking error. No quantitative analysis, however, has been reported. In this study, we simulated co-planar arc deliveries using respiratory motion data acquired from 42 patients to quantitatively compare the accuracy between 2D BEV and 3D-modeled tracking in arc therapy and determine whether 3D information is needed for motion tracking. We used our previously developed low kV dose adaptive MV-kV imaging and motion compensation framework as a representative of 3D-modeled methods. It optimizes the balance between additional kV imaging dose and 3D tracking accuracy and solves the MLC blockage issue. With simulated Gaussian marker detection errors (zero mean and 0.39 mm standard deviation) and ~155/310/460 ms tracking system latencies, the mean percentage of time that the target moved >2 mm from the predicted 2D BEV position are 1.1%/4.0%/7.8% and 1.3%/5.8%/11.6% for 3D-modeled and 2D-only tracking, respectively. The corresponding average BEV RMS errors are 0.67/0.90/1.13 mm and 0.79/1.10/1.37 mm. Compared to the 2D method, the 3D method reduced the average RMS unresolved motion along the beam direction from ~3 mm to ~1 mm, resulting on average only <1% dosimetric advantage in the depth direction. Only for a small fraction of the patients, when tracking latency is long, the 3D-modeled method showed significant improvement of BEV tracking accuracy, indicating potential dosimetric advantage. However, if the tracking latency is short (~150 ms or less), those improvements are limited. Therefore, 2D BEV tracking has sufficient targeting accuracy for most clinical cases. The 3D technique is, however, still important in solving the MLC blockage problem during 2D BEV tracking.

  19. Comparison of 2D and 3D modeled tumor motion estimation/prediction for dynamic tumor tracking during arc radiotherapy

    NASA Astrophysics Data System (ADS)

    Liu, Wu; Ma, Xiangyu; Yan, Huagang; Chen, Zhe; Nath, Ravinder; Li, Haiyun

    2017-05-01

    Many real-time imaging techniques have been developed to localize a target in 3D space or in a 2D beam’s eye view (BEV) plane for intrafraction motion tracking in radiation therapy. With tracking system latency, the 3D-modeled method is expected to be more accurate even in terms of 2D BEV tracking error. No quantitative analysis, however, has been reported. In this study, we simulated co-planar arc deliveries using respiratory motion data acquired from 42 patients to quantitatively compare the accuracy between 2D BEV and 3D-modeled tracking in arc therapy and to determine whether 3D information is needed for motion tracking. We used our previously developed low kV dose adaptive MV-kV imaging and motion compensation framework as a representative of 3D-modeled methods. It optimizes the balance between additional kV imaging dose and 3D tracking accuracy and solves the MLC blockage issue. With simulated Gaussian marker detection errors (zero mean and 0.39 mm standard deviation) and ~155/310/460 ms tracking system latencies, the mean percentage of time that the target moved  >2 mm from the predicted 2D BEV position are 1.1%/4.0%/7.8% and 1.3%/5.8%/11.6% for the 3D-modeled and 2D-only tracking, respectively. The corresponding average BEV RMS errors are 0.67/0.90/1.13 mm and 0.79/1.10/1.37 mm. Compared to the 2D method, the 3D method reduced the average RMS unresolved motion along the beam direction from ~3 mm to ~1 mm, resulting in on average only  <1% dosimetric advantage in the depth direction. Only for a small fraction of the patients, when tracking latency is long, the 3D-modeled method showed significant improvement of BEV tracking accuracy, indicating potential dosimetric advantage. However, if the tracking latency is short (~150 ms or less), those improvements are limited. Therefore, 2D BEV tracking has sufficient targeting accuracy for most clinical cases. The 3D technique is, however, still important in solving the MLC blockage problem

  20. Intersection Based Motion Correction of Multi-Slice MRI for 3D in utero Fetal Brain Image Formation

    PubMed Central

    Kim, Kio; Habas, Piotr A.; Rousseau, Francois; Glenn, Orit A.; Barkovich, Anthony J.; Studholme, Colin

    2012-01-01

    In recent years post-processing of fast multi-slice MR imaging to correct fetal motion has provided the first true 3D MR images of the developing human brain in utero. Early approaches have used reconstruction based algorithms, employing a two step iterative process, where slices from the acquired data are re-aligned to an approximate 3D reconstruction of the fetal brain, which is then refined further using the improved slice alignment. This two step slice-to-volume process, although powerful, is computationally expensive in needing a 3D reconstruction, and is limited in its ability to recover sub-voxel alignment. Here, we describe an alternative approach which we term slice intersection motion correction (SIMC), that seeks to directly co-align multiple slice stacks by considering the matching structure along all intersecting slice pairs in all orthogonally planned slices that are acquired in clinical imaging studies. A collective update scheme for all slices is then derived, to simultaneously drive slices into a consistent match along their lines of intersection. We then describe a 3D reconstruction algorithm that, using the final motion corrected slice locations, suppresses through-plane partial volume effects to provide a single high isotropic resolution 3D image. The method is tested on simulated data with known motions and is applied to retrospectively reconstruct 3D images from a range of clinically acquired imaging studies. The quantitative evaluation of the registration accuracy for the simulated data sets demonstrated a significant improvement over previous approaches. An initial application of the technique to studying clinical pathology is included, where the proposed method recovered up to 15 mm of translation and 30 degrees of rotation for individual slices, and produced full 3D reconstructions containing clinically useful additional information not visible in the original 2D slices. PMID:19744911

  1. Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.
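
    Target registration error of the kind reported above is simply the residual distance at homologous fiducials after applying the estimated transform. A minimal version, assuming a rigid transform given as a rotation R and translation t, is sketched below with made-up marker coordinates.

        import numpy as np

        def tre(fixed_fiducials, moving_fiducials, R, t):
            """Mean distance (same units as the inputs) between fixed fiducials and the
            registered positions of their counterparts in the moving image."""
            registered = moving_fiducials @ R.T + t
            return float(np.linalg.norm(registered - fixed_fiducials, axis=1).mean())

        # Toy example: three markers, a 2 mm shift, and a perfectly recovered transform.
        fixed = np.array([[10.0, 20.0, 30.0], [15.0, 22.0, 28.0], [12.0, 18.0, 33.0]])
        moving = fixed + np.array([2.0, 0.0, 0.0])
        print(tre(fixed, moving, R=np.eye(3), t=np.array([-2.0, 0.0, 0.0])))   # 0.0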

  2. Phantom investigation of 3D motion-dependent volume aliasing during CT simulation for radiation therapy planning

    PubMed Central

    Tanyi, James A; Fuss, Martin; Varchena, Vladimir; Lancaster, Jack L; Salter, Bill J

    2007-01-01

    Purpose To quantify volumetric and positional aliasing during non-gated fast- and slow-scan acquisition CT in the presence of 3D target motion. Methods Single-slice fast, single-slice slow, and multi-slice fast scan helical CTs were acquired of dynamic spherical targets (1 and 3.15 cm in diameter), embedded in an anthropomorphic phantom. 3D target motions typical of clinically observed tumor motion parameters were investigated. Motion excursions included ± 5, ± 10, and ± 15 mm displacements in the S-I direction synchronized with constant displacements of ± 5 and ± 2 mm in the A-P and lateral directions, respectively. For each target, scan technique, and motion excursion, eight different initial motion-to-scan phase relationships were investigated. Results An anticipated general trend of target volume overestimation was observed. The mean percentage overestimation of the true physical target volume typically increased with target motion amplitude and decreasing target diameter. Slow-scan percentage overestimations were larger, and better approximated the time-averaged motion envelope, as opposed to fast-scans. Motion induced centroid misrepresentation was greater in the S-I direction for fast-scan techniques, and transaxial direction for the slow-scan technique. Overestimation is fairly uniform for slice widths < 5 mm, beyond which there is gross overestimation. Conclusion Non-gated CT imaging of targets describing clinically relevant, 3D motion results in aliased overestimation of the target volume and misrepresentation of centroid location, with little or no correlation between the physical target geometry and the CT-generated target geometry. Slow-scan techniques are a practical method for characterizing time-averaged target position. Fast-scan techniques provide a more reliable, albeit still distorted, target margin. PMID:17319965

  3. Model-based lasso catheter tracking in monoplane fluoroscopy for 3D breathing motion compensation during EP procedures

    NASA Astrophysics Data System (ADS)

    Liao, Rui

    2010-02-01

    Radio-frequency catheter ablation (RFCA) of the pulmonary veins (PVs) attached to the left atrium (LA) is usually carried out under fluoroscopy guidance. Overlay of detailed anatomical structures via 3-D CT and/or MR volumes onto the fluoroscopy helps visualization and navigation in electrophysiology (EP) procedures. Unfortunately, respiratory motion may impair the utility of static overlay of the volume with fluoroscopy for catheter navigation. In this paper, we propose a B-spline based method for tracking the circumferential catheter (lasso catheter) in monoplane fluoroscopy. The tracked motion can be used for the estimation of the 3-D trajectory of breathing motion and for subsequent motion compensation. A lasso catheter is typically used during EP procedures and is pushed against the ostia of the PVs to be ablated. Hence this method does not require additional instruments, and achieves motion estimation right at the site of ablation. The performance of the proposed tracking algorithm was evaluated on 340 monoplane frames with an average error of 0.68 +/- 0.36 mm. Our contributions in this work are twofold. First and foremost, we show how to design an effective, practical, and workflow-friendly 3-D motion compensation scheme for EP procedures in a monoplane setup. In addition, we develop an efficient and accurate method for model-based tracking of the circumferential lasso catheter in low-dose EP fluoroscopy.
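
    A B-spline representation of a roughly circular, lasso-like curve of the kind used in the tracking model above can be sketched with scipy: fit a periodic spline to a few noisy points on a loop and evaluate it densely. The optimisation of the spline against the fluoroscopic image is not reproduced, and the geometry below is synthetic.

        import numpy as np
        from scipy.interpolate import splprep, splev

        rng = np.random.default_rng(4)
        angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
        x = 20 * np.cos(angles) + rng.normal(0, 0.3, angles.size)   # noisy points on a ~20 mm loop
        y = 20 * np.sin(angles) + rng.normal(0, 0.3, angles.size)

        tck, _ = splprep([x, y], s=1.0, per=True)        # periodic cubic B-spline through the loop
        u = np.linspace(0, 1, 200)
        xs, ys = splev(u, tck)                           # dense samples along the catheter model
        print(xs.shape, round(float(np.hypot(xs, ys).mean()), 1))   # mean radius close to 20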

  4. Real-time 3D ultrasound fetal image enhancement techniques using motion-compensated frame rate up-conversion

    NASA Astrophysics Data System (ADS)

    Lee, Gun-Ill; Park, Rae-Hong; Song, Young-Seuk; Kim, Cheol-An; Hwang, Jae-Sub

    2003-05-01

    In this paper, we present a motion-compensated frame rate up-conversion method for real-time three-dimensional (3-D) ultrasound fetal image enhancement. The conventional mechanical scan method with one-dimensional (1-D) array converters used for 3-D volume data acquisition has a slow frame rate of multi-planar images. This drawback is not an issue for stationary objects; however, in ultrasound images showing a fetus of more than about 25 weeks, we perceive abrupt changes due to fast motions. To compensate for this defect, we propose a frame rate up-conversion method by which new interpolated frames are inserted between two input frames, giving smooth renditions to human eyes. More natural motions can be obtained by frame rate up-conversion. In the proposed algorithm, we employ forward motion estimation (ME), in which motion vectors (MVs) are estimated using a block matching algorithm (BMA). To smooth MVs over neighboring blocks, vector median filtering is performed. Using these smoothed MVs, interpolated frames are reconstructed by motion compensation (MC). The undesirable blocking artifacts due to blockwise processing are reduced by block boundary filtering using a Gaussian low pass filter (LPF). The proposed method can be used in computer aided diagnosis (CAD), where more natural 3-D ultrasound images are displayed in real time. Simulation results with several real test sequences show the effectiveness of the proposed algorithm.
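
    The interpolation step described above can be illustrated with a toy block-matching motion estimator and a half-way motion-compensated frame: for each block in frame A, the best-matching block in frame B is found by a sum-of-absolute-differences search, and the average of the two blocks is placed halfway along the motion vector. Vector median filtering and the Gaussian deblocking filter are omitted, and the frames are random test data.

        import numpy as np

        def interpolate_frame(a, b, block=8, search=4):
            """Toy motion-compensated middle frame between two grayscale frames a and b."""
            h, w = a.shape
            mid = np.zeros_like(a, dtype=float)
            for by in range(0, h - block + 1, block):
                for bx in range(0, w - block + 1, block):
                    blk = a[by:by+block, bx:bx+block].astype(float)
                    best, mv = np.inf, (0, 0)
                    for dy in range(-search, search + 1):             # SAD block matching
                        for dx in range(-search, search + 1):
                            y0, x0 = by + dy, bx + dx
                            if 0 <= y0 <= h - block and 0 <= x0 <= w - block:
                                sad = np.abs(blk - b[y0:y0+block, x0:x0+block]).sum()
                                if sad < best:
                                    best, mv = sad, (dy, dx)
                    match = b[by+mv[0]:by+mv[0]+block, bx+mv[1]:bx+mv[1]+block].astype(float)
                    hy, hx = by + mv[0] // 2, bx + mv[1] // 2          # halfway along the MV
                    if 0 <= hy <= h - block and 0 <= hx <= w - block:
                        mid[hy:hy+block, hx:hx+block] = 0.5 * (blk + match)
            return mid

        rng = np.random.default_rng(5)
        frame_a = rng.random((64, 64))
        frame_b = np.roll(frame_a, shift=4, axis=1)          # uniform 4-pixel horizontal motion
        print(interpolate_frame(frame_a, frame_b).shape)     # (64, 64); uncovered pixels stay zero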

  5. The Source and Lateral Motion of Mantle Plumes From 3D Mantle Dynamic Models

    NASA Astrophysics Data System (ADS)

    Li, M.; Zhong, S.

    2016-12-01

    Intraplate volcanism such as hotspots may be caused by anomalously hot upwelling mantle plumes. The reconstruction of past plate motion often relies on the location of hotspots, which itself is determined by where mantle plumes form and how mantle plumes move laterally as they rise to the surface. Previous studies suggest that the large igneous provinces (LIPs), which may be caused by plume heads, preferentially locate near the edges of the seismically observed large low shear velocity provinces (LLSVPs) in the lowermost mantle beneath Africa and the Pacific. However, the mechanism that leads to this interesting distribution of LIPs relative to the LLSVPs is unclear. Important questions that remain to be answered include: (1) what controls the source location of mantle plumes, or more specifically, under what conditions mantle plumes form outside, near the edges of, or in the middle of LLSVPs, and (2) how much do mantle plumes move laterally as they rise to the surface? In this study, we perform 3D geodynamical calculations to study plume source locations and the lateral movement of plumes. We employ plate motion history from 458 Ma as the surface velocity boundary condition in our models, and the LLSVPs are simulated by large-scale thermochemical piles. We explore how parameters (i.e., viscosity, thermal expansivity, thermal diffusivity) in mantle convection models control the source of plumes. We also quantify the lateral movement of mantle plumes, and compare our results with those predicted by previous studies. We find that an increase of thermal diffusivity and a decrease of thermal expansivity with depth, which are more consistent with mineral physics studies, suppress plumes in cold regions at the core-mantle boundary and promote plume formation in regions with thermochemical piles. While some stable plumes originate on top and in the middle of piles, more plumes are triggered near the edges of the piles and they show transient features, and are generally hotter than that

  6. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion models derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between 4DCBCT phases, followed by principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization errors and intensity differences in the ten patients were smaller using the 4DCT-based motion models, possibly due to superior image quality. In case 2), the tumor localization error and intensity difference were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
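
    The core of such a motion model, PCA on displacement vector fields (DVFs), can be sketched as follows. This is an illustrative Python sketch under assumed array shapes, not the authors' code; the iterative optimization of PCA coefficients against cone-beam projections is omitted.

        import numpy as np

        def build_pca_motion_model(dvfs, n_components=3):
            """dvfs: (n_phases, n_voxels*3) flattened displacement vector fields."""
            mean_dvf = dvfs.mean(axis=0)
            # SVD of the centred DVFs yields the principal motion modes.
            _, _, vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
            return mean_dvf, vt[:n_components]

        def reconstruct_dvf(mean_dvf, modes, coeffs):
            """Recombine a DVF from a few PCA coefficients (one per motion mode)."""
            return mean_dvf + np.asarray(coeffs) @ modes

        # Example with synthetic data: 10 respiratory phases, a tiny 5x5x5 volume.
        dvfs = np.random.randn(10, 5 * 5 * 5 * 3)
        mean_dvf, modes = build_pca_motion_model(dvfs, n_components=2)
        dvf_t = reconstruct_dvf(mean_dvf, modes, coeffs=[0.8, -0.2])  # one time point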

  7. Flying triangulation--an optical 3D sensor for the motion-robust acquisition of complex objects.

    PubMed

    Ettl, Svenja; Arold, Oliver; Yang, Zheng; Häusler, Gerd

    2012-01-10

    Three-dimensional (3D) shape acquisition is difficult if an all-around measurement of an object is desired or if a relative motion between object and sensor is unavoidable. An optical sensor principle, which we call "flying triangulation," is presented that enables a motion-robust acquisition of 3D surface topography. It combines a simple handheld sensor with sophisticated registration algorithms. An easy acquisition of complex objects is possible simply by freely hand-guiding the sensor around the object. Real-time feedback of the sequential measurement results enables comfortable handling for the user. No tracking is necessary. In contrast to most other eligible sensors, the presented sensor generates 3D data from each single camera image.

  8. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. Several existing tools, such as VisualSFM and the open source project OpenSfM, assist in this process; however, it is well known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAVs) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.

  9. A 3D MR-acquisition scheme for nonrigid bulk motion correction in simultaneous PET-MR.

    PubMed

    Kolbitsch, Christoph; Prieto, Claudia; Tsoumpas, Charalampos; Schaeffter, Tobias

    2014-08-01

    Positron emission tomography (PET) is a highly sensitive medical imaging technique commonly used to detect and assess tumor lesions. Magnetic resonance imaging (MRI) provides high resolution anatomical images with different contrasts and a range of additional information important for cancer diagnosis. Recently, simultaneous PET-MR systems have been released with the promise to provide complementary information from both modalities in a single examination. Due to long scan times, subject nonrigid bulk motion, i.e., changes of the patient's position on the scanner table leading to nonrigid changes of the patient's anatomy, during data acquisition can negatively impair image quality and tracer uptake quantification. A 3D MR-acquisition scheme is proposed to detect and correct for nonrigid bulk motion in simultaneously acquired PET-MR data. A respiratory navigated three dimensional (3D) MR-acquisition with Radial Phase Encoding (RPE) is used to obtain T1- and T2-weighted data with an isotropic resolution of 1.5 mm. Healthy volunteers are asked to move the abdomen two to three times during data acquisition resulting in overall 19 movements at arbitrary time points. The acquisition scheme is used to retrospectively reconstruct dynamic 3D MR images with different temporal resolutions. Nonrigid bulk motion is detected and corrected in this image data. A simultaneous PET acquisition is simulated and the effect of motion correction is assessed on image quality and standardized uptake values (SUV) for lesions with different diameters. Six respiratory gated 3D data sets with T1- and T2-weighted contrast have been obtained in healthy volunteers. All bulk motion shifts have successfully been detected and motion fields describing the transformation between the different motion states could be obtained with an accuracy of 1.71 ± 0.29 mm. The PET simulation showed errors of up to 67% in measured SUV due to bulk motion which could be reduced to less than 10% with the proposed

  10. A 3D MR-acquisition scheme for nonrigid bulk motion correction in simultaneous PET-MR

    SciTech Connect

    Kolbitsch, Christoph Prieto, Claudia; Schaeffter, Tobias; Tsoumpas, Charalampos

    2014-08-15

    Purpose: Positron emission tomography (PET) is a highly sensitive medical imaging technique commonly used to detect and assess tumor lesions. Magnetic resonance imaging (MRI) provides high resolution anatomical images with different contrasts and a range of additional information important for cancer diagnosis. Recently, simultaneous PET-MR systems have been released with the promise to provide complementary information from both modalities in a single examination. Due to long scan times, subject nonrigid bulk motion, i.e., changes of the patient's position on the scanner table leading to nonrigid changes of the patient's anatomy, during data acquisition can negatively impair image quality and tracer uptake quantification. A 3D MR-acquisition scheme is proposed to detect and correct for nonrigid bulk motion in simultaneously acquired PET-MR data. Methods: A respiratory navigated three dimensional (3D) MR-acquisition with Radial Phase Encoding (RPE) is used to obtain T1- and T2-weighted data with an isotropic resolution of 1.5 mm. Healthy volunteers are asked to move the abdomen two to three times during data acquisition resulting in overall 19 movements at arbitrary time points. The acquisition scheme is used to retrospectively reconstruct dynamic 3D MR images with different temporal resolutions. Nonrigid bulk motion is detected and corrected in this image data. A simultaneous PET acquisition is simulated and the effect of motion correction is assessed on image quality and standardized uptake values (SUV) for lesions with different diameters. Results: Six respiratory gated 3D data sets with T1- and T2-weighted contrast have been obtained in healthy volunteers. All bulk motion shifts have successfully been detected and motion fields describing the transformation between the different motion states could be obtained with an accuracy of 1.71 ± 0.29 mm. The PET simulation showed errors of up to 67% in measured SUV due to bulk motion which could be reduced to less than

  11. Time-resolved 3D contrast-enhanced MRA of an extended FOV using continuous table motion.

    PubMed

    Madhuranthakam, Ananth J; Kruger, David G; Riederer, Stephen J; Glockner, James F; Hu, Houchun H

    2004-03-01

    A method is presented for acquiring 3D time-resolved MR images of an extended (>100 cm) longitudinal field of view (FOV), as used for peripheral MR angiographic runoff studies. Previous techniques for long-FOV peripheral MRA have generally provided a single image (i.e., with no time resolution). The technique presented here generates a time series of 3D images of the FOV that lies within the homogeneous volume of the magnet. This is achieved by differential sampling of 3D k-space during continuous motion of the patient table. Each point in the object is interrogated in five consecutive 3D image sets generated at 2.5-s intervals. The method was tested experimentally in eight human subjects, and the leading edge of the bolus was observed in real time and maintained within the imaging FOV. The data revealed differential bolus velocities along the vasculature of the legs.

  12. 3D HUMAN MOTION RETRIEVAL BASED ON HUMAN HIERARCHICAL INDEX STRUCTURE

    PubMed Central

    Guo, X.

    2013-01-01

    With the development and wide application of motion capture technology, the captured motion data sets are becoming larger and larger. For this reason, an efficient retrieval method for the motion database is very important. The retrieval method needs an appropriate indexing scheme and an effective similarity measure that can organize the existing motion data well. In this paper, we present a human motion hierarchical index structure and adopt a nonlinear method to segment motion sequences. Based on this, we extract motion patterns and then employ a fast similarity measure algorithm for motion pattern similarity computation to efficiently retrieve motion sequences. The experimental results show that the approach proposed in our paper is effective and efficient. PMID:24744481

  13. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education, and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interface devices such as mice, joysticks, and midi-sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  14. 3D motion artifact compensation in CT image with depth camera

    NASA Astrophysics Data System (ADS)

    Ko, Youngjun; Baek, Jongduk; Shim, Hyunjung

    2015-02-01

    Computed tomography (CT) is a medical imaging technology that uses computer-processed X-ray projections to acquire tomographic images, or slices, of specific organs of the body. Patient motion is a common problem in CT systems and may introduce undesirable artifacts in the reconstructed images. This paper analyzes the critical problems underlying motion artifacts and proposes a new CT system for motion artifact compensation. We employ depth cameras to capture the patient motion and account for it in the CT image reconstruction. In this way, we achieve a significant improvement in motion artifact compensation that is not possible with previous techniques.

  15. Prospective motion correction for 3D pseudo-continuous arterial spin labeling using an external optical tracking system.

    PubMed

    Aksoy, Murat; Maclaren, Julian; Bammer, Roland

    2017-06-01

    Head motion is an unsolved problem in magnetic resonance imaging (MRI) studies of the brain. Real-time tracking using a camera has recently been proposed as a way to prevent head motion artifacts. As compared to navigator-based approaches that use MRI data to detect and correct motion, optical motion correction works independently of the MRI scanner, thus providing low-latency real-time motion updates without requiring any modifications to the pulse sequence. The purpose of this study was two-fold: 1) to demonstrate that prospective optical motion correction using an optical camera mitigates artifacts from head motion in three-dimensional pseudo-continuous arterial spin labeling (3D PCASL) acquisitions and 2) to assess the effect of latency differences between real-time optical motion tracking and navigator-style approaches (such as PROMO). An optical motion correction system comprising a single camera and a marker attached to the patient's forehead was used to track motion at a rate of 60 fps. In the presence of motion, continuous tracking data from the optical system was used to update the scan plane in real time during the 3D-PCASL acquisition. Navigator-style correction was simulated by using the tracking data from the optical system and performing updates only once per repetition time. Three normal volunteers and a patient were instructed to perform continuous and discrete head motion throughout the scan. Optical motion correction yielded superior image quality compared to uncorrected images or images using navigator-style correction. The standard deviations of pixel-wise CBF differences between reference and non-corrected, navigator-style-corrected and optical-corrected data were 14.28, 14.35 and 11.09 mL/100 g/min for continuous motion, and 12.42, 12.04 and 9.60 mL/100 g/min for discrete motion. Data obtained from the patient revealed that motion can obscure pathology and that application of optical prospective correction can successfully reveal the underlying

  16. Estimation of Pulmonary Motion in Healthy Subjects and Patients with Intrathoracic Tumors Using 3D-Dynamic MRI: Initial Results

    PubMed Central

    Schoebinger, Max; Herth, Felix; Tuengerthal, Siegfried; Meinzer, Heinz-Peter; Kauczor, Hans-Ulrich

    2009-01-01

    Objective To evaluate a new technique for quantifying regional lung motion using 3D-MRI in healthy volunteers and to apply the technique in patients with intra- or extrapulmonary tumors. Materials and Methods Intraparenchymal lung motion during a whole breathing cycle was quantified in 30 healthy volunteers using 3D-dynamic MRI (FLASH [fast low angle shot] 3D, TRICKS [time-resolved interpolated contrast kinetics]). Qualitative and quantitative vector color maps and cumulative histograms were generated using an introduced semiautomatic algorithm. An analysis of lung motion was performed and correlated with an established 2D-MRI technique for verification. As a proof of concept, the technique was applied in five patients with non-small cell lung cancer (NSCLC) and five patients with malignant pleural mesothelioma (MPM). Results The correlation between intraparenchymal lung motion of the basal lung parts and the 2D-MRI technique was significant (r = 0.89, p < 0.05). Also, the vector color maps quantitatively illustrated regional lung motion in all healthy volunteers. No differences were observed between the two hemithoraces, which was verified by cumulative histograms. The patients with NSCLC showed a local lack of lung motion in the area of the tumor. In the patients with MPM, there was globally diminished motion of the tumor-bearing hemithorax, which improved significantly after chemotherapy (CHT) (assessed by the 2D and 3D techniques) (p < 0.01). Using global spirometry, an improvement could also be shown (vital capacity 2.9 ± 0.5 L versus 3.4 ± 0.6 L, FEV1 0.9 ± 0.2 versus 1.4 ± 0.2 L) after CHT, but this improvement was not significant. Conclusion 3D-dynamic MRI is able to quantify intraparenchymal lung motion. Local and global parenchymal pathologies can be precisely located, and this might be a new tool to quantify even slight changes in lung motion (e.g. in therapy monitoring, follow-up studies or even benign lung diseases). PMID:19885311

  17. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    NASA Astrophysics Data System (ADS)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

    This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.
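
    As a rough illustration of the fusion idea, the sketch below combines two pose-parameter vectors by inverse-variance weighting. The weighting scheme and the example numbers are assumptions made for illustration; they are not the exact MOCCD/ICP fusion rule used in the study.

        import numpy as np

        def fuse_poses(pose_a, var_a, pose_b, var_b):
            """Inverse-variance weighted combination of two pose parameter vectors."""
            pose_a, pose_b = np.asarray(pose_a, float), np.asarray(pose_b, float)
            w_a, w_b = 1.0 / np.asarray(var_a, float), 1.0 / np.asarray(var_b, float)
            return (w_a * pose_a + w_b * pose_b) / (w_a + w_b)

        # Example: 6-DOF pose (x, y, z, roll, pitch, yaw) from two estimators.
        fused = fuse_poses([0.10, 0.20, 1.50, 0.02, 0.00, 0.10], [1e-3] * 6,
                           [0.12, 0.19, 1.48, 0.03, 0.01, 0.10], [4e-3] * 6)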

  18. Dynamics and cortical distribution of neural responses to 2D and 3D motion in human

    PubMed Central

    McKee, Suzanne P.; Norcia, Anthony M.

    2013-01-01

    The perception of motion-in-depth is important for avoiding collisions and for the control of vergence eye movements and other motor actions. Previous psychophysical studies have suggested that sensitivity to motion-in-depth has a lower temporal processing limit than the perception of lateral motion. The present study used functional MRI-informed EEG source-imaging to study the spatiotemporal properties of the responses to lateral motion and motion-in-depth in human visual cortex. Lateral motion and motion-in-depth displays comprised stimuli whose only difference was interocular phase: monocular oscillatory motion was either in-phase in the two eyes (lateral motion) or in antiphase (motion-in-depth). Spectral analysis was used to break the steady-state visually evoked potential responses down into even and odd harmonic components within five functionally defined regions of interest: V1, V4, lateral occipital complex, V3A, and hMT+. We also characterized the responses within two anatomically defined regions: the inferior and superior parietal cortex. Even harmonic components dominated the evoked responses and were a factor of approximately two larger for lateral motion than motion-in-depth. These responses were slower for motion-in-depth and were largely independent of absolute disparity. In each of our regions of interest, responses at odd-harmonics were relatively small, but were larger for motion-in-depth than lateral motion, especially in parietal cortex, and depended on absolute disparity. Taken together, our results suggest a plausible neural basis for reduced psychophysical sensitivity to rapid motion-in-depth. PMID:24198326

  19. Piecewise-diffeomorphic image registration: application to the motion estimation between 3D CT lung images with sliding conditions.

    PubMed

    Risser, Laurent; Vialard, François-Xavier; Baluwala, Habib Y; Schnabel, Julia A

    2013-02-01

    In this paper, we propose a new strategy for modelling sliding conditions when registering 3D images in a piecewise-diffeomorphic framework. More specifically, our main contribution is the development of a mathematical formalism to perform Large Deformation Diffeomorphic Metric Mapping registration with sliding conditions. We also show how to adapt this formalism to the LogDemons diffeomorphic registration framework. We finally show how to apply this strategy to estimate the respiratory motion between 3D CT pulmonary images. Quantitative tests are performed on 2D and 3D synthetic images, as well as on real 3D lung images from the MICCAI EMPIRE10 challenge. Results show that our strategy estimates accurate mappings of entire 3D thoracic image volumes that exhibit a sliding motion, as opposed to conventional registration methods which are not capable of capturing discontinuous deformations at the thoracic cage boundary. They also show that although the deformations are not smooth across the location of sliding conditions, they are almost always invertible in the whole image domain. This would be helpful for radiotherapy planning and delivery. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  20. Perception of 3D spatial relations for 3D displays

    NASA Astrophysics Data System (ADS)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.

  1. GPS Measurements of Crustal Motion Indicate 3D GIA Models are Needed to Understand Antarctic Ice Mass Change

    NASA Astrophysics Data System (ADS)

    Konfal, S. A.; Wilson, T. J.; Bevis, M. G.; Kendrick, E. C.; Dalziel, I. W. D.; Smalley, R., Jr.; Willis, M. J.; Heeszel, D.; Wiens, D. A.

    2014-12-01

    Continuous GPS measurements of bedrock crustal motions in response to GIA in Antarctica have been acquired by the Antarctic Network (ANET) component of the Polar Earth Observing Network (POLENET). Patterns of vertical crustal displacements are commonly considered the key fingerprints of GIA, with maximum uplift marking the position of former ice load centers. However, efforts to develop more realistic 3D earth models have shown that the horizontal motion pattern is a more important signature of GIA on a laterally varying earth. Here we provide the first measurements substantiating predictions of a reversal of horizontal motions across an extreme gradient in crustal thickness and mantle viscosity crossing Antarctica. GPS results document motion toward, rather than away from the sites of major ice mass loss in West Antarctica. When compared in a common reference frame, observed crustal motions are not in agreement with predictions from models of GIA. A gradient in crustal velocities, faster toward West Antarctica, is spatially coincident with the rheological boundary mapped from seismic tomographic results. This suggests that horizontal crustal motions are strongly influenced by laterally-varying earth properties, and demonstrates that only 3D earth models can produce reliable predictions of GIA for Antarctica.

  2. The 3D Tele Motion Tracking for the Orthodontic Facial Analysis

    PubMed Central

    Nota, Alessandro; Marchetti, Enrico; Padricelli, Giuseppe; Marzo, Giuseppe

    2016-01-01

    Aim. This study aimed to evaluate the reliability of 3D-TMT, previously used only for dynamic testing, in a static cephalometric evaluation. Material and Method. A group of 40 patients (20 males and 20 females; mean age 14.2 ± 1.2 years; 12–18 years old) was included in the study. For each subject, the measurements obtained by the 3D-TMT cephalometric analysis were compared with those of a conventional frontal cephalometric analysis. Nine passive reflective markers were positioned on the facial skin to detect the patient's profile. From the acquisition of these points, the corresponding planes for a three-dimensional posterior-anterior cephalometric analysis were determined. Results. The cephalometric results obtained with 3D-TMT and with traditional posterior-anterior cephalometric analysis showed that the 3D-TMT values are slightly, but statistically significantly, higher than the values measured on radiographs; nevertheless, their correlation is very high. Conclusion. The recorded values obtained using the 3D-TMT analysis were correlated with the cephalometric analysis, with small but statistically significant differences. The Dahlberg errors were always lower than the mean difference between the 2D and 3D measurements. During the clinical monitoring of a patient, a clinician should always use the same method, to avoid comparing different millimetric magnitudes. PMID:28044130

  3. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson’s Disease

    PubMed Central

    Piro, Neltje E.; Piro, Lennart K.; Kassubek, Jan; Blechschmidt-Trapp, Ronald A.

    2016-01-01

    Remote monitoring of Parkinson's Disease (PD) patients with inertial sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of them. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. The video recordings of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated according to the UPDRS. The mean agreement between the video-based ratings and (b) the automatically classified UPDRS is 0.48, and with (c) the 3D avatar it is 0.47. The 3D avatar is therefore similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400
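
    As a rough illustration of such a classification pipeline, the sketch below trains a decision tree on per-trial features extracted from the sensor signals. scikit-learn's DecisionTreeClassifier stands in for the J48 (C4.5) classifier named in the abstract, and the feature names and data are purely synthetic assumptions.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        # Per-trial features, e.g. mean angular velocity, amplitude decrement,
        # regularity of the pronation-supination cycle (synthetic placeholders).
        X = rng.normal(size=(60, 3))
        y = rng.integers(0, 4, size=60)          # UPDRS item scores 0-3

        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        predicted_updrs = clf.predict(X[:5])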

  4. Nonrigid motion correction in 3D using autofocusing with localized linear translations.

    PubMed

    Cheng, Joseph Y; Alley, Marcus T; Cunningham, Charles H; Vasanawala, Shreyas S; Pauly, John M; Lustig, Michael

    2012-12-01

    MR scans are sensitive to motion effects due to the scan duration. To properly suppress artifacts from nonrigid body motion, complex models with elements such as translation, rotation, shear, and scaling have been incorporated into the reconstruction pipeline. However, these techniques are computationally intensive and difficult to implement for online reconstruction. On a sufficiently small spatial scale, the different types of motion can be well approximated as simple linear translations. This formulation allows for a practical autofocusing algorithm that locally minimizes a given motion metric, more specifically the proposed localized gradient-entropy metric. To reduce the vast search space for an optimal solution, possible motion paths are limited to the motion measured from multichannel navigator data. The novel navigation strategy is based on the so-called "Butterfly" navigators, which are modifications of the spin-warp sequence that provide intrinsic translational motion information with negligible overhead. With a 32-channel abdominal coil, a sufficient number of motion measurements was found to approximate possible linear motion paths for every image voxel. The correction scheme was applied to free-breathing abdominal patient studies. In these scans, a reduction in artifacts from complex, nonrigid motion was observed. Copyright © 2012 Wiley Periodicals, Inc.
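
    A gradient-entropy focus metric of the kind minimized by such an autofocusing scheme can be sketched as follows; this is an illustrative sketch only, and the exact localized formulation and navigator-constrained search used by the authors are not reproduced here.

        import numpy as np

        def gradient_entropy(patch):
            """Entropy of the normalised gradient magnitude of an image patch."""
            gy, gx = np.gradient(patch.astype(float))
            mag = np.sqrt(gx ** 2 + gy ** 2)
            p = mag / (mag.sum() + 1e-12)          # normalise to a distribution
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())   # lower entropy -> sharper patch

        # Autofocusing idea: evaluate the metric for candidate translations taken
        # from the navigator measurements and keep the translation that minimises it.
        score = gradient_entropy(np.random.rand(32, 32))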

  5. Non-rigid Motion Correction in 3D Using Autofocusing with Localized Linear Translations

    PubMed Central

    Cheng, Joseph Y.; Alley, Marcus T.; Cunningham, Charles H.; Vasanawala, Shreyas S.; Pauly, John M.; Lustig, Michael

    2012-01-01

    MR scans are sensitive to motion effects due to the scan duration. To properly suppress artifacts from non-rigid body motion, complex models with elements such as translation, rotation, shear, and scaling have been incorporated into the reconstruction pipeline. However, these techniques are computationally intensive and difficult to implement for online reconstruction. On a sufficiently small spatial scale, the different types of motion can be well-approximated as simple linear translations. This formulation allows for a practical autofocusing algorithm that locally minimizes a given motion metric – more specifically, the proposed localized gradient-entropy metric. To reduce the vast search space for an optimal solution, possible motion paths are limited to the motion measured from multi-channel navigator data. The novel navigation strategy is based on the so-called “Butterfly” navigators which are modifications to the spin-warp sequence that provide intrinsic translational motion information with negligible overhead. With a 32-channel abdominal coil, sufficient number of motion measurements were found to approximate possible linear motion paths for every image voxel. The correction scheme was applied to free-breathing abdominal patient studies. In these scans, a reduction in artifacts from complex, non-rigid motion was observed. PMID:22307933

  6. A control theory approach to the analysis and synthesis of the experimentally observed motion primitives.

    PubMed

    Nori, Francesco; Frezza, Ruggero

    2005-11-01

    Recent experiments on frogs and rats have led to the hypothesis that sensory-motor systems are organized into a finite number of linearly combinable modules; each module generates a motor command that drives the system to a predefined equilibrium. Surprisingly, in spite of the infinite variety of movements that can be realized, there seems to be only a handful of these modules. The structure can be thought of as a vocabulary of "elementary control actions". Admissible controls, which in principle belong to an infinite dimensional space, are reduced to the linear vector space spanned by these elementary controls. In the present paper we address some theoretical questions that arise naturally once a similar structure is applied to the control of nonlinear kinematic chains. First of all, we show how to choose the modules so that the system does not lose its capability of generating a "complete" set of movements. Secondly, we realize a "complete" vocabulary with a minimal number of elementary control actions. Subsequently, we show how to modify the control scheme so as to compensate for parametric changes in the system to be controlled. Remarkably, we construct a set of modules with the property of being invariant with respect to the parameters that model the growth of an individual. Robustness against uncertainties is also considered, showing how to optimally choose the modules' equilibria so as to compensate for errors affecting the system. Finally, the motion primitive paradigm is extended to locomotion and a related formalization of internal (proprioceptive) and external (exteroceptive) variables is given.
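
    The module-combination idea can be illustrated with a minimal sketch: the applied command is a weighted sum of a few elementary control actions, each of which alone drives the state toward its own equilibrium. The proportional form of each primitive and the example numbers are assumptions for illustration, not the authors' model.

        import numpy as np

        def combined_control(x, equilibria, weights, gain=1.0):
            """Weighted sum of primitives, each pulling the state x toward its own
            equilibrium (a simple proportional field per primitive)."""
            u = np.zeros_like(x, dtype=float)
            for q, w in zip(equilibria, weights):
                u += w * gain * (np.asarray(q, float) - x)
            return u

        # Example: a 2-DOF state and a vocabulary of three primitives.
        x = np.array([0.0, 0.0])
        equilibria = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
        u = combined_control(x, equilibria, weights=[0.5, 0.3, 0.2])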

  7. PRIMAS: a real-time 3D motion-analysis system

    NASA Astrophysics Data System (ADS)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.

  8. 3D GABA imaging with real-time motion correction, shim update and reacquisition of adiabatic spiral MRSI.

    PubMed

    Bogner, Wolfgang; Gagoski, Borjan; Hess, Aaron T; Bhat, Himanshu; Tisdall, M Dylan; van der Kouwe, Andre J W; Strasser, Bernhard; Marjańska, Małgorzata; Trattnig, Siegfried; Grant, Ellen; Rosen, Bruce; Andronesi, Ovidiu C

    2014-12-01

    Gamma-aminobutyric acid (GABA) and glutamate (Glu) are the major neurotransmitters in the brain. They are crucial for the functioning of the healthy brain and their alteration is a major mechanism in the pathophysiology of many neuro-psychiatric disorders. Magnetic resonance spectroscopy (MRS) is the only way to measure GABA and Glu non-invasively in vivo. GABA detection is particularly challenging and requires special MRS techniques. The most popular is MEscher-GArwood (MEGA) difference editing with single-voxel Point RESolved Spectroscopy (PRESS) localization. This technique has three major limitations: a) MEGA editing is a subtraction technique, hence is very sensitive to scanner instabilities and motion artifacts. b) PRESS is prone to localization errors at high fields (≥3T) that compromise accurate quantification. c) Single-voxel spectroscopy can (similar to a biopsy) only probe steady GABA and Glu levels in a single location at a time. To mitigate these problems, we implemented a 3D MEGA-editing MRS imaging sequence with the following three features: a) Real-time motion correction, dynamic shim updates, and selective reacquisition to eliminate subtraction artifacts due to scanner instabilities and subject motion. b) Localization by Adiabatic SElective Refocusing (LASER) to improve the localization accuracy and signal-to-noise ratio. c) K-space encoding via a weighted stack of spirals provides 3D metabolic mapping with flexible scan times. Simulations, phantom and in vivo experiments prove that our MEGA-LASER sequence enables 3D mapping of GABA+ and Glx (Glutamate + Glutamine), by providing 1.66 times larger signal for the 3.02 ppm multiplet of GABA+ compared to MEGA-PRESS, leading to clinically feasible scan times for 3D brain imaging. Hence, our sequence allows accurate and robust 3D-mapping of brain GABA+ and Glx levels to be performed at clinical 3T MR scanners for use in neuroscience and clinical applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. 3D GABA imaging with real-time motion correction, shim update and reacquisition of adiabatic spiral MRSI

    PubMed Central

    Bogner, Wolfgang; Gagoski, Borjan; Hess, Aaron T; Bhat, Himanshu; Tisdall, M. Dylan; van der Kouwe, Andre J.W.; Strasser, Bernhard; Marjańska, Małgorzata; Trattnig, Siegfried; Grant, Ellen; Rosen, Bruce; Andronesi, Ovidiu C

    2014-01-01

    Gamma-aminobutyric acid (GABA) and glutamate (Glu) are the major neurotransmitters in the brain. They are crucial for the functioning of healthy brain and their alteration is a major mechanism in the pathophysiology of many neuro-psychiatric disorders. Magnetic resonance spectroscopy (MRS) is the only way to measure GABA and Glu non-invasively in vivo. GABA detection is particularly challenging and requires special MRS techniques. The most popular is MEscher-GArwood (MEGA) difference editing with single-voxel Point RESolved Spectroscopy (PRESS) localization. This technique has three major limitations: a) MEGA editing is a subtraction technique, hence is very sensitive to scanner instabilities and motion artifacts. b) PRESS is prone to localization errors at high fields (≥3T) that compromise accurate quantification. c) Single-voxel spectroscopy can (similar to a biopsy) only probe average GABA and Glu levels in a single location at a time. To mitigate these problems, we implemented a 3D MEGA-editing MRS imaging sequence with the following three features: a) Real-time motion correction, dynamic shim updates, and selective reacquisition to eliminate subtraction artifacts due to scanner instabilities and subject motion. b) Localization by Adiabatic SElective Refocusing (LASER) to improve the localization accuracy and signal-to-noise ratio. c) K-space encoding via a weighted stack of spirals provides 3D metabolic mapping with flexible scan times. Simulations, phantom and in vivo experiments prove that our MEGA-LASER sequence enables 3D mapping of GABA+ and Glx (Glutamate + Glutamine), by providing 1.66 times larger signal for the 3.02 ppm multiplet of GABA+ compared to MEGA-PRESS, leading to clinically feasible scan times for 3D brain imaging. Hence, our sequence allows accurate and robust 3D-mapping of brain GABA+ and Glx levels to be performed at clinical 3T MR scanners for use in neuroscience and clinical applications. PMID:25255945

  10. A stroboscopic structured illumination system used in dynamic 3D visualization of high-speed motion object

    NASA Astrophysics Data System (ADS)

    Su, Xianyu; Zhang, Qican; Li, Yong; Xiang, Liqun; Cao, Yiping; Chen, Wenjing

    2005-04-01

    A stroboscopic structured illumination system, which can be used to measure the 3D shape and deformation of high-speed moving objects, is proposed and verified by experiments. The system presented in this paper can automatically detect the position of a high-speed moving object, synchronously control the flash of an LED to project a structured optical field onto the surface of the moving object, and trigger the imaging system to acquire an image of the deformed fringe pattern; it can also create a signal, set through software, to synchronously control the LED and the imaging system. We experimented on an ordinary electric fan and successfully acquired a series of instantaneous, sharp, and clear images of the rotating blades, reconstructing their 3D shapes at different revolutions.

  11. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    PubMed

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies gradually mature, ubiquitous healthcare services can prevent accidents instantly and effectively, as well as provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework integrating cloud and wireless body sensor networks, which is mainly applied to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation for processing sensing data over the computing architecture, network conditions, and performance evaluation. Through this framework, the transmission and computing times of sensing data are reduced to enhance overall performance for the services of fall event detection and 3-D motion reconstruction.

  12. An efficient content-adaptive motion-compensated 3-D DWT with enhanced spatial and temporal scalability.

    PubMed

    Mehrseresht, Nagita; Taubman, David

    2006-06-01

    We propose a novel, content adaptive method for motion-compensated three-dimensional wavelet transformation (MC 3-D DWT) of video. The proposed method overcomes problems of ghosting and nonaligned aliasing artifacts which can arise in regions of motion model failure, when the video is reconstructed at reduced temporal or spatial resolutions. Previous MC 3-D DWT structures either take the form of MC temporal DWT followed by a spatial transform ("t+2D"), or perform the spatial transform first ("2D + t"), limiting the spatial frequencies which can be jointly compensated in the temporal transform, and hence limiting the compression efficiency. When the motion model fails, the "t + 2D" structure causes nonaligned aliasing artifacts in reduced spatial resolution sequences. Essentially, the proposed transform continuously adapts itself between the "t + 2D" and "2D + t" structures, based on information available within the compressed bit stream. Ghosting artifacts may also appear in reduced frame-rate sequences due to temporal low-pass filtering along invalid motion trajectories. To avoid the ghosting artifacts, we continuously select between different low-pass temporal filters, based on the estimated accuracy of the motion model. Experimental results indicate that the proposed adaptive transform preserves high compression efficiency while substantially improving the quality of reduced spatial and temporal resolution sequences.

  13. 3D nanometer images of biological fibers by directed motion of gold nanoparticles

    PubMed Central

    Estrada, Laura C.; Gratton, Enrico

    2011-01-01

    Using near-infrared femtosecond pulses we move single gold nanoparticles (AuNPs) along biological fibers such as collagen and actin filaments. While the AuNP is sliding on the fiber, its trajectory is measured in 3D with nanometer resolution, providing a high resolution image of the fiber. Here, we systematically moved a single AuNP along nm-size collagen fibers and actin filaments inside living CHO K1 cells, mapping their 3D topography with high fidelity. PMID:21919444

  14. 3D Orthogonal Woven Triboelectric Nanogenerator for Effective Biomechanical Energy Harvesting and as Self-Powered Active Motion Sensors.

    PubMed

    Dong, Kai; Deng, Jianan; Zi, Yunlong; Wang, Yi-Cheng; Xu, Cheng; Zou, Haiyang; Ding, Wenbo; Dai, Yejing; Gu, Bohong; Sun, Baozhong; Wang, Zhong Lin

    2017-10-01

    The development of wearable and large-area energy-harvesting textiles has received intensive attention due to their promising applications in next-generation wearable functional electronics. However, the limited power outputs of conventional textiles have largely hindered their development. Here, in combination with the stainless steel/polyester fiber blended yarn, the polydimethylsiloxane-coated energy-harvesting yarn, and nonconductive binding yarn, a high-power-output textile triboelectric nanogenerator (TENG) with 3D orthogonal woven structure is developed for effective biomechanical energy harvesting and active motion signal tracking. Based on the advanced 3D structural design, the maximum peak power density of 3D textile can reach 263.36 mW m(-2) under the tapping frequency of 3 Hz, which is several times more than that of conventional 2D textile TENGs. Besides, its collected power is capable of lighting up a warning indicator, sustainably charging a commercial capacitor, and powering a smart watch. The 3D textile TENG can also be used as a self-powered active motion sensor to constantly monitor the movement signals of human body. Furthermore, a smart dancing blanket is designed to simultaneously convert biomechanical energy and perceive body movement. This work provides a new direction for multifunctional self-powered textiles with potential applications in wearable electronics, home security, and personalized healthcare. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    DTIC Science & Technology

    2015-06-01

    [Front-matter residue; recoverable figure captions mention the telemetry computer and software with SSH terminals, and the VICON cameras above the granite flat floor of the FSS.] ...point-wise kinematic models. The pose of the 3D structure is then estimated using a dual quaternion method [19]. The robustness and validity of this

  16. Free-breathing 3D whole-heart black-blood imaging with motion sensitized driven equilibrium.

    PubMed

    Srinivasan, Subashini; Hu, Peng; Kissinger, Kraig V; Goddu, Beth; Goepfert, Lois; Schmidt, Ehud J; Kozerke, Sebastian; Nezafat, Reza

    2012-08-01

    To assess the efficacy and robustness of motion sensitized driven equilibrium (MSDE) for blood suppression in volumetric 3D whole-heart cardiac MR. To investigate the efficacy of MSDE on blood suppression and myocardial signal-to-noise ratio (SNR) loss on different imaging sequences, seven healthy adult subjects were imaged using 3D electrocardiogram (ECG)-triggered MSDE-prep T1-weighted turbo spin echo (TSE), and spoiled gradient echo (GRE), after optimization of MSDE parameters in a pilot study of five subjects. Imaging artifacts, myocardial and blood SNR were assessed. Subsequently, the feasibility of isotropic spatial resolution MSDE-prep black-blood was assessed in six subjects. Finally, 15 patients with known or suspected cardiovascular disease were recruited to be imaged using a conventional multislice 2D double inversion recovery (DIR) TSE imaging sequence and a 3D MSDE-prep spoiled GRE. The MSDE-prep yielded significant blood suppression (75%-92%), enabling a volumetric 3D black-blood assessment of the whole heart with significantly improved visualization of the chamber walls. The MSDE-prep also allowed successful acquisition of black-blood images with isotropic spatial resolution. In the patient study, 3D black-blood MSDE-prep and DIR resulted in similar blood suppression in left ventricle and right ventricle walls but the MSDE-prep had superior myocardial signal and wall sharpness. MSDE-prep allows volumetric black-blood imaging of the heart. Copyright © 2012 Wiley Periodicals, Inc.

  17. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D position of the manually digitised landmarks and that of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy which would facilitate the analysis of the dynamic motion during facial animations.

  18. TWO-DIMENSIONAL VIDEO ANALYSIS IS COMPARABLE TO 3D MOTION CAPTURE IN LOWER EXTREMITY MOVEMENT ASSESSMENT

    PubMed Central

    Schurr, Stacy A.; Resch, Jacob E.; Saliba, Susan A

    2017-01-01

    Background Although 3D motion capture is considered the “gold standard” for recording and analyzing kinematics, 2D video analysis may be a more reasonable, inexpensive, and portable option for kinematic assessment during pre-participation screenings. Few studies have compared quantitative measurements of lower extremity functional tasks between 2D and 3D. Purpose To compare kinematic measurements of the trunk and lower extremity in the frontal and sagittal planes between 2D video camera and 3D motion capture analyses obtained concurrently during a single leg squat (SLS). Study Design Descriptive laboratory study. Methods Twenty-six healthy, recreationally active adults volunteered to participate. Participants performed three trials of the single leg squat on each limb, which were recorded simultaneously by three 2D video cameras and a 3D motion capture system. Dependent variables analyzed were joint displacement at the trunk, hip, knee, and ankle in the frontal and sagittal planes during the task compared to single leg quiet standing. Results Dependent variables exhibited moderate to strong correlations between the two measures in the sagittal plane (r = 0.51–0.93) and a poor correlation at the knee in the frontal plane (r = 0.308) (p ≤ 0.05). All other dependent variables revealed non-significant results between the two measures. Bland-Altman plots revealed strong agreement in the average mean difference in the amount of joint displacement between 2D and 3D in the sagittal plane (trunk = 1.68°, hip = 2.60°, knee = 0.74°, and ankle = 3.12°). Agreement in the frontal plane was good (trunk = 7.92°, hip = -8.72°, knee = -6.62°, and ankle = 3.03°). Conclusion Moderate to strong relationships were observed between 2D video camera and 3D motion capture analyses at all joints in the sagittal plane, and the average mean difference was comparable to the standard error of measure with goniometry. The results
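
    The agreement statistics reported above, Pearson correlation plus the Bland-Altman mean difference (bias) and limits of agreement between paired 2D and 3D measurements, can be computed as in the following sketch; the data values are synthetic placeholders, not measurements from the study.

        import numpy as np

        def bland_altman(a, b):
            """Return (bias, lower limit of agreement, upper limit of agreement)."""
            diff = np.asarray(a, float) - np.asarray(b, float)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, bias - 1.96 * sd, bias + 1.96 * sd

        # Synthetic paired measurements (e.g. sagittal-plane knee displacement, degrees).
        deg_2d = np.array([62.1, 58.4, 65.0, 70.2, 55.9])
        deg_3d = np.array([61.0, 59.1, 63.8, 69.5, 57.0])
        r = np.corrcoef(deg_2d, deg_3d)[0, 1]          # Pearson correlation
        bias, lo, hi = bland_altman(deg_2d, deg_3d)    # mean difference and LoA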

  19. Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

    PubMed

    Rauschecker, Andreas M; Solomon, Samuel G; Glennerster, Andrew

    2006-12-19

    In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.

  20. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    In time-division 3D display, the time difference between the left and right images makes a person perceive an alternating vertical parallax when an object moves vertically on a fixed depth plane; the perceived left and right images then no longer match, which makes viewers more prone to visual fatigue. This mismatch cannot be eliminated simply by precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and the characteristics of the human visual system, this paper establishes a model relating the true vertical motion velocity of the object to its vertical motion velocity on the screen, calculates the amount of vertical parallax caused by the vertical motion, and then puts forward a motion compensation method to eliminate this vertical parallax. Finally, subjective experiments are carried out to analyze how the time difference affects stereoscopic visual comfort by comparing the comfort values of stereo image sequences before and after compensation using the proposed eliminating method. The theoretical analysis and experimental results show that the proposed method is reasonable and efficient.
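
    The underlying geometric relation is simple: an object moving vertically on the screen at velocity v is displaced by v multiplied by the left/right time offset between the two eye images, and that displacement is the vertical parallax to be compensated. The sketch below assumes, purely for illustration, a half-frame offset on a 120 Hz time-division display; these numbers are not taken from the paper.

        def vertical_parallax(v_screen_px_per_s, time_offset_s):
            """Vertical parallax (pixels) induced by vertical on-screen motion and the
            left/right display time offset."""
            return v_screen_px_per_s * time_offset_s

        # Assumed example: 120 Hz time-division display with a half-frame left/right
        # offset (~4.2 ms) and an object moving vertically at 240 px/s.
        dt = 0.5 / 120.0
        parallax_px = vertical_parallax(240.0, dt)   # = 1.0 px
        # Compensation: shift the later eye's image vertically by -parallax_px.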

  1. Speed and eccentricity tuning reveal a central role for the velocity-based cue to 3D visual motion.

    PubMed

    Czuba, Thaddeus B; Rokers, Bas; Huk, Alexander C; Cormack, Lawrence K

    2010-11-01

    Two binocular cues are thought to underlie the visual perception of three-dimensional (3D) motion: a disparity-based cue, which relies on changes in disparity over time, and a velocity-based cue, which relies on interocular velocity differences. The respective building blocks of these cues, instantaneous disparity and retinal motion, exhibit very distinct spatial and temporal signatures. Although these two cues are synchronous in naturally moving objects, disparity-based and velocity-based mechanisms can be dissociated experimentally. We therefore investigated how the relative contributions of these two cues change across a range of viewing conditions. We measured direction-discrimination sensitivity for motion through depth across a wide range of eccentricities and speeds for disparity-based stimuli, velocity-based stimuli, and "full cue" stimuli containing both changing disparities and interocular velocity differences. Surprisingly, the pattern of sensitivity for velocity-based stimuli was nearly identical to that for full cue stimuli across the entire extent of the measured spatiotemporal surface and both were clearly distinct from those for the disparity-based stimuli. These results suggest that for direction discrimination outside the fovea, 3D motion perception primarily relies on the velocity-based cue with little, if any, contribution from the disparity-based cue.

  2. Investigating Cardiac Motion Patterns Using Synthetic High-Resolution 3D Cardiovascular Magnetic Resonance Images and Statistical Shape Analysis

    PubMed Central

    Biffi, Benedetta; Bruse, Jan L.; Zuluaga, Maria A.; Ntsinjana, Hopewell N.; Taylor, Andrew M.; Schievano, Silvia

    2017-01-01

    Diagnosis of ventricular dysfunction in congenital heart disease is increasingly based on medical imaging, which allows investigation of abnormal cardiac morphology and correlated abnormal function. Although analysis of 2D images represents the clinical standard, novel tools performing automatic processing of 3D images are becoming available, providing more detailed and comprehensive information than simple 2D morphometry. Among these, statistical shape analysis (SSA) allows a consistent and quantitative description of a population of complex shapes, as a way to detect novel biomarkers, ultimately improving diagnosis and pathology understanding. The aim of this study is to describe the implementation of a SSA method for the investigation of 3D left ventricular shape and motion patterns and to test it on a small sample of 4 patients with repaired congenital aortic stenosis and 4 age-matched healthy volunteers to demonstrate its potential. The advantage of this method is the capability of analyzing subject-specific motion patterns separately from the individual morphology, visually and quantitatively, as a way to identify functional abnormalities related to both dynamics and shape. Specifically, we combined 3D, high-resolution whole heart data with 2D, temporal information provided by cine cardiovascular magnetic resonance images, and we used an SSA approach to analyze 3D motion per se. Preliminary results of this pilot study showed that using this method, some differences in end-diastolic and end-systolic ventricular shapes could be captured, but it was not possible to clearly separate the two cohorts based on shape information alone. However, further analyses of ventricular motion made it possible to qualitatively identify differences between the two populations. Moreover, by describing shape and motion with a small number of principal components, this method offers a fully automated process to obtain visually intuitive and numerical information on cardiac shape and motion

  3. Effects of image noise, respiratory motion, and motion compensation on 3D activity quantification in count-limited PET images

    NASA Astrophysics Data System (ADS)

    Siman, W.; Mawlawi, O. R.; Mikell, J. K.; Mourtada, F.; Kappadath, S. C.

    2017-01-01

    The aims of this study were to evaluate the effects of noise, motion blur, and motion compensation using quiescent-period gating (QPG) on the activity concentration (AC) distribution—quantified using the cumulative AC volume histogram (ACVH)—in count-limited studies such as 90Y-PET/CT. An International Electrotechnical Commission phantom filled with low 18F activity was used to simulate clinical 90Y-PET images. PET data were acquired using a GE-D690 when the phantom was static and subject to 1-4 cm periodic 1D motion. The static data were down-sampled into shorter durations to determine the effect of noise on ACVH. Motion-degraded PET data were sorted into multiple gates to assess the effect of motion and QPG on ACVH. Errors in ACVH at AC90 (minimum AC that covers 90% of the volume of interest (VOI)), AC80, and ACmean (average AC in the VOI) were characterized as a function of noise and amplitude before and after QPG. Scan-time reduction increased the apparent non-uniformity of sphere doses and the dispersion of ACVH. These effects were more pronounced in smaller spheres. Noise-related errors in ACVH at AC20 to AC70 were smaller (<15%) compared to the errors between AC80 and AC90 (>15%). The accuracy of ACmean was largely independent of the total count. Motion decreased the observed AC and skewed the ACVH toward lower values; the severity of this effect depended on motion amplitude and tumor diameter. The errors in AC20 to AC80 for the 17 mm sphere were -25% and -55% for motion amplitudes of 2 cm and 4 cm, respectively. With QPG, the errors in AC20 to AC80 of the 17 mm sphere were reduced to -15% for motion amplitudes <4 cm. For spheres with motion amplitude to diameter ratio >0.5, QPG was effective at reducing errors in ACVH despite increases in image non-uniformity due to increased noise. ACVH is believed to be more relevant than mean or maximum AC to calculate tumor control and normal tissue complication probability
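
    To make the ACVH metrics concrete, the sketch below computes AC90, AC80, and ACmean from a set of voxel activity concentrations in a VOI, under the assumption that ACx is defined like the Dx of a dose-volume histogram (the minimum value covering x% of the VOI volume); the array values and the `acvh_metrics` helper are illustrative, not taken from the paper.

```python
import numpy as np

def acvh_metrics(activity, fractions=(0.9, 0.8)):
    """Cumulative activity-concentration volume histogram (ACVH) metrics.

    `activity` holds the voxel activity concentrations inside one VOI.
    ACx is taken here as the minimum concentration covering x% of the VOI,
    analogous to Dx in a dose-volume histogram (an assumption; see lead-in).
    """
    sorted_ac = np.sort(np.ravel(activity))[::-1]    # descending order
    n = sorted_ac.size
    metrics = {"ACmean": float(sorted_ac.mean())}
    for f in fractions:
        idx = int(np.ceil(f * n)) - 1                # index where f of the volume is covered
        metrics[f"AC{int(round(f * 100))}"] = float(sorted_ac[idx])
    return metrics

# Hypothetical noisy uniform sphere, values in kBq/mL
rng = np.random.default_rng(0)
voi = rng.normal(loc=5.0, scale=1.0, size=10_000)
print(acvh_metrics(voi, fractions=(0.9, 0.8, 0.2)))
```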

  4. Atmospheric Motion Vectors from INSAT-3D: Initial quality assessment and its impact on track forecast of cyclonic storm NANAUK

    NASA Astrophysics Data System (ADS)

    Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham

    2016-03-01

    The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder and is placed at 82°E over the Indian Ocean region. Advances in the retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for a better understanding of the different tropical atmospheric processes over this region. The retrieval techniques and accuracy of one such parameter, Atmospheric Motion Vectors (AMV), have improved significantly with the availability of higher spatial resolution data along with more spectral channel options in the INSAT-3D imager. The present work is mainly focused on providing brief descriptions of INSAT-3D data and the AMV derivation processes using these data. It also discusses an initial quality assessment of INSAT-3D AMVs for a period of six months, from 01 February 2014 to 31 July 2014, against other independent observations: i) Meteosat-7 AMVs available over this region, ii) in-situ radiosonde wind measurements, iii) cloud-tracked winds from the Multi-angle Imaging Spectro-Radiometer (MISR) and iv) numerical model analysis. This study shows that the quality of the newly derived INSAT-3D AMVs is comparable with the two existing versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, INSAT-3D AMVs are assimilated in the Weather Research and Forecasting (WRF) model, and it is found that the assimilation of the newly derived AMVs reduced the track forecast errors of the recent cyclonic storm NANAUK over the Arabian Sea. Although the present study is limited to one case, it provides some guidance to the operational agencies for implementing this new AMV dataset in future Numerical Weather Prediction (NWP) applications over the south Asia region.

  5. Patient specific respiratory motion modeling using a 3D patient’s external surface

    PubMed Central

    Fayad, Hadi; Pan, Tinsu; Pradier, Olivier; Visvikis, Dimitris

    2012-01-01

    Purpose: Respiratory motion modeling of both tumor and surrounding tissues is a key element in minimizing errors and uncertainties in radiation therapy. Different continuous motion models have been previously developed. However, most of these models are based on the use of parameters such as amplitude and phase extracted from a 1D external respiratory signal. A potentially reduced correlation between the internal structures (tumor and healthy organs) and the corresponding external surrogates obtained from such a 1D respiratory signal is a limitation of these models. The objective of this work is to describe a continuous patient specific respiratory motion model, accounting for the irregular nature of respiratory signals, using patient external surface information as surrogate measures rather than a 1D respiratory signal. Methods: Ten patients were used in this study, each having one 4D CT series, a synchronized RPM signal, and patient surfaces extracted from the 4D CT volumes using a threshold based segmentation algorithm. A patient specific model based on the use of principal component analysis was subsequently constructed. This model relates the internal motion described by deformation matrices and the external motion characterized by the amplitude and the phase of the respiratory signal in the case of the RPM or using specific regions of interest (ROI) in the case of the patients’ external surface utilization. The capability of the different models considered to handle the irregular nature of respiration was assessed using two repeated 4D CT acquisitions (in two patients) and static CT images acquired at extreme respiration conditions (end of inspiration and expiration) for one patient. Results: Both quantitative and qualitative parameters covering local and global measures, including an expert observer study, were used to assess and compare the performance of the different motion estimation models considered. Results indicate that using surface information
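
    As a rough illustration of such a PCA-based surrogate model, the sketch below relates per-phase internal deformation fields to external surface ROI signals through principal components and a least-squares fit; the array shapes, the random stand-in data, and the `predict_dvf` helper are hypothetical and are not the authors' implementation.

```python
import numpy as np

# Hypothetical shapes: n_phases respiratory phases from a 4D CT series, each
# described by a flattened internal deformation field and by external surface
# surrogates (e.g., mean displacement of a few chest/abdomen ROIs).
n_phases, n_dvf, n_roi = 10, 3 * 5000, 4
rng = np.random.default_rng(1)
internal = rng.normal(size=(n_phases, n_dvf))        # stand-in for real deformation fields
external = rng.normal(size=(n_phases, n_roi))        # stand-in for surface ROI signals

# PCA of the internal motion via a mean-centred SVD
mean_dvf = internal.mean(axis=0)
U, S, Vt = np.linalg.svd(internal - mean_dvf, full_matrices=False)
n_comp = 2
scores = U[:, :n_comp] * S[:n_comp]                  # per-phase weights of the first components

# Least-squares map from external surrogates (plus offset) to the PCA weights
A, *_ = np.linalg.lstsq(np.c_[external, np.ones(n_phases)], scores, rcond=None)

def predict_dvf(surrogate):
    """Estimate an internal deformation field from a new surface measurement."""
    weights = np.append(surrogate, 1.0) @ A
    return mean_dvf + weights @ Vt[:n_comp]

estimated = predict_dvf(external[0])                 # sanity check on a training phase
```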

  6. Patient specific respiratory motion modeling using a 3D patient's external surface.

    PubMed

    Fayad, Hadi; Pan, Tinsu; Pradier, Olivier; Visvikis, Dimitris

    2012-06-01

    Respiratory motion modeling of both tumor and surrounding tissues is a key element in minimizing errors and uncertainties in radiation therapy. Different continuous motion models have been previously developed. However, most of these models are based on the use of parameters such as amplitude and phase extracted from a 1D external respiratory signal. A potentially reduced correlation between the internal structures (tumor and healthy organs) and the corresponding external surrogates obtained from such a 1D respiratory signal is a limitation of these models. The objective of this work is to describe a continuous patient specific respiratory motion model, accounting for the irregular nature of respiratory signals, using patient external surface information as surrogate measures rather than a 1D respiratory signal. Ten patients were used in this study, each having one 4D CT series, a synchronized RPM signal, and patient surfaces extracted from the 4D CT volumes using a threshold based segmentation algorithm. A patient specific model based on the use of principal component analysis was subsequently constructed. This model relates the internal motion described by deformation matrices and the external motion characterized by the amplitude and the phase of the respiratory signal in the case of the RPM or using specific regions of interest (ROI) in the case of the patients' external surface utilization. The capability of the different models considered to handle the irregular nature of respiration was assessed using two repeated 4D CT acquisitions (in two patients) and static CT images acquired at extreme respiration conditions (end of inspiration and expiration) for one patient. Both quantitative and qualitative parameters covering local and global measures, including an expert observer study, were used to assess and compare the performance of the different motion estimation models considered. Results indicate that using surface information [correlation coefficient (CC): 0

  7. Lung Motion and Volume Measurement by Dynamic 3D MRI Using a 128-Channel Receiver Coil

    PubMed Central

    Tokuda, Junichi; Schmitt, Melanie; Sun, Yanping; Patz, Samuel; Tang, Yi; Mountford, Carolyn E.; Hata, Nobuhiko; Wald, Lawrence L.; Hatabu, Hiroto

    2009-01-01

    Rationale and Objectives The authors present their initial experience using a 3-T whole-body scanner equipped with a 128-channel coil applied to lung motion assessment. Recent improvements in fast magnetic resonance imaging (MRI) technology have enabled several trials of free-breathing three-dimensional (3D) imaging of the lung. A large number of image frames necessarily increases the difficulty of image analysis and therefore warrants automatic image processing. However, the intensity homogeneities of images of prior dynamic 3D lung MRI studies have been insufficient to use such methods. In this study, initial data were obtained at 3 T with a 128-channel coil that demonstrate the feasibility of acquiring multiple sets of 3D pulmonary scans during free breathing and that have sufficient quality to be amenable to automatic segmentation. Materials and Methods Dynamic 3D images of the lungs of two volunteers were acquired with acquisition times of 0.62 to 0.76 frames/s and an image matrix of 128 × 128, with 24 to 30 slice encodings. The volunteers were instructed to take shallow and deep breaths during the scans. The variation of lung volume was measured from the segmented images. Results Dynamic 3D images were successfully acquired for both respiratory conditions for each subject. The images showed whole-lung motion, including lifting of the chest wall and the displacement of the diaphragm, with sufficient contrast to distinguish these structures from adjacent tissues. The average time to complete segmentation for one 3D image was 4.8 seconds. The tidal volume measured was consistent with known tidal volumes for healthy subjects performing deep-breathing maneuvers. The temporal resolution was insufficient to measure tidal volumes for shallow breathing. Conclusion This initial experience with a 3-T whole-body scanner and a 128-channel coil showed that the scanner and imaging protocol provided dynamic 3D images with spatial and temporal resolution sufficient to

  8. Sedimentary basin effects in Seattle, Washington: Ground-motion observations and 3D simulations

    USGS Publications Warehouse

    Frankel, Arthur; Stephenson, William; Carver, David

    2009-01-01

    Seismograms of local earthquakes recorded in Seattle exhibit surface waves in the Seattle basin and basin-edge focusing of S waves. Spectral ratios of S waves and later arrivals at 1 Hz for stiff-soil sites in the Seattle basin show a dependence on the direction to the earthquake, with earthquakes to the south and southwest producing higher average amplification. Earthquakes to the southwest typically produce larger basin surface waves relative to S waves than earthquakes to the north and northwest, probably because of the velocity contrast across the Seattle fault along the southern margin of the Seattle basin. S to P conversions are observed for some events and are likely converted at the bottom of the Seattle basin. We model five earthquakes, including the M 6.8 Nisqually earthquake, using 3D finite-difference simulations accurate up to 1 Hz. The simulations reproduce the observed dependence of amplification on the direction to the earthquake. The simulations generally match the timing and character of basin surface waves observed for many events. The 3D simulation for the Nisqually earthquake produces focusing of S waves along the southern margin of the Seattle basin near the area in west Seattle that experienced increased chimney damage from the earthquake, similar to the results of the higher-frequency 2D simulation reported by Stephenson et al. (2006). Waveforms from the 3D simulations show reasonable agreement with the data at low frequencies (0.2-0.4 Hz) for the Nisqually earthquake and an M 4.8 deep earthquake west of Seattle.

  9. Modelling of U-tube Tanks for ShipMo3D Ship Motion Predictions

    DTIC Science & Technology

    2012-01-01

    Ship roll motions in waves can be significant due to small roll damping and the proximity of ship...

  10. Upper Extremity Motion Assessment in Adult Ischemic Stroke Patients: A 3-D Kinematic Model

    DTIC Science & Technology

    2001-10-25

    Keywords: Botox, motion analysis, hemiplegia, stroke. Recovery from ischemic stroke has been explained by patients learning new skills, by...

  11. Analysis of 3-D Tongue Motion from Tagged and Cine Magnetic Resonance Images

    ERIC Educational Resources Information Center

    Xing, Fangxu; Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2016-01-01

    Purpose: Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during…

  12. Experience affects the use of ego-motion signals during 3D shape perception

    PubMed Central

    Jain, Anshul; Backus, Benjamin T.

    2011-01-01

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the “stationarity prior,” is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers’ stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity. PMID:21191132

  13. Motion Controllers for Learners to Manipulate and Interact with 3D Objects for Mental Rotation Training

    ERIC Educational Resources Information Center

    Yeh, Shih-Ching; Wang, Jin-Liang; Wang, Chin-Yeh; Lin, Po-Han; Chen, Gwo-Dong; Rizzo, Albert

    2014-01-01

    Mental rotation is an important spatial processing ability and an important element in intelligence tests. However, the majority of past attempts at training mental rotation have used paper-and-pencil tests or digital images. This study proposes an innovative mental rotation training approach using magnetic motion controllers to allow learners to…

  14. Prediction of 3D internal organ position from skin surface motion: results from electromagnetic tracking studies

    NASA Astrophysics Data System (ADS)

    Wong, Kenneth H.; Tang, Jonathan; Zhang, Hui J.; Varghese, Emmanuel; Cleary, Kevin R.

    2005-04-01

    An effective treatment method for organs that move with respiration (such as the lungs, pancreas, and liver) is a major goal of radiation medicine. In order to treat such tumors, we need (1) real-time knowledge of the current location of the tumor, and (2) the ability to adapt the radiation delivery system to follow this constantly changing location. In this study, we used electromagnetic tracking in a swine model to address the first challenge, and to determine if movement of a marker attached to the skin could accurately predict movement of an internal marker embedded in an organ. Under approved animal research protocols, an electromagnetically tracked needle was inserted into a swine liver and an electromagnetically tracked guidewire was taped to the abdominal skin of the animal. The Aurora (Northern Digital Inc., Waterloo, Canada) electromagnetic tracking system was then used to monitor the position of both of these sensors every 40 msec. Position readouts from the sensors were then tested to see if any of the movements showed correlation. The strongest correlations were observed between external anterior-posterior motion and internal inferior-superior motion, with many other axes exhibiting only weak correlation. We also used these data to build a predictive model of internal motion by taking segments from the data and using them to derive a general functional relationship between the internal needle and the external guidewire. For the axis with the strongest correlation, this model enabled us to predict internal organ motion to within 1 mm.
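
    A minimal sketch of the kind of analysis described above: correlating an external surrogate trace with an internal marker trace sampled every 40 ms and fitting a simple linear predictor of internal motion. The sinusoidal traces and the variable names are fabricated placeholders for the swine tracking data.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 60, 0.04)                            # 40 ms sampling, 60 s of data
external_ap = 5.0 * np.sin(2 * np.pi * t / 4.0)       # external anterior-posterior trace (mm)
internal_si = 8.0 * np.sin(2 * np.pi * t / 4.0) + rng.normal(0, 0.3, t.size)  # internal superior-inferior trace (mm)

# Correlation between the chosen external and internal axes
r = np.corrcoef(external_ap, internal_si)[0, 1]

# Linear predictive model fitted on the first half, evaluated on the second half
half = t.size // 2
slope, intercept = np.polyfit(external_ap[:half], internal_si[:half], deg=1)
predicted = slope * external_ap + intercept
rms_error = np.sqrt(np.mean((predicted[half:] - internal_si[half:]) ** 2))
print(f"r = {r:.2f}, RMS prediction error = {rms_error:.2f} mm")
```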

  15. Analysis of 3-D Tongue Motion from Tagged and Cine Magnetic Resonance Images

    ERIC Educational Resources Information Center

    Xing, Fangxu; Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2016-01-01

    Purpose: Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during…

  16. Experience affects the use of ego-motion signals during 3D shape perception.

    PubMed

    Jain, Anshul; Backus, Benjamin T

    2010-12-29

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the "stationarity prior," is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers' stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity.

  17. Well-posedness of linearized motion for 3-D water waves far from equilibrium

    SciTech Connect

    Hou, T.Y.; Zhen-huan Teng; Pingwen Zhang

    1996-12-31

    In this paper, we study the motion of a free surface separating two different layers of fluid in three dimensions. We assume the flow to be inviscid, irrotational, and incompressible. In this case, one can reduce the entire motion to variables on the surface alone. In general, without additional regularizing effects such as surface tension or viscosity, the flow can be subject to Rayleigh-Taylor or Kelvin-Helmholtz instabilities which will lead to unbounded growth in high frequency wave numbers. In this case, the problem is not well-posed in the Hadamard sense. The problem of water waves with no fluid above is a special case. It is well known that such motion is well-posed when the free surface is sufficiently close to equilibrium. Beale, Hou and Lowengrub derived a general condition which ensures well-posedness of the linearization about a presumed time-dependent motion in the two-dimensional case. The linearized equations, when formulated in a proper coordinate system, are found to have a qualitative structure surprisingly like that for the simple case of linear waves near equilibrium. Such an analysis is essential in analyzing the stability of boundary integral methods for computing free interface problems. 19 refs.

  18. Motion Controllers for Learners to Manipulate and Interact with 3D Objects for Mental Rotation Training

    ERIC Educational Resources Information Center

    Yeh, Shih-Ching; Wang, Jin-Liang; Wang, Chin-Yeh; Lin, Po-Han; Chen, Gwo-Dong; Rizzo, Albert

    2014-01-01

    Mental rotation is an important spatial processing ability and an important element in intelligence tests. However, the majority of past attempts at training mental rotation have used paper-and-pencil tests or digital images. This study proposes an innovative mental rotation training approach using magnetic motion controllers to allow learners to…

  19. Two-Step System Identification and Primitive-Based Motion Planning for Control of Small Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Grymin, David J.

    This dissertation addresses motion planning, modeling, and feedback control for autonomous vehicle systems. A hierarchical approach for motion planning and control of nonlinear systems operating in obstacle environments is presented. To reduce computation time during the motion planning process, dynamically feasible trajectories are generated in real-time through concatenation of pre-specified motion primitives. The motion planning task is posed as a search over a directed graph, and the applicability of informed graph search techniques is investigated. Specifically, a locally greedy algorithm with effective backtracking ability is developed and compared to weighted A* search. The greedy algorithm shows an advantage with respect to solution cost and computation time when larger motion primitive libraries that do not operate on a regular state lattice are utilized. Linearization of the nonlinear system equations about the motion primitive library results in a hybrid linear time-varying model, and an optimal control algorithm using the l2-induced norm as the performance measure is applied to ensure that the system tracks the desired trajectory. The ability of the resulting controller to closely track the trajectory obtained from the motion planner, despite various disturbances and uncertainties, is demonstrated through simulation. Additionally, an approach for obtaining dynamically feasible reference trajectories and feedback controllers for a small unmanned aerial vehicle (UAV) based on an aerodynamic model derived from flight tests is presented. The modeling approach utilizes the two step method (TSM) with stepwise multiple regression to determine relevant explanatory terms for the aerodynamic models. Dynamically feasible trajectories are then obtained through the solution of an optimal control problem using pseudospectral optimal control software. Discrete-time feedback controllers are then obtained to regulate the vehicle along the desired reference trajectory
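
    A compact sketch of a weighted A* search over a motion-primitive graph, in the spirit of the planners compared above; the `successors`, `heuristic`, and toy lattice in the usage lines are assumptions for illustration, not the dissertation's implementation.

```python
import heapq
from itertools import count

def weighted_a_star(start, goal_test, successors, heuristic, epsilon=1.5):
    """Weighted A* over a motion-primitive graph.

    `successors(state)` yields (next_state, primitive_id, cost) tuples and
    `heuristic(state)` underestimates the cost-to-go; epsilon > 1 trades
    optimality for search speed, a common choice in primitive-based planners.
    """
    tie = count()                                     # breaks ties so states are never compared directly
    frontier = [(epsilon * heuristic(start), next(tie), 0.0, start, [])]
    best_cost = {start: 0.0}
    while frontier:
        _, _, g, state, plan = heapq.heappop(frontier)
        if goal_test(state):
            return plan, g
        for nxt, primitive, cost in successors(state):
            g_new = g + cost
            if g_new < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = g_new
                priority = g_new + epsilon * heuristic(nxt)
                heapq.heappush(frontier, (priority, next(tie), g_new, nxt, plan + [primitive]))
    return None, float("inf")

# Toy lattice example: states are (x, y) cells and each "primitive" moves one cell
moves = {"E": (1, 0), "N": (0, 1), "W": (-1, 0), "S": (0, -1)}
succ = lambda s: [((s[0] + dx, s[1] + dy), name, 1.0) for name, (dx, dy) in moves.items()]
manhattan = lambda s: abs(5 - s[0]) + abs(3 - s[1])
plan, cost = weighted_a_star((0, 0), lambda s: s == (5, 3), succ, manhattan)
print(plan, cost)
```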

  20. Prospective motion correction of 3D echo-planar imaging data for functional MRI using optical tracking.

    PubMed

    Todd, Nick; Josephs, Oliver; Callaghan, Martina F; Lutti, Antoine; Weiskopf, Nikolaus

    2015-06-01

    We evaluated the performance of an optical camera based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field-of-view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2 × 2 × 3 factorial design with the following factors: PMC on or off; 3.0mm or 1.5mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5mm resolution comparing PMC on vs PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value for each image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no motion conditions and improved the time series temporal signal-to-noise by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p<0.001, uncorrected) in both task-based experiments were comparable for the no motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow motion cases. The PMC system is a robust solution to decrease the motion sensitivity of multi-shot 3D EPI sequences and thereby overcome one of the main roadblocks to their widespread use in fMRI studies. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
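
    The temporal signal-to-noise gain reported above can be computed voxel-wise as the temporal mean divided by the temporal standard deviation; the sketch below shows this on a synthetic 3D EPI run (the array sizes and noise level are invented for illustration).

```python
import numpy as np

def temporal_snr(timeseries):
    """Voxel-wise temporal SNR of an fMRI run with shape (x, y, z, t):
    temporal mean divided by temporal standard deviation."""
    mean = timeseries.mean(axis=-1)
    std = timeseries.std(axis=-1)
    return np.where(std > 0, mean / std, 0.0)

# Hypothetical 3D EPI run: 32x32x24 voxels, 100 volumes, arbitrary units
rng = np.random.default_rng(3)
run = 1000 + rng.normal(0, 20, size=(32, 32, 24, 100))
print(f"median tSNR = {np.median(temporal_snr(run)):.1f}")
```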

  1. Combining marker-less patient setup and respiratory motion monitoring using low cost 3D camera technology

    NASA Astrophysics Data System (ADS)

    Tahavori, F.; Adams, E.; Dabbs, M.; Aldridge, L.; Liversidge, N.; Donovan, E.; Jordan, T.; Evans, PM.; Wells, K.

    2015-03-01

    Patient set-up misalignment/motion can be a significant source of error within external beam radiotherapy, leading to unwanted dose to healthy tissues and sub-optimal dose to the target tissue. Such inadvertent displacement or motion of the target volume may be caused by treatment set-up error, respiratory motion, or involuntary movement, potentially decreasing therapeutic benefit. The conventional approach to managing abdominal-thoracic patient set-up is via skin markers (tattoos) and laser-based alignment. Alignment of the internal target volume with its position in the treatment plan can be achieved using Deep Inspiration Breath Hold (DIBH) in conjunction with marker-based respiratory motion monitoring. We propose a marker-less single system solution for patient set-up and respiratory motion management based on low cost 3D depth camera technology (such as the Microsoft Kinect). In this new work we assess this approach in a study group of six volunteer subjects. Separate simulated treatment "fractions" (set-ups) are compared for each subject, undertaken using conventional laser-based alignment and using intrinsic depth images produced by Kinect. Microsoft Kinect is also compared with the well-known RPM system for respiratory motion management in terms of monitoring free-breathing and DIBH. Preliminary results suggest that Kinect is able to produce mm-level surface alignment and DIBH respiratory motion management comparable to the popular RPM system. Such an approach may also yield significant benefits in terms of patient throughput as marker alignment and respiratory motion can be automated in a single system.

  2. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become increasingly important in recent years. Several well-established methods already yield impressive results. However, even under quite common conditions such as object movement or complex shapes, most methods become unsatisfactory. Thus, 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables a motion-robust 3D acquisition of complex-shaped object surfaces by a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor-zoo for different object sizes is presented. Finally, an overview of current and future fields of investigation is given.

  3. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras.

    PubMed

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-11-18

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained.

  4. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras

    PubMed Central

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-01-01

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731

  5. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    PubMed Central

    2011-01-01

    Background The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Results Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. Conclusions The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study
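
    The triangulation step described above can be illustrated with a standard linear (DLT) two-view triangulation; the function below is a generic sketch, not the authors' stereo pipeline, and the projection matrices P1 and P2 are assumed to come from a prior camera calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched point from a stereo pair.

    P1 and P2 are 3x4 camera projection matrices from a prior calibration;
    x1 and x2 are the pixel coordinates (u, v) of the same bee thorax in
    the two views after stereo matching.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                               # real-world (x, y, z) coordinates

# Toy usage with two axis-aligned cameras separated along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
print(triangulate(P1, P2, (0.10, 0.05), (0.05, 0.05)))
```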

  6. Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions

    NASA Astrophysics Data System (ADS)

    Khoury, Mehdi; Liu, Honghai

    This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.

  7. Kinetic Depth Effect and Optic Flow 1. 3D Shape from Fourier Motion

    DTIC Science & Technology

    1987-01-01

    rectification are unaffected by alternating polarity but disrupted by interposed gray frames. (2) To equate the accuracy of 2AFC planar direction-of... of the input stimulus. Direction: discrimination between left and right motion direction (two-alternative forced choice, 2AFC Direction) minimally... and the standard stimulus would be recovered. 2AFC-Direction performance is impaired by polarity alternation, but still well above chance for a wide

  8. Free-breathing 3D cardiac MRI using iterative image-based respiratory motion correction.

    PubMed

    Moghari, Mehdi H; Roujol, Sébastien; Chan, Raymond H; Hong, Susie N; Bello, Natalie; Henningsson, Markus; Ngo, Long H; Goddu, Beth; Goepfert, Lois; Kissinger, Kraig V; Manning, Warren J; Nezafat, Reza

    2013-10-01

    Respiratory motion compensation using diaphragmatic navigator gating with a 5 mm gating window is conventionally used for free-breathing cardiac MRI. Because of the narrow gating window, scan efficiency is low, resulting in long scan times, especially for patients with irregular breathing patterns. In this work, a new retrospective motion compensation algorithm is presented to reduce the scan time for free-breathing cardiac MRI by increasing the gating window to 15 mm without compromising image quality. The proposed algorithm iteratively corrects for respiratory-induced cardiac motion by optimizing the sharpness of the heart. To evaluate this technique, two coronary MRI datasets with 1.3 mm³ resolution were acquired from 11 healthy subjects (seven females, 25 ± 9 years); one using a navigator with a 5 mm gating window acquired in 12.0 ± 2.0 min and one with a 15 mm gating window acquired in 7.1 ± 1.0 min. The images acquired with a 15 mm gating window were corrected using the proposed algorithm and compared to the uncorrected images acquired with the 5 and 15 mm gating windows. The image quality score, sharpness, and length of the three major coronary arteries were equivalent between the corrected images and the images acquired with a 5 mm gating window (P-value > 0.05), while the scan time was reduced by a factor of 1.7. Copyright © 2012 Wiley Periodicals, Inc.
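
    A minimal sketch of the idea behind image-based correction: score candidate respiratory shifts by a heart-sharpness metric and keep the best one. The gradient-energy sharpness and the brute-force candidate search below are simplifications made for illustration; the published algorithm is iterative and operates per k-space segment.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def image_sharpness(volume):
    """Gradient-energy sharpness of a 3D image, a simple stand-in for the
    heart-sharpness objective maximized by image-based motion correction."""
    gx, gy, gz = np.gradient(volume.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2 + gz ** 2))

def best_respiratory_shift(volume, candidate_shifts_vox):
    """Pick, from a list of candidate (dz, dy, dx) translations in voxels,
    the one whose application yields the sharpest image."""
    scores = [image_sharpness(nd_shift(volume, s, order=1)) for s in candidate_shifts_vox]
    return candidate_shifts_vox[int(np.argmax(scores))]

# Toy usage on a random volume; in practice the candidates would span the
# respiratory displacement range and the search would run per k-space segment
rng = np.random.default_rng(4)
vol = rng.normal(size=(32, 32, 32))
print(best_respiratory_shift(vol, [(0, 0, 0), (0, 1, 0), (0, 2, 0)]))
```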

  9. Free-breathing 3D Cardiac MRI Using Iterative Image-Based Respiratory Motion Correction

    PubMed Central

    Moghari, Mehdi H.; Roujol, Sébastien; Chan, Raymond H.; Hong, Susie N.; Bello, Natalie; Henningsson, Markus; Ngo, Long H.; Goddu, Beth; Goepfert, Lois; Kissinger, Kraig V.; Manning, Warren J.; Nezafat, Reza

    2012-01-01

    Respiratory motion compensation using diaphragmatic navigator (NAV) gating with a 5 mm gating window is conventionally used for free-breathing cardiac MRI. Due to the narrow gating window, scan efficiency is low, resulting in long scan times, especially for patients with irregular breathing patterns. In this work, a new retrospective motion compensation algorithm is presented to reduce the scan time for free-breathing cardiac MRI by increasing the gating window to 15 mm without compromising image quality. The proposed algorithm iteratively corrects for respiratory-induced cardiac motion by optimizing the sharpness of the heart. To evaluate this technique, two coronary MRI datasets with 1.3 mm³ resolution were acquired from 11 healthy subjects (7 females, 25±9 years); one using a NAV with a 5 mm gating window acquired in 12.0±2.0 minutes and one with a 15 mm gating window acquired in 7.1±1.0 minutes. The images acquired with a 15 mm gating window were corrected using the proposed algorithm and compared to the uncorrected images acquired with the 5 mm and 15 mm gating windows. The image quality score, sharpness, and length of the three major coronary arteries were equivalent between the corrected images and the images acquired with a 5 mm gating window (p-value>0.05), while the scan time was reduced by a factor of 1.7. PMID:23132549

  10. Image segmentation and registration for the analysis of joint motion from 3D MRI

    NASA Astrophysics Data System (ADS)

    Hu, Yangqiu; Haynor, David R.; Fassbind, Michael; Rohr, Eric; Ledoux, William

    2006-03-01

    We report an image segmentation and registration method for studying joint morphology and kinematics from in vivo MRI scans and its application to the analysis of ankle joint motion. Using an MR-compatible loading device, a foot was scanned in a single neutral and seven dynamic positions including maximal flexion, rotation and inversion/eversion. A segmentation method combining graph cuts and level sets was developed which allows a user to interactively delineate 14 bones in the neutral position volume in less than 30 minutes total, including less than 10 minutes of user interaction. In the subsequent registration step, a separate rigid body transformation for each bone is obtained by registering the neutral position dataset to each of the dynamic ones, which produces an accurate description of the motion between them. We have processed six datasets, including 3 normal and 3 pathological feet. For validation our results were compared with those obtained from 3DViewnix, a semi-automatic segmentation program, and achieved good agreement in volume overlap ratios (mean: 91.57%, standard deviation: 3.58%) for all bones. Our tool requires only 1/50 and 1/150 of the user interaction time required by 3DViewnix and NIH Image Plus, respectively, an improvement that has the potential to make joint motion analysis from MRI practical in research and clinical applications.

  11. Long Period Ground Motion Prediction Of Linked Tonankai And Nankai Subduction Earthquakes Using 3D Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Kawabe, H.; Kamae, K.

    2005-12-01

    There is a high possibility of the occurrence of the Tonankai and Nankai earthquakes, which are capable of causing immense damage. During these huge earthquakes, long-period ground motions may strike the mega-cities Osaka and Nagoya, located inside the Osaka and Nobi basins, in which there are many long-period, low-damping structures (such as tall buildings and oil tanks). For earthquake disaster mitigation it is very important to predict the long-period strong ground motions of the future Tonankai and Nankai earthquakes, which are capable of exciting long-period strong ground motions over a wide area. In this study, we tried to predict long-period ground motions of the future Tonankai and Nankai earthquakes using a 3D finite difference method. We construct a three-dimensional underground structure model including not only the basins but also the propagation field from the source to the basins. The results show that the predominant periods of the pseudo-velocity response spectra change basin by basin. Long-period ground motions with periods of 5 to 8 seconds are predominant in the Osaka basin, 3 to 6 seconds in the Nobi basin and 2 to 5 seconds in the Kyoto basin. These characteristics of the long-period ground motions are related to the thicknesses of the sediments of the basins. The duration of long-period ground motions inside the basins is more than 5 minutes. These results are very useful for the earthquake disaster mitigation of long-period structures such as tall buildings and oil tanks.

  12. Instability of the perceived world while watching 3D stereoscopic imagery: A likely source of motion sickness symptoms.

    PubMed

    Hwang, Alex D; Peli, Eli

    2014-01-01

    Watching 3D content using a stereoscopic display may cause various discomforting symptoms, including eye strain, blurred vision, double vision, and motion sickness. Numerous studies have reported motion-sickness-like symptoms during stereoscopic viewing, but no causal linkage between specific aspects of the presentation and the induced discomfort has been explicitly proposed. Here, we describe several causes, in which stereoscopic capture, display, and viewing differ from natural viewing resulting in static and, importantly, dynamic distortions that conflict with the expected stability and rigidity of the real world. This analysis provides a basis for suggested changes to display systems that may alleviate the symptoms, and suggestions for future studies to determine the relative contribution of the various effects to the unpleasant symptoms.

  13. Instability of the perceived world while watching 3D stereoscopic imagery: A likely source of motion sickness symptoms

    PubMed Central

    Hwang, Alex D.; Peli, Eli

    2014-01-01

    Watching 3D content using a stereoscopic display may cause various discomforting symptoms, including eye strain, blurred vision, double vision, and motion sickness. Numerous studies have reported motion-sickness-like symptoms during stereoscopic viewing, but no causal linkage between specific aspects of the presentation and the induced discomfort has been explicitly proposed. Here, we describe several causes, in which stereoscopic capture, display, and viewing differ from natural viewing resulting in static and, importantly, dynamic distortions that conflict with the expected stability and rigidity of the real world. This analysis provides a basis for suggested changes to display systems that may alleviate the symptoms, and suggestions for future studies to determine the relative contribution of the various effects to the unpleasant symptoms. PMID:26034562

  14. 3-D uncertainty-based topographic change detection with structure-from-motion photogrammetry and precision maps

    NASA Astrophysics Data System (ADS)

    James, Mike R.; Robson, Stuart; Smith, Mark W.

    2017-04-01

    Structure-from-motion (SfM) software greatly facilitates the generation of 3-D surface models from photographs, but does not provide the detailed error metrics that are characteristic of rigorous photogrammetry. Here, we present a novel approach to generate maps of 3-D survey precision which describe the spatial variability in 3-D photogrammetric and georeferencing precision across surveys. Such maps then enable confidence-bounded quantification of 3-D topographic change that, for the first time, specifically accounts for the precision characteristics of photo-based surveys. Precision maps for surveys georeferenced either directly using camera positions or by ground control illustrate the spatial variability in precision that is associated with the relative influences of photogrammetric (e.g. image network geometry, tie point quality) and georeferencing considerations. For common SfM-based software (which does not provide precision estimates directly), precision maps can be generated using a Monte Carlo procedure. Confidence-bounded full 3-D change detection between repeat surveys with associated precision maps is then derived through adapting a state-of-the-art point-cloud comparison (M3C2; Lague et al., 2013). We demonstrate the approach using annual aerial SfM surveys of an eroding badland, benchmarked against TLS data for validation. 3-D precision maps enable more probable erosion patterns to be identified than existing analyses. If precision is limited by weak georeferencing (e.g. using direct georeferencing with camera positions of multi-metre precision, such as from a consumer UAV), then overall survey precision scales as n^(-1/2) of the control precision (n = number of images). However, direct georeferencing results from SfM software (PhotoScan) were not consistent with those from rigorous photogrammetric analysis. Our method not only enables confidence-bounded 3-D change detection and uncertainty-based DEM processing, but also provides covariance
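
    A small Monte Carlo illustration of the quoted n^(-1/2) scaling, assuming the survey datum is limited by per-camera position noise; the noise level and trial counts below are arbitrary and purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma_cam = 3.0                           # per-camera position precision (m), e.g. consumer-UAV GNSS
for n in (10, 40, 160, 640):              # number of images in the survey
    # treat the mean camera-position error as a proxy for the survey datum error
    datum_error = np.array([rng.normal(0, sigma_cam, n).mean() for _ in range(2000)])
    print(f"n={n:4d}  empirical sigma={datum_error.std():.2f} m  "
          f"predicted sigma_cam*n**-0.5={sigma_cam / np.sqrt(n):.2f} m")
```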

  15. Simulation of 3D tidal flat topography based on fractional Brownian motion

    NASA Astrophysics Data System (ADS)

    Li, Xing; Wu, Wen; Zhou, Yunxuan; Zhang, Junru

    2010-11-01

    The intertidal zone is one of the most dynamic areas on the Earth, and conducting topographic surveys there is very difficult. It is particularly hard to obtain elevation information when the tidal flat is a mud flat. In this article, we selected the Jiuduansha (Jiuduan Shoal) tidal flat as a test area to demonstrate a fractional Brownian motion model-based tidal flat elevation estimation. The Jiuduansha is a large shoal with a large mud flat located in the Changjiang River Estuary. Two Landsat TM images and two CBERS CCD images acquired in different seasons in 2008 were processed and waterlines were extracted from images of different times according to the tidal conditions. By considering the fact that the water surface is actually a curved surface, we dissected the waterlines into waterside points and assigned an elevation value to every point through interpolation of the tidal level data taken from nearby tidal observation stations at approximately the time when the satellite images were acquired. With the elevation data of the waterside points and digital sea chart depth data, a digital elevation model (DEM) was constructed using the fractional Brownian motion (fBm) model with a midpoint displacement algorithm through the Matlab toolbox. Finally, a quantitative validation of the model was completed using the 17 ground survey data positions on the tidal flat measured in November 2008. The simulation results show a good visual effect and high precision. The root mean square error relative to the ground survey data is 0.155 m.
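
    For readers unfamiliar with the midpoint-displacement approach mentioned above, the following is a generic diamond-square sketch of an fBm surface; the grid size, Hurst exponent, and seeding are illustrative choices and are not the parameters used for the Jiuduansha DEM.

```python
import numpy as np

def fbm_surface(n_levels=7, hurst=0.8, seed=0):
    """Fractional-Brownian-motion surface on a (2**n_levels + 1)^2 grid using
    the midpoint-displacement (diamond-square) algorithm."""
    rng = np.random.default_rng(seed)
    size = 2 ** n_levels + 1
    z = np.zeros((size, size))
    z[0, 0], z[0, -1], z[-1, 0], z[-1, -1] = rng.normal(0, 1, 4)   # corner seeds
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # diamond step: centre of each square gets the corner mean plus noise
        for i in range(half, size, step):
            for j in range(half, size, step):
                z[i, j] = 0.25 * (z[i - half, j - half] + z[i - half, j + half] +
                                  z[i + half, j - half] + z[i + half, j + half]) \
                          + rng.normal(0, scale)
        # square step: edge midpoints get the mean of their set neighbours plus noise
        for i in range(0, size, half):
            for j in range((i + half) % step, size, step):
                neighbours = []
                for di, dj in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    if 0 <= i + di < size and 0 <= j + dj < size:
                        neighbours.append(z[i + di, j + dj])
                z[i, j] = np.mean(neighbours) + rng.normal(0, scale)
        step = half
        scale *= 0.5 ** hurst              # roughness controlled by the Hurst exponent
    return z

surface = fbm_surface(n_levels=6, hurst=0.8)
print(surface.shape)
```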

  16. Simulation of 3D tidal flat topography based on fractional Brownian motion

    NASA Astrophysics Data System (ADS)

    Li, Xing; Wu, Wen; Zhou, Yunxuan; Zhang, Junru

    2009-09-01

    The intertidal zone is one of the most dynamic areas on the Earth, and conducting topographic surveys there is very difficult. It is particularly hard to obtain elevation information when the tidal flat is a mud flat. In this article, we selected the Jiuduansha (Jiuduan Shoal) tidal flat as a test area to demonstrate a fractional Brownian motion model-based tidal flat elevation estimation. The Jiuduansha is a large shoal with a large mud flat located in the Changjiang River Estuary. Two Landsat TM images and two CBERS CCD images acquired in different seasons in 2008 were processed and waterlines were extracted from images of different times according to the tidal conditions. By considering the fact that the water surface is actually a curved surface, we dissected the waterlines into waterside points and assigned an elevation value to every point through interpolation of the tidal level data taken from nearby tidal observation stations at approximately the time when the satellite images were acquired. With the elevation data of the waterside points and digital sea chart depth data, a digital elevation model (DEM) was constructed using the fractional Brownian motion (fBm) model with a midpoint displacement algorithm through the Matlab toolbox. Finally, a quantitative validation of the model was completed using the 17 ground survey data positions on the tidal flat measured in November 2008. The simulation results show a good visual effect and high precision. The root mean square error relative to the ground survey data is 0.155 m.

  17. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances of up to 90%, with variations in graphical complexity having a limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming.

  18. Ground motion in the presence of complex Topography II: Earthquake sources and 3D simulations

    USGS Publications Warehouse

    Hartzell, Stephen; Ramirez-Guzman, Leonardo; Meremonte, Mark; Leeds, Alena L.

    2017-01-01

    Eight seismic stations were placed in a linear array with a topographic relief of 222 m over Mission Peak in the east San Francisco Bay region for a period of one year to study topographic effects. Seventy‐two well‐recorded local earthquakes are used to calculate spectral amplitude ratios relative to a reference site. A well‐defined fundamental resonance peak is observed with individual station amplitudes following the theoretically predicted progression of larger amplitudes in the upslope direction. Favored directions of vibration are also seen that are related to the trapping of shear waves within the primary ridge dimensions. Spectral peaks above the fundamental one are also related to topographic effects but follow a more complex pattern. Theoretical predictions using a 3D velocity model and accurate topography reproduce many of the general frequency and time‐domain features of the data. Shifts in spectral frequencies and amplitude differences, however, are related to deficiencies of the model and point out the importance of contributing factors, including the shear‐wave velocity under the topographic feature, near‐surface velocity gradients, and source parameters.

  19. Optimization of real-time rigid registration motion compensation for prostate biopsies using 2D/3D ultrasound

    NASA Astrophysics Data System (ADS)

    Gillies, Derek J.; Gardi, Lori; Zhao, Ren; Fenster, Aaron

    2017-03-01

    During image-guided prostate biopsy, needles are targeted at suspicious tissues to obtain specimens that are later examined histologically for cancer. Patient motion causes inaccuracies when using MR-transrectal ultrasound (TRUS) image fusion approaches used to augment the conventional biopsy procedure. Motion compensation using a single, user initiated correction can be performed to temporarily compensate for prostate motion, but a real-time continuous registration offers an improvement to clinical workflow by reducing user interaction and procedure time. An automatic motion compensation method, approaching the frame rate of a TRUS-guided system, has been developed for use during fusion-based prostate biopsy to improve image guidance. 2D and 3D TRUS images of a prostate phantom were registered using an intensity based algorithm utilizing normalized cross-correlation and Powell's method for optimization with user initiated and continuous registration techniques. The user initiated correction performed with observed computation times of 78 ± 35 ms, 74 ± 28 ms, and 113 ± 49 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.5 ± 0.5 mm, 1.5 ± 1.4 mm, and 1.5 ± 1.6°. The continuous correction performed significantly faster (p < 0.05) than the user initiated method, with observed computation times of 31 ± 4 ms, 32 ± 4 ms, and 31 ± 6 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.2 ± 0.2 mm, 0.6 ± 0.5 mm, and 0.8 ± 0.4°.
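
    A minimal sketch of intensity-based rigid registration with normalized cross-correlation and Powell's method, in the spirit of the approach above; the 2D in-plane parameterization, the `register_rigid_2d` helper, and the smoothed toy images are assumptions for illustration rather than the clinical 2D/3D TRUS implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate, shift as nd_shift
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def register_rigid_2d(fixed, moving, x0=(0.0, 0.0, 0.0)):
    """Recover an in-plane (dx, dy, roll in degrees) transform that maximizes
    NCC between `fixed` and a warped `moving`, using Powell's method."""
    def cost(params):
        dx, dy, theta = params
        warped = nd_shift(rotate(moving, theta, reshape=False, order=1), (dy, dx), order=1)
        return -ncc(fixed, warped)
    result = minimize(cost, x0, method="Powell")
    return result.x, -result.fun

# Toy usage: recover a known translation of a smooth synthetic image
rng = np.random.default_rng(6)
img = gaussian_filter(rng.normal(size=(96, 96)), sigma=4)
moved = nd_shift(img, (3.0, -2.0), order=1)
params, score = register_rigid_2d(img, moved)
print(params, score)
```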

  20. Spatial synchronization of an insole pressure distribution system with a 3D motion analysis system for center of pressure measurements.

    PubMed

    Fradet, Laetitia; Siegel, Johannes; Dahl, Marieke; Alimusaj, Merkur; Wolf, Sebastian I

    2009-01-01

    Insole pressure systems are often more appropriate than force platforms for analysing center of pressure (CoP) as they are more flexible in use and indicate the position of the CoP that characterizes the contact foot/shoe during gait with shoes. However, these systems are typically not synchronized with 3D motion analysis systems. The present paper proposes a direct method that does not require a force platform for synchronizing an insole pressure system with a 3D motion analysis system. The distance separating 24 different CoPs measured optically and their equivalents measured by the insoles and transformed in the global coordinate system did not exceed 2 mm, confirming the suitability of the method proposed. Additionally, during static single limb stance, distances smaller than 7 mm and correlations higher than 0.94 were found between CoP trajectories measured with insoles and force platforms. Similar measurements were performed during gait to illustrate the characteristics of the CoP measured with each system. The distance separating the two CoPs was below 19 mm and the coefficient of correlation above 0.86. The proposed method offers the possibility to conduct new experiments, such as the investigation of proprioception in climbing stairs or in the presence of obstacles.
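
    A simple sketch of the spatial transformation involved: mapping an insole-frame CoP into the lab frame given an origin and in-plane axes derived from (hypothetical) shoe markers; the marker-derived values below are invented for illustration and do not reproduce the authors' calibration procedure.

```python
import numpy as np

def insole_to_global(cop_local, origin, x_axis, y_axis):
    """Map an insole-frame CoP (in mm) into the motion-capture lab frame,
    given the insole origin and in-plane axes reconstructed from markers."""
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_axis = y_axis / np.linalg.norm(y_axis)
    return origin + cop_local[0] * x_axis + cop_local[1] * y_axis

# Hypothetical marker-derived frame and a CoP 40 mm forward, 10 mm medial
origin = np.array([120.0, 350.0, 2.0])                # lab-frame position of the insole origin
x_axis = np.array([0.98, 0.20, 0.0])                  # insole forward direction in the lab frame
y_axis = np.array([-0.20, 0.98, 0.0])                 # insole medio-lateral direction
print(insole_to_global(np.array([40.0, 10.0]), origin, x_axis, y_axis))
```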

  1. Dynamics of errors in 3D motion estimation and implications for strain-tensor imaging in acoustic elastography

    NASA Astrophysics Data System (ADS)

    Bilgen, Mehmet

    2000-06-01

    For the purpose of quantifying the noise in acoustic elastography, a displacement covariance matrix is derived analytically for the cross-correlation based 3D motion estimator. Static deformation induced in tissue from an external mechanical source is represented by a second-order strain tensor. A generalized 3D model is introduced for the ultrasonic echo signals. The components of the covariance matrix are related to the variances of the displacement errors and the errors made in estimating the elements of the strain tensor. The results are combined to investigate the dependences of these errors on the experimental and signal-processing parameters as well as to determine the effects of one strain component on the estimation of the other. The expressions are evaluated for special cases of axial strain estimation in the presence of axial, axial-shear and lateral-shear type deformations in 2D. The signals are shown to decorrelate with any of these deformations, with strengths depending on the reorganization and interaction of tissue scatterers with the ultrasonic point spread function following the deformation. Conditions that favour the improvements in motion estimation performance are discussed, and advantages gained by signal companding and pulse compression are illustrated.
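
    The estimator whose error statistics are analysed above is, at its core, a windowed cross-correlation search between pre- and post-deformation echo signals. A minimal 1D sketch of such an estimator is given below; window and search sizes are arbitrary illustrative values, not those of the paper.

```python
# Minimal sketch of 1D windowed cross-correlation displacement estimation
# between pre- and post-deformation RF echo segments (illustrative only).
import numpy as np

def estimate_displacement(rf_pre, rf_post, win=64, search=16):
    """Return per-window lags (in samples) maximizing the cross-correlation."""
    lags = []
    for start in range(0, len(rf_pre) - win - search, win):
        ref = rf_pre[start:start + win]
        best_lag, best_cc = 0, -np.inf
        for lag in range(-search, search + 1):
            if start + lag < 0:
                continue
            seg = rf_post[start + lag:start + lag + win]
            if len(seg) < win:
                continue
            cc = np.dot(ref - ref.mean(), seg - seg.mean())
            if cc > best_cc:
                best_cc, best_lag = cc, lag
        lags.append(best_lag)
    return np.array(lags)
```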

  2. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
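
    The aggregation idea (several backends feeding one name-keyed view of tracked objects, with seamless hand-over between coverage areas) can be illustrated with a small facade, sketched below. All class and method names are hypothetical and do not reflect the actual MetaTracker API.

```python
# Toy illustration of merging several tracking backends behind one interface.
import time
from typing import Dict, Optional, Tuple

# Pose = (position xyz, orientation quaternion)
Pose = Tuple[Tuple[float, float, float], Tuple[float, float, float, float]]

class TrackerBackend:
    """Base class for a single hardware tracking system."""
    def poll(self) -> Dict[str, Pose]:
        """Return {object_name: pose} for objects currently visible."""
        raise NotImplementedError

class MetaTracker:
    """Merges poses from several backends into one name -> pose view."""
    def __init__(self, backends):
        self.backends = backends
        self._latest: Dict[str, Tuple[float, Pose]] = {}

    def update(self):
        now = time.time()
        for backend in self.backends:
            for name, pose in backend.poll().items():
                self._latest[name] = (now, pose)

    def get(self, name: str, max_age: float = 0.1) -> Optional[Pose]:
        entry = self._latest.get(name)
        if entry and time.time() - entry[0] <= max_age:
            return entry[1]
        return None
```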

  3. Quantification of Ground Motion Reductions by Fault Zone Plasticity with 3D Spontaneous Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Roten, D.; Olsen, K. B.; Cui, Y.; Day, S. M.

    2015-12-01

    We explore the effects of fault zone nonlinearity on peak ground velocities (PGVs) by simulating a suite of surface-rupturing earthquakes in a visco-plastic medium. Our simulations, performed with the AWP-ODC 3D finite difference code, cover magnitudes from 6.5 to 8.0, with several realizations of the stochastic stress drop for a given magnitude. We test three different models of rock strength, with friction angles and cohesions based on criteria which are frequently applied to fractured rock masses in civil engineering and mining. We use a minimum shear-wave velocity of 500 m/s and a maximum frequency of 1 Hz. In rupture scenarios with average stress drop (~3.5 MPa), plastic yielding reduces near-fault PGVs by 15 to 30% in pre-fractured, low-strength rock, but less than 1% in massive, high-quality rock. These reductions are almost insensitive to the scenario earthquake magnitude. In the case of high stress drop (~7 MPa), however, plasticity reduces near-fault PGVs by 38 to 45% in rocks of low strength and by 5 to 15% in rocks of high strength. Because plasticity reduces slip rates and static slip near the surface, these effects can partially be captured by defining a shallow velocity-strengthening layer. We also perform a dynamic nonlinear simulation of a high stress drop M 7.8 earthquake rupturing the southern San Andreas fault along 250 km from Indio to Lake Hughes. With respect to the viscoelastic solution, nonlinearity in the fault damage zone and in near-surface deposits would reduce long-period (> 1 s) peak ground velocities in the Los Angeles basin by 15-50%, depending on the strength of crustal rocks and shallow sediments. These simulation results suggest that nonlinear effects may be relevant even at long periods, especially for earthquakes with high stress drop.

  4. 3D PET image reconstruction including both motion correction and registration directly into an MR or stereotaxic spatial atlas

    NASA Astrophysics Data System (ADS)

    Gravel, Paul; Verhaeghe, Jeroen; Reader, Andrew J.

    2013-01-01

    This work explores the feasibility and impact of including both the motion correction and the image registration transformation parameters from positron emission tomography (PET) image space to magnetic resonance (MR), or stereotaxic, image space within the system matrix of PET image reconstruction. This approach is motivated by the fields of neuroscience and psychiatry, where PET is used to investigate differences in activation patterns between different groups of participants, requiring all images to be registered to a common spatial atlas. Currently, image registration is performed after image reconstruction which introduces interpolation effects into the final image. Furthermore, motion correction (also requiring registration) introduces a further level of interpolation, and the overall result of these operations can lead to resolution degradation and possibly artifacts. It is important to note that performing such operations on a post-reconstruction basis means, strictly speaking, that the final images are not ones which maximize the desired objective function (e.g. maximum likelihood (ML), or maximum a posteriori reconstruction (MAP)). To correctly seek parameter estimates in the desired spatial atlas which are in accordance with the chosen reconstruction objective function, it is necessary to include the transformation parameters for both motion correction and registration within the system modeling stage of image reconstruction. Such an approach not only respects the statistically chosen objective function (e.g. ML or MAP), but furthermore should serve to reduce the interpolation effects. To evaluate the proposed method, this work investigates registration (including motion correction) using 2D and 3D simulations based on the high resolution research tomograph (HRRT) PET scanner geometry, with and without resolution modeling, using the ML expectation maximization (MLEM) reconstruction algorithm. The quality of reconstruction was assessed using bias
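
    The core proposal is to fold the motion-correction and registration transforms into the forward model, so that MLEM iterates directly on an image defined in the target (MR or stereotaxic) space. A toy, dense-matrix sketch of that idea is shown below; the operators, shapes and names are illustrative assumptions, and a real HRRT reconstruction would use on-the-fly projectors rather than explicit matrices.

```python
# Toy matrix-based MLEM in which a resampling operator (motion + registration
# into atlas space) is composed with the projector, so updates happen directly
# in atlas coordinates. Illustrative sketch only.
import numpy as np

def mlem(projector, resampler, sinogram, n_iter=50, eps=1e-12):
    """projector: (n_bins, n_vox) matrix; resampler: (n_vox, n_vox) matrix mapping
    atlas-space voxels to scanner-space voxels for this frame."""
    A = projector @ resampler              # combined system matrix
    sens = A.T @ np.ones(sinogram.shape)   # sensitivity image in atlas space
    x = np.ones(A.shape[1])                # estimate lives in atlas space
    for _ in range(n_iter):
        expected = A @ x
        ratio = sinogram / np.maximum(expected, eps)
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x
```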

  5. Uncertainty preserving patch-based online modeling for 3D model acquisition and integration from passive motion imagery

    NASA Astrophysics Data System (ADS)

    Tang, Hao; Chang, Peng; Molina, Edgardo; Zhu, Zhigang

    2012-06-01

    In both military and civilian applications, abundant data from diverse sources captured on airborne platforms are often available for a region attracting interest. Since the data often include motion imagery streams collected from multiple platforms flying at different altitudes, with sensors of different fields of view (FOVs), resolutions, frame rates and spectral bands, it is imperative that a cohesive site model encompassing all the information can be quickly built and presented to the analysts. In this paper, we propose to develop an Uncertainty Preserving Patch-based Online Modeling System (UPPOMS) leading towards the automatic creation and updating of a cohesive, geo-registered, uncertainty-preserving, efficient 3D site terrain model from passive imagery with varying fields of view and phenomenologies. The proposed UPPOMS has the following technical thrusts that differentiate our approach from others: (1) An uncertainty-preserved, patch-based 3D model is generated, which enables the integration of images captured with a mixture of NFOV and WFOV and/or visible and infrared motion imagery sensors. (2) Patch-based stereo matching and multi-view 3D integration are utilized, which are suitable for scenes with many low-texture regions, particularly in mid-wave infrared images. (3) In contrast to conventional volumetric algorithms, whose computational and storage costs grow exponentially with the amount of input data and the scale of the scene, the proposed UPPOMS system employs an online algorithmic pipeline and scales well to large amounts of input data. Experimental results and discussions of future work will be provided.

  6. Automatic 3D relief acquisition and georeferencing of road sides by low-cost on-motion SfM

    NASA Astrophysics Data System (ADS)

    Voumard, Jérémie; Bornemann, Perrick; Malet, Jean-Philippe; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-04-01

    3D terrain relief acquisition is important for a large part of the geosciences. Several methods have been developed to digitize terrain, such as total stations, LiDAR, GNSS or photogrammetry. To digitize road (or rail track) sides along long sections, mobile spatial imaging systems or UAVs are commonly used. In this project, we compare a still fairly new method - the SfM on-motion technique - with traditional terrain digitizing techniques (terrestrial laser scanning, traditional SfM, UAS imaging solutions, GNSS surveying systems and total stations). The SfM on-motion technique generates 3D spatial data by photogrammetric processing of images taken from a moving vehicle. Our mobile system consists of six action cameras placed on a vehicle. Four fisheye cameras mounted on a mast on the vehicle roof are placed 3.2 meters above the ground. Three of them have a GNSS chip providing geotagged images. Two pictures were acquired every second by each camera. 4K-resolution fisheye videos were also used to extract 8.3M non-geotagged pictures. All these pictures are then processed with the Agisoft PhotoScan Professional software. Results from the SfM on-motion technique are compared with results from classical SfM photogrammetry on a 500-meter-long alpine track. They were also compared with mobile laser scanning data on the same road section. First results seem to indicate that slope structures are well observable up to decimetric accuracy. For the georeferencing, the planimetric (XY) accuracy of a few meters is much better than the altimetric (Z) accuracy. There is indeed a Z-coordinate shift of a few tens of meters between the GoPro cameras and the Garmin camera. This makes it necessary to give greater freedom to the altimetric coordinates in the processing software. Benefits of this low-cost SfM on-motion method are: 1) a simple setup to use in the field (easy to switch between vehicle types such as car, train, bike, etc.), 2) a low cost and 3) automatic georeferencing of the 3D point clouds. Main

  7. Nonlinear, nonlaminar-3D computation of electron motion through the output cavity of a klystron

    NASA Technical Reports Server (NTRS)

    Albers, L. U.; Kosmahl, H. G.

    1971-01-01

    The equations of motion used in the computation are discussed along with the space charge fields and the integration process. The following assumptions were used as a basis for the computation: (1) The beam is divided into N axisymmetric discs of equal charge and each disc into R rings of equal charge. (2) The velocity of each disc, its phase with respect to the gap voltage, and its radius at a specified position in the drift tunnel prior to the interaction gap are known from available large-signal one-dimensional programs. (3) The fringing rf fields are computed from exact analytical expressions derived from the wave equation assuming a known field shape between the tunnel tips at a radius a. (4) The beam is focused by an axisymmetric magnetic field. Both components of B, that is B sub z and B sub r, are taken into account. (5) Since this integration does not start at the cathode but rather farther downstream, prior to entering the output cavity, it is assumed that each electron moved along a laminar path from the cathode to the start of integration.
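
    At the level of a single charge ring, the computation described above amounts to integrating the Lorentz-force equations of motion through prescribed rf and magnetostatic fields. A deliberately simplified, non-relativistic sketch is given below; the field functions, step size and names are placeholders, and the original program additionally accounts for space charge between discs and rings.

```python
# Simplified semi-implicit (symplectic) Euler push of one charge ring through
# prescribed electric and magnetic fields via the Lorentz force. Sketch only.
import numpy as np

Q_OVER_M = -1.758820e11  # electron charge-to-mass ratio, C/kg

def push(r, v, e_field, b_field, dt, n_steps):
    """Integrate dr/dt = v, dv/dt = (q/m)(E + v x B) for n_steps of size dt.
    e_field(r) and b_field(r) are user-supplied 3-vector field functions."""
    for _ in range(n_steps):
        E = e_field(r)
        B = b_field(r)
        v = v + Q_OVER_M * (E + np.cross(v, B)) * dt
        r = r + v * dt
    return r, v
```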

  8. Capturing the 3D Motion of an Infalling Galaxy via Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Su, Yuanyuan; Kraft, Ralph P.; Nulsen, Paul E. J.; Roediger, Elke; Forman, William R.; Churazov, Eugene; Randall, Scott W.; Jones, Christine; Machacek, Marie E.

    2017-01-01

    The Fornax Cluster is the nearest (≤ 20 Mpc) galaxy cluster in the southern sky. NGC 1404 is a bright elliptical galaxy falling through the intracluster medium (ICM) of the Fornax Cluster. The sharp leading edge of NGC 1404 forms a classical “cold front” that separates the 0.6 keV dense interstellar medium from the 1.5 keV diffuse ICM. We measure the angular pressure variation along the cold front using a very deep (670 ks) Chandra X-ray observation. We take the classical approach of using stagnation pressure to determine a substructure’s speed to the next level by deriving not only a general speed but also directionality, which yields the complete velocity field as well as the distance of the substructure directly from the pressure distribution. We find a hydrodynamic model consistent with the pressure jump along NGC 1404's atmosphere measured in multiple directions. The best-fit model gives an inclination of 33° and a Mach number of 1.3 for the infall of NGC 1404, in agreement with complementary measurements of the motion of NGC 1404. Our study demonstrates the successful treatment of a highly ionized ICM as ideal fluid flow, in support of the hypothesis that magnetic pressure is not dynamically important over most of the virial region of galaxy clusters.
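
    The classical approach referred to above rests on the textbook relation between the stagnation-to-free-stream pressure ratio and the Mach number of the infalling body. The sketch below inverts that relation numerically for a monatomic ICM (gamma = 5/3); it reproduces only the standard compressible-flow formulas, not the authors' full hydrodynamic model or the directionality analysis.

```python
# Invert the standard stagnation-pressure relations for the Mach number.
import numpy as np
from scipy.optimize import brentq

GAMMA = 5.0 / 3.0  # monatomic gas (ICM)

def pressure_ratio(M, g=GAMMA):
    """Stagnation-to-free-stream pressure ratio as a function of Mach number."""
    if M <= 1.0:  # subsonic: isentropic relation
        return (1.0 + 0.5 * (g - 1.0) * M**2) ** (g / (g - 1.0))
    # supersonic: Rayleigh pitot formula (bow shock ahead of the front)
    a = ((g + 1.0)**2 * M**2 / (4.0 * g * M**2 - 2.0 * (g - 1.0))) ** (g / (g - 1.0))
    return a * (1.0 - g + 2.0 * g * M**2) / (g + 1.0)

def mach_from_ratio(ratio):
    """Solve pressure_ratio(M) = ratio for M (requires ratio > 1)."""
    return brentq(lambda M: pressure_ratio(M) - ratio, 1e-3, 5.0)
```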

  9. 3-D microvessel-mimicking ultrasound phantoms produced with a scanning motion system.

    PubMed

    Gessner, Ryan C; Kothadia, Roshni; Feingold, Steven; Dayton, Paul A

    2011-05-01

    Ultrasound techniques are currently being developed that can assess the vascularization of tissue as a marker for therapeutic response. Some of these ultrasound imaging techniques seek to extract quantitative features about vessel networks, whereas high-frequency imaging also allows individual vessels to be resolved. The development of these new techniques, and subsequent imaging analysis strategies, necessitates an understanding of their sensitivities to vessel and vessel network structural abnormalities. Constructing in-vitro flow phantoms for this purpose can be prohibitively challenging, because simulating precise flow environments with nontrivial structures is often impossible using conventional methods of construction for flow phantoms. Presented in this manuscript is a method to create predefined structures with <10 μm precision using a three-axis motion system. The application of this technique is demonstrated for the creation of individual vessel and vessel networks, which can easily be made to simulate the development of structural abnormalities typical of diseased vasculature in vivo. In addition, beyond facilitating the creation of phantoms that would otherwise be very challenging to construct, the method presented herein enables one to precisely simulate very slow blood flow and respiration artifacts, and to measure imaging resolution.

  10. Motion corrected 3D reconstruction of the fetal thorax from prenatal MRI.

    PubMed

    Kainz, Bernhard; Malamateniou, Christina; Murgasova, Maria; Keraudren, Kevin; Rutherford, Mary; Hajnal, Joseph V; Rueckert, Daniel

    2014-01-01

    In this paper we present a semi-automatic method for analysis of the fetal thorax in genuine three-dimensional volumes. After one initial click we localize the spine and accurately determine the volume of the fetal lung from high resolution volumetric images reconstructed from motion corrupted prenatal Magnetic Resonance Imaging (MRI). We compare the current state-of-the-art method of segmenting the lung in a slice-by-slice manner with the most recent multi-scan reconstruction methods. We use fast rotation invariant spherical harmonics image descriptors with Classification Forest ensemble learning methods to extract the spinal cord and show an efficient way to generate a segmentation prior for the fetal lung from this information for two different MRI field strengths. The spinal cord can be segmented with a DICE coefficient of 0.89 and the automatic lung segmentation has been evaluated with a DICE coefficient of 0.87. We evaluate our method on 29 fetuses with a gestational age (GA) between 20 and 38 weeks and show that our computed segmentations and the manual ground truth correlate well with the recorded values in literature.
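
    The DICE coefficients quoted above are the standard overlap measure between an automatic segmentation and the manual ground truth. For reference, a minimal implementation for binary masks might look like the following (illustrative sketch, not the authors' evaluation code).

```python
# Dice overlap coefficient between two binary segmentation masks.
import numpy as np

def dice(seg, gt):
    """Dice coefficient between two boolean masks of equal shape."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0
```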

  11. 3D optical imagery for motion compensation in a limb ultrasound system

    NASA Astrophysics Data System (ADS)

    Ranger, Bryan J.; Feigin, Micha; Zhang, Xiang; Mireault, Al; Raskar, Ramesh; Herr, Hugh M.; Anthony, Brian W.

    2016-04-01

    Conventional processes for prosthetic socket fabrication are heavily subjective, often resulting in an interface to the human body that is neither comfortable nor completely functional. With nearly 100% of amputees reporting that they experience discomfort with the wearing of their prosthetic limb, designing an effective interface to the body can significantly affect quality of life and future health outcomes. Active research in medical imaging and biomechanical tissue modeling of residual limbs has led to significant advances in computer aided prosthetic socket design, demonstrating an interest in moving toward more quantifiable processes that are still patient-specific. In our work, medical ultrasonography is being pursued to acquire data that may quantify and improve the design process and fabrication of prosthetic sockets while greatly reducing cost compared to an MRI-based framework. This paper presents a prototype limb imaging system that uses a medical ultrasound probe, mounted to a mechanical positioning system and submerged in a water bath. The limb imaging is combined with three-dimensional optical imaging for motion compensation. Images are collected circumferentially around the limb and combined into cross-sectional axial image slices, resulting in a compound image that shows tissue distributions and anatomical boundaries similar to magnetic resonance imaging. In this paper we provide a progress update on our system development, along with preliminary results as we move toward full volumetric imaging of residual limbs for prosthetic socket design. This demonstrates a novel multi-modal approach to residual limb imaging.

  12. Mobile Biplane X-Ray Imaging System for Measuring 3D Dynamic Joint Motion During Overground Gait.

    PubMed

    Guan, Shanyuanye; Gray, Hans A; Keynejad, Farzad; Pandy, Marcus G

    2016-01-01

    Most X-ray fluoroscopy systems are stationary and impose restrictions on the measurement of dynamic joint motion; for example, knee-joint kinematics during gait is usually measured with the subject ambulating on a treadmill. We developed a computer-controlled, mobile, biplane, X-ray fluoroscopy system to track human body movement for high-speed imaging of 3D joint motion during overground gait. A robotic gantry mechanism translates the two X-ray units alongside the subject, tracking and imaging the joint of interest as the subject moves. The main aim of the present study was to determine the accuracy with which the mobile imaging system measures 3D knee-joint kinematics during walking. In vitro experiments were performed to measure the relative positions of the tibia and femur in an intact human cadaver knee and of the tibial and femoral components of a total knee arthroplasty (TKA) implant during simulated overground gait. Accuracy was determined by calculating mean, standard deviation and root-mean-squared errors from differences between kinematic measurements obtained using volumetric models of the bones and TKA components and reference measurements obtained from metal beads embedded in the bones. Measurement accuracy was enhanced by the ability to track and image the joint concurrently. Maximum root-mean-squared errors were 0.33 mm and 0.65° for translations and rotations of the TKA knee and 0.78 mm and 0.77° for translations and rotations of the intact knee, which are comparable to results reported for treadmill walking using stationary biplane systems. System capability for in vivo joint motion measurement was also demonstrated for overground gait.

  13. Effect of 3D physiological loading and motion on elastohydrodynamic lubrication of metal-on-metal total hip replacements.

    PubMed

    Gao, Leiming; Wang, Fengcai; Yang, Peiran; Jin, Zhongmin

    2009-07-01

    An elastohydrodynamic lubrication (EHL) simulation of a metal-on-metal (MOM) total hip implant was presented, considering both steady-state and transient physiological loading and motion gait cycles in all three directions. The governing equations were solved numerically by the multi-grid method and fast Fourier transform in spherical coordinates, and full numerical solutions were presented, including the pressure and film thickness distributions. Despite small variations in the magnitude of the 3D resultant load, the horizontal anterior-posterior (AP) and medial-lateral (ML) load components were found to translate the contact area substantially in the corresponding direction and consequently to result in significant squeeze-film actions. For a cup positioned anatomically at 45 degrees, the variation of the resultant load was shown to be unlikely to cause edge contact. The contact area was found to lie within the cup dimensions of 70-130 degrees and 90-150 degrees in the AP and ML directions respectively, even under the largest translations. Under walking conditions, the horizontal load components had a significant impact on the lubrication film due to the squeeze-film effect. The time-dependent film thickness was increased by the horizontal translation and decreased during the reversal of this translation caused by the multi-directional nature of the AP load during walking. The minimum film thickness of 12-20 nm was found at 0.4 s and around the location (95, 125) degrees. During the whole walking cycle both the average and centre film thickness increased noticeably to a range of 40-65 nm, compared with the range of 25-55 nm under a single load (vertical) and single motion (flexion-extension) condition, suggesting that lubrication in the current MOM hip implant was improved under 3D physiological loading and motion. This study suggested that the lubrication performance, especially the film thickness distribution, should vary greatly under different operating conditions and the time and

  14. Design and verification of a simple 3D dynamic model of speed skating which mimics observed forces and motions.

    PubMed

    van der Kruk, E; Veeger, H E J; van der Helm, F C T; Schwab, A L

    2017-09-14

    Advice about the optimal coordination pattern for an individual speed skater could be addressed by simulation and optimization of a biomechanical speed skating model. But before getting to this optimization approach one needs a model that can reasonably match observed behaviour. Therefore, the objective of this study is to present a verified three-dimensional inverse skater model with minimal complexity, which models the speed skating motion on the straights. The model simulates the transverse translation of the skater's upper body together with the forces exerted by the skates on the ice. The input of the model is the changing distance between the upper body and the skate, referred to as the leg extension (Euclidean distance in 3D space). Verification shows that the model mimics the observed forces and motions well. The model is most accurate for the position and velocity estimation (respectively 1.2% and 2.9% maximum residuals) and least accurate for the force estimations (underestimation of 4.5-10%). The model can be used to further investigate variables in the skating motion. For this, the input of the model, the leg extension, can be optimized to obtain a maximal forward velocity of the upper body. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  15. Study of human body: Kinematics and kinetics of a martial arts (Silat) performers using 3D-motion capture

    NASA Astrophysics Data System (ADS)

    Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi

    2015-04-01

    Interest in the study of human kinematics goes back far in human history, driven by curiosity and the need to understand the complexity of human body motion. Advances in computing technology now make it possible to obtain new and accurate information about human movement. Martial arts (silat) was chosen and multiple types of movement were studied. This project used cutting-edge 3D motion capture technology to characterize and measure the motion of martial arts (silat) performers. The cameras detect markers (via infrared reflection) placed around the performer's body (24 markers in total), which appear as points in the computer software. The detected markers were analyzed using a kinematic and kinetic approach with time as the reference. Graphs of position, velocity and acceleration versus time t (seconds) were plotted for each marker. From this information, further parameters such as work done, momentum and the body's center of mass were determined using a mathematical approach. These data can be used to develop more effective movements in martial arts, contributing to practitioners of the art. Future work could extend this project to, for example, the analysis of martial arts competitions.
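
    The kinematic quantities described above (velocity and acceleration per marker, plus a whole-body centre of mass) follow from numerical differentiation of the tracked 3D positions. A minimal sketch is shown below; the array layout and the segment-mass weighting are illustrative assumptions rather than the study's actual body model.

```python
# Velocity/acceleration from tracked marker positions and a weighted centre of mass.
import numpy as np

def kinematics(positions, dt):
    """positions: (n_frames, n_markers, 3) array sampled at interval dt [s]."""
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return velocity, acceleration

def centre_of_mass(positions, segment_masses):
    """Weighted mean of marker positions; segment_masses: (n_markers,) in kg."""
    w = segment_masses / segment_masses.sum()
    return (positions * w[None, :, None]).sum(axis=1)
```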

  16. 3D measurements of alpine skiing with an inertial sensor motion capture suit and GNSS RTK system.

    PubMed

    Supej, Matej

    2010-05-01

    To date, camcorders have been the device of choice for 3D kinematic measurement in human locomotion, in spite of their limitations. This study examines a novel system in which a GNSS RTK unit returns a reference trajectory and a suit embedded with inertial sensors captures body segment motion. The aims were: (1) to validate the system's precision and (2) to measure an entire alpine ski race and retrieve the results shortly after measuring. For that purpose, four separate experiments were performed: (1) forced pendulum, (2) walking, (3) gate positions, and (4) skiing experiments. Segment movement validity was found to be dependent on the frequency of motion, with high accuracy (0.8 degrees, s = 0.6 degrees) over 10 s, which equals approximately 10 slalom turns, while accuracy decreased slightly (2.1 degrees, 3.3 degrees, and 4.2 degrees for 0.5, 1, and 2 Hz oscillations, respectively) during 35 s of data collection. The motion capture suit's orientation inaccuracy was mostly due to geomagnetic secular variation. The system exhibited high validity with respect to the reference trajectory (0.008 m, s = 0.0044) throughout an entire ski race. The system is capable of measuring an entire ski course with less manpower and therefore lower cost compared with camcorder-based techniques.

  17. How Plates Pull Transforms Apart: 3-D Numerical Models of Oceanic Transform Fault Response to Changes in Plate Motion Direction

    NASA Astrophysics Data System (ADS)

    Morrow, T. A.; Mittelstaedt, E. L.; Olive, J. A. L.

    2015-12-01

    Observations along oceanic fracture zones suggest that some mid-ocean ridge transform faults (TFs) previously split into multiple strike-slip segments separated by short (<~50 km) intra-transform spreading centers and then reunited to a single TF trace. This history of segmentation appears to correspond with changes in plate motion direction. Despite the clear evidence of TF segmentation, the processes governing its development and evolution are not well characterized. Here we use a 3-D, finite-difference / marker-in-cell technique to model the evolution of localized strain at a TF subjected to a sudden change in plate motion direction. We simulate the oceanic lithosphere and underlying asthenosphere at a ridge-transform-ridge setting using a visco-elastic-plastic rheology with a history-dependent plastic weakening law and a temperature- and stress-dependent mantle viscosity. To simulate the development of topography, a low density, low viscosity 'sticky air' layer is present above the oceanic lithosphere. The initial thermal gradient follows a half-space cooling solution with an offset across the TF. We impose an enhanced thermal diffusivity in the uppermost 6 km of lithosphere to simulate the effects of hydrothermal circulation. An initial weak seed in the lithosphere helps localize shear deformation between the two offset ridge axes to form a TF. For each model case, the simulation is run initially with TF-parallel plate motion until the thermal structure reaches a steady state. The direction of plate motion is then rotated either instantaneously or over a specified time period, placing the TF in a state of trans-tension. Model runs continue until the system reaches a new steady state. Parameters varied here include: initial TF length, spreading rate, and the rotation rate and magnitude of spreading obliquity. We compare our model predictions to structural observations at existing TFs and records of TF segmentation preserved in oceanic fracture zones.

  18. Shape and motion reconstruction from 3D-to-1D orthographically projected data via object-image relations.

    PubMed

    Ferrara, Matthew; Arnold, Gregory; Stuff, Mark

    2009-10-01

    This paper describes an invariant-based shape- and motion reconstruction algorithm for 3D-to-1D orthographically projected range data taken from unknown viewpoints. The algorithm exploits the object-image relation that arises in echo-based range data and represents a simplification and unification of previous work in the literature. Unlike one proposed approach, this method does not require uniqueness constraints, which makes its algorithmic form independent of the translation removal process (centroid removal, range alignment, etc.). The new algorithm, which simultaneously incorporates every projection and does not use an initialization in the optimization process, requires fewer calculations and is more straightforward than the previous approach. Additionally, the new algorithm is shown to be the natural extension of the approach developed by Tomasi and Kanade for 3D-to-2D orthographically projected data and is applied to a realistic inverse synthetic aperture radar imaging scenario, as well as experiments with varying amounts of aperture diversity and noise.
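
    The Tomasi-Kanade factorization that this work generalizes recovers motion and shape from the rank-3 structure of the centred measurement matrix. A bare-bones sketch of that affine factorization step is given below for the familiar 3D-to-2D case; the metric-upgrade step and the 1D extension developed in the paper are omitted.

```python
# Rank-3 affine factorization (Tomasi-Kanade style) of orthographic projections.
import numpy as np

def factorize(W):
    """W: (2F, P) stacked 2D orthographic projections of P points in F frames."""
    W0 = W - W.mean(axis=1, keepdims=True)   # remove per-row centroid (translation)
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # (2F, 3) motion (camera rows)
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # (3, P) shape, up to an affine ambiguity
    return M, S
```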

  19. 3-D or median map? Earthquake scenario ground-motion maps from physics-based models versus maps from ground-motion prediction equations

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2015-12-01

    There are two common ways to create a ground-motion map for a hypothetical earthquake: using ground motion prediction equations (by far the more common of the two) and using 3-D physics-based modeling. The former is very familiar to engineers, the latter much less so, and the difference can present a problem because engineers tend to trust the familiar and distrust novelty. Maps for essentially the same hypothetical earthquake using the two different methods can look very different, while appearing to present the same information. Using one or the other can lead an engineer or disaster planner to very different estimates of damage and risk. The reasons have to do with depiction of variability, spatial correlation of shaking, the skewed distribution of real-world shaking, and the upward-curving relationship between shaking and damage. The scientists who develop the two kinds of map tend to specialize in one or the other and seem to defend their turf, which can aggravate the problem of clearly communicating with engineers. The USGS Science Application for Risk Reduction's (SAFRR) HayWired scenario has addressed the challenge of explaining to engineers the differences between the two maps, and why, in a disaster planning scenario, one might want to use the less-familiar 3-D map.

  20. Evaluation of the respiratory motion influence in the 3D dose distribution of IMRT breast radiation therapy treatments

    NASA Astrophysics Data System (ADS)

    Lizar, J. C.; Santos, L. F.; Brandão, F. C.; Volpato, K. C.; Guimarães, F. S.; Pavoni, J. F.

    2017-05-01

    This study aims to evaluate the influence of respiratory motion on the three-dimensional dose distribution of the IMRT breast planning technique. To simulate the breathing movement, an oscillating platform was used. To simulate the breast, MAGIC-f phantoms were used. CT images of a static phantom were obtained and the IMRT treatment was planned based on them. One phantom was irradiated static on the platform and two other phantoms were irradiated while oscillating on the platform with amplitudes of 0.34 cm and 1.22 cm; a fourth phantom was used as a reference in the MRI acquisition. The percentage of points approved in the 3D global gamma analysis (3%/3 mm) when comparing the dose distribution of the static phantom with the oscillating ones was 91% for the 0.34 cm amplitude and 62% for the 1.22 cm amplitude. Considering this result, the differences found in the dosimetric analyses for the oscillating amplitude of 0.34 cm could be considered acceptable in a real treatment. The isodose distribution analyses showed a decrease of dose in the anterior breast region and an increase of dose in the posterior breast region, with these differences being most pronounced for the large-amplitude motion.
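
    The 3%/3 mm global gamma analysis quoted above compares each evaluated dose point against nearby planned points using combined dose-difference and distance-to-agreement criteria. A brute-force sketch of the definition is given below for small grids; it is illustrative only and far from a clinically optimized implementation.

```python
# Brute-force global gamma (dose difference / distance-to-agreement) pass rate.
import numpy as np

def gamma_pass_rate(measured, planned, spacing, dd=0.03, dta=3.0, threshold=0.1):
    """measured/planned: same-shape dose arrays; spacing: voxel size (mm) per axis."""
    spacing = np.asarray(spacing, dtype=float)
    norm = dd * planned.max()                  # global dose-difference criterion
    reach = int(np.ceil(dta / spacing.min()))  # search radius in voxels
    idx = np.array(np.nonzero(planned > threshold * planned.max())).T
    passed = 0
    for p in idx:
        best = np.inf
        for off in np.ndindex(*([2 * reach + 1] * planned.ndim)):
            q = p + np.array(off) - reach
            if np.any(q < 0) or np.any(q >= np.array(planned.shape)):
                continue
            dist2 = np.sum(((q - p) * spacing) ** 2) / dta ** 2
            dose2 = (measured[tuple(p)] - planned[tuple(q)]) ** 2 / norm ** 2
            best = min(best, dist2 + dose2)
        passed += best <= 1.0
    return passed / len(idx)
```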

  1. 3D fault curvature and fractal roughness: Insights for rupture dynamics and ground motions using a Discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas; Gabriel, Alice-Agnes

    2017-04-01

    Natural fault geometries are subject to a large degree of uncertainty. Their geometrical structure is not directly observable and may only be inferred from surface traces or geophysical measurements. Most studies aiming at assessing the potential seismic hazard of natural faults rely on models with idealised shapes, based on observable large-scale features. Yet real faults are wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. Dynamic rupture simulations aim to capture the observed complexity of earthquake sources and ground motions. From a numerical point of view, incorporating rough faults in such simulations is challenging - it requires optimised codes able to run efficiently on high-performance computers and simultaneously handle complex geometries. Physics-based rupture dynamics hosted by rough faults appear to be much closer, in terms of complexity, to source models inverted from observations. Moreover, the simulated ground motions present many similarities with observed ground-motion records. Thus, such simulations may foster our understanding of earthquake source processes and help derive more accurate seismic hazard estimates. In this presentation, the software package SeisSol (www.seissol.org), based on an ADER-Discontinuous Galerkin scheme, is used to solve the spontaneous dynamic earthquake rupture problem. The usage of tetrahedral unstructured meshes naturally allows for complicated fault geometries. However, SeisSol's high-order discretisation in time and space is not particularly suited for small-scale fault roughness. We will demonstrate modelling conditions under which SeisSol resolves rupture dynamics on rough faults accurately. The strong impact of the geometric gradient of the fault surface on the rupture process is then shown in 3D simulations. Following, the benefits of explicitly modelling fault curvature and roughness, in distinction to prescribing heterogeneous initial

  2. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose

    NASA Astrophysics Data System (ADS)

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-01

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that using our hybrid approach, a smaller than 1.2 mm or 1.5 mm root-mean-square tracking error can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required with the
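
    The geometric core of the MV-only estimation step described above is to pick, on the back-projected MV ray, the point closest to the first-principal-component line segment built from earlier stereoscopic trajectory samples. A simple sketch of that selection is given below; the sampling-based search and the parameter names are illustrative assumptions.

```python
# Point on a projection ray closest to a principal-component line segment.
import numpy as np

def closest_point_on_ray_to_segment(ray_origin, ray_dir, seg_a, seg_b, n_samples=200):
    """Return the point on the (infinite) ray line minimizing distance to [seg_a, seg_b]."""
    d = ray_dir / np.linalg.norm(ray_dir)
    e = seg_b - seg_a
    best_point, best_dist = None, np.inf
    # densely sample the short PCA segment; adequate for a sketch
    for u in np.linspace(0.0, 1.0, n_samples):
        s = seg_a + u * e
        t = np.dot(s - ray_origin, d)      # projection of s onto the ray line
        p = ray_origin + t * d
        dist = np.linalg.norm(p - s)
        if dist < best_dist:
            best_dist, best_point = dist, p
    return best_point
```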

  3. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose.

    PubMed

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-21

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that using our hybrid approach, a smaller than 1.2 mm or 1.5 mm root-mean-square tracking error can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required

  4. Does fluid infiltration affect the motion of sediment grains? - A 3-D numerical modelling approach using SPH

    NASA Astrophysics Data System (ADS)

    Bartzke, Gerhard; Rogers, Benedict D.; Fourtakas, Georgios; Mokos, Athanasios; Huhn, Katrin

    2016-04-01

    The processes that cause the creation of a variety of sediment morphological features, e.g. laminated beds, ripples, or dunes, are based on the initial motion of individual sediment grains. However, with experimental techniques it is difficult to measure the flow characteristics, i.e., the velocity of the pore water flow in sediments, at a sufficient resolution and in a non-intrusive way. As a result, the role of fluid infiltration at the surface and in the interior affecting the initiation of motion of a sediment bed is not yet fully understood. Consequently, there is a strong need for numerical models, since these are capable of quantifying fluid driven sediment transport processes of complex sediment beds composed of irregular shapes. The numerical method Smoothed Particle Hydrodynamics (SPH) satisfies this need. As a meshless and Lagrangian technique, SPH is ideally suited to simulating flows in sediment beds composed of various grain shapes, but also flow around single grains at a high temporal and spatial resolution. The solver chosen is DualSPHysics (www.dual.sphysics.org) since this is validated for a range of flow conditions. For the present investigation a 3-D numerical flume model was generated using SPH with a length of 4.0 cm, a width of 0.05 cm and a height of 0.2 cm where mobile sediment particles were deposited in a recess. An experimental setup was designed to test sediment configurations composed of irregular grain shapes (grain diameter, D50=1000 μm). Each bed consisted of 3500 mobile objects. After the bed generation process, the entire domain was flooded with 18 million fluid particles. To drive the flow, an oscillating motion perpendicular to the bed was applied to the fluid, reaching a peak value of 0.3 cm/s, simulating 4 seconds of real time. The model results showed that flow speeds decreased logarithmically from the top of the domain towards the surface of the beds, indicating a fully developed boundary layer. Analysis of the fluid

  5. Full-field modal analysis during base motion excitation using high-speed 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Molina-Viedma, Ángel J.; López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A.

    2017-10-01

    In recent years, many efforts have been made to exploit full-field measurement optical techniques for modal identification. Three-dimensional digital image correlation using high-speed cameras has been extensively employed for this purpose. Modal identification algorithms are applied to process the frequency response functions (FRF), which relate the displacement response of the structure to the excitation force. However, one of the most common tests for modal analysis involves base motion excitation of a structural element instead of force excitation. In this case, the relationship between response and excitation is typically based on displacements, which are known as transmissibility functions. In this study, a methodology for experimental modal analysis using high-speed 3D digital image correlation and base motion excitation tests is proposed. In particular, a cantilever beam was excited from its base with a random signal, using a clamped edge joint. Full-field transmissibility functions were obtained along the beam and converted into FRF for proper identification, considering a single degree-of-freedom theoretical conversion. Subsequently, modal identification was performed using a circle-fit approach. The proposed methodology facilitates the management of the typically large number of data points involved in the DIC measurement during modal identification. Moreover, it was possible to determine the natural frequencies, damping ratios and full-field mode shapes without requiring any additional tests. Finally, the results were experimentally validated by comparing them with those obtained by employing traditional accelerometers, analytical models and finite element method analyses. The comparison was performed by using the quantitative indicator modal assurance criterion. The results showed a high level of correspondence, consolidating the proposed experimental methodology.
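
    The single degree-of-freedom conversion mentioned above typically relies on the standard base-excitation relations between the measured transmissibility and a receptance-type FRF. One commonly assumed form is the following (this is the textbook SDOF relation, not necessarily the exact conversion used by the authors), with r = ω/ω_n and damping ratio ζ:

```latex
% Base-motion transmissibility and an equivalent FRF for a SDOF system
% (assumed textbook form; r = \omega/\omega_n, damping ratio \zeta).
T(\omega) \;=\; \frac{X(\omega)}{Y(\omega)}
          \;=\; \frac{1 + 2 i \zeta r}{1 - r^{2} + 2 i \zeta r},
\qquad
H(\omega) \;=\; \frac{X(\omega) - Y(\omega)}{\omega^{2}\, Y(\omega)}
          \;=\; \frac{1}{\omega_n^{2} - \omega^{2} + 2 i \zeta \omega_n \omega}.
```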

  6. Shoulder 3D range of motion and humerus rotation in two volleyball spike techniques: injury prevention and performance.

    PubMed

    Seminati, Elena; Marzari, Alessandra; Vacondio, Oreste; Minetti, Alberto E

    2015-06-01

    Repetitive stresses and movements on the shoulder in the volleyball spike expose this joint to overuse injuries, potentially bringing athletes to a career-threatening injury. Assuming that specific spike techniques play an important role in injury risk, we compared the kinematics of the traditional (TT) and the alternative (AT) techniques in 21 elite athletes, evaluating their safety with respect to performance. The glenohumeral joint was set as the centre of an imaginary sphere, intersected by the distal end of the humerus at different angles. Shoulder range of motion and angular velocities were calculated and compared to the joint limits. Ball speed and jump height were also assessed. Results indicated the trajectory of the humerus to be different for the TT, with maximal flexion of the shoulder reduced by 10 degrees, and horizontal abduction 15 degrees higher. No difference was found for external rotation angles, while axial rotation velocities were significantly higher in the AT, with a 5% higher ball speed. Results suggest the AT as a potential preventive solution to chronic shoulder pathologies, reducing shoulder flexion during spiking. The proposed method allows visualisation of the risks associated with different overhead manoeuvres, by depicting humerus angles and velocities with respect to joint limits in the same 3D space.

  7. Investigation of visually induced motion sickness in dynamic 3D contents based on subjective judgment, heart rate variability, and depth gaze behavior.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2014-01-01

    Visually induced motion sickness (VIMS) is an important safety issue in stereoscopic 3D technology. Accompanying subjective judgment of VIMS with objective measurement is useful to identify not only biomedical effects of dynamic 3D contents, but also provoking scenes that induce VIMS, duration of VIMS, and user behavior during VIMS. Heart rate variability and depth gaze behavior are appropriate physiological indicators for such objective observation. However, there is no information about the relationship between subjective judgment of VIMS, heart rate variability, and depth gaze behavior. In this paper, we present a novel investigation of VIMS based on the simulator sickness questionnaire (SSQ), electrocardiography (ECG), and 3D gaze tracking. Statistical analysis of the SSQ data shows that nausea and disorientation symptoms increase as the amount of dynamic motion increases (nausea: p < 0.005; disorientation: p < 0.05). To reduce VIMS, the SSQ and ECG data suggest that the user should perform voluntary gaze fixation at one point when experiencing vertical motion (up or down) and horizontal motion (turning left and right) in dynamic 3D contents. Observation of the 3D gaze tracking data reveals that users who experienced VIMS tended to have a less stable depth gaze than those who did not experience VIMS.

  8. Calculating the Probability of Strong Ground Motions Using 3D Seismic Waveform Modeling - SCEC CyberShake

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Callaghan, S.; Graves, R.; Mehta, G.; Zhao, L.; Deelman, E.; Jordan, T. H.; Kesselman, C.; Okaya, D.; Cui, Y.; Field, E.; Gupta, V.; Vahi, K.; Maechling, P. J.

    2006-12-01

    Researchers from the SCEC Community Modeling Environment (SCEC/CME) project are utilizing the CyberShake computational platform and a distributed high performance computing environment that includes the USC High Performance Computer Center and the NSF TeraGrid facilities to calculate physics-based probabilistic seismic hazard curves for several sites in the Southern California area. Traditionally, probabilistic seismic hazard analysis (PSHA) is conducted using intensity measure relationships based on empirical attenuation relationships. However, a more physics-based approach using waveform modeling could lead to significant improvements in seismic hazard analysis. Members of the SCEC/CME Project have integrated leading-edge PSHA software tools, SCEC-developed geophysical models, validated anelastic wave modeling software, and state-of-the-art computational technologies on the TeraGrid to calculate probabilistic seismic hazard curves using 3D waveform-based modeling. The CyberShake calculations for a single probabilistic seismic hazard curve require tens of thousands of CPU hours and multiple terabytes of disk storage. The CyberShake workflows are run on high performance computing systems including multiple TeraGrid sites (currently SDSC and NCSA), and the USC Center for High Performance Computing and Communications. To manage the extensive job scheduling and data requirements, CyberShake utilizes a grid-based scientific workflow system based on the Virtual Data System (VDS), the Pegasus meta-scheduler system, and the Globus toolkit. Probabilistic seismic hazard curves for spectral acceleration at 3.0 seconds have been produced for eleven sites in the Southern California region, including rock and basin sites. At low ground motion levels, there is little difference between the CyberShake and attenuation relationship curves. At higher ground motion (lower probability) levels, the curves are similar for some sites (downtown LA, I-5/SR-14 interchange) but different for

  9. Does fluid infiltration affect the motion of sediment grains? - A 3-D numerical modelling approach using SPH

    NASA Astrophysics Data System (ADS)

    Bartzke, Gerhard; Rogers, Benedict D.; Fourtakas, Georgios; Mokos, Athanasios; Canelas, Ricardo B.; Huhn, Katrin

    2017-04-01

    With experimental techniques it is difficult to measure flow characteristics, e.g. the velocity of pore water flow in sediments, at a sufficient resolution and in a non-intrusive way. As a result, the effect of fluid flow at the surface and in the interior of a sediment bed on particle motion is not yet fully understood. Numerical models may help to overcome these problems. In this study Smoothed Particle Hydrodynamics (SPH) was chosen since it is ideally suited to simulate flows in sediment beds, at a high temporal and spatial resolution. The solver chosen is DualSPHysics 4.0 (www.dual.sphysics.org), since this is validated for a range of flow conditions. For the present investigation a 3D numerical flow channel was generated with a length of 15.0 cm, a width of 0.5 cm and a height of 4.0 cm. The entire domain was flooded with 8 million fluid particles, while 400 mobile sediment particles were deposited under applied gravity (grain diameter D50=10 mm) to generate randomly packed beds. Periodic boundaries were applied to the sidewalls to mimic an endless flow. To drive the flow, an acceleration perpendicular to the bed was applied to the fluid, reaching a target value of 0.3 cm/s, simulating 12 seconds of real time. Comparison of the model results to the law of the wall showed that flow speeds decreased logarithmically from the top of the domain towards the surface of the beds, indicating a fully developed boundary layer. Analysis of the fluid surrounding the sediment particles revealed critical threshold velocities, subsequently resulting in the initiation of motion due to drag. Sediment flux measurements indicated that with increasing simulation time a larger quantity of sediment particles was transported at the direct vicinity of the bed, whereas the amount of transported particles along with flow speed values, within the pore spaces, decreased with depth. Moreover, sediment - sediment particle collisions at the sediment surface lead to the opening of new pore

  10. SU-E-J-80: Interplay Effect Between VMAT Intensity Modulation and Tumor Motion in Hypofractioned Lung Treatment, Investigated with 3D Pressage Dosimeter

    SciTech Connect

    Touch, M; Wu, Q; Oldham, M

    2014-06-01

    Purpose: To demonstrate an embedded tissue-equivalent Presage dosimeter for measuring 3D doses in moving tumors and to study the interplay effect between tumor motion and intensity modulation in hypofractionated Volumetric Modulated Arc Therapy (VMAT) lung treatment. Methods: Motion experiments were performed using cylindrical Presage dosimeters (5 cm diameter by 7 cm length) mounted inside the lung insert of a CIRS thorax phantom. Two different VMAT treatment plans were created and delivered in three different scenarios with the same prescribed dose of 18 Gy. Plan 1, containing a 2 cm spherical CTV with an additional 2 mm setup margin, was delivered on a stationary phantom. Plan 2 used the same CTV but expanded by 1 cm in the sup-inf direction to generate the ITV and PTV, respectively. The dosimeters were irradiated in static and variable motion scenarios on a TrueBeam system. After irradiation, high-resolution 3D dosimetry was performed using the Duke Large Field-of-view Optical-CT Scanner and compared to the calculated dose from Eclipse. Results: In the control case (no motion), good agreement was observed between the planned and delivered dose distributions, as indicated by 100% 3D gamma (3% of maximum planned dose and 3 mm DTA) passing rates in the CTV. In the motion cases the gamma passing rate was 99% in the CTV. DVH comparisons also showed good agreement between the planned and delivered dose in the CTV for both the control and motion cases. However, differences of 15% and 5% in dose to the PTV were observed in the motion and control cases, respectively. Conclusion: Given the very high dose per fraction of a hypofractionated treatment, a significant effect was observed only when motion was introduced to the target. This can result from the interplay between the moving target and the modulation of the MLC. 3D optical dosimetry can be of great advantage in hypofractionated treatment dose validation studies.

  11. Creation of 3D digital anthropomorphic phantoms which model actual patient non-rigid body motion as determined from MRI and position tracking studies of volunteers

    NASA Astrophysics Data System (ADS)

    Connolly, C. M.; Konik, A.; Dasari, P. K. R.; Segars, P.; Zheng, S.; Johnson, K. L.; Dey, J.; King, M. A.

    2011-03-01

    Patient motion can cause artifacts, which can lead to difficulty in interpretation. The purpose of this study is to create 3D digital anthropomorphic phantoms which model the location of the structures of the chest and upper abdomen of human volunteers undergoing a series of clinically relevant motions. The 3D anatomy is modeled using the XCAT phantom and based on MRI studies. The NURBS surfaces of the XCAT are interactively adapted to fit the MRI studies. A detailed XCAT phantom is first developed from an EKG-triggered Navigator acquisition composed of sagittal slices with a 3 x 3 x 3 mm voxel dimension. Rigid body motion states are then acquired at breath-hold as sagittal slices partially covering the thorax, centered on the heart, with 9 mm gaps between them. For non-rigid body motion requiring greater sampling, modified Navigator sequences covering the entire thorax with 3 mm gaps between slices are obtained. The structures of the initial XCAT are then adapted to fit these different motion states. Simultaneously with MRI imaging, the positions of multiple reflective markers on stretchy bands about the volunteer's chest and abdomen are optically tracked in 3D via stereo imaging. These phantoms with combined position tracking will be used to investigate both imaging-data-driven and motion-tracking strategies to estimate and correct for patient motion. Our initial application will be to cardiac-perfusion SPECT imaging, where the XCAT phantoms will be used to create patient activity and attenuation distributions for each volunteer with corresponding motion tracking data from the markers on the body surface. Monte Carlo methods will then be used to simulate SPECT acquisitions, which will be used to evaluate various motion estimation and correction strategies.

  12. Respiratory motion compensation for simultaneous PET/MR based on a 3D-2D registration of strongly undersampled radial MR data: a simulation study

    NASA Astrophysics Data System (ADS)

    Rank, Christopher M.; Heußer, Thorsten; Flach, Barbara; Brehm, Marcus; Kachelrieß, Marc

    2015-03-01

    We propose a new method for PET/MR respiratory motion compensation, which is based on a 3D-2D registration of strongly undersampled MR data and a) runs in parallel with the PET acquisition, b) can be interlaced with clinical MR sequences, and c) requires less than one minute of the total MR acquisition time per bed position. In our simulation study, we applied a 3D encoded radial stack-of-stars sampling scheme with 160 radial spokes per slice and an acquisition time of 38 s. Gated 4D MR images were reconstructed using a 4D iterative reconstruction algorithm. Based on these images, motion vector fields were estimated using our newly-developed 3D-2D registration framework. A 4D PET volume of a patient with eight hot lesions in the lungs and upper abdomen was simulated and MoCo 4D PET images were reconstructed based on the motion vector fields derived from MR. For evaluation, average SUVmean values of the artificial lesions were determined for a 3D, a gated 4D, a MoCo 4D and a reference (with ten-fold measurement time) gated 4D reconstruction. Compared to the reference, 3D reconstructions yielded an underestimation of SUVmean values due to motion blurring. In contrast, gated 4D reconstructions showed the highest variation of SUVmean due to low statistics. MoCo 4D reconstructions were only slightly affected by these two sources of uncertainty resulting in a significant visual and quantitative improvement in terms of SUVmean values. Whereas temporal resolution was comparable to the gated 4D images, signal-to-noise ratio and contrast-to-noise ratio were close to the 3D reconstructions.

  13. Temporal integration of 3D coherent motion cues defining visual objects of unknown orientation is impaired in amnestic mild cognitive impairment and Alzheimer's disease.

    PubMed

    Lemos, Raquel; Figueiredo, Patrícia; Santana, Isabel; Simões, Mário R; Castelo-Branco, Miguel

    2012-01-01

    The nature of visual impairments in Alzheimer's disease (AD) and their relation with other cognitive deficits remains highly debated. We asked whether independent visual deficits are present in AD and amnestic forms of mild cognitive impairment (MCI) in the absence of other comorbidities by performing a hierarchical analysis of low-level and high-level visual function in MCI and AD. Since parietal structures are a frequent pathophysiological target in AD and subserve 3D vision driven by motion cues, we hypothesized that the parietal visual dorsal stream function is predominantly affected in these conditions. We used a novel 3D task combining three critical variables to challenge parietal function: 3D motion coherence of objects of unknown orientation, with constrained temporal integration of these cues. Groups of amnestic MCI (n = 20), AD (n = 19), and matched controls (n = 20) were studied. Low-level visual function was assessed using psychophysical contrast sensitivity tests probing the magnocellular, parvocellular, and koniocellular pathways. We probed visual ventral stream function using the Benton Face Recognition task. We have found hierarchical visual impairment in AD, independently of neuropsychological deficits, in particular in the novel parietal 3D task, which was selectively affected in MCI. Integration of local motion cues into 3D objects was specifically and most strongly impaired in AD and MCI, especially when 3D motion was unpredictable, with variable orientation and short-lived in space and time. In sum, specific early dorsal stream visual impairment occurs independently of ventral stream, low-level visual and neuropsychological deficits, in amnestic types of MCI and AD.

  14. Real-time motion- and B0-correction for LASER-localized spiral-accelerated 3D-MRSI of the brain at 3T.

    PubMed

    Bogner, Wolfgang; Hess, Aaron T; Gagoski, Borjan; Tisdall, M Dylan; van der Kouwe, Andre J W; Trattnig, Siegfried; Rosen, Bruce; Andronesi, Ovidiu C

    2014-03-01

    The full potential of magnetic resonance spectroscopic imaging (MRSI) is often limited by localization artifacts, motion-related artifacts, scanner instabilities, and long measurement times. Localized adiabatic selective refocusing (LASER) provides accurate B1-insensitive spatial excitation even at high magnetic fields. Spiral encoding accelerates MRSI acquisition and thus enables 3D coverage without compromising spatial resolution. Real-time position- and shim/frequency-tracking using MR navigators corrects motion- and scanner instability-related artifacts. Each of these three advanced MRI techniques provides superior MRSI data compared to commonly used methods. In this work, we integrated these three promising approaches in a single pulse sequence. Real-time correction of motion, shim, and frequency drifts using volumetric dual-contrast echo planar imaging-based navigators was implemented in an MRSI sequence that uses low-power gradient-modulated short-echo-time LASER localization and time-efficient spiral readouts, in order to provide fast and robust 3D-MRSI in the human brain at 3T. The proposed sequence was demonstrated to be insensitive to motion- and scanner drift-related degradations of MRSI data in both phantoms and volunteers. Motion and scanner drift artifacts were eliminated and excellent spectral quality was recovered in the presence of strong movement. Our results confirm the expected benefits of combining a spiral 3D-LASER-MRSI sequence with real-time correction. The new sequence provides accurate, fast, and robust 3D metabolic imaging of the human brain at 3T. This will further facilitate the use of 3D-MRSI for neuroscience and clinical applications.

  15. SU-C-209-02: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Clinical Patient Images

    SciTech Connect

    Dhou, S; Cai, W; Hurwitz, M; Williams, C; Lewis, J

    2016-06-15

    Purpose: We develop a method to generate time-varying volumetric images (3D fluoroscopic images) using patient-specific motion models derived from four-dimensional cone-beam CT (4DCBCT). Methods: Motion models are derived by selecting one 4DCBCT phase as a reference image, and registering the remaining images to it. Principal component analysis (PCA) is performed on the resultant displacement vector fields (DVFs) to create a reduced set of PCA eigenvectors that capture the majority of respiratory motion. 3D fluoroscopic images are generated by optimizing the weights of the PCA eigenvectors iteratively through comparison of measured cone-beam projections and simulated projections generated from the motion model. This method was applied to images from five lung-cancer patients. The spatial accuracy of this method is evaluated by comparing landmark positions in the 3D fluoroscopic images to manually defined ground truth positions in the patient cone-beam projections. Results: 4DCBCT motion models were shown to accurately generate 3D fluoroscopic images when the patient cone-beam projections contained clearly visible structures moving with respiration (e.g., the diaphragm). When no moving anatomical structure was clearly visible in the projections, the 3D fluoroscopic images generated did not capture breathing deformations, and reverted to the reference image. For the subset of 3D fluoroscopic images generated from projections with visibly moving anatomy, the average tumor localization error and the 95th percentile were 1.6 mm and 3.1 mm, respectively. Conclusion: This study showed that 4DCBCT-based 3D fluoroscopic images can accurately capture respiratory deformations in a patient dataset, so long as the cone-beam projections used contain visible structures that move with respiration. For clinical implementation of 3D fluoroscopic imaging for treatment verification, an imaging field of view (FOV) that contains visible structures moving with respiration should be used.
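
    The motion-model construction described above (phase-to-reference registration, PCA on the displacement vector fields, then low-dimensional weights that regenerate a full DVF) can be sketched as follows. This is a minimal illustration assuming the registrations have already produced DVF arrays; the function names and array layout are illustrative, not the authors' implementation, and the projection-matching optimization of the weights is not reproduced here.

      import numpy as np

      def build_pca_motion_model(dvfs, n_components=3):
          """dvfs: list of displacement vector fields, each shaped (nx, ny, nz, 3),
          from registering every 4DCBCT phase to the reference phase."""
          X = np.stack([d.ravel() for d in dvfs])          # (n_phases, n_voxels * 3)
          mean = X.mean(axis=0)
          _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
          basis = Vt[:n_components]                        # principal motion eigenvectors
          return mean, basis

      def dvf_from_weights(weights, mean, basis, shape):
          """Reconstruct a full displacement field from a small weight vector."""
          return (mean + weights @ basis).reshape(shape)

    A 3D fluoroscopic frame is then obtained by warping the reference image with dvf_from_weights(w, ...) and adjusting w until the simulated projection of the warped image matches the measured cone-beam projection.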

  16. Computer numerical control (CNC) lithography: light-motion synchronized UV-LED lithography for 3D microfabrication

    NASA Astrophysics Data System (ADS)

    Kim, Jungkwun; Yoon, Yong-Kyu; Allen, Mark G.

    2016-03-01

    This paper presents a computer-numerical-controlled ultraviolet light-emitting diode (CNC UV-LED) lithography scheme for three-dimensional (3D) microfabrication. The CNC lithography scheme utilizes sequential multi-angled UV light exposures along with a synchronized switchable UV light source to create arbitrary 3D light traces, which are transferred into the photosensitive resist. The system comprises a switchable, movable UV-LED array as a light source, a motorized tilt-rotational sample holder, and a computer-control unit. System operation is such that the tilt-rotational sample holder moves in a pre-programmed routine, and the UV-LED is illuminated only at desired positions of the sample holder during the desired time period, enabling the formation of complex 3D microstructures. This facilitates easy fabrication of complex 3D structures, which otherwise would have required multiple manual exposure steps as in the previous multidirectional 3D UV lithography approach. Since it is batch processed, processing time is far less than that of the 3D printing approach at the expense of some reduction in the degree of achievable 3D structure complexity. In order to produce uniform light intensity from the arrayed LED light source, the UV-LED array stage has been kept rotating during exposure. UV-LED 3D fabrication capability was demonstrated through a plurality of complex structures such as V-shaped micropillars, micropanels, a micro-‘hi’ structure, a micro-‘cat’s claw,’ a micro-‘horn,’ a micro-‘calla lily,’ a micro-‘cowboy’s hat,’ and a micro-‘table napkin’ array.

  17. A hybrid approach for fusing 4D-MRI temporal information with 3D-CT for the study of lung and lung tumor motion

    SciTech Connect

    Yang, Y. X.; Van Reeth, E.; Poh, C. L.; Teo, S.-K.; Tan, C. H.; Tham, I. W. K.

    2015-08-15

    Purpose: Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating synthetic 4D-CT dataset for lung cancer patients by combining both continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT using the authors’ proposed approach. Methods: A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from 4D-MRI dataset, creating a synthetic 4D-CT dataset. Results: The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors’ proposed approach. Their hybrid approach achieved a 40% error reduction (based on landmarks assessment) over using only DIR techniques. Conclusions: The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show movement of lung and lung tumor over multiple breathing cycles.

  18. A hybrid approach for fusing 4D-MRI temporal information with 3D-CT for the study of lung and lung tumor motion.

    PubMed

    Yang, Y X; Teo, S-K; Van Reeth, E; Tan, C H; Tham, I W K; Poh, C L

    2015-08-01

    Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating synthetic 4D-CT dataset for lung cancer patients by combining both continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT using the authors' proposed approach. A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from 4D-MRI dataset, creating a synthetic 4D-CT dataset. The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors' proposed approach. Their hybrid approach achieved a 40% error reduction (based on landmarks assessment) over using only DIR techniques. The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show movement of lung and lung tumor over multiple breathing cycles.

  19. Evaluation of the combined effects of target size, respiratory motion and background activity on 3D and 4D PET/CT images

    NASA Astrophysics Data System (ADS)

    Park, Sang-June; Ionascu, Dan; Killoran, Joseph; Mamede, Marcelo; Gerbaudo, Victor H.; Chin, Lee; Berbeco, Ross

    2008-07-01

    Gated (4D) PET/CT has the potential to greatly improve the accuracy of radiotherapy at treatment sites where internal organ motion is significant. However, the best methodology for applying 4D-PET/CT to target definition is not currently well established. With the goal of better understanding how to best apply 4D information to radiotherapy, initial studies were performed to investigate the effect of target size, respiratory motion and target-to-background activity concentration ratio (TBR) on 3D (ungated) and 4D PET images. Using a PET/CT scanner with 4D or gating capability, a full 3D-PET scan corrected with a 3D attenuation map from 3D-CT scan and a respiratory gated (4D) PET scan corrected with corresponding attenuation maps from 4D-CT were performed by imaging spherical targets (0.5-26.5 mL) filled with 18F-FDG in a dynamic thorax phantom and NEMA IEC body phantom at different TBRs (infinite, 8 and 4). To simulate respiratory motion, the phantoms were driven sinusoidally in the superior-inferior direction with amplitudes of 0, 1 and 2 cm and a period of 4.5 s. Recovery coefficients were determined on PET images. In addition, gating methods using different numbers of gating bins (1-20 bins) were evaluated with image noise and temporal resolution. For evaluation, volume recovery coefficient, signal-to-noise ratio and contrast-to-noise ratio were calculated as a function of the number of gating bins. Moreover, the optimum thresholds which give accurate moving target volumes were obtained for 3D and 4D images. The partial volume effect and signal loss in the 3D-PET images due to the limited PET resolution and the respiratory motion, respectively were measured. The results show that signal loss depends on both the amplitude and pattern of respiratory motion. However, the 4D-PET successfully recovers most of the loss induced by the respiratory motion. The 5-bin gating method gives the best temporal resolution with acceptable image noise. The results based on the 4D
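
    The figures of merit used in this evaluation (recovery coefficient, signal-to-noise ratio and contrast-to-noise ratio) follow standard phantom definitions. A minimal sketch, assuming a reconstructed PET volume as a numpy array, boolean masks for the sphere and background regions, and the known true activity concentration; the names are illustrative only:

      import numpy as np

      def pet_metrics(image, target_mask, background_mask, true_concentration):
          """Standard hot-sphere figures of merit on a reconstructed PET volume."""
          target = image[target_mask]
          bkg = image[background_mask]
          recovery_coeff = target.mean() / true_concentration   # measured / true activity
          snr = target.mean() / bkg.std()                       # signal-to-noise ratio
          cnr = (target.mean() - bkg.mean()) / bkg.std()        # contrast-to-noise ratio
          return recovery_coeff, snr, cnr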

  20. 2-D-3-D frequency registration using a low-dose radiographic system for knee motion estimation.

    PubMed

    Jerbi, Taha; Burdin, Valerie; Leboucher, Julien; Stindel, Eric; Roux, Christian

    2013-03-01

    In this paper, a new method is presented to study the feasibility of the pose and position estimation of bone structures using a low-dose radiographic system, the EOS system (designed by the EOS-Imaging Company). This method is based on a 2-D-3-D registration of EOS bi-planar X-ray images with an EOS 3-D reconstruction. This technique is relevant to such an application thanks to the ability of EOS to simultaneously acquire frontal and sagittal radiographs, and also to produce a 3-D surface reconstruction with its attached software. In this paper, the pose and position of a bone in the radiographs are estimated through the link between the 3-D and 2-D data. This relationship is established in the frequency domain using the Fourier central slice theorem. To estimate the pose and position of the bone, we define a distance between the 3-D data and the radiographs, and use an iterative optimization approach to converge toward the best estimation. In this paper, we give the mathematical details of the method. We also show the experimental protocol and the results, which validate our approach.
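
    The frequency-domain link referred to here is the Fourier central slice theorem: the 2-D Fourier transform of a parallel projection of a volume equals the central plane of the volume's 3-D Fourier transform taken perpendicular to the projection direction. A minimal numerical check of that relation for an axis-aligned projection (the EOS acquisition geometry and the iterative pose optimization are not reproduced):

      import numpy as np

      vol = np.random.rand(64, 64, 64)          # stand-in for a 3-D bone reconstruction
      proj = vol.sum(axis=2)                    # parallel projection along z (a "radiograph")

      F_proj = np.fft.fftn(proj)                # 2-D spectrum of the projection
      F_slice = np.fft.fftn(vol)[:, :, 0]       # central slice (k_z = 0) of the 3-D spectrum

      print(np.allclose(F_proj, F_slice))       # True: the two spectra coincide

    The pose and position are then estimated by minimizing a distance between such central slices of the 3-D data, rotated and translated by the trial pose, and the spectra of the two EOS radiographs.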

  1. Dynamic particle accumulation structure (PAS) in half-zone liquid bridge: Reconstruction of particle motion by 3-D PTV

    NASA Astrophysics Data System (ADS)

    Ueno, I.; Abe, Y.; Noguchi, K.; Kawamura, H.

    Three-dimensional (3-D) velocity field reconstruction of oscillatory thermocapillary convection in a half-zone liquid bridge with a radius of O(1 mm) was carried out by applying 3-D particle tracking velocimetry (PTV). Simultaneous observation of the particles suspended in the bridge by two CCD cameras was achieved by placing a small cubic beam splitter above a transparent top rod. The 3-D trajectories and velocity fields of the particles in several types of oscillatory-flow regimes were reconstructed successfully over a sufficiently long period without loss of particle tracking. With this technique the present authors conducted a series of experiments focusing on the collapse and re-formation process of the PAS by mechanically disturbing fully developed PAS.

  2. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm

    PubMed Central

    Molaei, Mehdi; Sheng, Jian

    2014-01-01

    Better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, the method is demonstrated to allow DHM to resolve the 3-D locations of E. coli bacteria in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177

  3. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model

    PubMed Central

    Song, Jin-Myoung; Cho, Jin-Hyoung

    2016-01-01

    Purpose The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scanning. Materials and Methods Fifteen dry skulls were incorporated into a motion controller which simulated four types of head motion during CBCT scan: 2 horizontal rotations (to the right/to the left) and 2 vertical rotations (upward/downward). Each movement was triggered to occur at the start of the scan for 1 second by remote control. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Results Rendered surface models with head motion were similar to the control model in appearance; however, the landmark identification errors showed larger values in models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). A statistically significant difference in the errors between the right- and left-side landmarks was present for the left-side rotation, which was opposite in direction to the scanner rotation (P < .05). Conclusions Patient movement during CBCT scanning might cause landmark identification errors on the 3D surface model in relation to the direction of the scanner rotation. Clinicians should take this into consideration and prevent patient movement during CBCT scanning, particularly horizontal movement. PMID:27065238

  4. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue-motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and ultrasonic probes of 5-7 MHz with a compact tilt-sensor to measure the probe orientation. The real-time 3D visualization was realized by developing an extended version of the PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissue has been continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the circle of Willis and the cerebellar arteries, in which pediatricians have great interest in the blood flow because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue-motion is due to local blood flow, it can be concluded that the system developed in this work is very useful in assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  5. Combining 3D tracking and surgical instrumentation to determine the stiffness of spinal motion segments: a validation study.

    PubMed

    Reutlinger, C; Gédet, P; Büchler, P; Kowal, J; Rudolph, T; Burger, J; Scheffler, K; Hasler, C

    2011-04-01

    The spine is a complex structure that provides motion in three directions: flexion and extension, lateral bending and axial rotation. So far, the investigation of the mechanical and kinematic behavior of the basic unit of the spine, a motion segment, has predominantly been the domain of in vitro experiments on spinal loading simulators. Most existing approaches to measure spinal stiffness intraoperatively in an in vivo environment use a distractor. However, these concepts usually assume planar loading and motion. The objective of our study was to develop and validate an apparatus that allows intraoperative in vivo measurements to determine both the applied force and the resulting motion in three-dimensional space. The proposed setup combines force measurement with an instrumented distractor and motion tracking with an optoelectronic system. As the orientation of the applied force and the three-dimensional motion are known, not only force-displacement but also moment-angle relations could be determined. The validation was performed using three cadaveric lumbar ovine spines. The lateral bending stiffness of two motion segments per specimen was determined with the proposed concept and compared with the stiffness acquired on a spinal loading simulator, which was considered the gold standard. The mean values of the stiffness computed with the proposed concept were within a range of ±15% of the data obtained with the spinal loading simulator under applied loads of less than 5 Nm.

  6. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs

    PubMed Central

    Delparte, D; Gates, RD; Takabayashi, M

    2015-01-01

    The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190
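
    Of the 3D complexity metrics mentioned (topography, rugosity), surface rugosity is commonly derived from the reconstructed digital elevation model as the ratio of triangulated 3D surface area to planar area. A minimal sketch assuming the DEM has already been exported from the geospatial software as a regular grid; the function name and grid handling are illustrative:

      import numpy as np

      def rugosity(dem, cell):
          """3-D surface area / planar area for a regular-grid DEM with spacing `cell`."""
          z = np.asarray(dem, dtype=float)
          z00, z10 = z[:-1, :-1], z[1:, :-1]        # corner heights of every grid cell
          z01, z11 = z[:-1, 1:], z[1:, 1:]

          def tri_area(a, b, c):                    # area of triangles given (..., 3) vertices
              return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=-1)

          zero = np.zeros_like(z00)
          p00 = np.stack([zero, zero, z00], axis=-1)
          p10 = np.stack([zero + cell, zero, z10], axis=-1)
          p01 = np.stack([zero, zero + cell, z01], axis=-1)
          p11 = np.stack([zero + cell, zero + cell, z11], axis=-1)
          # each cell is split into two triangles; relief makes the surface area exceed
          # the planar area, so the ratio is 1.0 for a perfectly flat surface
          surface = tri_area(p00, p10, p11).sum() + tri_area(p00, p11, p01).sum()
          return surface / (cell * cell * z00.size)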

  7. Dynamic simulation and modeling of the motion modes produced during the 3D controlled manipulation of biological micro/nanoparticles based on the AFM.

    PubMed

    Saraee, Mahdieh B; Korayem, Moharam H

    2015-08-07

    Determining the motion modes and the exact position of a particle displaced during the manipulation process is of special importance. This issue becomes even more important when the studied particles are biological micro/nanoparticles and the goals of manipulation are the transfer of these particles within body cells, repair of cancerous cells and the delivery of medication to damaged cells. Because of the delicate nature of biological nanoparticles and their higher vulnerability, knowing the manipulation force required for the intended motion mode makes it possible to prevent the sample from interlocking with or sticking to the substrate because the applied force is too weak, or to avoid damaging the sample through the exertion of excessive force. In this paper, the dynamic behaviors and the motion modes of biological micro/nanoparticles such as DNA, yeast, platelets and bacteria under 3D manipulation have been investigated. Since the above nanoparticles generally have a cylindrical shape, cylindrical contact models have been employed in an attempt to more precisely model the forces exerted on the nanoparticle during the manipulation process. This investigation also presents a comprehensive modeling and simulation of all the possible motion modes in 3D manipulation, taking into account the eccentricity of the applied load on the biological nanoparticle. The obtained results indicate that, unlike at the macroscopic scale, sliding of the nanoparticle on the substrate at the nanoscale takes place sooner than the other motion modes, whereas spinning about the vertical and transverse axes and rolling of the nanoparticle occur later. The simulation results also indicate that the force required for the onset of nanoparticle movement and the resulting motion mode depend on the size and aspect ratio of the nanoparticle.

  8. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite-difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA. It is a field dataset, recording the Japan 2011 tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT

  9. Spatial Disorientation in Gondola Centrifuges Predicted by the Form of Motion as a Whole in 3-D

    PubMed Central

    Holly, Jan E.; Harmon, Katharine J.

    2009-01-01

    INTRODUCTION During a coordinated turn, subjects can misperceive tilts. Subjects accelerating in tilting-gondola centrifuges without external visual reference underestimate the roll angle, and underestimate more when backward-facing than when forward-facing. In addition, during centrifuge deceleration, the perception of pitch can include tumble while paradoxically maintaining a fixed perceived pitch angle. The goal of the present research was to test two competing hypotheses: (1) that components of motion are perceived relatively independently and then combined to form a three-dimensional perception, and (2) that perception is governed by familiarity of motions as a whole in three dimensions, with components depending more strongly on the overall shape of the motion. METHODS Published experimental data were used from existing tilting-gondola centrifuge studies. The two hypotheses were implemented formally in computer models, and centrifuge acceleration and deceleration were simulated. RESULTS The second, whole-motion oriented, hypothesis better predicted subjects' perceptions, including the forward-backward asymmetry and the paradoxical tumble upon deceleration. Important was the predominant stimulus at the beginning of the motion as well as the familiarity of centripetal acceleration. CONCLUSION Three-dimensional perception is better predicted by taking into account familiarity with the form of three-dimensional motion. PMID:19198199

  10. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    NASA Astrophysics Data System (ADS)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H.; Meeks, Sanford L.; Kupelian, Patrick A.

    2010-09-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated with the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation dose deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
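
    The per-beam dose accumulation step described above (summing the dose received by the target at each discrete motion state during beam-on time) reduces to a weighted sum over motion states once each state's dose grid has been mapped back to a common reference anatomy. A minimal sketch; the warp callables, state dose grids and time weights are placeholders standing in for the GPU deformation model and the spirometry-driven states:

      import numpy as np

      def accumulate_dose(dose_per_state, warps_to_reference, time_weights):
          """Sum dose over discrete tumor-motion states on a common reference grid.

          dose_per_state     : list of 3-D dose arrays, one per motion state
          warps_to_reference : list of callables mapping a state's dose grid onto the
                               reference anatomy (placeholder for the deformation model)
          time_weights       : fraction of beam-on time spent in each state (sums to 1)
          """
          total = np.zeros_like(dose_per_state[0])
          for dose, warp, w in zip(dose_per_state, warps_to_reference, time_weights):
              total += w * warp(dose)
          return total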

  11. 3D heart motion from single-plane angiography of the coronary vasculature: a model-based approach

    NASA Astrophysics Data System (ADS)

    Sherknies, Denis; Meunier, Jean; Tardif, Jean-Claude

    2004-05-01

    In order to complete a thorough examination of a patient's heart muscle, physicians practice two common invasive procedures: ventriculography, which allows the determination of the ejection fraction, and coronarography, which gives, among other things, information on arterial stenosis. We propose a method that allows the determination of a contraction index similar to the ejection fraction, using only single-plane coronarography. Our method first reconstructs selected points on the angiogram in 3D, using a 3D model devised from data published by Dodge et al. ['88, '92]. We then follow the point displacements through a complete heart contraction cycle. The objective function, minimizing the RMS distances between the angiogram and the model, relies on affine transformations, i.e. translation, rotation and isotropic scaling. We validate our method on simulated projections using cases from the Dodge data. In order to avoid any bias, a leave-one-out strategy was used, which excludes the reference case when constructing the 3D coronary heart model. The simulated projections are created by transforming the reference case, with scaling, translation and rotation transformations, and by adding random 3D noise for each frame in the contraction cycle. Comparing the true scaling parameters to the reconstructed sequence, our method is quite robust (R2=96.6%, P<1%), even when the noise level is as high as 1 cm. Using 10 clinical cases we then proceeded to reconstruct the contraction sequence for a complete cardiac cycle starting at end-diastole. A simple mathematical model of heart contraction permitted us to link the measured ejection fraction of the different cases to the maximum heart contraction amplitude (R2=57%, P<1%) determined by our method.

  12. Motion-sensitive 3-D optical coherence microscope operating at 1300 nm for the visualization of early frog development

    NASA Astrophysics Data System (ADS)

    Hoeling, Barbara M.; Feldman, Stephanie S.; Strenge, Daniel T.; Bernard, Aaron; Hogan, Emily R.; Petersen, Daniel C.; Fraser, Scott E.; Kee, Yun; Tyszka, J. Michael; Haskell, Richard C.

    2007-02-01

    We present 3-dimensional volume-rendered in vivo images of developing embryos of the African clawed frog Xenopus laevis taken with our new en-face-scanning, focus-tracking OCM system at 1300 nm wavelength. Compared to our older instrument which operates at 850 nm, we measure a decrease in the attenuation coefficient by 33%, leading to a substantial improvement in depth penetration. Both instruments have motion-sensitivity capability. By evaluating the fast Fourier transform of the fringe signal, we can produce simultaneously images displaying the fringe amplitude of the backscattered light and images showing the random Brownian motion of the scatterers. We present time-lapse movies of frog gastrulation, an early event during vertebrate embryonic development in which cell movements result in the formation of three distinct layers that later give rise to the major organ systems. We show that the motion-sensitive images reveal features of the different tissue types that are not discernible in the fringe amplitude images. In particular, we observe strong diffusive motion in the vegetal (bottom) part of the frog embryo which we attribute to the Brownian motion of the yolk platelets in the endoderm.

  13. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399
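
    A digitally rendered radiograph is, at its core, a line integral of attenuation through the CT volume along each ray; the splatting and raycasting variants discussed above are GPU-friendly ways of evaluating that integral quickly. A minimal CPU sketch for a parallel-beam geometry (the clinical case uses perspective projection, and the HU-to-attenuation constant here is a rough illustrative value):

      import numpy as np
      from scipy.ndimage import rotate

      def simple_drr(ct_hu, angle_deg, mu_per_hu=0.0002):
          """Parallel-beam DRR: rotate the CT volume and integrate attenuation along one axis."""
          rotated = rotate(ct_hu, angle_deg, axes=(0, 2), reshape=False, order=1)
          mu = mu_per_hu * (rotated + 1000.0)     # crude HU -> linear attenuation mapping
          line_integral = mu.sum(axis=2)          # sum voxels along the ray direction
          return np.exp(-line_integral)           # simulated transmitted intensity image

    Registration then searches over the rigid pose (here reduced to a single angle) for the DRR that best matches the planar reference x-ray.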

  14. Impact of assimilation of INSAT-3D retrieved atmospheric motion vectors on short-range forecast of summer monsoon 2014 over the South Asian region

    NASA Astrophysics Data System (ADS)

    Kumar, Prashant; Deb, Sanjib K.; Kishtawal, C. M.; Pal, P. K.

    2017-05-01

    The Weather Research and Forecasting (WRF) model and its three-dimensional variational data assimilation system are used in this study to assimilate atmospheric motion vectors (AMVs) derived from INSAT-3D, a recently launched Indian geostationary meteorological satellite, over the South Asian region during a peak Indian summer monsoon month (July 2014). A total of four experiments were performed daily, with and without assimilation of the INSAT-3D-derived AMVs and the other AMVs available through the Global Telecommunication System (GTS), for the entire month of July 2014. Before assimilating these newly derived INSAT-3D AMVs in the numerical model, a preliminary evaluation of these AMVs was performed against National Centers for Environmental Prediction (NCEP) final model analyses. The preliminary validation results show that the root-mean-square vector difference (RMSVD) for INSAT-3D AMVs is ˜3.95, 6.66, and 5.65 ms-1 at low, mid, and high levels, respectively, and slightly larger RMSVDs are noticed for GTS AMVs (˜4.0, 8.01, and 6.43 ms-1 at low, mid, and high levels, respectively). The assimilation of AMVs improved the WRF model wind speed, temperature, and moisture analyses as well as subsequent model forecasts over the Indian Ocean, Arabian Sea, Australia, and South Africa. Slightly larger improvements are noticed in the experiment where only the INSAT-3D AMVs are assimilated compared to the experiment where only GTS AMVs are assimilated. The results also show improvement in rainfall predictions over the Indian region after AMV assimilation. Overall, the assimilation of INSAT-3D AMVs improved the WRF model short-range predictions over the South Asian region as compared to the control experiments.
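
    The root-mean-square vector difference quoted in the validation is the RMS magnitude of the vector wind difference between the satellite-derived AMVs and the collocated reference analyses. A minimal sketch of that statistic, assuming collocated wind components as arrays:

      import numpy as np

      def rmsvd(u_amv, v_amv, u_ref, v_ref):
          """Root-mean-square vector difference between AMV and reference winds (m/s)."""
          du = np.asarray(u_amv) - np.asarray(u_ref)
          dv = np.asarray(v_amv) - np.asarray(v_ref)
          return np.sqrt(np.mean(du**2 + dv**2))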

  15. Automated 3D architecture reconstruction from photogrammetric structure-and-motion: A case study of the One Pillar pagoda, Hanoi, Vietnam

    NASA Astrophysics Data System (ADS)

    To, T.; Nguyen, D.; Tran, G.

    2015-04-01

    Vietnam's heritage sites have declined because of poor conservation conditions. Sustainable development requires firm control, spatial planning and reasonable investment. Moreover, in the field of Cultural Heritage, automated photogrammetric systems based on Structure-from-Motion (SfM) techniques are widely used. With the potential of high resolution, low cost, a large field of view, ease of use, rapidity and completeness, the derivation of 3D metric information from Structure-and-Motion images is receiving great attention. In addition, heritage objects in the form of 3D physical models are recorded not only for documentation purposes, but also for historical interpretation, restoration, and cultural and educational purposes. This study presents the archaeological documentation of the "One Pillar" pagoda located in Hanoi, the capital of Vietnam. The data were acquired with a Canon EOS 550D digital camera (CMOS APS-C sensor, 22.3 x 14.9 mm). Camera calibration and orientation were carried out with the VisualSFM, CMPMVS (Multi-View Reconstruction) and SURE (Photogrammetric Surface Reconstruction from Imagery) software packages. The final result is a scaled 3D model of the One Pillar Pagoda, displayed in different views in the MeshLab software.

  16. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal.

    PubMed

    Hurwitz, Martina; Williams, Christopher L; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G; Mak, Raymond H; Lewis, John H

    2015-01-21

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.
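
    The core of the motion model described above is a patient-specific correlation, fitted from the pre-treatment 4DCT phases, between the external surrogate trajectory and the internal respiratory motion; at treatment time only the surrogate is measured and the internal motion is predicted from it. The paper's model maps the surrogate to full anatomical deformations, so the tumor-centroid version below is only a simplified sketch with made-up illustrative numbers:

      import numpy as np

      # training data from pre-treatment 4DCT: one sample per respiratory phase
      surrogate_4dct = np.array([0.0, 2.1, 4.3, 6.0, 4.4, 2.2, 0.3, -0.1])  # marker height (mm)
      tumor_si_4dct  = np.array([0.0, 3.0, 6.5, 9.1, 6.6, 3.1, 0.4, -0.2])  # tumor SI position (mm)

      # least-squares fit of tumor position as a linear function of the surrogate signal
      slope, intercept = np.polyfit(surrogate_4dct, tumor_si_4dct, deg=1)

      def predict_tumor_si(surrogate_now):
          """Estimate the internal tumor SI position from the live surrogate reading."""
          return slope * surrogate_now + intercept

      print(predict_tumor_si(3.0))   # predicted tumor SI position (mm) for a new reading

    Driving a full deformation model with the surrogate trajectory in the same way is what yields the volumetric images evaluated against the digital and physical phantoms.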

  17. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    NASA Astrophysics Data System (ADS)

    Hurwitz, Martina; Williams, Christopher L.; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G.; Mak, Raymond H.; Lewis, John H.

    2015-01-01

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.

  18. The Relationship of 3D Human Skull Motion to Brain Tissue Deformation in Magnetic Resonance Elastography Studies.

    PubMed

    Badachhape, Andrew A; Okamoto, Ruth J; Durham, Ramona S; Efron, Brent D; Nadell, Sam J; Johnson, Curtis L; Bayly, Philip V

    2017-03-07

    In traumatic brain injury (TBI), membranes such as the dura mater, arachnoid mater, and pia mater play a vital role in transmitting motion from the skull to brain tissue. Magnetic Resonance Elastography (MRE) is an imaging technique developed for non-invasive estimation of soft tissue material parameters. In MRE, dynamic deformation of brain tissue is induced by skull vibrations; however skull motion and its mode of transmission to the brain remain largely uncharacterized. In this study, displacements of points in the skull, reconstructed using data from an array of MRI-safe accelerometers, were compared to displacements of neighboring material points in brain tissue, estimated from MRE measurements. Comparison of the relative amplitudes, directions, and temporal phases of harmonic motion in the skulls and brains of six human subjects shows that the skull-brain interface significantly attenuates and delays transmission of motion from skull to brain. In contrast, in a cylindrical gelatin "phantom", displacements of the rigid case (reconstructed from accelerometer data) were transmitted to the gelatin inside (estimated from MRE data) with little attenuation or phase lag. This quantitative characterization of the skull-brain interface will be valuable in the parameterization and validation of computer models of TBI.

  19. Simultaneous 3D imaging of sound-induced motions of the tympanic membrane and middle ear ossicles

    PubMed Central

    Chang, Ernest W.; Cheng, Jeffrey T.; Röösli, Christof; Kobler, James B.; Rosowski, John J.; Yun, Seok Hyun

    2013-01-01

    Efficient transfer of sound by the middle ear ossicles is essential for hearing. Various pathologies can impede the transmission of sound and thereby cause conductive hearing loss. Differential diagnosis of ossicular disorders can be challenging since the ossicles are normally hidden behind the tympanic membrane (TM). Here we describe the use of a technique termed optical coherence tomography (OCT) vibrography to view the sound-induced motion of the TM and ossicles simultaneously. With this method, we were able to capture three-dimensional motion of the intact TM and ossicles of the chinchilla ear with nanometer-scale sensitivity at sound frequencies from 0.5 to 5 kHz. The vibration patterns of the TM were complex and highly frequency dependent with mean amplitudes of 70–120 nm at 100 dB sound pressure level. The TM motion was only marginally sensitive to stapes fixation and incus-stapes joint interruption; however, when additional information derived from the simultaneous measurement of ossicular motion was added, it was possible to clearly distinguish these different simulated pathologies. The technique may be applicable to clinical diagnosis in Otology and to basic research in audition and acoustics. PMID:23811181

  20. Development of the dynamic motion simulator of 3D micro-gravity with a combined passive/active suspension system

    NASA Technical Reports Server (NTRS)

    Yoshida, Kazuya; Hirose, Shigeo; Ogawa, Tadashi

    1994-01-01

    The establishment of in-orbit operations such as 'Rendez-Vous/Docking' and 'Manipulator Berthing', with the assistance of robotics or autonomous control technology, is essential for near-future space programs. In order to study the control methods, develop the flight models, and verify how the system works, we need a tool or testbed which enables us to mechanically simulate the micro-gravity environment. There have been many attempts to develop micro-gravity testbeds, but once the simulation goes into docking and berthing operations that involve mechanical contacts among multiple bodies, the requirements become critical. A group at the Tokyo Institute of Technology has proposed a method that can simulate 3D micro-gravity, producing a smooth response to impact phenomena with relatively simple apparatus. Recently the group successfully carried out basic experiments using a prototype hardware model of the testbed. This paper presents our idea of the 3D micro-gravity simulator and reports the results of our initial experiments.

  1. Hybrid 3-D rocket trajectory program. Part 1: Formulation and analysis. Part 2: Computer programming and user's instruction. [computerized simulation using three dimensional motion analysis

    NASA Technical Reports Server (NTRS)

    Huang, L. C. P.; Cook, R. A.

    1973-01-01

    Models utilizing various sub-sets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time consuming for the computer to evaluate. A computer program is developed that uses three separate subsections to predict trajectories. A launch rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section which computes motions in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard, three linear degrees of freedom model.

  2. Near Real-time Full-wave Centroid Moment Tensor (CMT) Inversion for Ground-motion forecast in 3D Earth Structure of Southern California

    NASA Astrophysics Data System (ADS)

    Chen, P.; Lee, E.; Jordan, T. H.; Maechling, P. J.

    2011-12-01

    Accurate and rapid CMT inversion is important for seismic hazard analysis. We have developed an algorithm for very rapid full-wave CMT inversion in a 3D Earth structure model and applied it to earthquakes recorded by the Southern California Seismic Network (SCSN). The procedure relies on the use of receiver-side Green tensors (RGTs), which comprise the spatial-temporal displacements produced by the three orthogonal unit impulsive point forces acting at the receiver. We have constructed an RGT database for 219 broadband stations in Southern California using an updated version of the 3D SCEC Community Velocity Model (CVM) version 4.0 and a staggered-grid finite-difference code. Finite-difference synthetic seismograms for any earthquake in our modeling volume can be calculated simply by extracting a small, source-centered volume from the RGT database and applying the reciprocity principle. We have developed an automated algorithm that combines a grid search for a suitable epicenter and focal mechanism with a gradient-descent method that further refines the grid-search results. In this algorithm, the CMT solutions are obtained in near real-time by using waveforms in a 3D Earth structure model. Comparison with the CMT solutions provided by the SCSN shows that our solutions generally provide a better fit to the observed waveforms. Our algorithm may provide more robust CMT solutions for earthquakes in Southern California. In addition, rapid and accurate full-wave CMT inversion has the potential to be extended to accurate near real-time ground-motion prediction based on a 3D structure model for earthquake early warning purposes. When combined with real-time telemetered waveform recordings, our algorithm can provide (near) real-time ground-motion forecasts.
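
    The grid-search stage described above (scan candidate source locations and focal mechanisms, build synthetics from the precomputed receiver-side Green tensors via reciprocity, and keep the trial with the smallest waveform misfit) can be sketched for a single trial location. The array layout for the elementary moment-tensor synthetics and the L2 misfit are illustrative assumptions, not the SCSN implementation:

      import numpy as np

      def grid_search_cmt(observed, greens, candidate_tensors):
          """observed          : (n_traces, n_samples) recorded waveforms
             greens            : (6, n_traces, n_samples) synthetics for the six elementary
                                 moment-tensor components at the trial source location
             candidate_tensors : (n_candidates, 6) trial moment tensors (mechanism grid)
             Returns the candidate with the lowest L2 waveform misfit."""
          best_m, best_misfit = None, np.inf
          for m in candidate_tensors:
              synthetic = np.tensordot(m, greens, axes=1)    # weighted sum -> (n_traces, n_samples)
              misfit = np.sum((observed - synthetic) ** 2)
              if misfit < best_misfit:
                  best_m, best_misfit = m, misfit
          return best_m, best_misfit

    A gradient-descent refinement over the continuous moment-tensor components then starts from the best grid point, as the abstract describes.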

  3. Double calibration: an accurate, reliable and easy-to-use method for 3D scapular motion analysis.

    PubMed

    Brochard, Sylvain; Lempereur, Mathieu; Rémy-Néris, Olivier

    2011-02-24

    The most recent non-invasive methods for the recording of scapular motion are based on an acromion marker (AM) set and a single calibration (SC) of the scapula in a resting position. However, this method fails to accurately measure scapular kinematics above 90° of arm elevation, due to soft tissue artifacts of the skin and muscles covering the acromion. The aim of this study was to evaluate the accuracy, and the inter-trial and inter-session repeatability, of a double calibration method (DC) in comparison with SC. The SC and DC data were measured with an optoelectronic system during arm flexion and abduction at different angles of elevation (0-180°). They were compared with palpation of the scapula using a scapula locator. DC data were not significantly different from palpation for 5/6 axes of rotation tested (Y, X, and Z in abduction and flexion), whereas SC showed significant differences for 5/6 axes. The root mean square errors ranged from 2.96° to 4.48° for DC and from 6° to 9.19° for SC. The inter-trial repeatability was good to excellent for SC and DC. The inter-session repeatability was moderate to excellent for SC and moderate to good for DC. Coupling AM and DC is an easy-to-use method, which yields accurate and reliable measurements of scapular kinematics for the complete range of arm motion. It can be applied to the measurement of shoulder motion in many fields (sports, orthopaedics, and rehabilitation), especially when large ranges of arm motion are required.

  4. 3D Dynamic Rupture with Slip Reactivation and Ground Motion Simulations of the 2011 Mw 9.0 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Dalguer, Luis; Galvez, Percy

    2013-04-01

    Seismological, geodetic and tsunami observations, including kinematic source inversion and back-projection models of the giant megathrust 2011 Mw 9.0 Tohoku earthquake, indicate that the earthquake featured complex rupture patterns, with multiple rupture fronts and rupture styles. The compilation of these studies reveals three fundamental features: 1) spectacularly large slip of over 50 m, 2) the existence of slip reactivation and 3) distinct regions of low- and high-frequency radiation. In this paper we investigate the possible mechanisms causing the slip reactivation. For this purpose we perform earthquake dynamic rupture and strong ground motion simulations. We investigate two mechanisms as potential sources of slip reactivation: 1) The additional push to the earthquake rupture (slip reactivation) comes from the rupture front back-propagating from the free surface after rupturing the trench of the fault, a phenomenon usually observed in dynamic rupture simulations of dipping faults (e.g. Dalguer et al. 2001). This mechanism produces smooth slip velocity reactivation with low frequency content. 2) Slip reactivation governed by the friction constitutive law (in the form given by Kanamori and Heaton, 2000), in which frictional strength drops initially to a certain value, but then at large slips there is a second drop in frictional strength. The slip velocity caused by this mechanism is a sharp pulse capable of radiating stronger ground motion. Our simulations show that the second mechanism produces a synthetic ground motion pattern along the Japanese coast of the Tohoku event consistent with the observed ground motion. In addition, the rupture pattern with slip reactivation is also consistent with kinematic source inversion models in which slip reactivation is observed. Therefore we propose that the slip reactivation observed in this earthquake is the result of a strong frictional strength drop, perhaps caused by fault melting, pressurization, lubrication or other thermal weakening mechanisms.

  5. Temporal-spatial reach parameters derived from inertial sensors: Comparison to 3D marker-based motion capture.

    PubMed

    Cahill-Rowley, Katelyn; Rose, Jessica

    2017-02-08

    Reaching is a well-practiced functional task crucial to daily living activities, and temporal-spatial measures of reaching reflect function for both adult and pediatric populations with upper-extremity motor impairments. Inertial sensors offer a mobile and inexpensive tool for clinical assessment of movement. This research outlines a method for measuring temporal-spatial reach parameters using inertial sensors, and validates these measures against traditional marker-based motion capture. 140 reaches from 10 adults, and 30 reaches from nine children aged 18-20 months, were recorded and analyzed using both inertial-sensor and motion-capture methods. Inertial sensors contained three-axis accelerometers, gyroscopes, and magnetometers. The gravitational offset of the accelerometer data was measured with the sensor at rest, and removed using the sensor orientation measured at rest and throughout the reach. Velocity was calculated by numeric integration of acceleration, using a null-velocity assumption at reach start. Sensor drift was neglected given the 1-2 s required for a reach. Temporal-spatial reach parameters were calculated independently for each data acquisition method. Reach path length and distance, peak velocity magnitude and timing, and acceleration at contact demonstrated consistent agreement between sensor- and motion-capture-based methods, for both adult and toddler reaches, as evaluated by intraclass correlation coefficients from 0.61 to 1.00. Taken together with the actual differences between the two methods' measures, the results indicate that these functional reach parameters may be reliably measured with inertial sensors.
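
    The sketch below illustrates the processing chain described in this abstract: rotate accelerations into the world frame using the fused sensor orientation, subtract gravity, integrate with a null-velocity assumption at reach start, and derive temporal-spatial parameters. The function and argument names are assumptions for illustration; drift correction and filtering are omitted.

```python
import numpy as np

def reach_parameters(acc_body, R_world_from_body, dt, g=9.81):
    """Estimate velocity-based reach parameters from IMU data.

    acc_body          : (N, 3) accelerometer samples in the sensor frame (m/s^2)
    R_world_from_body : (N, 3, 3) sensor orientations from the fusion filter
    dt                : sample interval (s)
    """
    acc_world = np.einsum('nij,nj->ni', R_world_from_body, acc_body)
    acc_lin = acc_world - np.array([0.0, 0.0, g])        # remove gravity
    vel = np.cumsum(acc_lin, axis=0) * dt                 # v(0) = 0 assumed
    pos = np.cumsum(vel, axis=0) * dt
    speed = np.linalg.norm(vel, axis=1)
    path_length = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
    reach_distance = np.linalg.norm(pos[-1] - pos[0])
    return dict(path_length=path_length,
                reach_distance=reach_distance,
                peak_velocity=float(speed.max()),
                time_of_peak_velocity=float(speed.argmax() * dt))
```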

  6. 3D Motions of Iron in Six-Coordinate {FeNO}7 Hemes by Nuclear Resonance Vibration Spectroscopy [3-D Motions of Iron in Six-coordinate {FeNO}7 Hemes by NRVS]

    DOE PAGES

    Peng, Qian; Pavlik, Jeffrey W.; Silvernail, Nathan J.; ...

    2016-03-21

    The vibrational spectrum of a six-coordinate nitrosyl iron porphyrinate, monoclinic [Fe(TpFPP)(1-MeIm)(NO)] (TpFPP = tetra-para-fluorophenylporphyrin; 1-MeIm = 1-methylimidazole), has been studied by oriented single-crystal nuclear resonance vibrational spectroscopy (NRVS). The crystal was oriented to give spectra perpendicular to the porphyrin plane and two in-plane spectra perpendicular or parallel to the projection of the FeNO plane. These enable assignment of the FeNO bending and stretching modes. The measurements reveal that the two in-plane spectra have substantial differences that result from the strongly bonded axial NO ligand. The direction of the in-plane iron motion is found to be largely parallel and perpendicular to the projection of the bent FeNO on the porphyrin plane. The out-of-plane Fe-N-O stretching and bending modes are strongly mixed with each other, as well as with porphyrin ligand modes. The stretch is mixed with v50, as was also observed for dioxygen complexes. The frequency of the assigned stretching mode of eight Fe-X-O (X = N, C, and O) complexes is correlated with the Fe-XO bond lengths. The nature of the highest frequency band at ≈560 cm-1 has also been examined in two additional new derivatives. Previously assigned as the Fe-NO stretch (by resonance Raman), it is better described as the bend, as the motion of the central nitrogen atom of the FeNO group is very large. There is significant mixing of this mode. In conclusion, the results emphasize the importance of mode mixing; the extent of mixing must be related to the peripheral phenyl substituents.

  7. 3D Motions of Iron in Six-Coordinate {FeNO}7 Hemes by Nuclear Resonance Vibration Spectroscopy [3-D Motions of Iron in Six-coordinate {FeNO}7 Hemes by NRVS]

    SciTech Connect

    Peng, Qian; Pavlik, Jeffrey W.; Silvernail, Nathan J.; Alp, E. Ercan; Hu, Michael Y.; Zhao, Jiyong; Sage, J. Timothy; Scheidt, W. Robert

    2016-03-21

    The vibrational spectrum of a six-coordinate nitrosyl iron porphyrinate, monoclinic [Fe(TpFPP)(1-MeIm)(NO)] (TpFPP = tetra-para-fluorophenylporphyrin; 1-MeIm = 1-methylimidazole), has been studied by oriented single-crystal nuclear resonance vibrational spectroscopy (NRVS). The crystal was oriented to give spectra perpendicular to the porphyrin plane and two in-plane spectra perpendicular or parallel to the projection of the FeNO plane. These enable assignment of the FeNO bending and stretching modes. The measurements reveal that the two in-plane spectra have substantial differences that result from the strongly bonded axial NO ligand. The direction of the in-plane iron motion is found to be largely parallel and perpendicular to the projection of the bent FeNO on the porphyrin plane. The out-of-plane Fe-N-O stretching and bending modes are strongly mixed with each other, as well as with porphyrin ligand modes. The stretch is mixed with v50, as was also observed for dioxygen complexes. The frequency of the assigned stretching mode of eight Fe-X-O (X = N, C, and O) complexes is correlated with the Fe-XO bond lengths. The nature of the highest frequency band at ≈560 cm-1 has also been examined in two additional new derivatives. Previously assigned as the Fe-NO stretch (by resonance Raman), it is better described as the bend, as the motion of the central nitrogen atom of the FeNO group is very large. There is significant mixing of this mode. In conclusion, the results emphasize the importance of mode mixing; the extent of mixing must be related to the peripheral phenyl substituents.

  8. Using SW4 for 3D Simulations of Earthquake Strong Ground Motions: Application to Near-Field Strong Motion, Building Response, Basin Edge Generated Waves and Earthquakes in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Rodgers, A. J.; Pitarka, A.; Petersson, N. A.; Sjogreen, B.; McCallen, D.; Miah, M.

    2016-12-01

    Simulation of earthquake ground motions is becoming more widely used due to improvements in numerical methods, development of ever more efficient computer programs (codes), and growth in and access to High-Performance Computing (HPC). We report on how SW4 can be used for accurate and efficient simulations of earthquake strong motions. SW4 is an anelastic finite difference code based on a fourth-order summation-by-parts displacement formulation. It is parallelized and can run on one or many processors. SW4 has many desirable features for seismic strong motion simulation: incorporation of surface topography; automatic mesh generation; mesh refinement; attenuation; and supergrid boundary conditions. It also has several ways to introduce 3D models and sources (including the Standard Rupture Format for extended sources). We are using SW4 to simulate strong ground motions for several applications. We are performing parametric studies of near-fault motions from moderate earthquakes to investigate basin-edge generated waves, and from large earthquakes to provide motions for engineers studying building response. We show that 3D propagation near basin edges can generate significant amplifications relative to 1D analysis. SW4 is also being used to model earthquakes in the San Francisco Bay Area. This includes modeling moderate (M3.5-5) events to evaluate the United States Geological Survey's 3D model of regional structure, as well as strong motions from the 2014 South Napa earthquake and possible large scenario events. Recently SW4 was built on a Commodity Technology Systems-1 (CTS-1) machine at LLNL, one of the new systems for capacity computing at the DOE National Labs. We find SW4 scales well and runs faster on these systems compared to the previous generation of LINUX clusters.

  9. 3D CT to 2D low dose single-plane fluoroscopy registration algorithm for in-vivo knee motion analysis.

    PubMed

    Akter, Masuma; Lambert, Andrew J; Pickering, Mark R; Scarvell, Jennie M; Smith, Paul N

    2014-01-01

    A limitation to accurate automatic tracking of knee motion is the noise and blurring present in low-dose X-ray fluoroscopy images. For more accurate tracking, this noise should be reduced while preserving anatomical structures such as bone. Noise in low-dose X-ray images is generated from different sources; however, quantum noise is by far the most dominant. In this paper we present an accurate multi-modal image registration algorithm which successfully registers 3D CT to 2D single-plane, low-dose, noisy and blurred fluoroscopy images captured from healthy knees. The proposed algorithm uses a new registration framework including a filtering method to reduce the noise and blurring effect in fluoroscopy images. Our experimental results show that the extra pre-filtering step included in the proposed approach maintains higher accuracy and repeatability for in vivo knee joint motion analysis.
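
    The abstract does not specify the filter used; the sketch below only illustrates the general idea of a pre-filtering step ahead of intensity-based 2D/3D registration, with an edge-preserving median filter and a plain normalized cross-correlation metric standing in for the authors' method.

```python
import numpy as np
from scipy.ndimage import median_filter

def prefilter_fluoro(frame, size=3):
    """Reduce quantum (Poisson-like) noise in a fluoroscopy frame while
    roughly preserving bone edges. The median filter is an illustrative
    stand-in, not the paper's filtering method."""
    return median_filter(frame.astype(np.float32), size=size)

def normalized_cross_correlation(drr, fluoro):
    """Intensity similarity between a DRR rendered from the CT at a
    candidate pose and the (pre-filtered) fluoroscopy frame."""
    a = drr - drr.mean()
    b = fluoro - fluoro.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# typical use inside a 2D/3D registration loop (DRR generation not shown):
# score = normalized_cross_correlation(render_drr(ct, pose), prefilter_fluoro(frame))
```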

  10. Site-Specific Internal Motions in GB1 Protein Microcrystals Revealed by 3D 2H–13C–13C Solid-State NMR Spectroscopy

    PubMed Central

    2016-01-01

    2H quadrupolar line shapes deliver rich information about protein dynamics. A newly designed 3D 2H–13C–13C solid-state NMR magic angle spinning (MAS) experiment is presented and demonstrated on the microcrystalline β1 immunoglobulin binding domain of protein G (GB1). The implementation of 2H–13C adiabatic rotor-echo-short-pulse-irradiation cross-polarization (RESPIRATION CP) ensures the accuracy of the extracted line shapes and provides enhanced sensitivity relative to conventional CP methods. The 3D 2H–13C–13C spectrum reveals 2H line shapes for 140 resolved aliphatic deuterium sites. Motional-averaged 2H quadrupolar parameters obtained from the line-shape fitting identify side-chain motions. Restricted side-chain dynamics are observed for a number of polar residues including K13, D22, E27, K31, D36, N37, D46, D47, K50, and E56, which we attribute to the effects of salt bridges and hydrogen bonds. In contrast, we observe significantly enhanced side-chain flexibility for Q2, K4, K10, E15, E19, N35, N40, and E42, due to solvent exposure and low packing density. T11, T16, and T17 side chains exhibit motions with larger amplitudes than other Thr residues due to solvent interactions. The side chains of L5, V54, and V29 are highly rigid because they are packed in the core of the protein. High correlations were demonstrated between GB1 side-chain dynamics and its biological function. Large-amplitude side-chain motions are observed for regions contacting and interacting with immunoglobulin G (IgG). In contrast, rigid side chains are primarily found for residues in the structural core of the protein that are absent from protein binding and interactions. PMID:26849428

  11. 3D computation of an incipient motion of a sessile drop on a rigid surface with contact angle hysteresis

    NASA Astrophysics Data System (ADS)

    Linder, Nicklas; Criscione, Antonio; Roisman, Ilia V.; Marschall, Holger; Tropea, Cameron

    2015-12-01

    Contact line phenomena govern a large number of multiphase flows. A reliable description of the contact line dynamics is therefore essential for prediction of such flows. Well-known difficulties in computing wetting phenomena include the mesh dependence of the results, caused by the flow singularity near the contact line, and the accurate estimation of its propagation velocity. The present study deals with the computational problem arising from the discontinuity in the dependence of the dynamic contact angle on the propagation velocity, associated with the contact angle hysteresis. The numerical simulations are performed using the volume of fluid method. The boundary conditions in the neighborhood of the contact line are switched between a propagating contact line condition and a pinning condition, depending on the value of the computed current local contact angle. The method is applied to the simulation of the deformation and incipient motion of a shedding drop. The model is validated by comparison of the numerical predictions with experimental data.
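
    A minimal sketch of the switching logic described above: the contact line is pinned while the local contact angle lies inside the hysteresis window and is released (advances or recedes) once the angle leaves it. The angle values and the cell-wise treatment are illustrative assumptions, not the paper's volume-of-fluid implementation.

```python
def contact_line_condition(theta_local, theta_adv=110.0, theta_rec=70.0):
    """Select the boundary condition at a contact-line cell.

    theta_local : computed local contact angle (degrees)
    theta_adv   : advancing limit of the hysteresis window (assumed value)
    theta_rec   : receding limit of the hysteresis window (assumed value)
    """
    if theta_local > theta_adv:
        return "advance"   # let the contact line move forward
    if theta_local < theta_rec:
        return "recede"    # let the contact line move backward
    return "pinned"        # inside the hysteresis window: contact line fixed

# example
for th in (60.0, 90.0, 120.0):
    print(th, contact_line_condition(th))
```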

  12. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could accurately be extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 +/- 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.

  13. Global and regional kinematics of the cervical spine during upper cervical spine manipulation: a reliability analysis of 3D motion data.

    PubMed

    Dugailly, Pierre-Michel; Beyer, Benoît; Sobczak, Stéphane; Salvia, Patrick; Feipel, Véronique

    2014-10-01

    Studies reporting spine kinematics during cervical manipulation are usually related to continuous global head-trunk motion or discrete angular displacements for pre-positioning. To date, segmental data analyzing the continuous kinematics of cervical manipulation are lacking. The objective of this study was to investigate upper cervical spine (UCS) manipulation in vitro. This paper reports an inter- and intra-rater reliability analysis of kinematics during high-velocity low-amplitude manipulation of the UCS. Kinematics were also integrated into subject-specific 3D models to provide an anatomical representation of motion during thrust manipulation. Three unembalmed specimens were included in the study. Restricted dissection was performed to attach technical clusters to each bone of interest (skull, C1-C4 and sternum). During manipulation, bone motion data were computed using an optoelectronic system. The reliability of manipulation kinematics was assessed for three experienced practitioners performing two trials of 3 repetitions on two separate days. During UCS manipulation, the average global head-trunk motion ROM (±SD) was 14 ± 5°, 35 ± 7° and 14 ± 8° for lateral bending, axial rotation and flexion-extension, respectively. For regional ROM (C0-C2), amplitudes were 10 ± 5°, 30 ± 5° and 16 ± 4° for the same respective motions. Concerning the reliability, mean RMS ranged from 1° to 4° and from 3° to 6° for intra- and inter-rater comparisons, respectively. The present results confirm the limited angular displacement during manipulation both for global head-trunk and for UCS motion components, especially for axial rotation. Additionally, kinematic variability was low, confirming intra- and inter-practitioner consistency of UCS manipulation.

  14. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes.
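
    As a concrete illustration of the accuracy (RMSE) and precision (SD) figures reported above, the sketch below computes both from a tracked marker trajectory and a known reference; the array shapes and names are assumptions for illustration, not the authors' analysis code.

```python
import numpy as np

def tracking_accuracy(tracked, reference):
    """Accuracy and precision of a tracked marker trajectory.

    tracked, reference : (N, 3) arrays of marker positions in mm.
    Accuracy is reported as the root-mean-square of the per-frame Euclidean
    error; precision as the standard deviation of that error.
    """
    err = np.linalg.norm(tracked - reference, axis=1)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    precision = float(np.std(err))
    return rmse, precision

# example with synthetic data
rng = np.random.default_rng(0)
ref = rng.uniform(0, 100, size=(500, 3))
meas = ref + rng.normal(scale=0.15, size=ref.shape)
print(tracking_accuracy(meas, ref))
```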

  15. Validation of 4D dose calculation using an independent motion monitoring by the calypso tracking system and 3D polymer gel dosimetry

    NASA Astrophysics Data System (ADS)

    Mann, P.; Saito, N.; Lang, C.; Runz, A.; Johnen, W.; Witte, M.; Schmitt, D.; Karger, C. P.

    2017-05-01

    This study aims to evaluate an in-house developed 4D dose calculation algorithm that uses Calypso motion tracking data and to compare the results against 3D polymer gel dosimetry measurements. For this, a cylindrical water phantom was constructed that allows the insertion of (i) the polymer gel, (ii) a PinPoint® ionization chamber and (iii) Calypso beacons™ for motion tracking. A treatment plan covering a gel flask in the center of the static phantom, plus a 1 mm margin, homogeneously with dose was generated. During irradiation, however, the phantom was moved periodically by means of a robot with a peak-to-peak amplitude of 2.5 cm. The results of the 4D dose calculations show good agreement with the gel-dosimetric measurements in most of the volume. Remaining small deviations have to be evaluated in further experiments. The developed experimental setup allows for 3D-dosimetric validation of 4D dose calculation algorithms prior to application in patients.

  16. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  17. 3D Motions of Iron in Six-Coordinate {FeNO}(7) Hemes by Nuclear Resonance Vibration Spectroscopy.

    PubMed

    Peng, Qian; Pavlik, Jeffrey W; Silvernail, Nathan J; Alp, E Ercan; Hu, Michael Y; Zhao, Jiyong; Sage, J Timothy; Scheidt, W Robert

    2016-04-25

    The vibrational spectrum of a six-coordinate nitrosyl iron porphyrinate, monoclinic [Fe(TpFPP)(1-MeIm)(NO)] (TpFPP=tetra-para-fluorophenylporphyrin; 1-MeIm=1-methylimidazole), has been studied by oriented single-crystal nuclear resonance vibrational spectroscopy (NRVS). The crystal was oriented to give spectra perpendicular to the porphyrin plane and two in-plane spectra perpendicular or parallel to the projection of the FeNO plane. These enable assignment of the FeNO bending and stretching modes. The measurements reveal that the two in-plane spectra have substantial differences that result from the strongly bonded axial NO ligand. The direction of the in-plane iron motion is found to be largely parallel and perpendicular to the projection of the bent FeNO on the porphyrin plane. The out-of-plane Fe-N-O stretching and bending modes are strongly mixed with each other, as well as with porphyrin ligand modes. The stretch is mixed with v50 as was also observed for dioxygen complexes. The frequency of the assigned stretching mode of eight Fe-X-O (X=N, C, and O) complexes is correlated with the Fe-XO bond lengths. The nature of the highest frequency band at ≈560 cm(-1) has also been examined in two additional new derivatives. Previously assigned as the Fe-NO stretch (by resonance Raman), it is better described as the bend, as the motion of the central nitrogen atom of the FeNO group is very large. There is significant mixing of this mode. The results emphasize the importance of mode mixing; the extent of mixing must be related to the peripheral phenyl substituents.

  18. 3D Motions of Iron in Six-Coordinate {FeNO} 7 Hemes by Nuclear Resonance Vibration Spectroscopy

    DOE PAGES

    Peng, Qian; Pavlik, Jeffrey W.; Silvernail, Nathan J.; ...

    2016-03-21

    The vibrational spectrum of a six-coordinate nitrosyl iron porphyrinate, monoclinic [Fe(TpFPP)(1-MeIm)(NO)] (TpFPP = tetra-para-fluorophenylporphyrin; 1-MeIm = 1-methylimidazole), has been studied by oriented single-crystal nuclear resonance vibrational spectroscopy (NRVS). The crystal was oriented to give spectra perpendicular to the porphyrin plane and two in-plane spectra perpendicular or parallel to the projection of the FeNO plane. These enable assignment of the FeNO bending and stretching modes. The measurements reveal that the two in-plane spectra have substantial differences that result from the strongly bonded axial NO ligand. The direction of the in-plane iron motion is found to be largely parallel and perpendicular to the projection of the bent FeNO on the porphyrin plane. The out-of-plane Fe-N-O stretching and bending modes are strongly mixed with each other, as well as with porphyrin ligand modes. The stretch is mixed with v(50) as was also observed for dioxygen complexes. The frequency of the assigned stretching mode of eight Fe-X-O (X = N, C, and O) complexes is correlated with the Fe-XO bond lengths. The nature of the highest frequency band at ≈560 cm(-1) has also been examined in two additional new derivatives. Previously assigned as the Fe-NO stretch (by resonance Raman), it is better described as the bend, as the motion of the central nitrogen atom of the FeNO group is very large. There is significant mixing of this mode. The results emphasize the importance of mode mixing; the extent of mixing must be related to the peripheral phenyl substituents.

  19. Determining inter-fractional motion of the uterus using 3D ultrasound imaging during radiotherapy for cervical cancer

    NASA Astrophysics Data System (ADS)

    Baker, Mariwan; Jensen, Jørgen Arendt; Behrens, Claus F.

    2014-03-01

    Uterine positional changes can reduce the accuracy of radiotherapy for cervical cancer patients. The purpose of this study was to: 1) quantify the inter-fractional uterine displacement using a novel 3D ultrasound (US) imaging system, and 2) compare the result with the bone match shift determined by Cone-Beam CT (CBCT) imaging. Five cervical cancer patients were enrolled in the study. Three of them underwent weekly CBCT imaging prior to treatment, and bone match shift was applied. After treatment delivery they underwent a weekly US scan. The transabdominal scans were conducted using a Clarity US system (Clarity® Model 310C00). Uterine positional shifts based on soft-tissue match using US were determined and compared to bone match shifts for the three directions. Mean values (±1 SD) of the US shifts were (mm): anterior-posterior (A/P): 3.8 ± 5.5, superior-inferior (S/I): -3.5 ± 5.2, and left-right (L/R): 0.4 ± 4.9. The variations were larger than the CBCT shifts. The largest inter-fractional displacement was from -2 mm to +14 mm in the A/P direction for patient 3. Thus, CBCT bone matching underestimates the uterine positional displacement because it neglects internal uterine positional change relative to the bone structures. Since the US images were significantly better than the CBCT images in terms of soft-tissue visualization, the US system can provide an optional image-guided radiation therapy (IGRT) system. US imaging might be a better IGRT system than CBCT, despite difficulty in capturing the entire uterus. Uterine shifts based on US imaging contain the relative uterus-bone displacement, which is not taken into consideration when using CBCT bone match.

  20. The birth of a dinosaur footprint: Subsurface 3D motion reconstruction and discrete element simulation reveal track ontogeny

    PubMed Central

    2014-01-01

    Locomotion over deformable substrates is a common occurrence in nature. Footprints represent sedimentary distortions that provide anatomical, functional, and behavioral insights into trackmaker biology. The interpretation of such evidence can be challenging, however, particularly for fossil tracks recovered at bedding planes below the originally exposed surface. Even in living animals, the complex dynamics that give rise to footprint morphology are obscured by both foot and sediment opacity, which conceals animal–substrate and substrate–substrate interactions. We used X-ray reconstruction of moving morphology (XROMM) to image and animate the hind limb skeleton of a chicken-like bird traversing a dry, granular material. Foot movement differed significantly from walking on solid ground; the longest toe penetrated to a depth of ∼5 cm, reaching an angle of 30° below horizontal before slipping backward on withdrawal. The 3D kinematic data were integrated into a validated substrate simulation using the discrete element method (DEM) to create a quantitative model of limb-induced substrate deformation. Simulation revealed that despite sediment collapse yielding poor quality tracks at the air–substrate interface, subsurface displacements maintain a high level of organization owing to grain–grain support. Splitting the substrate volume along “virtual bedding planes” exposed prints that more closely resembled the foot and could easily be mistaken for shallow tracks. DEM data elucidate how highly localized deformations associated with foot entry and exit generate specific features in the final tracks, a temporal sequence that we term “track ontogeny.” This combination of methodologies fosters a synthesis between the surface/layer-based perspective prevalent in paleontology and the particle/volume-based perspective essential for a mechanistic understanding of sediment redistribution during track formation. PMID:25489092

  1. The motion of a 3D toroidal bubble and its interaction with a free surface near an inclined boundary

    NASA Astrophysics Data System (ADS)

    Liu, Y. L.; Wang, Q. X.; Wang, S. P.; Zhang, A. M.

    2016-12-01

    The numerical modelling of 3D toroidal bubble dynamics is a challenging problem due to the complex topological transition of the flow domain, and the physical and numerical instabilities associated with jet penetration through the bubble. In this paper, this phenomenon is modelled using the boundary integral method (BIM) coupled with a vortex ring model. We implement a new impact model consisting of a refined local mesh near the impact location immediately before and after impact, and a surgical cut at high resolution forming a smooth hole for the transition from a singly connected to a doubly connected form. This enables a smooth transition from a singly connected bubble to a toroidal bubble. The potential due to a vortex ring is reduced to a line integral along the vortex ring. A new mesh density control technique is described to update the bubble and free surfaces, which provides a high mesh quality with the mesh density determined by the curvature distribution of the surface. The pressure distribution in the flow field is calculated using the Bernoulli equation, where the partial derivative of the velocity potential in time is calculated using the BIM model to avoid numerical instabilities. Experiments are carried out for the interaction of a spark-generated bubble with a free surface near a boundary, captured using a high-speed camera. Our numerical results agree well with the experimental images for the bubble and free surface shapes, both before and after jet impact. New results are analyzed for the interaction of a toroidal bubble with a free surface near a vertical boundary and a sloping boundary, at both negative and positive angles to the vertical, without and with buoyancy, respectively. After jet impact, the bubble becomes a bubble ring, whose cross section is much thinner at the distal side from the boundary. It subsequently breaks into a crescent-shaped bubble. The free surface displays singular features at its

  2. Alignment of sparse freehand 3-D ultrasound with preoperative images of the liver using models of respiratory motion and deformation.

    PubMed

    Blackall, Jane M; Penney, Graeme P; King, Andrew P; Hawkes, David J

    2005-11-01

    We present a method for alignment of an interventional plan to optically tracked two-dimensional intraoperative ultrasound (US) images of the liver. Our clinical motivation is to enable the accurate transfer of information from three-dimensional preoperative imaging modalities [magnetic resonance (MR) or computed tomography (CT)] to intraoperative US to aid needle placement for thermal ablation of liver metastases. An initial rigid registration to intraoperative coordinates is obtained using a set of US images acquired at maximum exhalation. A preprocessing step is applied to both the preoperative images and the US images to produce evidence of corresponding structures. This yields two sets of images representing classification of regions as vessels. The registration then proceeds using these images. The preoperative images and plan are then warped to correspond to a single US slice acquired at an unknown point in the breathing cycle where the liver is likely to have moved and deformed relative to the preoperative image. Alignment is constrained using a patient-specific model of breathing motion and deformation. Target registration error is estimated by carrying out simulation experiments using resliced MR volumes to simulate real US and comparing the registration results to a "bronze-standard" registration performed on the full MR volume. Finally, the system is tested using real US and verified using visual inspection.

  3. 3D basin-shape ratio effects on frequency content and spectral amplitudes of basin-generated surface waves and associated spatial ground motion amplification and differential ground motion

    NASA Astrophysics Data System (ADS)

    Kamal; Narayan, J. P.

    2015-04-01

    This paper presents the effects of basin-shape ratio (BSR) on the frequency content and spectral amplitudes of the basin-generated surface (BGS) waves and the associated spatial variation of ground motion amplification and differential ground motion (DGM) in a 3D semi-spherical (SS) basin. Seismic responses were computed using a recently developed 3D fourth-order spatial accurate time-domain finite-difference (FD) algorithm based on the parsimonious staggered-grid approximation of the 3D viscoelastic wave equations. The simulated results revealed the decrease of both the frequency content and the spectral amplitudes of the BGS waves and the duration of ground motion in the SS basin with the decrease of BSR. An increase of the average spectral amplification (ASA), DGM and the average aggravation factor (AAF) towards the centre of the SS basin was obtained due to the focusing of the surface waves. A decrease of ASA, DGM and AAF with the decrease of BSR was also obtained.

  4. Robust patella motion tracking using intensity-based 2D-3D registration on dynamic bi-plane fluoroscopy: towards quantitative assessment in MPFL reconstruction surgery

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu

    2016-03-01

    The determination of the in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to register three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual bone approach registering one bone at a time, each with optimization of a six-degree-of-freedom (6DOF) parameter set, 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation, 3) a simultaneous approach registering all the bones together (18DOF) and 4) a combination of the sequential and the simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach, with less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initializations showed that the sequential approach improved robustness significantly (74% success rate) compared to the individual bone approach (34% success) for patella registration (femur and tibia-fibula registration had a 100% success rate with each approach).
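
    A minimal sketch of the optimization loop described above, using the open-source cma package to optimize a 6DOF pose against a gradient-based intensity similarity between a rendered DRR and each fluoroscopy view. The render_drr generator and the similarity details are placeholders and assumptions, not the authors' implementation.

```python
import numpy as np
import cma  # pip install cma

def gradient_correlation(a, b):
    """Similarity between two images based on correlation of their gradients."""
    gax, gay = np.gradient(a.astype(float))
    gbx, gby = np.gradient(b.astype(float))
    def ncc(u, v):
        u = u - u.mean()
        v = v - v.mean()
        return (u * v).sum() / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return 0.5 * (ncc(gax, gbx) + ncc(gay, gby))

def register_bone(ct_volume, fluoro_pair, render_drr, pose0, sigma0=2.0):
    """Optimize a 6DOF pose [tx, ty, tz, rx, ry, rz] with CMA-ES.

    render_drr(ct_volume, pose, view) is a hypothetical DRR generator for
    one of the two fluoroscopy views; fluoro_pair is the bi-plane image pair.
    """
    def cost(pose):
        # negative similarity summed over both views (CMA-ES minimizes)
        return -sum(gradient_correlation(render_drr(ct_volume, pose, v), img)
                    for v, img in enumerate(fluoro_pair))

    es = cma.CMAEvolutionStrategy(pose0, sigma0)
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [cost(np.asarray(c)) for c in candidates])
    return np.asarray(es.result.xbest)
```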

  5. Powered wheelchair simulator development: implementing combined navigation-reaching tasks with a 3D hand motion controller.

    PubMed

    Tao, Gordon; Archambault, Philippe S

    2016-01-19

    Powered wheelchair (PW) training involving combined navigation and reaching is often limited or unfeasible. Virtual reality (VR) simulators offer a feasible alternative for rehabilitation training either at home or in a clinical setting. This study evaluated a low-cost magnetic-based hand motion controller as an interface for reaching tasks within the McGill Immersive Wheelchair (miWe) simulator. Twelve experienced PW users performed three navigation-reaching tasks in the real world (RW) and in VR: working at a desk, using an elevator, and opening a door. The sense of presence in VR was assessed using the iGroup Presence Questionnaire (IPQ). We determined the concordance of task performance in VR with that in the RW. A video task analysis was performed to analyse task behaviours. Compared to previous miWe data, IPQ scores were greater in the involvement domain (p < 0.05). Task analysis showed most navigation and reaching behaviours as having moderate to excellent (K > 0.4, Cohen's Kappa) agreement between the two environments, but a greater (p < 0.05) risk of collisions and reaching errors in VR. VR performance demonstrated longer (p < 0.05) task times and more discrete movements for the elevator and desk tasks but not the door task. Task performance was kinematically poorer in VR than in the RW, but strategies were similar. Therefore, the reaching component represents a promising addition to the miWe training simulator, though some limitations must be addressed in future development.

  6. Rigid model-based 3D segmentation of the bones of joints in MR and CT images for motion analysis

    PubMed Central

    Liu, Jiamin; Udupa, Jayaram K.; Saha, Punam K.; Odhner, Dewey; Hirsch, Bruce E.; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A.

    2008-01-01

    There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of images of the joint acquired under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. In this article, a two-step model-based segmentation strategy is proposed that utilizes the unique context of the current application wherein the shape of each individual bone is preserved in all scans of a particular joint while the spatial arrangement of the bones alters significantly among bones and scans. In the first step, a rigid deterministic model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. Subsequently, in other images of the same joint, this model is used to search for the same bone by minimizing an energy function that utilizes both boundary- and region-based information. An evaluation of the method by utilizing a total of 60 data sets on MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations, yielding true positive and false positive volume fractions in the range 89%–97% and 0.2%–0.7%. The method requires 1–2 minutes of operator time and 6–7 min of computer time per data set, which makes it significantly more efficient than live wire—the method currently available for the task that can be used routinely. PMID:18777924

  7. Rigid model-based 3D segmentation of the bones of joints in MR and CT images for motion analysis.

    PubMed

    Liu, Jiamin; Udupa, Jayaram K; Saha, Punam K; Odhner, Dewey; Hirsch, Bruce E; Siegler, Sorin; Simon, Scott; Winkelstein, Beth A

    2008-08-01

    There are several medical application areas that require the segmentation and separation of the component bones of joints in a sequence of images of the joint acquired under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other imaging modality-specific factors that confound boundary contrast. In this article, a two-step model-based segmentation strategy is proposed that utilizes the unique context of the current application wherein the shape of each individual bone is preserved in all scans of a particular joint while the spatial arrangement of the bones alters significantly among bones and scans. In the first step, a rigid deterministic model of the bone is generated from a segmentation of the bone in the image corresponding to one position of the joint by using the live wire method. Subsequently, in other images of the same joint, this model is used to search for the same bone by minimizing an energy function that utilizes both boundary- and region-based information. An evaluation of the method by utilizing a total of 60 data sets on MR and CT images of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations, yielding true positive and false positive volume fractions in the range 89%-97% and 0.2%-0.7%. The method requires 1-2 minutes of operator time and 6-7 min of computer time per data set, which makes it significantly more efficient than live wire-the method currently available for the task that can be used routinely.

  8. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network.

    PubMed

    Bukhari, W; Hong, S-M

    2016-03-07

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient's breathing cycle. The algorithm, named EKF-GPRN(+) , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN(+) prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN(+) implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN(+) . The experimental results show that the EKF-GPRN(+) algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN(+) algorithm can further reduce the prediction error by employing the gating
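
    To illustrate the first stage of this pipeline, a per-coordinate predictor, the sketch below uses a simple constant-velocity Kalman filter to produce a lookahead estimate along one axis. The paper's EKF uses a more elaborate breathing model, so treat this purely as a simplified stand-in with assumed noise parameters.

```python
import numpy as np

def kalman_lookahead(positions, dt=0.026, lookahead=0.192, q=1e-3, r=1e-2):
    """Predict a respiratory trace `lookahead` seconds ahead along one axis.

    positions : 1D array of observed displacements along one coordinate
    dt        : sampling interval (s); q, r : process / measurement noise
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])                     # state transition
    H = np.array([[1.0, 0.0]])                                # observe position
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.array([[positions[0]], [0.0]])
    P = np.eye(2)
    k_ahead = int(round(lookahead / dt))
    preds = []
    for z in positions:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        # extrapolate k_ahead steps for the lookahead prediction
        preds.append(float((np.linalg.matrix_power(F, k_ahead) @ x)[0, 0]))
    return np.array(preds)
```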

  9. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit

  10. 3D Dynamic Rupture Process and Near-Source Ground Motion Simulation Using the Discrete Element Method: Application to the 1999 Chi-chi and 2000 Tottori Earthquakes

    NASA Astrophysics Data System (ADS)

    Dalguer Gudiel, L. A.; Irikura, K.

    2001-12-01

    We performed a 3D model to simulate the dynamic rupture of a pre-existing fault and the near-source ground motion of actual earthquakes, solving the elastodynamic equation of motion using the 3D Discrete Element Method (DEM). The DEM is widely employed in engineering to designate lumped-mass models in a truss arrangement, as opposed to FEM (Finite Element) models that may also consist of lumped masses, but normally require assembling a full stiffness matrix for response determination. The term has also been used for models of solids consisting of assemblies of discrete elements, such as spheres in elastic contact, employed in the analysis of perforation or penetration of concrete or rock. It should be noted that the designation Lattice Models, common in Physics, may be more adequate, although it omits reference to a fundamental property of the approach, which is the lumped-mass representation. In the present DEM formulation, the method models any orthotropic elastic solid. It is constructed as a three-dimensional periodic truss-like structure using cubic elements, with masses lumped at nodal points that are interconnected by one-dimensional elements. The method was previously used in 2D to simulate in a simplified way the 1999 Chi-chi (Taiwan) earthquake (Dalguer et al., 2000). Now the method has been extended to solve 3D problems. We apply the model to simulate the dynamic rupture process and near-source ground motion of the 1999 Chi-chi (Taiwan) and the 2000 Tottori (Japan) earthquakes. The attractive feature of the problem under consideration is the possibility of introducing internal cracks or fractures with little computational effort and without increasing the number of degrees of freedom. For the 3D dynamic spontaneous rupture simulation of these earthquakes we need to know: the geometry of the fault, the initial stress distribution along the fault, the stress drop distribution, the strength of the fault to break and the critical slip (because slip

  11. Analysis of local molecular motions of aromatic sidechains in proteins by 2D and 3D fast MAS NMR spectroscopy and quantum mechanical calculations.

    PubMed

    Paluch, Piotr; Pawlak, Tomasz; Jeziorna, Agata; Trébosc, Julien; Hou, Guangjin; Vega, Alexander J; Amoureux, Jean-Paul; Dracinsky, Martin; Polenova, Tatyana; Potrzebowski, Marek J

    2015-11-21

    We report a new multidimensional magic angle spinning NMR methodology, which provides an accurate and detailed probe of molecular motions occurring on timescales of nano- to microseconds, in sidechains of proteins. The approach is based on a 3D CPVC-RFDR correlation experiment recorded under fast MAS conditions (ν(R) = 62 kHz), where (13)C-(1)H CPVC dipolar lineshapes are recorded in a chemical shift resolved manner. The power of the technique is demonstrated in model tripeptide Tyr-(d)Ala-Phe and two nanocrystalline proteins, GB1 and LC8. We demonstrate that, through numerical simulations of dipolar lineshapes of aromatic sidechains, their detailed dynamic profile, i.e., the motional modes, is obtained. In GB1 and LC8 the results unequivocally indicate that a number of aromatic residues are dynamic, and using quantum mechanical calculations, we correlate the molecular motions of aromatic groups to their local environment in the crystal lattice. The approach presented here is general and can be readily extended to other biological systems.

  12. Predicting Strong Ground-Motion Seismograms for Magnitude 9 Cascadia Earthquakes Using 3D Simulations with High Stress Drop Sub-Events

    NASA Astrophysics Data System (ADS)

    Frankel, A. D.; Wirth, E. A.; Stephenson, W. J.; Moschetti, M. P.; Ramirez-Guzman, L.

    2015-12-01

    We have produced broadband (0-10 Hz) synthetic seismograms for magnitude 9.0 earthquakes on the Cascadia subduction zone by combining synthetics from simulations with a 3D velocity model at low frequencies (≤ 1 Hz) with stochastic synthetics at high frequencies (≥ 1 Hz). We use a compound rupture model consisting of a set of M8 high stress drop sub-events superimposed on a background slip distribution of up to 20m that builds relatively slowly. The 3D simulations were conducted using a finite difference program and the finite element program Hercules. The high-frequency (≥ 1 Hz) energy in this rupture model is primarily generated in the portion of the rupture with the M8 sub-events. In our initial runs, we included four M7.9-8.2 sub-events similar to those that we used to successfully model the strong ground motions recorded from the 2010 M8.8 Maule, Chile earthquake. At periods of 2-10 s, the 3D synthetics exhibit substantial amplification (about a factor of 2) for sites in the Puget Lowland and even more amplification (up to a factor of 5) for sites in the Seattle and Tacoma sedimentary basins, compared to rock sites outside of the Puget Lowland. This regional and more localized basin amplification found from the simulations is supported by observations from local earthquakes. There are substantial variations in the simulated M9 time histories and response spectra caused by differences in the hypocenter location, slip distribution, down-dip extent of rupture, coherence of the rupture front, and location of sub-events. We examined the sensitivity of the 3D synthetics to the velocity model of the Seattle basin. We found significant differences in S-wave focusing and surface wave conversions between a 3D model of the basin from a spatially-smoothed tomographic inversion of Rayleigh-wave phase velocities and a model that has an abrupt southern edge of the Seattle basin, as observed in seismic reflection profiles.
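
    The broadband synthetics described above are built by merging deterministic low-frequency and stochastic high-frequency synthetics around a 1 Hz crossover. A minimal sketch of such a matched-filter combination is given below; the fourth-order Butterworth filters and the exact crossover handling are illustrative assumptions, not the authors' processing.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def combine_broadband(lf_synthetic, hf_synthetic, fs, f_cross=1.0, order=4):
    """Merge a deterministic low-frequency synthetic with a stochastic
    high-frequency synthetic at a crossover frequency f_cross (Hz).

    Both inputs are 1D time series sampled at fs (Hz) on the same time grid.
    """
    nyq = 0.5 * fs
    b_lo, a_lo = butter(order, f_cross / nyq, btype="low")
    b_hi, a_hi = butter(order, f_cross / nyq, btype="high")
    low = filtfilt(b_lo, a_lo, lf_synthetic)    # keep the 3D-simulation band
    high = filtfilt(b_hi, a_hi, hf_synthetic)   # keep the stochastic band
    return low + high

# example with dummy data sampled at 50 Hz
fs = 50.0
t = np.arange(0, 60, 1 / fs)
lf = np.sin(2 * np.pi * 0.2 * t)
hf = 0.1 * np.random.default_rng(1).standard_normal(t.size)
bb = combine_broadband(lf, hf, fs)
```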

  13. 3D crustal structure and long-period ground motions from a M9.0 megathrust earthquake in the Pacific Northwest region

    USGS Publications Warehouse

    Olsen, K.B.; Stephenson, W.J.; Geisselmeyer, A.

    2008-01-01

    We have developed a community velocity model for the Pacific Northwest region from northern California to southern Canada and carried out the first 3D simulation of a Mw 9.0 megathrust earthquake rupturing along the Cascadia subduction zone using a parallel supercomputer. A long-period (<0.5 Hz) source model was designed by mapping the inversion results for the December 26, 2004 Sumatra–Andaman earthquake (Han et al., Science 313(5787):658–662, 2006) onto the Cascadia subduction zone. Representative peak ground velocities for the metropolitan centers of the region include 42 cm/s in the Seattle area and 8–20 cm/s in the Tacoma, Olympia, Vancouver, and Portland areas. Combined with an extended duration of the shaking up to 5 min, these long-period ground motions may inflict significant damage on the built environment, in particular on the highrises in downtown Seattle.

  14. From Monotonous Hop-and-Sink Swimming to Constant Gliding via Chaotic Motions in 3D: Is There Adaptive Behavior in Planktonic Micro-Crustaceans?

    NASA Astrophysics Data System (ADS)

    Strickler, J. R.

    2007-12-01

    Planktonic micro-crustaceans, such as Daphnia, copepods, and Cyclops, swim in the 3D environment of water and feed on suspended material, mostly algae and bacteria. Their mechanisms for swimming differ; some use their swimming legs to produce one hop per second, resulting in a speed of one body-length per second, while others scan water volumes with their mouthparts and glide through the water column at 1 to 10 body-lengths per second. However, our observations show that these speeds are modulated. The question to be discussed is whether or not these modulations show adaptive behavior, taking food quality and food abundance as criteria for the swimming performance. Additionally, we investigated the degree to which these temporal motion patterns depend on the sizes, and therefore on the Reynolds numbers, of the animals.

  15. Intrafraction motion of the prostate during an IMRT session: a fiducial-based 3D measurement with Cone-beam CT

    PubMed Central

    Boda-Heggemann, Judit; Köhler, Frederick Marc; Wertz, Hansjörg; Ehmann, Michael; Hermann, Brigitte; Riesenacker, Nadja; Küpper, Beate; Lohr, Frank; Wenz, Frederik

    2008-01-01

    Background Image-guidance systems allow accurate interfractional repositioning of IMRT treatments; however, these may require up to 15 minutes. Therefore, intrafraction motion might have an impact on treatment precision. 3D geometric data regarding intrafraction prostate motion are rare; we therefore assessed its magnitude with pre- and post-treatment fiducial-based imaging with cone-beam CT (CBCT). Methods 39 IMRT fractions in 5 prostate cancer patients after 125I-seed implantation were evaluated. Patient position was corrected based on the 125I-seeds after pre-treatment CBCT. Immediately after treatment delivery, a second CBCT was performed. Differences in bone and fiducial positions were measured by seed-based grey-value matching. Results Fraction time was 13.6 ± 1.6 minutes. The median overall displacement vector length of the 125I-seeds was 3 mm (M = 3 mm, Σ = 0.9 mm, σ = 1.7 mm; M: group systematic error, Σ: SD of systematic error, σ: SD of random error). The median displacement vector of bony structures was 1.84 mm (M = 2.9 mm, Σ = 1 mm, σ = 3.2 mm). The median displacement vector length of the prostate relative to bony structures was 1.9 mm (M = 3 mm, Σ = 1.3 mm, σ = 2.6 mm). Conclusion a) The overall displacement vector length during an IMRT session is < 3 mm. b) Positioning devices reducing intrafraction bony displacements can further reduce overall intrafraction motion. c) Intrafraction prostate motion relative to bony structures is < 2 mm and may be further reduced by institutional protocols and reduction of IMRT duration. PMID:18986517

  16. Estimating 3D L5/S1 moments and ground reaction forces during trunk bending using a full-body ambulatory inertial motion capture system.

    PubMed

    Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H

    2016-04-11

    Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMSerrors) of the moment/force time series and the intraclass correlation coefficients (ICCs) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMSerrors remained below 10 Nm (about 5% of the peak extension moment) and 3D GRF RMSerrors remained below 20 N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load.
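
    A minimal sketch of the two estimates described above: the GRF follows from Newton's second law summed over all body segments, and the L5/S1 moment from a top-down inverse dynamics sum over the upper-body segments. The segment lists, masses, and the quasi-static form (segment angular dynamics omitted) are simplifying assumptions for illustration, not the Xsens/MVN pipeline.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity in the world frame (m/s^2)

def estimate_grf(seg_masses, seg_accs):
    """Total ground reaction force from full-body segment accelerations.

    seg_masses : (S,) segment masses (kg)
    seg_accs   : (S, 3) segment centre-of-mass accelerations (m/s^2)
    F_grf = sum_i m_i * (a_i - g), ignoring external forces other than gravity.
    """
    return np.einsum('s,sd->d', seg_masses, seg_accs - G)

def estimate_l5s1_moment(seg_masses, seg_accs, seg_coms, l5s1_pos):
    """Top-down L5/S1 moment from upper-body segments only (simplified).

    seg_coms : (S, 3) centre-of-mass positions of the upper-body segments (m)
    l5s1_pos : (3,) position of the L5/S1 joint centre (m)
    """
    forces = seg_masses[:, None] * (seg_accs - G)   # per-segment F_i
    arms = seg_coms - l5s1_pos                      # lever arms r_i
    return np.cross(arms, forces).sum(axis=0)       # sum of r_i x F_i
```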

  17. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    PubMed

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides real-time processing of the 3-D movement coordinates of up to 20 markers. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed: (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available (http://sites.google.com/site/RTMocap/) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation.
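
    RTMocap itself is MATLAB code; purely as an illustration of the kind of online event detection listed in item (3), here is a small Python sketch that fires a callback the first time any marker enters a spherical region of interest. The function and parameter names are hypothetical and are not part of the RTMocap API.

```python
import numpy as np

def watch_region(frames, roi_center, roi_radius, on_enter):
    """Call `on_enter(t)` the first time a marker enters a spherical ROI.

    frames: iterable of (timestamp, (n_markers, 3) array) pairs streamed
            from a motion-capture server.
    """
    for t, markers in frames:
        dists = np.linalg.norm(np.asarray(markers) - roi_center, axis=1)
        if np.any(dists < roi_radius):
            on_enter(t)          # e.g., trigger a light, sound, or odor
            return t
    return None

# toy stream: one marker rising toward an ROI centred at (0.3, 0.0, 0.1) m
stream = ((t * 0.01, np.array([[0.3, 0.0, t * 0.005]])) for t in range(100))
print(watch_region(stream, np.array([0.3, 0.0, 0.1]), 0.02, print))
```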

  18. Mapping motion from 4D-MRI to 3D-CT for use in 4D dose calculations: A technical feasibility study

    SciTech Connect

    Boye, Dirk; Lomax, Tony; Knopf, Antje

    2013-06-15

    Purpose: Target sites affected by organ motion require a time resolved (4D) dose calculation. Typical 4D dose calculations use 4D-CT as a basis. Unfortunately, 4D-CT images have the disadvantage of being a 'snap-shot' of the motion during acquisition and of assuming regularity of breathing. In addition, 4D-CT acquisitions involve a substantial additional dose burden to the patient making many, repeated 4D-CT acquisitions undesirable. Here the authors test the feasibility of an alternative approach to generate patient specific 4D-CT data sets. Methods: In this approach motion information is extracted from 4D-MRI. Simulated 4D-CT data sets [which the authors call 4D-CT(MRI)] are created by warping extracted deformation fields to a static 3D-CT data set. The employment of 4D-MRI sequences for this has the advantage that no assumptions on breathing regularity are made, irregularities in breathing can be studied and, if necessary, many repeat imaging studies (and consequently simulated 4D-CT data sets) can be performed on patients and/or volunteers. The accuracy of 4D-CT(MRI)s has been validated by 4D proton dose calculations. Our 4D dose algorithm takes into account displacements as well as deformations on the originating 4D-CT/4D-CT(MRI) by calculating the dose of each pencil beam based on an individual time stamp of when that pencil beam is applied. According to corresponding displacement and density-variation-maps the position and the water equivalent range of the dose grid points is adjusted at each time instance. Results: 4D dose distributions, using 4D-CT(MRI) data sets as input were compared to results based on a reference conventional 4D-CT data set capturing similar motion characteristics. Almost identical 4D dose distributions could be achieved, even though scanned proton beams are very sensitive to small differences in the patient geometry. In addition, 4D dose calculations have been performed on the same patient, but using 4D-CT(MRI) data sets based on
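
    The central operation, warping a static 3D-CT with displacement fields extracted from 4D-MRI, can be sketched with standard tools. The Python example below assumes the displacement field has already been registered and resampled onto the CT voxel grid; it illustrates the idea only and is not the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_ct(static_ct, displacement):
    """Warp a static 3D CT with a per-voxel displacement field.

    static_ct:    (Z, Y, X) array of HU values.
    displacement: (3, Z, Y, X) array; displacement[i] is the voxel offset
                  along axis i at which to sample the static CT.
    """
    grid = np.indices(static_ct.shape).astype(float)   # identity coordinates
    coords = grid + displacement                        # x + d(x)
    return map_coordinates(static_ct, coords, order=1, mode="nearest")

# toy example: a 20^3 volume resampled 2 voxels along the first axis
ct = np.random.rand(20, 20, 20)
d = np.zeros((3, 20, 20, 20))
d[0] = 2.0
ct_phase = warp_ct(ct, d)
```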

  19. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images

    PubMed Central

    Macnaught, Gillian; Denison, Fiona C.; Reynolds, Rebecca M.; Semple, Scott I.; Boardman, James P.

    2017-01-01

    Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of it containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved 96% of complete brain localization using a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest based regression methods and the proposed method showed superior performance. We also showed the application of the proposed method in the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development. PMID:28251155

  20. Evolution of the regions of the 3D particle motion in the regular polygon problem of (N+1) bodies with a quasi-homogeneous potential

    NASA Astrophysics Data System (ADS)

    Fakis, Demetrios; Kalvouridis, Tilemahos

    2017-09-01

    The regular polygon problem of (N+1) bodies deals with the dynamics of a small body, natural or artificial, in the force field of N big bodies, ν = N − 1 of which have equal masses and form an imaginary regular ν-gon, while the Nth body, with a different mass, is located at the center of mass of the system. In this work, instead of considering Newtonian potentials and forces, we assume that the big bodies create quasi-homogeneous potentials, in the sense that we add an inverse-cube corrective term to the inverse-square Newtonian law of gravitation, aiming to approximate various phenomena due to the shape of the primaries or to the radiation they emit. Based on this new consideration, we apply a general methodology to investigate, by means of the zero-velocity surfaces, the regions where 3D motions of the small body are allowed, their evolution and parametric variation, their topological bifurcations, as well as the existing trapping domains of the particle. Here we note that this process is a fundamental step of great importance in the study of many dynamical systems characterized by a Jacobian-type integral of motion, on the long road toward solutions of any kind.
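
    For readers unfamiliar with the convention, the quasi-homogeneous modification and the zero-velocity condition can be written schematically as follows; the sign and normalization of the corrective term vary between papers, so this is indicative rather than the authors' exact notation:

    \[ U(\mathbf{r}) = \sum_{i=1}^{N} \left( \frac{G m_i}{r_i} + \frac{e_i}{r_i^{3}} \right), \qquad r_i = \lvert \mathbf{r} - \mathbf{r}_i \rvert , \]

    and, in the rotating (synodic) frame with effective potential \Omega(x,y,z), the Jacobian-type integral reads

    \[ \dot{x}^2 + \dot{y}^2 + \dot{z}^2 = 2\,\Omega(x,y,z) - C , \]

    so the zero-velocity surfaces bounding the regions of allowed 3D motion are the loci 2\,\Omega(x,y,z) = C.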

  1. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images.

    PubMed

    Serag, Ahmed; Macnaught, Gillian; Denison, Fiona C; Reynolds, Rebecca M; Semple, Scott I; Boardman, James P

    2017-01-01

    Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of it containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved 96% of complete brain localization using a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest based regression methods and the proposed method showed superior performance. We also showed the application of the proposed method in the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.
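
    To make the detection step concrete, here is a small Python sketch of a 3D sliding-window search driven by a crude gradient-orientation histogram. The descriptor is a simplified stand-in for the paper's HOG-3D features, and `score_fn` stands for a classifier trained on brain/non-brain windows; none of this is the authors' code.

```python
import numpy as np

def hog3d(patch, nbins=9):
    """Crude 3-D gradient-orientation histogram (simplified HOG-3D stand-in)."""
    gz, gy, gx = np.gradient(patch.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    az = np.arctan2(gy, gx)                              # azimuth of gradient
    el = np.arctan2(gz, np.sqrt(gx ** 2 + gy ** 2))      # elevation of gradient
    h_az, _ = np.histogram(az, bins=nbins, range=(-np.pi, np.pi), weights=mag)
    h_el, _ = np.histogram(el, bins=nbins, range=(-np.pi / 2, np.pi / 2), weights=mag)
    f = np.concatenate([h_az, h_el])
    return f / (np.linalg.norm(f) + 1e-8)

def best_window(volume, win, step, score_fn):
    """Slide a 3-D window over `volume` and return the highest-scoring corner."""
    best, corner = -np.inf, None
    Z, Y, X = volume.shape
    wz, wy, wx = win
    for z in range(0, Z - wz + 1, step):
        for y in range(0, Y - wy + 1, step):
            for x in range(0, X - wx + 1, step):
                s = score_fn(hog3d(volume[z:z + wz, y:y + wy, x:x + wx]))
                if s > best:
                    best, corner = s, (z, y, x)
    return corner, best
```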

  2. Identifying the origin of differences between 3D numerical simulations of ground motion in sedimentary basins: lessons from stringent canonical test models in the E2VP framework

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; Moczo, Peter; Kristek, Jozef; Priolo, Enrico; Klin, Peter; De Martin, Florent; Zhang, Zenghuo; Hollender, Fabrice; Bard, Pierre-Yves

    2013-04-01

    Numerical simulation is playing a role of increasing importance in the field of seismic hazard by providing quantitative estimates of earthquake ground motion, its variability, and its sensitivity to geometrical and mechanical properties of the medium. Continuous efforts to develop accurate and computationally efficient numerical methods, combined with increasing computational power have made it technically feasible to calculate seismograms in 3D realistic configurations and for frequencies of interest in seismic design applications. Now, in order to foster the use of numerical simulations in practical prediction of earthquake ground motion, it is important to evaluate the accuracy of current numerical methods when applied to realistic 3D sites. This process of verification is a necessary prerequisite to confrontation of numerical predictions and observations. Through the ongoing Euroseistest Verification and Validation Project (E2VP), which focuses on the Mygdonian basin (northern Greece), we investigated the capability of numerical methods to predict earthquake ground motion for frequencies up to 4 Hz. Numerical predictions obtained by several teams using a wide variety of methods were compared using quantitative goodness-of-fit criteria. In order to better understand the cause of misfits between different simulations, initially performed for the realistic geometry of the Mygdonian basin, we defined five stringent canonical configurations. The canonical models allow for identifying sources of misfits and quantify their importance. Detailed quantitative comparison of simulations in relation to dominant features of the models shows that even relatively simple heterogeneous models must be treated with maximum care in order to achieve sufficient level of accuracy. One important conclusion is that the numerical representation of models with strong variations (e.g. discontinuities) may considerably vary from one method to the other, and may become a dominant source of

  3. Classification and segmentation of orbital space based objects against terrestrial distractors for the purpose of finding holes in shape from motion 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Mundhenk, T. Nathan; Flores, Arturo; Hoffman, Heiko

    2013-12-01

    3D reconstruction of objects via Shape from Motion (SFM) has made great strides recently. Utilizing images from a variety of poses, objects can be reconstructed in 3D without knowing a priori the camera pose. The recovered feature points can then be bundled together to create large-scale scene reconstructions automatically. A shortcoming of current methods of SFM reconstruction is in dealing with specular or flat, low-feature surfaces. The inability of SFM to handle these regions creates holes in a 3D reconstruction. This can cause problems when the 3D reconstruction is used for proximity detection and collision avoidance by a space vehicle working around another space vehicle. As such, we would like the automatic ability to recognize when a hole in a 3D reconstruction is in fact not a hole, but is a place where reconstruction has failed. Once we know about such a location, methods can be used to try to either more vigorously fill in that region or to instruct a space vehicle to proceed with more caution around that area. Detecting such areas in earth-orbiting objects is non-trivial since we need to parse out complex vehicle features from complex earth features, particularly when the observing vehicle is directly above the target vehicle. To do this, we have created a Space Object Classifier and Segmenter (SOCS) hole finder. The general principle we use is to classify image features into three categories (earth, man-made, space). Classified regions are then clustered into probabilistic regions which can then be segmented out. Our categorization method uses an augmentation of a state-of-the-art bag-of-visual-words method for object categorization. This method works by first extracting PHOW (dense SIFT-like) features, which are computed over an image and then quantized via a KD-tree. The quantization results are then binned into histograms and the results classified by the PEGASOS support vector machine solver. This gives a probability that a patch in the image corresponds to one of three
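
    A compact Python sketch of the categorization pipeline described above (dense descriptors, vector quantization into visual words, histogram encoding, linear SVM) is given below. It uses k-means and a hinge-loss SGD classifier as stand-ins for the paper's KD-tree quantization and PEGASOS solver, and the descriptor dimensionality is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import SGDClassifier

def bow_histogram(patch_descriptors, codebook):
    """Quantize the dense descriptors of one image patch into a BoW histogram."""
    words = codebook.predict(patch_descriptors)
    h, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return h / max(h.sum(), 1)

def train_patch_classifier(descriptors, labels, n_words=300):
    """descriptors: list of (n_i, 128) arrays of dense SIFT-like features,
    one per training patch; labels: 0 = earth, 1 = man-made, 2 = space."""
    codebook = KMeans(n_clusters=n_words, n_init=4).fit(np.vstack(descriptors))
    X = np.array([bow_histogram(d, codebook) for d in descriptors])
    clf = SGDClassifier(loss="hinge", alpha=1e-4)   # PEGASOS-style linear SVM
    clf.fit(X, labels)
    return codebook, clf
```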

  4. A 'chemotactic dipole' mechanism for large-scale vortex motion during primitive streak formation in the chick embryo.

    PubMed

    Sandersius, S A; Chuai, M; Weijer, C J; Newman, T J

    2011-08-01

    Primitive streak formation in the chick embryo involves significant coordinated cell movement lateral to the streak, in addition to the posterior-anterior movement of cells in the streak proper. Cells lateral to the streak are observed to undergo 'polonaise movements', i.e. two large counter-rotating vortices, reminiscent of eddies in a fluid. In this paper, we propose a mechanism for these movement patterns which relies on chemotactic signals emitted by a dipolar configuration of cells in the posterior region of the epiblast. The 'chemotactic dipole' consists of adjacent regions of cells emitting chemo-attractants and chemo-repellents. We motivate this idea using a mathematical analogy between chemotaxis and electrostatics, and test this idea using large-scale computer simulations. We implement active cell response to both neighboring mechanical interactions and chemotactic gradients using the Subcellular Element Model. Simulations show the emergence of large-scale vortices of cell movement. The length and time scales of vortex formation are in reasonable agreement with experimental data. We also provide quantitative estimates for the robustness of the chemotaxis dipole mechanism, which indicate that the mechanism has an error tolerance of about 10% to variation in chemotactic parameters, assuming that only 1% of the cell population is involved in emitting signals. This tolerance increases for larger populations of cells emitting signals.

  5. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscle, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. © 2015 Wiley Periodicals, Inc.

  6. A New Accurate 3D Measurement Tool to Assess the Range of Motion of the Tongue in Oral Cancer Patients: A Standardized Model.

    PubMed

    van Dijk, Simone; van Alphen, Maarten J A; Jacobi, Irene; Smeele, Ludwig E; van der Heijden, Ferdinand; Balm, Alfons J M

    2016-02-01

    In oral cancer treatment, function loss such as speech and swallowing deterioration can be severe, mostly due to reduced lingual mobility. Until now, there has been no standardized measurement tool for tongue mobility, and pre-operative prediction of function loss is based on expert opinion rather than evidence-based insight. The purpose of this study was to assess the reliability of a triple-camera setup for the measurement of tongue range of motion (ROM) in healthy adults and its feasibility in patients with partial glossectomy. A triple-camera setup was used, and 3D coordinates of the tongue in five standardized tongue positions were obtained in 15 healthy volunteers. Maximum distances between the tip of the tongue and the maxillary midline were calculated. Each participant was recorded twice, and each movie was analysed three times by two separate raters. Intrarater, interrater and test-retest reliability were the main outcome measures. Secondly, feasibility of the method was tested in ten patients treated for oral tongue carcinoma. Intrarater, interrater and test-retest reliability all showed high correlation coefficients of >0.9 in both study groups. All healthy subjects showed perfectly symmetrical tongue ROM. In patients, significant differences in lateral tongue movements were found, due to restricted tongue mobility after surgery. This triple-camera setup is a reliable measurement tool to assess three-dimensional information of tongue ROM. It constitutes an accurate tool for objective grading of reduced tongue mobility after partial glossectomy.

  7. WE-G-BRB-02: The Role of Program Project Grants in Study of 3D Conformal Therapy, Dose Escalation and Motion Management

    SciTech Connect

    Fraass, B.

    2015-06-15

    Over the past 20 years the NIH has funded individual grants, program project grants, and clinical trials that have been instrumental in advancing patient care. The ways that each grant mechanism lends itself to the different phases of translating research into clinical practice will be described. Major technological innovations, such as IMRT and proton therapy, have been advanced with R01-type and P01-type funding and will be discussed. Similarly, the role of program project grants in identifying and addressing key hypotheses on the potential of 3D conformal therapy, normal tissue-guided dose escalation and motion management will be described. An overview will be provided regarding how these technological innovations have been applied to multi-institutional NIH-sponsored trials. Finally, the panel will discuss which research questions should be funded by the NIH to inspire the next advances in radiation therapy. Learning Objectives: (1) Understand the different funding mechanisms of the NIH; (2) learn about research advances that have led to innovation in delivery; (3) review achievements due to NIH-funded program project grants in radiotherapy over the past 20 years; (4) understand example advances achieved with multi-institutional clinical trials. NIH.

  8. Lifetime of inner-shell hole states of Ar (2p) and Kr (3d) using equation-of-motion coupled cluster method

    SciTech Connect

    Ghosh, Aryya; Vaval, Nayana; Pal, Sourav

    2015-07-14

    Auger decay is an efficient ultrafast relaxation process of core-shell- or inner-shell-excited atoms or molecules. Generally, it occurs on femtosecond or even attosecond time scales. Direct measurement of the lifetimes of the Auger process for singly and doubly ionized inner-shell states of an atom or molecule is an extremely difficult task. In this paper, we have applied the highly correlated complex-absorbing-potential equation-of-motion coupled-cluster (CAP-EOMCC) approach, a combination of the CAP and EOMCC approaches, to calculate the lifetimes of the states arising from 2p inner-shell ionization of the Ar atom and 3d inner-shell ionization of the Kr atom. We have also calculated the lifetimes of the Ar²⁺(2p⁻¹3p⁻¹) ¹D, Ar²⁺(2p⁻¹3p⁻¹) ¹S, and Ar²⁺(2p⁻¹3s⁻¹) ¹P doubly ionized states. The predicted results are compared with other theoretical results as well as experimental results available in the literature.
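
    For context, the CAP-EOMCC treatment yields a complex (Siegert) energy for each metastable state, from which the lifetime follows in the standard way:

    \[ E = E_r - i\,\frac{\Gamma}{2}, \qquad \tau = \frac{\hbar}{\Gamma}, \]

    so, for example, a decay width of \Gamma = 0.12 eV corresponds to a lifetime of roughly \hbar/\Gamma \approx 5.5 fs.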

  9. Dynamic Primitives of Motor Behavior

    PubMed Central

    Hogan, Neville; Sternad, Dagmar

    2013-01-01

    We present in outline a theory of sensorimotor control based on dynamic primitives, which we define as attractors. To account for the broad class of human interactive behaviors—especially tool use—we propose three distinct primitives: submovements, oscillations and mechanical impedances, the latter necessary for interaction with objects. Due to fundamental features of the neuromuscular system, most notably its slow response, we argue that encoding in terms of parameterized primitives may be an essential simplification required for learning, performance, and retention of complex skills. Primitives may simultaneously and sequentially be combined to produce observable forces and motions. This may be achieved by defining a virtual trajectory composed of submovements and/or oscillations interacting with impedances. Identifying primitives requires care: in principle, overlapping submovements would be sufficient to compose all observed movements but biological evidence shows that oscillations are a distinct primitive. Conversely, we suggest that kinematic synergies, frequently discussed as primitives of complex actions, may be an emergent consequence of neuromuscular impedance. To illustrate how these dynamic primitives may account for complex actions, we briefly review three types of interactive behaviors: constrained motion, impact tasks, and manipulation of dynamic objects. PMID:23124919

  10. Thromboembolic risk in atrial fibrillation: association between left atrium mechanics and risk scores. A study based on 3D wall-motion tracking technology.

    PubMed

    Islas, Fabián; Olmos, Carmen; Vieira, Catarina; De Agustín, José A; Marcos-Alberca, Pedro; Saltijeral, Adriana; Almería, Carlos; Rodrigo, José L; García Fernández, Miguel A; Macaya, Carlos; Pérez de Isla, Leopoldo

    2015-04-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia and is associated with a significantly high risk of stroke and systemic embolism. The aim of our study was to assess the association between left atrium (LA) mechanics measured by 3D wall-motion tracking (3DWMT) technology and the most common thromboembolic risk scores (CHADS2, CHA2DS2-VASc). A total of 101 consecutive patients referred with permanent AF were included. Conventional two-dimensional (2D) LA parameters and LA mechanics by means of 3DWMT were studied. The association between LA 2D and 3DWMT parameters and both risk scores was evaluated, as well as the correlation with each component of the scores individually. Mean age was 78 ± 10 years. Mean CHADS2 was 2.7 ± 1.3 and mean CHA2DS2-VASc was 4.4 ± 1.7. Values of 2D and 3DWMT LA parameters were: 2D area 26.4 ± 9.7 cm², 2D volume index 49.4 ± 10.1 mL/m², 3DWMT left atrial emptying fraction (LAEF) 15.9 ± 8.4%, longitudinal strain 9.1 ± 4.5% and area strain 14.9 ± 8.8%. Linear regression analysis showed a statistically significant correlation of LA longitudinal strain and LAEF with the CHADS2 and CHA2DS2-VASc scores. For each 10% variation in longitudinal strain, the CHADS2 and CHA2DS2-VASc scores change by 0.7 and 0.8 points, respectively. Left atrial longitudinal strain and emptying fraction assessed by 3DWMT technology correlate with both CHADS2 and CHA2DS2-VASc scores. Each 10% variation in longitudinal strain represents a 0.7- and 0.8-point change in those risk scores, respectively. LA mechanics evaluation might provide additional value to risk scores and could be considered a predictor of stroke in patients with AF. © 2014, Wiley Periodicals, Inc.

  11. WE-G-207-06: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Physical Phantom and Clinical Patient Images

    SciTech Connect

    Dhou, S; Cai, W; Hurwitz, M; Rottmann, J; Myronakis, M; Cifter, F; Berbeco, R; Lewis, J; Williams, C; Mishra, P; Ionascu, D

    2015-06-15

    Purpose: Respiratory-correlated cone-beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient motion patterns and anatomy during treatment, including both intra- and inter-fractional changes. We develop a method to generate patient-specific motion models from 4DCBCT images acquired with existing clinical equipment and to use them to generate time-varying volumetric images (3D fluoroscopic images) representing motion during treatment delivery. Methods: Motion models are derived by deformably registering each 4DCBCT phase to a reference phase, and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by optimizing the resulting PCA coefficients iteratively through comparison of the cone-beam projections simulating kV treatment imaging and digitally reconstructed radiographs generated from the motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error compared to manually defined ground truth positions. Results: 4DCBCT-based motion models were derived and used to generate 3D fluoroscopic images at treatment time. For the patient datasets, the average tumor localization error and the 95th percentile were 1.57 and 3.13 mm, respectively, in a subset of four patient datasets. For the physical phantom datasets, the average tumor localization error and the 95th percentile were 1.14 and 2.78 mm, respectively, in two datasets. 4DCBCT motion models are shown to perform well in the context of generating 3D fluoroscopic images due to their ability to reproduce anatomical changes at treatment time. Conclusion: This study showed the feasibility of deriving 4DCBCT-based motion models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings. 4DCBCT-based motion models were found to account for the 3D non-rigid motion of the patient anatomy during treatment and have the potential
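
    The motion-model construction step (PCA on the phase-to-reference displacement vector fields) can be sketched briefly in Python. The array shapes are assumptions, and the iterative optimization of the PCA coefficients against the kV projections is omitted.

```python
import numpy as np

def build_motion_model(dvfs, n_modes=3):
    """PCA motion model from phase-to-reference displacement vector fields (DVFs).

    dvfs: (n_phases, 3 * n_voxels) array, each row a flattened DVF.
    Returns the mean field and the leading principal components.
    """
    mean = dvfs.mean(axis=0)
    _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, vt[:n_modes]              # each row of vt is one motion mode

def synthesize_dvf(mean, modes, coeffs):
    """Displacement field corresponding to a given set of PCA coefficients."""
    return mean + coeffs @ modes
```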

  12. Automatic techniques for 3D reconstruction of critical workplace body postures from range imaging data

    NASA Astrophysics Data System (ADS)

    Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin

    2013-11-01

    The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.

  13. Primitive Clay.

    ERIC Educational Resources Information Center

    Chorches, Joan

    A five-week unit providing first hand experience with primitive ceramic techniques is described in this curriculum guide, which includes course goals and objectives, a daily schedule of class activities, and handouts for students. The unit features construction of a sawdust kiln as a group problem-solving activity; students work in groups…

  15. 3D surface flow kinematics derived from airborne UAVSAR interferometric synthetic aperture radar to constrain the physical mechanisms controlling landslide motion

    NASA Astrophysics Data System (ADS)

    Delbridge, B. G.; Burgmann, R.; Fielding, E. J.; Hensley, S.; Schulz, W. H.

    2013-12-01

    This project focuses on improving our understanding of the physical mechanisms controlling landslide motion by studying the landslide-wide kinematics of the Slumgullion landslide in southwestern Colorado using interferometric synthetic aperture radar (InSAR) and GPS. The NASA/JPL UAVSAR airborne repeat-pass SAR interferometry system imaged the Slumgullion landslide from 4 look directions on eight flights in 2011 and 2012. Combining the four look directions allows us to extract the full 3-D velocity field of the surface. Observing the full 3-dimensional flow field allows us to extract the full strain tensor (assuming free surface boundary conditions and incompressible flow) since we have both the spatial resolution to take spatial derivatives and full deformation information. COSMO-SkyMed (CSK) high-resolution Spotlight data was also acquired during time intervals overlapping with the UAVSAR one-week pairs, with intervals as short as one day. These observations allow for the quantitative testing of the deformation magnitude and estimated formal errors in the UAVSAR-derived deformation field. We also test the agreement of the deformation at 20 GPS monitoring sites concurrently acquired by the USGS. We also utilize the temporal resolution of real-time GPS acquired by the UC Berkeley Active Tectonics Group during a temporary deployment from July 22nd - August 2nd. By combining this data with the kinematic data we hope to elucidate the response of the landslide to environmental changes such as rainfall, snowmelt, and atmospheric pressure, and consequently the mechanisms controlling the dynamics of the landslide system. To constrain the longer temporal dynamics, interferograms made from pairs of CSK images acquired in 2010, 2011, 2012 and 2013 reveal the slide deformation on a longer timescale by allowing us to measure meters of motion and see the average rates over year-long intervals using pixel offset tracking of the high-resolution SAR amplitude images. The results of
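
    The core geometric step, recovering a 3-D surface velocity from several line-of-sight (LOS) range-change rates, is an overdetermined linear inversion. A minimal numpy sketch follows; the look-vector convention (east, north, up) and the toy numbers are illustrative assumptions, not values from the study.

```python
import numpy as np

def invert_los_to_3d(los_unit_vectors, los_rates):
    """Least-squares 3-D velocity from >= 3 line-of-sight rates.

    los_unit_vectors: (k, 3) unit look vectors (east, north, up components).
    los_rates:        (k,) measured range-change rates along each look.
    """
    v, *_ = np.linalg.lstsq(np.asarray(los_unit_vectors),
                            np.asarray(los_rates), rcond=None)
    return v                                 # (east, north, up) velocity

# toy check with four look directions
looks = np.array([[0.6, 0.0, 0.8], [-0.6, 0.0, 0.8],
                  [0.0, 0.6, 0.8], [0.0, -0.6, 0.8]])
true_v = np.array([0.02, -0.01, 0.005])
print(invert_los_to_3d(looks, looks @ true_v))   # recovers true_v
```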

  16. WE-G-217BCD-05: Fiducial Marker-Based Motion Compensation for the Acquisition of 3D Knee Geometry Under Weight-Bearing Conditions Using a C-Arm CT Scanner.

    PubMed

    Choi, J-H; Keil, A; Maier, A; Pal, S; McWalter, E J; Fahrig, R

    2012-06-01

    Imaging the knee under realistic load-bearing conditions can be carried out in a horizontal plane using a C-arm CT scanner. Human subjects can be scanned in a standing position and acquired data successfully reconstructed. However, reconstructing these data is challenging due to significant artifacts induced by involuntary motion. Here, we propose motion correction methods in 2D and 3D. Four volunteers were scanned for 8 seconds while squatting with ∼30 degree flexion. Eight tantalum fiducial markers suitably attached around the knee were used to track motion. The marker position in each projection was semi-automatically detected. Each marker's static 3D position, which served as a reference to correct temporal motion, was estimated by triangulating each marker's 2D position from 248 projections using known projection matrices. Motion was corrected in 3 ways: 1) 2D projection shifting based on the mean position of markers, 2) 2D projection warping using approximate thin-plate splines, 3) 3D rigid body warping. The original reconstruction was severely motion-corrupted, which made it impossible to distinguish the boundaries of bones. Reconstruction with projection shifting and warping in 2D improved visualization of edges of soft tissue as well as bone. A simple numerical metric of residual bead deviation from the static position was reduced from 3.2 mm to 0.4 mm. The 2D-based methods are inherently limited in that they cannot fully accommodate different 3D movements at different depths from the X-ray source. Reconstruction with 3D warping shows clearer edges and less streak artifact than the 2D methods. The proposed three motion correction methods effectively reduced motion-induced artifacts in the reconstruction and are therefore suitable for weight-bearing scanning. Future work includes scanning patients in a standing position after contrast injection for evaluating the soft tissue structure and constructing 3D finite element models for the estimation of

  17. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  18. 3D and beyond

    NASA Astrophysics Data System (ADS)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow, the flow of gas, water, and blood in the lung, the neurological structure and function, the modeling, and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are to achieve certain objectives through measurements of some objects. For example, in order to improve performance in sports or beauty of a person, we measure the form, dimensions, appearance, and movements.

  19. Utility of real-time prospective motion correction (PROMO) on 3D T1-weighted imaging in automated brain structure measurements

    PubMed Central

    Watanabe, Keita; Kakeda, Shingo; Igata, Natsuki; Watanabe, Rieko; Narimatsu, Hidekuni; Nozaki, Atsushi; Rettmann, Dan; Abe, Osamu; Korogi, Yukunori

    2016-01-01

    PROspective MOtion correction (PROMO) can prevent motion artefacts. The aim of this study was to determine whether brain structure measurements of motion-corrected images with PROMO were reliable and equivalent to conventional images without motion artefacts. The following T1-weighted images were obtained in healthy subjects: (A) resting scans with and without PROMO and (B) two types of motion scans (“side-to-side” and “nodding” motions) with and without PROMO. The total gray matter volumes and cortical thicknesses were significantly decreased in motion scans without PROMO as compared to the resting scans without PROMO (p < 0.05). Conversely, Bland–Altman analysis indicated no bias between motion scans with PROMO, which have good image quality, and resting scans without PROMO. In addition, there was no bias between resting scans with and without PROMO. The use of PROMO facilitated more reliable brain structure measurements in subjects moving during data acquisition. PMID:27917950

  20. Utility of real-time prospective motion correction (PROMO) on 3D T1-weighted imaging in automated brain structure measurements

    NASA Astrophysics Data System (ADS)

    Watanabe, Keita; Kakeda, Shingo; Igata, Natsuki; Watanabe, Rieko; Narimatsu, Hidekuni; Nozaki, Atsushi; Rettmann, Dan; Abe, Osamu; Korogi, Yukunori

    2016-12-01

    PROspective MOtion correction (PROMO) can prevent motion artefacts. The aim of this study was to determine whether brain structure measurements of motion-corrected images with PROMO were reliable and equivalent to conventional images without motion artefacts. The following T1-weighted images were obtained in healthy subjects: (A) resting scans with and without PROMO and (B) two types of motion scans (“side-to-side” and “nodding” motions) with and without PROMO. The total gray matter volumes and cortical thicknesses were significantly decreased in motion scans without PROMO as compared to the resting scans without PROMO (p < 0.05). Conversely, Bland-Altman analysis indicated no bias between motion scans with PROMO, which have good image quality, and resting scans without PROMO. In addition, there was no bias between resting scans with and without PROMO. The use of PROMO facilitated more reliable brain structure measurements in subjects moving during data acquisition.
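
    The Bland-Altman comparison used in both of these reports is straightforward to reproduce. Below is a minimal Python sketch that returns the bias and 95% limits of agreement for paired measurements (e.g., total gray matter volumes with and without PROMO); it is a generic implementation, not the authors' analysis code, and the toy numbers are illustrative only.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement series."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# toy example: volumes (mL) measured with and without motion correction
print(bland_altman([612.0, 598.5, 640.2, 575.9], [610.8, 600.1, 638.7, 577.0]))
```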

  1. Using Averaging-Based Factorization to Compare Seismic Hazard Models Derived from 3D Earthquake Simulations with NGA Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Wang, F.; Jordan, T. H.

    2012-12-01

    Seismic hazard models based on empirical ground motion prediction equations (GMPEs) employ a model-based factorization to account for source, propagation, and path effects. An alternative is to simulate these effects directly using earthquake source models combined with three-dimensional (3D) models of Earth structure. We have developed an averaging-based factorization (ABF) scheme that facilitates the geographically explicit comparison of these two types of seismic hazard models. For any fault source k with epicentral position x, slip spatial and temporal distribution f, and moment magnitude m, we calculate the excitation functions G(s, k, x, m, f) for sites s in a geographical region R, such as 5% damped spectral acceleration at a particular period. Through a sequence of weighted-averaging and normalization operations following a certain hierarchy over f, m, x, k, and s, we uniquely factorize G(s, k, x, m, f) into six components: A, B(s), C(s, k), D(s, k, x), E(s, k, x, m), and F(s, k, x, m, f). Factors for a target model can be divided by those of a reference model to obtain six corresponding factor ratios, or residual factors: a, b(s), c(s, k), d(s, k, x), e(s, k, x, m), and f(s, k, x, m, f). We show that these residual factors characterize differences in basin effects primarily through b(s), distance scaling primarily through c(s, k), and source directivity primarily through d(s, k, x). We illustrate the ABF scheme by comparing the CyberShake Hazard Model (CSHM) for the Los Angeles region (Graves et al. 2010) with the Next Generation Attenuation (NGA) GMPEs modified according to the directivity relations of Spudich and Chiou (2008). Relative to CSHM, all NGA models underestimate the directivity and basin effects. In particular, the NGA models do not account for the coupling between source directivity and basin excitation that substantially enhances the low-frequency seismic hazards in the sedimentary basins of the Los Angeles region. Assuming Cyber
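
    Written out, the hierarchy of averaging and normalization operations yields a factorization of the excitation functions of the schematic form below (a multiplicative decomposition is assumed here, with each residual factor the ratio of the target to the reference component):

    \[ G(s,k,x,m,f) = A \; B(s) \; C(s,k) \; D(s,k,x) \; E(s,k,x,m) \; F(s,k,x,m,f), \qquad a = \frac{A^{\mathrm{target}}}{A^{\mathrm{ref}}}, \quad b(s) = \frac{B^{\mathrm{target}}(s)}{B^{\mathrm{ref}}(s)}, \;\ldots \]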

  2. Quantitative Evaluation of 3D Mouse Behaviors and Motor Function in the Open-Field after Spinal Cord Injury Using Markerless Motion Tracking

    PubMed Central

    Sheets, Alison L.; Lai, Po-Lun; Fisher, Lesley C.; Basso, D. Michele

    2013-01-01

    Thousands of scientists strive to identify cellular mechanisms that could lead to breakthroughs in developing ameliorative treatments for debilitating neural and muscular conditions such as spinal cord injury (SCI). Most studies use rodent models to test hypotheses, and these are all limited by the methods available to evaluate animal motor function. This study’s goal was to develop a behavioral and locomotor assessment system in a murine model of SCI that enables quantitative kinematic measurements to be made automatically in the open-field by applying markerless motion tracking approaches. Three-dimensional movements of eight naïve, five mild, five moderate, and four severe SCI mice were recorded using 10 cameras (100 Hz). Background subtraction was used in each video frame to identify the animal’s silhouette, and the 3D shape at each time was reconstructed using shape-from-silhouette. The reconstructed volume was divided into front and back halves using k-means clustering. The animal’s front Center of Volume (CoV) height and whole-body CoV speed were calculated and used to automatically classify animal behaviors including directed locomotion, exploratory locomotion, meandering, standing, and rearing. More detailed analyses of CoV height, speed, and lateral deviation during directed locomotion revealed behavioral differences and functional impairments in animals with mild, moderate, and severe SCI when compared with naïve animals. Naïve animals displayed the widest variety of behaviors including rearing and crossing the center of the open-field, the fastest speeds, and tallest rear CoV heights. SCI reduced the range of behaviors, and decreased speed (r = .70 p<.005) and rear CoV height (r = .65 p<.01) were significantly correlated with greater lesion size. This markerless tracking approach is a first step toward fundamentally changing how rodent movement studies are conducted. By providing scientists with sensitive, quantitative measurement
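
    The front/back split and centre-of-volume (CoV) measures described above reduce to a few lines of Python. This sketch assumes the occupied voxels of the shape-from-silhouette reconstruction are already available as 3-D coordinates; deciding which cluster is "front" (e.g., from the direction of travel) is left to the caller.

```python
import numpy as np
from sklearn.cluster import KMeans

def front_back_cov(voxels):
    """Split a reconstructed animal volume into two halves with k-means and
    return the centre of volume (CoV) of each half.

    voxels: (n, 3) array of occupied-voxel coordinates.
    """
    labels = KMeans(n_clusters=2, n_init=4).fit_predict(voxels)
    return voxels[labels == 0].mean(axis=0), voxels[labels == 1].mean(axis=0)
```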

  3. KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, Dan Clark, with KSC Boeing, operates the camera for a 3D digital scan of the actuator on the table. There are two actuators per engine on the Shuttle, one for pitch motion and one for yaw motion. The Space Shuttle Main Engine hydraulic servoactuators are used to gimbal the main engine.

    NASA Image and Video Library

    2003-09-03

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, Dan Clark, with KSC Boeing, operates the camera for a 3D digital scan of the actuator on the table. There are two actuators per engine on the Shuttle, one for pitch motion and one for yaw motion. The Space Shuttle Main Engine hydraulic servoactuators are used to gimbal the main engine.

  4. KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, Boeing worker Alden Pitard looks at a 3D digital scan of an actuator. There are two actuators per engine on the Shuttle, one for pitch motion and one for yaw motion. The Space Shuttle Main Engine hydraulic servoactuators are used to gimbal the main engine.

    NASA Image and Video Library

    2003-09-03

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, Boeing worker Alden Pitard looks at a 3D digital scan of an actuator. There are two actuators per engine on the Shuttle, one for pitch motion and one for yaw motion. The Space Shuttle Main Engine hydraulic servoactuators are used to gimbal the main engine.

  5. KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, an actuator is set up on a table for a 3D digital scan. There are two actuators per engine on the Shuttle, one for pitch motion and one for yaw motion. The Space Shuttle Main Engine hydraulic servoactuators are used to gimbal the main engine.

    NASA Image and Video Library

    2003-09-03

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, an actuator is set up on a table for a 3D digital scan. There are two actuators per engine on the Shuttle, one for pitch motion and one for yaw motion. The Space Shuttle Main Engine hydraulic servoactuators are used to gimbal the main engine.

  6. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3d-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  7. TU-F-17A-04: Respiratory Phase-Resolved 3D MRI with Isotropic High Spatial Resolution: Determination of the Average Breathing Motion Pattern for Abdominal Radiotherapy Planning

    SciTech Connect

    Deng, Z; Pang, J; Yang, W; Yue, Y; Tuli, R; Fraass, B; Li, D; Fan, Z

    2014-06-15

    Purpose: To develop a retrospective 4D-MRI technique (respiratory phase-resolved 3D-MRI) for providing an accurate assessment of tumor motion secondary to respiration. Methods: A 3D projection reconstruction (PR) sequence with self-gating (SG) was developed for 4D-MRI on a 3.0T MRI scanner. The respiration-induced shift of the imaging target was recorded by SG signals acquired in the superior-inferior direction every 15 radial projections (i.e. temporal resolution 98 ms). A total of 73000 radial projections obtained in 8 min were retrospectively sorted into 10 time-domain evenly distributed respiratory phases based on the SG information. Ten 3D image sets were then reconstructed offline. The technique was validated on a motion phantom (gadolinium-doped water-filled box, frequency of 10 and 18 cycles/min) and in humans (4 healthy subjects and 2 patients with liver tumors). The imaging protocol included 8-min 4D-MRI followed by 1-min 2D real-time (498 ms/frame) MRI as a reference. Results: The multiphase 3D image sets with isotropic high spatial resolution (1.56 mm) permit flexible image reformatting and visualization. No intra-phase motion-induced blurring was observed. Compared with 2D real-time imaging, 4D-MRI yielded similar motion range (phantom: 10.46 vs. 11.27 mm; healthy subject: 25.20 vs. 17.9 mm; patient: 11.38 vs. 9.30 mm), reasonable displacement difference averaged over the 10 phases (0.74 mm; 3.63 mm; 1.65 mm), and excellent cross-correlation (0.98; 0.96; 0.94) between the two displacement series. Conclusion: Our preliminary study has demonstrated that the 4D-MRI technique can provide high-quality respiratory phase-resolved 3D images that feature: a) isotropic high spatial resolution, b) a fixed scan time of 8 minutes, c) an accurate estimate of average motion pattern, and d) minimal intra-phase motion artifact. This approach has the potential to become a viable alternative solution to assess the impact of breathing on tumor motion and determine appropriate treatment margins
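
    The retrospective sorting step, assigning each acquired projection to one of ten time-domain evenly distributed respiratory phases using the self-gating signal, can be sketched as follows; the peak-detection parameters are placeholders, not the authors' values.

```python
import numpy as np
from scipy.signal import find_peaks

def assign_resp_phases(sg_signal, n_phases=10, min_cycle_samples=10):
    """Assign each self-gating sample a respiratory phase bin (0..n_phases-1)
    by dividing every breathing cycle evenly in time; samples outside complete
    cycles keep the value -1 (unassigned)."""
    peaks, _ = find_peaks(sg_signal, distance=min_cycle_samples)
    phase = np.full(len(sg_signal), -1)
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        idx = np.arange(p0, p1)
        phase[idx] = ((idx - p0) / (p1 - p0) * n_phases).astype(int)
    return phase
```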

  8. QUANTIFYING UNCERTAINTIES IN GROUND MOTION SIMULATIONS FOR SCENARIO EARTHQUAKES ON THE HAYWARD-RODGERS CREEK FAULT SYSTEM USING THE USGS 3D VELOCITY MODEL AND REALISTIC PSEUDODYNAMIC RUPTURE MODELS

    SciTech Connect

    Rodgers, A; Xie, X

    2008-01-09

    This project seeks to compute ground motions for large (M>6.5) scenario earthquakes on the Hayward Fault using realistic pseudodynamic ruptures, the USGS three-dimensional (3D) velocity model and anelastic finite difference simulations on parallel computers. We will attempt to bound ground motions by performing simulations with suites of stochastic rupture models for a given scenario on a given fault segment. The outcome of this effort will provide the average, spread and range of ground motions that can be expected from likely large earthquake scenarios. The resulting ground motions will be based on first-principles calculations and include the effects of slip heterogeneity, fault geometry and directivity, however, they will be band-limited to relatively low-frequency (< 1 Hz).

  9. Comparison of flux motion in type-II superconductors including pinning centers with the shapes of nano-rods and nano-particles by using 3D-TDGL simulation

    NASA Astrophysics Data System (ADS)

    Ito, Shintaro; Ichino, Yusuke; Yoshida, Yutaka

    2015-11-01

    Time-dependent Ginzburg-Landau (TDGL) equations are a very useful method for simulating the motion of flux quanta in type-II superconductors. We constructed a 3D-TDGL simulator and succeeded in simulating the motion of flux quanta in three dimensions. We carried out 3D-TDGL simulations to compare, from the viewpoint of flux motion, two superconductors: one containing only nano-rod-shaped pinning centers and one containing only nano-particle-like pinning centers. In the superconductor containing only nano-rods, the overall motion of a flux quantum was driven by "single-kink" motion. In the superconductor containing the nano-particles, by contrast, the flux quanta were pinned by the nano-particles at various applied magnetic field angles, and no "single-kink" motion occurred. Therefore, nano-particle-like pinning centers are an effective shape for trapping flux quanta over a range of applied magnetic field angles.

  10. Computation of the 3D kinematics in a global frame over a 40m-long pathway using a rolling motion analysis system.

    PubMed

    Begon, Mickaël; Colloud, Floren; Fohanno, Vincent; Bahuaud, Pascal; Monnet, Tony

    2009-12-11

    A rolling motion analysis system has been purpose-built to acquire accurate three-dimensional kinematics of human motion with large displacements. Using this device, the kinematics are collected in a local frame associated with the rolling motion analysis system. The purpose of this paper is to express the local kinematics of a subject walking on a 40 m-long pathway in a global system of co-ordinates. One participant performed five walking trials while being followed by a rolling eight-camera optoelectronic motion analysis system. The kinematics of the trials were reconstructed in the global frame using two different algorithms and 82 markers placed on the floor, organized in two parallel horizontal lines. The maximal error ranged from 0.033 to 0.187 m (<0.5% of the volume diagonal). As a result, this device is accurate enough for acquiring the kinematics of cyclic activities with large displacements in an ecological environment.
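
    The key step, expressing locally measured kinematics in the global frame, amounts to estimating a rigid transform from the floor markers whose global positions are known. A minimal Python sketch of the standard least-squares (Kabsch) solution is shown below; it is not necessarily either of the two algorithms compared in the paper.

```python
import numpy as np

def rigid_transform(local_pts, global_pts):
    """Least-squares rigid transform (R, t) mapping local-frame floor-marker
    coordinates onto their known global positions (Kabsch algorithm)."""
    lc, gc = local_pts.mean(axis=0), global_pts.mean(axis=0)
    h = (local_pts - lc).T @ (global_pts - gc)
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:       # guard against an improper rotation
        vt[-1] *= -1
        r = vt.T @ u.T
    t = gc - r @ lc
    return r, t                    # global ~= r @ local + t
```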

  11. Using subject-specific three-dimensional (3D) anthropometry data in digital human modelling: case study in hand motion simulation.

    PubMed

    Tsao, Liuxing; Ma, Liang

    2016-11-01

    Digital human modelling enables ergonomists and designers to consider ergonomic concerns and design alternatives in a timely and cost-efficient manner in the early stages of design. However, the reliability of the simulation could be limited due to the percentile-based approach used in constructing the digital human model. To enhance the accuracy of the size and shape of the models, we proposed a framework to generate digital human models using three-dimensional (3D) anthropometric data. The 3D scan data from specific subjects' hands were segmented based on the estimated centres of rotation. The segments were then driven in forward kinematics to perform several functional postures. The constructed hand models were then verified, thereby validating the feasibility of the framework. The proposed framework helps generate accurate subject-specific digital human models, which can be utilised to guide product design and workspace arrangement. Practitioner Summary: Subject-specific digital human models can be constructed under the proposed framework based on three-dimensional (3D) anthropometry. This approach enables more reliable digital human simulation to guide product design and workspace arrangement.

  12. Particle Tracking Facilitates Real Time Capable Motion Correction in 2D or 3D Two-Photon Imaging of Neuronal Activity.

    PubMed

    Aghayee, Samira; Winkowski, Daniel E; Bowen, Zachary; Marshall, Erin E; Harrington, Matt J; Kanold, Patrick O; Losert, Wolfgang

    2017-01-01

    The application of 2-photon laser scanning microscopy (TPLSM) techniques to measure the dynamics of cellular calcium signals in populations of neurons is an extremely powerful technique for characterizing neural activity within the central nervous system. The use of TPLSM on awake and behaving subjects promises new insights into how neural circuit elements cooperatively interact to form sensory perceptions and generate behavior. A major challenge in imaging such preparations is unavoidable animal and tissue movement, which leads to shifts in the imaging location (jitter). The presence of image motion can lead to artifacts, especially since quantification of TPLSM images involves analysis of fluctuations in fluorescence intensities for each neuron, determined from small regions of interest (ROIs). Here, we validate a new motion correction approach to compensate for motion of TPLSM images in the superficial layers of auditory cortex of awake mice. We use a nominally uniform fluorescent signal as a secondary signal to complement the dynamic signals from genetically encoded calcium indicators. We tested motion correction for single plane time lapse imaging as well as multiplane (i.e., volume) time lapse imaging of cortical tissue. Our procedure of motion correction relies on locating the brightest neurons and tracking their positions over time using established techniques of particle finding and tracking. We show that our tracking based approach provides subpixel resolution without compromising speed. Unlike most established methods, our algorithm also captures deformations of the field of view and thus can compensate e.g., for rotations. Object tracking based motion correction thus offers an alternative approach for motion correction, one that is well suited for real time spike inference analysis and feedback control, and for correcting for tissue distortions.

  13. Orthogonally combined motion- and diffusion-sensitized driven equilibrium (OC-MDSDE) preparation for vessel signal suppression in 3D turbo spin echo imaging of peripheral nerves in the extremities.

    PubMed

    Cervantes, Barbara; Kirschke, Jan S; Klupp, Elizabeth; Kooijman, Hendrik; Börnert, Peter; Haase, Axel; Rummeny, Ernst J; Karampinos, Dimitrios C

    2017-03-05

    To design a preparation module for vessel signal suppression in MR neurography of the extremities, which causes minimal attenuation of nerve signal and is highly insensitive to eddy currents and motion. The orthogonally combined motion- and diffusion-sensitized driven equilibrium (OC-MDSDE) preparation was proposed, based on the improved motion- and diffusion-sensitized driven equilibrium methods (iMSDE and FC-DSDE, respectively), with specific gradient design and orientation. OC-MDSDE was desensitized against eddy currents using appropriately designed gradient prepulses. The motion sensitivity and vessel signal suppression capability of OC-MDSDE and its components were assessed in vivo in the knee using 3D turbo spin echo (TSE). Nerve-to-vessel signal ratios were measured for iMSDE and OC-MDSDE in 7 subjects. iMSDE was shown to be highly sensitive to motion with increasing flow sensitization. FC-DSDE showed robustness against motion, but resulted in strong nerve signal loss with diffusion gradients oriented parallel to the nerve. OC-MDSDE showed superior vessel suppression compared to iMSDE and FC-DSDE and maintained high nerve signal. Mean nerve-to-vessel signal ratios in 7 subjects were 0.40 ± 0.17 for iMSDE and 0.63 ± 0.37 for OC-MDSDE. OC-MDSDE combined with 3D TSE in the extremities allows high-near-isotropic-resolution imaging of peripheral nerves with reduced vessel contamination and high nerve signal. Magn Reson Med, 2017. © 2017 Wiley Periodicals, Inc. © 2017 International Society for Magnetic Resonance in Medicine.

  14. ShipMo3D Version 3.0 User Manual for Computing Ship Motions in the Time and Frequency Domains

    DTIC Science & Technology

    2012-01-01

    allow a freely manoeuvring ship to be modelled in calm water or in waves. SM3DBuildSeaway builds seaway models with a track ... manoeuvring freely in calm water or in a modelled seaway. Several applications of the ShipMo3D software make predictions of the motions of ... related user ... for the prediction of ship motions in calm water and in waves. Motion predictions are available in the

  15. Accuracy and Precision of a Custom Camera-Based System for 2-D and 3-D Motion Tracking during Speech and Nonspeech Motor Tasks

    ERIC Educational Resources Information Center

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable…

  17. Predation by the Dwarf Seahorse on Copepods: Quantifying Motion and Flows Using 3D High Speed Digital Holographic Cinematography - When Seahorses Attack!

    NASA Astrophysics Data System (ADS)

    Gemmell, Brad; Sheng, Jian; Buskey, Ed

    2008-11-01

    Copepods are an important planktonic food source for most of the world's fish species. This high predation pressure has led copepods to evolve an extremely effective escape response, with reaction times to hydrodynamic disturbances of less than 4 ms and escape speeds of over 500 body lengths per second. Using 3D high speed digital holographic cinematography (up to 2000 frames per second) we elucidate the role of entrainment flow fields generated by a natural visual predator, the dwarf seahorse (Hippocampus zosterae) during attacks on its prey, Acartia tonsa. Using phytoplankton as a tracer, we recorded and reconstructed 3D flow fields around the head of the seahorse and its prey during both successful and unsuccessful attacks to better understand how some attacks lead to capture with little or no detection from the copepod while others result in failed attacks. Attacks start with a slow approach to minimize the hydro-mechanical disturbance which is used by copepods to detect the approach of a potential predator. Successful attacks result in the seahorse using its pipette-like mouth to create suction faster than the copepod's response latency. As these characteristic scales of entrainment increase, a successful escape becomes more likely.

  18. Comparison of 3D Joint Angles Measured With the Kinect 2.0 Skeletal Tracker Versus a Marker-Based Motion Capture System.

    PubMed

    Guess, Trent M; Razu, Swithin; Jahandar, Amirhossein; Skubic, Marjorie; Huo, Zhiyu

    2017-04-01

    The Microsoft Kinect is becoming a widely used tool for inexpensive, portable measurement of human motion, with the potential to support clinical assessments of performance and function. In this study, the relative osteokinematic Cardan joint angles of the hip and knee were calculated using the Kinect 2.0 skeletal tracker. The pelvis segments of the default skeletal model were reoriented and 3-dimensional joint angles were compared with a marker-based system during a drop vertical jump and a hip abduction motion. Good agreement between the Kinect and the marker-based system was found for knee (correlation coefficient = 0.96, cycle RMS error = 11°, peak flexion difference = 3°) and hip (correlation coefficient = 0.97, cycle RMS error = 12°, peak flexion difference = 12°) flexion during the landing phase of the drop vertical jump and for hip abduction/adduction (correlation coefficient = 0.99, cycle RMS error = 7°, peak flexion difference = 8°) during isolated hip motion. Nonsagittal hip and knee angles did not correlate well for the drop vertical jump. When limited to activities in the optimal capture volume and with simple modifications to the skeletal model, the Kinect 2.0 skeletal tracker can provide limited 3-dimensional kinematic information of the lower limbs that may be useful for functional movement assessment.
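
    As a hedged illustration of the joint-angle computation compared above, the sketch below extracts Cardan angles from the relative rotation between a proximal and a distal segment; the X-Y-Z sequence shown is a common convention and not necessarily the exact sequence used in the study.

```python
import numpy as np

def cardan_xyz(R):
    """Cardan angles (degrees) assuming R = Rx(a) @ Ry(b) @ Rz(c)."""
    a = np.degrees(np.arctan2(-R[1, 2], R[2, 2]))
    b = np.degrees(np.arcsin(np.clip(R[0, 2], -1.0, 1.0)))
    c = np.degrees(np.arctan2(-R[0, 1], R[0, 0]))
    return a, b, c

def joint_angles(R_proximal, R_distal):
    """Osteokinematic angles of the distal segment relative to the proximal one."""
    R_joint = R_proximal.T @ R_distal
    return cardan_xyz(R_joint)

# Example: a pure 30-degree rotation about the proximal x-axis (flexion-like)
theta = np.radians(30.0)
Rx = np.array([[1, 0, 0],
               [0, np.cos(theta), -np.sin(theta)],
               [0, np.sin(theta),  np.cos(theta)]])
print(joint_angles(np.eye(3), Rx))   # -> approximately (30.0, 0.0, 0.0)
```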

  19. Desynchronization of Cartesian k-space sampling and periodic motion for improved retrospectively self-gated 3D lung MRI using quasi-random numbers.

    PubMed

    Weick, Stefan; Völker, Michael; Hemberger, Kathrin; Meyer, Cord; Ehses, Philipp; Polat, Bülent; Breuer, Felix A; Blaimer, Martin; Fink, Christian; Schad, Lothar R; Sauer, Otto A; Flentje, Michael; Jakob, Peter M

    2017-02-01

    To demonstrate that desynchronization between Cartesian k-space sampling and periodic motion in free-breathing lung MRI improves the robustness and efficiency of retrospective respiratory self-gating. Desynchronization was accomplished by reordering the phase (ky) and partition (kz) encoding of a three-dimensional FLASH sequence according to two-dimensional, quasi-random (QR) numbers. For retrospective respiratory self-gating, the k-space center signal (DC signal) was acquired separately after each encoded k-space line. QR sampling results in a uniform distribution of k-space lines after gating. Missing lines resulting from the gating process were reconstructed using iterative GRAPPA. Volunteer measurements were performed to compare quasi-random with conventional sampling, and patient measurements were performed to demonstrate the feasibility of QR sampling in a clinical setting. The uniformly sampled k-space after retrospective gating allows for a more stable iterative GRAPPA reconstruction and improved ghost-artifact reduction compared with conventional sampling. It is shown that this stability can be used either to reduce the total scan time or to reconstruct artifact-free data sets in different respiratory phases, both resulting in an improved efficiency of retrospective respiratory self-gating. QR sampling leads to desynchronization between repeated data acquisition and periodic respiratory motion, which results in improved motion-artifact reduction in a shorter scan time. Magn Reson Med 77:787-793, 2017.
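
    A minimal sketch of one way to realize such a two-dimensional quasi-random reordering of the phase (ky) and partition (kz) encodes, here with a Halton sequence; the matrix sizes and the specific low-discrepancy generator are assumptions for illustration, not the authors' sequence implementation.

```python
import numpy as np

def halton(index, base):
    """Radical-inverse (van der Corput / Halton) value for a 1-based index."""
    f, result, i = 1.0, 0.0, index
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def quasi_random_order(n_ky, n_kz, n_samples):
    """Map 2D Halton points onto unique (ky, kz) indices in acquisition order."""
    seen, order, i = set(), [], 1
    while len(order) < n_samples:
        ky = int(halton(i, 2) * n_ky)
        kz = int(halton(i, 3) * n_kz)
        if (ky, kz) not in seen:
            seen.add((ky, kz))
            order.append((ky, kz))
        i += 1
    return order

# Example: first few encoding steps for a 128 x 64 phase/partition grid
print(quasi_random_order(128, 64, 5))
```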

  20. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software.

    PubMed

    Jackson, Brandon E; Evangelista, Dennis J; Ray, Dylan D; Hedrick, Tyson L

    2016-09-15

    Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts.

  1. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    PubMed Central

    Evangelista, Dennis J.; Ray, Dylan D.; Hedrick, Tyson L.

    2016-01-01

    Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts. PMID:27444791
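
    The referenced workflow ships its own calibration and reconstruction tools; purely as an illustration of the core step behind multi-camera 3D tracking, the sketch below triangulates a single point from two calibrated views with linear (DLT) triangulation. The camera matrices and coordinates are synthetic assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the same point in each view.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean coordinates

# Example with two synthetic cameras observing the point (0.1, 0.2, 2.0)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # reference camera
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # shifted camera
X_true = np.array([0.1, 0.2, 2.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))    # -> approximately [0.1, 0.2, 2.0]
```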

  2. Assessment of 3D T2-weighted high-sampling-efficiency technique (SPACE) for detection of cerebellar tonsillar motion: new useful sign for Chiari I malformation.

    PubMed

    Ucar, Murat; Tokgoz, Nil; Koc, Ali Murat; Kilic, Koray; Borcek, Alp Ozgun; Oner, Ali Yusuf; Kalkan, Gokalp; Akkan, Koray

    2015-01-01

    To describe the tonsillar blackout sign (TBS) on three-dimensional (3D)-SPACE, evaluate its performance as a diagnostic marker for identifying Chiari malformation (CM1), and investigate its role in differentiating symptomatic and asymptomatic CM1. One hundred fifty-six patients were divided into two groups based on caudal displacement of the cerebellar tonsils: CM1 (Group I) and non-CM1 (Group II). Group I was subclassified as symptomatic and asymptomatic by a neurosurgeon. Two radiologists evaluated TBS and cerebrospinal fluid flow abnormality. All subjects presenting TBS had CM1. The difference in the presence of TBS between Group I and Group II was highly significant (P < .001). Grading of TBS in symptomatic patients was significantly higher than that in asymptomatic patients (P < .001). TBS is highly suggestive of CM1 and potentially useful in differentiating symptomatic and asymptomatic CM1.

  3. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D models of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects into physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  4. Modeling the effects of source and path heterogeneity on ground motions of great earthquakes on the Cascadia Subduction Zone Using 3D simulations

    USGS Publications Warehouse

    Delorey, Andrew; Frankel, Arthur; Liu, Pengcheng; Stephenson, William J.

    2014-01-01

    We ran finite-difference earthquake simulations for great subduction zone earthquakes in Cascadia to model the effects of source and path heterogeneity for the purpose of improving strong-motion predictions. We developed a rupture model for large subduction zone earthquakes based on a k−2 slip spectrum and scale-dependent rise times by representing the slip distribution as the sum of normal modes of a vibrating membrane. Finite source and path effects were important in determining the distribution of strong motions through the locations of the hypocenter, subevents, and crustal structures like sedimentary basins. Some regions in Cascadia appear to be at greater risk than others during an event due to the geometry of the Cascadia fault zone relative to the coast and populated regions. The southern Oregon coast appears to have increased risk because it is closer to the locked zone of the Cascadia fault than other coastal areas and is also in the path of directivity amplification from any rupture propagating north to south in that part of the subduction zone, and the basins in the Puget Sound area are efficiently amplified by both north- and south-propagating ruptures off the coast of western Washington. We find that the median spectral accelerations at 5 s period from the simulations are similar to those of the Zhao et al. (2006) ground-motion prediction equation, although our simulations predict higher amplitudes near the region of greatest slip and in the sedimentary basins, such as the Seattle basin.
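
    One common way to realize a k−2 slip spectrum, sketched below under assumed parameters (grid size, corner wavenumber, normalization), is to shape random phases with a k−2 amplitude falloff in the Fourier domain; this only illustrates the general idea and is not the rupture generator used in the study.

```python
import numpy as np

def k_squared_slip(nx, nz, dx=1.0, corner=0.05, seed=0):
    """Random slip field whose amplitude spectrum falls off as k**-2
    beyond a corner wavenumber (illustrative k^-2 rupture model)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx)
    kz = np.fft.fftfreq(nz, d=dx)
    kmag = np.sqrt(kx[None, :] ** 2 + kz[:, None] ** 2)
    amp = 1.0 / (1.0 + (kmag / corner) ** 2)      # flat below the corner, k^-2 above
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))
    slip = np.real(np.fft.ifft2(amp * phase))
    slip -= slip.min()                            # keep slip non-negative
    return slip / slip.mean()                     # normalize mean slip to 1

slip = k_squared_slip(nx=256, nz=128)
print(slip.shape, slip.mean())
```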

  5. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation

    PubMed Central

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-01-01

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator that has a multiple-shape-memory effect and is able to perform complex motion driven by two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability could only be realized with existing actuator technologies by using multiple actuators or another robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator with multiple degrees of freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would otherwise be difficult to obtain with a single actuator. To the best of the authors’ knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability. PMID:27080134

  6. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation.

    PubMed

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-04-15

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator that has a multiple-shape-memory effect and is able to perform complex motion driven by two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability could only be realized with existing actuator technologies by using multiple actuators or another robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator with multiple degrees of freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would otherwise be difficult to obtain with a single actuator. To the best of the authors' knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability.

  7. Application of recursive Gibbs-Appell formulation in deriving the equations of motion of N-viscoelastic robotic manipulators in 3D space using Timoshenko Beam Theory

    NASA Astrophysics Data System (ADS)

    Korayem, M. H.; Shafei, A. M.

    2013-02-01

    The goal of this paper is to describe the application of the Gibbs-Appell (G-A) formulation and the assumed modes method to the mathematical modeling of N-viscoelastic-link manipulators. The paper's focus is on obtaining accurate and complete equations of motion that encompass the most relevant structural properties of lightweight elastic manipulators. In this study, two important damping mechanisms, namely the structural viscoelasticity (Kelvin-Voigt) effect (as internal damping) and the viscous air effect (as external damping), have been considered. To include the effects of shear and rotational inertia, the assumptions of Timoshenko beam theory (TBT) have been applied. Gravity, torsion, and longitudinal elongation effects have also been included in the formulations. To systematically derive the equations of motion and improve the computational efficiency, a recursive algorithm has been used in the modeling of the system. In this algorithm, all the mathematical operations are carried out by only 3×3 and 3×1 matrices. Finally, a computational simulation for a manipulator with two elastic links is performed in order to verify the proposed method.

  8. Advantages of fibre lasers in 3D metal cutting and welding applications supported by a 'beam in motion (BIM)' beam delivery system

    NASA Astrophysics Data System (ADS)

    Scheller, Torsten; Bastick, André; Griebel, Martin

    2012-03-01

    Modern laser technology is continuously opening up new fields of application. Driven by the development of increasingly efficient laser sources, the technology is successfully entering classical applications such as 3D cutting and welding of metals. Laser manufacturing is key especially in lightweight applications in the automotive industry: only with this technology could reduced welding widths be realised, as well as efficient machining of aluminium and abrasion-free machining of hardened steel. The paper compares the operation of different laser types in metal machining with regard to wavelength, laser power, laser brilliance, process speed, and welding depth, to estimate the best use of single-mode or multi-mode lasers in this field of application. The experimental results are illustrated with samples of applied parts. In addition, a correlation between the process and the achieved mechanical properties is made. For this application, JENOPTIK Automatisierungstechnik GmbH uses the BIM beam control system in its machines, which is the first to realize a fully integrated combination of beam control and robot. The wide performance and wavelength range of the laser radiation that can be transmitted opens up diverse possibilities of application and makes BIM a universal tool.

  9. A 3d-3d appetizer

    NASA Astrophysics Data System (ADS)

    Pei, Du; Ye, Ke

    2016-11-01

    We test the 3d-3d correspondence for theories that are labeled by Lens spaces. We find a full agreement between the index of the 3d N=2 "Lens space theory" T[L(p, 1)] and the partition function of complex Chern-Simons theory on L(p, 1). In particular, for p = 1, we show how the familiar S^3 partition function of Chern-Simons theory arises from the index of a free theory. For large p, we find that the index of T[L(p, 1)] becomes a constant independent of p. In addition, we study T[L(p, 1)] on the squashed three-sphere S_b^3. This enables us to see clearly, at the level of the partition function, to what extent G_ℂ complex Chern-Simons theory can be thought of as two copies of Chern-Simons theory with compact gauge group G.

  10. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  11. 3d-3d correspondence revisited

    SciTech Connect

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  12. 3D Data Acquisition Platform for Human Activity Understanding

    DTIC Science & Technology

    2016-03-02

    In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross-validate ... multimodality data acquisition, and address fundamental research problems of representation and invariant description of 3D data, human motion modeling and ...

  13. Performance assessment of HIFU lesion detection by Harmonic Motion Imaging for Focused Ultrasound (HMIFU): A 3D finite-element-based framework with experimental validation

    PubMed Central

    Hou, Gary Y.; Luo, Jianwen; Marquet, Fabrice; Maleke, Caroline; Vappou, Jonathan; Konofagou, Elisa E.

    2014-01-01

    Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a novel high-intensity focused ultrasound (HIFU) therapy monitoring method with feasibilities demonstrated in vitro, ex vivo and in vivo. Its principle is based on Amplitude-modulated (AM) - Harmonic Motion Imaging (HMI), an oscillatory radiation force used for imaging the tissue mechanical response during thermal ablation. In this study, a theoretical framework of HMIFU is presented, comprising a customized nonlinear wave propagation model, a finite-element (FE) analysis module, and an image-formation model. The objective of this study is to develop such a framework in order to 1) assess the fundamental performance of HMIFU in detecting HIFU lesions based on the change in tissue apparent elasticity, i.e., the increasing Young's modulus, and the HIFU lesion size with respect to the HIFU exposure time and 2) validate the simulation findings ex vivo. The same HMI and HMIFU parameters as in the experimental studies were used, i.e., 4.5-MHz HIFU frequency and 25 Hz AM frequency. For a lesion-to-background Young's modulus ratio of 3, 6, and 9, the FE and estimated HMI displacement ratios were equal to 1.83, 3.69, 5.39 and 1.65, 3.19, 4.59, respectively. In experiments, the HMI displacement followed a similar increasing trend of 1.19, 1.28, and 1.78 at 10-s, 20-s, and 30-s HIFU exposure, respectively. In addition, moderate agreement in lesion size growth was also found in both simulations (16.2, 73.1 and 334.7 mm2) and experiments (26.2, 94.2 and 206.2 mm2). Therefore, the feasibility of HMIFU for HIFU lesion detection based on the underlying tissue elasticity changes was verified through the developed theoretical framework, i.e., validation of the fundamental performance of the HMIFU system for lesion detection, localization and quantification, was demonstrated both theoretically and ex vivo. PMID:22036637

  14. Performance assessment of HIFU lesion detection by harmonic motion imaging for focused ultrasound (HMIFU): a 3-D finite-element-based framework with experimental validation.

    PubMed

    Hou, Gary Y; Luo, Jianwen; Marquet, Fabrice; Maleke, Caroline; Vappou, Jonathan; Konofagou, Elisa E

    2011-12-01

    Harmonic motion imaging for focused ultrasound (HMIFU) is a novel high-intensity focused ultrasound (HIFU) therapy monitoring method with feasibilities demonstrated in vitro, ex vivo and in vivo. Its principle is based on amplitude-modulated (AM) - harmonic motion imaging (HMI), an oscillatory radiation force used for imaging the tissue mechanical response during thermal ablation. In this study, a theoretical framework of HMIFU is presented, comprising a customized nonlinear wave propagation model, a finite-element (FE) analysis module and an image-formation model. The objective of this study is to develop such a framework to (1) assess the fundamental performance of HMIFU in detecting HIFU lesions based on the change in tissue apparent elasticity, i.e., the increasing Young's modulus, and the HIFU lesion size with respect to the HIFU exposure time and (2) validate the simulation findings ex vivo. The same HMI and HMIFU parameters as in the experimental studies were used, i.e., 4.5-MHz HIFU frequency and 25 Hz AM frequency. For a lesion-to-background Young's modulus ratio of 3, 6 and 9, the FE and estimated HMI displacement ratios were equal to 1.83, 3.69 and 5.39 and 1.65, 3.19 and 4.59, respectively. In experiments, the HMI displacement followed a similar increasing trend of 1.19, 1.28 and 1.78 at 10-s, 20-s and 30-s HIFU exposure, respectively. In addition, moderate agreement in lesion size growth was found in both simulations (16.2, 73.1 and 334.7 mm(2)) and experiments (26.2, 94.2 and 206.2 mm(2)). Therefore, the feasibility of HMIFU for HIFU lesion detection based on the underlying tissue elasticity changes was verified through the developed theoretical framework, i.e., validation of the fundamental performance of the HMIFU system for lesion detection, localization and quantification, was demonstrated both theoretically and ex vivo.
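
    As a simple illustration of the amplitude-modulated excitation underlying HMI, the sketch below builds an AM drive signal from the two frequencies quoted in the abstract (4.5 MHz carrier, 25 Hz modulation); the sampling rate and duration are arbitrary assumptions.

```python
import numpy as np

# Only the two frequencies below come from the abstract; the rest are assumed.
f_carrier = 4.5e6          # HIFU carrier frequency (Hz)
f_am = 25.0                # amplitude-modulation frequency (Hz)
fs = 20e6                  # sampling rate for this illustration (Hz)
t = np.arange(0.0, 0.04, 1.0 / fs)

envelope = 0.5 * (1.0 + np.cos(2.0 * np.pi * f_am * t))   # slow AM envelope
signal = envelope * np.sin(2.0 * np.pi * f_carrier * t)   # AM HIFU drive signal

# The acoustic radiation force tracks the local intensity (roughly the squared
# envelope), so the tissue is pushed at the slow harmonic-motion rate rather
# than at the megahertz carrier frequency.
print(signal.shape)
```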

  15. The application of 3D Zernike moments for the description of "model-free" molecular structure, functional motion, and structural reliability.

    PubMed

    Grandison, Scott; Roberts, Carl; Morris, Richard J

    2009-03-01

    Protein structures are not static entities consisting of equally well-determined atomic coordinates. Proteins undergo continuous motion, and as catalytic machines, these movements can be of high relevance for understanding function. In addition to this strong biological motivation for considering shape changes is the necessity to correctly capture different levels of detail and error in protein structures. Some parts of a structural model are often poorly defined, and the atomic displacement parameters provide an excellent means to characterize the confidence in an atom's spatial coordinates. A mathematical framework for studying these shape changes, and handling positional variance is therefore of high importance. We present an approach for capturing various protein structure properties in a concise mathematical framework that allows us to compare features in a highly efficient manner. We demonstrate how three-dimensional Zernike moments can be employed to describe functions, not only on the surface of a protein but throughout the entire molecule. A number of proof-of-principle examples are given which demonstrate how this approach may be used in practice for the representation of movement and uncertainty.

  16. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy

    PubMed Central

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets is taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously at multiple frequencies. The results by the present method showed good agreement with the measurements by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366
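
    The sketch below illustrates sub-pixel target localization in a similar spirit: a coarse peak from cross-correlation refined with a parabolic fit of the correlation surface. This is a simplified stand-in for the least-square image matching used in the paper; all names and the synthetic example are assumptions.

```python
import numpy as np
from scipy.signal import correlate2d

def locate_subpixel(image, template):
    """Coarse peak by cross-correlation, refined to sub-pixel accuracy
    with a 1D parabolic fit along each axis."""
    img = image - image.mean()
    tpl = template - template.mean()
    corr = correlate2d(img, tpl, mode='valid')
    r, c = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic(f_m, f_0, f_p):
        denom = f_m - 2.0 * f_0 + f_p
        return 0.0 if denom == 0 else 0.5 * (f_m - f_p) / denom

    dr = parabolic(corr[r - 1, c], corr[r, c], corr[r + 1, c]) if 0 < r < corr.shape[0] - 1 else 0.0
    dc = parabolic(corr[r, c - 1], corr[r, c], corr[r, c + 1]) if 0 < c < corr.shape[1] - 1 else 0.0
    # Sub-pixel offset of the template's top-left corner within the image
    return r + dr, c + dc

# Example: a bright Gaussian spot placed at (20.3, 35.7) in a synthetic image
yy, xx = np.mgrid[0:64, 0:64]
image = np.exp(-((yy - 20.3) ** 2 + (xx - 35.7) ** 2) / 8.0)
ty, tx = np.mgrid[0:9, 0:9]
template = np.exp(-((ty - 4) ** 2 + (tx - 4) ** 2) / 8.0)
print(locate_subpixel(image, template))  # near (16.3, 31.7): spot minus template half-size
```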

  17. Structure from Motion Photogrammetry and Micro X-Ray Computed Tomography 3-D Reconstruction Data Fusion for Non-Destructive Conservation Documentation of Lunar Samples

    NASA Technical Reports Server (NTRS)

    Beaulieu, K. R.; Blumenfeld, E. H.; Liddle, D. A.; Oshel, E. R.; Evans, C. A.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.

    2017-01-01

    Our team is developing a modern, cross-disciplinary approach to documentation and preservation of astromaterials, specifically lunar and meteorite samples stored at the Johnson Space Center (JSC) Lunar Sample Laboratory Facility. Apollo Lunar Sample 60639, collected as part of rake sample 60610 during the 3rd Extra-Vehicular Activity of the Apollo 16 mission in 1972, served as the first NASA-preserved lunar sample to be examined by our team in the development of a novel approach to internal and external sample visualization. Apollo Sample 60639 is classified as a breccia with a glass-coated side and pristine mare basalt and anorthosite clasts. The aim was to accurately register a 3-dimensional Micro X-Ray Computed Tomography (XCT)-derived internal composition data set and a Structure-From-Motion (SFM) Photogrammetry-derived high-fidelity, textured external polygonal model of Apollo Sample 60639. The developed process provided the means for accurate, comprehensive, non-destructive visualization of NASA's heritage lunar samples. The data products, to be ultimately served via an end-user web interface, will allow researchers and the public to interact with the unique heritage samples, providing a platform to "slice through" a photo-realistic rendering of a sample to analyze both its external visual and internal composition simultaneously.

  18. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy.

    PubMed

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-03-11

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets is taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously at multiple frequencies. The results by the present method showed good agreement with the measurements by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility.

  19. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has developed a personal method of initiation to 3D creation based on the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subjects? For whom?

  20. Refined 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Alday, Luis F.; Genolini, Pietro Benetti; Bullimore, Mathew; van Loon, Mark

    2017-04-01

    We explore aspects of the correspondence between Seifert 3-manifolds and 3d N = 2 supersymmetric theories with a distinguished abelian flavour symmetry. We give a prescription for computing the squashed three-sphere partition functions of such 3d N = 2 theories constructed from boundary conditions and interfaces in a 4d N = 2* theory, mirroring the construction of Seifert manifold invariants via Dehn surgery. This is extended to include links in the Seifert manifold by the insertion of supersymmetric Wilson-'t Hooft loops in the 4d N = 2* theory. In the presence of a mass parameter c for the distinguished flavour symmetry, we recover aspects of refined Chern-Simons theory with complex gauge group, and in particular construct an analytic continuation of the S-matrix of refined Chern-Simons theory.

  1. A 3d-3d appetizer

    DOE PAGES

    Pei, Du; Ye, Ke

    2016-11-02

    Here, we test the 3d-3d correspondence for theories that are labeled by Lens spaces. We find a full agreement between the index of the 3d N=2 "Lens space theory" T[L(p, 1)] and the partition function of complex Chern-Simons theory on L(p, 1). In particular, for p = 1, we show how the familiar S^3 partition function of Chern-Simons theory arises from the index of a free theory. For large p, we find that the index of T[L(p, 1)] becomes a constant independent of p. In addition, we study T[L(p, 1)] on the squashed three-sphere S_b^3. This enables us to see clearly, at the level of the partition function, to what extent G_ℂ complex Chern-Simons theory can be thought of as two copies of Chern-Simons theory with compact gauge group G.

  2. A 3d-3d appetizer

    SciTech Connect

    Pei, Du; Ye, Ke

    2016-11-02

    Here, we test the 3d-3d correspondence for theories that are labeled by Lens spaces. We find a full agreement between the index of the 3d N=2 "Lens space theory" T[L(p, 1)] and the partition function of complex Chern-Simons theory on L(p, 1). In particular, for p = 1, we show how the familiar S^3 partition function of Chern-Simons theory arises from the index of a free theory. For large p, we find that the index of T[L(p, 1)] becomes a constant independent of p. In addition, we study T[L(p, 1)] on the squashed three-sphere S_b^3. This enables us to see clearly, at the level of the partition function, to what extent G_ℂ complex Chern-Simons theory can be thought of as two copies of Chern-Simons theory with compact gauge group G.

  3. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  5. Diamond in 3-D

    NASA Image and Video Library

    2004-08-20

    This 3-D microscopic imager mosaic of a target area on a rock called Diamond Jenness was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time. 3D glasses are necessary.

  6. 3D Printed Multimaterial Microfluidic Valve

    PubMed Central

    Patrick, William G.; Sharma, Sunanda; Kong, David S.; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics. PMID:27525809

  7. 3D Plasmon Ruler

    SciTech Connect

    2011-01-01

    In this animation of a 3D plasmon ruler, the plasmonic assembly acts as a transducer to deliver optical information about the structural dynamics of an attached protein. (courtesy of Paul Alivisatos group)

  8. Prominent Rocks - 3-D

    NASA Image and Video Library

    1997-07-13

    Many prominent rocks near the Sagan Memorial Station are featured in this image from NASA's Mars Pathfinder. Shark, Half-Dome, and Pumpkin are at center. 3D glasses are necessary to identify surface detail.

  9. 3D Laser System

    NASA Image and Video Library

    2015-09-16

    NASA Glenn's Icing Research Tunnel 3D Laser System used for digitizing ice shapes created in the wind tunnel. The ice shapes are later utilized for characterization, analysis, and software development.

  10. AE3D

    SciTech Connect

    Spong, Donald A

    2016-06-20

    AE3D solves for the shear Alfven eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model, and sound wave coupling effects are not currently included.

  11. Topology dictionary for 3D video understanding.

    PubMed

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary.
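
    The Reeb-graph extraction itself is beyond a short example, but the Markov motion graph over topology-class labels can be sketched as a simple transition-matrix estimate from a per-frame label sequence; the labels and class count below are made up for illustration.

```python
import numpy as np

def transition_matrix(labels, n_classes):
    """Row-stochastic Markov transition matrix estimated from a label sequence."""
    counts = np.zeros((n_classes, n_classes))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # avoid division by zero for unseen states
    return counts / row_sums

# Example: frame-by-frame topology class labels (e.g., 0 = standing, 1 = arms raised)
labels = [0, 0, 0, 1, 1, 0, 0, 1, 1, 1]
print(transition_matrix(labels, n_classes=2))
```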

  12. Dynamic primitives in the control of locomotion

    PubMed Central

    Hogan, Neville; Sternad, Dagmar

    2013-01-01

    Humans achieve locomotor dexterity that far exceeds the capability of modern robots, yet this is achieved despite slower actuators, imprecise sensors, and vastly slower communication. We propose that this spectacular performance arises from encoding motor commands in terms of dynamic primitives. We propose three primitives as a foundation for a comprehensive theoretical framework that can embrace a wide range of upper- and lower-limb behaviors. Building on previous work that suggested discrete and rhythmic movements as elementary dynamic behaviors, we define submovements and oscillations: as discrete movements cannot be combined with sufficient flexibility, we argue that suitably-defined submovements are primitives. As the term “rhythmic” may be ambiguous, we define oscillations as the corresponding class of primitives. We further propose mechanical impedances as a third class of dynamic primitives, necessary for interaction with the physical environment. Combination of these three classes of primitive requires care. One approach is through a generalized equivalent network: a virtual trajectory composed of simultaneous and/or sequential submovements and/or oscillations that interacts with mechanical impedances to produce observable forces and motions. Reliable experimental identification of these dynamic primitives presents challenges: identification of mechanical impedances is exquisitely sensitive to assumptions about their dynamic structure; identification of submovements and oscillations is sensitive to their assumed form and to details of the algorithm used to extract them. Some methods to address these challenges are presented. Some implications of this theoretical framework for locomotor rehabilitation are considered. PMID:23801959
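
    As a hedged illustration of the submovement primitive, the sketch below uses a minimum-jerk speed profile, one common parameterization of a discrete submovement, and sums two overlapping submovements into a composite profile; the framework above is more general than this particular functional form.

```python
import numpy as np

def min_jerk_submovement(t, t0, duration, amplitude):
    """Speed profile of a minimum-jerk submovement starting at t0 (illustrative)."""
    tau = np.clip((t - t0) / duration, 0.0, 1.0)
    return amplitude / duration * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

# Two overlapping submovements summed into one composite speed profile
t = np.linspace(0.0, 2.0, 400)
speed = (min_jerk_submovement(t, 0.2, 0.8, 10.0)
         + min_jerk_submovement(t, 0.7, 0.9, 5.0))
distance = float(np.sum(speed) * (t[1] - t[0]))
print(round(distance, 2))   # total distance is approximately 10 + 5 = 15
```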

  13. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32  ×  32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra—and inter-observer variability.

  14. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-07

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.
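
    A heavily simplified sketch of delay-and-sum beamforming of a single voxel for a diverging transmit wave from a virtual source behind a matrix array, in the spirit of the approach described above; the geometry conventions, sampling rate, and time reference are assumptions, and a real implementation (apodization, interpolation, GPU batching) is far more involved.

```python
import numpy as np

def das_voxel(rf, elem_pos, voxel, virtual_src, c=1540.0, fs=10e6):
    """Delay-and-sum value of one voxel from channel RF data.

    rf          : (n_elements, n_samples) received RF channel data
    elem_pos    : (n_elements, 3) element positions (m)
    voxel       : (3,) voxel position (m)
    virtual_src : (3,) virtual source behind the array for the diverging wave (m)
    """
    # Transmit time: virtual source to voxel, referenced (by assumption) to the
    # instant the diverging wavefront crosses the array plane.
    t_tx = (np.linalg.norm(voxel - virtual_src) - np.linalg.norm(virtual_src)) / c
    # Receive time: voxel back to each element.
    t_rx = np.linalg.norm(elem_pos - voxel, axis=1) / c
    idx = np.round((t_tx + t_rx) * fs).astype(int)
    valid = (idx >= 0) & (idx < rf.shape[1])
    return rf[np.arange(rf.shape[0])[valid], idx[valid]].sum()

# Tiny synthetic call: 4 elements, 2000 samples of zeros
rf = np.zeros((4, 2000))
elems = np.array([[x, 0.0, 0.0] for x in (-1.5e-3, -0.5e-3, 0.5e-3, 1.5e-3)])
print(das_voxel(rf, elems, voxel=np.array([0.0, 0.0, 0.03]),
                virtual_src=np.array([0.0, 0.0, -0.01])))
```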

  15. Intra-event and Inter-event Ground Motion Variability from 3-D Broadband (0-8 Hz) Ensemble Simulations of Mw 6.7 Thrust Events Including Rough Fault Descriptions, Small-Scale Heterogeneities and Q(f)

    NASA Astrophysics Data System (ADS)

    Withers, K.; Olsen, K. B.; Shi, Z.; Day, S. M.

    2015-12-01

    We model blind-thrust scenario earthquakes matching the fault geometry of the 1994 Mw 6.7 Northridge earthquake up to 8 Hz by first performing dynamic rupture propagation using a support operator method (SORD). We extend the ground motion by converting the slip-rate data to a kinematic source for the finite-difference wave propagation code AWP-ODC, which incorporates an improved frequency-dependent attenuation approach. This technique has high accuracy for Q values down to 15. The desired Q function is fit to the 'effective' Q over the coarse-grained cell for low Q, and a simple interpolation formula is used to interpolate the weights for arbitrary Q. Here, we use a power-law Q model above a reference frequency, of the form Q0 f^n, with exponents ranging from 0.0 to 0.9. We find envelope and phase misfits only slightly larger than those of the elastic case when compared with the frequency-wavenumber solution for both a homogeneous and a layered model with a large velocity contrast. We also include small-scale medium complexity in both a 1D layered model and a 3D medium extracted from the SCEC CVM-S4, including a surface geotechnical layer (GTL). We model additional realizations of the scenario by varying the hypocenter location, and find that similar moment magnitudes are generated. We observe that while the ground motion pattern changes, the median ground motion is not affected significantly when binned as a function of distance, and is within one interevent standard deviation of the median GMPEs. We find that the intra-event variability for the layered-model simulations is similar to observed values of single-station standard deviation. We show that small-scale heterogeneity can significantly affect the intra-event variability at frequencies greater than ~1 Hz, becoming increasingly important at larger distances from the source. We perform a parameter-space study by varying statistical parameters and find that the variability is fairly independent of the correlation length
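
    The frequency-dependent attenuation model quoted above can be written as Q(f) = Q0 (f/f0)^n above a reference frequency f0; the sketch below additionally assumes constant Q0 below f0, which is a common convention but an assumption here, with purely illustrative parameter values.

```python
import numpy as np

def q_of_f(f, q0=100.0, f0=1.0, n=0.6):
    """Power-law frequency-dependent quality factor:
    constant Q0 below the reference frequency f0 (assumed convention),
    Q0 * (f / f0)**n above it."""
    f = np.asarray(f, dtype=float)
    return np.where(f <= f0, q0, q0 * (f / f0) ** n)

print(q_of_f([0.5, 1.0, 2.0, 8.0]))   # -> roughly [100, 100, 152, 348]
```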

  16. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and for 3D dose measurement in general.

  17. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics are presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  18. Ultrafast 3D imaging by holography

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro

    2017-02-01

    As an ultrafast 3D imaging technique, an improved light-in-flight recording by holography using a femtosecond laser is presented. To record a 3D image of light propagation, a voluminous light-scattering medium is introduced into the light-in-flight recording by holography. A mode-locked Ti:Sapphire laser is employed as the optical source. To generate the 3D image of the propagating light, a voluminous light-scattering medium is made of gelatin jelly and set in the optical path of the object wave of the holography setup. 3D motion picture of propagation of a