Science.gov

Sample records for 3D motion primitives

  1. Generalized compliant motion primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor)

    1994-01-01

    This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, each of which produces telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.
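
    The summing of independently generated motion inputs described above can be illustrated with a short sketch. The following Python fragment is not from the patent; the function name, gains, and dither axis are illustrative assumptions, and it only mimics how several generators might contribute commands in a common Cartesian frame.

```python
import numpy as np

def compliant_motion_step(x_current, x_trajectory, x_teleop, force_error, t=0.0,
                          force_gain=0.001, dither_amp=0.0, dither_freq=1.0,
                          spring_anchor=None, spring_gain=0.1):
    """Sum several motion-input sources into one Cartesian command.

    Each term stands in for one generator named in the abstract (trajectory,
    teleoperation, force setpoint, dither, restoration spring); all are
    expressed in a common coordinate frame and added, mimicking the
    'sensor summer' combination."""
    command = np.zeros(3)
    command += x_trajectory - x_current                 # nominal trajectory tracking
    command += x_teleop                                 # operator-supplied offset
    command += force_gain * force_error                 # comply along sensed force error
    command += dither_amp * np.sin(2 * np.pi * dither_freq * t) * np.array([0.0, 0.0, 1.0])
    if spring_anchor is not None:                       # virtual return spring
        command += spring_gain * (spring_anchor - x_current)
    return x_current + command

# One control step with a small teleoperation nudge (illustrative values).
x_next = compliant_motion_step(x_current=np.zeros(3),
                               x_trajectory=np.array([0.01, 0.0, 0.0]),
                               x_teleop=np.array([0.0, 0.002, 0.0]),
                               force_error=np.array([0.0, 0.0, -1.0]))
print(x_next)
```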

  2. A Primitive-Based 3D Object Recognition System

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.

    1988-08-01

    A knowledge-based 3D object recognition system has been developed. The system uses hierarchical structural, geometrical, and relational knowledge in matching 3D object models to the image data through pre-defined primitives. The primitives we have selected to begin with are 3D boxes, cylinders, and spheres. These primitives, as viewed from different angles covering the complete 3D rotation range, are stored in a "Primitive-Viewing Knowledge-Base" in the form of hierarchical structural and relational graphs. The knowledge-based system then hypothesizes about the viewing angle and decomposes the segmented image data into valid primitives. A rough 3D structural and relational description is made on the basis of the recognized 3D primitives. This description is then used in the detailed high-level frame-based structural and relational matching. The system has several expert and knowledge-based subsystems working in both stand-alone and cooperative modes to provide multi-level processing. This multi-level processing utilizes both bottom-up (data-driven) and top-down (model-driven) approaches in order to acquire sufficient knowledge to accept or reject any hypothesis for matching or recognizing the objects in the given image.

  3. Flexible building primitives for 3D building modeling

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Jancosek, M.; Oude Elberink, S.; Vosselman, G.

    2015-03-01

    3D building models, being the main part of a digital city scene, are essential to all applications related to human activities in urban environments. The development of range sensors and Multi-View Stereo (MVS) technology facilitates our ability to automatically reconstruct level of detail 2 (LoD2) models of buildings. However, because of the high complexity of building structures, no fully automatic system is currently available for producing building models. In order to simplify the problem, much research focuses only on particular, relatively simple building shapes. In this paper, we analyze the properties of topology graphs of object surfaces, and find that roof topology graphs have three basic elements: loose nodes, loose edges, and minimum cycles. These elements have interesting physical meanings: a loose node is a building with one roof face; a loose edge is a ridge line between two roof faces whose end points are not defined by a third roof face; and a minimum cycle represents a roof corner of a building. Building primitives, which introduce building shape knowledge, are defined according to these three basic elements. All buildings can then be represented by combining such building primitives. The building parts are searched for according to the predefined building primitives, reconstructed independently, and grouped into a complete building model in CSG style. The shape knowledge is inferred via the building primitives and used as constraints to improve the building models, in which all roof parameters are simultaneously adjusted. Experiments show the flexibility of the building primitives on both lidar and stereo point clouds.
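
    A minimal sketch of the three graph elements, assuming the networkx library and a hypothetical roof topology graph (nodes are roof faces, edges are ridge lines); it is meant only to make the loose-node / loose-edge / minimum-cycle distinction concrete, not to reproduce the authors' reconstruction pipeline.

```python
import networkx as nx

# Hypothetical roof topology graph: nodes are roof faces, edges are ridge lines.
G = nx.Graph()
G.add_node("flat_shed")                      # single-face building -> loose node
G.add_edge("gable_left", "gable_right")      # simple gable ridge  -> loose edge
G.add_edges_from([("hip_a", "hip_b"), ("hip_b", "hip_c"), ("hip_c", "hip_a")])  # hip corner

loose_nodes = list(nx.isolates(G))           # buildings with one roof face

cycles = nx.cycle_basis(G)                   # minimum cycles ~ roof corners
cycle_edges = set()
for cyc in cycles:
    for u, v in zip(cyc, cyc[1:] + cyc[:1]):
        cycle_edges.add(frozenset((u, v)))

# Ridge lines whose end points are not shared with a third roof face.
loose_edges = [e for e in G.edges() if frozenset(e) not in cycle_edges]

print(loose_nodes, loose_edges, cycles)
```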

  4. Describing 3D Geometric Primitives Using the Gaussian Sphere and the Gaussian Accumulator

    NASA Astrophysics Data System (ADS)

    Toony, Zahra; Laurendeau, Denis; Gagné, Christian

    2015-12-01

    Most complex object models are composed of basic parts or primitives. Being able to decompose a complex 3D model into such basic primitives is an important step in reverse engineering. Even when an algorithm can segment a complex model into its primitives, a description technique is still needed in order to identify the type of each primitive. Most feature extraction methods fail to describe these basic primitives or need a classifier trained on a database of prepared data to perform this identification. In this paper, we propose a method that can describe basic primitives such as planes, cones, cylinders, spheres, and tori as well as partial models of the latter four primitives. To achieve this task, we combine the concept of the Gaussian sphere with a new concept introduced in this paper: the Gaussian accumulator. Comparison of the results of our method with other feature extractors reveals that our approach can distinguish all of these primitives from each other, including partial models. Our method was also tested on real scanned data with noise and missing areas. The results show that our method is able to distinguish all of these models as well.
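
    The Gaussian-sphere idea can be sketched as a spherical histogram of surface normals: how the mass spreads over the sphere hints at the primitive type. This numpy fragment is an illustrative assumption about such an accumulator, not the authors' exact Gaussian accumulator.

```python
import numpy as np

def gaussian_sphere_histogram(normals, n_theta=18, n_phi=36):
    """Accumulate surface normals into bins on the Gaussian sphere.

    normals : (N, 3) array of face or point normals.
    Returns an (n_theta, n_phi) histogram whose support hints at the primitive
    type: one dominant bin for a plane, a great circle for a cylinder, a small
    circle for a cone, a near-uniform spread for a sphere."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))          # polar angle in [0, pi]
    phi = np.mod(np.arctan2(n[:, 1], n[:, 0]), 2 * np.pi)   # azimuth in [0, 2*pi)
    hist, _, _ = np.histogram2d(theta, phi,
                                bins=[n_theta, n_phi],
                                range=[[0, np.pi], [0, 2 * np.pi]])
    return hist

# Example: normals of an axis-aligned plane all fall in a single bin.
plane_normals = np.tile([0.0, 0.0, 1.0], (100, 1))
print(gaussian_sphere_histogram(plane_normals).max())  # 100
```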

  5. 3D Human Motion Editing and Synthesis: A Survey

    PubMed Central

    Wang, Xin; Chen, Qiudi; Wang, Wanliang

    2014-01-01

    The ways to compute the kinematic and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395

  6. Concurrent 3-D motion segmentation and 3-D interpretation of temporal sequences of monocular images.

    PubMed

    Sekkati, Hicham; Mitiche, Amar

    2006-03-01

    The purpose of this study is to investigate a variational method for joint multiregion three-dimensional (3-D) motion segmentation and 3-D interpretation of temporal sequences of monocular images. Interpretation consists of dense recovery of 3-D structure and motion from the image sequence spatiotemporal variations due to short-range image motion. The method is direct inasmuch as it does not require prior computation of image motion. It allows movement of both the viewing system and multiple independently moving objects. The problem is formulated following a variational statement with a functional containing three terms. One term measures the conformity of the interpretation within each region of the 3-D motion segmentation to the image sequence spatiotemporal variations. The second term regularizes depth. The assumption that environmental objects are rigid automatically accounts for the regularity of 3-D motion within each region of segmentation. The third and last term enforces the regularity of the segmentation boundaries. Minimization of the functional follows the corresponding Euler-Lagrange equations. This results in iterated concurrent computation of 3-D motion segmentation by curve evolution, depth by gradient descent, and 3-D motion by least squares within each region of segmentation. Curve evolution is implemented via level sets for topology independence and numerical stability. This algorithm and its implementation are verified on synthetic and real image sequences. Viewers presented with anaglyphs of stereoscopic images constructed from the algorithm's output reported a strong perception of depth. PMID:16519351

  7. Insertion of 3-D-primitives in mesh-based representations: towards compact models preserving the details.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu

    2010-07-01

    We propose an original hybrid modeling process of urban scenes that represents 3-D models as a combination of mesh-based surfaces and geometric 3-D-primitives. Meshes describe details such as ornaments and statues, whereas 3-D-primitives code for regular shapes such as walls and columns. Starting from a 3-D surface obtained by multiview stereo techniques, these primitives are inserted into the surface after being detected. This strategy allows the introduction of semantic knowledge, the simplification of the modeling, and even correction of errors generated by the acquisition process. We design a hierarchical approach exploring different scales of an observed scene. Each level consists first of segmenting the surface using a multilabel energy model optimized by α-expansion and then of fitting 3-D-primitives such as planes, cylinders, or tori to the obtained partition where relevant. Experiments on real meshes, depth maps and synthetic surfaces show good potential for the proposed approach. PMID:20236893

  8. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses, and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion, we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement. PMID:27093439

  9. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D content as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. Time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology. PMID:26832611

  10. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole-tongue motion. Experimental results show that the use of combined information improves motion estimation near the tongue surface, a region previously reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown. PMID:24505742

  11. Preference for motion and depth in 3D film

    NASA Astrophysics Data System (ADS)

    Hartle, Brittney; Lugtigheid, Arthur; Kazimi, Ali; Allison, Robert S.; Wilcox, Laurie M.

    2015-03-01

    While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear that these always apply well to stereoscopic 3D (S3D) film. Further, while there has been considerable recent research on viewer comfort in S3D media, little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate viewers' preferences for moving S3D film content in a theatre setting. Specifically, we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (interaxial separation, IA). The amount of IA had no impact on clip preferences regardless of the direction or speed of camera movement. However, preferences were influenced by camera speed, but only in the in-depth condition, where viewers preferred faster motion. Given that previous research shows that slower speeds are more comfortable for viewing S3D content, our results show that viewing preferences cannot be predicted simply from measures of comfort. Instead, it is clear that viewer response to S3D film is complex and that film parameters selected to enhance comfort may in some instances produce less appealing content.

  12. Compression of point-texture 3D motion sequences

    NASA Astrophysics Data System (ADS)

    Song, In-Wook; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

    In this work, we propose two compression algorithms for PointTexture 3D sequences: the octree-based scheme and the motion-compensated prediction scheme. The first scheme represents each PointTexture frame hierarchically using an octree. The geometry information in the octree nodes is encoded by the predictive partial matching (PPM) method. The encoder supports the progressive transmission of the 3D frame by transmitting the octree nodes in a top-down manner. The second scheme adopts the motion-compensated prediction to exploit the temporal correlation in 3D sequences. It first divides each frame into blocks, and then estimates the motion of each block using the block matching algorithm. In contrast to the motion-compensated 2D video coding, the prediction residual may take more bits than the original signal. Thus, in our approach, the motion compensation is used only for the blocks that can be replaced by the matching blocks. The other blocks are PPM-encoded. Extensive simulation results demonstrate that the proposed algorithms provide excellent compression performances.
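
    The per-block choice between motion compensation and intra (PPM-style) coding can be sketched as follows; the block size, search range, and 2D occupancy arrays are simplifying assumptions standing in for the octree-structured PointTexture data.

```python
import numpy as np

def encode_frame(current, previous, block=8, search=4):
    """Per-block coder sketch: reuse a matching block from the previous frame
    when an identical one exists, otherwise fall back to intra coding
    (standing in for the PPM-coded octree nodes of the paper)."""
    H, W = current.shape
    stream = []
    for y in range(0, H, block):
        for x in range(0, W, block):
            cur = current[y:y + block, x:x + block]
            best = None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        if np.array_equal(previous[yy:yy + block, xx:xx + block], cur):
                            best = (dy, dx)
                            break
                if best is not None:
                    break
            if best is not None:
                stream.append(("MC", y, x, best))           # motion vector only
            else:
                stream.append(("INTRA", y, x, cur.copy()))  # intra-coded block
    return stream

prev = np.zeros((16, 16), dtype=np.uint8)
curr = np.roll(prev, 2, axis=1)
print(len([s for s in encode_frame(curr, prev) if s[0] == "MC"]))  # all blocks reused
```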

  13. Learning Projectile Motion with the Computer Game "Scorched 3D"

    NASA Astrophysics Data System (ADS)

    Jurcevic, John S.

    2008-01-01

    For most of our students, video games are a normal part of their lives. We should take advantage of this medium to teach physics in a manner that is engrossing for our students. In particular, modern video games incorporate accurate physics in their game engines, and they allow us to visualize the physics through flashy and captivating graphics. I recently used the game "Scorched 3D" to help my students understand projectile motion.
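
    For readers who want the underlying physics on hand, here is a small numpy sketch of ideal projectile motion (no drag or wind, illustrative launch values); it is the textbook model behind artillery games of this kind, not code from the article.

```python
import numpy as np

# Ideal projectile motion: the physics behind artillery games like Scorched 3D.
g = 9.81                    # gravitational acceleration, m/s^2
v0 = 30.0                   # launch speed, m/s (illustrative)
angle = np.radians(45.0)    # launch angle

t_flight = 2 * v0 * np.sin(angle) / g
t = np.linspace(0.0, t_flight, 50)
x = v0 * np.cos(angle) * t
y = v0 * np.sin(angle) * t - 0.5 * g * t**2

print(f"range = {x[-1]:.1f} m, max height = {y.max():.1f} m")
```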

  14. 3D Guided Wave Motion Analysis on Laminated Composites

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Ultrasonic guided waves have proved useful for structural health monitoring (SHM) and nondestructive evaluation (NDE) due to their ability to propagate long distances with less energy loss compared to bulk waves and due to their sensitivity to small defects in the structure. Analysis of actively transmitted ultrasonic signals has long been used to detect and assess damage. However, there remain many challenging tasks for guided wave based SHM due to the complexity involved with propagating guided waves, especially in the case of composite materials. The multimodal nature of ultrasonic guided waves complicates the related damage analysis. This paper presents results from parallel 3D elastodynamic finite integration technique (EFIT) simulations used to acquire 3D wave motion in the laminated carbon fiber reinforced polymer composites under study. The acquired 3D wave motion is then analyzed by frequency-wavenumber analysis to study the wave propagation and interaction in the composite laminate. The frequency-wavenumber analysis enables the study of individual modes and visualization of mode conversion. Delamination damage has been incorporated into the EFIT model to generate "damaged" data. The potential for damage detection in laminated composites is discussed at the end.
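
    Frequency-wavenumber analysis of a space-time wavefield amounts to a 2D Fourier transform; the sketch below, with an illustrative synthetic single-mode wave, shows only the basic operation (it is not the authors' EFIT post-processing code).

```python
import numpy as np

def frequency_wavenumber(wavefield, dt, dx):
    """2D FFT of a space-time wavefield u(t, x) -> |U(f, k)| magnitude.

    Peaks in the f-k plane separate individual guided-wave modes by their
    dispersion curves, which is the kind of mode analysis described above."""
    nt, nx = wavefield.shape
    U = np.fft.fftshift(np.fft.fft2(wavefield))
    f = np.fft.fftshift(np.fft.fftfreq(nt, d=dt))   # temporal frequencies (Hz)
    k = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))   # spatial frequencies (1/m)
    return np.abs(U), f, k

# Synthetic single-mode wave: 100 kHz tone travelling at 1500 m/s (assumed values).
dt, dx = 1e-7, 1e-3
t = np.arange(400) * dt
x = np.arange(200) * dx
tt, xx = np.meshgrid(t, x, indexing="ij")
u = np.sin(2 * np.pi * 1e5 * (tt - xx / 1500.0))
mag, f, k = frequency_wavenumber(u, dt, dx)
i, j = np.unravel_index(mag.argmax(), mag.shape)
print(f"peak near f = {abs(f[i]):.0f} Hz, k = {abs(k[j]):.0f} 1/m")
```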

  15. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)
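
    A red/green anaglyph of a 3-D scattergram can be sketched by projecting the point cloud from two horizontally offset viewpoints and writing each view into a separate color channel; the projection model and parameters below are illustrative assumptions, not the original implementation.

```python
import numpy as np

def anaglyph_scatter(points, eye_sep=0.1, viewer_dist=5.0, size=256):
    """Render a 3-D point cloud as a red/green anaglyph image (sketch).

    Each eye sees a simple perspective projection from a horizontally shifted
    viewpoint; the left view goes into the red channel and the right view into
    the green channel, as in the red/green anaglyphs described above."""
    img = np.zeros((size, size, 3), dtype=np.uint8)
    for channel, shift in ((0, -eye_sep / 2), (1, +eye_sep / 2)):
        x = (points[:, 0] - shift) * viewer_dist / (viewer_dist + points[:, 2])
        y = points[:, 1] * viewer_dist / (viewer_dist + points[:, 2])
        u = np.clip(((x + 1) / 2 * (size - 1)).astype(int), 0, size - 1)
        v = np.clip(((1 - (y + 1) / 2) * (size - 1)).astype(int), 0, size - 1)
        img[v, u, channel] = 255
    return img

cloud = np.random.default_rng(0).uniform(-1, 1, size=(5000, 3))
image = anaglyph_scatter(cloud)   # view with red/green glasses
```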

  16. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas; they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  17. The Visual Priming of Motion-Defined 3D Objects

    PubMed Central

    Jiang, Xiong; Jiang, Yang

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus—but not a static image or a semantic stimulus—that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed. PMID:26658496

  18. Reconstructing 3-D Ship Motion for Synthetic Aperture Sonar Processing

    NASA Astrophysics Data System (ADS)

    Thomsen, D. R.; Chadwell, C. D.; Sandwell, D.

    2004-12-01

    We are investigating the feasibility of coherent ping-to-ping processing of multibeam sonar data for high-resolution mapping and change detection in the deep ocean. Theoretical calculations suggest that standard multibeam resolution can be improved from 100 m to ~10 m through coherent summation of pings similar to synthetic aperture radar image formation. A requirement for coherent summation of pings is to correct the phase of the return echoes to an accuracy of ~3 cm at a sampling rate of ~10 Hz. In September of 2003, we conducted a seagoing experiment aboard R/V Revelle to test these ideas. Three geodetic-quality GPS receivers were deployed to recover 3-D ship motion to an accuracy of ±3 cm at a 1 Hz sampling rate [Chadwell and Bock, GRL, 2001]. Additionally, inertial navigation data (INS) from fiber-optic gyroscopes and pendulum-type accelerometers were collected at a 10 Hz rate. Independent measurements of ship orientation (yaw, pitch, and roll) from the GPS and INS show agreement to an RMS accuracy of better than 0.1 degree. Because inertial navigation hardware is susceptible to drift, these measurements were combined with the GPS to achieve both high accuracy and high sampling rate. To preserve the short-timescale accuracy of the INS and the long-timescale accuracy of the GPS measurements, time-filtered differences between the GPS and INS were subtracted from the INS integrated linear velocities. An optimal filter length of 25 s was chosen to force the RMS difference between the GPS and the integrated INS to be on the order of the accuracy of the GPS measurements. This analysis provides an upper bound on 3-D ship motion accuracy. Additionally, errors in the attitude can translate to the projections of motion for individual hydrophones. With lever arms on the order of 5 m, these errors will likely be ~1 mm. Based on these analyses, we expect to achieve the 3-cm accuracy requirement. Using full-resolution hydrophone data collected by a SIMRAD EM/120 echo sounder
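
    The GPS/INS blending step, in which a time-filtered GPS-INS difference is subtracted from the INS-derived motion, resembles a complementary filter; the 1-D numpy sketch below uses illustrative signals and a 25 s moving-average window, and assumes both sensors have been resampled to a common time base (the actual processing is more involved).

```python
import numpy as np

def blend_gps_ins(ins_pos, gps_pos, dt=0.1, filter_len=25.0):
    """Complementary-blend sketch: subtract the low-pass-filtered (INS - GPS)
    difference from the INS-derived position so short-timescale accuracy comes
    from the INS and long-timescale accuracy from GPS. filter_len is the
    smoothing window in seconds (the abstract quotes an optimal 25 s)."""
    n = max(1, int(round(filter_len / dt)))
    kernel = np.ones(n) / n
    drift = np.convolve(ins_pos - gps_pos, kernel, mode="same")  # slow INS drift
    return ins_pos - drift

# Synthetic 1-D example: INS drifts slowly, GPS is noisy but unbiased.
t = np.arange(0, 600, 0.1)
truth = 0.05 * np.sin(2 * np.pi * t / 8.0)            # ship heave, metres
ins = truth + 0.001 * t                                # integrated-INS drift
gps = truth + np.random.default_rng(1).normal(0, 0.03, t.size)
blended = blend_gps_ins(ins, gps)
print(f"RMS error: INS {np.sqrt(np.mean((ins - truth)**2)):.3f} m, "
      f"blended {np.sqrt(np.mean((blended - truth)**2)):.3f} m")
```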

  19. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, an effort currently in progress, is described along with preliminary results and the problems encountered.

  20. Faceless identification: a model for person identification using the 3D shape and 3D motion as cues

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Li, Haibo

    1999-02-01

    Person identification by using biometric methods based on image sequences, or still images, often requires a controllable and cooperative environment during the image capturing stage. In the forensic case the situation is more likely to be the opposite. In this work we propose a method that makes use of the anthropometry of the human body and human actions as cues for identification. Image sequences from surveillance systems are used, which can be seen as monocular image sequences. A 3D deformable wireframe body model is used as a platform to handle the non-rigid information of the 3D shape and 3D motion of the human body from the image sequence. A recursive method for estimating global motion and local shape variations is presented, using two recursive feedback systems.

  1. Determining 3-D motion and structure from image sequences

    NASA Technical Reports Server (NTRS)

    Huang, T. S.

    1982-01-01

    A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which the motion and structure parameters are determined by solving a set of eight linear equations and computing the singular value decomposition of a 3×3 matrix. It is shown that the solution thus obtained is unique.
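
    The recipe in this abstract (eight linear equations plus an SVD of a 3×3 matrix) is the ancestor of the classic eight-point algorithm for the essential matrix; the numpy sketch below follows that classic formulation rather than the paper's exact equations, and the synthetic check at the end uses assumed camera geometry.

```python
import numpy as np

def eight_point_essential(p1, p2):
    """From 8+ normalized image correspondences, solve the linear system
    p2^T E p1 = 0 for the essential matrix E, then use the SVD of the 3x3
    matrix to enforce its rank-2 structure and read off the translation
    direction (up to sign and scale)."""
    A = np.array([[x2 * x1, x2 * y1, x2,
                   y2 * x1, y2 * y1, y2,
                   x1, y1, 1.0]
                  for (x1, y1), (x2, y2) in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)                 # least-squares null vector of A
    E = Vt[-1].reshape(3, 3)
    U, _, Vt3 = np.linalg.svd(E)                # SVD of the 3x3 matrix
    E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt3      # project onto valid essential matrices
    return E, U[:, 2]

# Synthetic check: same orientation, second camera shifted along x (assumed geometry).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (8, 3)) + np.array([0.0, 0.0, 5.0])
shift = np.array([0.2, 0.0, 0.0])
p1 = [(x / z, y / z) for x, y, z in X]
p2 = [(x / z, y / z) for x, y, z in X - shift]
E, t_dir = eight_point_essential(p1, p2)
print(np.round(t_dir, 3))   # roughly ±[1, 0, 0], the true baseline direction
```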

  2. LV motion tracking from 3D echocardiography using textural and structural information.

    PubMed

    Myronenko, Andriy; Song, Xubo; Sahn, David J

    2007-01-01

    Automated motion reconstruction of the left ventricle (LV) from 3D echocardiography provides insight into myocardium architecture and function. Low image quality and artifacts make 3D ultrasound image processing a challenging problem. We introduce a LV tracking method, which combines textural and structural information to overcome the image quality limitations. Our method automatically reconstructs the motion of the LV contour (endocardium and epicardium) from a sequence of 3D ultrasound images. PMID:18044597

  3. Identifying and modeling motion primitives for the hydromedusae Sarsia tubulosa and Aequorea victoria.

    PubMed

    Sledge, Isaac; Krieg, Michael; Lipinski, Doug; Mohseni, Kamran

    2015-12-01

    The movements of organisms can be thought of as aggregations of motion primitives: motion segments containing one or more significant actions. Here, we present a means to identify and characterize motion primitives from recorded movement data. We address these problems by assuming that the motion sequences can be characterized as a series of dynamical-system-based pattern generators. By adopting a nonparametric, Bayesian formalism for learning and simplifying these pattern generators, we arrive at a purely data-driven model to automatically identify breakpoints in the movement sequences. We apply this model to swimming sequences from two hydromedusae. The first hydromedusa is the prolate Sarsia tubulosa, for which we obtain five motion primitives that correspond to bell cavity pressurization, jet formation, jetting, cavity fluid refill, and coasting. The second hydromedusa is the oblate Aequorea victoria, for which we obtain five motion primitives that correspond to bell compression, vortex separation, cavity fluid refill, vortex formation, and coasting. Our experimental results indicate that the breakpoints between primitives are correlated with transitions in the bell geometry, vortex formation and shedding, and changes in derived dynamical quantities. These dynamical quantities include pressure, power, drag, and thrust. Such findings suggest that dynamics information is inherently present in the observed motions. PMID:26495992

  4. Tracking 3-D body motion for docking and robot control

    NASA Technical Reports Server (NTRS)

    Donath, M.; Sorensen, B.; Yang, G. B.; Starr, R.

    1987-01-01

    An advanced method of tracking three-dimensional motion of bodies has been developed. This system has the potential to dynamically characterize machine and other structural motion, even in the presence of structural flexibility, thus facilitating closed-loop structural motion control. The system's operation is based on the concept that the intersection of three planes defines a point. Three rotating planes of laser light, fixed and moving photovoltaic diode targets, and a pipelined architecture of analog and digital electronics are used to locate multiple targets whose number is limited only by available computer memory. Data collection rates are a function of the laser scan rotation speed and are currently selectable up to 480 Hz. The tested performance of a preliminary prototype designed for 0.1-inch accuracy (for tracking human motion) at a 480 Hz data rate includes a worst-case resolution of 0.8 mm (0.03 in), a repeatability of ±0.635 mm (±0.025 in), and an absolute accuracy of ±2.0 mm (±0.08 in) within an eight-cubic-meter volume, with all results applicable at the 95 percent level of confidence along each coordinate direction. The full six degrees of freedom of a body can be computed by attaching three or more target detectors to the body of interest.
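
    The geometric core, a point defined by the intersection of three planes, reduces to a 3×3 linear solve; the snippet below is a plain illustration with hypothetical planes, not the instrument's calibration code.

```python
import numpy as np

def plane_intersection(normals, offsets):
    """Solve n_i . p = d_i for the single point p where three planes meet,
    the geometric principle behind the three rotating laser planes."""
    return np.linalg.solve(np.asarray(normals, dtype=float),
                           np.asarray(offsets, dtype=float))

# Hypothetical planes x = 1, y = 2, z = 3 intersect at (1, 2, 3).
point = plane_intersection([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [1, 2, 3])
print(point)
```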

  5. Rigid Body Motion in Stereo 3D Simulation

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2010-01-01

    This paper addresses the difficulties experienced by first-year students studying rigid body motion at Sofia University. Most quantities describing the rigid body are in relations that the students find hard to visualize and understand. They also lose the notion of cause-and-effect relations between vector quantities, such as the relation between…

  6. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869

  7. Markerless 3D motion capture for animal locomotion studies.

    PubMed

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869

  8. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique which can directly obtain target trajectories and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from complex backgrounds and decreases the complexity of moving-target image processing. Time delay integration increases the information in a single image frame so that the motion trajectory can be obtained directly. In this paper, we have studied the algorithm for flash trajectory imaging and performed initial experiments, which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can provide the motion parameters of moving targets.

  9. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Fenster, Aaron; Bax, Jeffrey; Gardi, Lori; Romagnoli, Cesare; Samarabandu, Jagath; Ward, Aaron D.

    2012-02-01

    Prostate biopsy is the clinical standard for prostate cancer diagnosis. To improve the accuracy of targeting suspicious locations, systems have been developed that can plan and record biopsy locations in a 3D TRUS image acquired at the beginning of the procedure. Some systems are designed for maximum compatibility with existing ultrasound equipment and are thus designed around the use of a conventional 2D TRUS probe, using controlled axial rotation of this probe to acquire a 3D TRUS reference image at the start of the biopsy procedure. Prostate motion during the biopsy procedure causes misalignments between the prostate in the live 2D TRUS images and the pre-acquired 3D TRUS image. We present an image-based rigid registration technique that aligns live 2D TRUS images, acquired immediately prior to biopsy needle insertion, with the pre-acquired 3D TRUS image to compensate for this motion. Our method was validated using 33 manually identified intrinsic fiducials in eight subjects and the target registration error was found to be 1.89 mm. We analysed the suitability of two image similarity metrics (normalized cross correlation and mutual information) for this task by plotting these metrics as a function of varying parameters in the six degree-of-freedom transformation space, with the ground truth plane obtained from registration as the starting point for the parameter exploration. We observed a generally convex behaviour of the similarity metrics. This encourages their use for this registration problem, and could assist in the design of a tool for the detection of misalignment, which could trigger the execution of a non-real-time registration, when needed during the procedure.
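
    Of the two similarity metrics examined, normalized cross correlation is the simpler to sketch; the fragment below shows the basic computation on same-sized images (the images and scaling are illustrative, and mutual information would require a joint-histogram estimate instead).

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Normalized cross correlation between two images of equal size; one of
    the two similarity metrics examined for 2D-3D registration (the other
    being mutual information). Returns a value in [-1, 1]."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

# NCC peaks when the live 2D slice matches the corresponding reslice of the 3D volume.
rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = 0.5 * fixed + 0.1                  # linearly rescaled copy
print(round(normalized_cross_correlation(fixed, moving), 3))  # ~1.0
```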

  10. Simple 3-D stimulus for motion parallax and its simulation.

    PubMed

    Ono, Hiroshi; Chornenkyy, Yevgen; D'Amour, Sarah

    2013-01-01

    Simulation of a given stimulus situation should produce the same perception as the original. Rogers et al (2009 Perception 38 907-911) simulated Wheeler's (1982, PhD thesis, Rutgers University, NJ) motion parallax stimulus and obtained quite different perceptions. Wheeler's observers were unable to reliably report the correct direction of depth, whereas Rogers's were. With three experiments we explored the possible reasons for the discrepancy. Our results suggest that Rogers was able to see depth from the simulation partly due to his experience seeing depth with random dot surfaces. PMID:23964382

  11. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868

  12. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868

  13. Nonrigid Autofocus Motion Correction for Coronary MR Angiography with a 3D Cones Trajectory

    PubMed Central

    Ingle, R. Reeve; Wu, Holden H.; Addy, Nii Okai; Cheng, Joseph Y.; Yang, Phillip C.; Hu, Bob S.; Nishimura, Dwight G.

    2014-01-01

    Purpose: To implement a nonrigid autofocus motion correction technique to improve respiratory motion correction of free-breathing whole-heart coronary magnetic resonance angiography (CMRA) acquisitions using an image-navigated 3D cones sequence. Methods: 2D image navigators acquired every heartbeat are used to measure superior-inferior, anterior-posterior, and right-left translation of the heart during a free-breathing CMRA scan using a 3D cones readout trajectory. Various tidal respiratory motion patterns are modeled by independently scaling the three measured displacement trajectories. These scaled motion trajectories are used for 3D translational compensation of the acquired data, and a bank of motion-compensated images is reconstructed. From this bank, a gradient entropy focusing metric is used to generate a nonrigid motion-corrected image on a pixel-by-pixel basis. The performance of the autofocus motion correction technique is compared with rigid-body translational correction and no correction in phantom, volunteer, and patient studies. Results: Nonrigid autofocus motion correction yields improved image quality compared to rigid-body-corrected images and uncorrected images. Quantitative vessel sharpness measurements indicate superiority of the proposed technique in 14 out of 15 coronary segments from three patient and two volunteer studies. Conclusion: The proposed technique corrects nonrigid motion artifacts in free-breathing 3D cones acquisitions, improving image quality compared to rigid-body motion correction. PMID:24006292
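
    A gradient entropy focusing metric can be sketched as the entropy of the normalized image-gradient magnitudes, with sharper (better motion-corrected) images giving lower values; the formulation and toy images below are illustrative rather than the paper's exact metric.

```python
import numpy as np

def gradient_entropy(image, eps=1e-12):
    """Gradient entropy of an image: lower values indicate sharper images, so
    an autofocus search keeps, per pixel or region, the candidate motion
    trajectory whose reconstruction minimizes this metric."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    p = mag / (mag.sum() + eps)              # treat gradient magnitudes as a distribution
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# A blurred copy of a blocky image has higher gradient entropy than the original.
rng = np.random.default_rng(0)
sharp = np.kron(rng.integers(0, 2, (8, 8)).astype(float), np.ones((8, 8)))
blurred = 0.25 * (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
                  + np.roll(np.roll(sharp, 1, 0), 1, 1))
print(gradient_entropy(sharp) < gradient_entropy(blurred))  # True for this example
```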

  14. Motion-Corrected 3D Sonic Anemometer for Tethersondes and Other Moving Platforms

    NASA Technical Reports Server (NTRS)

    Bognar, John

    2012-01-01

    To date, it has not been possible to apply 3D sonic anemometers on tethersondes or similar atmospheric research platforms due to the motion of the supporting platform. A tethersonde module including both a 3D sonic anemometer and associated motion correction sensors has been developed, enabling motion-corrected 3D winds to be measured from a moving platform such as a tethersonde. Blimps and other similar lifting systems are used to support tethersondes, meteorological devices that fly on the tether of a blimp or similar platform. To date, tethersondes have been limited to making basic meteorological measurements (pressure, temperature, humidity, and wind speed and direction). The motion of the tethersonde has precluded the addition of 3D sonic anemometers, which can be used for high-speed flux measurements, thereby limiting what has been achieved to date with tethersondes. The tethersonde modules fly on a tether that can be constantly moving and swaying. This would introduce enormous error into the output of an uncorrected 3D sonic anemometer. The motion correction that is required must be implemented in a low-weight, low-cost manner to be suitable for this application. Until now, flux measurements using 3D sonic anemometers could only be made if the 3D sonic anemometer was located on a rigid, fixed platform such as a tower. This limited the areas in which they could be set up and used. The purpose of the innovation was to enable precise 3D wind and flux measurements to be made using tethersondes. In brief, a 3D accelerometer and a 3D gyroscope were added to a tethersonde module along with a 3D sonic anemometer. This combination allowed for the necessary package motions to be measured, which were then mathematically combined with the measured winds to yield motion-corrected 3D winds. At the time of this reporting, no tethersonde has been able to make any wind measurement other than a basic wind speed and direction measurement. The addition of a 3D sonic

  15. Tracking 3D Picometer-Scale Motions of Single Nanoparticles with High-Energy Electron Probes

    PubMed Central

    Ogawa, Naoki; Hoshisashi, Kentaro; Sekiguchi, Hiroshi; Ichiyanagi, Kouhei; Matsushita, Yufuku; Hirohata, Yasuhisa; Suzuki, Seiichi; Ishikawa, Akira; Sasaki, Yuji C.

    2013-01-01

    We observed the high-speed anisotropic motion of an individual gold nanoparticle in 3D at the picometer scale using a high-energy electron probe. Diffracted electron tracking (DET) using the electron back-scattered diffraction (EBSD) patterns of labeled nanoparticles under wet-SEM allowed us to super-accurately measure the time-resolved 3D motion of individual nanoparticles in aqueous conditions. The highly precise DET data corresponded to the 3D anisotropic log-normal Gaussian distributions over time at the millisecond scale. PMID:23868465

  16. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.

  17. Blind watermark algorithm on 3D motion model based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Qi, Hu; Zhai, Lang

    2013-12-01

    With the continuous development of 3D vision technology, digital watermarking, as a leading choice for copyright protection, has gradually been fused with it. This paper proposes a blind watermarking scheme for 3D motion models based on the wavelet transform and loads the result into the Vega real-time visual simulation system. First, the 3D model is put through an affine transform, and the distances from the center of gravity to the vertices of the 3D object are taken to generate a one-dimensional discrete signal; this signal is then wavelet transformed, its frequency coefficients are modified to embed the watermark, and finally the watermarked 3D motion model is generated. In a fixed affine space, the scheme achieves robustness to translation, rotation, and scaling transforms. The results show that this approach performs well not only in robustness, but also in watermark invisibility.
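
    The embedding chain (centroid-to-vertex distances, wavelet decomposition, coefficient perturbation) can be sketched in a few lines; the snippet assumes the PyWavelets package and uses an illustrative Haar wavelet, perturbation strength, and radial re-scaling step that are not taken from the paper.

```python
import numpy as np
import pywt

def embed_watermark(vertices, bits, strength=0.01, level=2):
    """Sketch of the embedding chain described above: centroid-to-vertex
    distances form a 1-D signal, a discrete wavelet transform exposes its
    coefficients, and watermark bits nudge selected detail coefficients."""
    centroid = vertices.mean(axis=0)
    signal = np.linalg.norm(vertices - centroid, axis=1)   # 1-D discrete signal
    coeffs = pywt.wavedec(signal, "haar", level=level)
    detail = coeffs[1]
    for i, bit in enumerate(bits[: len(detail)]):
        detail[i] += strength if bit else -strength        # +/- modulation of coefficients
    marked_signal = pywt.waverec(coeffs, "haar")[: len(signal)]
    # Scale each vertex radially so its new distance matches the marked signal.
    scale = marked_signal / np.maximum(signal, 1e-12)
    return centroid + (vertices - centroid) * scale[:, None]

verts = np.random.default_rng(0).normal(size=(64, 3))
marked = embed_watermark(verts, bits=[1, 0, 1, 1])
print(np.abs(marked - verts).max())   # small change: the watermark stays (near) invisible
```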

  18. Model-based risk assessment for motion effects in 3D radiotherapy of lung tumors

    NASA Astrophysics Data System (ADS)

    Werner, René; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Handels, Heinz

    2012-02-01

    Although 4D CT imaging is becoming available in an increasing number of radiotherapy facilities, 3D imaging and planning is still the standard in current clinical practice. In particular for lung tumors, respiratory motion is a known source of uncertainty and should be accounted for during radiotherapy planning, which is difficult when only a 3D planning CT is available. In this contribution, we propose applying a statistical lung motion model to predict patients' motion patterns and to estimate dosimetric motion effects in lung tumor radiotherapy if only 3D images are available. Being generated from 4D CT images of patients with unimpaired lung motion, the model tends to overestimate lung tumor motion. It therefore promises conservative risk assessment regarding tumor dose coverage. This is exemplarily evaluated using treatment plans of lung tumor patients with different tumor motion patterns and for two treatment modalities (conventional 3D conformal radiotherapy and step-and-shoot intensity-modulated radiotherapy). For the test cases, 4D CT images are available. Thus, a standard registration-based 4D dose calculation is also performed, which serves as a reference to judge the plausibility of the model-based 4D dose calculation. It will be shown that, if combined with an additional simple patient-specific breathing surrogate measurement (here: spirometry), the model-based dose calculation provides reasonable risk assessment of respiratory motion effects.

  19. Structural response to 3D simulated earthquake motions in San Bernardino Valley

    USGS Publications Warehouse

    Safak, E.; Frankel, A.

    1994-01-01

    Structural response to one- and three-dimensional (3D) simulated motions in San Bernardino Valley from a hypothetical earthquake along the San Andreas fault with moment magnitude 6.5 and a rupture length of 30 km is investigated. The results show that the ground motions and the structural response vary dramatically with the type of simulation and the location. -from Authors

  20. On the integrability of the motion of 3D-Swinging Atwood machine and related problems

    NASA Astrophysics Data System (ADS)

    Elmandouh, A. A.

    2016-03-01

    In the present article, we study the problem of the motion of the 3D swinging Atwood machine. A new integrable case for this problem is announced. We also point out a new integrable case describing the motion of a heavy particle on a tilted cone.

  1. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.

  2. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  3. The effect of motion on IMRT - looking at interplay with 3D measurements

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Yan, H.; Oldham, M.; Juang, T.; Adamovics, J.; Yin, F. F.

    2013-06-01

    Clinical recommendations to address tumor motion management have been derived from studies dealing with simulations and 2D measurements. 3D measurements may provide more insight and possibly alter the current motion management guidelines. This study provides an initial look at true 3D measurements involving leaf-motion deliveries by use of a motion phantom and the PRESAGE/DLOS dosimetry system. An IMRT and a VMAT plan were delivered to the phantom and analyzed by means of DVHs to determine whether the expansion of treatment volumes based on known imaging motion adequately covers the target. The DVHs confirmed that for these deliveries the expansion volumes were adequate to treat the intended target, although further studies should be conducted to allow for differences in parameters that could alter the results, such as delivery dose and breathing rate.

  4. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion-sensing technology on the Nintendo Wii [1]. Video-based human-computer interaction (HCI) in particular has been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI frees players from cumbersome game controllers. Moreover, video-based HCI is crucial for communication between humans and computers since it is intuitive, easy to use, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the achievable accuracy depends strongly on each subject's characteristics and on environmental noise. More recently, 3D motion-capture data have been used for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports titles, 'Angelina Jolie' in the movie Beowulf) and for analyzing motions for specific performance (e.g., a golf swing or walking). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a sub-body part and each row represents a time frame of the capture. Thus, the motion of a sub-body part can be extracted simply by selecting specific columns, as sketched below. Unlike low-level feature values of video human motion, the 3D motion-capture data matrix does not contain pixel values and is much closer to the human level of semantics.
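
    A minimal sketch of the column-selection idea described above, assuming a hypothetical marker layout in which each joint occupies three consecutive (x, y, z) columns; the joint names, sampling rate and random data are purely illustrative and not the VICON format.

```python
import numpy as np

# Hypothetical motion-capture clip: rows are time frames, columns are the
# (x, y, z) coordinates of each sub-body part, as described above.
joint_names = ["hip", "knee", "ankle", "shoulder", "elbow", "wrist"]
n_frames = 240                      # e.g. 2 s at 120 Hz (assumed)
rng = np.random.default_rng(1)
clip = rng.normal(size=(n_frames, 3 * len(joint_names)))

def joint_columns(name):
    """Column slice holding the (x, y, z) trajectory of one joint."""
    i = joint_names.index(name)
    return slice(3 * i, 3 * i + 3)

# Extract the wrist trajectory only, by selecting its columns.
wrist = clip[:, joint_columns("wrist")]         # shape (n_frames, 3)
print(wrist.shape)
```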

  5. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed under fluoroscopic image guidance, is becoming increasingly important. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 mm ± 0.4 mm and an average 3-D tracking error of 0.8 mm ± 0.5 mm. These results demonstrate that model-based motion compensation based on 2-D/3-D registration is both feasible and accurate.

  6. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to intuitively control the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for a natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.

  7. On the Kinematic Motion Primitives (kMPs) - Theory and Application.

    PubMed

    Moro, Federico L; Tsagarakis, Nikos G; Caldwell, Darwin G

    2012-01-01

    Human neuromotor capabilities guarantee a wide variety of motions. A full understanding of human motion can be beneficial for rehabilitation or performance enhancement, or for reproducing motion on artificial systems such as robots. This work aims at describing the complexity of human motion in a reduced dimensionality by means of kinematic Motion Primitives (kMPs). A set of five invariant kMPs is identified for periodic motions, and a set of two kMPs for discrete motions. It is shown how these two sets of kMPs can be combined to synthesize more complex motion as the simultaneous execution of periodic and discrete motions. The reported results provide evidence for the theory of Central Pattern Generators (CPG), showing its effects on the kinematics, and relate to what has been presented in the literature on motor primitives extracted from EMG signals. Experimental tests with the COmpliant huMANoid (COMAN) were performed to show that kMPs extracted from human subjects can be used to transfer the features of human locomotion to the gait of a robot. PMID:23091459
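
    The kMP extraction is described above as identifying a small set of invariant components underlying joint trajectories. A hedged sketch of one plausible reading, plain PCA via SVD on synthetic periodic joint angles, is shown below; the actual kMP pipeline (filtering, phase normalization, the specific decomposition) may differ.

```python
import numpy as np

def extract_kmps(joint_angles, n_primitives=5):
    """PCA-style extraction of kinematic primitives from joint-angle data.

    joint_angles: array (n_frames, n_joints). Returns the primitive time
    courses, per-joint loadings, and explained-variance ratios.
    """
    centered = joint_angles - joint_angles.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    primitives = U[:, :n_primitives] * S[:n_primitives]   # time courses
    loadings = Vt[:n_primitives]                           # joint weights
    explained = (S ** 2) / np.sum(S ** 2)
    return primitives, loadings, explained[:n_primitives]

# Synthetic periodic "gait" of 8 joints built from two underlying rhythms.
t = np.linspace(0, 4 * np.pi, 400)
rng = np.random.default_rng(2)
basis = np.stack([np.sin(t), np.sin(2 * t + 0.5)], axis=1)
mixing = rng.normal(size=(2, 8))
angles = basis @ mixing + 0.01 * rng.normal(size=(400, 8))

kmps, weights, var = extract_kmps(angles, n_primitives=5)
print(np.round(var, 3))   # most variance lands in the first two primitives
```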

  8. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas; Zhang, Geoffrey; Huang, Tzung-Chi; Lin, Kang-Ping

    2004-09-01

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel, which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm, with a full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension, or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm; this was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated-motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with the greatest motion occurring at the gastro-oesophageal junction. Presented at The IASTED Second International Conference on Biomedical Engineering (BioMED 2004), Innsbruck, Austria, 16-18 February 2004.
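
    The core operation, mapping tumour voxels from one breath-hold volume onto the other with a dense displacement field, can be sketched as follows. The displacement field here is synthetic (a uniform 2-voxel shift), not the output of the 3D optical flow method, and the histogram step only mirrors the paper's summary of per-voxel displacement estimates.

```python
import numpy as np

def map_voxels(coords, dvf):
    """Apply a dense displacement field to integer voxel coordinates.

    coords: (N, 3) integer voxel indices (z, y, x).
    dvf:    (Z, Y, X, 3) per-voxel displacement in voxels.
    Returns the displaced (floating-point) coordinates.
    """
    disp = dvf[coords[:, 0], coords[:, 1], coords[:, 2]]
    return coords + disp

# Synthetic example: a small volume with a uniform superior-inferior shift.
shape = (40, 64, 64)
dvf = np.zeros(shape + (3,))
dvf[..., 0] = 2.0                       # 2-voxel displacement along z

tumour_voxels = np.argwhere(np.ones((4, 4, 4), dtype=bool)) + [18, 30, 30]
moved = map_voxels(tumour_voxels, dvf)

magnitudes = np.linalg.norm(moved - tumour_voxels, axis=1)
hist, edges = np.histogram(magnitudes, bins=10)
print(hist, np.round(edges, 2))
```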

  9. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method. PMID:21652284

  10. Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI.

    PubMed

    Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon

    2015-01-01

    Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Because of through-plane motion, tracking the 3D trajectories of material points and computing the 3D strain field necessitate building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as a point cloud inside the object boundary, and the coordinates of each point can be written as parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming that its neighboring points undergo the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperformed the finite element method and the spline method on a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardial motion in three dimensions. PMID:25157446
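
    The paper derives the 3D strain field from the displacement field with moving least squares. As a hedged stand-in, the sketch below computes the infinitesimal strain tensor from a dense displacement field by finite differences (np.gradient); it conveys the displacement-to-strain step but is not the moving-least-squares estimator used by the authors.

```python
import numpy as np

def strain_tensor(displacement, spacing=(1.0, 1.0, 1.0)):
    """Infinitesimal strain from a dense 3D displacement field.

    displacement: (Z, Y, X, 3) array, components ordered (uz, uy, ux).
    Returns a (Z, Y, X, 3, 3) symmetric strain tensor field.
    """
    grads = np.stack(
        [np.stack(np.gradient(displacement[..., c], *spacing), axis=-1)
         for c in range(3)], axis=-2)        # grads[..., i, j] = d u_i / d x_j
    return 0.5 * (grads + np.swapaxes(grads, -1, -2))

# Synthetic uniform 2% stretch along z.
z, y, x = np.mgrid[0:20, 0:20, 0:20].astype(float)
u = np.zeros((20, 20, 20, 3))
u[..., 0] = 0.02 * z                         # uz grows linearly with z

eps = strain_tensor(u)
print(np.round(eps[10, 10, 10], 4))          # ~0.02 in the (0, 0) entry
```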

  11. Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI

    PubMed Central

    Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon

    2016-01-01

    Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Because of through-plane motion, tracking the 3D trajectories of material points and computing the 3D strain field necessitate building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as a point cloud inside the object boundary, and the coordinates of each point can be written as parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming that its neighboring points undergo the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperformed the finite element method and the spline method on a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardial motion in three dimensions. PMID:25157446

  12. Recovery of liver motion and deformation due to respiration using laparoscopic freehand 3D ultrasound system.

    PubMed

    Nakamoto, Masahiko; Hirayama, Hiroaki; Sato, Yoshinobu; Konishi, Kozo; Kakeji, Yoshihiro; Hashizume, Makoto; Tamura, Shinichi

    2006-01-01

    This paper describes a rapid method for intraoperative recovery of liver motion and deformation due to respiration using a laparoscopic freehand 3D ultrasound (US) system. With the proposed method, 3D US images of the liver can be extended to 4D US images by acquiring several additional sequences of 2D US images over a couple of respiration cycles. Time-varying 2D US images are acquired on several sagittal image planes, and their 3D positions and orientations are measured using a laparoscopic ultrasound probe to which a miniature magnetic 3D position sensor is attached. During the acquisition, the probe is assumed to move together with the liver surface. In-plane 2D deformation fields and the respiratory phase are estimated from the time-varying 2D US images, and the time-varying 3D deformation fields on the sagittal image planes are then obtained by combining the 3D positions and orientations of the image planes. The time-varying 3D deformation field of the volume is obtained by interpolating the 3D deformation fields estimated on the several planes. The proposed method was evaluated by in vivo experiments using a pig liver. PMID:17354794

  13. X-ray stereo imaging for micro 3D motions within non-transparent objects

    NASA Astrophysics Data System (ADS)

    Salih, Wasil H. M.; Buytaert, Jan A. N.; Dirckx, Joris J. J.

    2012-03-01

    We propose a new technique to measure the 3D motion of marker points along a straight path within an object using x-ray stereo projections. From recordings of two x-ray projections with a 90° separation angle, the 3D coordinates of marker points can be determined. By synchronizing the x-ray exposure time to the motion event, a moving marker leaves a trace in the image whose gray scale is linearly proportional to the marker velocity. From the gray scale along the motion path, the 3D motion (velocity) is obtained. The path of motion was reconstructed and compared with the applied waveform. The results showed that the accuracy is on the order of 5%. The difference in displacement amplitude between the new method and laser vibrometry was less than 5 μm. We demonstrated the method on the motion of the malleus ossicle in the gerbil middle ear as a function of pressure applied to the eardrum. The new method has the advantage over existing methods such as laser vibrometry that the structures under study do not need to be visually exposed. Owing to the short measurement time and the high resolution, the method can be useful for a variety of applications in biomechanics.
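
    With ideal parallel-beam projections 90° apart, one view constrains (x, z) and the other (y, z), so a marker's 3D coordinates follow directly; a minimal sketch under that idealization is below. Real cone-beam geometry, magnification and calibration, which the actual system must handle, are ignored.

```python
import numpy as np

def reconstruct_3d(view_a, view_b):
    """Combine marker positions from two parallel-beam projections 90° apart.

    view_a: (N, 2) marker positions (x, z) seen in the first projection.
    view_b: (N, 2) marker positions (y, z) seen in the second projection.
    Returns (N, 3) coordinates, averaging the z estimate from both views.
    """
    x = view_a[:, 0]
    y = view_b[:, 0]
    z = 0.5 * (view_a[:, 1] + view_b[:, 1])
    return np.column_stack([x, y, z])

# Hypothetical marker seen at (2.0 mm, 5.1 mm) and (-1.0 mm, 4.9 mm).
a = np.array([[2.0, 5.1]])
b = np.array([[-1.0, 4.9]])
print(reconstruct_3d(a, b))    # -> [[ 2.  -1.   5. ]]
```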

  14. Motion corrected LV quantification based on 3D modelling for improved functional assessment in cardiac MRI

    NASA Astrophysics Data System (ADS)

    Liew, Y. M.; McLaughlin, R. A.; Chan, B. T.; Aziz, Y. F. Abdul; Chee, K. H.; Ung, N. M.; Tan, L. K.; Lai, K. W.; Ng, S.; Lim, E.

    2015-04-01

    Cine MRI is a clinical reference standard for the quantitative assessment of cardiac function, but reproducibility is confounded by motion artefacts. We explore the feasibility of a motion-corrected 3D left ventricle (LV) quantification method, incorporating multislice image registration into the 3D model reconstruction, to improve the reproducibility of 3D LV functional quantification. Multi-breath-hold short-axis and radial long-axis images were acquired from 10 patients and 10 healthy subjects. The proposed framework reduced misalignment between slices to subpixel accuracy (from 2.88 to 1.21 mm) and improved interstudy reproducibility for 5 important clinical functional measures, i.e. end-diastolic volume, end-systolic volume, ejection fraction, myocardial mass and 3D-sphericity index, as reflected in a 21-66% reduction in the sample size required to detect statistically significant cardiac changes. Our investigation of the optimum registration parameters, including both cardiac time frames and the number of long-axis (LA) slices, suggested that a single time frame is adequate for motion correction, whereas integrating more LA slices can improve registration and model reconstruction accuracy, especially on datasets with severe motion artefacts.

  15. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0–1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
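
    The general idea, reduce the pre-beam DVFs with PCA and then fit the mode weights so the model explains the motion observed on a fast 2D slice, can be sketched with synthetic data as follows. The array shapes, the two-mode model and the "observed slice" (a random subset of DVF entries) are all illustrative assumptions, not the paper's registration or fitting pipeline.

```python
import numpy as np

# Pre-beam phase: DVFs from a respiratory-sorted 4D acquisition, flattened
# to (n_phases, n_voxels * 3), reduced to a 2-mode PCA motion model.
rng = np.random.default_rng(3)
n_phases, n_vox = 10, 500
dvfs = rng.normal(size=(n_phases, n_vox * 3))

mean_dvf = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
modes = Vt[:2]                                   # motion model: 2 PCA modes

# Beam-on phase: the "true" current DVF is some combination of the modes,
# but only the entries covered by one fast 2D slice are observed.
true_w = np.array([1.5, -0.5])
true_dvf = mean_dvf + true_w @ modes
slice_idx = rng.choice(n_vox * 3, size=200, replace=False)
observed = true_dvf[slice_idx]

# Least-squares fit of the mode weights from the slice data alone,
# then reconstruction of the full-field-of-view 3D DVF.
A = modes[:, slice_idx].T                        # (200, 2)
w, *_ = np.linalg.lstsq(A, observed - mean_dvf[slice_idx], rcond=None)
full_dvf = mean_dvf + w @ modes
print(np.round(w, 2))                            # ~[1.5, -0.5]
```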

  16. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy.

    PubMed

    Stemkens, Bjorn; Tijssen, Rob H N; de Senneville, Baudouin Denis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-07-21

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy. PMID:27362636

  17. A comparison of 3D scapular kinematics between dominant and nondominant shoulders during multiplanar arm motion

    PubMed Central

    Lee, Sang Ki; Yang, Dae Suk; Kim, Ha Yong; Choy, Won Sik

    2013-01-01

    Background: Generally, the scapular motions of pathologic and contralateral normal shoulders are compared to characterize shoulder disorders. However, the symmetry of scapular motion in normal shoulders remains undetermined. Therefore, the aim of this study was to compare three-dimensional (3D) scapular motion between dominant and nondominant shoulders during three different planes of arm motion by using an optical tracking system. Materials and Methods: Twenty healthy subjects completed five repetitions of elevation and lowering in sagittal plane flexion, scapular plane abduction, and coronal plane abduction. The 3D scapular motion was measured using an optical tracking system, after minimizing reflective-marker skin slippage using ultrasonography. The dynamic 3D motion of the scapula of dominant and nondominant shoulders, and the scapulohumeral rhythm (SHR), were analyzed at each 10° increment during the three planes of arm motion. Results: There was no significant difference in upward rotation or internal rotation (P > 0.05) of the scapula between dominant and nondominant shoulders during the three planes of arm motion. However, there was a significant difference in posterior tilting (P = 0.018) during coronal plane abduction. The SHR was a large positive or negative number in the initial phase of sagittal plane flexion and scapular plane abduction, but a small positive or negative number in the initial phase of coronal plane abduction. Conclusions: Only posterior tilting of the scapula during coronal plane abduction was asymmetrical in our healthy subjects, and depending on the plane of arm motion, the pattern of the SHR differed as well. These differences should be considered in the clinical assessment of shoulder pathology. PMID:23682174

  18. Effects of 3D random correlated velocity perturbations on predicted ground motions

    USGS Publications Warehouse

    Hartzell, S.; Harmsen, S.; Frankel, A.

    2010-01-01

    Three-dimensional, finite-difference simulations of a realistic finite-fault rupture on the southern Hayward fault are used to evaluate the effects of random, correlated velocity perturbations on predicted ground motions. Velocity perturbations are added to a three-dimensional (3D) regional seismic velocity model of the San Francisco Bay Area using a 3D von Karman random medium. Velocity correlation lengths of 5 and 10 km and standard deviations in the velocity of 5% and 10% are considered. The results show that significant deviations in predicted ground velocities are seen in the calculated frequency range (≤1 Hz) for standard deviations in velocity of 5% to 10%. These results have implications for the practical limits on the accuracy of scenario ground-motion calculations and on retrieval of source parameters using higher-frequency, strong-motion data.
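
    A common way to synthesize such a random medium is spectral filtering of white noise with an isotropic von Karman power spectrum, here assumed proportional to (1 + k²a²)^-(ν+3/2); the sketch below uses that assumption with placeholder grid spacing, correlation length and Hurst exponent, and is not the USGS simulation code.

```python
import numpy as np

def von_karman_field(shape, dx, corr_len, sigma, hurst=0.3, seed=0):
    """Synthesize a 3D random field with an approximate von Karman spectrum.

    shape: grid dimensions; dx: grid spacing (km); corr_len: correlation
    length a (km); sigma: target standard deviation (fractional velocity
    perturbation); hurst: Hurst exponent nu (assumed value).
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=shape)

    ks = [np.fft.fftfreq(n, d=dx) * 2.0 * np.pi for n in shape]
    kz, ky, kx = np.meshgrid(*ks, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2

    # Isotropic von Karman power spectrum (up to a constant factor).
    power = (1.0 + k2 * corr_len ** 2) ** (-(hurst + 1.5))
    field = np.fft.ifftn(np.fft.fftn(noise) * np.sqrt(power)).real

    field -= field.mean()
    field *= sigma / field.std()
    return field

# 5% standard deviation, 10 km correlation length on an assumed 200 m grid.
perturbation = von_karman_field((64, 128, 128), dx=0.2, corr_len=10.0, sigma=0.05)
velocity_model = 3.5 * (1.0 + perturbation)      # e.g. perturbed Vs in km/s
print(perturbation.std())
```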

  19. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single-beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but this leads to the motion-structure paradox: most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  20. A 3D space-time motion evaluation for image registration in digital subtraction angiography.

    PubMed

    Taleb, N; Bentoutou, Y; Deforges, O; Taleb, M

    2001-01-01

    In modern clinical practice, Digital Subtraction Angiography (DSA) is a powerful technique for the visualization of blood vessels in a sequence of X-ray images. A serious problem encountered in this technique is the presence of artifacts due to patient motion. The resulting artifacts frequently lead to misdiagnosis or rejection of a DSA image sequence. In this paper, a new technique for removing both global and local motion artifacts is presented. It is based on a 3D space-time motion evaluation that separates pixels whose values change because of motion from those whose values change because of contrast flow. This technique proved to be very effective at correcting patient motion artifacts and is computationally cheap. Experimental results with several clinical data sets show that the technique is very fast and produces higher-quality images. PMID:11179698

  1. Ultra-high voltage electron microscopy of primitive algae illuminates 3D ultrastructures of the first photosynthetic eukaryote

    PubMed Central

    Takahashi, Toshiyuki; Nishida, Tomoki; Saito, Chieko; Yasuda, Hidehiro; Nozaki, Hisayoshi

    2015-01-01

    A heterotrophic organism 1–2 billion years ago enslaved a cyanobacterium to become the first photosynthetic eukaryote, and has diverged globally. The primary phototrophs, glaucophytes, are thought to retain ancestral features of the first photosynthetic eukaryote, but examining the protoplast ultrastructure has previously been problematic in the coccoid glaucophyte Glaucocystis due to its thick cell wall. Here, we examined the three-dimensional (3D) ultrastructure in two divergent species of Glaucocystis using ultra-high voltage electron microscopy. Three-dimensional modelling of Glaucocystis cells using electron tomography clearly showed that numerous, leaflet-like flattened vesicles are distributed throughout the protoplast periphery just underneath a single-layered plasma membrane. This 3D feature is essentially identical to that of another glaucophyte genus Cyanophora, as well as the secondary phototrophs in Alveolata. Thus, the common ancestor of glaucophytes and/or the first photosynthetic eukaryote may have shown similar 3D structures. PMID:26439276

  2. Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane.

    PubMed

    Sandau, Martin; Koblauch, Henrik; Moeslund, Thomas B; Aanæs, Henrik; Alkjær, Tine; Simonsen, Erik B

    2014-09-01

    Estimating 3D joint rotations in the lower extremities accurately and reliably remains unresolved in markerless motion capture, despite extensive studies in the past decades. The main problems have been ascribed to the limited accuracy of the 3D reconstructions. Accordingly, the purpose of the present study was to develop a new approach based on highly detailed 3D reconstructions in combination with a translationally and rotationally unconstrained articulated model. The highly detailed 3D reconstructions were synthesized from an eight-camera setup using a stereo vision approach. The subject-specific articulated model was generated with three rotational and three translational degrees of freedom for each limb segment and without any constraints on the range of motion. This approach was tested on 3D gait analysis and compared to a marker-based method. The experiment included ten healthy subjects in whom the hip, knee and ankle joints were analysed. Flexion/extension angles as well as hip abduction/adduction closely resembled those obtained from the marker-based system. However, the internal/external rotations, knee abduction/adduction and ankle inversion/eversion were less reliable. PMID:25085672

  3. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three successive processes. The first process is organ modeling, where we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory-phase candidates generated from the deformable organ model, we resolve the candidate that best matches the 3D US images according to the vessel centerline and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm in the image. The 3D model is then deformed according to the respiratory phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred based on the respiratory phase. Testing our method on real patient data, we found that the accuracy of the 3D position is within 3.79 mm and the processing time during tracking is 5.4 ms.

  4. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a method for measuring the three-dimensional (3D) movement of the forearm and upper arm during the pitching motion of baseball using inertial sensors, without strict requirements on sensor placement. Although high-accuracy measurement of sports motion is currently achieved with optical motion capture systems, these have disadvantages such as camera calibration and restrictions on the measurement location. The proposed method for 3D measurement of pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm corresponds to that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity, and posture of the upper limb agree with the actual ones. Experimental measurements of pitching motion show that the trajectories of the shoulder, elbow, and wrist estimated by the proposed method are highly correlated with those from a motion capture system, within an estimation error of about 10%.
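
    The endpoint-matching correction described above can be sketched in one dimension: integrate acceleration twice, then subtract a drift term that grows linearly in time so the final velocity and position equal their known values. Orientation tracking and gravity compensation, which the actual method performs, are omitted; the 500 Hz signal and bias below are synthetic.

```python
import numpy as np

def integrate_with_endpoint_correction(acc, dt, final_velocity=0.0,
                                       final_position=0.0):
    """Integrate 1D acceleration twice, then remove drift linearly in time
    so the trajectory ends at the known final velocity and position."""
    t = np.arange(len(acc)) * dt

    vel = np.cumsum(acc) * dt
    vel -= (vel[-1] - final_velocity) * (t / t[-1])      # velocity drift
    pos = np.cumsum(vel) * dt
    pos -= (pos[-1] - final_position) * (t / t[-1])      # position drift
    return vel, pos

# Synthetic throw-like acceleration with a small constant bias (drift source).
dt = 0.002                                  # assumed 500 Hz sampling
t = np.arange(0, 1.0, dt)
acc = 30.0 * np.sin(2 * np.pi * t) + 0.5    # bias of 0.5 m/s^2

vel, pos = integrate_with_endpoint_correction(acc, dt)
print(round(vel[-1], 6), round(pos[-1], 6))  # both ~0 by construction
```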

  5. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns

    NASA Astrophysics Data System (ADS)

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-09-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cells displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.
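
    The brightness-compensation idea can be sketched as a Lucas-Kanade-style local least-squares problem with one extra unknown that absorbs intensity changes not caused by motion. The volumes, window size and Gaussian blob below are synthetic, and this local linear solve is only a stand-in for the paper's full variational 3-D algorithm.

```python
import numpy as np

def local_flow_with_brightness(vol_t0, vol_t1, center, half=3):
    """Local 3D flow with an extra brightness-offset term.

    Solves  Ix*u + Iy*v + Iz*w + b = -It  by least squares over a small
    window, where b absorbs brightness changes not caused by motion
    (e.g. photobleaching)."""
    sl = tuple(slice(c - half, c + half + 1) for c in center)
    gz, gy, gx = np.gradient(vol_t0)
    it = vol_t1[sl] - vol_t0[sl]
    A = np.column_stack([gx[sl].ravel(), gy[sl].ravel(), gz[sl].ravel(),
                         np.ones(it.size)])
    params, *_ = np.linalg.lstsq(A, -it.ravel(), rcond=None)
    return params            # (u, v, w, brightness offset)

# Synthetic volume: a Gaussian blob shifted by (0.2, 0.3, 0.5) voxels in
# (z, y, x) and uniformly dimmed by 0.05 between the two time points.
z, y, x = np.mgrid[:32, :32, :32].astype(float)

def blob(cz, cy, cx):
    return np.exp(-((z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2)
                  / (2 * 3.0 ** 2))

v0 = blob(16, 16, 16)
v1 = blob(16.2, 16.3, 16.5) - 0.05 * (blob(16, 16, 16) > 0.01)

# Expected output approximately (0.5, 0.3, 0.2, 0.05).
print(np.round(local_flow_with_brightness(v0, v1, center=(16, 16, 16)), 2))
```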

  6. Reconstructing 3-D skin surface motion for the DIET breast cancer screening system.

    PubMed

    Botterill, Tom; Lotz, Thomas; Kashif, Amer; Chase, J Geoffrey

    2014-05-01

    Digital image-based elasto-tomography (DIET) is a prototype system for breast cancer screening. A breast is imaged while being vibrated, and the observed surface motion is used to infer the internal stiffness of the breast, hence identifying tumors. This paper describes a computer vision system for accurately measuring 3-D surface motion. A model-based segmentation is used to identify the profile of the breast in each image, and the 3-D surface is reconstructed by fitting a model to the profiles. The surface motion is measured using a modern optical flow implementation customized to the application, then trajectories of points on the 3-D surface are given by fusing the optical flow with the reconstructed surfaces. On data from human trials, the system is shown to exceed the performance of an earlier marker-based system at tracking skin surface motion. We demonstrate that the system can detect a 10 mm tumor in a silicone phantom breast. PMID:24770915

  7. 3D imaging of particle-scale rotational motion in cyclically driven granular flows

    NASA Astrophysics Data System (ADS)

    Harrington, Matt; Powers, Dylan; Cooper, Eric; Losert, Wolfgang

    Recent experimental advances have enabled three-dimensional (3D) imaging of motion, structure, and failure within granular systems. 3D imaging allows researchers to directly characterize bulk behaviors that arise from particle- and meso-scale features. For instance, segregation of a bidisperse system of spheres under cyclic shear can originate from microscopic irreversibilities and the development of convective secondary flows. Rotational motion and frictional rotational coupling, meanwhile, have been less explored in such experimental 3D systems, especially under cyclic forcing. In particular, relative amounts of sliding and/or rolling between pairs of contacting grains could influence the reversibility of both trajectories, in terms of both position and orientation. In this work, we apply the Refractive Index Matched Scanning technique to a granular system that is cyclically driven and measure both translational and rotational motion of individual grains. We relate measured rotational motion to resulting shear bands and convective flows, further indicating the degree to which pairs and neighborhoods of grains collectively rotate.

  8. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects. PMID:19505502

  9. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-01-01

    Remote monitoring of Parkinson's Disease (PD) patients with inertial sensors is a relevant method for better assessment of symptoms. We present a new approach for symptom quantification based on motion data: automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of them. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recordings of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the video-based ratings and (b) the automatically classified UPDRS is 0.48, and with (c) the 3D avatar it is 0.47. The 3D avatar is thus similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400

  10. 4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Berbeco, Ross; Lewis, John

    2015-03-01

    A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and to use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) to the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of the 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For the physical phantom datasets, the average (95th percentile) tumor localization error (TLE) in two datasets was 0.95 (2.2) mm. For digital phantoms assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time relative to the planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, and 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones, respectively. 4DCBCT-based models were shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time. Thus, generating 3D fluoroscopic images based on 4DCBCT-based motion models can capture both inter- and intra-fraction anatomical changes during treatment.
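
    The coefficient-optimization loop, estimate a projection from the motion model, compare it with the measured projection and update the PCA weights, can be sketched as follows. To stay self-contained, the "motion model" is reduced to two rigid-shift modes, the projector is a simple sum along one axis rather than a DRR, and the anatomy is a Gaussian blob; only the structure of the loop mirrors the method.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def make_volume(center, shape=(32, 48, 48)):
    """Small Gaussian blob standing in for patient anatomy."""
    zz, yy, xx = np.mgrid[:shape[0], :shape[1], :shape[2]].astype(float)
    r2 = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
          + (xx - center[2]) ** 2)
    return np.exp(-r2 / (2 * 4.0 ** 2))

def forward_project(volume):
    """Parallel-beam stand-in for a kV projection: sum along one axis."""
    return volume.sum(axis=0)

reference = make_volume((16, 24, 24))

# Toy "PCA" motion model: two modes that shift the whole volume along y and x.
modes = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

def estimate_projection(weights):
    displacement = weights @ modes               # global shift (z, y, x)
    return forward_project(nd_shift(reference, displacement, order=1))

# "Measured" treatment-time projection: anatomy shifted by (3, -2) voxels.
measured = forward_project(nd_shift(reference, (0.0, 3.0, -2.0), order=1))

cost = lambda w: np.sum((estimate_projection(w) - measured) ** 2)
result = minimize(cost, x0=np.zeros(2), method="Powell")
print(np.round(result.x, 2))                     # ~[3., -2.]
```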

  11. Solutions for 3D self-reconfiguration in a modular robotic system: implementation and motion planning

    NASA Astrophysics Data System (ADS)

    Unsal, Cem; Khosla, Pradeep K.

    2000-10-01

    In this manuscript, we discuss new solutions for the mechanical design and motion planning of a class of 3D modular self-reconfigurable robotic systems, namely I-Cubes. This system is a bipartite collection of active links that provide motions for self-reconfiguration, and cubes acting as connection points. The links are three-degree-of-freedom manipulators that can attach to and detach from the cube faces. The cubes can be positioned and oriented using the links. These capabilities enable the system to change its shape and perform locomotion tasks over difficult terrain. This paper describes a scaled-down version of the previously described system and details the new design and manufacturing approaches. The algorithms initially designed for motion planning of I-Cubes are improved to provide better results. Results of our tests are given and issues related to motion planning are discussed. The user interfaces designed for control of the system and for algorithm evaluation are also described.

  12. Edge preserving motion estimation with occlusions correction for assisted 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Pohl, Petr; Sirotenko, Michael; Tolstaya, Ekaterina; Bucha, Victor

    2014-02-01

    In this article we propose high-quality motion estimation based on a variational optical flow formulation with a non-local regularization term. To improve motion in occlusion areas we introduce occlusion motion inpainting based on 3-frame motion clustering. The variational formulation of optical flow has proved very successful; however, global optimization of the cost function can be time consuming. To achieve acceptable computation times we adapted the algorithm to optimize the convex function in a coarse-to-fine pyramid strategy suitable for implementation on modern GPU hardware. We also introduce two simplifications of the cost function that significantly decrease computation time with an acceptable decrease in quality. For motion-clustering-based inpainting in occlusion areas we introduce an effective method of occlusion-aware joint 3-frame motion clustering using the RANSAC algorithm. Occlusion areas are inpainted with the motion model taken from the cluster that shows consistency in the opposite direction. We tested our algorithm on the Middlebury optical flow benchmark, where it ranked around 20th while being one of the fastest methods near the top. We also successfully used this algorithm in a semi-automatic 2D-to-3D conversion tool for spatio-temporal background inpainting, automatic adaptive key frame detection and key point tracking.
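
    A much-simplified sketch of RANSAC-based motion clustering is shown below: affine motion models are fitted repeatedly to sparse flow vectors and dominant clusters are peeled off greedily. The occlusion-aware, joint 3-frame formulation of the paper is not reproduced; the flow field, thresholds and cluster sizes are synthetic choices.

```python
import numpy as np

def fit_affine(points, flows):
    """Least-squares affine motion model: flow = [x, y, 1] @ M (M is 3x2)."""
    X = np.column_stack([points, np.ones(len(points))])
    M, *_ = np.linalg.lstsq(X, flows, rcond=None)
    return M

def ransac_affine(points, flows, n_iter=200, thresh=0.5, seed=0):
    """RANSAC fit of one affine motion model; returns model and inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    X = np.column_stack([points, np.ones(len(points))])
    for _ in range(n_iter):
        sample = rng.choice(len(points), size=3, replace=False)
        M = fit_affine(points[sample], flows[sample])
        residual = np.linalg.norm(X @ M - flows, axis=1)
        inliers = residual < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(points[best_inliers], flows[best_inliers]), best_inliers

def cluster_motions(points, flows, min_inliers=20):
    """Greedily peel off dominant affine motions (simplified clustering)."""
    remaining = np.ones(len(points), dtype=bool)
    clusters = []
    while remaining.sum() >= min_inliers:
        M, inl = ransac_affine(points[remaining], flows[remaining])
        if inl.sum() < min_inliers:
            break
        idx = np.where(remaining)[0][inl]
        clusters.append((M, idx))
        remaining[idx] = False
    return clusters

# Synthetic flow field: background translation plus a differently moving patch.
rng = np.random.default_rng(4)
pts = rng.uniform(0, 100, size=(400, 2))
flows = np.tile([1.0, 0.0], (400, 1)) + 0.05 * rng.normal(size=(400, 2))
patch = pts[:, 0] < 30
flows[patch] = [0.0, 2.0] + 0.05 * rng.normal(size=(patch.sum(), 2))

for M, idx in cluster_motions(pts, flows):
    print(len(idx), np.round(M[2], 2))   # cluster size and translation part
```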

  13. Angle-independent measure of motion for image-based gating in 3D coronary angiography

    SciTech Connect

    Lehmann, Glen C.; Holdsworth, David W.; Drangova, Maria

    2006-05-15

    The role of three-dimensional (3D) image guidance for interventional procedures and minimally invasive surgeries is increasing for the treatment of vascular disease. Currently, most interventional procedures are guided by two-dimensional x-ray angiography, but computed rotational angiography has the potential to provide 3D geometric information about the coronary arteries. The creation of 3D angiographic images of the coronary arteries requires synchronization of data acquisition with respect to the cardiac cycle, in order to minimize motion artifacts. This can be achieved by inferring the extent of motion from a patient's electrocardiogram (ECG) signal. However, a direct measurement of motion (from the 2D angiograms) has the potential to improve the 3D angiographic images by ensuring that only projections acquired during periods of minimal motion are included in the reconstruction. This paper presents an image-based metric for measuring the extent of motion in 2D x-ray angiographic images. Adaptive histogram equalization was applied to projection images to increase the sharpness of coronary arteries and the superior-inferior component of the weighted centroid (SIC) was measured. The SIC constitutes an image-based metric that can be used to track vessel motion, independent of apparent motion induced by the rotational acquisition. To evaluate the technique, six consecutive patients scheduled for routine coronary angiography procedures were studied. We compared the end of the SIC rest period (ρ) to R-waves (R) detected in the patient's ECG and found a mean difference of 14±80 ms. Two simultaneous angular positions were acquired and ρ was detected for each position. There was no statistically significant difference (P=0.79) between ρ in the two simultaneously acquired angular positions. Thus we have shown the SIC to be independent of view angle, which is critical for rotational angiography. A preliminary image-based gating strategy that employed the SIC

  14. Using a wireless motion controller for 3D medical image catheter interactions

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    State-of-the-art morphological imaging techniques usually provide high-resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation maps, fly-through cameras or virtual interaction approaches. There has been ongoing development and improvement of controllers that help to interact with 3D environments in the domain of computer games. These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed by Nintendo, can be used for catheter navigation and planning purposes. By default the Wiimote controller only measures coarse acceleration over a range of ±3g with 10% sensitivity, and orientation. Therefore, a pose estimation algorithm was developed for computing accurate position and orientation in 3D space relative to 4 infrared LEDs. Current results show that for the translation it is possible to obtain a mean error of (0.38 cm, 0.41 cm, 4.94 cm) and for the rotation (0.16, 0.28), respectively. Within this paper we introduce a clinical prototype that allows steering of a virtual fly-through camera attached to the catheter tip by the Wii controller on the basis of a segmented vessel tree.

  15. Ultrasonic diaphragm tracking for cardiac interventional navigation on 3D motion compensated static roadmaps

    NASA Astrophysics Data System (ADS)

    Timinger, Holger; Kruger, Sascha; Dietmayer, Klaus; Borgert, Joern

    2005-04-01

    In this paper, a novel approach to cardiac interventional navigation on 3D motion-compensated static roadmaps is presented. Current coronary interventions, e.g. percutaneous transluminal coronary angioplasties, are performed using 2D X-ray fluoroscopy. This comes with well-known drawbacks such as radiation exposure, use of contrast agent, and limited visualization, e.g. overlap and foreshortening, due to projection imaging. In the presented approach, the interventional device, i.e. the catheter, is tracked using an electromagnetic tracking system (MTS). The catheter's position is mapped into a static 3D image of the volume of interest (VOI) by means of an affine registration. In order to compensate for respiratory motion of the catheter with respect to the static image, a parameterized affine motion model is used, driven by a respiratory sensor signal. This signal is derived from ultrasonic diaphragm tracking. The motion compensation for the heartbeat is done using ECG gating. The methods are validated using a heart and diaphragm phantom. The mean displacement of the catheter due to the simulated organ motion decreases from approximately 9 mm to 1.3 mm. This result indicates that the proposed method is able to reconstruct the catheter position within the VOI accurately and that it can help to overcome drawbacks of current interventional procedures.

  16. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, the non-cooperative targets may have strong maneuverability, which tends to cause time-variant Doppler modulation and imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimations of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target of uniformly accelerated rotation, the rotational modulation in echo is formulated as chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat the multi-channel ISAR image formation as a joint-sparsity constraint optimization. Then, a modified orthogonal matching pursuit (OMP) algorithm is employed to solve the optimization problem to produce high-resolution range-Doppler (RD) images and chirp parameter estimation. The 3D target geometry and the motion estimations are followed by using the acquired RD images and chirp parameters. Herein, a joint estimation approach of 3D geometry and rotation motion is presented to realize outlier removing and error reduction. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data are performed to confirm the effectiveness of the proposed algorithm. PMID:26930684
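
    The paper uses a modified OMP over a chirp-Fourier dictionary with a joint-sparsity constraint across channels. For orientation, a plain single-channel orthogonal matching pursuit on a generic random dictionary is sketched below; it shows the greedy atom selection and least-squares refit that any OMP variant shares, nothing specific to the ISAR formulation.

```python
import numpy as np

def omp(dictionary, signal, n_nonzero):
    """Plain orthogonal matching pursuit: greedy atom selection followed by
    a least-squares refit of the coefficients on the selected support."""
    residual = signal.copy()
    support = []
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_nonzero):
        correlations = np.abs(dictionary.T @ residual)
        if support:
            correlations[support] = 0.0          # do not reselect atoms
        support.append(int(np.argmax(correlations)))
        sub = dictionary[:, support]
        sol, *_ = np.linalg.lstsq(sub, signal, rcond=None)
        residual = signal - sub @ sol
    coeffs[support] = sol
    return coeffs, support

# Random overcomplete dictionary and a 3-sparse synthetic signal.
rng = np.random.default_rng(5)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256)
x_true[[10, 70, 200]] = [1.5, -2.0, 0.8]
y = D @ x_true

x_hat, support = omp(D, y, n_nonzero=3)
print(sorted(support), np.round(x_hat[sorted(support)], 2))
```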

  17. A motion- and sound-activated, 3D-printed, chalcogenide-based triboelectric nanogenerator.

    PubMed

    Kanik, Mehmet; Say, Mehmet Girayhan; Daglar, Bihter; Yavuz, Ahmet Faruk; Dolas, Muhammet Halit; El-Ashry, Mostafa M; Bayindir, Mehmet

    2015-04-01

    A multilayered triboelectric nanogenerator (MULTENG) that can be actuated by acoustic waves, vibration of a moving car, and tapping motion is built using a 3D-printing technique. The MULTENG can generate an open-circuit voltage of up to 396 V and a short-circuit current of up to 1.62 mA, and can power 38 LEDs. The layers of the triboelectric generator are made of polyetherimide nanopillars and chalcogenide core-shell nanofibers. PMID:25722118

  18. Ground motion simulations in Marmara (Turkey) region from 3D finite difference method

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Douglas, John

    2016-04-01

    In the framework of the European project MARSite (2012-2016), one of the main contributions from our research team was to provide ground-motion simulations for the Marmara region from various earthquake source scenarios. We adopted a 3D finite difference code, taking into account the 3D structure around the Sea of Marmara (including the bathymetry) and the sea layer. We simulated two moderate earthquakes (about Mw4.5) and found that the 3D structure significantly improves the waveforms compared to a 1D layered model. Simulations were carried out for different earthquakes (moderate point sources and large finite sources) in order to provide shake maps (Aochi and Ulrich, BSSA, 2015), to study the variability of ground-motion parameters (Douglas & Aochi, BSSA, 2016), and to provide synthetic seismograms for blind inversion tests (Diao et al., GJI, 2016). The results are also planned to be integrated in broadband ground-motion simulations, tsunami generation and simulations of triggered landslides (in progress by different partners). The simulations are freely shared among the partners via the internet and visualizations of the results are posted on the project's homepage. All these simulations should be seen as a reference for this region, as they are based on the latest knowledge obtained during the MARSite project, although refinement and validation of the model parameters and simulations remain a continuing research task relying on ongoing observations. The numerical code used, the models and the simulations are available on demand.

  19. Free-Breathing 3D Whole Heart Black Blood Imaging with Motion Sensitized Driven Equilibrium

    PubMed Central

    Srinivasan, Subashini; Hu, Peng; Kissinger, Kraig V.; Goddu, Beth; Goepfert, Lois; Schmidt, Ehud J.; Kozerke, Sebastian; Nezafat, Reza

    2012-01-01

    Purpose: To assess the efficacy and robustness of motion-sensitized driven equilibrium (MSDE) for blood suppression in volumetric 3D whole-heart cardiac MR. Materials and Methods: To investigate the efficacy of MSDE on blood suppression and myocardial SNR loss with different imaging sequences, 7 healthy adult subjects were imaged using 3D ECG-triggered MSDE-prep T1-weighted turbo spin echo (TSE) and spoiled gradient echo (GRE), after optimization of the MSDE parameters in a pilot study of 5 subjects. Imaging artifacts and myocardial and blood SNR were assessed. Subsequently, the feasibility of isotropic-spatial-resolution MSDE-prep black-blood imaging was assessed in 6 subjects. Finally, 15 patients with known or suspected cardiovascular disease were recruited and imaged using a conventional multi-slice 2D DIR TSE imaging sequence and 3D MSDE-prep spoiled GRE. Results: The MSDE-prep yields significant blood suppression (75-92%), enabling volumetric 3D black-blood assessment of the whole heart with significantly improved visualization of the chamber walls. The MSDE-prep also allowed successful acquisition of black-blood images with isotropic spatial resolution. In the patient study, 3D black-blood MSDE-prep and DIR resulted in similar blood suppression in the LV and RV walls, but the MSDE-prep had superior myocardial signal and wall sharpness. Conclusion: MSDE-prep allows volumetric black-blood imaging of the heart. PMID:22517477

  20. Inferred motion perception of light sources in 3D scenes is color-blind.

    PubMed

    Gerhard, Holly E; Maloney, Laurence T

    2013-01-01

    In everyday scenes, the illuminant can vary spatially in chromaticity and luminance, and change over time (e.g. sunset). Such variation generates dramatic image effects too complex for any contemporary machine vision system to overcome, yet human observers are remarkably successful at inferring object properties separately from lighting, an ability linked with estimation and tracking of light field parameters. Which information does the visual system use to infer light field dynamics? Here, we specifically ask whether color contributes to inferred light source motion. Observers viewed 3D surfaces illuminated by an out-of-view moving collimated source (sun) and a diffuse source (sky). In half of the trials, the two sources differed in chromaticity, thereby providing more information about motion direction. Observers discriminated light motion direction above chance, and only the least sensitive observer benefited slightly from the added color information, suggesting that color plays only a very minor role for inferring light field dynamics. PMID:23755354

  1. Towards robust 3D visual tracking for motion compensation in beating heart surgery.

    PubMed

    Richa, Rogério; Bó, Antônio P L; Poignet, Philippe

    2011-06-01

    In the context of minimally invasive cardiac surgery, active vision-based motion compensation schemes have been proposed for mitigating problems related to physiological motion. However, robust and accurate visual tracking remains a difficult task. The purpose of this paper is to present a robust visual tracking method that estimates the 3D temporal and spatial deformation of the heart surface using stereo endoscopic images. The novelty is the combination of a visual tracking method based on a Thin-Plate Spline (TPS) model for representing the heart surface deformations with a temporal heart motion model based on a time-varying dual Fourier series for overcoming tracking disturbances or failures. The considerable improvements in tracking robustness facing specular reflections and occlusions are demonstrated through experiments using images of in vivo porcine and human beating hearts. PMID:21277821

  2. Integration of 3D Structure from Disparity into Biological Motion Perception Independent of Depth Awareness

    PubMed Central

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers’ depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception. PMID:24586622

  3. Stopping is not an option: the evolution of unstoppable motion elements (primitives).

    PubMed

    Sosnik, Ronen; Chaim, Eliyahu; Flash, Tamar

    2015-08-01

    Stopping performance is known to depend on low-level motion features, such as movement velocity. It is not known, however, whether it is also subject to high-level motion constraints. Here, we report results of 15 subjects instructed to connect four target points depicted on a digitizing tablet and stop "as rapidly as possible" upon hearing a "stop" cue (tone). Four subjects connected target points with straight paths, whereas 11 subjects generated movements corresponding to coarticulation between adjacent movement components. For the noncoarticulating and coarticulating subjects, stopping performance was not correlated or only weakly correlated with motion velocity, respectively. The generation of a straight, point-to-point movement or a smooth, curved trajectory was not disturbed by the occurrence of a stop cue. Overall, the results indicate that stopping performance is subject to high-level motion constraints, such as the completion of a geometrical plan, and that globally planned movements, once started, must run to completion, providing evidence for the definition of a motion primitive as an unstoppable motion element. PMID:26041827

  4. A Little Knowledge of Ground Motion: Explaining 3-D Physics-Based Modeling to Engineers

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2014-12-01

    Users of earthquake planning scenarios require the ground-motion map to be credible enough to justify costly planning efforts, but not all ground-motion maps are right for all uses. There are two common ways to create a map of ground motion for a hypothetical earthquake. One approach is to map the median shaking estimated by empirical attenuation relationships. The other uses 3-D physics-based modeling, in which one analyzes a mathematical model of the earth's crust near the fault rupture and calculates the generation and propagation of seismic waves from source to ground surface by first principles. The two approaches produce different-looking maps. The more-familiar median maps smooth out variability and correlation. Using them in a planning scenario can lead to a systematic underestimation of damage and loss, and could leave a community underprepared for realistic shaking. The 3-D maps show variability, including some very high values that can disconcert non-scientists. So when the USGS Science Application for Risk Reduction's (SAFRR) Haywired scenario project selected 3-D maps, it was necessary to explain to scenario users—especially engineers who often use median maps—the differences, advantages, and disadvantages of the two approaches. We used authority, empirical evidence, and theory to support our choice. We prefaced our explanation with SAFRR's policy of using the best available earth science, and cited the credentials of the maps' developers and the reputation of the journal in which they published the maps. We cited recorded examples from past earthquakes of extreme ground motions that are like those in the scenario map. We explained the maps on theoretical grounds as well, explaining well established causes of variability: directivity, basin effects, and source parameters. The largest mapped motions relate to potentially unfamiliar extreme-value theory, so we used analogies to human longevity and the average age of the oldest person in samples of

  5. 3D Motion Planning Algorithms for Steerable Needles Using Inverse Kinematics

    PubMed Central

    Duindam, Vincent; Xu, Jijie; Alterovitz, Ron; Sastry, Shankar; Goldberg, Ken

    2010-01-01

    Steerable needles can be used in medical applications to reach targets behind sensitive or impenetrable areas. The kinematics of a steerable needle are nonholonomic and, in 2D, equivalent to a Dubins car with constant radius of curvature. In 3D, the needle can be interpreted as an airplane with constant speed and pitch rate, zero yaw, and controllable roll angle. We present a constant-time motion planning algorithm for steerable needles based on explicit geometric inverse kinematics similar to the classic Paden-Kahan subproblems. Reachability and path competitivity are analyzed using analytic comparisons with shortest path solutions for the Dubins car (for 2D) and numerical simulations (for 3D). We also present an algorithm for local path adaptation using null-space results from redundant manipulator theory. Finally, we discuss several ways to use and extend the inverse kinematics solution to generate needle paths that avoid obstacles. PMID:21359051

  6. 3D delivered dose assessment using a 4DCT-based motion model

    SciTech Connect

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Mishra, Pankaj E-mail: jhlewis@lroc.harvard.edu; Lewis, John H. E-mail: jhlewis@lroc.harvard.edu; Seco, Joao

    2015-06-15

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images

  7. 3D delivered dose assessment using a 4DCT-based motion model

    PubMed Central

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Seco, Joao; Mishra, Pankaj; Lewis, John H.

    2015-01-01

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images
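    To illustrate the core of step (2) in the two records above, the Python sketch below expresses a displacement vector field (DVF) as a linear combination of basis DVFs and optimizes the coefficients so that a toy projection of the deformed volume matches a measured 2D projection. The sum-along-one-axis projector, the optimizer choice, and all names are illustrative assumptions; the actual method uses a realistic kV/MV projection model.

      # Hedged sketch: fit motion-model coefficients so the projected deformed volume
      # matches a measured projection (DVF = linear combination of basis DVFs).
      import numpy as np
      from scipy.optimize import minimize
      from scipy.ndimage import map_coordinates

      def deform(volume, dvf):
          # Warp a 3D volume by a DVF of shape (3, nx, ny, nz), linear interpolation.
          grid = np.indices(volume.shape).astype(float)
          return map_coordinates(volume, grid + dvf, order=1, mode="nearest")

      def project(volume, axis=0):
          # Toy forward projector: integrate along one axis (stand-in for a kV/MV projection).
          return volume.sum(axis=axis)

      def fit_coefficients(ref_volume, basis_dvfs, measured_projection):
          # basis_dvfs: (K, 3, nx, ny, nz); returns weights of the K basis DVFs.
          def cost(w):
              dvf = np.tensordot(w, basis_dvfs, axes=1)      # linear combination of basis DVFs
              diff = project(deform(ref_volume, dvf)) - measured_projection
              return np.sum(diff ** 2)
          w0 = np.zeros(len(basis_dvfs))
          return minimize(cost, w0, method="Nelder-Mead").x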

  8. Computational optical-sectioning microscopy for 3D quantization of cell motion: results and challenges

    NASA Astrophysics Data System (ADS)

    McNally, James G.

    1994-09-01

    How cells move and navigate within a 3D tissue mass is of central importance in such diverse problems as embryonic development, wound healing and metastasis. This locomotion can now be visualized and quantified by using computational optical-sectioning microscopy. In this approach, a series of 2D images at different depths in a specimen are stacked to construct a 3D image, and then with a knowledge of the microscope's point-spread function, the actual distribution of fluorescent intensity in the specimen is estimated via computation. When coupled with wide-field optics and a cooled CCD camera, this approach permits non-destructive 3D imaging of living specimens over long time periods. With these techniques, we have observed a complex diversity of motile behaviors in a model embryonic system, the cellular slime mold Dictyostelium. To understand the mechanisms which control these various behaviors, we are examining motion in various Dictyostelium mutants with known defects in proteins thought to be essential for signal reception, cell-cell adhesion or locomotion. This application of computational techniques to analyze 3D cell locomotion raises several technical challenges. Image restoration techniques must be fast enough to process numerous 1 Gbyte time-lapse data sets (16 Mbytes per 3D image X 60 time points). Because some cells are weakly labeled and background intensity is often high due to unincorporated dye, the SNR in some of these images is poor. Currently, the images are processed by a regularized linear least-squares restoration method, and occasionally by a maximum-likelihood method. Also required for these studies are accurate automated tracking procedures to generate both 3D trajectories for individual cells and 3D flows for a group of cells. Tracking is currently done independently for each cell, using a cell's image as a template to search for a similar image at the next time point. Finally, sophisticated visualization techniques are needed to view the

  9. 3D motion of DNA-Au nanoconjugates in graphene liquid cell electron microscopy.

    PubMed

    Chen, Qian; Smith, Jessica M; Park, Jungwon; Kim, Kwanpyo; Ho, Davy; Rasool, Haider I; Zettl, Alex; Alivisatos, A Paul

    2013-09-11

    Liquid-phase transmission electron microscopy (TEM) can probe and visualize dynamic events with structural or functional details at the nanoscale in a liquid medium. Earlier efforts have focused on the growth and transformation kinetics of hard material systems, relying on their stability under the electron beam. Our recently developed graphene liquid cell technique pushed the spatial resolution of such imaging to the atomic scale but still focused on growth trajectories of metallic nanocrystals. Here, we adopt this technique to image the three-dimensional (3D) dynamics of soft materials instead, with double-stranded DNA (dsDNA) connecting Au nanocrystals as one example, at nanometer resolution. We demonstrate first that a graphene liquid cell can seal an aqueous sample solution, of lower vapor pressure than previously investigated, well against the high vacuum in TEM. Then, from quantitative analysis of real time nanocrystal trajectories, we show that the status and configuration of dsDNA dictate the motions of linked nanocrystals throughout the imaging time of minutes. This sustained connecting ability of dsDNA enables this unprecedented continuous imaging of its dynamics via TEM. Furthermore, the inert graphene surface minimizes sample-substrate interaction and allows the whole nanostructure to rotate freely in the liquid environment; we thus develop and implement the reconstruction of 3D configuration and motions of the nanostructure from the series of 2D projected TEM images captured while it rotates. In addition to further proving the nanoconjugate structural stability, this reconstruction demonstrates 3D dynamic imaging by TEM beyond its conventional use in seeing a flattened and dry sample. Altogether, we foresee the new and exciting use of graphene liquid cell TEM in imaging 3D biomolecular transformations or interaction dynamics at nanometer resolution. PMID:23944844

  10. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvement, together with decreasing cost, is opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture studies and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were located underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  11. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvement, together with decreasing cost, is opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture studies and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were located underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846
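    As an aside to the accuracy check described in the record above, the short Python sketch below shows one straightforward way to quantify the rigid-bar test: compare each reconstructed inter-marker distance to the known bar length. The variable names and the error summary (mean absolute error and spread) are illustrative assumptions, not the authors' code.

      # Hedged sketch: error of reconstructed inter-marker distances for a rigid bar
      # carrying two markers at a known (nominal) distance, acquired over many frames.
      import numpy as np

      def inter_marker_errors(p1, p2, nominal_mm):
          """p1, p2: (N, 3) reconstructed 3D marker positions over N frames."""
          distances = np.linalg.norm(p1 - p2, axis=1)      # reconstructed bar length per frame
          errors = distances - nominal_mm                  # deviation from the known length
          return np.mean(np.abs(errors)), np.std(errors)   # mean absolute error, spread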

  12. Validation of INSAT-3D atmospheric motion vectors for monsoon 2015

    NASA Astrophysics Data System (ADS)

    Sharma, Priti; Rani, S. Indira; Das Gupta, M.

    2016-05-01

    Atmospheric Motion Vectors (AMVs) over the Indian Ocean and surrounding region are one of the most important sources of tropospheric wind information assimilated in numerical weather prediction (NWP) systems. Earlier studies showed that the quality of AMVs from the Indian geostationary satellite Kalpana-1 was not comparable to that of other geostationary satellites over this region, and hence they were not used in NWP systems. The Indian satellite INSAT-3D was successfully launched on July 26, 2013 with an upgraded imaging system compared to that of the previous Indian satellite Kalpana-1. INSAT-3D has a middle-infrared band (3.80-4.00 μm) capable of imaging low clouds and fog at night. Three consecutive images at 30-minute intervals are used to derive the AMVs. A new height assignment scheme (using the NWP first guess and replacing the old empirical GA method) and a modified quality control scheme were implemented for deriving INSAT-3D AMVs. In this paper an attempt has been made to validate these AMVs against in-situ observations as well as against NCMRWF's NWP first guess for monsoon 2015. AMVs are subdivided into three pressure levels in the vertical, viz. low (1000-700 hPa), middle (700-400 hPa) and high (400-100 hPa), for validation purposes. Several statistics, viz. normalized root mean square vector difference, biases, etc., have been computed over different latitudinal belts. Results show that the general mean monsoon circulation, along with all the transient monsoon systems, is well captured by INSAT-3D AMVs, and that the error statistics (e.g., RMSE) of INSAT-3D AMVs are now comparable to those of other geostationary satellites.

  13. Broadband Near-Field Ground Motion Simulations in 3D Scattering Media

    NASA Astrophysics Data System (ADS)

    Imperatori, Walter; Mai, Martin

    2013-04-01

    The heterogeneous nature of Earth's crust is manifested in the scattering of propagating seismic waves. In recent years, different techniques have been developed to include such phenomena in broadband ground-motion calculations, treating scattering either as a semi-stochastic or a purely stochastic process. In this study, we simulate broadband (0-10 Hz) ground motions with a 3D finite-difference wave propagation solver in several 3D media characterized by Von Karman correlation functions with different correlation lengths and standard deviation values. Our goal is to investigate scattering characteristics and their influence on the seismic wave-field at short and intermediate distances from the source in terms of ground motion parameters. We also examine other relevant scattering-related phenomena, such as the loss of radiation pattern and the directivity breakdown. We first simulate broadband ground motions for a point-source characterized by a classic omega-squared spectrum model. Fault finiteness is then introduced by means of a Haskell-type source model presenting both sub-shear and super-shear rupture speed. Results indicate that scattering plays an important role in ground motion even at short distances from the source, where source effects are thought to dominate. In particular, peak ground motion parameters can be affected even at relatively low frequencies, implying that earthquake ground-motion simulations should include scattering also for PGV calculations. At the same time, we find a gradual loss of the source signature in the 2-5 Hz frequency range, together with a distortion of the Mach cones in case of super-shear rupture. For more complex source models and a truly heterogeneous Earth, these effects may occur even at lower frequencies. Our simulations suggest that Von Karman correlation functions with correlation length between several hundred meters and few kilometers, Hurst exponent around 0.3 and standard deviation in the 5-10% range
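    For readers unfamiliar with how such random media are built, the following Python sketch generates a 2D random velocity-perturbation field with a Von Karman spectrum by spectral filtering of white noise. The spectral shape (1 + k²a²) raised to -(H + 1) for two dimensions, the rescaling to a target standard deviation, and all parameter values are assumptions for illustration only; the absolute spectral normalization is deliberately left to the rescaling step.

      # Hedged sketch: 2D Von Karman random field (correlation length a, Hurst exponent H,
      # standard deviation sigma) by filtering white noise in the Fourier domain.
      import numpy as np

      def von_karman_field(n, dx, a=500.0, hurst=0.3, sigma=0.05, seed=0):
          rng = np.random.default_rng(seed)
          k = np.fft.fftfreq(n, d=dx) * 2 * np.pi              # angular wavenumbers
          kmag = np.sqrt(k[:, None] ** 2 + k[None, :] ** 2)
          psd = (1.0 + (kmag * a) ** 2) ** (-(hurst + 1.0))    # assumed 2D Von Karman shape
          noise = rng.standard_normal((n, n))
          field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real
          field *= sigma / field.std()                         # enforce e.g. 5-10% perturbation
          return field                                         # multiply onto a background model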

  14. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  15. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  16. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    PubMed

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  17. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground-truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In an MRI therapy guidance scenario, imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.
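    The comparison step described in this record can be summarized by a simple metric: resample the deformation field estimated on the downsampled data back to the reference grid and compute the mean 3D endpoint error against the reference field. The sketch below is illustrative only; the array layout, the use of linear resampling, and the assumption that grid sizes are integer multiples of each other are all assumptions.

      # Hedged sketch: mean endpoint error between a reference DVF and a DVF estimated
      # on a coarser grid (both stored as arrays of shape (3, nx, ny, nz), in mm).
      import numpy as np
      from scipy.ndimage import zoom

      def endpoint_error(dvf_ref, dvf_low):
          factors = np.array(dvf_ref.shape[1:]) / np.array(dvf_low.shape[1:])
          dvf_up = np.stack([zoom(c, factors, order=1) for c in dvf_low])  # back to fine grid
          return np.mean(np.linalg.norm(dvf_ref - dvf_up, axis=0))         # mean 3D error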

  18. Interactive Motion Planning for Steerable Needles in 3D Environments with Obstacles

    PubMed Central

    Patil, Sachin; Alterovitz, Ron

    2011-01-01

    Bevel-tip steerable needles for minimally invasive medical procedures can be used to reach clinical targets that are behind sensitive or impenetrable areas and are inaccessible to straight, rigid needles. We present a fast algorithm that can compute motion plans for steerable needles to reach targets in complex, 3D environments with obstacles at interactive rates. The fast computation makes this method suitable for online control of the steerable needle based on 3D imaging feedback and allows physicians to interactively edit the planning environment in real-time by adding obstacle definitions as they are discovered or become relevant. We achieve this fast performance by using a Rapidly Exploring Random Tree (RRT) combined with a reachability-guided sampling heuristic to alleviate the sensitivity of the RRT planner to the choice of the distance metric. We also relax the constraint of constant-curvature needle trajectories by relying on duty-cycling to realize bounded-curvature needle trajectories. These characteristics enable us to achieve orders of magnitude speed-up compared to previous approaches; we compute steerable needle motion plans in under 1 second for challenging environments containing complex, polyhedral obstacles and narrow passages. PMID:22294214
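    The duty-cycling idea mentioned in the record above can be captured in a few lines: alternating insertion with continuous spinning (which averages out the bevel and yields straight motion) and without spinning (which yields the maximum natural curvature) produces an effective curvature between zero and the maximum. The linear relation used below is the commonly cited approximation and should be treated as an assumption, as should the function name.

      # Hedged sketch: spin duty cycle needed to realize a desired bounded curvature.
      def duty_cycle_for_curvature(kappa_desired, kappa_max):
          """Return the spinning duty cycle alpha in [0, 1] for kappa_desired <= kappa_max."""
          if not 0 <= kappa_desired <= kappa_max:
              raise ValueError("desired curvature must lie in [0, kappa_max]")
          return 1.0 - kappa_desired / kappa_max   # alpha = 1 -> always spinning -> straight path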

  19. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking.

    PubMed

    Dettmer, Simon L; Keyser, Ulrich F; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces. PMID:24593372
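    To make the "local mean squared displacement-vs-time" idea above concrete, the sketch below estimates per-axis local diffusion coefficients from a single 3D trajectory, restricting each displacement to those starting inside a chosen spatial bin. The binning convention, lag range, and the 1D relation MSD = 2Dt used for the slope are stated assumptions rather than the authors' implementation.

      # Hedged sketch: local MSD-vs-lag-time curves and per-axis diffusion coefficients.
      import numpy as np

      def local_msd(track, dt, in_bin, max_lag=10):
          """track: (N, 3) positions; dt: frame interval; in_bin: (N,) boolean mask of the local region."""
          lags = np.arange(1, max_lag + 1)
          msd = np.empty((len(lags), 3))
          for i, lag in enumerate(lags):
              disp = track[lag:] - track[:-lag]
              start_ok = in_bin[:-lag]                      # assign each displacement to its start point
              msd[i] = np.mean(disp[start_ok] ** 2, axis=0) # per-axis MSD (x, y, z)
          slopes = np.polyfit(lags * dt, msd, 1)[0]         # one slope per axis
          return lags * dt, msd, slopes / 2.0               # MSD = 2 D t per axis, so D = slope / 2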

  20. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  1. 3D motion tracking of the heart using Harmonic Phase (HARP) isosurfaces

    NASA Astrophysics Data System (ADS)

    Soliman, Abraam S.; Osman, Nael F.

    2010-03-01

    Tags are non-invasive features induced in the heart muscle that enable the tracking of heart motion. Each tag line, in fact, corresponds to a 3D tag surface that deforms with the heart muscle during the cardiac cycle. Tracking of tag surface deformation is useful for the analysis of left ventricular motion. Cardiac material markers (Kerwin et al, MIA, 1997) can be obtained from the intersections of orthogonal surfaces which can be reconstructed from short- and long-axis tagged images. The proposed method uses the Harmonic Phase (HARP) method for tracking tag lines corresponding to a specific harmonic phase value; the reconstruction of grid tag surfaces is then achieved by a Delaunay triangulation-based interpolation of the sparse tag points. Having three different tag orientations from short- and long-axis images, the proposed method showed the deformation of 3D tag surfaces during the cardiac cycle. Previous work on tag surface reconstruction was restricted to the "dark" tag lines; however, the use of HARP as proposed enables the reconstruction of isosurfaces based on their harmonic phase values. The use of HARP also provides a fast and accurate way to identify and track tag lines, and hence to generate the surfaces.
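    The Delaunay-based interpolation of sparse tag points mentioned above can be illustrated with SciPy, which triangulates the scattered in-plane locations and linearly interpolates the surface coordinate on a regular grid. This is a minimal sketch under the assumption that the surface can be written as z(x, y) over the sampled region; it is not the paper's implementation.

      # Hedged sketch: reconstruct a tag surface from sparse tracked points sharing one
      # harmonic phase value, via Delaunay triangulation + linear interpolation.
      import numpy as np
      from scipy.interpolate import griddata

      def reconstruct_tag_surface(points_xyz, grid_step=1.0):
          """points_xyz: (N, 3) sparse tag points."""
          xy, z = points_xyz[:, :2], points_xyz[:, 2]
          xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), grid_step)
          ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), grid_step)
          gx, gy = np.meshgrid(xs, ys)
          gz = griddata(xy, z, (gx, gy), method="linear")   # Delaunay triangulation under the hood
          return gx, gy, gz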

  2. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697
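    The final step of the pipeline above (random walker segmentation from propagated seeds) is available off the shelf in scikit-image; the sketch below shows that step only, assuming the registration-based seed propagation has already produced a label volume (1 = tongue seeds, 2 = background seeds, 0 = unlabeled). The label convention and the beta value are illustrative assumptions.

      # Hedged sketch: random walker segmentation of one 3D time frame from seed labels.
      from skimage.segmentation import random_walker

      def segment_tongue_volume(volume, labels, beta=130):
          """volume: 3D MRI at one time frame; labels: integer seed array of the same shape."""
          seg = random_walker(volume, labels, beta=beta)
          return seg == 1                                   # boolean tongue mask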

  3. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  4. New method for detection of complex 3D fracture motion - Verification of an optical motion analysis system for biomechanical studies

    PubMed Central

    2012-01-01

    Background Fracture-healing depends on interfragmentary motion. For improved osteosynthesis and fracture-healing, the micromotion between fracture fragments is undergoing intensive research. The detection of 3D micromotions at the fracture gap still presents a challenge for conventional tactile measurement systems. Optical measurement systems may be easier to use than conventional systems, but, as yet, cannot guarantee accuracy. The purpose of this study was to validate the optical measurement system PONTOS 5M for use in biomechanical research, including measurement of micromotion. Methods A standardized transverse fracture model was created to detect interfragmentary motions under axial loadings of up to 200 N. Measurements were performed using the optical measurement system and compared with a conventional high-accuracy tactile system consisting of 3 standard digital dial indicators (1 μm resolution; 5 μm error limit). Results We found that the deviation in the mean average motion detection between the systems was at most 5.3 μm, indicating that detection of micromotion was possible with the optical measurement system. Furthermore, we identified two considerable advantages of the optical measurement system. First, only with the optical system could interfragmentary motion be analyzed directly at the fracture gap. Second, the calibration of the optical system could be performed faster, more safely and more easily than that of the tactile system. Conclusion The PONTOS 5M optical measurement system appears to be a favorable alternative to previously used tactile measurement systems for biomechanical applications. Easy handling, combined with high accuracy for 3D detection of micromotions (≤ 5 μm), suggests the likelihood of high user acceptance. This study was performed in the context of the deployment of a new implant (dynamic locking screw; Synthes, Oberdorf, Switzerland). PMID:22405047

  5. Feasibility Study for Ballet E-Learning: Automatic Composition System for Ballet "Enchainement" with Online 3D Motion Data Archive

    ERIC Educational Resources Information Center

    Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako

    2009-01-01

    This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…

  6. 3D cardiac motion reconstruction from CT data and tagged MRI.

    PubMed

    Wang, Xiaoxu; Mihalef, Viorel; Qian, Zhen; Voros, Szilard; Metaxas, Dimitris

    2012-01-01

    In this paper we present a novel method for left ventricle (LV) endocardium motion reconstruction using high resolution CT data and tagged MRI. High resolution CT data provide anatomic details of the LV endocardial surface, such as the papillary muscles and trabeculae carneae. Tagged MRI provides better time resolution. The combination of these two imaging techniques can give us a better understanding of left ventricle motion. The high resolution CT images are segmented with the mean shift method to generate the LV endocardium mesh. The meshless deformable model built from the high resolution endocardial surface of the CT data is fitted to the tagged MRI of the same phase. 3D deformation of the myocardium is computed with Lagrangian dynamics and local Laplacian deformation. The segmented inner surface of the left ventricle is compared with a picture of the heart's inner surface and shows high agreement. The papillary muscles are attached to the inner surface by their roots. The free wall of the left ventricle inner surface is covered with trabeculae carneae. The deformation of the heart wall and the papillary muscles in the first half of the cardiac cycle is presented. The motion reconstruction results are very close to the live heart video. PMID:23366825

  7. 3D Cardiac Motion Reconstruction from CT Data and Tagged MRI

    PubMed Central

    Wang, Xiaoxu; Mihalef, Viorel; Qian, Zhen; Voros, Szilard; Metaxas, Dimitris

    2016-01-01

    In this paper we present a novel method for left ventricle (LV) endocardium motion reconstruction using high resolution CT data and tagged MRI. High resolution CT data provide anatomic details of the LV endocardial surface, such as the papillary muscles and trabeculae carneae. Tagged MRI provides better time resolution. The combination of these two imaging techniques can give us a better understanding of left ventricle motion. The high resolution CT images are segmented with the mean shift method to generate the LV endocardium mesh. The meshless deformable model built from the high resolution endocardial surface of the CT data is fitted to the tagged MRI of the same phase. 3D deformation of the myocardium is computed with Lagrangian dynamics and local Laplacian deformation. The segmented inner surface of the left ventricle is compared with a picture of the heart's inner surface and shows high agreement. The papillary muscles are attached to the inner surface by their roots. The free wall of the left ventricle inner surface is covered with trabeculae carneae. The deformation of the heart wall and the papillary muscles in the first half of the cardiac cycle is presented. The motion reconstruction results are very close to the live heart video. PMID:23366825

  8. 3D hand motion trajectory prediction from EEG mu and beta bandpower.

    PubMed

    Korik, A; Sosnik, R; Siddique, N; Coyle, D

    2016-01-01

    A motion trajectory prediction (MTP)-based brain-computer interface (BCI) aims to reconstruct the three-dimensional (3D) trajectory of upper limb movement using electroencephalography (EEG). The most common MTP BCI employs a time series of bandpass-filtered EEG potentials (referred to here as the potential time-series, PTS, model) for reconstructing the trajectory of a 3D limb movement using multiple linear regression. These studies report the best accuracy when a 0.5-2 Hz bandpass filter is applied to the EEG. In the present study, we show that the spatiotemporal power distribution of the theta (4-8 Hz), mu (8-12 Hz), and beta (12-28 Hz) bands is more robust for movement trajectory decoding when the standard PTS approach is replaced with time-varying bandpower values of a specified EEG band, i.e., with a bandpower time-series (BTS) model. A comprehensive analysis comprising three subjects performing pointing movements with the dominant right arm toward six targets is presented. Our results show that the BTS model produces significantly higher MTP accuracy (R~0.45) compared to the standard PTS model (R~0.2). In the case of the BTS model, the highest accuracy was achieved across the three subjects typically in the mu (8-12 Hz) and low-beta (12-18 Hz) bands. Additionally, we highlight a limitation of the commonly used PTS model and illustrate how this model may be suboptimal for decoding motion trajectory relevant information. Although our results, showing that the mu and beta bands are prominent for MTP, are not in line with other MTP studies, they are consistent with the extensive literature on classical multiclass sensorimotor rhythm-based BCI studies (classification of limbs as opposed to motion trajectory prediction), which report the best accuracy of imagined limb movement classification using power values of mu and beta frequency bands. The methods proposed here provide a positive step toward noninvasive decoding of imagined 3D hand movements for movement-free BCIs
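    The contrast between the two feature models above can be illustrated compactly: the PTS model feeds slow bandpass-filtered potentials directly into a linear decoder, while the BTS model first converts a chosen band into a time-varying power signal. The sketch below is hedged: filter design, the squared Hilbert envelope as the bandpower estimate, and the ridge regressor are illustrative assumptions, not the study's pipeline.

      # Hedged sketch: PTS vs. BTS feature construction and a linear trajectory decoder.
      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert
      from sklearn.linear_model import Ridge

      def bandpass(eeg, lo, hi, fs, order=4):
          b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return filtfilt(b, a, eeg, axis=0)                 # eeg: (samples, channels)

      def pts_features(eeg, fs):
          return bandpass(eeg, 0.5, 2.0, fs)                 # PTS: 0.5-2 Hz filtered potentials

      def bts_features(eeg, fs, band=(8.0, 12.0)):
          analytic = hilbert(bandpass(eeg, *band, fs), axis=0)
          return np.abs(analytic) ** 2                       # BTS: e.g. mu-band power over time

      def fit_trajectory_decoder(features, hand_xyz):
          """Multiple linear regression from EEG features to 3D hand position (samples, 3)."""
          return Ridge(alpha=1.0).fit(features, hand_xyz)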

  9. Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models

    PubMed Central

    Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method
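    The first step of the pipeline above (a Gabor filter bank for locating tag intersections by local phase) can be sketched with scikit-image and SciPy. The filter frequency tied to the tag spacing, the two orientations, and the use of the filter-response phase are illustrative assumptions; the paper's actual bank and intersection detection are more elaborate.

      # Hedged sketch: Gabor filter bank responses and local phase per orientation.
      import numpy as np
      from scipy.ndimage import convolve
      from skimage.filters import gabor_kernel

      def gabor_bank_phase(image, tag_spacing_px, thetas=(0.0, np.pi / 2)):
          freq = 1.0 / tag_spacing_px                        # frequency matched to tag spacing
          phases = []
          for theta in thetas:
              kern = gabor_kernel(freq, theta=theta)         # complex Gabor kernel
              real = convolve(image, np.real(kern), mode="nearest")
              imag = convolve(image, np.imag(kern), mode="nearest")
              phases.append(np.arctan2(imag, real))          # local phase for this orientation
          return phases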

  10. Neural network techniques for invariant recognition and motion tracking of 3-D objects

    SciTech Connect

    Hwang, J.N.; Tseng, Y.H.

    1995-12-31

    Invariant recognition and motion tracking of 3-D objects under partial object viewing are difficult tasks. In this paper, we introduce a new neural network solution that is robust to noise corruption and partial viewing of objects. This method directly utilizes the acquired range data and requires no feature extraction. In the proposed approach, the object is first parametrically represented by a continuous distance transformation neural network (CDTNN) which is trained by the surface points of the exemplar object. When later presented with the surface points of an unknown object, this parametric representation allows the mismatch information to back-propagate through the CDTNN to gradually determine the best similarity transformation (translation and rotation) of the unknown object. The mismatch can be directly measured in the reconstructed representation domain between the model and the unknown object.

  11. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Lan, Tian; Ni, Guoqiang

    2013-07-01

    A motion error model of the 3D coordinates is established and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence of laser point cloud spacing on the positioning errors is small. In the model, the positioning errors follow a simple harmonic vibration whose amplitude envelope gradually decreases as the vibration frequency increases. When the number of vibration periods is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the planimetric error, and within the plane the error in the scanning direction is less than the error in the flight direction. These conclusions are verified through analysis of flight test data.

  12. Modelling the 3D morphology and proper motions of the planetary nebula NGC 6302

    NASA Astrophysics Data System (ADS)

    Uscanga, L.; Velázquez, P. F.; Esquivel, A.; Raga, A. C.; Boumis, P.; Cantó, J.

    2014-08-01

    We present 3D hydrodynamical simulations of an isotropic fast wind interacting with a previously ejected, toroidally shaped slow wind in order to model both the observed morphology and the kinematics of the planetary nebula (PN) NGC 6302. This source, also known as the Butterfly nebula, presents one of the most complex morphologies ever observed in PNe. From our numerical simulations, we have obtained an intensity map of the Hα emission for comparison with the Hubble Space Telescope (HST) observations of this object. We have also carried out a proper motion (PM) study from our numerical results, in order to compare with previous observational studies. We find that the two-interacting-winds model reproduces the morphology of NGC 6302 well, and while the PMs in the models are similar to the observations, our results suggest that an acceleration mechanism is needed to explain the Hubble-type expansion found in the HST observations.

  13. DLP technology application: 3D head tracking and motion correction in medical brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Wilm, Jakob; Paulsen, Rasmus R.; Højgaard, Liselotte; Larsen, Rasmus

    2014-03-01

    In this paper we present a novel sensing system, robust Near-Infrared Structured Light scanning (NIRSL), for three-dimensional human model scanning applications. Human model scanning, owing to the variety of hair and clothing appearance and to body motion, has long been a challenging task. Previous structured light scanning methods typically emitted visible coded light patterns onto static and opaque objects to establish correspondence between a projector and a camera for triangulation. The success of these methods relies on scanning objects whose surfaces reflect visible light well, such as plaster or light colored cloth. For human model scanning, however, conventional methods suffer from a low signal to noise ratio caused by the low contrast of visible light over the human body. The proposed robust NIRSL, implemented with near-infrared light, is capable of recovering dark surfaces, such as hair, dark jeans and black shoes, under visible illumination. Moreover, successful structured light scanning relies on the assumption that the subject is static during scanning. Because of body motion, this assumption is difficult to maintain when scanning a human model. The proposed sensing system, by utilizing the new near-infrared capable high speed LightCrafter DLP projector, is robust to motion and provides an accurate and high resolution three-dimensional point cloud, making our system more efficient and robust for human model reconstruction. Experimental results demonstrate that our system effectively and efficiently scans real human models with various dark hair, jeans and shoes, is robust to human body motion, and produces accurate, high resolution 3D point clouds.

  14. The representation of moving 3-D objects in apparent motion perception.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2009-08-01

    In the present research, we investigated the depth information contained in the representations of apparently moving 3-D objects. By conducting three experiments, we measured the magnitude of representational momentum (RM) as an index of the consistency of an object's representation. Experiment 1A revealed that RM magnitude was greater when shaded, convex, apparently moving objects shifted to a flat circle than when they shifted to a shaded, concave, hemisphere. The difference diminished when the apparently moving objects were concave hemispheres (Experiment 1B). Using luminance-polarized circles, Experiment 2 confirmed that these results were not due to the luminance information of shading. Experiment 3 demonstrated that RM magnitude was greater when convex apparently moving objects shifted to particular blurred convex hemispheres with low-pass filtering than when they shifted to concave hemispheres. These results suggest that the internal object's representation in apparent motion contains incomplete depth information intermediate between that of 2-D and 3-D objects, particularly with regard to convexity information with low-spatial-frequency components. PMID:19633345

  15. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers of all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated -- from providing increased and potentially stealthy surveillance, removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (original data format) and then constructs a 3D model (new data medium) using structure-from-motion. The generated 3D model provides warfighters with additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data with Google Earth™ providing the imagery as well as real-world data, including data captured from an unmanned aerial vehicle flight.

  16. Velocity and Density Models Incorporating the Cascadia Subduction Zone for 3D Earthquake Ground Motion Simulations

    USGS Publications Warehouse

    Stephenson, William J.

    2007-01-01

    INTRODUCTION In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2°N to 50°N latitude, and from about -122°W to -129°W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper-mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.

  17. Probabilistic Seismic Hazard Maps for Seattle, Washington, Based on 3D Ground-Motion Simulations

    NASA Astrophysics Data System (ADS)

    Frankel, A. D.; Stephenson, W. J.; Carver, D. L.; Williams, R. A.; Odum, J. K.; Rhea, S.

    2007-12-01

    We have produced probabilistic seismic hazard maps for Seattle using over 500 3D finite-difference simulations of ground motions from earthquakes in the Seattle fault zone, Cascadia subduction zone, South Whidbey Island fault, and background shallow and deep source areas. The maps depict 1 Hz response spectral accelerations with 2, 5, and 10% probabilities of being exceeded in 50 years. The simulations were used to generate site and source dependent amplification factors that are applied to rock-site attenuation relations. The maps incorporate essentially the same fault sources and earthquake recurrence times as the 2002 national seismic hazard maps. The simulations included basin surface waves and basin-edge focusing effects from a 3D model of the Seattle basin. The 3D velocity model was validated by modeling several earthquakes in the region, including the 2001 M6.8 Nisqually earthquake, that were recorded by our Seattle Urban Seismic Network and the Pacific Northwest Seismic Network. The simulations duplicate our observation that earthquakes from the south and southwest typically produce larger amplifications in the Seattle basin than earthquakes from other azimuths, relative to rock sites outside the basin. Finite-fault simulations were run for earthquakes along the Seattle fault zone, with magnitudes ranging from 6.6 to 7.2, so that the effects of rupture directivity were included. Nonlinear amplification factors for soft-soil sites of fill and alluvium were also applied in the maps. For the Cascadia subduction zone, 3D simulations with point sources at different locations along the zone were used to determine amplification factors across Seattle expected for great subduction-zone earthquakes. These new urban seismic hazard maps are based on determinations of hazard for 7236 sites with a spacing of 280 m. The maps show that the highest hazard locations for this frequency band (around 1 Hz) are soft-soil sites (fill and alluvium) within the Seattle basin and

  18. 3D reconstruction for sinusoidal motion based on different feature detection algorithms

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Jin; Deng, Huaxia; Yu, Liandong

    2015-02-01

    The dynamic testing of structures and components is an important area of research, and sensor-based methods for measuring vibration parameters have been studied extensively for years. With the rapid development of industrial high-speed cameras and computer hardware, stereo vision has become a research focus for dynamic testing because it is non-contact, full-field, high resolution and highly accurate. However, there has been little domestic research on dynamic testing based on stereo vision, and few publications address the three-dimensional (3D) reconstruction of feature points under dynamic conditions, even though obtaining the accurate movement of the target object is essential for the subsequent analysis. In this paper, an object undergoing sinusoidal motion is observed by stereo vision and the accuracy obtained with different feature detection algorithms is investigated. Three different markers, a dot, a square and a circle, are attached to the object, which is driven sinusoidally by a vibration table. The speeded-up robust features (SURF) algorithm is used to detect the dot, Harris corner detection to locate the square corners, and the Hough transform to position the circle center. After the pixel coordinates of each feature point are obtained, the stereo calibration parameters are used to perform three-dimensional reconstruction via the triangulation principle. Trajectories along the specified direction are obtained according to the vibration frequency and the camera acquisition frequency. Finally, the reconstruction accuracy of the different feature detection algorithms is compared.
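
    The triangulation step described above can be illustrated with a short sketch. This is not the authors' code; it is a minimal example assuming a calibrated stereo pair whose 3x4 projection matrices P1 and P2 come from the stereo calibration, and it uses OpenCV's triangulatePoints to recover the 3D position of a tracked marker from its pixel coordinates in the two views.

    ```python
    # Minimal sketch: two-view triangulation of one tracked marker (assumed
    # projection matrices; pixel coordinates are hypothetical values).
    import numpy as np
    import cv2

    def triangulate_marker(P1, P2, pt_left, pt_right):
        """Return the Euclidean 3D point for matched pixel coordinates."""
        pts1 = np.asarray(pt_left, dtype=np.float64).reshape(2, 1)
        pts2 = np.asarray(pt_right, dtype=np.float64).reshape(2, 1)
        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4x1 homogeneous
        return (X_h[:3] / X_h[3]).ravel()

    # Toy calibration: identity camera and a camera translated along x.
    K = np.diag([800.0, 800.0, 1.0])
    K[0, 2], K[1, 2] = 320.0, 240.0
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
    print(triangulate_marker(P1, P2, (330.0, 245.0), (310.0, 245.0)))
    ```

    Applying this to every frame of the high-speed sequence yields the marker trajectory whose accuracy is then compared across detection algorithms.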

  19. Numerical Benchmark of 3D Ground Motion Simulation in the Alpine valley of Grenoble, France.

    NASA Astrophysics Data System (ADS)

    Tsuno, S.; Chaljub, E.; Cornou, C.; Bard, P.

    2006-12-01

    Thanks to the use of sophisticated numerical methods and to access to increasing computational resources, our predictions of strong ground motion are becoming more and more realistic and need to be carefully compared. We report on our effort to benchmark numerical methods of ground motion simulation for the valley of Grenoble in the French Alps. The Grenoble valley is typical of a moderate-seismicity area where strong site effects occur. The benchmark consisted of computing the seismic response of the `Y'-shaped Grenoble valley to (i) two local earthquakes (Ml<=3) for which recordings were available; and (ii) two local hypothetical events (Mw=6) occurring on the so-called Belledonne Border Fault (BBF) [1]. A free-style prediction was also proposed, in which participants were allowed to vary the source and/or the model parameters and were asked to provide the resulting uncertainty in their estimation of ground motion. We received a total of 18 contributions from 14 different groups; 7 of these use 3D methods, among which 3 can handle surface topography, while the remainder comprises predictions based upon 1D (2 contributions), 2D (4 contributions) and empirical Green's function (EGF) (3 contributions) methods. The maximum frequency analysed ranged from 2.5 Hz for 3D calculations to 40 Hz for EGF predictions. We present a detailed comparison of the different predictions using raw indicators (e.g. peak values of ground velocity and acceleration, Fourier spectra, site over reference spectral ratios, ...) as well as sophisticated misfit criteria based upon previous works [2,3]. We further discuss the variability in estimating the importance of particular effects such as non-linear rheology or surface topography. References: [1] Thouvenot F. et al., The Belledonne Border Fault: identification of an active seismic strike-slip fault in the western Alps, Geophys. J. Int., 155 (1), p. 174-192, 2003. [2] Anderson J., Quantitative measure of the goodness-of-fit of

  20. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background The increasing popularity of commercial movies showing three dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after viewing the 2D and 3D movies. Viewers reporting some sickness (SSQ total score >15) made up 54.8% of the sample after the 3D movie compared with 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared with an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable showed significant effects of exposure to the 3D movie and of a history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie and show time. Conclusions Seeing 3D movies can increase ratings of nausea, oculomotor and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies that include examination of clinical signs in viewers are needed to obtain conclusive evidence on the effects of 3D viewing on spectators. PMID:23418530

  1. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy using an US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate an US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm) but simple regularisation methods could be used to improve RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating with displacement thresholds of 2 mm and 5 mm exhibiting a RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach which also accounts for potential prostate rotations could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.
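
    As a rough illustration of the normalised cross correlation block matching mentioned above, the following sketch (not the study's code) tracks a single block between two 2D ultrasound frames held as NumPy arrays; the block size and search range are arbitrary assumptions.

    ```python
    # Minimal sketch of NCC block matching between two frames.
    import numpy as np

    def ncc(a, b):
        """Normalised cross correlation of two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def track_block(ref, cur, top_left, block=(16, 16), search=8):
        """Integer displacement maximising NCC around a reference block."""
        r0, c0 = top_left
        h, w = block
        template = ref[r0:r0 + h, c0:c0 + w]
        best, best_shift = -2.0, (0, 0)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                r, c = r0 + dr, c0 + dc
                if r < 0 or c < 0 or r + h > cur.shape[0] or c + w > cur.shape[1]:
                    continue
                score = ncc(template, cur[r:r + h, c:c + w])
                if score > best:
                    best, best_shift = score, (dr, dc)
        return best_shift, best
    ```

    The 3D case extends the same idea to volumetric blocks, with sub-voxel refinement and regularisation applied on top of the integer estimates.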

  2. 3D Modelling of Inaccessible Areas using UAV-based Aerial Photography and Structure from Motion

    NASA Astrophysics Data System (ADS)

    Obanawa, Hiroyuki; Hayakawa, Yuichi; Gomez, Christopher

    2014-05-01

    In hardly accessible areas, the collection of 3D point-clouds using TLS (Terrestrial Laser Scanner) can be very challenging, while airborne equivalents would not give a correct account of subvertical features and concave geometries like caves. To solve this problem, the authors have experimented with an aerial-photography-based SfM (Structure from Motion) technique on a 'peninsular rock' surrounded on three sides by the sea on the Pacific coast of eastern Japan. The research was carried out using a UAS (Unmanned Aerial System) combined with a commercial small UAV (Unmanned Aerial Vehicle) carrying a compact camera. The UAV is a DJI PHANTOM: it has four rotors (quadcopter), a weight of 1000 g, a payload of 400 g and a maximum flight time of 15 minutes. The camera is a GoPro 'HERO3 Black Edition': resolution 12 million pixels; weight 74 g; and 0.5 sec. interval-shot. The 3D model has been constructed by digital photogrammetry using a commercial SfM software, Agisoft PhotoScan Professional®, which can generate sparse and dense point-clouds, from which polygonal models and orthophotographs can be calculated. Using the 'flight-log' and/or GCPs (Ground Control Points), the software can generate a digital surface model. As a result, high-resolution aerial orthophotographs and a 3D model were obtained. The results have shown that it was possible to survey the sea cliff and the wave-cut bench, which are unobservable from the land side. In detail, we could observe the complexity of the sea cliff, which is nearly vertical as a whole while slightly overhanging its thinner base. The wave-cut bench is nearly flat and develops extensively at the base of the cliff. Although there is some evidence of small rockfalls at the upper part of the cliff, there is no evidence of very recent activity, because no fallen rock exists on the wave-cut bench. This system has several merits: firstly, lower cost than existing measuring methods such as manned-flight survey and aerial laser

  3. Discovering motion primitives for unsupervised grouping and one-shot learning of human actions, gestures, and expressions.

    PubMed

    Yang, Yang; Saleemi, Imran; Shah, Mubarak

    2013-07-01

    This paper proposes a novel representation of articulated human actions and gestures and facial expressions. The main goals of the proposed approach are: 1) to enable recognition using very few examples, i.e., one or k-shot learning, and 2) meaningful organization of unlabeled datasets by unsupervised clustering. Our proposed representation is obtained by automatically discovering high-level subactions or motion primitives, by hierarchical clustering of the observed optical flow in a four-dimensional spatial and motion flow space. The completely unsupervised proposed method, in contrast to state-of-the-art representations like bag of video words, provides a meaningful representation conducive to visual interpretation and textual labeling. Each primitive action depicts an atomic subaction, like directional motion of a limb or the torso, and is represented by a mixture of four-dimensional Gaussian distributions. For one-shot and k-shot learning, the primitives discovered in a test video are labeled using KL divergence, and the resulting sequence of primitive labels can then be represented as a string and matched against similar strings from training videos. The same sequence can also be collapsed into a histogram of primitives or be used to learn a Hidden Markov model to represent classes. We have performed extensive experiments on recognition by one and k-shot learning as well as unsupervised action clustering on six human actions and gesture datasets, a composite dataset, and a database of facial expressions. These experiments confirm the validity and discriminative nature of the proposed representation. PMID:23681992
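
    The KL-divergence labelling step can be sketched as follows. This is a simplified stand-in for the paper's method: it assumes each motion primitive is summarised by a single 4D Gaussian rather than a mixture, and labels a test segment with the primitive whose Gaussian is closest in KL divergence to the segment's Gaussian.

    ```python
    # Minimal sketch: closed-form KL divergence between multivariate Gaussians
    # and nearest-primitive labelling (single-Gaussian simplification).
    import numpy as np

    def kl_gaussian(mu0, S0, mu1, S1):
        """KL( N(mu0, S0) || N(mu1, S1) ) for full covariance matrices."""
        k = mu0.shape[0]
        S1_inv = np.linalg.inv(S1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    def label_segment(segment_mu, segment_S, primitives):
        """primitives: list of (name, mu, Sigma); returns the closest name."""
        return min(primitives,
                   key=lambda p: kl_gaussian(segment_mu, segment_S, p[1], p[2]))[0]
    ```

    Concatenating the labels returned for successive segments produces the string representation that is matched against training videos.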

  4. Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.
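
    The target registration error used for evaluation is simply the distance between homologous fiducials after registration; a minimal sketch, assuming the fiducial coordinates are provided as (N, 3) arrays in millimetres, is:

    ```python
    # Minimal sketch of target registration error (TRE) from paired fiducials.
    import numpy as np

    def target_registration_error(fiducials_baseline, fiducials_registered):
        """Mean and maximum Euclidean distance between homologous fiducials."""
        d = np.linalg.norm(fiducials_baseline - fiducials_registered, axis=1)
        return d.mean(), d.max()
    ```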

  5. Model-based lasso catheter tracking in monoplane fluoroscopy for 3D breathing motion compensation during EP procedures

    NASA Astrophysics Data System (ADS)

    Liao, Rui

    2010-02-01

    Radio-frequency catheter ablation (RFCA) of the pulmonary veins (PVs) attached to the left atrium (LA) is usually carried out under fluoroscopy guidance. Overlay of detailed anatomical structures via 3-D CT and/or MR volumes onto the fluoroscopy helps visualization and navigation in electrophysiology (EP) procedures. Unfortunately, respiratory motion may impair the utility of static overlay of the volume with fluoroscopy for catheter navigation. In this paper, we propose a B-spline based method for tracking the circumferential catheter (lasso catheter) in monoplane fluoroscopy. The tracked motion can be used for the estimation of the 3-D trajectory of breathing motion and for subsequent motion compensation. A lasso catheter is typically used during EP procedures and is pushed against the ostia of the PVs to be ablated. Hence this method does not require additional instruments, and achieves motion estimation right at the site of ablation. The performance of the proposed tracking algorithm was evaluated on 340 monoplane frames with an average error of 0.68 +/- 0.36 mm. Our contributions in this work are twofold. First and foremost, we show how to design an effective, practical, and workflow-friendly 3-D motion compensation scheme for EP procedures in a monoplane setup. In addition, we develop an efficient and accurate method for model-based tracking of the circumferential lasso catheter in low-dose EP fluoroscopy.

  6. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization, and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization errors and the intensity differences for the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2), the tumor localization error and intensity difference were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic images estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
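
    The PCA-based motion model can be sketched in a few lines. This is an illustrative simplification, assuming the displacement vector fields from deformable registration have been flattened into an array with one row per respiratory phase; the iterative optimisation of the PCA coefficients against cone-beam projections is not shown.

    ```python
    # Minimal sketch: PCA motion model from flattened displacement vector fields.
    import numpy as np

    def build_motion_model(dvfs, n_components=2):
        """dvfs: array of shape (n_phases, n_voxels * 3)."""
        mean = dvfs.mean(axis=0)
        U, s, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
        basis = Vt[:n_components]          # principal motion modes (eigenvectors)
        return mean, basis

    def synthesize_dvf(mean, basis, coeffs):
        """Displacement field for a given set of PCA coefficients."""
        return mean + np.asarray(coeffs) @ basis
    ```

    In the workflow described above, the coefficients passed to synthesize_dvf are the quantities optimised so that projections of the deformed reference image match the measured kV projections.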

  7. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and the open-source project OpenSfM, to assist in this process; however, it is well known that pristine imagery is usually required to create meaningful 3D data. In military applications, such as the use of unmanned aerial vehicles (UAV) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.

  8. A 3D MR-acquisition scheme for nonrigid bulk motion correction in simultaneous PET-MR

    SciTech Connect

    Kolbitsch, Christoph; Prieto, Claudia; Schaeffter, Tobias; Tsoumpas, Charalampos

    2014-08-15

    Purpose: Positron emission tomography (PET) is a highly sensitive medical imaging technique commonly used to detect and assess tumor lesions. Magnetic resonance imaging (MRI) provides high resolution anatomical images with different contrasts and a range of additional information important for cancer diagnosis. Recently, simultaneous PET-MR systems have been released with the promise to provide complementary information from both modalities in a single examination. Due to long scan times, subject nonrigid bulk motion, i.e., changes of the patient's position on the scanner table leading to nonrigid changes of the patient's anatomy, during data acquisition can negatively impair image quality and tracer uptake quantification. A 3D MR-acquisition scheme is proposed to detect and correct for nonrigid bulk motion in simultaneously acquired PET-MR data. Methods: A respiratory navigated three dimensional (3D) MR-acquisition with Radial Phase Encoding (RPE) is used to obtain T1- and T2-weighted data with an isotropic resolution of 1.5 mm. Healthy volunteers are asked to move the abdomen two to three times during data acquisition resulting in overall 19 movements at arbitrary time points. The acquisition scheme is used to retrospectively reconstruct dynamic 3D MR images with different temporal resolutions. Nonrigid bulk motion is detected and corrected in this image data. A simultaneous PET acquisition is simulated and the effect of motion correction is assessed on image quality and standardized uptake values (SUV) for lesions with different diameters. Results: Six respiratory gated 3D data sets with T1- and T2-weighted contrast have been obtained in healthy volunteers. All bulk motion shifts have successfully been detected and motion fields describing the transformation between the different motion states could be obtained with an accuracy of 1.71 ± 0.29 mm. The PET simulation showed errors of up to 67% in measured SUV due to bulk motion which could be reduced to less than

  9. Verification and validation of ShipMo3D ship motion predictions in the time and frequency domains

    NASA Astrophysics Data System (ADS)

    McTaggart, Kevin A.

    2011-03-01

    This paper compares frequency domain and time domain predictions from the ShipMo3D ship motion library with observed motions from model tests and sea trials. ShipMo3D evaluates hull radiation and diffraction forces using the frequency domain Green function for zero forward speed, which is a suitable approach for ships travelling at moderate speed (e.g., Froude numbers up to 0.4). Numerical predictions give generally good agreement with experiments. Frequency domain and linear time domain predictions are almost identical. Evaluation of nonlinear buoyancy and incident wave forces using the instantaneous wetted hull surface gives no improvement in numerical predictions. Consistent prediction of roll motions remains a challenge for seakeeping codes due to the associated viscous effects.

  10. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interface devices such as mice, joysticks and MIDI sliders. Such devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end commercial human motion capture systems are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data to a 3-D game character in real time. The prototype experimental setup is successfully applied to a boxing game, which requires very fast movement of the human character.

  11. 3D motion artifact compensation in CT image with depth camera

    NASA Astrophysics Data System (ADS)

    Ko, Youngjun; Baek, Jongduk; Shim, Hyunjung

    2015-02-01

    Computed tomography (CT) is a medical imaging technology that uses computer-processed X-ray projections to acquire tomographic images, or slices, of specific organs of the body. Motion artifacts caused by patient motion are a common problem in CT systems and may introduce undesirable distortions in CT images. This paper analyzes the critical problems underlying motion artifacts and proposes a new CT system for motion artifact compensation. We employ depth cameras to capture the patient motion and account for it in the CT image reconstruction. In this way, we achieve a significant improvement in motion artifact compensation, which is not possible with previous techniques.

  12. Dynamics and cortical distribution of neural responses to 2D and 3D motion in human

    PubMed Central

    McKee, Suzanne P.; Norcia, Anthony M.

    2013-01-01

    The perception of motion-in-depth is important for avoiding collisions and for the control of vergence eye-movements and other motor actions. Previous psychophysical studies have suggested that sensitivity to motion-in-depth has a lower temporal processing limit than the perception of lateral motion. The present study used functional MRI-informed EEG source-imaging to study the spatiotemporal properties of the responses to lateral motion and motion-in-depth in human visual cortex. Lateral motion and motion-in-depth displays comprised stimuli whose only difference was interocular phase: monocular oscillatory motion was either in-phase in the two eyes (lateral motion) or in antiphase (motion-in-depth). Spectral analysis was used to break the steady-state visually evoked potential responses down into even and odd harmonic components within five functionally defined regions of interest: V1, V4, lateral occipital complex, V3A, and hMT+. We also characterized the responses within two anatomically defined regions: the inferior and superior parietal cortex. Even harmonic components dominated the evoked responses and were a factor of approximately two larger for lateral motion than motion-in-depth. These responses were slower for motion-in-depth and were largely independent of absolute disparity. In each of our regions of interest, responses at odd harmonics were relatively small, but were larger for motion-in-depth than lateral motion, especially in parietal cortex, and depended on absolute disparity. Taken together, our results suggest a plausible neural basis for reduced psychophysical sensitivity to rapid motion-in-depth. PMID:24198326
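
    A rough sketch of the spectral read-out of even and odd harmonics from a steady-state response is given below; it assumes a single response channel sampled at fs Hz with a stimulus oscillating at f0 Hz and simply picks amplitudes off the discrete Fourier transform (the actual study worked on source-imaged responses).

    ```python
    # Minimal sketch: even/odd harmonic amplitudes of a steady-state response.
    import numpy as np

    def harmonic_amplitudes(signal, fs, f0, n_harmonics=4):
        n = len(signal)
        spec = np.fft.rfft(signal * np.hanning(n))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        amps = {}
        for k in range(1, n_harmonics + 1):
            idx = int(np.argmin(np.abs(freqs - k * f0)))   # nearest FFT bin
            amps[k] = 2.0 * np.abs(spec[idx]) / n
        return {"odd": [amps[k] for k in amps if k % 2 == 1],
                "even": [amps[k] for k in amps if k % 2 == 0]}
    ```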

  13. Two-Step System Identification and Primitive-Based Motion Planning for Control of Small Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Grymin, David J.

    This dissertation addresses motion planning, modeling, and feedback control for autonomous vehicle systems. A hierarchical approach for motion planning and control of nonlinear systems operating in obstacle environments is presented. To reduce computation time during the motion planning process, dynamically feasible trajectories are generated in real-time through concatenation of pre-specified motion primitives. The motion planning task is posed as a search over a directed graph, and the applicability of informed graph search techniques is investigated. Specifically, a locally greedy algorithm with effective backtracking ability is developed and compared to weighted A* search. The greedy algorithm shows an advantage with respect to solution cost and computation time when larger motion primitive libraries that do not operate on a regular state lattice are utilized. Linearization of the nonlinear system equations about the motion primitive library results in a hybrid linear time-varying model, and an optimal control algorithm using the l2-induced norm as the performance measure is applied to ensure that the system tracks the desired trajectory. The ability of the resulting controller to closely track the trajectory obtained from the motion planner, despite various disturbances and uncertainties, is demonstrated through simulation. Additionally, an approach for obtaining dynamically feasible reference trajectories and feedback controllers for a small unmanned aerial vehicle (UAV) based on an aerodynamic model derived from flight tests is presented. The modeling approach utilizes the two step method (TSM) with stepwise multiple regression to determine relevant explanatory terms for the aerodynamic models. Dynamically feasible trajectories are then obtained through the solution of an optimal control problem using pseudospectral optimal control software. Discrete-time feedback controllers are then obtained to regulate the vehicle along the desired reference trajectory.
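
    To make the primitive-based planning idea concrete, here is a minimal weighted A* search over a motion-primitive graph. It is an illustrative sketch, not the dissertation's algorithm: states are simple tuples, `primitives` is an assumed dictionary mapping a state to its reachable successors with costs, and the heuristic is plain Euclidean distance to the goal inflated by a weight epsilon.

    ```python
    # Minimal sketch: weighted A* over a directed graph of motion primitives.
    import heapq
    import math

    def weighted_astar(start, goal, primitives, epsilon=1.5):
        """primitives: dict state -> list of (successor_state, cost)."""
        def h(s):
            return math.dist(s, goal)          # Euclidean heuristic

        open_set = [(epsilon * h(start), 0.0, start, [start])]
        closed = set()
        while open_set:
            f, g, state, path = heapq.heappop(open_set)
            if state == goal:
                return path, g
            if state in closed:
                continue
            closed.add(state)
            for nxt, cost in primitives.get(state, []):
                if nxt not in closed:
                    heapq.heappush(open_set, (g + cost + epsilon * h(nxt),
                                              g + cost, nxt, path + [nxt]))
        return None, math.inf
    ```

    Larger epsilon biases the search toward the goal (faster, possibly costlier paths), which mirrors the trade-off discussed above between the greedy search and weighted A*.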

  14. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson’s Disease

    PubMed Central

    Piro, Neltje E.; Piro, Lennart K.; Kassubek, Jan; Blechschmidt-Trapp, Ronald A.

    2016-01-01

    Remote monitoring of Parkinson’s Disease (PD) patients with inertia sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: the automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of him. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recording of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the ratings based on video and (b) the automatically classified UPDRS is 0.48 and with (c) the 3D avatar it is 0.47. The 3D avatar is similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400

  15. Nonrigid motion correction in 3D using autofocusing with localized linear translations.

    PubMed

    Cheng, Joseph Y; Alley, Marcus T; Cunningham, Charles H; Vasanawala, Shreyas S; Pauly, John M; Lustig, Michael

    2012-12-01

    MR scans are sensitive to motion effects due to the scan duration. To properly suppress artifacts from nonrigid body motion, complex models with elements such as translation, rotation, shear, and scaling have been incorporated into the reconstruction pipeline. However, these techniques are computationally intensive and difficult to implement for online reconstruction. On a sufficiently small spatial scale, the different types of motion can be well approximated as simple linear translations. This formulation allows for a practical autofocusing algorithm that locally minimizes a given motion metric, more specifically the proposed localized gradient-entropy metric. To reduce the vast search space for an optimal solution, possible motion paths are limited to the motion measured from multichannel navigator data. The novel navigation strategy is based on the so-called "Butterfly" navigators, which are modifications of the spin-warp sequence that provide intrinsic translational motion information with negligible overhead. With a 32-channel abdominal coil, a sufficient number of motion measurements was found to approximate possible linear motion paths for every image voxel. The correction scheme was applied to free-breathing abdominal patient studies. In these scans, a reduction in artifacts from complex, nonrigid motion was observed. PMID:22307933
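
    The localized gradient-entropy idea can be illustrated with a small sketch. It assumes candidate motion paths have already been used to reconstruct image patches and simply keeps the candidate whose patch has the lowest gradient entropy; restricting the candidates to navigator-measured motion, as the paper does, is not shown.

    ```python
    # Minimal sketch: gradient-entropy focus metric over candidate patches.
    import numpy as np

    def gradient_entropy(patch, eps=1e-12):
        """Entropy of the normalised gradient-magnitude distribution."""
        gy, gx = np.gradient(patch.astype(np.float64))
        mag = np.sqrt(gx ** 2 + gy ** 2).ravel()
        p = mag / (mag.sum() + eps)
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    def best_candidate(patches):
        """patches: dict candidate_shift -> reconstructed patch; pick sharpest."""
        return min(patches, key=lambda k: gradient_entropy(patches[k]))
    ```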

  16. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    NASA Astrophysics Data System (ADS)

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework of simultaneously estimating the motion and structure parameters of a 3D object by using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information is given. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable by using only one scan HRR and GMTI measurements, we designed an architecture to run the motion and structure filters in parallel by using multi-scan measurements. Moreover, to improve the estimation accuracy in large noise and/or false alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors have been significantly reduced by using the template information.

  17. Performance of ultrasound based measurement of 3D displacement using a curvilinear probe for organ motion tracking

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Evans, Phillip M.; Symonds-Tayler, J. Richard N.

    2007-09-01

    Three-dimensional (3D) soft tissue tracking is of interest for monitoring organ motion during therapy. Our goal is to assess the tracking performance of a curvilinear 3D ultrasound probe in terms of the accuracy and precision of measured displacements. The first aim was to examine the depth dependence of the tracking performance. This is of interest because the spatial resolution varies with distance from the elevational focus and because the curvilinear geometry of the transducer causes the spatial sampling frequency to decrease with depth. Our second aim was to assess tracking performance as a function of the spatial sampling setting (low, medium or high sampling). These settings are incorporated onto 3D ultrasound machines to allow the user to control the trade-off between spatial sampling and temporal resolution. Volume images of a speckle-producing phantom were acquired before and after the probe had been moved by a known displacement (1, 2 or 8 mm). This allowed us to assess the optimum performance of the tracking algorithm, in the absence of motion. 3D speckle tracking was performed using 3D cross-correlation and sub-voxel displacements were estimated. The tracking performance was found to be best for axial displacements and poorest for elevational displacements. In general, the performance decreased with depth, although the nature of the depth dependence was complex. Under certain conditions, the tracking performance was sufficient to be useful for monitoring organ motion. For example, at the highest sampling setting, for a 2 mm displacement, good accuracy and precision (an error and standard deviation of <0.4 mm) were observed at all depths and for all directions of displacement. The trade-off between spatial sampling, temporal resolution and size of the field of view (FOV) is discussed.

  18. A stroboscopic structured illumination system used in dynamic 3D visualization of high-speed motion object

    NASA Astrophysics Data System (ADS)

    Su, Xianyu; Zhang, Qican; Li, Yong; Xiang, Liqun; Cao, Yiping; Chen, Wenjing

    2005-04-01

    A stroboscopic structured illumination system, which can be used to measure the 3D shape and deformation of a high-speed moving object, is proposed and verified by experiments. The system presented in this paper can automatically detect the position of the high-speed moving object and synchronously control both the LED flash, which projects a structured optical field onto the surface of the moving object, and the imaging system, which acquires an image of the deformed fringe pattern; it can also generate a signal, set through software, to synchronously trigger the LED and the imaging system. We experimented on a household electric fan, successfully acquired a series of instantaneous, sharp and clear images of the rotating blades, and reconstructed their 3D shapes during different revolutions.

  19. Change of Re dependency of single bubble 3D motion by surface slip condition in surfactant solution

    NASA Astrophysics Data System (ADS)

    Tagawa, Yoshiyuki; Funakubo, Ami; Takagi, Shu; Matsumoto, Yoichiro

    2009-11-01

    The path instability of a single bubble in water is sensitive to surfactant. One of the key effects of surfactant is to decrease the bubble rising velocity (i.e. increase drag) and change the bubble slip condition from free-slip to no-slip; this phenomenon is described as the Marangoni effect. However, the effect of surfactant on path instability has not been fully investigated. In this research, we measured bubble 3D trajectories and velocities in dilute surfactant solutions to reveal the relation between 3D motion mode and slip condition. The experimental parameters are the type of surfactant, its concentration and the bubble size. Bubble motions, categorized as straight, spiral or zigzag, are plotted on a two-dimensional field of bubble Reynolds number Re and normalized drag coefficient CD^*, which is strongly related to the surface slip condition. Re ranges from 200 to 1000 and CD^* from 0 to 1. Our results show that when CD^* equals 0 or 1 (free-slip or no-slip condition, respectively), the bubble motion mode changes with Re. However, when CD^* is 0.5, the bubble motion is always spiral. This means that the Re dependency of bubble motions is strongly affected by the slip condition. We will discuss its mechanism in detail in our presentation.

  20. 3D Finite-Difference Modeling of Strong Ground Motion in the Upper Rhine Graben - 1356 Basel Earthquake

    NASA Astrophysics Data System (ADS)

    Oprsal, I.; Faeh, D.; Giardini, D.

    2002-12-01

    The disastrous Basel earthquake of October 18, 1356 (I0=X, M ≈ 6.9) occurred in the Basel region (Upper Rhine Graben), which today shows only modest seismicity. The lack of strong ground motion data can be effectively compensated by numerical modeling. We applied 3D finite differences (FD) to predict ground motions that can be used for microzonation and hazard assessment studies. The FD method is formulated for topography models on irregular rectangular grids. It is an explicit 3D FD formulation of the hyperbolic partial differential equation (PDE); the elastodynamic PDE is solved in the time domain. The isotropic, inhomogeneous Hookean medium contains discontinuities and a topographic free surface. The 3D elastic FD modeling is applied to a newly established P- and S-wave velocity structure model. This complex structure contains the main interfaces and gradients inside some layers. It extends to the Earth's surface and includes topography (Kind, Faeh and Giardini, 2002, A 3D Reference Model for the Area of Basel, in prep.). The first attempt was made for a double-couple point source and a relatively simple source function. Numerical tests are planned for several finite-extent source histories because the source features of the 1356 Basel earthquake have not yet been well determined. The presumed finite-extent source is adjacent to the free surface. The results are compared to the macroseismic information of the Basel area.
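
    As a toy illustration of explicit time-domain FD modelling (not the 3D elastodynamic scheme with topography used here), the following sketch advances the 2D scalar wave equation on a regular grid with constant velocity c; dx and dt are assumed grid and time steps that must satisfy the CFL condition.

    ```python
    # Minimal sketch: one explicit FD time step of the 2D scalar wave equation,
    # u_tt = c^2 * (u_xx + u_yy), with periodic boundaries via np.roll.
    import numpy as np

    def fd_step(u_prev, u_curr, c, dx, dt):
        lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
               np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1)
               - 4.0 * u_curr) / dx ** 2
        return 2.0 * u_curr - u_prev + (c * dt) ** 2 * lap

    # Stability of this explicit scheme requires c * dt / dx <= 1 / sqrt(2).
    ```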

  1. Robust 2D/3D registration for fast-flexion motion of the knee joint using hybrid optimization.

    PubMed

    Ohnishi, Takashi; Suzuki, Masahiko; Kobayashi, Tatsuya; Naomoto, Shinji; Sukegawa, Tomoyuki; Nawata, Atsushi; Haneishi, Hideaki

    2013-01-01

    Previously, we proposed a 2D/3D registration method that uses Powell's algorithm to obtain the 3D motion of a knee joint from 3D computed tomography and bi-plane fluoroscopic images. The 2D/3D registration is performed consecutively and automatically for each frame of the fluoroscopic images. This method starts from the optimum parameters of the previous frame for each frame except for the first one, and it searches for the next set of optimum parameters using Powell's algorithm. However, if the flexion motion of the knee joint is fast, it is likely that Powell's algorithm will provide a mismatch because the initial parameters are far from the correct ones. In this study, we applied a hybrid optimization algorithm (HPS) combining Powell's algorithm with the Nelder-Mead simplex (NM-simplex) algorithm to overcome this problem. The performance of the HPS was compared with that of Powell's algorithm alone, the NM-simplex algorithm alone, the Quasi-Newton algorithm, and a hybrid optimization algorithm combining the Quasi-Newton and NM-simplex algorithms, using five patient data sets, in terms of the root-mean-square error (RMSE), target registration error (TRE), success rate, and processing time. The RMSE, TRE, and the success rate of the HPS were better than those of the other optimization algorithms, and the processing time was similar to that of Powell's algorithm alone. PMID:23138929
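
    The hybrid optimisation idea can be sketched with SciPy: run Powell's method first and refine the result with a Nelder-Mead restart. This is only a schematic stand-in for the HPS described above; `cost` is an assumed image-similarity function over the six rigid pose parameters.

    ```python
    # Minimal sketch: Powell followed by a Nelder-Mead refinement.
    import numpy as np
    from scipy.optimize import minimize

    def hybrid_register(cost, x0):
        res1 = minimize(cost, x0, method="Powell")        # coarse search
        res2 = minimize(cost, res1.x, method="Nelder-Mead")  # local refinement
        return res2.x if res2.fun < res1.fun else res1.x

    # Toy quadratic cost standing in for the 2D/3D similarity measure.
    toy_cost = lambda x: float(np.sum((np.asarray(x) - 1.0) ** 2))
    print(hybrid_register(toy_cost, np.zeros(6)))
    ```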

  2. Websim3d: A Web-based System for Generation, Storage and Dissemination of Earthquake Ground Motion Simulations.

    NASA Astrophysics Data System (ADS)

    Olsen, K. B.

    2003-12-01

    Synthetic time histories from large-scale 3D ground motion simulations generally constitute large 'data' sets which typically require hundreds of megabytes or gigabytes of storage capacity. For the same reason, getting access to a researcher's simulation output, for example for an earthquake engineer to perform site analysis or a seismologist to perform seismic hazard analysis, can be a tedious procedure. To circumvent this problem we have developed a web-based "community model" (websim3D) for the generation, storage, and dissemination of ground motion simulation results. Websim3D allows user-friendly and fast access to view and download such simulation results for an earthquake-prone area. The user selects an earthquake scenario from a map of the region, which brings up a map of the area where simulation data are available. Then, by clicking on an arbitrary site location, synthetic seismograms and/or soil parameters for the site can be displayed at fixed or variable scaling and/or downloaded. Websim3D relies on PHP scripts for the dynamic plotting of synthetic seismograms and soil profiles. Although not limited to a specific area, we illustrate the community model with simulation results from the Los Angeles basin, Wellington (New Zealand), and Mexico.

  3. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1 month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which would facilitate the analysis of the dynamic motion during facial animations. PMID:23218511

  4. Nonlinear, nonlaminar - 3D computation of electron motion through the output cavity of a klystron.

    NASA Technical Reports Server (NTRS)

    Albers, L. U.; Kosmahl, H. G.

    1971-01-01

    The accurate computation of electron motion throughout the output cavity of a klystron amplifier is discussed. The assumptions on which the computation is based are defined, and the equations of motion are reviewed, along with the space-charge fields derived from a Green's function potential of a solid cylinder. The integration process is then examined with special attention to its most difficult and important aspect - namely, the accurate treatment of the dynamic effect of space-charge forces on the motion of individual cell rings of equal volume and charge. The correct treatment is demonstrated on four specific examples, and a few comments are given on the results obtained.

  5. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    In time-division 3D displays, the time difference between the left and right images makes a viewer perceive alternating vertical parallax when an object moves vertically on a fixed depth plane; the perceived left and right images no longer match, making viewers more prone to visual fatigue. This mismatch cannot be eliminated simply by relying on precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and human visual system characteristics, this paper establishes a model relating the true vertical motion velocity to the vertical motion velocity on the screen, calculates the amount of vertical parallax caused by the vertical motion, and then puts forward a motion compensation method to eliminate it. Finally, subjective experiments are carried out to analyze how the time difference affects stereo visual comfort by comparing the comfort values of the stereo image sequences before and after compensation with the proposed method. The theoretical analysis and experimental results show that the proposed method is reasonable and efficient.
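
    The size of the motion-induced vertical parallax follows directly from the left/right time offset. A minimal sketch, assuming the lagging view trails the leading view by half the frame period of a time-division display:

    ```python
    # Minimal sketch: vertical parallax induced by vertical on-screen motion.
    def vertical_parallax(v_screen_px_per_s, frame_rate_hz):
        dt = 0.5 / frame_rate_hz         # assumed left/right time offset
        return v_screen_px_per_s * dt    # parallax in pixels; compensation shifts
                                         # the lagging view by this amount

    print(vertical_parallax(600.0, 120.0))   # e.g. 2.5 px at 120 Hz
    ```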

  6. Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

    PubMed

    Rauschecker, Andreas M; Solomon, Samuel G; Glennerster, Andrew

    2006-01-01

    In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues. PMID:17209749

  7. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach

    PubMed Central

    de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José

    2015-01-01

    This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above and 88 underwater control points—with 8 common points at water surface—and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. Direct linear transformation algorithm for 3D reconstruction was applied, using 1600000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers (resp.). Root Mean Square (RMS) error with homography of control and validations points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction should include homography to improve swimming movement analysis accuracy. PMID:26175796
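
    The direct linear transformation (DLT) reconstruction step can be sketched as follows. This is a generic multi-camera DLT, not the authors' implementation, and it assumes each camera's 3x4 projection matrix has already been estimated from the control points, with or without the homography-based rectification.

    ```python
    # Minimal sketch: multi-camera DLT reconstruction of one marker.
    import numpy as np

    def dlt_reconstruct(projections, pixels):
        """projections: list of 3x4 matrices; pixels: list of (u, v) per camera."""
        rows = []
        for P, (u, v) in zip(projections, pixels):
            rows.append(u * P[2] - P[0])     # two linear constraints per camera
            rows.append(v * P[2] - P[1])
        A = np.vstack(rows)
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                           # null-space solution (homogeneous)
        return X[:3] / X[3]                  # Euclidean 3D coordinates
    ```

    Reconstructing the validation points this way and comparing them with their known positions yields the RMS errors reported above.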

  8. 3-D Human Action Recognition by Shape Analysis of Motion Trajectories on Riemannian Manifold.

    PubMed

    Devanne, Maxime; Wannous, Hazem; Berretti, Stefano; Pala, Pietro; Daoudi, Mohamed; Del Bimbo, Alberto

    2015-07-01

    Recognizing human actions in 3-D video sequences is an important open problem that is currently at the heart of many research domains including surveillance, natural interfaces and rehabilitation. However, the design and development of models for action recognition that are both accurate and efficient is a challenging task due to the variability of the human pose, clothing and appearance. In this paper, we propose a new framework to extract a compact representation of a human action captured through a depth sensor, and enable accurate action recognition. The proposed solution builds on fitting a human skeleton model to the acquired data so as to represent the 3-D coordinates of the joints and their change over time as a trajectory in a suitable action space. Thanks to such a 3-D joint-based framework, the proposed solution is capable of capturing both the shape and the dynamics of the human body simultaneously. The action recognition problem is then formulated as the problem of computing the similarity between the shape of trajectories in a Riemannian manifold. Classification using k-nearest neighbors is finally performed on this manifold taking advantage of Riemannian geometry in the open curve shape space. Experiments are carried out on four representative benchmarks to demonstrate the potential of the proposed solution in terms of accuracy/latency for low-latency action recognition. Comparative results with state-of-the-art methods are reported. PMID:25216492

  9. Atmospheric Motion Vectors from INSAT-3D: Initial quality assessment and its impact on track forecast of cyclonic storm NANAUK

    NASA Astrophysics Data System (ADS)

    Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham

    2016-03-01

    The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder and is placed at 82°E over the Indian Ocean region. Advances in the retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for a better understanding of tropical atmospheric processes over this region. The retrieval technique and accuracy of one such parameter, the Atmospheric Motion Vector (AMV), have improved significantly with the availability of higher-spatial-resolution data and more spectral channels in the INSAT-3D imager. The present work focuses on providing brief descriptions of the INSAT-3D data and of the AMV derivation process using these data. It also discusses an initial quality assessment of INSAT-3D AMVs over a six-month period, from 01 February 2014 to 31 July 2014, against other independent observations: i) Meteosat-7 AMVs available over this region, ii) in-situ radiosonde wind measurements, iii) cloud-tracked winds from the Multi-angle Imaging Spectro-Radiometer (MISR), and iv) numerical model analyses. The study shows that the quality of the newly derived INSAT-3D AMVs is comparable with that of the two existing versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, the INSAT-3D AMVs are assimilated in the Weather Research and Forecasting (WRF) model, and the assimilation of the newly derived AMVs is found to reduce the track forecast errors for the recent cyclonic storm NANAUK over the Arabian Sea. Although the present study is limited to one case, it provides some guidance to operational agencies for implementing this new AMV dataset in future Numerical Weather Prediction (NWP) applications over the South Asia region.

  10. Sedimentary basin effects in Seattle, Washington: Ground-motion observations and 3D simulations

    USGS Publications Warehouse

    Frankel, Arthur; Stephenson, William; Carver, David

    2009-01-01

    Seismograms of local earthquakes recorded in Seattle exhibit surface waves in the Seattle basin and basin-edge focusing of S waves. Spectral ratios of S waves and later arrivals at 1 Hz for stiff-soil sites in the Seattle basin show a dependence on the direction to the earthquake, with earthquakes to the south and southwest producing higher average amplification. Earthquakes to the southwest typically produce larger basin surface waves relative to S waves than earthquakes to the north and northwest, probably because of the velocity contrast across the Seattle fault along the southern margin of the Seattle basin. S to P conversions are observed for some events and are likely converted at the bottom of the Seattle basin. We model five earthquakes, including the M 6.8 Nisqually earthquake, using 3D finite-difference simulations accurate up to 1 Hz. The simulations reproduce the observed dependence of amplification on the direction to the earthquake. The simulations generally match the timing and character of basin surface waves observed for many events. The 3D simulation for the Nisqually earthquake produces focusing of S waves along the southern margin of the Seattle basin near the area in west Seattle that experienced increased chimney damage from the earthquake, similar to the results of the higher-frequency 2D simulation reported by Stephenson et al. (2006). Waveforms from the 3D simulations show reasonable agreement with the data at low frequencies (0.2-0.4 Hz) for the Nisqually earthquake and an M 4.8 deep earthquake west of Seattle.

  11. Undersampled Cine 3D tagging for rapid assessment of cardiac motion

    PubMed Central

    2012-01-01

    Background CMR allows investigating cardiac contraction, rotation and torsion non-invasively by the use of tagging sequences. Three-dimensional tagging has been proposed to cover the whole heart, but data acquisition requires three consecutive breath holds and hence demands considerable patient cooperation. In this study we have implemented and studied k-t undersampled cine 3D tagging in conjunction with k-t PCA reconstruction to potentially permit single breath-hold acquisitions. Methods The performance of undersampled cine 3D tagging was investigated using computer simulations and in-vivo measurements in 8 healthy subjects and 5 patients with myocardial infarction. Fully sampled data was obtained and compared to retrospectively and prospectively undersampled acquisitions. Fully sampled data was acquired in three consecutive breath holds. Prospectively undersampled data was obtained within a single breath hold. Based on harmonic phase (HARP) analysis, circumferential shortening, rotation and torsion were compared between fully sampled and undersampled data using Bland-Altman and linear regression analysis. Results In computer simulations, the error for circumferential shortening was 2.8 ± 2.3% and 2.7 ± 2.1% for undersampling rates of R = 3 and 4 respectively. Errors in ventricular rotation were 2.5 ± 1.9% and 3.0 ± 2.2% for R = 3 and 4. Comparison of fully sampled in-vivo data with prospectively undersampled acquisitions showed a mean difference in circumferential shortening of −0.14 ± 5.18% and 0.71 ± 6.16% for R = 3 and 4. The mean differences in rotation were 0.44 ± 1.8° and 0.73 ± 1.67° for R = 3 and 4, respectively. In patients, peak circumferential shortening was significantly reduced (p < 0.002 for all patients) in regions with late gadolinium enhancement. Conclusion Undersampled cine 3D tagging enables significant reduction in scan time of whole-heart tagging and
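    As a minimal illustration of the Bland-Altman comparison mentioned above (not code from the study), the sketch below computes the mean difference and 95% limits of agreement between paired fully sampled and undersampled measurements; the numbers are hypothetical circumferential-shortening values.

```python
import numpy as np

def bland_altman(reference, test):
    """Mean difference (bias) and 95% limits of agreement between paired measurements."""
    reference, test = np.asarray(reference, float), np.asarray(test, float)
    diff = test - reference
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, (bias - spread, bias + spread)

# Hypothetical paired circumferential-shortening values (%), fully sampled vs R = 3
full_cs  = [18.2, 17.5, 19.0, 16.8, 18.9]
under_cs = [18.0, 17.9, 18.4, 17.1, 18.6]
print(bland_altman(full_cs, under_cs))
```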

  12. Prospective motion correction of 3D echo-planar imaging data for functional MRI using optical tracking

    PubMed Central

    Todd, Nick; Josephs, Oliver; Callaghan, Martina F.; Lutti, Antoine; Weiskopf, Nikolaus

    2015-01-01

    We evaluated the performance of an optical camera-based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field-of-view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2 × 2 × 3 factorial design with the following factors: PMC on or off; 3.0 mm or 1.5 mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5 mm resolution comparing PMC on vs PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value for each image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no-motion conditions and improved the time-series temporal signal-to-noise ratio by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p < 0.001, uncorrected) in both task-based experiments were comparable for the no-motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow motion cases. The PMC system is a robust solution to decrease the motion sensitivity of multi-shot 3D EPI sequences and thereby overcome one of the main roadblocks to their widespread use in fMRI studies. PMID:25783205
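    The temporal signal-to-noise gain reported above can be computed with a generic voxel-wise tSNR calculation such as the sketch below; this is not the authors' code, and the array names and mask are placeholders.

```python
import numpy as np

def temporal_snr(timeseries):
    """Voxel-wise temporal SNR of a 4-D fMRI array shaped (x, y, z, t)."""
    mean = timeseries.mean(axis=-1)
    std = timeseries.std(axis=-1, ddof=1)
    return mean / (std + 1e-12)

# Percentage tSNR improvement of PMC-on over PMC-off within a brain mask (hypothetical arrays):
# gain = 100.0 * (temporal_snr(pmc_on)[mask].mean() / temporal_snr(pmc_off)[mask].mean() - 1.0)
```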

  13. Numerical scheme for riser motion calculation during 3-D VIV simulation

    NASA Astrophysics Data System (ADS)

    Huang, Kevin; Chen, Hamn-Ching; Chen, Chia-Rong

    2011-10-01

    This paper presents a numerical scheme for riser motion calculation and its application to riser VIV simulations. The discretisation of the governing differential equation is studied first. The top-tensioned risers are simplified as tensioned beams. A centered-space, forward-time finite difference scheme is derived from the governing equations of motion. An implicit method is then adopted for better numerical stability. The method satisfies the von Neumann criterion and is shown to be unconditionally stable. The discretized linear algebraic equations are solved using an LU decomposition method. This approach is then applied to a series of benchmark cases with known solutions, and the comparisons show good agreement. Finally, the method is applied to practical riser VIV simulations. The studied cases cover a wide range of riser VIV problems, i.e., different riser outer diameters, lengths, tensioning conditions, and current profiles. Reasonable agreement is obtained between the numerical simulations and experimental data on riser motions and cross-flow VIV amplitude ratio a/D. These validations and comparisons confirm that the present numerical scheme for riser motion calculation is valid and effective for long riser VIV simulations.
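    A minimal sketch of the kind of implicit finite-difference update described above, for a pinned tensioned beam m w_tt = T w_xx − EI w_xxxx + f, with the system matrix LU-factored once and reused at every time step. The parameter values, boundary handling and forcing term are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Illustrative parameters: grid points, riser length (m), steps, material/tension values
n, L = 101, 100.0
dx, dt = L / (n - 1), 0.01
m, T, EI = 50.0, 1.0e5, 1.0e6

# Second- and fourth-difference operators (interior approximation)
I = np.eye(n)
D2 = (np.diag(np.ones(n - 1), 1) - 2 * I + np.diag(np.ones(n - 1), -1)) / dx**2
D4 = D2 @ D2

# Implicit update: central difference in time for w_tt, spatial operators at the new level
A = (m / dt**2) * I - T * D2 + EI * D4
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0           # pinned ends: w = 0
lu, piv = lu_factor(A)              # factor once, reuse every step

w, w_old = np.zeros(n), np.zeros(n)
f = np.zeros(n); f[n // 2] = 1.0e3  # placeholder hydrodynamic (e.g. VIV) load
for step in range(1000):
    rhs = (m / dt**2) * (2 * w - w_old) + f
    rhs[0] = rhs[-1] = 0.0
    w_new = lu_solve((lu, piv), rhs)
    w_old, w = w, w_new
```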

  14. Description of a 3D display with motion parallax and direct interaction

    NASA Astrophysics Data System (ADS)

    Tu, J.; Flynn, M. F.

    2014-03-01

    We present a description of a time sequential stereoscopic display which separates the images using a segmented polarization switch and passive eyewear. Additionally, integrated tracking cameras and an SDK on the host PC allow us to implement motion parallax in real time.

  15. Analysis of 3-D Tongue Motion from Tagged and Cine Magnetic Resonance Images

    ERIC Educational Resources Information Center

    Xing, Fangxu; Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2016-01-01

    Purpose: Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during…

  16. Motion Controllers for Learners to Manipulate and Interact with 3D Objects for Mental Rotation Training

    ERIC Educational Resources Information Center

    Yeh, Shih-Ching; Wang, Jin-Liang; Wang, Chin-Yeh; Lin, Po-Han; Chen, Gwo-Dong; Rizzo, Albert

    2014-01-01

    Mental rotation is an important spatial processing ability and an important element in intelligence tests. However, the majority of past attempts at training mental rotation have used paper-and-pencil tests or digital images. This study proposes an innovative mental rotation training approach using magnetic motion controllers to allow learners to…

  17. Prediction of 3D internal organ position from skin surface motion: results from electromagnetic tracking studies

    NASA Astrophysics Data System (ADS)

    Wong, Kenneth H.; Tang, Jonathan; Zhang, Hui J.; Varghese, Emmanuel; Cleary, Kevin R.

    2005-04-01

    An effective treatment method for organs that move with respiration (such as the lungs, pancreas, and liver) is a major goal of radiation medicine. In order to treat such tumors, we need (1) real-time knowledge of the current location of the tumor, and (2) the ability to adapt the radiation delivery system to follow this constantly changing location. In this study, we used electromagnetic tracking in a swine model to address the first challenge, and to determine if movement of a marker attached to the skin could accurately predict movement of an internal marker embedded in an organ. Under approved animal research protocols, an electromagnetically tracked needle was inserted into a swine liver and an electromagnetically tracked guidewire was taped to the abdominal skin of the animal. The Aurora (Northern Digital Inc., Waterloo, Canada) electromagnetic tracking system was then used to monitor the position of both of these sensors every 40 msec. Position readouts from the sensors were then tested to see if any of the movements showed correlation. The strongest correlations were observed between external anterior-posterior motion and internal inferior-superior motion, with many other axes exhibiting only weak correlation. We also used these data to build a predictive model of internal motion by taking segments from the data and using them to derive a general functional relationship between the internal needle and the external guidewire. For the axis with the strongest correlation, this model enabled us to predict internal organ motion to within 1 mm.
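    The simple predictive model described above can be sketched as a least-squares linear map from the best-correlated external axis (anterior-posterior skin motion) to the internal axis (inferior-superior organ motion). The traces below are synthetic stand-ins for the 40-ms sensor readouts, not the swine data.

```python
import numpy as np

def fit_external_to_internal(external_ap, internal_si):
    """Least-squares slope and offset mapping external AP motion to internal SI motion."""
    A = np.column_stack([external_ap, np.ones(len(external_ap))])
    coeffs, *_ = np.linalg.lstsq(A, internal_si, rcond=None)
    return coeffs  # (slope, offset)

def predict_internal(external_ap, coeffs):
    return coeffs[0] * np.asarray(external_ap) + coeffs[1]

# Synthetic 40-ms-sampled traces (mm): fit on a training segment, evaluate on the rest
t = np.arange(0, 30, 0.04)
ext = 5.0 * np.sin(2 * np.pi * t / 4.0) + 0.2 * np.random.randn(t.size)
internal = 8.0 * np.sin(2 * np.pi * t / 4.0 + 0.1) + 0.3 * np.random.randn(t.size)
coeffs = fit_external_to_internal(ext[:400], internal[:400])
rmse = np.sqrt(np.mean((predict_internal(ext[400:], coeffs) - internal[400:])**2))
```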

  18. Experience affects the use of ego-motion signals during 3D shape perception

    PubMed Central

    Jain, Anshul; Backus, Benjamin T.

    2011-01-01

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the “stationarity prior,” is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers’ stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity. PMID:21191132

  19. Motion-parallax smoothness of short-, medium-, and long-distance 3D image presentation using multi-view displays.

    PubMed

    Takaki, Yasuhiro; Urano, Yohei; Nishio, Hiroyuki

    2012-11-19

    The discontinuity of motion parallax offered by multi-view displays was assessed by subjective evaluation. A super multi-view head-up display, which provides dense viewing points and has short-, medium-, and long-distance display ranges, was used. The results showed that discontinuity perception depended on the ratio of the image shift between adjacent parallax images to the pixel pitch of the three-dimensional (3D) images, and on the crosstalk between viewing points. When the ratio was less than 0.2 and the crosstalk was small, the discontinuity was not perceived. When the ratio was greater than 1 and the crosstalk was small, the discontinuity was perceived, and the resolution of the 3D images decreased by a factor of two. When the crosstalk was large, the discontinuity was not perceived even when the ratio was 1 or 2; however, the resolution decreased by a factor of two or more. PMID:23187574

  20. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become increasingly important in recent years. Several well-established methods already yield impressive results. However, even under quite common conditions such as object movement or complex shapes, most methods become unsatisfactory. Thus, 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables motion-robust 3D acquisition of complex-shaped object surfaces with a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor-zoo for different object sizes is presented. Finally, an overview of current and future fields of investigation is given.

  1. Combining marker-less patient setup and respiratory motion monitoring using low cost 3D camera technology

    NASA Astrophysics Data System (ADS)

    Tahavori, F.; Adams, E.; Dabbs, M.; Aldridge, L.; Liversidge, N.; Donovan, E.; Jordan, T.; Evans, PM.; Wells, K.

    2015-03-01

    Patient set-up misalignment/motion can be a significant source of error within external beam radiotherapy, leading to unwanted dose to healthy tissues and sub-optimal dose to the target tissue. Such inadvertent displacement or motion of the target volume may be caused by treatment set-up error, respiratory motion or involuntary movement, potentially decreasing the therapeutic benefit. The conventional approach to managing abdominal-thoracic patient set-up is via skin markers (tattoos) and laser-based alignment. Alignment of the internal target volume with its position in the treatment plan can be achieved using Deep Inspiration Breath Hold (DIBH) in conjunction with marker-based respiratory motion monitoring. We propose a marker-less, single-system solution for patient set-up and respiratory motion management based on low-cost 3D depth camera technology (such as the Microsoft Kinect). In this new work we assess this approach in a study group of six volunteer subjects. Separate simulated treatment "fractions" or set-ups are compared for each subject, undertaken using conventional laser-based alignment and using the intrinsic depth images produced by the Kinect. The Microsoft Kinect is also compared with the well-known RPM system for respiratory motion management in terms of monitoring free-breathing and DIBH. Preliminary results suggest that the Kinect is able to produce mm-level surface alignment and comparable DIBH respiratory motion management when compared to the popular RPM system. Such an approach may also yield significant benefits in terms of patient throughput, as marker alignment and respiratory motion monitoring can be automated in a single system.

  2. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    PubMed Central

    2011-01-01

    Background The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Results Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. Conclusions The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study
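    As a minimal illustration of the triangulation step described above, the sketch below uses standard linear (DLT) two-view triangulation to turn a stereo-matched thorax position into real-world coordinates; the projection matrices and pixel coordinates are assumed inputs and are not taken from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one stereo-matched point.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates in each image."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # real-world coordinates

# Per-bee motion components between consecutive frames: dx, dy, dz = pos_t1 - pos_t0
```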

  3. Pitching motion control of a butterfly-like 3D flapping wing-body model

    NASA Astrophysics Data System (ADS)

    Suzuki, Kosuke; Minami, Keisuke; Inamuro, Takaji

    2014-11-01

    Free flights and pitching motion control of a butterfly-like flapping wing-body model are numerically investigated by using an immersed boundary-lattice Boltzmann method. The model flaps downward to generate the lift force and backward to generate the thrust force. Although the model can go upward against gravity by means of the generated lift force, it also generates a nose-up torque and consequently becomes unbalanced. In this study, we discuss a way to control the pitching motion by flexing the body of the wing-body model like an actual butterfly. The body of the model is composed of two straight rigid rods connected by a rotary actuator. It is found that the pitching angle is kept within ±5° by using proportional-plus-integral-plus-derivative (PID) control of the input torque of the rotary actuator.
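    A minimal sketch of the PID loop described above, applied to a toy damped rigid-body pitch model rather than the immersed boundary-lattice Boltzmann simulation; gains, inertia and damping values are illustrative assumptions.

```python
import numpy as np

class PID:
    """Textbook PID controller producing the actuator torque that flexes the body joint."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def torque(self, pitch_angle, target=0.0):
        error = target - pitch_angle
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: pitch dynamics as a damped rigid body driven by the control torque
dt, inertia, damping = 1e-3, 1.0, 0.2
ctrl = PID(kp=40.0, ki=5.0, kd=8.0, dt=dt)
theta, omega = np.radians(10.0), 0.0      # initial nose-up disturbance
for _ in range(5000):
    tau = ctrl.torque(theta)
    omega += dt * (tau - damping * omega) / inertia
    theta += dt * omega
```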

  4. Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions

    NASA Astrophysics Data System (ADS)

    Khoury, Mehdi; Liu, Honghai

    This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.

  5. A very low-cost system for capturing 3D motion scans with color and texture data

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    This paper presents a technique for capturing 3D motion scans using hardware that can be constructed for approximately $5,000. This hardware-software solution, in addition to capturing the movement of the physical structures, also captures color and texture data. The scanner configuration developed at the University of North Dakota is large enough to capture scans of a group of humans. Scanning starts with synchronization and then requires modeling of each frame. For some applications, linking structural elements from frame to frame may also be required. The efficacy of this scanning approach is discussed and prospective applications for it are considered.

  6. Stereo photography of neutral density He-filled bubbles for 3-D fluid motion studies in an engine cylinder.

    PubMed

    Kent, J C; Eaton, A R

    1982-03-01

    A new technique has been developed for studies of fluid motion within the cylinder of a reciprocating piston engine during the air induction process. Helium-filled bubbles, serving as neutrally buoyant flow tracer particles, enter the cylinder along with the inducted air charge. The bubble motion is recorded by stereo cine photography through the transparent cylinder of a specially designed research engine. Quantitative data on the 3-D velocity field generated during induction is obtained from frame-to-frame analysis of the stereo images, taking into account refraction of the rays due to the transparent cylinder. Other applications for which this technique appears suitable include measurements of velocity fields within intake ports and flow-field dynamics within intake manifolds of multicylinder engines. PMID:20372559

  7. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances of up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. PMID:27590974

  8. Towards real-time 2D/3D registration for organ motion monitoring in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Gendrin, C.; Spoerk, J.; Bloch, C.; Pawiro, S. A.; Weber, C.; Figl, M.; Markelj, P.; Pernus, F.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2010-02-01

    Nowadays, radiation therapy systems incorporate kV imaging units which allow for the real-time acquisition of intra-fractional X-ray images of the patient with high detail and contrast. An application of this technology is tumor motion monitoring during irradiation. For tumor tracking, implanted markers or position sensors are used, which requires an intervention. 2D/3D intensity-based registration is an alternative, non-invasive method, but the procedure must be accelerated to the update rate of the device, which lies in the range of 5 Hz. In this paper we investigate fast 2D/3D registration of a CT volume to a single kV X-ray image using a new porcine reference phantom with seven implanted fiducial markers. Several parameters influencing the speed and accuracy of the registrations are investigated. First, four intensity-based merit functions, namely Cross-Correlation, Rank Correlation, Mutual Information and Correlation Ratio, are compared. Secondly, wobbled splatting and ray casting rendering techniques are implemented on the GPU and the influence of each algorithm on the performance of 2D/3D registration is evaluated. Rendering times of 20 ms for a single DRR were achieved. Different thresholds of the CT volume were also examined for rendering to find the setting that achieves the best possible correspondence with the X-ray images. Fast registrations below 4 s became possible, with an in-plane accuracy down to 0.8 mm.
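    Two of the intensity-based merit functions compared above can be sketched as follows for a rendered DRR and a kV X-ray image of the same size; this is generic image-similarity code, not the authors' GPU implementation.

```python
import numpy as np

def normalized_cross_correlation(drr, xray):
    """Cross-correlation merit between a rendered DRR and the kV X-ray image."""
    a = (drr - drr.mean()) / (drr.std() + 1e-12)
    b = (xray - xray.mean()) / (xray.std() + 1e-12)
    return float((a * b).mean())

def mutual_information(drr, xray, bins=64):
    """Histogram-based mutual information merit between the two images."""
    hist, _, _ = np.histogram2d(drr.ravel(), xray.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```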

  9. Uncertainty preserving patch-based online modeling for 3D model acquisition and integration from passive motion imagery

    NASA Astrophysics Data System (ADS)

    Tang, Hao; Chang, Peng; Molina, Edgardo; Zhu, Zhigang

    2012-06-01

    In both military and civilian applications, abundant data from diverse sources captured on airborne platforms are often available for a region attracting interest. Since the data often include motion imagery streams collected from multiple platforms flying at different altitudes, with sensors of different fields of view (FOVs), resolutions, frame rates and spectral bands, it is imperative that a cohesive site model encompassing all the information can be quickly built and presented to the analysts. In this paper, we propose to develop an Uncertainty Preserving Patch-based Online Modeling System (UPPOMS) leading towards the automatic creation and updating of a cohesive, geo-registered, uncertainty-preserving, efficient 3D site terrain model from passive imagery with varying fields of view and phenomenologies. The proposed UPPOMS has the following technical thrusts that differentiate our approach from others: (1) An uncertainty-preserving, patch-based 3D model is generated, which enables the integration of images captured with a mixture of NFOV and WFOV and/or visible and infrared motion imagery sensors. (2) Patch-based stereo matching and multi-view 3D integration are utilized, which are suitable for scenes with many low-texture regions, particularly in mid-wave infrared images. (3) In contrast to conventional volumetric algorithms, whose computational and storage costs grow exponentially with the amount of input data and the scale of the scene, the proposed UPPOMS system employs an online algorithmic pipeline and scales well to large amounts of input data. Experimental results and discussions of future work will be provided.

  10. 3D PET image reconstruction including both motion correction and registration directly into an MR or stereotaxic spatial atlas

    NASA Astrophysics Data System (ADS)

    Gravel, Paul; Verhaeghe, Jeroen; Reader, Andrew J.

    2013-01-01

    This work explores the feasibility and impact of including both the motion correction and the image registration transformation parameters from positron emission tomography (PET) image space to magnetic resonance (MR), or stereotaxic, image space within the system matrix of PET image reconstruction. This approach is motivated by the fields of neuroscience and psychiatry, where PET is used to investigate differences in activation patterns between different groups of participants, requiring all images to be registered to a common spatial atlas. Currently, image registration is performed after image reconstruction which introduces interpolation effects into the final image. Furthermore, motion correction (also requiring registration) introduces a further level of interpolation, and the overall result of these operations can lead to resolution degradation and possibly artifacts. It is important to note that performing such operations on a post-reconstruction basis means, strictly speaking, that the final images are not ones which maximize the desired objective function (e.g. maximum likelihood (ML), or maximum a posteriori reconstruction (MAP)). To correctly seek parameter estimates in the desired spatial atlas which are in accordance with the chosen reconstruction objective function, it is necessary to include the transformation parameters for both motion correction and registration within the system modeling stage of image reconstruction. Such an approach not only respects the statistically chosen objective function (e.g. ML or MAP), but furthermore should serve to reduce the interpolation effects. To evaluate the proposed method, this work investigates registration (including motion correction) using 2D and 3D simulations based on the high resolution research tomograph (HRRT) PET scanner geometry, with and without resolution modeling, using the ML expectation maximization (MLEM) reconstruction algorithm. The quality of reconstruction was assessed using bias
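    The idea of folding motion correction and registration into the system model can be sketched with a plain MLEM update in which the system matrix already maps atlas-space voxels to (motion-corrected) projection bins; the dense-matrix form below is purely illustrative and is not the authors' implementation.

```python
import numpy as np

def mlem(projections, system_matrix, n_iter=50):
    """MLEM in the target (atlas) space: the system matrix is assumed to fold in
    motion correction and the PET-to-atlas registration transform."""
    n_vox = system_matrix.shape[1]
    image = np.ones(n_vox)
    sensitivity = system_matrix.sum(axis=0) + 1e-12   # back-projection of ones
    for _ in range(n_iter):
        expected = system_matrix @ image + 1e-12      # forward project current estimate
        ratio = projections / expected
        image *= (system_matrix.T @ ratio) / sensitivity
    return image
```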

  11. Modulated Magnetic Nanowires for Controlling Domain Wall Motion: Toward 3D Magnetic Memories.

    PubMed

    Ivanov, Yurii P; Chuvilin, Andrey; Lopatin, Sergei; Kosel, Jurgen

    2016-05-24

    Cylindrical magnetic nanowires are attractive materials for next generation data storage devices owing to the theoretically achievable high domain wall velocity and their efficient fabrication in highly dense arrays. In order to obtain control over domain wall motion, reliable and well-defined pinning sites are required. Here, we show that modulated nanowires consisting of alternating nickel and cobalt sections facilitate efficient domain wall pinning at the interfaces of those sections. By combining electron holography with micromagnetic simulations, the pinning effect can be explained by the interaction of the stray fields generated at the interface and the domain wall. Utilizing a modified differential phase contrast imaging, we visualized the pinned domain wall with a high resolution, revealing its three-dimensional vortex structure with the previously predicted Bloch point at its center. These findings suggest the potential of modulated nanowires for the development of high-density, three-dimensional data storage devices. PMID:27138460

  12. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.

  13. Quantification of Ground Motion Reductions by Fault Zone Plasticity with 3D Spontaneous Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Roten, D.; Olsen, K. B.; Cui, Y.; Day, S. M.

    2015-12-01

    We explore the effects of fault zone nonlinearity on peak ground velocities (PGVs) by simulating a suite of surface rupturing earthquakes in a visco-plastic medium. Our simulations, performed with the AWP-ODC 3D finite difference code, cover magnitudes from 6.5 to 8.0, with several realizations of the stochastic stress drop for a given magnitude. We test three different models of rock strength, with friction angles and cohesions based on criteria which are frequently applied to fractured rock masses in civil engineering and mining. We use a minimum shear-wave velocity of 500 m/s and a maximum frequency of 1 Hz. In rupture scenarios with average stress drop (~3.5 MPa), plastic yielding reduces near-fault PGVs by 15 to 30% in pre-fractured, low-strength rock, but less than 1% in massive, high quality rock. These reductions are almost insensitive to the scenario earthquake magnitude. In the case of high stress drop (~7 MPa), however, plasticity reduces near-fault PGVs by 38 to 45% in rocks of low strength and by 5 to 15% in rocks of high strength. Because plasticity reduces slip rates and static slip near the surface, these effects can partially be captured by defining a shallow velocity-strengthening layer. We also perform a dynamic nonlinear simulation of a high stress drop M 7.8 earthquake rupturing the southern San Andreas fault along 250 km from Indio to Lake Hughes. With respect to the viscoelastic solution, nonlinearity in the fault damage zone and in near-surface deposits would reduce long-period (> 1 s) peak ground velocities in the Los Angeles basin by 15-50%, depending on the strength of crustal rocks and shallow sediments. These simulation results suggest that nonlinear effects may be relevant even at long periods, especially for earthquakes with high stress drop.

  14. Nonlinear, nonlaminar-3D computation of electron motion through the output cavity of a klystron

    NASA Technical Reports Server (NTRS)

    Albers, L. U.; Kosmahl, H. G.

    1971-01-01

    The equations of motion used in the computation are discussed along with the space charge fields and the integration process. The following assumptions were used as a basis for the computation: (1) The beam is divided into N axisymmetric discs of equal charge and each disc into R rings of equal charge. (2) The velocity of each disc, its phase with respect to the gap voltage, and its radius at a specified position in the drift tunnel prior to the interaction gap are known from available large-signal one-dimensional programs. (3) The fringing rf fields are computed from exact analytical expressions derived from the wave equation assuming a known field shape between the tunnel tips at a radius a. (4) The beam is focused by an axisymmetric magnetic field. Both components of B, that is B_z and B_r, are taken into account. (5) Since the integration does not start at the cathode but rather farther downstream, just before the output cavity, it is assumed that each electron moved along a laminar path from the cathode to the start of the integration.

  15. 3D optical imagery for motion compensation in a limb ultrasound system

    NASA Astrophysics Data System (ADS)

    Ranger, Bryan J.; Feigin, Micha; Zhang, Xiang; Mireault, Al; Raskar, Ramesh; Herr, Hugh M.; Anthony, Brian W.

    2016-04-01

    Conventional processes for prosthetic socket fabrication are heavily subjective, often resulting in an interface to the human body that is neither comfortable nor completely functional. With nearly 100% of amputees reporting that they experience discomfort with the wearing of their prosthetic limb, designing an effective interface to the body can significantly affect quality of life and future health outcomes. Active research in medical imaging and biomechanical tissue modeling of residual limbs has led to significant advances in computer aided prosthetic socket design, demonstrating an interest in moving toward more quantifiable processes that are still patient-specific. In our work, medical ultrasonography is being pursued to acquire data that may quantify and improve the design process and fabrication of prosthetic sockets while greatly reducing cost compared to an MRI-based framework. This paper presents a prototype limb imaging system that uses a medical ultrasound probe, mounted to a mechanical positioning system and submerged in a water bath. The limb imaging is combined with three-dimensional optical imaging for motion compensation. Images are collected circumferentially around the limb and combined into cross-sectional axial image slices, resulting in a compound image that shows tissue distributions and anatomical boundaries similar to magnetic resonance imaging. In this paper we provide a progress update on our system development, along with preliminary results as we move toward full volumetric imaging of residual limbs for prosthetic socket design. This demonstrates a novel multi-modal approach to residual limb imaging.

  16. Study of human body: Kinematics and kinetics of a martial arts (Silat) performers using 3D-motion capture

    NASA Astrophysics Data System (ADS)

    Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi

    2015-04-01

    Interest in the study of human kinematics goes back very far in human history, driven by curiosity and by the need to understand the complexity of human body motion. Advances in computing technology now make it possible to obtain new and accurate information about human movement. Martial arts (silat) was chosen for this project and multiple types of movement were studied. The project used cutting-edge 3D motion capture technology to characterize and measure the motions performed by martial arts (silat) performers. The cameras detect markers (via infrared reflection from the markers) placed around the performer's body (24 markers in total), which appear as dots in the computer software. The detected markers were analyzed using a kinematic and kinetic approach with time as the reference. Graphs of velocity, acceleration and position against time, t (seconds), were plotted for each marker. From the information obtained, further parameters were determined, such as work done, momentum and the centre of mass of the body, using a mathematical approach. These data can be used to develop more effective movements in martial arts, contributing to practitioners of the art. Further work can build on this project, such as the analysis of a martial arts competition.
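    A minimal sketch of the kinematic post-processing described above: central-difference velocity and acceleration of the 24 tracked markers and a mass-weighted centre of mass. The marker data, segment weights and subject mass are hypothetical placeholders, not values from the study.

```python
import numpy as np

def marker_kinematics(positions, dt):
    """Central-difference velocity and acceleration for a (T, 24, 3) marker array."""
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    return vel, acc

def centre_of_mass(positions, segment_weights):
    """Weighted average of marker positions (segment_weights: length-24 mass weights)."""
    w = np.asarray(segment_weights, float)
    return (positions * w[None, :, None]).sum(axis=1) / w.sum()

# Hypothetical capture: 200 frames at 100 Hz, 24 markers
dt = 0.01
pos = np.random.rand(200, 24, 3)
vel, acc = marker_kinematics(pos, dt)
com = centre_of_mass(pos, np.ones(24))
kinetic_energy = 0.5 * 70.0 * (np.gradient(com, dt, axis=0)**2).sum(axis=1)  # assumed 70 kg subject
```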

  17. Effects of simple and complex motion patterns on gene expression of chondrocytes seeded in 3D scaffolds.

    PubMed

    Grad, Sibylle; Gogolewski, Sylwester; Alini, Mauro; Wimmer, Markus A

    2006-11-01

    This study investigated the effect of unidirectional and multidirectional motion patterns on gene expression and molecule release of chondrocyte-seeded 3D scaffolds. Resorbable porous polyurethane scaffolds were seeded with bovine articular chondrocytes and exposed to dynamic compression, applied with a ceramic hip ball, alone (group 1), with superimposed rotation of the scaffold around its cylindrical axis (group 2), oscillation of the ball over the scaffold surface (group 3), or oscillation of ball and scaffold in phase difference (group 4). Compared with group 1, the proteoglycan 4 (PRG4) and cartilage oligomeric matrix protein (COMP) mRNA expression levels were markedly increased by ball oscillation (groups 3 and 4). Furthermore, the collagen type II mRNA expression was enhanced in the groups 3 and 4, while the aggrecan and tissue inhibitor of metalloproteinase-3 (TIMP-3) mRNA expression levels were upregulated by multidirectional articular motion (group 4). Ball oscillation (groups 3 and 4) also increased the release of PRG4, COMP, and hyaluronan (HA) into the culture media. This indicates that the applied stimuli can contribute to the maintenance of the chondrocytic phenotype of the cells. The mechanical effects causing cell stimulation by applied surface motion might be related to fluid film buildup and/or frictional shear at the scaffold-ball interface. It is suggested that the oscillating ball drags the fluid into the joint space, thereby causing biophysical effects similar to those of fluid flow. PMID:17518631

  18. 3-D ground motion modeling for M7 dynamic rupture earthquake scenarios on the Wasatch fault, Utah

    NASA Astrophysics Data System (ADS)

    Roten, D.; Olsen, K. B.; Cruz Atienza, V. M.; Pechmann, J. C.; Magistrale, H. W.

    2009-12-01

    The Salt Lake City segment of the Wasatch fault (WFSLC), located on the eastern edge of the Salt Lake Basin (SLB), is capable of producing M7 earthquakes and represents a serious seismic hazard to Salt Lake City, Utah. We simulate a series of rupture scenarios on the WFSLC to quantify the ground motion expected from such M7 events and to assess the importance of amplification effects from basin focusing and source directivity. We use the newly revised Wasatch Front community velocity model for our simulations, which is tested by simulating records of three local Mw 3.3-3.7 earthquakes in the frequency band 0.5 to 1.0 Hz. The M7 earthquake scenarios make use of a detailed 3-D model geometry of the WFSLC that we developed based on geological observations. To obtain a suite of realistic source representations for M7 WFSLC simulations we perform spontaneous-rupture simulations on a planar 43 km by 23 km fault with the staggered-grid split-node finite-difference (FD) method. We estimate the initial distribution of shear stress using models that assume depth-dependent normal stress for a dipping, normal fault as well as simpler models which use constant (depth-independent) normal stress. The slip rate histories from the spontaneous rupture scenarios are projected onto the irregular dipping geometry of the WFSLC and used to simulate 0-1 Hz wave propagation in the SLB area using a 4th-order, staggered-grid visco-elastic FD method. We find that peak ground velocities tend to be larger on the low-velocity sediments on the hanging wall side of the fault than on outcropping rock on the footwall side, confirming results of previous studies on normal faulting earthquakes. The simulated ground motions reveal strong along-strike directivity effects for ruptures nucleating towards the ends of the WFSLC. The 0-1 Hz FD simulations are combined with local scattering operators to obtain broadband (0-10 Hz) synthetics and maps of average peak ground motions. Finally we use broadband

  19. How Plates Pull Transforms Apart: 3-D Numerical Models of Oceanic Transform Fault Response to Changes in Plate Motion Direction

    NASA Astrophysics Data System (ADS)

    Morrow, T. A.; Mittelstaedt, E. L.; Olive, J. A. L.

    2015-12-01

    Observations along oceanic fracture zones suggest that some mid-ocean ridge transform faults (TFs) previously split into multiple strike-slip segments separated by short (<~50 km) intra-transform spreading centers and then reunited to a single TF trace. This history of segmentation appears to correspond with changes in plate motion direction. Despite the clear evidence of TF segmentation, the processes governing its development and evolution are not well characterized. Here we use a 3-D, finite-difference / marker-in-cell technique to model the evolution of localized strain at a TF subjected to a sudden change in plate motion direction. We simulate the oceanic lithosphere and underlying asthenosphere at a ridge-transform-ridge setting using a visco-elastic-plastic rheology with a history-dependent plastic weakening law and a temperature- and stress-dependent mantle viscosity. To simulate the development of topography, a low density, low viscosity 'sticky air' layer is present above the oceanic lithosphere. The initial thermal gradient follows a half-space cooling solution with an offset across the TF. We impose an enhanced thermal diffusivity in the uppermost 6 km of lithosphere to simulate the effects of hydrothermal circulation. An initial weak seed in the lithosphere helps localize shear deformation between the two offset ridge axes to form a TF. For each model case, the simulation is run initially with TF-parallel plate motion until the thermal structure reaches a steady state. The direction of plate motion is then rotated either instantaneously or over a specified time period, placing the TF in a state of trans-tension. Model runs continue until the system reaches a new steady state. Parameters varied here include: initial TF length, spreading rate, and the rotation rate and magnitude of spreading obliquity. We compare our model predictions to structural observations at existing TFs and records of TF segmentation preserved in oceanic fracture zones.

  20. 3-D or median map? Earthquake scenario ground-motion maps from physics-based models versus maps from ground-motion prediction equations

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2015-12-01

    There are two common ways to create a ground-motion map for a hypothetical earthquake: using ground motion prediction equations (by far the more common of the two) and using 3-D physics-based modeling. The former is very familiar to engineers, the latter much less so, and the difference can present a problem because engineers tend to trust the familiar and distrust novelty. Maps for essentially the same hypothetical earthquake using the two different methods can look very different, while appearing to present the same information. Using one or the other can lead an engineer or disaster planner to very different estimates of damage and risk. The reasons have to do with depiction of variability, spatial correlation of shaking, the skewed distribution of real-world shaking, and the upward-curving relationship between shaking and damage. The scientists who develop the two kinds of map tend to specialize in one or the other and seem to defend their turf, which can aggravate the problem of clearly communicating with engineers. The USGS Science Application for Risk Reduction's (SAFRR) HayWired scenario has addressed the challenge of explaining to engineers the differences between the two maps, and why, in a disaster planning scenario, one might want to use the less-familiar 3-D map.

  1. Bi-planar 2D-to-3D registration in Fourier domain for stereoscopic x-ray motion tracking

    NASA Astrophysics Data System (ADS)

    Zosso, Dominique; Le Callennec, Benoît; Bach Cuadra, Meritxell; Aminian, Kamiar; Jolles, Brigitte M.; Thiran, Jean-Philippe

    2008-03-01

    In this paper we present a new method to track bone movements in stereoscopic X-ray image series of the knee joint. The method is based on two different X-ray image sets: a rotational series of acquisitions of the still subject knee that allows the tomographic reconstruction of the three-dimensional volume (model), and a stereoscopic image series of orthogonal projections as the subject performs movements. Tracking the movements of bones throughout the stereoscopic image series means to determine, for each frame, the best pose of every moving element (bone) previously identified in the 3D reconstructed model. The quality of a pose is reflected in the similarity between its theoretical projections and the actual radiographs. We use direct Fourier reconstruction to approximate the three-dimensional volume of the knee joint. Then, to avoid the expensive computation of digitally rendered radiographs (DRR) for pose recovery, we develop a corollary to the 3-dimensional central-slice theorem and reformulate the tracking problem in the Fourier domain. Under the hypothesis of parallel X-ray beams, the heavy 2D-to-3D registration of projections in the signal domain is replaced by efficient slice-to-volume registration in the Fourier domain. Focusing on rotational movements, the translation-relevant phase information can be discarded and we only consider scalar Fourier amplitudes. The core of our motion tracking algorithm can be implemented as a classical frame-wise slice-to-volume registration task. Results on both synthetic and real images confirm the validity of our approach.
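    A rough sketch of the amplitude-only, Fourier-domain pose merit described above: under the parallel-beam assumption, the 2-D Fourier amplitude of a projection is compared with the matching central slice of the volume's 3-D Fourier amplitude. The interpolation, centering and rotation parameterization below are simplifications and do not reproduce the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.spatial.transform import Rotation

def central_slice_amplitude(volume_fft_amp, rot, size):
    """Sample |FFT| of the volume on a central plane oriented by the rotation `rot`."""
    c = np.array(volume_fft_amp.shape) / 2.0
    u, v = np.meshgrid(np.arange(size) - size / 2.0,
                       np.arange(size) - size / 2.0, indexing="ij")
    plane = np.stack([u, v, np.zeros_like(u)], axis=-1) @ rot.as_matrix().T + c
    return map_coordinates(volume_fft_amp, plane.reshape(-1, 3).T, order=1).reshape(size, size)

def pose_merit(projection, volume, rot):
    """Amplitude-only similarity between a projection and the matching central slice."""
    proj_amp = np.abs(np.fft.fftshift(np.fft.fft2(projection)))
    vol_amp = np.abs(np.fft.fftshift(np.fft.fftn(volume)))
    slice_amp = central_slice_amplitude(vol_amp, rot, projection.shape[0])
    a = (proj_amp - proj_amp.mean()) / (proj_amp.std() + 1e-12)
    b = (slice_amp - slice_amp.mean()) / (slice_amp.std() + 1e-12)
    return float((a * b).mean())

# Example with toy data and a hypothetical candidate rotation about the z-axis:
# volume = np.random.rand(64, 64, 64); projection = volume.sum(axis=2)
# score = pose_merit(projection, volume, Rotation.from_euler("z", 5.0, degrees=True))
```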

  2. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose

    NASA Astrophysics Data System (ADS)

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-01

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that using our hybrid approach, a root-mean-square tracking error smaller than 1.2 mm or 1.5 mm can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required with the
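    Two ingredients of the approach described above can be sketched generically: picking the point on the MV projection line closest to the first-principal-component line, and a windowed auto-regressive prediction of one motion component a short time ahead. The geometry conventions, AR order and prediction horizon are assumptions, not the authors' parameters.

```python
import numpy as np

def most_likely_3d_position(p0, direction, source, pixel_dir):
    """Point on the MV projection line (source + t*pixel_dir) closest to the
    principal-component line through p0 with direction `direction`."""
    d1 = pixel_dir / np.linalg.norm(pixel_dir)
    d2 = direction / np.linalg.norm(direction)
    w0 = source - p0
    b = d1 @ d2
    d, e = d1 @ w0, d2 @ w0
    t = (b * e - d) / (1.0 - b * b + 1e-12)   # closest-approach parameter along the projection line
    return source + t * d1

def ar_predict(history, order=4, horizon_steps=1):
    """Windowed auto-regressive prediction of a single motion axis a few samples ahead."""
    x = np.asarray(history, float)
    rows = np.array([x[i:i + order] for i in range(len(x) - order)])
    coeffs, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    window = list(x[-order:])
    for _ in range(horizon_steps):
        window.append(float(np.dot(coeffs, window[-order:])))
    return window[-1]
```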

  3. Does fluid infiltration affect the motion of sediment grains? - A 3-D numerical modelling approach using SPH

    NASA Astrophysics Data System (ADS)

    Bartzke, Gerhard; Rogers, Benedict D.; Fourtakas, Georgios; Mokos, Athanasios; Huhn, Katrin

    2016-04-01

    The processes that cause the creation of a variety of sediment morphological features, e.g. laminated beds, ripples, or dunes, are based on the initial motion of individual sediment grains. However, with experimental techniques it is difficult to measure the flow characteristics, i.e., the velocity of the pore water flow in sediments, at a sufficient resolution and in a non-intrusive way. As a result, the role of fluid infiltration at the surface and in the interior affecting the initiation of motion of a sediment bed is not yet fully understood. Consequently, there is a strong need for numerical models, since these are capable of quantifying fluid driven sediment transport processes of complex sediment beds composed of irregular shapes. The numerical method Smoothed Particle Hydrodynamics (SPH) satisfies this need. As a meshless and Lagrangian technique, SPH is ideally suited to simulating flows in sediment beds composed of various grain shapes, but also flow around single grains at a high temporal and spatial resolution. The solver chosen is DualSPHysics (www.dual.sphysics.org) since this is validated for a range of flow conditions. For the present investigation a 3-D numerical flume model was generated using SPH with a length of 4.0 cm, a width of 0.05 cm and a height of 0.2 cm where mobile sediment particles were deposited in a recess. An experimental setup was designed to test sediment configurations composed of irregular grain shapes (grain diameter, D50=1000 μm). Each bed consisted of 3500 mobile objects. After the bed generation process, the entire domain was flooded with 18 million fluid particles. To drive the flow, an oscillating motion perpendicular to the bed was applied to the fluid, reaching a peak value of 0.3 cm/s, simulating 4 seconds of real time. The model results showed that flow speeds decreased logarithmically from the top of the domain towards the surface of the beds, indicating a fully developed boundary layer. Analysis of the fluid

  4. Shoulder 3D range of motion and humerus rotation in two volleyball spike techniques: injury prevention and performance.

    PubMed

    Seminati, Elena; Marzari, Alessandra; Vacondio, Oreste; Minetti, Alberto E

    2015-06-01

    Repetitive stresses and movements on the shoulder in the volleyball spike expose this joint to overuse injuries, potentially leading to career-threatening injuries. Assuming that specific spike techniques play an important role in injury risk, we compared the kinematics of the traditional (TT) and the alternative (AT) techniques in 21 elite athletes, evaluating their safety with respect to performance. The glenohumeral joint was set as the centre of an imaginary sphere, intersected by the distal end of the humerus at different angles. Shoulder range of motion and angular velocities were calculated and compared to the joint limits. Ball speed and jump height were also assessed. Results indicated that the trajectory of the humerus in AT differed from that in TT, with maximal shoulder flexion reduced by 10 degrees and horizontal abduction 15 degrees higher. No difference was found for external rotation angles, while axial rotation velocities were significantly higher in AT, with a 5% higher ball speed. Results suggest AT as a potential preventive solution to chronic shoulder pathologies, reducing shoulder flexion during spiking. The proposed method allows visualisation of risks associated with different overhead manoeuvres, by depicting humerus angles and velocities with respect to joint limits in the same 3D space. PMID:26151344

  5. SU-E-J-80: Interplay Effect Between VMAT Intensity Modulation and Tumor Motion in Hypofractioned Lung Treatment, Investigated with 3D Pressage Dosimeter

    SciTech Connect

    Touch, M; Wu, Q; Oldham, M

    2014-06-01

    Purpose: To demonstrate an embedded tissue-equivalent Presage dosimeter for measuring 3D doses in moving tumors and to study the interplay effect between tumor motion and intensity modulation in hypofractionated Volumetric Modulated Arc Therapy (VMAT) lung treatment. Methods: Motion experiments were performed using cylindrical Presage dosimeters (5 cm diameter by 7 cm length) mounted inside the lung insert of a CIRS thorax phantom. Two different VMAT treatment plans were created and delivered in three different scenarios with the same prescribed dose of 18 Gy. Plan 1, containing a 2-cm spherical CTV with an additional 2-mm setup margin, was delivered on a stationary phantom. Plan 2 used the same CTV except expanded by 1 cm in the sup-inf direction to generate the ITV and PTV, respectively. The dosimeters were irradiated in static and variable motion scenarios on a TrueBeam system. After irradiation, high-resolution 3D dosimetry was performed using the Duke Large Field-of-view Optical-CT Scanner and compared to the calculated dose from Eclipse. Results: In the control case (no motion), good agreement was observed between the planned and delivered dose distributions, as indicated by a 100% 3D gamma (3% of maximum planned dose and 3 mm DTA) passing rate in the CTV. In the motion cases the gamma passing rate was 99% in the CTV. DVH comparisons also showed good agreement between the planned and delivered dose in the CTV for both the control and motion cases. However, differences of 15% and 5% in dose to the PTV were observed in the motion and control cases, respectively. Conclusion: Given the very high dose per fraction of a hypofractionated treatment, a significant effect was observed only when motion was introduced to the target. This can result from the interplay between the motion of the moving target and the modulation of the MLC. 3D optical dosimetry can be of great advantage in hypofractionated treatment dose validation studies.
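    A simplified sketch of the 3%/3 mm gamma passing-rate evaluation mentioned above, for two co-registered 3-D dose grids. It uses a brute-force neighborhood search with wrap-around shifts and a global dose normalization, so it is only a rough stand-in for a clinical gamma analysis tool.

```python
import numpy as np

def gamma_pass_rate(measured, planned, voxel_mm=1.0, dose_tol=0.03, dta_mm=3.0, search_mm=3.0):
    """Simplified global 3%/3mm gamma index for two co-registered 3-D dose arrays."""
    r = int(round(search_mm / voxel_mm))
    dmax = planned.max()
    gamma_sq = np.full(measured.shape, np.inf)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            for k in range(-r, r + 1):
                shifted = np.roll(planned, (i, j, k), axis=(0, 1, 2))  # wrap-around approximation
                dist_sq = (i * i + j * j + k * k) * voxel_mm ** 2
                dose_sq = ((measured - shifted) / (dose_tol * dmax)) ** 2
                gamma_sq = np.minimum(gamma_sq, dose_sq + dist_sq / dta_mm ** 2)
    return 100.0 * (np.sqrt(gamma_sq) <= 1.0).mean()
```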

  6. Calculating the Probability of Strong Ground Motions Using 3D Seismic Waveform Modeling - SCEC CyberShake

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Callaghan, S.; Graves, R.; Mehta, G.; Zhao, L.; Deelman, E.; Jordan, T. H.; Kesselman, C.; Okaya, D.; Cui, Y.; Field, E.; Gupta, V.; Vahi, K.; Maechling, P. J.

    2006-12-01

    Researchers from the SCEC Community Modeling Environment (SCEC/CME) project are utilizing the CyberShake computational platform and a distributed high performance computing environment that includes USC High Performance Computer Center and the NSF TeraGrid facilities to calculate physics-based probabilistic seismic hazard curves for several sites in the Southern California area. Traditionally, probabilistic seismic hazard analysis (PSHA) is conducted using intensity measure relationships based on empirical attenuation relationships. However, a more physics-based approach using waveform modeling could lead to significant improvements in seismic hazard analysis. Members of the SCEC/CME Project have integrated leading-edge PSHA software tools, SCEC-developed geophysical models, validated anelastic wave modeling software, and state-of-the-art computational technologies on the TeraGrid to calculate probabilistic seismic hazard curves using 3D waveform-based modeling. The CyberShake calculations for a single probabilistic seismic hazard curve require tens of thousands of CPU hours and multiple terabytes of disk storage. The CyberShake workflows are run on high performance computing systems including multiple TeraGrid sites (currently SDSC and NCSA), and the USC Center for High Performance Computing and Communications. To manage the extensive job scheduling and data requirements, CyberShake utilizes a grid-based scientific workflow system based on the Virtual Data System (VDS), the Pegasus meta-scheduler system, and the Globus toolkit. Probabilistic seismic hazard curves for spectral acceleration at 3.0 seconds have been produced for eleven sites in the Southern California region, including rock and basin sites. At low ground motion levels, there is little difference between the CyberShake and attenuation relationship curves. At higher ground motion (lower probability) levels, the curves are similar for some sites (downtown LA, I-5/SR-14 interchange) but different for

  7. Tissue reconstruction in 3D-spheroids from rodent retina in a motion-free, bioreactor-based microstructure.

    PubMed

    Rieke, Matthias; Gottwald, Eric; Weibezahn, Karl-Friedrich; Layer, Paul Gottlob

    2008-12-01

    While conventional rotation culture-based retinal spheroids are most useful to study basic processes of retinogenesis and tissue regeneration, they are less appropriate for an easy and inexpensive mass production of histotypic 3-dimensional tissue spheroids, which will be of utmost importance for future bioengineering, e.g. for replacement of animal experimentation. Here we compared conventionally reaggregated spheroids derived from dissociated retinal cells from neonatal gerbils (Meriones unguiculatus) with spheroids cultured on a novel microscaffold cell chip (called cf-chip) in a motion-free bioreactor. Reaggregation and developmental processes leading to tissue formation, e.g. proliferation, apoptosis and differentiation, were observed during the first 10 days in vitro (div). Remarkably, in each cf-chip micro-chamber, only one spheroid developed. In both culture systems, sphere sizes and proliferation rates were almost identical. However, apoptosis was comparably high only up to 5 div, but then became negligible in the cf-chip, while it rose again in the conventional culture. In both systems, immunohistochemical characterisation revealed the presence of Müller glia cells, and of ganglion, amacrine, bipolar and horizontal cells in a highly comparable arrangement. In both systems, photoreceptors were detected only in spheroids from P3 retinae. Benefits of the chip-based 3D cell culture were a reliable sphere production at enhanced viability, the feasibility of single sphere observation during cultivation time, a high reproducibility and easy control of culture conditions. Further development of this approach should allow high-throughput systems not only for retinal but also other types of histotypic spheroids, to become suitable for environmental monitoring and biomedical diagnostics. PMID:19023488

  8. Creation of 3D digital anthropomorphic phantoms which model actual patient non-rigid body motion as determined from MRI and position tracking studies of volunteers

    NASA Astrophysics Data System (ADS)

    Connolly, C. M.; Konik, A.; Dasari, P. K. R.; Segars, P.; Zheng, S.; Johnson, K. L.; Dey, J.; King, M. A.

    2011-03-01

    Patient motion can cause artifacts, which can lead to difficulty in interpretation. The purpose of this study is to create 3D digital anthropomorphic phantoms which model the location of the structures of the chest and upper abdomen of human volunteers undergoing a series of clinically relevant motions. The 3D anatomy is modeled using the XCAT phantom and based on MRI studies. The NURBS surfaces of the XCAT are interactively adapted to fit the MRI studies. A detailed XCAT phantom is first developed from an EKG triggered Navigator acquisition composed of sagittal slices with a 3 x 3 x 3 mm voxel dimension. Rigid body motion states are then acquired at breath-hold as sagittal slices partially covering the thorax, centered on the heart, with 9 mm gaps between them. For non-rigid body motion requiring greater sampling, modified Navigator sequences covering the entire thorax with 3 mm gaps between slices are obtained. The structures of the initial XCAT are then adapted to fit these different motion states. Simultaneous to MRI imaging the positions of multiple reflective markers on stretchy bands about the volunteer's chest and abdomen are optically tracked in 3D via stereo imaging. These phantoms with combined position tracking will be used to investigate both imaging-data-driven and motion-tracking strategies to estimate and correct for patient motion. Our initial application will be to cardiac perfusion SPECT imaging where the XCAT phantoms will be used to create patient activity and attenuation distributions for each volunteer with corresponding motion tracking data from the markers on the body-surface. Monte Carlo methods will then be used to simulate SPECT acquisitions, which will be used to evaluate various motion estimation and correction strategies.

  9. Respiratory motion compensation for simultaneous PET/MR based on a 3D-2D registration of strongly undersampled radial MR data: a simulation study

    NASA Astrophysics Data System (ADS)

    Rank, Christopher M.; Heußer, Thorsten; Flach, Barbara; Brehm, Marcus; Kachelrieß, Marc

    2015-03-01

    We propose a new method for PET/MR respiratory motion compensation, which is based on a 3D-2D registration of strongly undersampled MR data and a) runs in parallel with the PET acquisition, b) can be interlaced with clinical MR sequences, and c) requires less than one minute of the total MR acquisition time per bed position. In our simulation study, we applied a 3D encoded radial stack-of-stars sampling scheme with 160 radial spokes per slice and an acquisition time of 38 s. Gated 4D MR images were reconstructed using a 4D iterative reconstruction algorithm. Based on these images, motion vector fields were estimated using our newly-developed 3D-2D registration framework. A 4D PET volume of a patient with eight hot lesions in the lungs and upper abdomen was simulated and MoCo 4D PET images were reconstructed based on the motion vector fields derived from MR. For evaluation, average SUVmean values of the artificial lesions were determined for a 3D, a gated 4D, a MoCo 4D and a reference (with ten-fold measurement time) gated 4D reconstruction. Compared to the reference, 3D reconstructions yielded an underestimation of SUVmean values due to motion blurring. In contrast, gated 4D reconstructions showed the highest variation of SUVmean due to low statistics. MoCo 4D reconstructions were only slightly affected by these two sources of uncertainty resulting in a significant visual and quantitative improvement in terms of SUVmean values. Whereas temporal resolution was comparable to the gated 4D images, signal-to-noise ratio and contrast-to-noise ratio were close to the 3D reconstructions.
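
    The core of the motion-compensated (MoCo) reconstruction step described above is warping each respiratory gate into a common reference gate with the estimated motion vector fields and combining the results. A minimal sketch of that warp-and-average idea is shown below; it assumes the displacement fields are already given in voxel units and is not the authors' 4D reconstruction algorithm (names are illustrative).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def motion_compensated_average(gated_volumes, mvfs_to_reference):
    """Warp each respiratory gate into the reference gate and average.

    gated_volumes     : list of 3D arrays, one per respiratory gate
    mvfs_to_reference : list of displacement fields, each of shape (3,) + volume.shape,
                        giving the voxel displacement (dz, dy, dx) that maps the
                        reference grid into the corresponding gate.
    """
    ref_shape = gated_volumes[0].shape
    grid = np.indices(ref_shape).astype(float)       # reference voxel coordinates
    accum = np.zeros(ref_shape)
    for vol, mvf in zip(gated_volumes, mvfs_to_reference):
        warped_coords = grid + mvf                   # where each reference voxel maps to
        accum += map_coordinates(vol, warped_coords, order=1, mode='nearest')
    return accum / len(gated_volumes)
```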

  10. Real-time motion- and B0-correction for LASER-localized spiral-accelerated 3D-MRSI of the brain at 3T

    PubMed Central

    Bogner, Wolfgang; Hess, Aaron T; Gagoski, Borjan; Tisdall, M. Dylan; van der Kouwe, Andre J.W.; Trattnig, Siegfried; Rosen, Bruce; Andronesi, Ovidiu C

    2013-01-01

    The full potential of magnetic resonance spectroscopic imaging (MRSI) is often limited by localization artifacts, motion-related artifacts, scanner instabilities, and long measurement times. Localized adiabatic selective refocusing (LASER) provides accurate B1-insensitive spatial excitation even at high magnetic fields. Spiral encoding accelerates MRSI acquisition, and thus, enables 3D-coverage without compromising spatial resolution. Real-time position-and shim/frequency-tracking using MR navigators correct motion- and scanner instability-related artifacts. Each of these three advanced MRI techniques provides superior MRSI data compared to commonly used methods. In this work, we integrated in a single pulse sequence these three promising approaches. Real-time correction of motion, shim, and frequency-drifts using volumetric dual-contrast echo planar imaging-based navigators were implemented in an MRSI sequence that uses low-power gradient modulated short-echo time LASER localization and time efficient spiral readouts, in order to provide fast and robust 3D-MRSI in the human brain at 3T. The proposed sequence was demonstrated to be insensitive to motion- and scanner drift-related degradations of MRSI data in both phantoms and volunteers. Motion and scanner drift artifacts were eliminated and excellent spectral quality was recovered in the presence of strong movement. Our results confirm the expected benefits of combining a spiral 3D-LASER-MRSI sequence with real-time correction. The new sequence provides accurate, fast, and robust 3D metabolic imaging of the human brain at 3T. This will further facilitate the use of 3D-MRSI for neuroscience and clinical applications. PMID:24201013

  11. 3D SMoSIFT: three-dimensional sparse motion scale invariant feature transform for activity recognition from RGB-D videos

    NASA Astrophysics Data System (ADS)

    Wan, Jun; Ruan, Qiuqi; Li, Wei; An, Gaoyun; Zhao, Ruizhen

    2014-03-01

    Human activity recognition based on RGB-D data has received more attention in recent years. We propose a spatiotemporal feature named three-dimensional (3D) sparse motion scale-invariant feature transform (SIFT) from RGB-D data for activity recognition. First, we build pyramids as scale space for each RGB and depth frame, and then use the Shi-Tomasi corner detector and sparse optical flow to quickly detect and track robust keypoints around the motion pattern in the scale space. Subsequently, local patches around keypoints, which are extracted from RGB-D data, are used to build 3D gradient and motion spaces. Then SIFT-like descriptors are calculated on both 3D spaces, respectively. The proposed feature is invariant to scale, translation, and partial occlusions. More importantly, the running time of the proposed feature is fast so that it is well-suited for real-time applications. We have evaluated the proposed feature under a bag of words model on three public RGB-D datasets: the one-shot learning Chalearn Gesture Dataset, Cornell Activity Dataset-60, and the MSR Daily Activity 3D dataset. Experimental results show that the proposed feature outperforms other spatiotemporal features and is comparable to other state-of-the-art approaches, even though there is only one training sample for each class.
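
    The first stage of the pipeline described above (Shi-Tomasi corners plus pyramidal sparse optical flow on grayscale frames) can be sketched with standard OpenCV calls as below. This is only an illustration of that detection/tracking step under assumed parameter values; the 3D gradient/motion spaces and the SIFT-like descriptor construction are not shown.

```python
import cv2
import numpy as np

def track_motion_keypoints(prev_gray, next_gray, max_corners=200, min_motion_px=1.0):
    """Detect Shi-Tomasi corners in the previous frame and track them into the
    next frame with pyramidal Lucas-Kanade sparse optical flow, keeping only
    points that actually move (i.e. keypoints around the motion pattern)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, max_corners, 0.01, 5)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                                 winSize=(15, 15), maxLevel=3)
    ok = status.ravel() == 1
    p0, p1 = pts.reshape(-1, 2)[ok], nxt.reshape(-1, 2)[ok]
    moving = np.linalg.norm(p1 - p0, axis=1) > min_motion_px
    return p0[moving], p1[moving]
```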

  12. Effect of Task-Correlated Physiological Fluctuations and Motion in 2D and 3D Echo-Planar Imaging in a Higher Cognitive Level fMRI Paradigm

    PubMed Central

    Ladstein, Jarle; Evensmoen, Hallvard R.; Håberg, Asta K.; Kristoffersen, Anders; Goa, Pål E.

    2016-01-01

    Purpose: To compare 2D and 3D echo-planar imaging (EPI) in a higher cognitive level fMRI paradigm. In particular, to study the link between the presence of task-correlated physiological fluctuations and motion and the fMRI contrast estimates from either 2D EPI or 3D EPI datasets, with and without adding nuisance regressors to the model. A signal model in the presence of partly task-correlated fluctuations is derived, and predictions for contrast estimates with and without nuisance regressors are made. Materials and Methods: Thirty-one healthy volunteers were scanned using 2D EPI and 3D EPI during a virtual environmental learning paradigm. In a subgroup of 7 subjects, heart rate and respiration were logged, and the correlation with the paradigm was evaluated. FMRI analysis was performed using models with and without nuisance regressors. Differences in the mean contrast estimates were investigated by analysis-of-variance using Subject, Sequence, Day, and Run as factors. The distributions of group level contrast estimates were compared. Results: Partially task-correlated fluctuations in respiration, heart rate and motion were observed. Statistically significant differences were found in the mean contrast estimates between 2D EPI and 3D EPI when using a model without nuisance regressors. The inclusion of nuisance regressors for cardiorespiratory effects and motion reduced the difference to a statistically non-significant level. Furthermore, the contrast estimate values shifted more when including nuisance regressors for 3D EPI compared to 2D EPI. Conclusion: The results are consistent with 3D EPI having a higher sensitivity to fluctuations compared to 2D EPI. In the presence of partially task-correlated physiological fluctuations or motion, proper correction is necessary to obtain unbiased contrast estimates when using 3D EPI. As such task-correlated physiological fluctuations or motion are difficult to avoid in paradigms exploring higher cognitive functions, 2

  13. Computer numerical control (CNC) lithography: light-motion synchronized UV-LED lithography for 3D microfabrication

    NASA Astrophysics Data System (ADS)

    Kim, Jungkwun; Yoon, Yong-Kyu; Allen, Mark G.

    2016-03-01

    This paper presents a computer-numerical-controlled ultraviolet light-emitting diode (CNC UV-LED) lithography scheme for three-dimensional (3D) microfabrication. The CNC lithography scheme utilizes sequential multi-angled UV light exposures along with a synchronized switchable UV light source to create arbitrary 3D light traces, which are transferred into the photosensitive resist. The system comprises a switchable, movable UV-LED array as a light source, a motorized tilt-rotational sample holder, and a computer-control unit. System operation is such that the tilt-rotational sample holder moves in a pre-programmed routine, and the UV-LED is illuminated only at desired positions of the sample holder during the desired time period, enabling the formation of complex 3D microstructures. This facilitates easy fabrication of complex 3D structures, which otherwise would have required multiple manual exposure steps as in the previous multidirectional 3D UV lithography approach. Since it is batch processed, processing time is far less than that of the 3D printing approach at the expense of some reduction in the degree of achievable 3D structure complexity. In order to produce uniform light intensity from the arrayed LED light source, the UV-LED array stage has been kept rotating during exposure. UV-LED 3D fabrication capability was demonstrated through a plurality of complex structures such as V-shaped micropillars, micropanels, a micro-‘hi’ structure, a micro-‘cat’s claw,’ a micro-‘horn,’ a micro-‘calla lily,’ a micro-‘cowboy’s hat,’ and a micro-‘table napkin’ array.
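
    The light-motion synchronization itself reduces to stepping the tilt-rotation stage through a programmed sequence of poses and switching the UV-LED on only at the poses, and for the durations, listed in the program. The sketch below illustrates that control loop with entirely hypothetical stage and LED driver objects; it is not the actual control software or hardware API of the system described.

```python
import time

# Hypothetical exposure routine: each step gives a stage pose and how long the
# UV-LED should stay on at that pose. The 'stage' and 'led' objects are
# placeholders standing in for whatever motion-control/IO drivers are used.
EXPOSURE_PROGRAM = [
    # (tilt_deg, rotation_deg, led_on_seconds)
    (20.0,   0.0, 2.0),
    (20.0,  90.0, 2.0),
    (20.0, 180.0, 2.0),
    (20.0, 270.0, 2.0),
    (-20.0, 45.0, 0.0),   # repositioning move with the LED kept off
]

def run_exposure(stage, led, program=EXPOSURE_PROGRAM):
    """Step through the programmed poses, illuminating only where requested."""
    for tilt, rot, on_time in program:
        stage.move_to(tilt=tilt, rotation=rot)   # placeholder: blocks until in position
        if on_time > 0:
            led.on()
            time.sleep(on_time)                  # exposure at this pose only
            led.off()
```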

  14. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model

    PubMed Central

    Song, Jin-Myoung; Cho, Jin-Hyoung

    2016-01-01

    Purpose The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scan. Materials and Methods Fifteen dry skulls were incorporated into a motion controller which simulated four types of head motion during CBCT scan: 2 horizontal rotations (to the right/to the left) and 2 vertical rotations (upward/downward). Each movement was triggered to occur at the start of the scan for 1 second by remote control. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Results Rendered surface models with head motion were similar to the control model in appearance; however, the landmark identification errors showed larger values in models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). A statistically significant difference in the errors between the right- and left-side landmarks was present in the left-side rotation, which was opposite in direction to the scanner rotation (P < .05). Conclusions Patient movement during CBCT scan might cause landmark identification errors on the 3D surface model in relation to the direction of the scanner rotation. Clinicians should take this into consideration to prevent patient movement during CBCT scan, particularly horizontal movement. PMID:27065238

  15. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm.

    PubMed

    Molaei, Mehdi; Sheng, Jian

    2014-12-29

    Better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacteria motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3-D E. coli bacteria locations in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177
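
    The abstract does not spell out the de-noising algorithm, but the general idea of correlation-based background suppression in a hologram time series can be illustrated as below: for each frame, a background is estimated as a correlation-weighted combination of the other frames (which share the static background but not the moving cells) and subtracted. This is only a loose stand-in for the published method, with illustrative names and weighting.

```python
import numpy as np

def suppress_static_background(frames):
    """Correlation-weighted background removal for a stack of hologram frames.

    frames : array of shape (n_frames, ny, nx)
    For each frame the background is estimated as a correlation-weighted average
    of the other frames: frames sharing more static content (higher Pearson
    correlation) contribute more, while moving objects average out.
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    flat = frames.reshape(n, -1) - frames.reshape(n, -1).mean(axis=1, keepdims=True)
    norms = np.linalg.norm(flat, axis=1)
    corr = (flat @ flat.T) / np.outer(norms, norms)   # n x n Pearson correlations

    cleaned = np.empty_like(frames)
    for i in range(n):
        w = np.clip(corr[i], 0.0, None)
        w[i] = 0.0                                    # exclude the frame itself
        if w.sum() == 0.0:                            # fall back to a plain mean
            w = np.ones(n) / n
        background = np.tensordot(w / w.sum(), frames, axes=1)
        cleaned[i] = frames[i] - background
    return cleaned
```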

  16. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm

    PubMed Central

    Molaei, Mehdi; Sheng, Jian

    2014-01-01

    Abstract: Better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacteria motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3-D E. coli bacteria locations in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177

  17. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue-motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt-sensor to measure the probe orientation. The real-time 3D visualization was realized by developing an extended version of the PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues has been continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, Willis ring and cerebellar arteries, in whose blood flow pediatricians have great interest because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue-motion is due to local blood flow, it can be concluded that the system developed in this work is very useful to assist freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  18. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs.

    PubMed

    Burns, Jhr; Delparte, D; Gates, R D; Takabayashi, M

    2015-01-01

    The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190
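
    One of the 3D complexity metrics typically derived from such a reconstruction is surface rugosity, the ratio of the 3D surface area of the digital elevation model to its planar footprint. A minimal sketch of that computation on a gridded DEM is given below; it assumes a regular grid with square cells and is not the specific geospatial-software workflow used in the study.

```python
import numpy as np

def surface_rugosity(dem, cell_size):
    """Rugosity of a gridded DEM: 3D surface area divided by planar area.

    dem       : 2D array of elevations (m)
    cell_size : grid spacing (m)
    Each grid cell is split into two triangles whose 3D areas are summed.
    """
    z = np.asarray(dem, dtype=float)
    ny, nx = z.shape
    z00, z10 = z[:-1, :-1], z[1:, :-1]   # corner elevations of every cell
    z01, z11 = z[:-1, 1:],  z[1:, 1:]

    def tri_area(pa, pb, pc):
        return 0.5 * np.linalg.norm(np.cross(pb - pa, pc - pa), axis=-1)

    def corners(zc, di, dj):
        out = np.zeros(zc.shape + (3,))
        out[..., 0], out[..., 1], out[..., 2] = dj * cell_size, di * cell_size, zc
        return out

    p00, p10 = corners(z00, 0, 0), corners(z10, 1, 0)
    p01, p11 = corners(z01, 0, 1), corners(z11, 1, 1)
    area3d = (tri_area(p00, p10, p11) + tri_area(p00, p11, p01)).sum()
    planar = (ny - 1) * (nx - 1) * cell_size ** 2
    return area3d / planar
```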

  19. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs

    PubMed Central

    Delparte, D; Gates, RD; Takabayashi, M

    2015-01-01

    The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190

  20. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA. It is a field dataset recording the Japan 2011 tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT
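
    For reference, the 2D depth-averaged nonlinear shallow water equations that codes such as NAMI DANCE solve can be written, in one common frictionless form (actual implementations typically add bottom friction and other source terms), as:

```latex
\begin{align}
\frac{\partial \eta}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} &= 0, \\
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + v\,\frac{\partial u}{\partial y} + g\,\frac{\partial \eta}{\partial x} &= 0, \\
\frac{\partial v}{\partial t} + u\,\frac{\partial v}{\partial x} + v\,\frac{\partial v}{\partial y} + g\,\frac{\partial \eta}{\partial y} &= 0,
\end{align}
```

    where eta is the free-surface elevation, h = d + eta the total water depth over the still-water depth d, (u, v) the depth-averaged horizontal velocities, and g the gravitational acceleration.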

  1. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    NASA Astrophysics Data System (ADS)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H.; Meeks, Sanford L.; Kupelian, Patrick A.

    2010-09-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated in the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  2. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399
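
    A DRR is, at its core, a set of line integrals of attenuation through the CT volume. The sketch below computes a much simpler parallel-beam DRR by summing along one volume axis; the GPU renderers described above use perspective geometry, wobbled splatting and specialized raycasting instead, so this is only meant to clarify the quantity being rendered (names and the HU-to-attenuation conversion are illustrative).

```python
import numpy as np

def parallel_drr(ct_hu, spacing_mm=1.0, axis=1, mu_water=0.02):
    """Minimal parallel-beam DRR: line integrals of attenuation through a CT volume.

    ct_hu      : 3D CT volume in Hounsfield units
    spacing_mm : voxel size along the projection axis (mm)
    axis       : volume axis along which the rays travel
    mu_water   : linear attenuation coefficient of water in 1/mm (illustrative)
    """
    mu = np.clip(mu_water * (1.0 + ct_hu / 1000.0), 0.0, None)  # HU -> attenuation
    line_integral = mu.sum(axis=axis) * spacing_mm              # accumulate along rays
    return 1.0 - np.exp(-line_integral)                         # map to detector intensity
```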

  3. Impact of assimilation of INSAT-3D retrieved atmospheric motion vectors on short-range forecast of summer monsoon 2014 over the South Asian region

    NASA Astrophysics Data System (ADS)

    Kumar, Prashant; Deb, Sanjib K.; Kishtawal, C. M.; Pal, P. K.

    2016-01-01

    The Weather Research and Forecasting (WRF) model and its three-dimensional variational data assimilation system are used in this study to assimilate atmospheric motion vectors (AMVs) derived from INSAT-3D, a recently launched Indian geostationary meteorological satellite, over the South Asian region during the peak Indian summer monsoon month (i.e., July 2014). A total of four experiments were performed daily, with and without assimilation of INSAT-3D-derived AMVs and the other AMVs available through the Global Telecommunication System (GTS), for the entire month of July 2014. Before assimilating these newly derived INSAT-3D AMVs in the numerical model, a preliminary evaluation of these AMVs is performed against National Centers for Environmental Prediction (NCEP) final model analyses. The preliminary validation results show that the root-mean-square vector difference (RMSVD) for INSAT-3D AMVs is ~3.95, 6.66, and 5.65 m s-1 at low, mid, and high levels, respectively, and slightly larger RMSVDs are noticed for GTS AMVs (~4.0, 8.01, and 6.43 m s-1 at low, mid, and high levels, respectively). The assimilation of AMVs improved the WRF model wind speed, temperature, and moisture analyses as well as the subsequent model forecasts over the Indian Ocean, Arabian Sea, Australia, and South Africa. Slightly larger improvements are noticed in the experiment where only the INSAT-3D AMVs are assimilated compared to the experiment where only GTS AMVs are assimilated. The results also show improvement in rainfall predictions over the Indian region after AMV assimilation. Overall, the assimilation of INSAT-3D AMVs improved the WRF model short-range predictions over the South Asian region as compared to the control experiments.
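
    The root-mean-square vector difference (RMSVD) used above to validate the AMVs against the NCEP analyses is a standard statistic; a minimal sketch of its computation over collocated wind components is given below (illustrative function name).

```python
import numpy as np

def rmsvd(u_amv, v_amv, u_ref, v_ref):
    """Root-mean-square vector difference (m/s) between AMV winds and
    collocated reference (e.g., analysis) winds."""
    du = np.asarray(u_amv, dtype=float) - np.asarray(u_ref, dtype=float)
    dv = np.asarray(v_amv, dtype=float) - np.asarray(v_ref, dtype=float)
    return float(np.sqrt(np.mean(du ** 2 + dv ** 2)))
```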

  4. Automated 3D architecture reconstruction from photogrammetric structure-and-motion: A case study of the One Pillar Pagoda, Hanoi, Vietnam

    NASA Astrophysics Data System (ADS)

    To, T.; Nguyen, D.; Tran, G.

    2015-04-01

    Vietnam's heritage system has declined because of poor conservation conditions. Sustainable development requires firm control, spatial planning, and reasonable investment. Moreover, in the field of Cultural Heritage, automated photogrammetric systems based on Structure from Motion (SfM) techniques are widely used. With their potential for high resolution, low cost, large field of view, ease of use, rapidity and completeness, the derivation of 3D metric information from structure-and-motion images is receiving great attention. In addition, heritage objects in the form of 3D models are recorded not only for documentation purposes, but also for historical interpretation, restoration, and cultural and educational purposes. This study presents the archaeological documentation of the One Pillar Pagoda in the capital Hanoi, Vietnam. The data were acquired with a Canon EOS 550D digital camera (CMOS APS-C sensor, 22.3 x 14.9 mm). Camera calibration and orientation were carried out with VisualSFM, CMPMVS (Multi-View Reconstruction) and SURE (Photogrammetric Surface Reconstruction from Imagery) software. The final result is a scaled 3D model of the One Pillar Pagoda, displayed in different views in MeshLab software.

  5. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    NASA Astrophysics Data System (ADS)

    Hurwitz, Martina; Williams, Christopher L.; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G.; Mak, Raymond H.; Lewis, John H.

    2015-01-01

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.
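
    The paper's model maps an external surrogate trajectory to full volumetric images; as a much-reduced illustration of the correspondence idea, the sketch below fits a linear model between the surrogate amplitude and the 3D tumor centroid observed at the 4D CT phases and then predicts the centroid from new surrogate samples. This is a simplification for illustration only, not the published image-generation model (names are ours).

```python
import numpy as np

def fit_correspondence_model(surrogate, tumor_xyz):
    """Least-squares linear model: tumor_position ~ A.T @ [s, 1].

    surrogate : (n,) external surrogate amplitude at each 4D CT phase
    tumor_xyz : (n, 3) tumor centroid at the same phases
    Returns a (2, 3) coefficient matrix.
    """
    s = np.asarray(surrogate, dtype=float)
    X = np.column_stack([s, np.ones_like(s)])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(tumor_xyz, dtype=float), rcond=None)
    return coeffs

def predict_position(coeffs, s):
    """Predict the 3D tumor centroid for a new surrogate sample s."""
    return np.array([s, 1.0]) @ coeffs
```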

  6. Validation of 3D motion tracking of pulmonary lesions using CT fluoroscopy images for robotically assisted lung biopsy

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Fichtinger, Gabor; Taylor, Russell H.; Cleary, Kevin R.

    2005-04-01

    As recently proposed in our previous work, the two-dimensional CT fluoroscopy image series can be used to track the three-dimensional motion of a pulmonary lesion. The assumption is that the lung tissue is locally rigid, so that the real-time CT fluoroscopy image can be combined with a preoperative CT volume to infer the position of the lesion when the lesion is not in the CT fluoroscopy imaging plane. In this paper, we validate the basic properties of our tracking algorithm using a synthetic four-dimensional lung dataset. The motion tracking result is compared to the ground truth of the four-dimensional dataset. The optimal parameter configurations of the algorithm are discussed. The robustness and accuracy of the tracking algorithm are presented. The error analysis shows that the local rigidity error is the principal component of the tracking error. The error increases as the lesion moves away from the image region being registered. Using the synthetic four-dimensional lung data, the average tracking error over a complete respiratory cycle is 0.8 mm for target lesions inside the lung. As a result, the motion tracking algorithm can potentially alleviate the effect of respiratory motion in CT fluoroscopy-guided lung biopsy.

  7. Simultaneous 3D imaging of sound-induced motions of the tympanic membrane and middle ear ossicles.

    PubMed

    Chang, Ernest W; Cheng, Jeffrey T; Röösli, Christof; Kobler, James B; Rosowski, John J; Yun, Seok Hyun

    2013-10-01

    Efficient transfer of sound by the middle ear ossicles is essential for hearing. Various pathologies can impede the transmission of sound and thereby cause conductive hearing loss. Differential diagnosis of ossicular disorders can be challenging since the ossicles are normally hidden behind the tympanic membrane (TM). Here we describe the use of a technique termed optical coherence tomography (OCT) vibrography to view the sound-induced motion of the TM and ossicles simultaneously. With this method, we were able to capture three-dimensional motion of the intact TM and ossicles of the chinchilla ear with nanometer-scale sensitivity at sound frequencies from 0.5 to 5 kHz. The vibration patterns of the TM were complex and highly frequency dependent with mean amplitudes of 70-120 nm at 100 dB sound pressure level. The TM motion was only marginally sensitive to stapes fixation and incus-stapes joint interruption; however, when additional information derived from the simultaneous measurement of ossicular motion was added, it was possible to clearly distinguish these different simulated pathologies. The technique may be applicable to clinical diagnosis in Otology and to basic research in audition and acoustics. PMID:23811181

  8. Hybrid 3-D rocket trajectory program. Part 1: Formulation and analysis. Part 2: Computer programming and user's instruction. [computerized simulation using three dimensional motion analysis

    NASA Technical Reports Server (NTRS)

    Huang, L. C. P.; Cook, R. A.

    1973-01-01

    Models utilizing various sub-sets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time consuming for the computer to evaluate. A computer program is developed that uses three separate subsections to predict trajectories. A launch rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section which computes motions in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard, three linear degrees of freedom model.

  9. Development of the dynamic motion simulator of 3D micro-gravity with a combined passive/active suspension system

    NASA Technical Reports Server (NTRS)

    Yoshida, Kazuya; Hirose, Shigeo; Ogawa, Tadashi

    1994-01-01

    The establishment of those in-orbit operations like 'Rendez-Vous/Docking' and 'Manipulator Berthing' with the assistance of robotics or autonomous control technology, is essential for the near future space programs. In order to study the control methods, develop the flight models, and verify how the system works, we need a tool or a testbed which enables us to simulate mechanically the micro-gravity environment. There have been many attempts to develop the micro-gravity testbeds, but once the simulation goes into the docking and berthing operation that involves mechanical contacts among multi bodies, the requirement becomes critical. A group at the Tokyo Institute of Technology has proposed a method that can simulate the 3D micro-gravity producing a smooth response to the impact phenomena with relatively simple apparatus. Recently the group carried out basic experiments successfully using a prototype hardware model of the testbed. This paper will present our idea of the 3D micro-gravity simulator and report the results of our initial experiments.

  10. The 3-D motion of the centre of gravity of the human body during level walking. II. Lower limb amputees.

    PubMed

    Tesio, L; Lanzi, D; Detrembleur, C

    1998-03-01

    OBJECTIVE: To analyse the motion of the centre of gravity (CG) of the body during gait in unilateral lower limb amputees with good kinematic patterns. DESIGN: Three transtibial (below-knee, BK) and four transfemoral (above-knee, AK) amputees were required to perform successive walks over a 2.4 m long force plate, at freely chosen cadence and speed. BACKGROUND: In previous studies it has been shown that in unilateral lower limb amputee gait, the motion of the CG can be more asymmetric than might be suspected from kinematic analysis. METHODS: The mechanical energy changes of the CG due to its motion in the vertical, forward and lateral direction were measured. Gait speed ranged 0.75-1.32 m s(-1) in the different subjects. This allowed calculation of (a) the positive work done by muscles to maintain the motion of the CG with respect to the ground ('external' work, W(ext)) and (b) the amount of the pendulum-like, energy-saving transfer between gravitational potential energy and kinetic energy of the CG during each step (percent recovery, R). Step length and vertical displacement of the CG were also measured. RESULTS: The recorded variables were kept within the normal limits, calculated in a previous work, when an average was made of the steps performed on the prosthetic (P) and on the normal (N) limb. Asymmetries were found, however, between the P and the N step. In BK amputees, the P step R was 5% greater and W(ext) was 21% lower than in the N step; in AK amputees, in the P step R was 54% greater and W(ext) was 66% lower than in the N step. Asymmetries were also found in the relative magnitude of the external work provided by each lower limb during the single stance as compared with the double stance: a marked deficit of work occurred at the P to N transition. PMID:11415775
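
    The external work W(ext) and percent recovery R referred to above follow the classic definitions based on the mechanical energy of the CG: W(ext) is the sum of the positive increments of the total CG energy, and R compares that with the positive work computed separately for the kinetic and gravitational/vertical terms. A minimal sketch of both quantities, assuming the energy time series are already available, is given below (illustrative names).

```python
import numpy as np

def positive_work(energy):
    """Sum of the positive increments of an energy time series (J)."""
    d = np.diff(np.asarray(energy, dtype=float))
    return d[d > 0].sum()

def external_work_and_recovery(e_kin_forward, e_kin_lateral, e_pot_plus_vert):
    """External work W_ext and percent recovery R of the CG.

    Inputs are time series (J) of the forward kinetic energy, lateral kinetic
    energy, and the sum of gravitational potential and vertical kinetic energy.
    """
    e_kin = np.asarray(e_kin_forward, dtype=float) + np.asarray(e_kin_lateral, dtype=float)
    e_vert = np.asarray(e_pot_plus_vert, dtype=float)
    w_kin = positive_work(e_kin)          # work if only the kinetic term had to be supplied
    w_vert = positive_work(e_vert)        # work if only the vertical term had to be supplied
    w_ext = positive_work(e_kin + e_vert) # actual external work on the total CG energy
    recovery = 100.0 * (w_kin + w_vert - w_ext) / (w_kin + w_vert)
    return w_ext, recovery
```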

  11. Quantum Primitives

    NASA Astrophysics Data System (ADS)

    Spring, Joseph

    2009-04-01

    We explore possible characterisations of entanglement classes which may be interpreted as gates acting on globally distributed systems. The cyclic nature of a selection of entanglement gates, primitives, is explored commencing with gates for generating Bell, GHZ and W states.

  12. Evaluation of Structure from Motion Software to Create 3D Models of Late Nineteenth Century Great Lakes Shipwrecks Using Archived Diver-Acquired Video Surveys

    NASA Astrophysics Data System (ADS)

    Mertes, J.; Thomsen, T.; Gulley, J.

    2014-12-01

    Here we demonstrate the ability to use archived video surveys to create photorealistic 3D models of submerged archeological sites. We created 3D models of two nineteenth century Great Lakes shipwrecks using diver-acquired video surveys and Structure from Motion (SfM) software. Models were georeferenced using archived hand survey data. Comparison of hand survey measurements and digital measurements made using the models demonstrates that spatial analysis produces results with reasonable accuracy when wreck maps are available. Error associated with digital measurements displayed an inverse relationship to object size. Measurement error ranged from a maximum of 18% (on a 0.37 m object) to a minimum of 0.56% (on a 4.21 m object). Our results demonstrate that SfM can generate models of large maritime archaeological sites suitable for research, education and outreach purposes. Where site maps are available, these 3D models can be georeferenced to allow additional spatial analysis long after on-site data collection.

  13. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  14. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  15. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique, when using one projection image, is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could be accurately extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190+/-35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.

  16. 3D Motions of Iron in Six-Coordinate {FeNO}(7) Hemes by Nuclear Resonance Vibration Spectroscopy.

    PubMed

    Peng, Qian; Pavlik, Jeffrey W; Silvernail, Nathan J; Alp, E Ercan; Hu, Michael Y; Zhao, Jiyong; Sage, J Timothy; Scheidt, W Robert

    2016-04-25

    The vibrational spectrum of a six-coordinate nitrosyl iron porphyrinate, monoclinic [Fe(TpFPP)(1-MeIm)(NO)] (TpFPP=tetra-para-fluorophenylporphyrin; 1-MeIm=1-methylimidazole), has been studied by oriented single-crystal nuclear resonance vibrational spectroscopy (NRVS). The crystal was oriented to give spectra perpendicular to the porphyrin plane and two in-plane spectra perpendicular or parallel to the projection of the FeNO plane. These enable assignment of the FeNO bending and stretching modes. The measurements reveal that the two in-plane spectra have substantial differences that result from the strongly bonded axial NO ligand. The direction of the in-plane iron motion is found to be largely parallel and perpendicular to the projection of the bent FeNO on the porphyrin plane. The out-of-plane Fe-N-O stretching and bending modes are strongly mixed with each other, as well as with porphyrin ligand modes. The stretch is mixed with v50 as was also observed for dioxygen complexes. The frequency of the assigned stretching mode of eight Fe-X-O (X=N, C, and O) complexes is correlated with the Fe-XO bond lengths. The nature of highest frequency band at ≈560 cm(-1) has also been examined in two additional new derivatives. Previously assigned as the Fe-NO stretch (by resonance Raman), it is better described as the bend, as the motion of the central nitrogen atom of the FeNO group is very large. There is significant mixing of this mode. The results emphasize the importance of mode mixing; the extent of mixing must be related to the peripheral phenyl substituents. PMID:26999733

  17. Determining inter-fractional motion of the uterus using 3D ultrasound imaging during radiotherapy for cervical cancer

    NASA Astrophysics Data System (ADS)

    Baker, Mariwan; Jensen, Jørgen Arendt; Behrens, Claus F.

    2014-03-01

    Uterine positional changes can reduce the accuracy of radiotherapy for cervical cancer patients. The purpose of this study was to: 1) quantify the inter-fractional uterine displacement using a novel 3D ultrasound (US) imaging system, and 2) compare the result with the bone match shift determined by Cone-Beam CT (CBCT) imaging. Five cervical cancer patients were enrolled in the study. Three of them underwent weekly CBCT imaging prior to treatment and the bone match shift was applied. After treatment delivery they underwent a weekly US scan. The transabdominal scans were conducted using a Clarity US system (Clarity® Model 310C00). Uterine positional shifts based on soft-tissue matching using US were determined and compared to the bone match shifts in the three directions. Mean values (+/-1 SD) of the US shifts were (mm): anterior-posterior (A/P): 3.8+/-5.5, superior-inferior (S/I): -3.5+/-5.2, and left-right (L/R): 0.4+/-4.9. The variations were larger than the CBCT shifts. The largest inter-fractional displacement was from -2 mm to +14 mm in the AP direction for patient 3. Thus, CBCT bone matching underestimates the uterine positional displacement by neglecting internal uterine positional change relative to the bone structures. Since the US images were significantly better than the CBCT images in terms of soft-tissue visualization, the US system can provide an optional image-guided radiation therapy (IGRT) system. US imaging might be a better IGRT system than CBCT, despite difficulty in capturing the entire uterus. Uterine shifts based on US imaging contain the relative uterus-bone displacement, which is not taken into account by CBCT bone matching.

  18. The birth of a dinosaur footprint: Subsurface 3D motion reconstruction and discrete element simulation reveal track ontogeny

    PubMed Central

    2014-01-01

    Locomotion over deformable substrates is a common occurrence in nature. Footprints represent sedimentary distortions that provide anatomical, functional, and behavioral insights into trackmaker biology. The interpretation of such evidence can be challenging, however, particularly for fossil tracks recovered at bedding planes below the originally exposed surface. Even in living animals, the complex dynamics that give rise to footprint morphology are obscured by both foot and sediment opacity, which conceals animal–substrate and substrate–substrate interactions. We used X-ray reconstruction of moving morphology (XROMM) to image and animate the hind limb skeleton of a chicken-like bird traversing a dry, granular material. Foot movement differed significantly from walking on solid ground; the longest toe penetrated to a depth of ∼5 cm, reaching an angle of 30° below horizontal before slipping backward on withdrawal. The 3D kinematic data were integrated into a validated substrate simulation using the discrete element method (DEM) to create a quantitative model of limb-induced substrate deformation. Simulation revealed that despite sediment collapse yielding poor quality tracks at the air–substrate interface, subsurface displacements maintain a high level of organization owing to grain–grain support. Splitting the substrate volume along “virtual bedding planes” exposed prints that more closely resembled the foot and could easily be mistaken for shallow tracks. DEM data elucidate how highly localized deformations associated with foot entry and exit generate specific features in the final tracks, a temporal sequence that we term “track ontogeny.” This combination of methodologies fosters a synthesis between the surface/layer-based perspective prevalent in paleontology and the particle/volume-based perspective essential for a mechanistic understanding of sediment redistribution during track formation. PMID:25489092

  19. Horse-like walking, trotting, and galloping derived from kinematic Motion Primitives (kMPs) and their application to walk/trot transitions in a compliant quadruped robot.

    PubMed

    Moro, Federico L; Spröwitz, Alexander; Tuleu, Alexandre; Vespignani, Massimo; Tsagarakis, Nikos G; Ijspeert, Auke J; Caldwell, Darwin G

    2013-06-01

    This manuscript proposes a method to directly transfer the features of horse walking, trotting, and galloping to a quadruped robot, with the aim of creating a much more natural (horse-like) locomotion profile. A principal component analysis on horse joint trajectories shows that walk, trot, and gallop can be described by a set of four kinematic Motion Primitives (kMPs). These kMPs are used to generate valid, stable gaits that are tested on a compliant quadruped robot. Tests on the effects of gait frequency scaling indicate a speed-optimal walking frequency around 3.4 Hz and an optimal trotting frequency around 4 Hz. A criterion to synthesize gait transitions is then proposed, and walk/trot transitions are successfully tested on the robot. The performance of the robot when the transitions are scaled in frequency is evaluated by means of roll and pitch angle phase plots. PMID:23463501
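
    The kMP extraction step described above amounts to a principal component analysis of the joint-angle trajectories, keeping the first few components as shared basis signals. A minimal sketch under that reading is given below; conventions differ on which factor is called the primitive versus its loading, and this is not the authors' implementation.

```python
import numpy as np

def extract_kmps(joint_angles, n_primitives=4):
    """PCA of joint-angle trajectories.

    joint_angles : (n_time_samples, n_joints) matrix of joint angles over
                   time-normalized gait cycles.
    Returns (kmp_timecourses, joint_loadings, mean) such that
    joint_angles ~ kmp_timecourses @ joint_loadings + mean.
    """
    X = np.asarray(joint_angles, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data; rows of Vt are the principal directions in joint space
    _U, _S, Vt = np.linalg.svd(Xc, full_matrices=False)
    joint_loadings = Vt[:n_primitives]            # (n_primitives, n_joints)
    kmp_timecourses = Xc @ joint_loadings.T       # shared basis signals over time
    return kmp_timecourses, joint_loadings, mean
```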

  20. 34/45-Mbps 3D HDTV digital coding scheme using modified motion compensation with disparity vectors

    NASA Astrophysics Data System (ADS)

    Naito, Sei; Matsumoto, Shuichi

    1998-12-01

    This paper describes a digital compression coding scheme for transmitting three-dimensional stereo HDTV signals at full resolution at bit-rates of around 30 to 40 Mbps, suited to PDH networks of the CCITT third digital hierarchy (34 Mbps and 45 Mbps), SDH networks of 52 Mbps, and ATM networks. In order to achieve satisfactory quality for stereo HDTV pictures, three advanced key technologies are introduced into the MPEG-2 Multi-View Profile: a modified motion compensation using disparity vectors estimated between the left and right pictures, an adaptive rate control using a common buffer memory for left- and right-picture encoding, and a discriminatory bit allocation that improves left-picture quality without any degradation of the right pictures. A coding experiment conducted to evaluate this scheme confirms that it gives satisfactory picture quality even at 34 Mbps, including audio and FEC data.
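
    The modified motion compensation above relies on disparity vectors estimated between the left and right pictures. The following sketch shows a basic block-matching disparity estimator under simplified assumptions (tiny grayscale images, exhaustive horizontal search); it illustrates the general idea only and is not the codec's actual estimator.

```python
# Minimal sketch of block-based disparity estimation between a left and a
# right stereo view. Block size, search range and the random test images are
# illustrative assumptions, not the parameters of the actual coding scheme.
import numpy as np

def estimate_disparity(left, right, block=8, max_disp=16):
    h, w = left.shape
    disparity = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            target = right[y:y + block, x:x + block]
            best_d, best_err = 0, np.inf
            # Search horizontally shifted candidate blocks in the left picture.
            for d in range(0, max_disp + 1):
                if x + d + block > w:
                    break
                cand = left[y:y + block, x + d:x + d + block]
                err = np.sum(np.abs(cand.astype(float) - target.astype(float)))
                if err < best_err:
                    best_err, best_d = err, d
            disparity[by, bx] = best_d
    return disparity

left = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
right = np.roll(left, -4, axis=1)              # synthetic 4-pixel disparity
print(estimate_disparity(left, right))
```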

  1. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network.

    PubMed

    Bukhari, W; Hong, S-M

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient's breathing cycle. The algorithm, named EKF-GPRN(+) , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN(+) prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN(+) implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN(+) . The experimental results show that the EKF-GPRN(+) algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN(+) algorithm can further reduce the prediction error by employing the gating
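
    The following sketch illustrates one ingredient of the algorithm described above: a per-coordinate Kalman filter with a constant-velocity model used to predict position a fixed lookahead ahead, plus a gating decision driven by the predictive variance. The state model, noise levels, sampling interval and threshold are all assumptions made for illustration; the published method additionally corrects the filter output with a Gaussian process regression network, which is not reproduced here.

```python
# Minimal sketch: per-coordinate Kalman prediction of respiratory position and
# a covariance-trace-style gating decision, under simplified assumptions.
import numpy as np

dt = 0.096                                     # ~96 ms sampling, hypothetical
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity transition
H = np.array([[1.0, 0.0]])                     # position-only measurement
Q = 1e-3 * np.eye(2)                           # process noise (assumed)
R = np.array([[1e-2]])                         # measurement noise (assumed)

x = np.zeros((2, 1))                           # state: [position, velocity]
P = np.eye(2)

def kf_step(z):
    """One predict/update cycle for a scalar position measurement z."""
    global x, P
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (np.array([[z]]) - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred

def predict_ahead(steps):
    """Lookahead prediction and its variance (used here for gating)."""
    Fk = np.linalg.matrix_power(F, steps)
    x_ahead = Fk @ x
    P_ahead = Fk @ P @ Fk.T + steps * Q        # crude noise accumulation
    return float(x_ahead[0]), float(P_ahead[0, 0])

breathing = 5.0 * np.sin(2 * np.pi * 0.25 * dt * np.arange(300))  # synthetic trace
for z in breathing:
    kf_step(z + 0.1 * np.random.randn())
    pos, var = predict_ahead(4)                # ~384 ms lookahead
    beam_on = var < 0.5                        # pause the beam when uncertain
```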

  2. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit

  3. Dynamic Primitives of Motor Behavior

    PubMed Central

    Hogan, Neville; Sternad, Dagmar

    2013-01-01

    We present in outline a theory of sensorimotor control based on dynamic primitives, which we define as attractors. To account for the broad class of human interactive behaviors—especially tool use—we propose three distinct primitives: submovements, oscillations and mechanical impedances, the latter necessary for interaction with objects. Due to fundamental features of the neuromuscular system, most notably its slow response, we argue that encoding in terms of parameterized primitives may be an essential simplification required for learning, performance, and retention of complex skills. Primitives may simultaneously and sequentially be combined to produce observable forces and motions. This may be achieved by defining a virtual trajectory composed of submovements and/or oscillations interacting with impedances. Identifying primitives requires care: in principle, overlapping submovements would be sufficient to compose all observed movements but biological evidence shows that oscillations are a distinct primitive. Conversely, we suggest that kinematic synergies, frequently discussed as primitives of complex actions, may be an emergent consequence of neuromuscular impedance. To illustrate how these dynamic primitives may account for complex actions, we briefly review three types of interactive behaviors: constrained motion, impact tasks, and manipulation of dynamic objects. PMID:23124919

  4. Primitive Clay.

    ERIC Educational Resources Information Center

    Chorches, Joan

    A five-week unit providing first hand experience with primitive ceramic techniques is described in this curriculum guide, which includes course goals and objectives, a daily schedule of class activities, and handouts for students. The unit features construction of a sawdust kiln as a group problem-solving activity; students work in groups…

  5. 3D crustal structure and long-period ground motions from a M9.0 megathrust earthquake in the Pacific Northwest region

    NASA Astrophysics Data System (ADS)

    Olsen, Kim B.; Stephenson, William J.; Geisselmeyer, Andreas

    2008-04-01

    We have developed a community velocity model for the Pacific Northwest region from northern California to southern Canada and carried out the first 3D simulation of a Mw 9.0 megathrust earthquake rupturing along the Cascadia subduction zone using a parallel supercomputer. A long-period (<0.5 Hz) source model was designed by mapping the inversion results for the December 26, 2004 Sumatra-Andaman earthquake (Han et al., Science 313(5787):658-662, 2006) onto the Cascadia subduction zone. Representative peak ground velocities for the metropolitan centers of the region include 42 cm/s in the Seattle area and 8-20 cm/s in the Tacoma, Olympia, Vancouver, and Portland areas. Combined with an extended duration of the shaking up to 5 min, these long-period ground motions may inflict significant damage on the built environment, in particular on the highrises in downtown Seattle.

  6. 3D crustal structure and long-period ground motions from a M9.0 megathrust earthquake in the Pacific Northwest region

    USGS Publications Warehouse

    Olsen, K.B.; Stephenson, W.J.; Geisselmeyer, A.

    2008-01-01

    We have developed a community velocity model for the Pacific Northwest region from northern California to southern Canada and carried out the first 3D simulation of a Mw 9.0 megathrust earthquake rupturing along the Cascadia subduction zone using a parallel supercomputer. A long-period (<0.5 Hz) source model was designed by mapping the inversion results for the December 26, 2004 Sumatra–Andaman earthquake (Han et al., Science 313(5787):658–662, 2006) onto the Cascadia subduction zone. Representative peak ground velocities for the metropolitan centers of the region include 42 cm/s in the Seattle area and 8–20 cm/s in the Tacoma, Olympia, Vancouver, and Portland areas. Combined with an extended duration of the shaking up to 5 min, these long-period ground motions may inflict significant damage on the built environment, in particular on the highrises in downtown Seattle.

  7. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    PubMed

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available ( http://sites.google.com/site/RTMocap/ ) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation. PMID:25805426
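
    The snippet below sketches the kind of real-time event detection the toolbox provides: a trigger is fired when a tracked marker enters a spherical area of interest around a reference position. RTMocap itself is a MATLAB toolbox; this Python version only mirrors the logic, and the send_trigger stub, the reference position and the radius are hypothetical.

```python
# Minimal sketch of marker-based event detection for triggering external
# reinforcers; all names, positions and thresholds are illustrative.
import numpy as np

reference = np.array([0.10, 0.25, 0.05])       # target position in metres (assumed)
radius = 0.02                                   # 2 cm area of interest (assumed)

def send_trigger(channel: int) -> None:
    # Stand-in for hardware I/O (e.g. a serial or parallel-port command).
    print(f"trigger on channel {channel}")

def on_new_frame(marker_xyz: np.ndarray, triggered: bool) -> bool:
    """Called once per captured frame with one marker's 3-D coordinates."""
    inside = np.linalg.norm(marker_xyz - reference) < radius
    if inside and not triggered:
        send_trigger(channel=1)                 # e.g. light, sound or odour
        return True
    return triggered

triggered = False
for frame in np.random.uniform(0.0, 0.3, size=(200, 3)):   # fake marker stream
    triggered = on_new_frame(frame, triggered)
```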

  8. Estimating 3D L5/S1 moments and ground reaction forces during trunk bending using a full-body ambulatory inertial motion capture system.

    PubMed

    Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H

    2016-04-11

    Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMSerrors) of moment/force time series and the intraclass correlation (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMSerrors remained below 10 Nm (about 5% of the peak extension moment) and 3D GRF RMSerrors remained below 20 N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load. PMID:26795123
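
    For readers unfamiliar with the two agreement measures used above, the sketch below computes a root-mean-square error between two time series and an intraclass correlation coefficient, here in the common ICC(2,1) form (two-way random effects, absolute agreement, single measures), from per-subject peak values. The specific ICC variant and the synthetic peak values are assumptions; the paper's exact statistical settings are not stated in this abstract.

```python
# Minimal sketch of RMS error and ICC(2,1) between two measurement systems;
# the hypothetical peak values stand in for the study's real data.
import numpy as np

def rms_error(reference: np.ndarray, estimate: np.ndarray) -> float:
    return float(np.sqrt(np.mean((reference - estimate) ** 2)))

def icc_2_1(Y: np.ndarray) -> float:
    """ICC(2,1) for an (n subjects x k raters/systems) matrix."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)       # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)       # systems
    sse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical peak L5/S1 extension moments (Nm) from nine subjects:
peaks_fp_omc = np.array([190, 210, 175, 220, 205, 198, 185, 215, 200.0])
peaks_imc = peaks_fp_omc + np.random.normal(0, 5, size=9)
print("RMSE:", rms_error(peaks_fp_omc, peaks_imc))
print("ICC(2,1):", icc_2_1(np.column_stack([peaks_fp_omc, peaks_imc])))
```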

  9. Mapping motion from 4D-MRI to 3D-CT for use in 4D dose calculations: A technical feasibility study

    SciTech Connect

    Boye, Dirk; Lomax, Tony; Knopf, Antje

    2013-06-15

    Purpose: Target sites affected by organ motion require a time resolved (4D) dose calculation. Typical 4D dose calculations use 4D-CT as a basis. Unfortunately, 4D-CT images have the disadvantage of being a 'snap-shot' of the motion during acquisition and of assuming regularity of breathing. In addition, 4D-CT acquisitions involve a substantial additional dose burden to the patient making many, repeated 4D-CT acquisitions undesirable. Here the authors test the feasibility of an alternative approach to generate patient specific 4D-CT data sets. Methods: In this approach motion information is extracted from 4D-MRI. Simulated 4D-CT data sets [which the authors call 4D-CT(MRI)] are created by warping extracted deformation fields to a static 3D-CT data set. The employment of 4D-MRI sequences for this has the advantage that no assumptions on breathing regularity are made, irregularities in breathing can be studied and, if necessary, many repeat imaging studies (and consequently simulated 4D-CT data sets) can be performed on patients and/or volunteers. The accuracy of 4D-CT(MRI)s has been validated by 4D proton dose calculations. Our 4D dose algorithm takes into account displacements as well as deformations on the originating 4D-CT/4D-CT(MRI) by calculating the dose of each pencil beam based on an individual time stamp of when that pencil beam is applied. According to corresponding displacement and density-variation-maps the position and the water equivalent range of the dose grid points is adjusted at each time instance. Results: 4D dose distributions, using 4D-CT(MRI) data sets as input were compared to results based on a reference conventional 4D-CT data set capturing similar motion characteristics. Almost identical 4D dose distributions could be achieved, even though scanned proton beams are very sensitive to small differences in the patient geometry. In addition, 4D dose calculations have been performed on the same patient, but using 4D-CT(MRI) data sets based on
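
    The core operation of the 4D-CT(MRI) approach, warping a static 3D-CT with a deformation field extracted from 4D-MRI, can be sketched as follows. The tiny random volume and the smooth synthetic displacement field stand in for real image data and registration output; scipy's map_coordinates is used here purely as a generic resampling tool, not as the authors' implementation.

```python
# Minimal sketch: warp a static 3D-CT volume with a displacement field to
# produce one respiratory phase of a simulated 4D-CT(MRI).
import numpy as np
from scipy.ndimage import map_coordinates

ct = np.random.rand(40, 40, 40)                        # static 3D-CT (placeholder)
z, y, x = np.indices(ct.shape).astype(float)

# Hypothetical displacement field (in voxels): a smooth superior-inferior
# shift standing in for respiration-induced deformation from 4D-MRI.
dz = 3.0 * np.sin(np.pi * z / ct.shape[0])
dy = np.zeros_like(dz)
dx = np.zeros_like(dz)

# Pull-back warp: the warped image at voxel p takes the CT value at p + d(p).
coords = np.array([z + dz, y + dy, x + dx])
ct_phase = map_coordinates(ct, coords, order=1, mode="nearest")
print(ct_phase.shape)
```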

  10. Identifying the origin of differences between 3D numerical simulations of ground motion in sedimentary basins: lessons from stringent canonical test models in the E2VP framework

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; Moczo, Peter; Kristek, Jozef; Priolo, Enrico; Klin, Peter; De Martin, Florent; Zhang, Zenghuo; Hollender, Fabrice; Bard, Pierre-Yves

    2013-04-01

    Numerical simulation is playing a role of increasing importance in the field of seismic hazard by providing quantitative estimates of earthquake ground motion, its variability, and its sensitivity to geometrical and mechanical properties of the medium. Continuous efforts to develop accurate and computationally efficient numerical methods, combined with increasing computational power have made it technically feasible to calculate seismograms in 3D realistic configurations and for frequencies of interest in seismic design applications. Now, in order to foster the use of numerical simulations in practical prediction of earthquake ground motion, it is important to evaluate the accuracy of current numerical methods when applied to realistic 3D sites. This process of verification is a necessary prerequisite to confrontation of numerical predictions and observations. Through the ongoing Euroseistest Verification and Validation Project (E2VP), which focuses on the Mygdonian basin (northern Greece), we investigated the capability of numerical methods to predict earthquake ground motion for frequencies up to 4 Hz. Numerical predictions obtained by several teams using a wide variety of methods were compared using quantitative goodness-of-fit criteria. In order to better understand the cause of misfits between different simulations, initially performed for the realistic geometry of the Mygdonian basin, we defined five stringent canonical configurations. The canonical models allow for identifying sources of misfits and quantify their importance. Detailed quantitative comparison of simulations in relation to dominant features of the models shows that even relatively simple heterogeneous models must be treated with maximum care in order to achieve sufficient level of accuracy. One important conclusion is that the numerical representation of models with strong variations (e.g. discontinuities) may considerably vary from one method to the other, and may become a dominant source of

  11. Lifetime of inner-shell hole states of Ar (2p) and Kr (3d) using equation-of-motion coupled cluster method

    SciTech Connect

    Ghosh, Aryya; Vaval, Nayana; Pal, Sourav

    2015-07-14

    Auger decay is an efficient ultrafast relaxation process of a core-shell or inner-shell excited atom or molecule. Generally, it occurs on the femtosecond or even attosecond time scale. Direct measurement of the lifetimes of the Auger process for singly and doubly ionized inner-shell states of an atom or molecule is an extremely difficult task. In this paper, we have applied the highly correlated complex absorbing potential equation-of-motion coupled cluster (CAP-EOMCC) approach, a combination of the CAP and EOMCC approaches, to calculate the lifetimes of the states arising from 2p inner-shell ionization of the Ar atom and 3d inner-shell ionization of the Kr atom. We have also calculated the lifetimes of the Ar²⁺(2p⁻¹3p⁻¹) ¹D, Ar²⁺(2p⁻¹3p⁻¹) ¹S, and Ar²⁺(2p⁻¹3s⁻¹) ¹P doubly ionized states. The predicted results are compared with other theoretical results as well as experimental results available in the literature.

  12. A New Accurate 3D Measurement Tool to Assess the Range of Motion of the Tongue in Oral Cancer Patients: A Standardized Model.

    PubMed

    van Dijk, Simone; van Alphen, Maarten J A; Jacobi, Irene; Smeele, Ludwig E; van der Heijden, Ferdinand; Balm, Alfons J M

    2016-02-01

    In oral cancer treatment, function loss such as speech and swallowing deterioration can be severe, mostly due to reduced lingual mobility. Until now, there has been no standardized measurement tool for tongue mobility, and pre-operative prediction of function loss is based on expert opinion instead of evidence-based insight. The purpose of this study was to assess the reliability of a triple-camera setup for the measurement of tongue range of motion (ROM) in healthy adults and its feasibility in patients with partial glossectomy. A triple-camera setup was used, and 3D coordinates of the tongue in five standardized tongue positions were obtained in 15 healthy volunteers. Maximum distances between the tip of the tongue and the maxillary midline were calculated. Each participant was recorded twice, and each movie was analysed three times by two separate raters. Intrarater, interrater and test-retest reliability were the main outcome measures. Secondly, feasibility of the method was tested in ten patients treated for oral tongue carcinoma. Intrarater, interrater and test-retest reliability all showed high correlation coefficients of >0.9 in both study groups. All healthy subjects showed perfectly symmetrical tongue ROM. In patients, significant differences in lateral tongue movements were found, due to restricted tongue mobility after surgery. This triple-camera setup is a reliable measurement tool to assess three-dimensional information on tongue ROM. It constitutes an accurate tool for objective grading of reduced tongue mobility after partial glossectomy. PMID:26516075

  13. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscle, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. PMID:25872024

  14. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  15. Using Averaging-Based Factorization to Compare Seismic Hazard Models Derived from 3D Earthquake Simulations with NGA Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Wang, F.; Jordan, T. H.

    2012-12-01

    Seismic hazard models based on empirical ground motion prediction equations (GMPEs) employ a model-based factorization to account for source, propagation, and path effects. An alternative is to simulate these effects directly using earthquake source models combined with three-dimensional (3D) models of Earth structure. We have developed an averaging-based factorization (ABF) scheme that facilitates the geographically explicit comparison of these two types of seismic hazard models. For any fault source k with epicentral position x, slip spatial and temporal distribution f, and moment magnitude m, we calculate the excitation functions G(s, k, x, m, f) for sites s in a geographical region R, such as 5% damped spectral acceleration at a particular period. Through a sequence of weighted-averaging and normalization operations following a certain hierarchy over f, m, x, k, and s, we uniquely factorize G(s, k, x, m, f) into six components: A, B(s), C(s, k), D(s, k, x), E(s, k, x, m), and F(s, k, x, m, f). Factors for a target model can be divided by those of a reference model to obtain six corresponding factor ratios, or residual factors: a, b(s), c(s, k), d(s, k, x), e(s, k, x, m), and f(s, k, x, m, f). We show that these residual factors characterize differences in basin effects primarily through b(s), distance scaling primarily through c(s, k), and source directivity primarily through d(s, k, x). We illustrate the ABF scheme by comparing the CyberShake Hazard Model (CSHM) for the Los Angeles region (Graves et al. 2010) with the Next Generation Attenuation (NGA) GMPEs modified according to the directivity relations of Spudich and Chiou (2008). Relative to CSHM, all NGA models underestimate the directivity and basin effects. In particular, the NGA models do not account for the coupling between source directivity and basin excitation that substantially enhances the low-frequency seismic hazards in the sedimentary basins of the Los Angeles region. Assuming Cyber
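
    A drastically reduced, two-level version of the averaging-based factorization can be sketched as follows: a matrix of log excitation values over sites and sources is split into an overall factor, a site factor and a residual by successive averaging, and residual factors between two models are then differences of the corresponding terms. The full ABF additionally averages over slip distribution, magnitude and epicentre with specific weights and a fixed hierarchy; the random data below are placeholders.

```python
# Simplified two-level averaging-based factorization of log excitation values
# G(s, k) over sites s and sources k; illustrative data only.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_sources = 50, 20
logG = rng.normal(loc=1.0, scale=0.4, size=(n_sites, n_sources))  # log SA values

A = logG.mean()                               # overall level
B = logG.mean(axis=1) - A                     # site terms (e.g. basin effects)
C = logG - A - B[:, None]                     # remaining site-source terms

# Residual factors between a target and a reference model are differences of
# the corresponding log factors:
logG_ref = logG + rng.normal(scale=0.1, size=logG.shape)
a = A - logG_ref.mean()
b = B - (logG_ref.mean(axis=1) - logG_ref.mean())
print("overall residual factor a =", a)
```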

  16. Quantitative Evaluation of 3D Mouse Behaviors and Motor Function in the Open-Field after Spinal Cord Injury Using Markerless Motion Tracking

    PubMed Central

    Sheets, Alison L.; Lai, Po-Lun; Fisher, Lesley C.; Basso, D. Michele

    2013-01-01

    Thousands of scientists strive to identify cellular mechanisms that could lead to breakthroughs in developing ameliorative treatments for debilitating neural and muscular conditions such as spinal cord injury (SCI). Most studies use rodent models to test hypotheses, and these are all limited by the methods available to evaluate animal motor function. This study’s goal was to develop a behavioral and locomotor assessment system in a murine model of SCI that enables quantitative kinematic measurements to be made automatically in the open-field by applying markerless motion tracking approaches. Three-dimensional movements of eight naïve, five mild, five moderate, and four severe SCI mice were recorded using 10 cameras (100 Hz). Background subtraction was used in each video frame to identify the animal’s silhouette, and the 3D shape at each time was reconstructed using shape-from-silhouette. The reconstructed volume was divided into front and back halves using k-means clustering. The animal’s front Center of Volume (CoV) height and whole-body CoV speed were calculated and used to automatically classify animal behaviors including directed locomotion, exploratory locomotion, meandering, standing, and rearing. More detailed analyses of CoV height, speed, and lateral deviation during directed locomotion revealed behavioral differences and functional impairments in animals with mild, moderate, and severe SCI when compared with naïve animals. Naïve animals displayed the widest variety of behaviors including rearing and crossing the center of the open-field, the fastest speeds, and tallest rear CoV heights. SCI reduced the range of behaviors, and decreased speed (r = .70 p<.005) and rear CoV height (r = .65 p<.01) were significantly correlated with greater lesion size. This markerless tracking approach is a first step toward fundamentally changing how rodent movement studies are conducted. By providing scientists with sensitive, quantitative measurement
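
    The per-frame processing described above can be illustrated with the simplified sketch below: occupied voxels of the reconstructed volume are split into front and back halves with k-means, and the centre-of-volume height and speed are derived for behaviour classification. The random voxel cloud, the choice of body axis and the speed threshold are assumptions; the real pipeline starts from background-subtracted silhouettes and a shape-from-silhouette reconstruction.

```python
# Minimal sketch: per-frame centre-of-volume (CoV) features from an occupied
# voxel cloud, with a k-means front/back split; all thresholds are assumed.
import numpy as np
from sklearn.cluster import KMeans

def frame_features(voxels_xyz: np.ndarray):
    """voxels_xyz: (N, 3) occupied-voxel coordinates (x, y, z) for one frame."""
    km = KMeans(n_clusters=2, n_init=10).fit(voxels_xyz)
    c0, c1 = km.cluster_centers_
    # Call the cluster further along the (assumed) body axis x the front half.
    front = c0 if c0[0] > c1[0] else c1
    cov = voxels_xyz.mean(axis=0)              # whole-body centre of volume
    return front[2], cov                       # front CoV height, whole-body CoV

# Two fake consecutive frames, 100 Hz capture assumed:
f1 = np.random.rand(500, 3)
f2 = f1 + np.array([0.002, 0.0, 0.0])          # 2 mm forward shift per frame
h1, cov1 = frame_features(f1)
h2, cov2 = frame_features(f2)
speed = np.linalg.norm(cov2 - cov1) * 100.0    # m/s if coordinates are metres
behaviour = "directed locomotion" if speed > 0.1 else "standing/meandering"
print(behaviour)
```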

  17. TU-F-17A-04: Respiratory Phase-Resolved 3D MRI with Isotropic High Spatial Resolution: Determination of the Average Breathing Motion Pattern for Abdominal Radiotherapy Planning

    SciTech Connect

    Deng, Z; Pang, J; Yang, W; Yue, Y; Tuli, R; Fraass, B; Li, D; Fan, Z

    2014-06-15

    Purpose: To develop a retrospective 4D-MRI technique (respiratory phase-resolved 3D-MRI) for providing an accurate assessment of tumor motion secondary to respiration. Methods: A 3D projection reconstruction (PR) sequence with self-gating (SG) was developed for 4D-MRI on a 3.0T MRI scanner. The respiration-induced shift of the imaging target was recorded by SG signals acquired in the superior-inferior direction every 15 radial projections (i.e. temporal resolution 98 ms). A total of 73000 radial projections obtained in 8-min were retrospectively sorted into 10 time-domain evenly distributed respiratory phases based on the SG information. Ten 3D image sets were then reconstructed offline. The technique was validated on a motion phantom (gadolinium-doped water-filled box, frequency of 10 and 18 cycles/min) and humans (4 healthy and 2 patients with liver tumors). Imaging protocol included 8-min 4D-MRI followed by 1-min 2D-realtime (498 ms/frame) MRI as a reference. Results: The multiphase 3D image sets with isotropic high spatial resolution (1.56 mm) permits flexible image reformatting and visualization. No intra-phase motion-induced blurring was observed. Comparing to 2D-realtime, 4D-MRI yielded similar motion range (phantom: 10.46 vs. 11.27 mm; healthy subject: 25.20 vs. 17.9 mm; patient: 11.38 vs. 9.30 mm), reasonable displacement difference averaged over the 10 phases (0.74mm; 3.63mm; 1.65mm), and excellent cross-correlation (0.98; 0.96; 0.94) between the two displacement series. Conclusion: Our preliminary study has demonstrated that the 4D-MRI technique can provide high-quality respiratory phase-resolved 3D images that feature: a) isotropic high spatial resolution, b) a fixed scan time of 8 minutes, c) an accurate estimate of average motion pattern, and d) minimal intra-phase motion artifact. This approach has the potential to become a viable alternative solution to assess the impact of breathing on tumor motion and determine appropriate treatment margins
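
    The retrospective sorting step can be sketched as follows: each acquired projection block carries a self-gating displacement value, end-inspiration peaks are detected in that signal, and the samples of every breathing cycle are spread evenly over ten phase bins. The synthetic breathing trace and the peak-detection settings are illustrative assumptions, not the sequence's actual parameters.

```python
# Minimal sketch: binning self-gated acquisitions into 10 respiratory phases.
import numpy as np
from scipy.signal import find_peaks

dt = 0.098                                        # one SG sample every ~98 ms
t = np.arange(0, 480, dt)                         # 8-minute acquisition
sg = np.sin(2 * np.pi * t / 4.0)                  # ~4 s breathing period (fake)

peaks, _ = find_peaks(sg, distance=int(2.0 / dt)) # end-inspiration indices

n_phases = 10
phase = np.full(sg.size, -1)
for p0, p1 in zip(peaks[:-1], peaks[1:]):
    idx = np.arange(p0, p1)
    # Spread the samples of one breathing cycle evenly over 10 phase bins.
    phase[idx] = (n_phases * (idx - p0) / (p1 - p0)).astype(int)

bins = [np.where(phase == k)[0] for k in range(n_phases)]  # acquisitions per phase
print([len(b) for b in bins])
```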

  18. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3d-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  19. QUANTIFYING UNCERTAINTIES IN GROUND MOTION SIMULATIONS FOR SCENARIO EARTHQUAKES ON THE HAYWARD-RODGERS CREEK FAULT SYSTEM USING THE USGS 3D VELOCITY MODEL AND REALISTIC PSEUDODYNAMIC RUPTURE MODELS

    SciTech Connect

    Rodgers, A; Xie, X

    2008-01-09

    This project seeks to compute ground motions for large (M>6.5) scenario earthquakes on the Hayward Fault using realistic pseudodynamic ruptures, the USGS three-dimensional (3D) velocity model and anelastic finite difference simulations on parallel computers. We will attempt to bound ground motions by performing simulations with suites of stochastic rupture models for a given scenario on a given fault segment. The outcome of this effort will provide the average, spread and range of ground motions that can be expected from likely large earthquake scenarios. The resulting ground motions will be based on first-principles calculations and include the effects of slip heterogeneity, fault geometry and directivity, however, they will be band-limited to relatively low-frequency (< 1 Hz).

  20. Accuracy and Precision of a Custom Camera-Based System for 2-D and 3-D Motion Tracking during Speech and Nonspeech Motor Tasks

    ERIC Educational Resources Information Center

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable…

  1. Predation by the Dwarf Seahorse on Copepods: Quantifying Motion and Flows Using 3D High Speed Digital Holographic Cinematography - When Seahorses Attack!

    NASA Astrophysics Data System (ADS)

    Gemmell, Brad; Sheng, Jian; Buskey, Ed

    2008-11-01

    Copepods are an important planktonic food source for most of the world's fish species. This high predation pressure has led copepods to evolve an extremely effective escape response, with reaction times to hydrodynamic disturbances of less than 4 ms and escape speeds of over 500 body lengths per second. Using 3D high speed digital holographic cinematography (up to 2000 frames per second) we elucidate the role of entrainment flow fields generated by a natural visual predator, the dwarf seahorse (Hippocampus zosterae) during attacks on its prey, Acartia tonsa. Using phytoplankton as a tracer, we recorded and reconstructed 3D flow fields around the head of the seahorse and its prey during both successful and unsuccessful attacks to better understand how some attacks lead to capture with little or no detection from the copepod while others result in failed attacks. Attacks start with a slow approach to minimize the hydro-mechanical disturbance which is used by copepods to detect the approach of a potential predator. Successful attacks result in the seahorse using its pipette-like mouth to create suction faster than the copepod's response latency. As these characteristic scales of entrainment increase, a successful escape becomes more likely.

  2. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content across all platforms and operating systems without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge of how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects into physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  3. The 3-D motion of the centre of gravity of the human body during level walking. I. Normal subjects at low and intermediate walking speeds.

    PubMed

    Tesio, L; Lanzi, D; Detrembleur, C

    1998-03-01

    OBJECTIVE: To measure the mechanical energy changes of the centre of gravity (CG) of the body in the forward, lateral and vertical direction during normal level walking at intermediate and low speeds. DESIGN: Eight healthy adults performed successive walks at speeds ranging from 0.25 to 1.75 m s(-1) over a dedicated force platform system. BACKGROUND: In previous studies, it was shown that the motion of the CG during gait can be altered more than the motion of individual segments. However, more detailed normative data are needed for clinical analysis. METHODS: The positive work done during the step to accelerate the body CG in the forward direction, W(f), to lift it, W(v), to accelerate it in the lateral direction, W(l), and the actual work done by the muscles to maintain its motion with respect to the ground ('external' work), W(ext), were measured. This allowed the calculation of the pendulum-like transfer between gravitational potential energy and kinetic energy of the CG (percentage recovery, R). At the optimal speed of about 1.3 m s(-1), this transfer allows saving of as much as 65% of the muscular work that would otherwise have been needed to keep the body in motion with respect to the ground. The distance covered by the CG at each step, either forward (step length, S(l)) or vertically (vertical displacement, S(v)), was also recorded. RESULTS: W(l) was, as a median, only 1.6-5.9% of W(ext). This ratio was higher the lower the speed. At each step, W(ext) is needed to sustain two distinct increments of the total mechanical energy of the CG, E(tot). The increment a takes place during the double stance phase; the increment b takes place during the single stance phase. Both of these increments increased with speed. Over the speed range analyzed, the power spent to sustain the a increment was 2.8-3.9 times higher than the power spent to sustain the b increment. PMID:11415774
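
    The percentage recovery R mentioned above quantifies the pendulum-like exchange between potential and kinetic energy of the CG. In the form commonly used in this literature it is computed from the directional work terms and the external work, as in the small example below; the numerical values are illustrative, not the paper's data.

```python
# Minimal sketch of the energy bookkeeping behind the percentage recovery R;
# the work values (in joules) are assumed, chosen only to land near the ~65%
# saving reported at the optimal walking speed.
w_f, w_v, w_l = 18.0, 14.0, 0.8      # positive work to move the CG forward, lift it, move it laterally
w_ext = 11.5                          # measured external muscular work

recovery = 100.0 * (w_f + w_v + w_l - w_ext) / (w_f + w_v + w_l)
print(f"percentage recovery R = {recovery:.1f} %")   # ~65 % near the optimal speed
```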

  4. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation

    PubMed Central

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-01-01

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator that has a multiple-shape memory effect and is able to perform complex motion in response to two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability could only be realized with existing actuator technologies by using multiple actuators or an additional robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator having multiple degrees-of-freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would be otherwise difficult to obtain with a single actuator. To the best of the authors’ knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability. PMID:27080134

  5. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation.

    PubMed

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-01-01

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator that has a multiple-shape memory effect and is able to perform complex motion in response to two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability could only be realized with existing actuator technologies by using multiple actuators or an additional robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator having multiple degrees-of-freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would be otherwise difficult to obtain with a single actuator. To the best of the authors' knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability. PMID:27080134

  6. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation

    NASA Astrophysics Data System (ADS)

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-04-01

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator that has a multiple-shape memory effect and is able to perform complex motion in response to two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability could only be realized with existing actuator technologies by using multiple actuators or an additional robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator having multiple degrees-of-freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would be otherwise difficult to obtain with a single actuator. To the best of the authors’ knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability.

  7. Advantages of fibre lasers in 3D metal cutting and welding applications supported by a 'beam in motion (BIM)' beam delivery system

    NASA Astrophysics Data System (ADS)

    Scheller, Torsten; Bastick, André; Griebel, Martin

    2012-03-01

    Modern laser technology is continuously opening up new fields of application. Driven by the development of increasingly efficient laser sources, the new technology is successfully entering classical applications such as 3D cutting and welding of metals. Especially in lightweight applications in the automotive industry, laser manufacturing is key. Only with this technology could the reduction of welding widths be realised, along with the efficient machining of aluminium and the abrasion-free machining of hardened steel. The paper compares the operation of different laser types in metal machining with regard to wavelength, laser power, laser brilliance, process speed and welding depth, to give an estimate of the best use of single-mode or multi-mode lasers in this field of application. The experimental results are presented using samples of applied parts. In addition, a correlation between the process and the achieved mechanical properties is made. For this application JENOPTIK Automatisierungstechnik GmbH is using the BIM beam control system in its machines, which is the first to realize a fully integrated combination of beam control and robot. The wide performance and wavelength range of the laser radiation that can be transmitted opens up diverse possibilities of application and makes BIM a universal tool.

  8. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  9. Dynamic primitives in the control of locomotion

    PubMed Central

    Hogan, Neville; Sternad, Dagmar

    2013-01-01

    Humans achieve locomotor dexterity that far exceeds the capability of modern robots, yet this is achieved despite slower actuators, imprecise sensors, and vastly slower communication. We propose that this spectacular performance arises from encoding motor commands in terms of dynamic primitives. We propose three primitives as a foundation for a comprehensive theoretical framework that can embrace a wide range of upper- and lower-limb behaviors. Building on previous work that suggested discrete and rhythmic movements as elementary dynamic behaviors, we define submovements and oscillations: as discrete movements cannot be combined with sufficient flexibility, we argue that suitably-defined submovements are primitives. As the term “rhythmic” may be ambiguous, we define oscillations as the corresponding class of primitives. We further propose mechanical impedances as a third class of dynamic primitives, necessary for interaction with the physical environment. Combination of these three classes of primitive requires care. One approach is through a generalized equivalent network: a virtual trajectory composed of simultaneous and/or sequential submovements and/or oscillations that interacts with mechanical impedances to produce observable forces and motions. Reliable experimental identification of these dynamic primitives presents challenges: identification of mechanical impedances is exquisitely sensitive to assumptions about their dynamic structure; identification of submovements and oscillations is sensitive to their assumed form and to details of the algorithm used to extract them. Some methods to address these challenges are presented. Some implications of this theoretical framework for locomotor rehabilitation are considered. PMID:23801959

  10. Performance assessment of HIFU lesion detection by Harmonic Motion Imaging for Focused Ultrasound (HMIFU): A 3D finite-element-based framework with experimental validation

    PubMed Central

    Hou, Gary Y.; Luo, Jianwen; Marquet, Fabrice; Maleke, Caroline; Vappou, Jonathan; Konofagou, Elisa E.

    2014-01-01

    Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a novel high-intensity focused ultrasound (HIFU) therapy monitoring method with feasibilities demonstrated in vitro, ex vivo and in vivo. Its principle is based on Amplitude-modulated (AM) - Harmonic Motion Imaging (HMI), an oscillatory radiation force used for imaging the tissue mechanical response during thermal ablation. In this study, a theoretical framework of HMIFU is presented, comprising a customized nonlinear wave propagation model, a finite-element (FE) analysis module, and an image-formation model. The objective of this study is to develop such a framework in order to 1) assess the fundamental performance of HMIFU in detecting HIFU lesions based on the change in tissue apparent elasticity, i.e., the increasing Young's modulus, and the HIFU lesion size with respect to the HIFU exposure time and 2) validate the simulation findings ex vivo. The same HMI and HMIFU parameters as in the experimental studies were used, i.e., 4.5-MHz HIFU frequency and 25 Hz AM frequency. For a lesion-to-background Young's modulus ratio of 3, 6, and 9, the FE and estimated HMI displacement ratios were equal to 1.83, 3.69, 5.39 and 1.65, 3.19, 4.59, respectively. In experiments, the HMI displacement followed a similar increasing trend of 1.19, 1.28, and 1.78 at 10-s, 20-s, and 30-s HIFU exposure, respectively. In addition, moderate agreement in lesion size growth was also found in both simulations (16.2, 73.1 and 334.7 mm2) and experiments (26.2, 94.2 and 206.2 mm2). Therefore, the feasibility of HMIFU for HIFU lesion detection based on the underlying tissue elasticity changes was verified through the developed theoretical framework, i.e., validation of the fundamental performance of the HMIFU system for lesion detection, localization and quantification, was demonstrated both theoretically and ex vivo. PMID:22036637