Science.gov

Sample records for 3d motion capture

  1. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869
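
    The multi-camera photogrammetric reconstruction described above ultimately reduces to triangulating each point from two or more calibrated views. A minimal sketch of linear (DLT) triangulation, assuming known 3x4 projection matrices (an illustrative textbook method, not the authors' actual pipeline):

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (assumed known from calibration).
    x1, x2 : (u, v) image observations of the same point in each view.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2] @ X) - P[0] @ X = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy normalized cameras 1 m apart along x, both looking down z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.3, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate_point(P1, P2, x1, x2)
```

    With noise-free observations the linear solution is exact; real pipelines follow this with a nonlinear refinement.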

  3. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to control intuitively the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.
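
    Mapping tracked body markers onto object manipulation can be illustrated with a toy two-hand scheme; the mapping below (midpoint for position, hand separation for scale) is an illustrative assumption, not the software's actual gesture set:

```python
import numpy as np

def hands_to_transform(left, right, base_separation=0.4):
    """Map two tracked hand positions to an object position and scale.

    Illustrative mapping only: the object sits at the midpoint of the
    hands, and its scale grows as the hands move apart, relative to an
    assumed 0.4 m resting separation.
    """
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    position = (left + right) / 2.0
    scale = np.linalg.norm(right - left) / base_separation
    return position, scale

# Hands 0.8 m apart at shoulder height: object at the midpoint, doubled in size.
pos, scale = hands_to_transform([0.0, 1.0, 0.5], [0.8, 1.0, 0.5])
```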

  4. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects. PMID:19505502

  5. Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane.

    PubMed

    Sandau, Martin; Koblauch, Henrik; Moeslund, Thomas B; Aanæs, Henrik; Alkjær, Tine; Simonsen, Erik B

    2014-09-01

    Estimating 3D joint rotations in the lower extremities accurately and reliably remains unresolved in markerless motion capture, despite extensive studies in the past decades. The main problems have been ascribed to the limited accuracy of the 3D reconstructions. Accordingly, the purpose of the present study was to develop a new approach based on highly detailed 3D reconstructions in combination with a translational and rotational unconstrained articulated model. The highly detailed 3D reconstructions were synthesized from an eight camera setup using a stereo vision approach. The subject specific articulated model was generated with three rotational and three translational degrees of freedom for each limb segment and without any constraints to the range of motion. This approach was tested on 3D gait analysis and compared to a marker based method. The experiment included ten healthy subjects in whom hip, knee and ankle joint were analysed. Flexion/extension angles as well as hip abduction/adduction closely resembled those obtained from the marker based system. However, the internal/external rotations, knee abduction/adduction and ankle inversion/eversion were less reliable. PMID:25085672
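
    Joint rotations such as flexion/extension are typically read off the relative rotation between adjacent segment coordinate systems. A minimal sketch using an XYZ Cardan decomposition (an assumed, common gait convention; the study's exact angle definitions are not given here):

```python
import numpy as np

def cardan_angles(R_prox, R_dist):
    """Joint angles from proximal and distal segment rotation matrices.

    Uses the relative rotation R = R_prox^T R_dist and an XYZ Cardan
    decomposition; in common gait conventions the three angles are read
    as flexion/extension, ab/adduction and internal/external rotation.
    """
    R = R_prox.T @ R_dist
    flex = np.arctan2(-R[1, 2], R[2, 2])   # rotation about x
    abd = np.arcsin(R[0, 2])               # rotation about y
    rot = np.arctan2(-R[0, 1], R[0, 0])    # rotation about z
    return np.degrees([flex, abd, rot])

# Sanity check: 30 degrees of pure flexion about the x axis.
a = np.radians(30)
Rx = np.array([[1, 0, 0],
               [0, np.cos(a), -np.sin(a)],
               [0, np.sin(a), np.cos(a)]])
angles = cardan_angles(np.eye(3), Rx)
```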

  6. A very low-cost system for capturing 3D motion scans with color and texture data

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    This paper presents a technique for capturing 3D motion scans using hardware that can be constructed for approximately $5,000. In addition to capturing the movement of physical structures, this hardware-software solution also captures color and texture data. The scanner configuration developed at the University of North Dakota is large enough to capture scans of a group of humans. Scanning starts with synchronization and then requires modeling of each frame; for some applications, linking structural elements from frame to frame may also be required. The efficacy of this scanning approach is discussed and prospective applications for it are considered.

  7. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medicine, sports, education and industry with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have traditionally been controlled through interface devices such as mice, joysticks and MIDI sliders, but these devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the captured data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  8. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at 1 month interval. The discrepancies in x, y and z coordinates between the 3D position of the manual digitised landmarks and that of the automatic tracked facial landmarks were within 0.17mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy which would facilitate the analysis of the dynamic motion during facial animations. PMID:23218511
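
    The reported accuracy figures, per-axis discrepancies and the mean 3D distance between manually digitised and automatically tracked landmarks, are straightforward to compute. A minimal sketch, with toy coordinates standing in for real digitised data:

```python
import numpy as np

def landmark_errors(manual, tracked):
    """Per-axis discrepancies and mean Euclidean distance between
    manually digitised and automatically tracked landmarks.

    manual, tracked : (N, 3) arrays of x, y, z coordinates in mm.
    """
    manual = np.asarray(manual, float)
    tracked = np.asarray(tracked, float)
    per_axis = np.abs(manual - tracked).mean(axis=0)       # mean |dx|, |dy|, |dz|
    distances = np.linalg.norm(manual - tracked, axis=1)   # 3D distance per landmark
    return per_axis, distances.mean()

# Two toy landmarks with sub-millimetre tracking discrepancies.
manual = [[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]]
tracked = [[0.1, 0.0, 0.0], [10.0, 5.3, 2.0]]
per_axis, mean_dist = landmark_errors(manual, tracked)
```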

  9. Study of human body: Kinematics and kinetics of a martial arts (Silat) performers using 3D-motion capture

    NASA Astrophysics Data System (ADS)

    Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi

    2015-04-01

    Interest in the study of human kinematics reaches far back in human history, driven by curiosity and by the need to understand the complexity of human body motion. Advances in computing technology now make it possible to obtain new and accurate information about human movement. In this project, a martial art (silat) was chosen and multiple types of movement were studied. Cutting-edge 3D motion capture technology was used to characterize and measure the motions performed by silat practitioners. The cameras detect infrared reflections from markers placed around the performer's body (24 markers in total), which appear as points in the computer software. The detected markers were analysed using a kinematic and kinetic approach with time as the reference, and graphs of the position, velocity and acceleration of each marker against time, t (seconds), were plotted. From this information, further parameters such as work done, momentum and the centre of mass of the body were determined mathematically. These data can be used to develop more effective movements in martial arts, as a contribution to its practitioners. Future work could extend this project to, for example, the analysis of martial arts competitions.
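
    The per-marker velocity and acceleration curves, and derived quantities such as the centre of mass, follow from standard numerical differentiation and mass-weighted averaging. A minimal sketch (central differences via `np.gradient`; the segment masses are placeholders, not the study's anthropometric values):

```python
import numpy as np

def kinematics(positions, dt):
    """Velocity and acceleration of one marker by central differences.

    positions : (T, 3) marker trajectory; dt : sampling interval in s.
    """
    v = np.gradient(positions, dt, axis=0)
    a = np.gradient(v, dt, axis=0)
    return v, a

def centre_of_mass(segment_positions, masses):
    """Whole-body centre of mass as the mass-weighted mean of segment
    positions (segment masses would come from anthropometric tables)."""
    p = np.asarray(segment_positions, float)
    m = np.asarray(masses, float)
    return (m[:, None] * p).sum(axis=0) / m.sum()

# Toy check: a marker falling under gravity, sampled at 100 Hz.
dt = 0.01
t = np.arange(0, 1, dt)
pos = np.stack([np.zeros_like(t), np.zeros_like(t), -0.5 * 9.81 * t**2], axis=1)
v, a = kinematics(pos, dt)
```

    For the quadratic test trajectory the interior acceleration estimates recover g exactly; the first and last samples use one-sided differences and are less accurate.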

  10. 3D Human Motion Editing and Synthesis: A Survey

    PubMed Central

    Wang, Xin; Chen, Qiudi; Wang, Wanliang

    2014-01-01

    The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395

  11. Estimating 3D L5/S1 moments and ground reaction forces during trunk bending using a full-body ambulatory inertial motion capture system.

    PubMed

    Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H

    2016-04-11

    Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMSerrors) of moment/force time series and the intraclass correlation coefficient (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMSerrors remained below 10 Nm (about 5% of the peak extension moment) and 3D GRF RMSerrors remained below 20 N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load. PMID:26795123
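
    The two agreement measures used in studies like this, RMS error of time series and ICC of peak values, can be sketched as follows. The ICC(2,1) form (two-way random effects, absolute agreement, single measures) is an assumption, since the abstract does not state the variant used, and the numbers are toy values:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two equally sampled series."""
    return np.sqrt(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def icc_2_1(Y):
    """ICC(2,1): two-way random, absolute agreement, single measures,
    for an (n subjects x k raters/systems) data matrix."""
    Y = np.asarray(Y, float)
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between systems
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Toy peak vertical GRF (N): force plate vs. an inertial estimate.
fp = np.array([820.0, 905.0, 770.0, 860.0])
imc = np.array([815.0, 910.0, 775.0, 855.0])
err = rmse(fp, imc)
icc = icc_2_1(np.column_stack([fp, imc]))
```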

  12. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscle, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. PMID:25872024

  13. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    PubMed

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available ( http://sites.google.com/site/RTMocap/ ) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation. PMID:25805426
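
    Event detection of the kind used to trigger external reinforcers typically reduces to testing when a marker enters an area of interest. A minimal offline sketch of that test (RTMocap itself is a MATLAB toolbox; this Python version is purely illustrative):

```python
import numpy as np

def entered_area(trajectory, centre, radius):
    """Return the first sample index at which a marker enters a spherical
    area of interest, or None if it never does; the kind of event a
    real-time system would use to trigger a light, sound or odor."""
    traj = np.asarray(trajectory, float)
    d = np.linalg.norm(traj - np.asarray(centre, float), axis=1)
    inside = np.flatnonzero(d <= radius)
    return int(inside[0]) if inside.size else None

# Marker moving along x towards an area of interest at x = 1 (radius 0.15 m).
traj = [[0.0, 0, 0], [0.5, 0, 0], [0.9, 0, 0], [1.0, 0, 0]]
idx = entered_area(traj, centre=[1.0, 0, 0], radius=0.15)
```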

  14. Preference for motion and depth in 3D film

    NASA Astrophysics Data System (ADS)

    Hartle, Brittney; Lugtigheid, Arthur; Kazimi, Ali; Allison, Robert S.; Wilcox, Laurie M.

    2015-03-01

    While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear that these always apply well to stereoscopic 3D (S3D) film. Further, while there has been considerable recent research on viewer comfort in S3D media, little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate viewers' preferences for moving S3D film content in a theatre setting. Specifically, we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (interaxial separation, IA). The amount of IA had no impact on clip preferences regardless of the direction or speed of camera movement. However, preferences were influenced by camera speed, but only in the in-depth condition, where viewers preferred faster motion. Given that previous research shows that slower speeds are more comfortable for viewing S3D content, our results show that viewing preferences cannot be predicted simply from measures of comfort. Instead, it is clear that viewer response to S3D film is complex and that film parameters selected to enhance comfort may in some instances produce less appealing content.

  15. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)

  16. Concurrent 3-D motion segmentation and 3-D interpretation of temporal sequences of monocular images.

    PubMed

    Sekkati, Hicham; Mitiche, Amar

    2006-03-01

    The purpose of this study is to investigate a variational method for joint multiregion three-dimensional (3-D) motion segmentation and 3-D interpretation of temporal sequences of monocular images. Interpretation consists of dense recovery of 3-D structure and motion from the image sequence spatiotemporal variations due to short-range image motion. The method is direct insomuch as it does not require prior computation of image motion. It allows movement of both viewing system and multiple independently moving objects. The problem is formulated following a variational statement with a functional containing three terms. One term measures the conformity of the interpretation within each region of 3-D motion segmentation to the image sequence spatiotemporal variations. The second term is of regularization of depth. The assumption that environmental objects are rigid accounts automatically for the regularity of 3-D motion within each region of segmentation. The third and last term is for the regularity of segmentation boundaries. Minimization of the functional follows the corresponding Euler-Lagrange equations. This results in iterated concurrent computation of 3-D motion segmentation by curve evolution, depth by gradient descent, and 3-D motion by least squares within each region of segmentation. Curve evolution is implemented via level sets for topology independence and numerical stability. This algorithm and its implementation are verified on synthetic and real image sequences. Viewers presented with anaglyphs of stereoscopic images constructed from the algorithm's output reported a strong perception of depth. PMID:16519351
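
    The three-term functional described above can be sketched in the following schematic form (notation assumed, not the authors' exact formulation):

```latex
E\big(\{\boldsymbol{\rho}_i\}, Z, \{\Gamma_i\}\big)
  = \sum_i \int_{R_i} \psi\big(r(\mathbf{x};\, \boldsymbol{\rho}_i, Z)\big)\, d\mathbf{x}
  \;+\; \lambda \int_{\Omega} \lVert \nabla Z \rVert \, d\mathbf{x}
  \;+\; \mu \sum_i \oint_{\Gamma_i} ds
```

    Here r is the residual of the rigid 3-D motion constraint on the image's spatiotemporal variations within region R_i (with rigid motion parameters rho_i and depth Z), so the first term measures conformity of the interpretation to the data; the second term regularizes depth over the image domain Omega; and the third penalizes the length of the segmentation boundaries Gamma_i. Minimizing via the Euler-Lagrange equations then yields exactly the alternation the abstract describes: curve evolution for the Gamma_i, gradient descent for Z, and least squares for each rho_i.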

  17. Faceless identification: a model for person identification using the 3D shape and 3D motion as cues

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Li, Haibo

    1999-02-01

    Person identification by using biometric methods based on image sequences, or still images, often requires a controllable and cooperative environment during the image capturing stage. In the forensic case the situation is more likely to be the opposite. In this work we propose a method that makes use of the anthropometry of the human body and human actions as cues for identification. Image sequences from surveillance systems are used, which can be seen as monocular image sequences. A 3D deformable wireframe body model is used as a platform to handle the non-rigid information of the 3D shape and 3D motion of the human body from the image sequence. A recursive method for estimating global motion and local shape variations is presented, using two recursive feedback systems.

  18. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations in camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a-priorly trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439

  19. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to around 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been the concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D contents as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. Time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology. PMID:26832611

  20. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We suggest a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images using a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850 nm) images are required to generate a single frame of the depth image. Captured video of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called 'motion blur' problem, even when the frame rate of the depth image is fast (e.g. 30 to 60 Hz). We propose a novel 'single shot' TOF 3D camera architecture generating a single depth image out of synchronously captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. As a result, the depth image is generated from the four simultaneously captured independent sub-IR images, hence the motion blur problem is cancelled. The resulting performance is very useful in applications of 3D cameras to human-machine interaction devices, such as user interfaces for TVs, monitors or handheld devices, and motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with the depth image simultaneously at the 'single shot' frame rate.
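
    The underlying 4-phase TOF reconstruction, which the single-shot architecture acquires in one exposure instead of sequentially, can be sketched as follows. This is the textbook continuous-wave TOF formula with an assumed phase convention, not the camera's actual processing:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a90, a180, a270, f_mod=20e6):
    """Depth from four phase-stepped IR samples (0/90/180/270 degrees),
    the standard 4-phase continuous-wave TOF reconstruction.

    The unambiguous range is c / (2 * f_mod), i.e. about 7.5 m at 20 MHz.
    """
    phase = np.mod(np.arctan2(a90 - a270, a0 - a180), 2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)

# Synthesize the four correlation samples for a target at 3 m, then recover it.
d_true = 3.0
f = 20e6
phi = 4 * np.pi * f * d_true / C                 # round-trip phase shift
offsets = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi  # the four demodulation offsets
a = 1.0 + 0.5 * np.cos(phi - offsets)             # offset + modulated amplitude
d_est = tof_depth(*a, f_mod=f)
```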

  1. The capture and recreation of 3D auditory scenes

    NASA Astrophysics Data System (ADS)

    Li, Zhiyun

    The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D directions digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraint. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating them by exploring the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build, and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach for headphone-based system. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.

  2. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole tongue motion. Experimental results show that use of combined information improves motion estimation near the tongue surface, a problem that has previously been reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown. PMID:24505742

  3. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing in intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each small element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
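    A minimal sketch of the per-cell estimation step described above: each moving grid cell can carry a constant-velocity state [x, y, vx, vy] updated by a linear Kalman filter. The matrices and noise levels below are illustrative assumptions, not values from the paper.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],          # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],           # only the cell position is observed
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                  # process noise (assumed)
R = 0.05 * np.eye(2)                  # measurement noise (assumed)

x = np.zeros(4)
P = np.eye(4)

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measurement z associated to this cell
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# feed a cell moving at 1 m/s along x
for k in range(1, 50):
    x, P = kf_step(x, P, np.array([k * dt * 1.0, 0.0]))
print(np.round(x[2], 2))   # estimated x-velocity, close to 1.0
```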

  4. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing in intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each small element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868

  5. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as the remote controllers with motion-sensing technology on the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games; a representative example is 'EyeToy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from intractable game controllers. Moreover, video-based HCI is crucial for communication between humans and computers since it is intuitive, accessible, and inexpensive. On the other hand, extracting semantic low-level features from video of human motion is still a major challenge: the achievable accuracy depends strongly on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports games, 'Angelina Jolie' in the movie Beowulf) and for analyzing motions in specific performances (e.g., a golf swing or walking). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a sub-body part and each row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting the corresponding columns. Unlike the low-level feature values of video human motion, the 3D motion-capture data matrix does not contain pixel values but is closer to a human level of semantics.
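    The column-selection idea can be sketched as follows; the channel names and layout below are hypothetical, not VICON's actual export format.

```python
import numpy as np

# Hypothetical layout: rows are frames, columns are (x, y, z) channels
# per marker; the channel names are illustrative only.
channels = ["hip_x", "hip_y", "hip_z", "knee_x", "knee_y", "knee_z"]
frames = np.arange(12, dtype=float).reshape(2, 6)   # 2 frames, 6 channels

def sub_body(data, channels, part):
    """Select all columns belonging to one sub-body part by name prefix."""
    idx = [i for i, c in enumerate(channels) if c.startswith(part)]
    return data[:, idx]

knee = sub_body(frames, channels, "knee")
print(knee.shape)   # (2, 3): the knee trajectory only
```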

  6. High Quality 3D data capture from UAV imagery

    NASA Astrophysics Data System (ADS)

    Haala, Norbert; Cramer, Michael; Rothermel, Mathias

    2014-05-01

    The flexible use of unmanned airborne systems is especially beneficial while aiming at data capture for geodetic-photogrammetric applications within areas of limited extent. This can include tasks like topographical mapping in the context of land management and consolidation or natural hazard mapping for the documentation of landslide areas. Our presentation discusses the suitability of UAV-systems for such tasks based on a pilot project for the Landesamt für Geoinformation und Landentwicklung Baden-Württemberg (LGL BW). This study evaluated the efficiency and accuracy of photogrammetric image collection by UAV-systems for demands of national mapping authorities. For this purpose the use of different UAV platforms and cameras for the generation of photogrammetric standard products like ortho images and digital surface models were evaluated. However, main focus of the presentation is the investigation of the quality potential of UAV-based 3D data capture at high resolution and accuracies. This is exemplary evaluated by the documentation of a small size (700x350m2) landslide area by a UAV flight. For this purpose the UAV images were used to generate 3D point clouds at a resolution of 5-8cm, which corresponds to the ground sampling distance GSD of the original images. This was realized by dense, pixel-wise matching algorithms both available in off-the-shelf and research software tools. Suitable results can especially be derived if large redundancy is available from highly overlapping image blocks. Since UAV images can be collected easily at a high overlap due to their low cruising speed. Thus, our investigations clearly demonstrated the feasibility of relatively simple UAV-platforms and cameras for 3D point determination close to the sub-pixel level.

  7. Compression of point-texture 3D motion sequences

    NASA Astrophysics Data System (ADS)

    Song, In-Wook; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

    In this work, we propose two compression algorithms for PointTexture 3D sequences: the octree-based scheme and the motion-compensated prediction scheme. The first scheme represents each PointTexture frame hierarchically using an octree. The geometry information in the octree nodes is encoded by the predictive partial matching (PPM) method. The encoder supports the progressive transmission of the 3D frame by transmitting the octree nodes in a top-down manner. The second scheme adopts motion-compensated prediction to exploit the temporal correlation in 3D sequences. It first divides each frame into blocks, and then estimates the motion of each block using the block matching algorithm. In contrast to motion-compensated 2D video coding, the prediction residual may take more bits than the original signal. Thus, in our approach, motion compensation is used only for the blocks that can be replaced by the matching blocks. The other blocks are PPM-encoded. Extensive simulation results demonstrate that the proposed algorithms provide excellent compression performance.
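    The mode decision can be sketched on binary occupancy slices: a block is motion-compensated only if some displaced block in the previous frame reproduces it exactly; otherwise it falls back to PPM (intra) coding. Block and search sizes here are illustrative, not the paper's settings.

```python
import numpy as np

def choose_mode(prev, curr, block=4, search=2):
    """For each block, use motion compensation only if a displaced block
    in the previous frame reproduces it exactly; otherwise the block
    would be PPM-encoded (flagged 'intra' here)."""
    H, W = curr.shape
    modes = []
    for by in range(0, H, block):
        for bx in range(0, W, block):
            tgt = curr[by:by + block, bx:bx + block]
            found = None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        if np.array_equal(prev[y:y + block, x:x + block], tgt):
                            found = (dy, dx)
            modes.append(("mc", found) if found else ("intra", None))
    return modes

prev = np.zeros((8, 8), dtype=int)
prev[0:4, 0:4] = 1                 # a small occupied patch
curr = np.zeros((8, 8), dtype=int)
curr[0:4, 1:5] = 1                 # the patch shifted right by one voxel

modes = choose_mode(prev, curr)
print([m for m, _ in modes])       # ['intra', 'mc', 'mc', 'mc']
```

    Only the block straddling the patch edge has no exact match and would be PPM-encoded; the rest are replaced by motion vectors.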

  8. Motion capture for human motion measuring by using single camera with triangle markers

    NASA Astrophysics Data System (ADS)

    Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi

    2005-12-01

    This study aims to realize motion capture for measuring 3D human motion using a single camera. Although motion capture using multiple cameras is widely used in the sports, medical and engineering fields, an optical motion capture method using one camera has not been established. In this paper, the authors achieve 3D motion capture using one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration methods provide the 3D coordinate transformation parameters and a lens distortion parameter via the modified DLT method. The triangle markers make it possible to calculate the depth coordinate in the camera frame. In experiments measuring 3D position with the MMC in a cubic measurement space 2 m on each side, the average error in the measured centroid of a triangle marker was less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by putting a triangle marker on each human joint, the MMC was able to capture a walking motion, a standing-up motion, and a bending and stretching motion. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate the position of a marker by measuring its velocity was proposed in order to improve the accuracy of the MMC.
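    The depth cue a known-size marker provides can be sketched in its simplest form: under a pinhole model, a side of known physical length L that appears l pixels long (and lies roughly parallel to the image plane) is at depth Z = f * L / l. This is only the degenerate fronto-parallel case; the real method solves the full 3D pose of the triangle.

```python
def depth_from_side(focal_px, side_m, side_px):
    """Pinhole depth from the apparent length of a side of known size
    (fronto-parallel approximation; illustrative only)."""
    return focal_px * side_m / side_px

# focal length 800 px, 5 cm side imaged at 20 px -> 2 m away
print(depth_from_side(800.0, 0.05, 20.0))   # 2.0
```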

  9. Learning Projectile Motion with the Computer Game ``Scorched 3D``

    NASA Astrophysics Data System (ADS)

    Jurcevic, John S.

    2008-01-01

    For most of our students, video games are a normal part of their lives. We should take advantage of this medium to teach physics in a manner that is engrossing for our students. In particular, modern video games incorporate accurate physics in their game engines, and they allow us to visualize the physics through flashy and captivating graphics. I recently used the game "Scorched 3D" to help my students understand projectile motion.
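    The drag-free projectile relations the game lets students explore can be checked directly with the standard kinematic formulas:

```python
import math

def projectile(v0, angle_deg, g=9.81):
    """Flight time, range and maximum height of a drag-free projectile
    launched at speed v0 and the given elevation angle."""
    a = math.radians(angle_deg)
    t_flight = 2 * v0 * math.sin(a) / g
    rng = v0 ** 2 * math.sin(2 * a) / g
    h_max = (v0 * math.sin(a)) ** 2 / (2 * g)
    return t_flight, rng, h_max

t, r, h = projectile(20.0, 45.0)
print(round(r, 2))   # 40.77 m; range is maximal at 45 degrees
```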

  10. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing them to ground truth digital and physical phantom images. The performances of 4DCBCT- and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  11. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluated the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performances of 4DCBCT-based and 4DCT-based motion models were compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.

  12. Feasibility Study for Ballet E-Learning: Automatic Composition System for Ballet "Enchainement" with Online 3D Motion Data Archive

    ERIC Educational Resources Information Center

    Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako

    2009-01-01

    This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…

  13. 3D Guided Wave Motion Analysis on Laminated Composites

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Ultrasonic guided waves have proved useful for structural health monitoring (SHM) and nondestructive evaluation (NDE) due to their ability to propagate long distances with less energy loss compared to bulk waves and due to their sensitivity to small defects in the structure. Analysis of actively transmitted ultrasonic signals has long been used to detect and assess damage. However, there remain many challenging tasks for guided wave based SHM due to the complexity involved with propagating guided waves, especially in the case of composite materials. The multimodal nature of ultrasonic guided waves complicates the related damage analysis. This paper presents results from parallel 3D elastodynamic finite integration technique (EFIT) simulations used to acquire 3D wave motion in laminated carbon fiber reinforced polymer composites. The acquired 3D wave motion is then analyzed by frequency-wavenumber analysis to study wave propagation and interaction in the composite laminate. The frequency-wavenumber analysis enables the study of individual modes and visualization of mode conversion. Delamination damage has been incorporated into the EFIT model to generate "damaged" data. The potential for damage detection in laminated composites is discussed at the end.
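    In its simplest form, frequency-wavenumber analysis is a 2D Fourier transform of a space-time wavefield: each propagating mode appears as energy along its dispersion curve. This synthetic single-mode sketch (assumed velocity and sampling, not the paper's EFIT data) recovers the phase velocity from the location of the spectral peak.

```python
import numpy as np

c = 1500.0                       # assumed phase velocity, m/s
dx, dt = 0.01, 1e-5              # spatial / temporal sampling
x = np.arange(128) * dx
t = np.arange(256) * dt
f0 = 20e3                        # a 20 kHz single-mode wave
k0 = 2 * np.pi * f0 / c
u = np.sin(k0 * x[:, None] - 2 * np.pi * f0 * t[None, :])

U = np.abs(np.fft.fft2(u))               # frequency-wavenumber spectrum
ks = np.fft.fftfreq(x.size, dx)          # spatial frequency, cycles/m
fs = np.fft.fftfreq(t.size, dt)          # temporal frequency, Hz
i, j = np.unravel_index(np.argmax(U), U.shape)
c_est = abs(fs[j] / ks[i])               # slope of the spectral ridge
print(c_est)                             # recovered phase velocity ~1500 m/s
```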

  14. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) needs reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is rather fast, as in vehicle movement, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can easily be modified for different working areas from 100 mm to 10 m. The developed motion capture system uses two to four machine-vision cameras to acquire video sequences of object motion. All cameras work synchronously at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurately calculating the 3D coordinates of points of interest. The system has been used in a number of different application fields and has demonstrated high accuracy and a high level of automation.

  15. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind it, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling of operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence with the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly the training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate simulations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  16. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a method for three-dimensional (3D) measurement of forearm and upper arm movement during the baseball pitching motion using inertial sensors, without requiring careful sensor installation. Although highly accurate measurement of sports motion is currently achieved with optical motion capture systems, these have disadvantages such as the need for camera calibration and restrictions on the measurement location. In contrast, the proposed method for 3D measurement of the pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm through integration of the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm coincides with that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. Experimental measurements of the pitching motion show that the trajectories of the shoulder, elbow and wrist estimated by the proposed method are highly correlated with those from a motion capture system, within an estimation error of about 10%.
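    The endpoint-based correction of integration error can be sketched in one dimension. The quadratic redistribution of the final-position error below is an assumed drift model (appropriate for a constant accelerometer bias), not the authors' exact scheme.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 1.0, dt)
true_acc = np.sin(2 * np.pi * t) * 2.0   # synthetic limb acceleration
bias = 0.05                              # sensor bias causing drift
meas_acc = true_acc + bias

def integrate(a, dt):
    """Double integration of acceleration to velocity and position."""
    v = np.cumsum(a) * dt
    p = np.cumsum(v) * dt
    return v, p

_, p_true = integrate(true_acc, dt)
_, p_meas = integrate(meas_acc, dt)

# endpoint correction: the final-position error is known from the end
# constraint, and a constant bias makes the drift grow ~ t^2, so
# redistribute the error quadratically along the trajectory
err = p_meas[-1] - p_true[-1]
p_corr = p_meas - err * (t / t[-1]) ** 2
print(abs(p_corr[-1] - p_true[-1]) < 1e-9)   # True: endpoints now agree
```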

  17. The Visual Priming of Motion-Defined 3D Objects

    PubMed Central

    Jiang, Xiong; Jiang, Yang

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus—but not a static image or a semantic stimulus—that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed. PMID:26658496

  18. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, in which we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, all of which can deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to vessel centerlines and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to the respiratory phase and fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred from the respiratory phase. Testing our method on real patient data, we found that the 3D position error is within 3.79 mm and the processing time during tracking is 5.4 ms.

  19. 4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Berbeco, Ross; Lewis, John

    2015-03-01

    A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) to the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For physical phantom datasets, the average (95th percentile) tumor localization error (TLE) in two datasets was 0.95 (2.2) mm. For digital phantoms, assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time compared to planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, while they were 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones, respectively. 4DCBCT-based models were shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time. Thus, generating 3D fluoroscopic images based on 4DCBCT-based motion models can capture both inter- and intra-fraction anatomical changes during treatment.
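    The PCA motion-model construction can be sketched with toy data standing in for real DVFs: stack the per-phase DVFs as rows, subtract the mean, and keep the leading principal components; any phase is then mean + sum_k w_k * PC_k, and at treatment time the weights w_k are what gets optimized against the measured 2D projections.

```python
import numpy as np

rng = np.random.default_rng(0)
n_phases, n_voxels = 10, 500           # toy sizes, not clinical ones
latent = rng.normal(size=(n_phases, 2))
basis = rng.normal(size=(2, n_voxels))
dvfs = latent @ basis                  # DVFs that truly lie in a 2D subspace

mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
pcs = Vt[:2]                           # leading two principal components

# reconstruct one phase from its two PCA weights
w = (dvfs[0] - mean) @ pcs.T
recon = mean + w @ pcs
print(np.allclose(recon, dvfs[0]))     # True: 2 PCs capture this toy motion
```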

  20. Brownian motion using video capture

    NASA Astrophysics Data System (ADS)

    Salmon, Reese; Robbins, Candace; Forinash, Kyle

    2002-05-01

    Although other researchers had previously observed the random motion of pollen grains suspended in water through a microscope, Robert Brown's name is associated with this behaviour based on observations he made in 1828. It was not until Einstein's work in the early 1900s, however, that the origin of this irregular motion was established to be the result of collisions with molecules so small as to be invisible in a light microscope (Einstein A 1965 Investigations on the Theory of the Brownian Movement ed R Furth (New York: Dover) (transl. Cowper A D) (5 papers)). Jean Perrin in 1908 (Perrin J 1923 Atoms (New York: Van Nostrand-Reinhold) (transl. Hammick D)) was able, through a series of painstaking experiments, to establish the validity of Einstein's equation. We describe here the details of a junior-level undergraduate physics laboratory experiment in which students used a microscope, a video camera and video capture software to verify Einstein's famous calculation of 1905.
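    The analysis students perform on the captured tracks — fitting mean-squared displacement against time, which Einstein's relation predicts as <r^2> = 4Dt for 2D motion — can be sketched on simulated trajectories (the diffusion coefficient and sampling below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 0.5                                  # true diffusion coefficient (assumed)
dt, n_steps, n_particles = 0.1, 200, 2000
# independent Gaussian steps with variance 2*D*dt per axis
steps = rng.normal(0, np.sqrt(2 * D * dt), size=(n_particles, n_steps, 2))
paths = np.cumsum(steps, axis=1)

t = np.arange(1, n_steps + 1) * dt
msd = (paths ** 2).sum(axis=2).mean(axis=0)   # <x^2 + y^2> over particles
D_est = np.polyfit(t, msd, 1)[0] / 4          # slope of MSD vs t, over 4
print(round(D_est, 2))                        # close to the true D = 0.5
```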

  1. Reconstructing 3-D Ship Motion for Synthetic Aperture Sonar Processing

    NASA Astrophysics Data System (ADS)

    Thomsen, D. R.; Chadwell, C. D.; Sandwell, D.

    2004-12-01

    We are investigating the feasibility of coherent ping-to-ping processing of multibeam sonar data for high-resolution mapping and change detection in the deep ocean. Theoretical calculations suggest that standard multibeam resolution can be improved from 100 m to ~10 m through coherent summation of pings, similar to synthetic aperture radar image formation. A requirement for coherent summation of pings is to correct the phase of the return echoes to an accuracy of ~3 cm at a sampling rate of ~10 Hz. In September 2003, we conducted a seagoing experiment aboard R/V Revelle to test these ideas. Three geodetic-quality GPS receivers were deployed to recover 3-D ship motion to an accuracy of ±3 cm at a 1 Hz sampling rate [Chadwell and Bock, GRL, 2001]. Additionally, inertial navigation system (INS) data from fiber-optic gyroscopes and pendulum-type accelerometers were collected at a 10 Hz rate. Independent measurements of ship orientation (yaw, pitch, and roll) from the GPS and INS show agreement to an RMS accuracy of better than 0.1 degree. Because inertial navigation hardware is susceptible to drift, these measurements were combined with the GPS to achieve both high accuracy and a high sampling rate. To preserve the short-timescale accuracy of the INS and the long-timescale accuracy of the GPS measurements, time-filtered differences between the GPS and INS were subtracted from the INS integrated linear velocities. An optimal filter length of 25 s was chosen to force the RMS difference between the GPS and the integrated INS to be on the order of the accuracy of the GPS measurements. This analysis provides an upper bound on 3-D ship motion accuracy. Additionally, errors in attitude propagate into the projected motion of individual hydrophones; with lever arms on the order of 5 m, these errors will likely be ~1 mm. Based on these analyses, we expect to achieve the 3-cm accuracy requirement.
Using full-resolution hydrophone data collected by a SIMRAD EM/120 echo sounder
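    The GPS/INS blending described above is essentially a complementary filter: low-pass the (INS - GPS) difference and subtract it from the INS track, keeping the INS short-term detail and the GPS long-term accuracy. This sketch uses synthetic signals and a simple moving average as the 25 s time filter; the drift and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 10.0                                   # INS rate, Hz
t = np.arange(0, 200, 1 / fs)
truth = np.sin(0.1 * t)                     # slow ship motion, say
ins = truth + 0.002 * t                     # INS drifts slowly
gps = truth + rng.normal(0, 0.03, t.size)   # GPS is unbiased but noisy

win = int(25 * fs)                          # 25 s filter length, as in the text
kernel = np.ones(win) / win
diff_lp = np.convolve(ins - gps, kernel, mode="same")
blended = ins - diff_lp                     # drift removed, detail preserved

def rms(e):
    return np.sqrt(np.mean(e ** 2))

print(rms(blended - truth) < rms(ins - truth))   # True: the blend beats raw INS
```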

  2. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    Integrating 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural-network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and problems encountered.

  3. Automation of 3D scan data capturing and processing

    NASA Astrophysics Data System (ADS)

    Sitnik, Robert; Karaszewski, Maciej; Załuski, Wojciech; Rutkiewicz, Jan

    2010-02-01

    In this paper a fully automated 3D shape measurement and processing method is presented. It assumes that positioning of the measurement head in relation to the measured object can be realized by a specialized computer-controlled manipulator. On the basis of existing 3D scans, the proposed method calculates the "next best view" position for the measurement head. All 3D data processing (filtering, ICP-based fitting, and final view integration) is performed automatically. The final 3D model is created on the basis of user-specified parameters such as accuracy of surface representation or density of surface sampling. An exemplary system that implements all of these functions is presented. The goal of this system is to measure a whole object automatically (without any user intervention) and rapidly (reducing days and weeks to hours), with some limitations on the object's properties: the maximum measurement volume is a cylinder of 2.5 m height and 1 m radius, and the maximum object weight is 2 tons. The measurement head is automatically calibrated by the system, and its working volume ranges from 120 mm × 80 mm × 60 mm up to 1.2 m × 0.8 m × 0.6 m. An exemplary measurement result is presented.

  4. Samba: A Real-Time Motion Capture System Using Wireless Camera Sensor Networks

    PubMed Central

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-01-01

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable, low-cost wireless camera sensor network. The proposed system performs motion capture using a data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments. PMID:24658618

  5. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse-camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  6. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest depth cue of the brain, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacements in the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
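The disparity-depth relation underlying the retinal-disparity cue is simple to state: for a rectified camera pair with focal length f (in pixels) and baseline B, an object observed at disparity d pixels lies at depth Z = fB/d. A minimal sketch (function name and parameter values are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from horizontal disparity in a rectified stereo pair.

    Z = f * B / d: objects at different depths project with different
    horizontal displacements d between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 6.5 cm baseline, 13 px disparity -> 5 m.
```

Note the inverse relation: disparity falls off as 1/Z, which is why depth resolution degrades for distant objects, one of the effects a depth transfer curve would characterize.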

  7. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo image using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement a 3D exaggeration algorithm for the region of interest (ROI), which adjusts and synthesizes the disparity values of the ROI in real time. We discuss the aperture pattern used for deblurring of the CMOS camera module, based on the Kirchhoff diffraction formula, and clarify why a sharper and clearer image can be obtained by blocking some portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI-emphasizing effect.

  8. 3D motion of DNA-Au nanoconjugates in graphene liquid cell electron microscopy.

    PubMed

    Chen, Qian; Smith, Jessica M; Park, Jungwon; Kim, Kwanpyo; Ho, Davy; Rasool, Haider I; Zettl, Alex; Alivisatos, A Paul

    2013-09-11

    Liquid-phase transmission electron microscopy (TEM) can probe and visualize dynamic events with structural or functional details at the nanoscale in a liquid medium. Earlier efforts have focused on the growth and transformation kinetics of hard material systems, relying on their stability under the electron beam. Our recently developed graphene liquid cell technique pushed the spatial resolution of such imaging to the atomic scale but still focused on growth trajectories of metallic nanocrystals. Here, we adopt this technique to image the three-dimensional (3D) dynamics of soft materials instead, with double-stranded DNA (dsDNA) connecting Au nanocrystals as one example, at nanometer resolution. We demonstrate first that a graphene liquid cell can seal an aqueous sample solution, of a lower vapor pressure than previously investigated, well against the high vacuum in the TEM. Then, from quantitative analysis of real-time nanocrystal trajectories, we show that the status and configuration of the dsDNA dictate the motions of the linked nanocrystals throughout the imaging time of minutes. This sustained connecting ability of dsDNA enables this unprecedented continuous imaging of its dynamics via TEM. Furthermore, the inert graphene surface minimizes sample-substrate interaction and allows the whole nanostructure to rotate freely in the liquid environment; we thus develop and implement the reconstruction of the 3D configuration and motions of the nanostructure from the series of 2D projected TEM images captured while it rotates. In addition to further proving the structural stability of the nanoconjugate, this reconstruction demonstrates 3D dynamic imaging by TEM beyond its conventional use in seeing a flattened and dry sample. Altogether, we foresee the new and exciting use of graphene liquid cell TEM in imaging 3D biomolecular transformations or interaction dynamics at nanometer resolution. PMID:23944844

  9. Determining 3-D motion and structure from image sequences

    NASA Technical Reports Server (NTRS)

    Huang, T. S.

    1982-01-01

    A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which the motion and structure parameters are determined by solving a set of eight linear equations and computing a singular value decomposition of a 3×3 matrix. It is shown that the solution thus obtained is unique.
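The eight linear equations in this method come from the epipolar constraint p'ᵀEp = 0, one equation per point correspondence, linear in the nine entries of the essential matrix E. A sketch of building that coefficient matrix (the subsequent null-space solve and the SVD of the 3×3 result are omitted; names are illustrative):

```python
def epipolar_rows(correspondences):
    """Build the 8x9 coefficient matrix of the eight-point method.

    Each correspondence ((x, y), (xp, yp)) between the two frames yields
    one linear equation  p'^T E p = 0  in the nine entries of E (taken in
    row-major order).  With eight correspondences E is determined up to
    scale; the actual solve (null vector of this matrix, then an SVD of
    the resulting 3x3 E) is omitted in this sketch.
    """
    rows = []
    for (x, y), (xp, yp) in correspondences:
        rows.append([xp * x, xp * y, xp,
                     yp * x, yp * y, yp,
                     x, y, 1.0])
    return rows
```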

  10. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvements, accompanied by decreasing costs, are opening them up for three-dimensional (3D) motion analysis in sport gesture studies and quantitative athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since both land and underwater cameras are required. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846
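The accuracy figures above come from comparing reconstructed inter-marker distances on the rigid bar against the known separation. A minimal sketch of that check, with hypothetical reconstructed coordinates (not data from the study):

```python
import math

def mean_length_error(marker_pairs, true_length_mm):
    """Mean absolute error of reconstructed inter-marker distances.

    marker_pairs   -- list of ((x1, y1, z1), (x2, y2, z2)) reconstructed
                      3-D positions (mm) of the two markers on the bar.
    true_length_mm -- known marker separation on the rigid bar.
    """
    errors = []
    for p, q in marker_pairs:
        d = math.dist(p, q)  # Euclidean distance (Python >= 3.8)
        errors.append(abs(d - true_length_mm))
    return sum(errors) / len(errors)
```

Averaging over many bar positions spread through the working volume makes the figure representative of the whole calibrated space rather than one favourable spot.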

  11. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvements, accompanied by decreasing costs, are opening them up for three-dimensional (3D) motion analysis in sport gesture studies and quantitative athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since both land and underwater cameras are required. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  12. Validation of INSAT-3D atmospheric motion vectors for monsoon 2015

    NASA Astrophysics Data System (ADS)

    Sharma, Priti; Rani, S. Indira; Das Gupta, M.

    2016-05-01

    Atmospheric motion vectors (AMVs) over the Indian Ocean and the surrounding region are one of the most important sources of tropospheric wind information assimilated in numerical weather prediction (NWP) systems. Earlier studies showed that the quality of AMVs from the Indian geostationary satellite Kalpana-1 was not comparable to that of other geostationary satellites over this region, and hence they were not used in NWP systems. The Indian satellite INSAT-3D was successfully launched on July 26, 2013 with an upgraded imaging system compared to that of the previous Indian satellite Kalpana-1. INSAT-3D has a middle-infrared band (3.80-4.00 μm) capable of night-time imaging of low clouds and fog. Three consecutive images at 30-minute intervals are used to derive the AMVs. A new height assignment scheme (using the NWP first guess and replacing the old empirical GA method), along with a modified quality control scheme, was implemented for deriving the INSAT-3D AMVs. In this paper an attempt has been made to validate these AMVs against in-situ observations as well as against NCMRWF's NWP first guess for monsoon 2015. For validation purposes, the AMVs are subdivided into three pressure layers in the vertical: low (1000-700 hPa), middle (700-400 hPa) and high (400-100 hPa). Several statistics, such as the normalized root mean square vector difference and biases, have been computed over different latitudinal belts. Results show that the general mean monsoon circulation, along with all the transient monsoon systems, is well captured by the INSAT-3D AMVs, and that the error statistics (RMSE etc.) of the INSAT-3D AMVs are now comparable to those of other geostationary satellites.

  13. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the time-of-flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of the optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The proposed novel optical shutter device enables the capture of a full HD depth image with depth accuracy on the mm scale, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, the 3D camera system prototype and image test results.
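The time-of-flight principle behind the optical shutter converts a measured modulation phase shift into depth: over the round trip, light modulated at frequency f returns with phase delay Δφ = 4πf·d/c, so d = c·Δφ/(4πf). A sketch using the 20 MHz modulation frequency from the abstract (the function name is an illustrative assumption):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(phase_rad, mod_freq_hz=20e6):
    """Depth from the phase shift of an amplitude-modulated signal (TOF).

    The reflected light returns with round-trip phase delay
    dphi = 4*pi*f*d/c, so d = c*dphi / (4*pi*f).  At 20 MHz the
    unambiguous range is c / (2*f), about 7.5 m; beyond that the
    phase wraps and depths alias.
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)
```

The mm-scale accuracy quoted above then corresponds to resolving the modulation phase to a small fraction of a degree.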

  14. LV motion tracking from 3D echocardiography using textural and structural information.

    PubMed

    Myronenko, Andriy; Song, Xubo; Sahn, David J

    2007-01-01

    Automated motion reconstruction of the left ventricle (LV) from 3D echocardiography provides insight into myocardium architecture and function. Low image quality and artifacts make 3D ultrasound image processing a challenging problem. We introduce a LV tracking method, which combines textural and structural information to overcome the image quality limitations. Our method automatically reconstructs the motion of the LV contour (endocardium and epicardium) from a sequence of 3D ultrasound images. PMID:18044597

  15. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  16. Tracking 3-D body motion for docking and robot control

    NASA Technical Reports Server (NTRS)

    Donath, M.; Sorensen, B.; Yang, G. B.; Starr, R.

    1987-01-01

    An advanced method of tracking the three-dimensional motion of bodies has been developed. This system has the potential to dynamically characterize machine and other structural motion, even in the presence of structural flexibility, thus facilitating closed-loop structural motion control. The system's operation is based on the concept that the intersection of three planes defines a point. Three rotating planes of laser light, fixed and moving photovoltaic diode targets, and a pipelined architecture of analog and digital electronics are used to locate multiple targets, whose number is limited only by available computer memory. Data collection rates are a function of the laser scan rotation speed and are currently selectable up to 480 Hz. The tested performance of a preliminary prototype designed for 0.1-inch accuracy (for tracking human motion) at a 480 Hz data rate includes a worst-case resolution of 0.8 mm (0.03 inches), a repeatability of ±0.635 mm (±0.025 inches), and an absolute accuracy of ±2.0 mm (±0.08 inches) within an eight-cubic-meter volume, with all results applicable at the 95 percent level of confidence along each coordinate direction. The full six degrees of freedom of a body can be computed by attaching three or more target detectors to the body of interest.
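The geometric core of the system, three planes intersecting in a point, reduces to a 3×3 linear solve. A minimal sketch using Cramer's rule; the plane representation n·x = d and the function names are assumptions for illustration, not the system's actual computation:

```python
def plane_intersection(planes):
    """Intersection point of three planes n . x = d (Cramer's rule).

    planes -- three (nx, ny, nz, d) tuples.  Raises ValueError if the
    plane normals are (near-)coplanar and no unique point exists.
    """
    (a1, b1, c1, d1), (a2, b2, c2, d2), (a3, b3, c3, d3) = planes

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]]
    det = det3(A)
    if abs(det) < 1e-12:
        raise ValueError("planes do not meet in a single point")
    point = []
    for col in range(3):
        M = [row[:] for row in A]
        for i, d in enumerate((d1, d2, d3)):
            M[i][col] = d  # replace one column with the constants
        point.append(det3(M) / det)
    return tuple(point)
```

In the real system each rotating laser plane's equation follows from its measured scan angle at the instant the photodiode target is illuminated.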

  17. Rigid Body Motion in Stereo 3D Simulation

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2010-01-01

    This paper addresses the difficulties experienced by first-grade students studying rigid body motion at Sofia University. Most quantities describing the rigid body are in relations that the students find hard to visualize and understand. They also lose the notion of cause-result relations between vector quantities, such as the relation between…

  18. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique that can directly obtain a target's trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time-delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from a complex background and decrease the complexity of moving-target image processing. Time-delay integration increases the information in a single image frame so that one can directly obtain the moving trajectory. In this paper, we study the algorithm behind flash trajectory imaging and report initial experiments that successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can yield the motion parameters of moving targets.

  19. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of 3D display images and videos.

  20. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Fenster, Aaron; Bax, Jeffrey; Gardi, Lori; Romagnoli, Cesare; Samarabandu, Jagath; Ward, Aaron D.

    2012-02-01

    Prostate biopsy is the clinical standard for prostate cancer diagnosis. To improve the accuracy of targeting suspicious locations, systems have been developed that can plan and record biopsy locations in a 3D TRUS image acquired at the beginning of the procedure. Some systems are designed for maximum compatibility with existing ultrasound equipment and are thus designed around the use of a conventional 2D TRUS probe, using controlled axial rotation of this probe to acquire a 3D TRUS reference image at the start of the biopsy procedure. Prostate motion during the biopsy procedure causes misalignments between the prostate in the live 2D TRUS images and the pre-acquired 3D TRUS image. We present an image-based rigid registration technique that aligns live 2D TRUS images, acquired immediately prior to biopsy needle insertion, with the pre-acquired 3D TRUS image to compensate for this motion. Our method was validated using 33 manually identified intrinsic fiducials in eight subjects and the target registration error was found to be 1.89 mm. We analysed the suitability of two image similarity metrics (normalized cross correlation and mutual information) for this task by plotting these metrics as a function of varying parameters in the six degree-of-freedom transformation space, with the ground truth plane obtained from registration as the starting point for the parameter exploration. We observed a generally convex behaviour of the similarity metrics. This encourages their use for this registration problem, and could assist in the design of a tool for the detection of misalignment, which could trigger the execution of a non-real-time registration, when needed during the procedure.
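Of the two similarity metrics analysed, normalized cross-correlation has the simpler closed form: the correlation of the two intensity patches after removing their means and scales. A minimal 1-D sketch over flattened intensity samples (the registration itself would evaluate this over 2D planes resampled from the 3D TRUS volume):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity lists.

    Returns a value in [-1, 1]; 1 means the two patches differ only by
    a gain/offset intensity change, which makes NCC well suited to
    mono-modal (ultrasound-to-ultrasound) alignment.
    """
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)
```

The convex behaviour reported above means that maximizing this value over the six rigid-body parameters is well conditioned near the correct alignment.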

  1. Simple 3-D stimulus for motion parallax and its simulation.

    PubMed

    Ono, Hiroshi; Chornenkyy, Yevgen; D'Amour, Sarah

    2013-01-01

    Simulation of a given stimulus situation should produce the same perception as the original. Rogers et al (2009 Perception 38 907-911) simulated Wheeler's (1982, PhD thesis, Rutgers University, NJ) motion parallax stimulus and obtained quite different perceptions. Wheeler's observers were unable to reliably report the correct direction of depth, whereas Rogers's were. With three experiments we explored the possible reasons for the discrepancy. Our results suggest that Rogers was able to see depth from the simulation partly due to his experience seeing depth with random dot surfaces. PMID:23964382

  2. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers across all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated -- from providing increased and potentially stealthy surveillance and removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (the original data format) and then constructs a 3D model (the new data medium) using structure-from-motion. The generated 3D model provides warfighters additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data, with Google Earth™ providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.

  3. Nonrigid Autofocus Motion Correction for Coronary MR Angiography with a 3D Cones Trajectory

    PubMed Central

    Ingle, R. Reeve; Wu, Holden H.; Addy, Nii Okai; Cheng, Joseph Y.; Yang, Phillip C.; Hu, Bob S.; Nishimura, Dwight G.

    2014-01-01

    Purpose: To implement a nonrigid autofocus motion correction technique to improve respiratory motion correction of free-breathing whole-heart coronary magnetic resonance angiography (CMRA) acquisitions using an image-navigated 3D cones sequence. Methods: 2D image navigators acquired every heartbeat are used to measure superior-inferior, anterior-posterior, and right-left translation of the heart during a free-breathing CMRA scan using a 3D cones readout trajectory. Various tidal respiratory motion patterns are modeled by independently scaling the three measured displacement trajectories. These scaled motion trajectories are used for 3D translational compensation of the acquired data, and a bank of motion-compensated images is reconstructed. From this bank, a gradient entropy focusing metric is used to generate a nonrigid motion-corrected image on a pixel-by-pixel basis. The performance of the autofocus motion correction technique is compared with rigid-body translational correction and no correction in phantom, volunteer, and patient studies. Results: Nonrigid autofocus motion correction yields improved image quality compared to rigid-body-corrected images and uncorrected images. Quantitative vessel sharpness measurements indicate superiority of the proposed technique in 14 out of 15 coronary segments from three patient and two volunteer studies. Conclusion: The proposed technique corrects nonrigid motion artifacts in free-breathing 3D cones acquisitions, improving image quality compared to rigid-body motion correction. PMID:24006292
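The gradient-entropy focusing metric used to select among the motion-compensated reconstructions can be sketched as follows: normalize the image's gradient magnitudes into a distribution and compute its entropy; sharper (better-corrected) images concentrate their gradient energy in fewer pixels and score lower. This is a minimal global sketch of the general form with forward differences, an assumption rather than the paper's pixel-wise implementation:

```python
import math

def gradient_entropy(img):
    """Gradient-entropy focus metric for a 2-D image (list of rows).

    Forward-difference gradient magnitudes are normalized into a
    distribution p, and the entropy -sum(p * ln p) is returned.
    A sharper image concentrates its gradient energy in fewer pixels,
    lowering the entropy, so autofocus correction minimizes this value.
    """
    grads = []
    for i in range(len(img) - 1):
        for j in range(len(img[0]) - 1):
            gx = img[i][j + 1] - img[i][j]
            gy = img[i + 1][j] - img[i][j]
            grads.append(math.hypot(gx, gy))
    total = sum(grads)
    if total == 0.0:
        return 0.0  # flat image: no gradient information
    return -sum((g / total) * math.log(g / total) for g in grads if g > 0)
```

Evaluating such a metric per pixel over the bank of translation-compensated images is what lets the correction vary spatially, i.e. handle nonrigid motion.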

  4. Low-cost structured-light based 3D capture system design

    NASA Astrophysics Data System (ADS)

    Dong, Jing; Bengtson, Kurt R.; Robinson, Barrett F.; Allebach, Jan P.

    2014-03-01

    Most of the 3D capture products currently on the market are high-end and pricey. They are not targeted at consumers, but rather at research, medical, or industrial usage. Very few aim to provide a solution for home and small-business applications. Our goal is to fill this gap by using only low-cost components to build a 3D capture system that can satisfy the needs of this market segment. In this paper, we present a low-cost 3D capture system based on the structured-light method. The system is built around the HP TopShot LaserJet Pro M275. For our capture device, we use the 8.0 Mpixel camera that is part of the M275. We augment this hardware with two 3M MPro 150 VGA (640 × 480) pocket projectors. We also describe an analytical approach to predicting the achievable resolution of the reconstructed 3D object based on differentials and small-signal theory, and an experimental procedure for validating that the system under test meets the specifications for reconstructed-object resolution predicted by our analytical model. By comparing our experimental measurements from the camera-projector system with the simulation results based on the model for this system, we conclude that our prototype system has been correctly configured and calibrated. We also conclude that, with the analytical models, we have an effective means of specifying system parameters to achieve a given target resolution for the reconstructed object.

  5. Phenotyping transgenic embryos: a rapid 3-D screening method based on episcopic fluorescence image capturing.

    PubMed

    Weninger, Wolfgang Johann; Mohun, Timothy

    2002-01-01

    We describe a technique suitable for routine three-dimensional (3-D) analysis of mouse embryos that is based on episcopic fluorescence images captured during serial sectioning of wax-embedded specimens. We have used this procedure to describe the cardiac phenotype and associated blood vessels of trisomic 16 (Ts16) and Cited2-null mutant mice, as well as the expression pattern of an Myf5 enhancer/beta-galactosidase transgene. The consistency of the images and their precise alignment are ideally suited for 3-D analysis using video animations, virtual resectioning or commercial 3-D reconstruction software packages. Episcopic fluorescence image capturing (EFIC) provides a simple and powerful tool for analyzing embryo and organ morphology in normal and transgenic embryos. PMID:11743576

  6. Motion-Corrected 3D Sonic Anemometer for Tethersondes and Other Moving Platforms

    NASA Technical Reports Server (NTRS)

    Bognar, John

    2012-01-01

    To date, it has not been possible to use 3D sonic anemometers on tethersondes or similar atmospheric research platforms due to the motion of the supporting platform. A tethersonde module including both a 3D sonic anemometer and associated motion correction sensors has been developed, enabling motion-corrected 3D winds to be measured from a moving platform such as a tethersonde. Blimps and other similar lifting systems are used to support tethersondes, meteorological devices that fly on the tether of a blimp or similar platform. To date, tethersondes have been limited to making basic meteorological measurements (pressure, temperature, humidity, and wind speed and direction). The motion of the tethersonde has precluded the addition of 3D sonic anemometers, which can be used for high-speed flux measurements, thereby limiting what has been achieved to date with tethersondes. The tethersonde modules fly on a tether that can be constantly moving and swaying. This would introduce enormous error into the output of an uncorrected 3D sonic anemometer. The motion correction that is required must be implemented in a low-weight, low-cost manner to be suitable for this application. Until now, flux measurements using 3D sonic anemometers could only be made if the anemometer was located on a rigid, fixed platform such as a tower. This limited the areas in which they could be set up and used. The purpose of the innovation was to enable precise 3D wind and flux measurements to be made using tethersondes. In brief, a 3D accelerometer and a 3D gyroscope were added to a tethersonde module along with a 3D sonic anemometer. This combination allowed the necessary package motions to be measured, which were then mathematically combined with the measured winds to yield motion-corrected 3D winds. At the time of this reporting, no tethersonde has been able to make any wind measurement other than a basic wind speed and direction measurement. The addition of a 3D sonic
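
    The combination of attitude and platform-velocity data with the sonic reading can be sketched as follows (a minimal illustration, assuming a Z-Y-X Euler attitude derived from the gyroscope/accelerometer package and a known platform velocity; the function names are hypothetical):

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Z-Y-X (yaw-pitch-roll) body-to-earth rotation matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return rz @ ry @ rx

def correct_wind(wind_body, attitude_rpy, platform_vel):
    """Motion-corrected wind: rotate the body-frame sonic reading into the
    earth frame, then add the platform's own velocity back in."""
    return rotation_matrix(*attitude_rpy) @ np.asarray(wind_body) + np.asarray(platform_vel)
```

    In practice the attitude and platform velocity come from fusing the gyroscope and accelerometer streams at the anemometer's sampling rate, which is where the low-weight, low-cost constraint bites.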

  7. Tracking 3D Picometer-Scale Motions of Single Nanoparticles with High-Energy Electron Probes

    PubMed Central

    Ogawa, Naoki; Hoshisashi, Kentaro; Sekiguchi, Hiroshi; Ichiyanagi, Kouhei; Matsushita, Yufuku; Hirohata, Yasuhisa; Suzuki, Seiichi; Ishikawa, Akira; Sasaki, Yuji C.

    2013-01-01

    We observed the high-speed anisotropic motion of an individual gold nanoparticle in 3D at the picometer scale using a high-energy electron probe. Diffracted electron tracking (DET) using the electron back-scattered diffraction (EBSD) patterns of labeled nanoparticles under wet-SEM allowed us to measure the time-resolved 3D motion of individual nanoparticles in aqueous conditions with very high accuracy. The highly precise DET data followed 3D anisotropic log-normal Gaussian distributions over time at the millisecond scale. PMID:23868465

  8. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.

  9. Blind watermark algorithm on 3D motion model based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Qi, Hu; Zhai, Lang

    2013-12-01

    With the continuous development of 3D vision technology, digital watermarking has gradually been adopted as the method of choice for copyright protection of 3D content. This paper proposes a blind watermarking scheme for 3D motion models based on the wavelet transform, and integrates it into the Vega real-time visual simulation system. First, the 3D model is normalized by an affine transform, and the distances from the centre of gravity to the vertices of the 3D object are taken to generate a one-dimensional discrete signal; this signal is then wavelet-transformed and the watermark is embedded by modifying its frequency coefficients; finally, the watermarked 3D motion model is generated. In the fixed affine space, the scheme achieves robustness to translation, rotation, and scaling transforms. The results show that this approach performs well in terms of both robustness and watermark invisibility.
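
    A minimal sketch of the embedding pipeline described above (centroid-to-vertex distances, one-level wavelet transform, coefficient modification), assuming a Haar wavelet and sign-based bit embedding; this is an illustration of the idea, not the authors' implementation:

```python
import numpy as np

def haar(x):
    """One-level Haar DWT of an even-length 1D signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)     # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)     # detail coefficients
    return a, d

def ihaar(a, d):
    """Inverse one-level Haar DWT."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def embed(vertices, bits, strength=1e-3):
    """Embed bits into the centroid-to-vertex distance signal by forcing
    the sign of the first detail coefficients, then move each vertex
    radially so the model carries the watermark."""
    c = vertices.mean(axis=0)
    r = np.linalg.norm(vertices - c, axis=1)      # 1D discrete signal
    a, d = haar(r)
    for i, b in enumerate(bits):
        d[i] = (abs(d[i]) + strength) * (1.0 if b else -1.0)
    r2 = ihaar(a, d)
    scale = r2 / np.maximum(r, 1e-12)
    return c + (vertices - c) * scale[:, None]
```

    Because distances from the centroid are invariant to translation and rotation, a scheme of this kind is robust to those transforms by construction; blind extraction reads the coefficient signs back from the distance signal without the original model.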

  10. Model-based risk assessment for motion effects in 3D radiotherapy of lung tumors

    NASA Astrophysics Data System (ADS)

    Werner, René; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Handels, Heinz

    2012-02-01

    Although 4D CT imaging is becoming available in an increasing number of radiotherapy facilities, 3D imaging and planning is still standard in current clinical practice. In particular for lung tumors, respiratory motion is a known source of uncertainty and should be accounted for during radiotherapy planning, which is difficult using only a 3D planning CT. In this contribution, we propose applying a statistical lung motion model to predict patients' motion patterns and to estimate dosimetric motion effects in lung tumor radiotherapy if only 3D images are available. Being generated from 4D CT images of patients with unimpaired lung motion, the model tends to overestimate lung tumor motion. It therefore promises conservative risk assessment regarding tumor dose coverage. This is exemplarily evaluated using treatment plans of lung tumor patients with different tumor motion patterns and for two treatment modalities (conventional 3D conformal radiotherapy and step-and-shoot intensity-modulated radiotherapy). For the test cases, 4D CT images are available; thus, a standard registration-based 4D dose calculation is also performed, which serves as a reference to judge the plausibility of the model-based 4D dose calculation. It is shown that, if combined with an additional simple patient-specific breathing surrogate measurement (here: spirometry), the model-based dose calculation provides reasonable risk assessment of respiratory motion effects.

  11. On the integrability of the motion of 3D-Swinging Atwood machine and related problems

    NASA Astrophysics Data System (ADS)

    Elmandouh, A. A.

    2016-03-01

    In the present article, we study the problem of the motion of the 3D swinging Atwood machine. A new integrable case for this problem is announced. We also point out a new integrable case describing the motion of a heavy particle on a tilted cone.

  12. Structural response to 3D simulated earthquake motions in San Bernardino Valley

    USGS Publications Warehouse

    Safak, E.; Frankel, A.

    1994-01-01

    Structural response to one- and three-dimensional (3D) simulated motions in San Bernardino Valley from a hypothetical earthquake along the San Andreas fault with moment magnitude 6.5 and a rupture length of 30 km is investigated. The results show that the ground motions and the structural response vary dramatically with the type of simulation and the location. -from Authors

  13. The effect of motion on IMRT - looking at interplay with 3D measurements

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Yan, H.; Oldham, M.; Juang, T.; Adamovics, J.; Yin, F. F.

    2013-06-01

    Clinical recommendations to address tumor motion management have been derived from studies dealing with simulations and 2D measurements. 3D measurements may provide more insight and possibly alter the current motion management guidelines. This study provides an initial look at true 3D measurements involving leaf motion deliveries by use of a motion phantom and the PRESAGE/DLOS dosimetry system. IMRT and VMAT plans were delivered to the phantom and analyzed by means of DVHs to determine whether the expansion of treatment volumes based on known imaging motion adequately covers the target. DVHs confirmed that for these deliveries the expansion volumes were adequate to treat the intended target, although further studies should be conducted to allow for differences in parameters that could alter the results, such as delivery dose and breathing rate.

  14. 3D motion artifact compensation in CT image with depth camera

    NASA Astrophysics Data System (ADS)

    Ko, Youngjun; Baek, Jongduk; Shim, Hyunjung

    2015-02-01

    Computed tomography (CT) is a medical imaging technology that uses computer-processed X-ray projections to acquire tomographic images, or slices, of specific organs of the body. Motion artifacts caused by patient motion are a common problem in CT systems and may introduce undesirable artifacts in CT images. This paper analyzes the critical problems in motion artifacts and proposes a new CT system for motion artifact compensation. We employ depth cameras to capture the patient motion and incorporate it into the CT image reconstruction. In this way, we achieve a significant improvement in motion artifact compensation, which is not possible with previous techniques.

  15. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed using fluoroscopic image guidance, is gaining increasing importance. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 mm ± 0.4 mm and an average 3-D tracking error of 0.8 mm ± 0.5 mm. These results demonstrate that model-based motion compensation based on 2-D/3-D registration is both feasible and accurate.

  16. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    SciTech Connect

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-11-15

    achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.

  17. Markerless motion capture of multiple characters using multiview image segmentation.

    PubMed

    Liu, Yebin; Gall, Juergen; Stoll, Carsten; Dai, Qionghai; Seidel, Hans-Peter; Theobalt, Christian

    2013-11-01

    Capturing the skeleton motion and detailed time-varying surface geometry of multiple, closely interacting people is a very challenging task, even in a multicamera setup, due to frequent occlusions and ambiguities in feature-to-person assignments. To address this task, we propose a framework that exploits multiview image segmentation. To this end, a probabilistic shape and appearance model is employed to segment the input images and to assign each pixel uniquely to one person. Given the articulated template models of each person and the labeled pixels, a combined optimization scheme, which splits the skeleton pose optimization problem into a local one and a lower dimensional global one, is applied one by one to each individual, followed by surface estimation to capture detailed nonrigid deformations. We show on various sequences that our approach can capture the 3D motion of humans accurately even if they move rapidly, if they wear wide apparel, and if they are engaged in challenging multiperson motions, including dancing, wrestling, and hugging. PMID:24051731

  18. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas; Zhang, Geoffrey; Huang, Tzung-Chi; Lin, Kang-Ping

    2004-09-01

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension, or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm; this was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with greatest motion occurring at the gastro-oesophageal junction. Presented at The IASTED Second International Conference on Biomedical Engineering (BioMED 2004), Innsbruck, Austria, 16-18 February 2004.
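
    The voxel-wise displacement estimation can be illustrated with a single global gradient-based least-squares solve (a Lucas-Kanade style sketch under the optical-flow brightness-constancy assumption; the 3D OFM used in the paper computes a dense, regularized displacement field rather than one vector):

```python
import numpy as np

def flow_3d(vol1, vol2):
    """Estimate a single 3D displacement (in voxels) between two volumes by
    solving the optical-flow constraint  grad(I) . v = I1 - I2  in the
    least-squares sense over all voxels."""
    gx, gy, gz = np.gradient(vol1.astype(float))
    a = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)
    b = (vol1 - vol2).astype(float).ravel()   # = -(I2 - I1)
    v, *_ = np.linalg.lstsq(a, b, rcond=None)
    return v
```

    Applied per neighbourhood instead of globally, the same solve yields the per-voxel displacement vectors that are histogrammed in the study above.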

  19. Applications of markerless motion capture in gait recognition.

    PubMed

    Sandau, Martin

    2016-03-01

    This thesis is based on four manuscripts, two of which were accepted and two submitted to peer-reviewed journals. The experimental work behind the thesis was conducted at the Institute of Neuroscience and Pharmacology, University of Copenhagen. The purpose of the studies was to explore the variability of human gait and to develop new methods for precise estimation of the kinematic parameters applied in forensic gait analysis. The gait studies were conducted in a custom-built gait laboratory designed to obtain optimal conditions for markerless motion analysis. The set-up consisted of eight synchronised cameras located in the corners of the laboratory, which were connected to a single computer. The captured images were processed with stereovision-based algorithms to provide accurate 3D reconstructions of the participants. The 3D reconstructions of the participants were obtained during normal walking and the kinematics were extracted with manual and automatic methods. The kinematic results from the automatic approach were compared to marker-based motion capture to validate the precision. The results showed that the proposed markerless motion capture method had a precision comparable to marker-based methods in the frontal plane and the sagittal plane. Similar markerless motion capture methods could therefore provide the basis for reliable gait recognition based on kinematic parameters. The manual annotations were compared to the actual anthropometric measurements obtained from MRI scans, and the intra- and inter-observer variability was also quantified to observe the associated effect on recognition. The results showed not only that the kinematics in the lower extremities were important but also that the kinematics in the shoulders had a high discriminatory power. Likewise, the shank length was also highly discriminatory, which has not been previously reported. However, it is important that the same expert performs all annotations, as the inter

  20. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method. PMID:21652284
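
    The effect of shot-to-shot phase inconsistency can be illustrated with a simplified image-space correction (an assumption-laden sketch: it estimates each shot's smooth motion-induced phase from a low-pass version of the shot image and demodulates before combining; the paper instead performs maximum likelihood estimation and correction in k-space):

```python
import numpy as np

def correct_shots(shot_images, lp_half=4):
    """Combine multishot images after removing each shot's smooth,
    motion-induced phase (estimated from a low-pass version of the shot)."""
    out = np.zeros_like(shot_images[0], dtype=complex)
    for img in shot_images:
        k = np.fft.fftshift(np.fft.fft2(img))
        lp = np.zeros_like(k)
        cy, cx = k.shape[0] // 2, k.shape[1] // 2
        lp[cy - lp_half:cy + lp_half, cx - lp_half:cx + lp_half] = \
            k[cy - lp_half:cy + lp_half, cx - lp_half:cx + lp_half]
        phase = np.angle(np.fft.ifft2(np.fft.ifftshift(lp)))
        out += img * np.exp(-1j * phase)  # demodulate this shot's phase
    return out / len(shot_images)
```

    Without the demodulation step, shots carrying different phases add destructively and produce the ghosting artifacts the paper sets out to remove.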

  1. Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI.

    PubMed

    Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon

    2015-01-01

    Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Due to the through-plane motion, the tracking of 3D trajectories of the material points and the computation of the 3D strain field necessitate building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as a point cloud inside the object boundary, and the coordinates of each point can be written as parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming its neighboring points are making the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperformed the finite element method and the spline method with a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardium motion in three dimensions. PMID:25157446

  2. Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI

    PubMed Central

    Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon

    2016-01-01

    Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Due to the through-plane motion, the tracking of 3D trajectories of the material points and the computation of the 3D strain field necessitate building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as a point cloud inside the object boundary, and the coordinates of each point can be written as parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming its neighboring points are making the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperformed the finite element method and the spline method with a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardium motion in three dimensions. PMID:25157446

  3. Recovery of liver motion and deformation due to respiration using laparoscopic freehand 3D ultrasound system.

    PubMed

    Nakamoto, Masahiko; Hirayama, Hiroaki; Sato, Yoshinobu; Konishi, Kozo; Kakeji, Yoshihiro; Hashizume, Makoto; Tamura, Shinichi

    2006-01-01

    This paper describes a rapid method for intraoperative recovery of liver motion and deformation due to respiration by using a laparoscopic freehand 3D ultrasound (US) system. Using the proposed method, 3D US images of the liver can be extended to 4D US images by acquiring several additional sequences of 2D US images during a couple of respiration cycles. Time-varying 2D US images are acquired on several sagittal image planes, and their 3D positions and orientations are measured using a laparoscopic ultrasound probe to which a miniature magnetic 3D position sensor is attached. During the acquisition, the probe is assumed to move together with the liver surface. In-plane 2D deformation fields and respiratory phase are estimated from the time-varying 2D US images, and then the time-varying 3D deformation fields on the sagittal image planes are obtained by combining them with the 3D positions and orientations of the image planes. The time-varying 3D deformation field of the volume is obtained by interpolating the 3D deformation fields estimated on the several planes. The proposed method was evaluated by in vivo experiments using a pig liver. PMID:17354794

  4. A Prototype Digital Library for 3D Collections: Tools To Capture, Model, Analyze, and Query Complex 3D Data.

    ERIC Educational Resources Information Center

    Rowe, Jeremy; Razdan, Anshuman

    The Partnership for Research in Spatial Modeling (PRISM) project at Arizona State University (ASU) developed modeling and analytic tools to respond to the limitations of two-dimensional (2D) data representations perceived by affiliated discipline scientists, and to take advantage of the enhanced capabilities of three-dimensional (3D) data that…

  5. A low cost PSD-based monocular motion capture system

    NASA Astrophysics Data System (ADS)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and only requires a one-time calibration at the factory. The system includes a PSD (Position Sensitive Detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments were performed to evaluate the performance of our prototype system. From the experimental results we see that the proposed system has the advantages of compact size, low cost, and easy installation, and offers frame rates high enough to be suitable for high-speed motion tracking in games.
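
    The range-from-intensity idea can be sketched as follows (a hypothetical illustration: it assumes an inverse-square falloff I = I0/r² for the IR marker and a pinhole model for the wide-angle lens; `i0` stands in for the per-marker calibration constant fixed at the factory):

```python
import numpy as np

def marker_3d(u, v, intensity, focal_px, i0):
    """Recover a marker's 3D position from its 2D PSD centroid (u, v) and
    received IR intensity: the PSD position fixes the viewing ray, and the
    intensity fixes the range along that ray."""
    ray = np.array([u, v, focal_px], dtype=float)
    ray /= np.linalg.norm(ray)           # unit viewing ray (pinhole model)
    r = np.sqrt(i0 / intensity)          # range from inverse-square law
    return r * ray
```

    A monocular camera alone only constrains the ray; it is the intensity measurement that disambiguates depth, which is why the scheme needs active LED markers of known emitted power.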

  6. X-ray stereo imaging for micro 3D motions within non-transparent objects

    NASA Astrophysics Data System (ADS)

    Salih, Wasil H. M.; Buytaert, Jan A. N.; Dirckx, Joris J. J.

    2012-03-01

    We propose a new technique to measure the 3D motion of marker points along a straight path within an object using x-ray stereo projections. From recordings of two x-ray projections with a 90° separation angle, the 3D coordinates of marker points can be determined. By synchronizing the x-ray exposure time to the motion event, a moving marker leaves a trace in the image whose gray scale is linearly proportional to the marker velocity. From the gray scale along the motion path, the 3D motion (velocity) is obtained. The path of motion was reconstructed and compared with the applied waveform. The results showed that the accuracy is on the order of 5%. The difference in displacement amplitude between the new method and laser vibrometry was less than 5 μm. We demonstrated the method on the malleus ossicle motion in the gerbil middle ear as a function of pressure applied on the eardrum. The new method has the advantage over existing methods such as laser vibrometry that the structures under study do not need to be visually exposed. Due to the short measurement time and the high resolution, the method can be useful in the field of biomechanics for a variety of applications.

  7. Motion corrected LV quantification based on 3D modelling for improved functional assessment in cardiac MRI

    NASA Astrophysics Data System (ADS)

    Liew, Y. M.; McLaughlin, R. A.; Chan, B. T.; Aziz, Y. F. Abdul; Chee, K. H.; Ung, N. M.; Tan, L. K.; Lai, K. W.; Ng, S.; Lim, E.

    2015-04-01

    Cine MRI is a clinical reference standard for the quantitative assessment of cardiac function, but reproducibility is confounded by motion artefacts. We explore the feasibility of a motion corrected 3D left ventricle (LV) quantification method, incorporating multislice image registration into the 3D model reconstruction, to improve reproducibility of 3D LV functional quantification. Multi-breath-hold short-axis and radial long-axis images were acquired from 10 patients and 10 healthy subjects. The proposed framework reduced misalignment between slices to subpixel accuracy (2.88 to 1.21 mm), and improved interstudy reproducibility for 5 important clinical functional measures, i.e. end-diastolic volume, end-systolic volume, ejection fraction, myocardial mass and 3D-sphericity index, as reflected in a reduction in the sample size required to detect statistically significant cardiac changes: a reduction of 21-66%. Our investigation on the optimum registration parameters, including both cardiac time frames and number of long-axis (LA) slices, suggested that a single time frame is adequate for motion correction whereas integrating more LA slices can improve registration and model reconstruction accuracy for improved functional quantification especially on datasets with severe motion artefacts.

  8. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0–1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
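
    The two-stage scheme (pre-beam PCA motion model, then per-frame fitting to fast 2D data) can be sketched as follows (a simplified linear-algebra illustration with hypothetical names; the actual method parameterizes 4D-MRI deformation fields and fits to 2D cine-MR slices):

```python
import numpy as np

def build_model(dvfs, k=2):
    """PCA motion model: rows of `dvfs` are flattened deformation vector
    fields, one per respiratory phase of the sorted 4D-MRI."""
    mean = dvfs.mean(axis=0)
    _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, vt[:k]                  # mean field + k principal components

def fit_to_slice(mean, pcs, slice_idx, slice_motion):
    """Fit PC weights to motion observed on one 2D slice (entries
    `slice_idx` of the flattened field), then predict the full 3D DVF."""
    a = pcs[:, slice_idx].T
    b = slice_motion - mean[slice_idx]
    w, *_ = np.linalg.lstsq(a, b, rcond=None)
    return mean + w @ pcs                # full-field estimate
```

    Because only a handful of PC weights are estimated per frame, a single fast 2D acquisition suffices to drive the full-field 3D estimate at the reported 476 ms temporal resolution.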

  9. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy.

    PubMed

    Stemkens, Bjorn; Tijssen, Rob H N; de Senneville, Baudouin Denis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-07-21

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy. PMID:27362636

  10. A comparison of 3D scapular kinematics between dominant and nondominant shoulders during multiplanar arm motion

    PubMed Central

    Lee, Sang Ki; Yang, Dae Suk; Kim, Ha Yong; Choy, Won Sik

    2013-01-01

    Background: Generally, the scapular motions of pathologic and contralateral normal shoulders are compared to characterize shoulder disorders. However, the symmetry of scapular motion of normal shoulders remains undetermined. Therefore, the aim of this study was to compare 3-dimensional (3D) scapular motion between dominant and nondominant shoulders during three different planes of arm motion by using an optical tracking system. Materials and Methods: Twenty healthy subjects completed five repetitions of elevation and lowering in sagittal plane flexion, scapular plane abduction, and coronal plane abduction. The 3D scapular motion was measured using an optical tracking system, after minimizing reflective marker skin slippage using ultrasonography. The dynamic 3D motion of the scapula of dominant and nondominant shoulders, and the scapulohumeral rhythm (SHR) were analyzed at each 10° increment during the three planes of arm motion. Results: There was no significant difference in upward rotation or internal rotation (P > 0.05) of the scapula between dominant and nondominant shoulders during the three planes of arm motion. However, there was a significant difference in posterior tilting (P = 0.018) during coronal plane abduction. The SHR was a large positive or negative number in the initial phase of sagittal plane flexion and scapular plane abduction. However, the SHR was a small positive or negative number in the initial phase of coronal plane abduction. Conclusions: Only posterior tilting of the scapula during coronal plane abduction was asymmetrical in our healthy subjects, and depending on the plane of arm motion, the pattern of the SHR differed as well. These differences should be considered in the clinical assessment of shoulder pathology. PMID:23682174
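Scapulohumeral rhythm (SHR) is conventionally the ratio of incremental humeral elevation to incremental scapular upward rotation, which explains the large positive or negative values when scapular motion is near zero early in a movement. A minimal sketch with made-up angle series (not the study's data):

```python
def shr(humeral_angles, scapular_upward_rotation):
    """Incremental scapulohumeral rhythm per elevation increment."""
    ratios = []
    for i in range(1, len(humeral_angles)):
        dh = humeral_angles[i] - humeral_angles[i - 1]
        ds = scapular_upward_rotation[i] - scapular_upward_rotation[i - 1]
        # Near-zero scapular motion makes the ratio blow up, matching the
        # large initial-phase SHR values the abstract describes.
        ratios.append(dh / ds if abs(ds) > 1e-6 else float("inf"))
    return ratios
```

For example, `shr([0, 10, 20, 30], [0, 2, 6, 11])` returns `[5.0, 2.5, 2.0]`: the scapula contributes progressively more as elevation proceeds.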

  11. Multiple capture locations for 3D ultrasound-guided robotic retrieval of moving bodies from a beating heart

    NASA Astrophysics Data System (ADS)

    Thienphrapa, Paul; Ramachandran, Bharat; Elhawary, Haytham; Taylor, Russell H.; Popovic, Aleksandra

    2012-02-01

    Free moving bodies in the heart pose a serious health risk as they may be released into the arteries, disrupting blood flow. These bodies may be the result of various medical conditions and trauma. The conventional approach to removing these objects involves open surgery with sternotomy, the use of cardiopulmonary bypass, and a wide resection of the heart muscle. We advocate a minimally invasive surgical approach using a flexible robotic end effector guided by 3D transesophageal echocardiography. In a phantom study, we track a moving body in a beating heart using a modified normalized cross-correlation method, with a mean RMS error of 2.3 mm. We previously found the foreign body motion to be fast and abrupt, rendering infeasible a retrieval method based on direct tracking. We proposed a strategy based on guiding a robot to the most spatially probable location of the fragment and securing it upon its reentry to said location. To improve efficacy in the context of a robotic retrieval system, we extend this approach by exploring multiple candidate capture locations. Salient locations are identified based on spatial probability, dwell time, and visit frequency; secondary locations are also examined. Aggregate results indicate that the location of highest spatial probability (50% occupancy) is distinct from the longest-dwelled location (0.84 seconds). Such metrics are vital in informing the design of a retrieval system and capture strategies, and they can be computed intraoperatively to select the best capture location based on constraints such as workspace, time, and device manipulability. Given the complex nature of fragment motion, the ability to analyze multiple capture locations is a desirable capability in an interventional system.
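The capture-location metrics named above (spatial probability, dwell time, visit frequency) can be computed from a tracked fragment trajectory by binning positions; the binning scheme and data layout here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def capture_metrics(track, dt, bin_size):
    """Occupancy probability, mean dwell time per visit, and visit count
    per spatial bin. track: (N, 3) fragment positions sampled every dt s."""
    bins = np.floor(np.asarray(track, dtype=float) / bin_size).astype(int)
    keys = [tuple(int(v) for v in b) for b in bins]
    occupancy, visits, dwell = {}, {}, {}
    prev = None
    for key in keys:
        occupancy[key] = occupancy.get(key, 0) + 1
        if key != prev:                      # a new visit to this bin begins
            visits[key] = visits.get(key, 0) + 1
        dwell[key] = dwell.get(key, 0.0) + dt
        prev = key
    n = len(keys)
    return ({k: c / n for k, c in occupancy.items()},    # spatial probability
            {k: dwell[k] / visits[k] for k in visits},   # mean dwell per visit
            visits)                                      # visit frequency
```

Intraoperatively, the bin maximizing occupancy probability need not be the longest-dwelled one, which is exactly the distinction the aggregate results above report.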

  12. Effects of 3D random correlated velocity perturbations on predicted ground motions

    USGS Publications Warehouse

    Hartzell, S.; Harmsen, S.; Frankel, A.

    2010-01-01

    Three-dimensional, finite-difference simulations of a realistic finite-fault rupture on the southern Hayward fault are used to evaluate the effects of random, correlated velocity perturbations on predicted ground motions. Velocity perturbations are added to a three-dimensional (3D) regional seismic velocity model of the San Francisco Bay Area using a 3D von Karman random medium. Velocity correlation lengths of 5 and 10 km and standard deviations in the velocity of 5% and 10% are considered. The results show that significant deviations in predicted ground velocities are seen in the calculated frequency range (≤1 Hz) for standard deviations in velocity of 5% to 10%. These results have implications for the practical limits on the accuracy of scenario ground-motion calculations and on retrieval of source parameters using higher-frequency, strong-motion data.
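A 3D von Karman perturbation field of the kind added to the velocity model can be synthesized spectrally by filtering white noise with a von Karman amplitude spectrum; the normalization and Hurst exponent below are illustrative assumptions rather than the study's exact parameterization:

```python
import numpy as np

def von_karman_perturbation(n, dx, corr_len, sigma, hurst=0.5, seed=0):
    """Fractional velocity perturbation on an n^3 grid (spectral synthesis).

    Amplitude filter |P(k)|^(1/2) with P(k) ~ (1 + k^2 a^2)^-(hurst + 3/2),
    a = correlation length; the field is then rescaled to std `sigma`
    (e.g. sigma=0.05 for a 5% perturbation).
    """
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    amp = (1.0 + k2 * corr_len**2) ** (-(hurst + 1.5) / 2.0)
    noise = np.fft.fftn(rng.normal(size=(n, n, n)))
    field = np.real(np.fft.ifftn(amp * noise))
    field -= field.mean()
    return sigma * field / field.std()
```

The perturbation field is then added multiplicatively to the background 3D velocity model, with correlation lengths of 5-10 km and sigma of 0.05-0.10 matching the cases considered above.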

  13. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  14. A 3D space-time motion evaluation for image registration in digital subtraction angiography.

    PubMed

    Taleb, N; Bentoutou, Y; Deforges, O; Taleb, M

    2001-01-01

    In modern clinical practice, Digital Subtraction Angiography (DSA) is a powerful technique for the visualization of blood vessels in a sequence of X-ray images. A serious problem encountered in this technique is the presence of artifacts due to patient motion. The resulting artifacts frequently lead to misdiagnosis or rejection of a DSA image sequence. In this paper, a new technique for removing both global and local motion artifacts is presented. It is based on a 3D space-time motion evaluation for separating pixels changing values because of motion from those changing values because of contrast flow. This technique proves very efficient at correcting patient motion artifacts and is computationally cheap. Experimental results with several clinical data sets show that this technique is very fast and results in higher quality images. PMID:11179698

  15. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns

    NASA Astrophysics Data System (ADS)

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-09-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cells displayed larger radial motion at their basolateral membranes than at their apical surfaces. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.
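The method described is a variational 3-D optical flow with a brightness-variation term; a toy local least-squares analogue of the underlying constraint (dI/dt = c*I, i.e. a multiplicative brightness-change rate c modeling photobleaching, rather than strict brightness constancy) can be sketched as below. This is an illustration of the constraint, not the authors' implementation:

```python
import numpy as np

def local_flow_with_brightness(Ix, Iy, Iz, It, I):
    """Least-squares [u, v, w, c] for the generalized brightness-constancy
    constraint Ix*u + Iy*v + Iz*w + c*I + It = 0 over a local 3-D
    neighbourhood (all arrays are same-shape gradient/intensity patches)."""
    A = np.stack([Ix.ravel(), Iy.ravel(), Iz.ravel(), I.ravel()], axis=1)
    params, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return params  # (u, v, w, c)
```

Setting c = 0 recovers the classical brightness-constancy flow; estimating c jointly is what makes the estimate robust to bleaching-induced intensity drift.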

  16. 3D imaging of particle-scale rotational motion in cyclically driven granular flows

    NASA Astrophysics Data System (ADS)

    Harrington, Matt; Powers, Dylan; Cooper, Eric; Losert, Wolfgang

    Recent experimental advances have enabled three-dimensional (3D) imaging of motion, structure, and failure within granular systems. 3D imaging allows researchers to directly characterize bulk behaviors that arise from particle- and meso-scale features. For instance, segregation of a bidisperse system of spheres under cyclic shear can originate from microscopic irreversibilities and the development of convective secondary flows. Rotational motion and frictional rotational coupling, meanwhile, have been less explored in such experimental 3D systems, especially under cyclic forcing. In particular, the relative amounts of sliding and/or rolling between pairs of contacting grains could influence the reversibility of grain trajectories, in terms of both position and orientation. In this work, we apply the Refractive Index Matched Scanning technique to a granular system that is cyclically driven and measure both translational and rotational motion of individual grains. We relate measured rotational motion to resulting shear bands and convective flows, further indicating the degree to which pairs and neighborhoods of grains collectively rotate.

  17. Reconstructing 3-D skin surface motion for the DIET breast cancer screening system.

    PubMed

    Botterill, Tom; Lotz, Thomas; Kashif, Amer; Chase, J Geoffrey

    2014-05-01

    Digital image-based elasto-tomography (DIET) is a prototype system for breast cancer screening. A breast is imaged while being vibrated, and the observed surface motion is used to infer the internal stiffness of the breast, hence identifying tumors. This paper describes a computer vision system for accurately measuring 3-D surface motion. A model-based segmentation is used to identify the profile of the breast in each image, and the 3-D surface is reconstructed by fitting a model to the profiles. The surface motion is measured using a modern optical flow implementation customized to the application; trajectories of points on the 3-D surface are then obtained by fusing the optical flow with the reconstructed surfaces. On data from human trials, the system is shown to exceed the performance of an earlier marker-based system at tracking skin surface motion. We demonstrate that the system can detect a 10 mm tumor in a silicone phantom breast. PMID:24770915

  18. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-01-01

    Remote monitoring of Parkinson's Disease (PD) patients with inertia sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: the automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of him. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recording of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the ratings based on video and (b) the automatically classified UPDRS is 0.48 and with (c) the 3D avatar it is 0.47. The 3D avatar is similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400
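Classification of the pronation-supination task starts from features of the MARG-derived rotation signal; the feature set below (movement amplitude, mean angular speed, dominant frequency) is an illustrative guess at such preprocessing, not the study's exact pipeline or its J48 classifier:

```python
import numpy as np

def updrs_features(gyro_z, fs):
    """Simple features of a pronation-supination recording.

    gyro_z: angular rate (deg/s) about the forearm axis; fs: sample rate (Hz).
    """
    angle = np.cumsum(gyro_z) / fs                 # crude integration to angle
    amplitude = angle.max() - angle.min()          # total rotation range (deg)
    mean_speed = np.mean(np.abs(gyro_z))           # mean angular speed (deg/s)
    # dominant movement frequency from the rate spectrum (DC removed)
    spec = np.abs(np.fft.rfft(gyro_z - gyro_z.mean()))
    freq = np.fft.rfftfreq(len(gyro_z), d=1.0 / fs)[int(np.argmax(spec))]
    return amplitude, mean_speed, freq
```

Slowing and amplitude decrement across repetitions, which such features capture, are precisely what the UPDRS pronation-supination item grades.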

  19. An embedded human motion capture system for an assistive walking robot.

    PubMed

    Zong, Cong; Clady, Xavier; Chetouani, Mohamed

    2011-01-01

    An embedded 3D body motion capture system for an assistive walking robot is presented in this paper. A 3D camera and infrared sensors are installed on a wheeled walker. We compare the positions of the human articular joints computed with our embedded system and the ones obtained with another accurate system using embodied markers, the Codamotion. The obtained results validate our approach. PMID:22275639

  20. Solutions for 3D self-reconfiguration in a modular robotic system: implementation and motion planning

    NASA Astrophysics Data System (ADS)

    Unsal, Cem; Khosla, Pradeep K.

    2000-10-01

    In this manuscript, we discuss new solutions for mechanical design and motion planning for a class of 3D modular self-reconfigurable robotic systems, namely I-Cubes. This system is a bipartite collection of active links that provide motions for self-reconfiguration, and cubes acting as connection points. The links are three-degree-of-freedom manipulators that can attach to and detach from the cube faces. The cubes can be positioned and oriented using the links. These capabilities enable the system to change its shape and perform locomotion tasks over difficult terrain. This paper describes the scaled-down version of the previously described system and details the new design and manufacturing approaches. Initially designed algorithms for motion planning of I-Cubes are improved to provide better results. Results of our tests are given and issues related to motion planning are discussed. The user interfaces designed for the control of the system and algorithm evaluation are also described.

  1. The capture and dissemination of integrated 3D geospatial knowledge at the British Geological Survey using GSI3D software and methodology

    NASA Astrophysics Data System (ADS)

    Kessler, Holger; Mathers, Steve; Sobisch, Hans-Georg

    2009-06-01

    The Geological Surveying and Investigation in 3 Dimensions (GSI3D) software tool and methodology has been developed over the last 15 years. Since 2001 this has been in cooperation with the British Geological Survey (BGS). To date over a hundred BGS geologists have learned to use the software that is now routinely deployed in building systematic and commercial 3D geological models. The success of the GSI3D methodology and software is based on its intuitive design and the fact that it utilises exactly the same data and methods, albeit in digital forms, that geologists have been using for two centuries in order to make geological maps and cross-sections. The geologist constructs models based on a career of observation of geological phenomena, thereby incorporating tacit knowledge into the model. This knowledge capture is a key element to the GSI3D approach. In BGS, GSI3D is part of a much wider set of systems and work processes that together make up the cyberinfrastructure of a modern geological survey. The GSI3D software is not yet designed to cope with bedrock structures in which individual stratigraphic surfaces are repeated or inverted, but the software is currently being extended by BGS to encompass these more complex geological scenarios. A further challenge for BGS is to enable its 3D geological models to become part of the semantic Web using GML application schema like GeoSciML. The biggest benefits of widely available systematic geological models will be an enhanced public understanding of the sub-surface in 3D, and the teaching of geoscience students.

  2. Uncertainty preserving patch-based online modeling for 3D model acquisition and integration from passive motion imagery

    NASA Astrophysics Data System (ADS)

    Tang, Hao; Chang, Peng; Molina, Edgardo; Zhu, Zhigang

    2012-06-01

    In both military and civilian applications, abundant data from diverse sources captured on airborne platforms are often available for a region attracting interest. Since the data often includes motion imagery streams collected from multiple platforms flying at different altitudes, with sensors of different fields of view (FOVs), resolutions, frame rates and spectral bands, it is imperative that a cohesive site model encompassing all the information can be quickly built and presented to the analysts. In this paper, we propose to develop an Uncertainty Preserving Patch-based Online Modeling System (UPPOMS) leading towards the automatic creation and updating of a cohesive, geo-registered, uncertainty-preserving, efficient 3D site terrain model from passive imagery with varying fields of view and phenomenologies. The proposed UPPOMS has the following technical thrusts that differentiate our approach from others: (1) An uncertainty-preserved, patch-based 3D model is generated, which enables the integration of images captured with a mixture of NFOV and WFOV and/or visible and infrared motion imagery sensors. (2) Patch-based stereo matching and multi-view 3D integration are utilized, which are suitable for scenes with many low-texture regions, particularly in mid-wave infrared images. (3) In contrast to the conventional volumetric algorithms, whose computational and storage costs grow exponentially with the amount of input data and the scale of the scene, the proposed UPPOMS system employs an online algorithmic pipeline, and scales well to large amounts of input data. Experimental results and discussions of future work will be provided.

  3. Edge preserving motion estimation with occlusions correction for assisted 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Pohl, Petr; Sirotenko, Michael; Tolstaya, Ekaterina; Bucha, Victor

    2014-02-01

    In this article we propose high-quality motion estimation based on a variational optical flow formulation with a non-local regularization term. To improve motion in occlusion areas we introduce occlusion motion inpainting based on 3-frame motion clustering. The variational formulation of optical flow has proved very successful; however, global optimization of the cost function can be time consuming. To achieve acceptable computation times we adapted the algorithm that optimizes a convex function in a coarse-to-fine pyramid strategy and is suitable for modern GPU hardware implementation. We also introduced two simplifications of the cost function that significantly decrease computation time with an acceptable decrease of quality. For motion-clustering-based motion inpainting in occlusion areas we introduce an effective method of occlusion-aware joint 3-frame motion clustering using the RANSAC algorithm. Occlusion areas are inpainted by the motion model taken from the cluster that shows consistency in the opposite direction. We tested our algorithm on the Middlebury optical flow benchmark, where we ranked around 20th position while being one of the fastest methods near the top. We also successfully used this algorithm in a semi-automatic 2D to 3D conversion tool for spatio-temporal background inpainting, automatic adaptive key frame detection and key point tracking.
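The clustering step fits parametric motion models to flow vectors with RANSAC; a minimal sketch of RANSAC affine motion fitting for a single dominant cluster (data layout, threshold and iteration count are illustrative assumptions, and the paper's joint 3-frame variant is more involved):

```python
import numpy as np

def ransac_affine_motion(pts, flow, iters=200, thresh=0.5, seed=0):
    """Fit a dominant 2-D affine motion model to sparse flow vectors.

    pts: (N, 2) positions, flow: (N, 2) motion vectors. Returns (M, inliers)
    with flow ~= [pts | 1] @ M, M of shape (3, 2).
    """
    rng = np.random.default_rng(seed)
    X = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    best_M, best_in = None, np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(pts), 3, replace=False)   # minimal affine sample
        M, *_ = np.linalg.lstsq(X[idx], flow[idx], rcond=None)
        inliers = np.linalg.norm(X @ M - flow, axis=1) < thresh
        if inliers.sum() > best_in.sum():
            best_M, best_in = M, inliers
    return best_M, best_in
```

Removing the inliers and re-running yields further motion clusters; occluded pixels are then inpainted with the model of the cluster that is consistent in the opposite flow direction.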

  4. Using a wireless motion controller for 3D medical image catheter interactions

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    State-of-the-art morphological imaging techniques usually provide high resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation maps, fly-through cameras or virtual interaction approaches. There has been ongoing development and improvement of controllers that help to interact with 3D environments in the domain of computer games. These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed by Nintendo, can be used for catheter navigation and planning purposes. By default, the Wiimote controller measures only rough acceleration over a range of ±3 g (with 10% sensitivity) and orientation. Therefore, a pose estimation algorithm was developed to compute accurate position and orientation in 3D space relative to 4 infrared LEDs. Current results show a mean error of (0.38 cm, 0.41 cm, 4.94 cm) for translation and (0.16, 0.28) for rotation. Within this paper we introduce a clinical prototype that allows steering of a virtual fly-through camera attached to the catheter tip by the Wii controller on the basis of a segmented vessel tree.

  5. Angle-independent measure of motion for image-based gating in 3D coronary angiography

    SciTech Connect

    Lehmann, Glen C.; Holdsworth, David W.; Drangova, Maria

    2006-05-15

    The role of three-dimensional (3D) image guidance for interventional procedures and minimally invasive surgeries is increasing for the treatment of vascular disease. Currently, most interventional procedures are guided by two-dimensional x-ray angiography, but computed rotational angiography has the potential to provide 3D geometric information about the coronary arteries. The creation of 3D angiographic images of the coronary arteries requires synchronization of data acquisition with respect to the cardiac cycle, in order to minimize motion artifacts. This can be achieved by inferring the extent of motion from a patient's electrocardiogram (ECG) signal. However, a direct measurement of motion (from the 2D angiograms) has the potential to improve the 3D angiographic images by ensuring that only projections acquired during periods of minimal motion are included in the reconstruction. This paper presents an image-based metric for measuring the extent of motion in 2D x-ray angiographic images. Adaptive histogram equalization was applied to projection images to increase the sharpness of coronary arteries and the superior-inferior component of the weighted centroid (SIC) was measured. The SIC constitutes an image-based metric that can be used to track vessel motion, independent of apparent motion induced by the rotational acquisition. To evaluate the technique, six consecutive patients scheduled for routine coronary angiography procedures were studied. We compared the end of the SIC rest period (ρ) to R-waves (R) detected in the patient's ECG and found a mean difference of 14±80 ms. Two simultaneous angular positions were acquired and ρ was detected for each position. There was no statistically significant difference (P=0.79) between ρ in the two simultaneously acquired angular positions. Thus we have shown the SIC to be independent of view angle, which is critical for rotational angiography. A preliminary image-based gating strategy that employed the SIC
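The SIC itself is straightforward to compute: the row (superior-inferior) component of the intensity-weighted image centroid. A sketch, assuming rows index the superior-inferior axis of the projection:

```python
import numpy as np

def superior_inferior_centroid(frame):
    """Superior-inferior (row) component of the intensity-weighted centroid.

    `frame` is a 2-D angiographic projection, ideally after adaptive
    histogram equalization so that vessels dominate the weighting.
    """
    weights = frame.astype(float)
    rows = np.arange(frame.shape[0])
    return float(weights.sum(axis=1) @ rows) / float(weights.sum())
```

Tracking this scalar over the rotational acquisition gives the motion signal whose rest periods identify the projections suitable for 3D reconstruction.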

  6. Ultrasonic diaphragm tracking for cardiac interventional navigation on 3D motion compensated static roadmaps

    NASA Astrophysics Data System (ADS)

    Timinger, Holger; Kruger, Sascha; Dietmayer, Klaus; Borgert, Joern

    2005-04-01

    In this paper, a novel approach to cardiac interventional navigation on 3D motion-compensated static roadmaps is presented. Current coronary interventions, e.g. percutaneous transluminal coronary angioplasties, are performed using 2D X-ray fluoroscopy. This comes along with well-known drawbacks like radiation exposure, use of contrast agent, and limited visualization, e.g. overlap and foreshortening, due to projection imaging. In the presented approach, the interventional device, i.e. the catheter, is tracked using an electromagnetic tracking system (MTS). Thereby, the catheter's position is mapped into a static 3D image of the volume of interest (VOI) by means of an affine registration. In order to compensate for respiratory motion of the catheter with respect to the static image, a parameterized affine motion model is used which is driven by a respiratory sensor signal. This signal is derived from ultrasonic diaphragm tracking. The motion compensation for the heartbeat is done using ECG-gating. The methods are validated using a heart and diaphragm phantom. The mean displacement of the catheter due to the simulated organ motion decreases from approximately 9 mm to 1.3 mm. This result indicates that the proposed method is able to reconstruct the catheter position within the VOI accurately and that it can help to overcome drawbacks of current interventional procedures.

  7. A motion- and sound-activated, 3D-printed, chalcogenide-based triboelectric nanogenerator.

    PubMed

    Kanik, Mehmet; Say, Mehmet Girayhan; Daglar, Bihter; Yavuz, Ahmet Faruk; Dolas, Muhammet Halit; El-Ashry, Mostafa M; Bayindir, Mehmet

    2015-04-01

    A multilayered triboelectric nanogenerator (MULTENG) that can be actuated by acoustic waves, vibration of a moving car, and tapping motion is built using a 3D-printing technique. The MULTENG can generate an open-circuit voltage of up to 396 V and a short-circuit current of up to 1.62 mA, and can power 38 LEDs. The layers of the triboelectric generator are made of polyetherimide nanopillars and chalcogenide core-shell nanofibers. PMID:25722118

  8. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, the non-cooperative targets may have strong maneuverability, which tends to cause a time-variant Doppler modulation and imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimations of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target of uniformly accelerated rotation, the rotational modulation in the echo is formulated as a chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat the multi-channel ISAR image formation as a joint-sparsity constraint optimization. Then, a modified orthogonal matching pursuit (OMP) algorithm is employed to solve the optimization problem to produce high-resolution range-Doppler (RD) images and chirp parameter estimation. The 3D target geometry and motion are then estimated using the acquired RD images and chirp parameters. Herein, a joint estimation approach of 3D geometry and rotation motion is presented to realize outlier removal and error reduction. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data are performed to confirm the effectiveness of the proposed algorithm. PMID:26930684
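The sparse-recovery step is a modified orthogonal matching pursuit over a chirp-Fourier dictionary; a generic OMP sketch over an arbitrary dictionary D (not the paper's modified, joint-sparsity variant) looks like this:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse x with y ~= D @ x.

    D's columns are the dictionary atoms (chirp-Fourier atoms in the
    paper's setting); k is the assumed sparsity level.
    """
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        # re-fit all selected atoms jointly (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

In the paper the per-atom chirp parameters recovered this way feed the subsequent 3D geometry and rotation estimation; the joint multi-channel version additionally forces the channels to share a common support.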

  9. Ground motion simulations in Marmara (Turkey) region from 3D finite difference method

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Douglas, John

    2016-04-01

    In the framework of the European project MARSite (2012-2016), one of the main contributions from our research team was to provide ground-motion simulations for the Marmara region from various earthquake source scenarios. We adopted a 3D finite difference code, taking into account the 3D structure around the Sea of Marmara (including the bathymetry) and the sea layer. We simulated two moderate earthquakes (about Mw4.5) and found that the 3D structure improves the waveforms significantly compared to the 1D layer model. Simulations were carried out for different earthquakes (moderate point sources and large finite sources) in order to provide shake maps (Aochi and Ulrich, BSSA, 2015), to study the variability of ground-motion parameters (Douglas & Aochi, BSSA, 2016) as well as to provide synthetic seismograms for the blind inversion tests (Diao et al., GJI, 2016). The results are also planned to be integrated in broadband ground-motion simulations, tsunami generation and simulations of triggered landslides (in progress by different partners). The simulations are freely shared among the partners via the internet and the visualization of the results is diffused on the project's homepage. All these simulations should be seen as a reference for this region, as they are based on the latest knowledge obtained during the MARSite project, although refinement and validation of the model parameters and the simulations remain a continuing research task relying on ongoing observations. The numerical code used, the models and the simulations are available on demand.

  10. Free-Breathing 3D Whole Heart Black Blood Imaging with Motion Sensitized Driven Equilibrium

    PubMed Central

    Srinivasan, Subashini; Hu, Peng; Kissinger, Kraig V.; Goddu, Beth; Goepfert, Lois; Schmidt, Ehud J.; Kozerke, Sebastian; Nezafat, Reza

    2012-01-01

    Purpose To assess the efficacy and robustness of motion sensitized driven equilibrium (MSDE) for blood suppression in volumetric 3D whole heart cardiac MR. Materials and Methods To investigate the efficacy of MSDE for blood suppression and its associated myocardial SNR loss across different imaging sequences, 7 healthy adult subjects were imaged using 3D ECG-triggered MSDE-prep T1-weighted turbo spin echo (TSE) and spoiled gradient echo (GRE) sequences, after optimization of the MSDE parameters in a pilot study of 5 subjects. Imaging artifacts and myocardial and blood SNR were assessed. Subsequently, the feasibility of isotropic spatial resolution MSDE-prep black-blood imaging was assessed in 6 subjects. Finally, 15 patients with known or suspected cardiovascular disease were recruited to be imaged using a conventional multi-slice 2D DIR TSE imaging sequence and 3D MSDE-prep spoiled GRE. Results The MSDE-prep yields significant blood suppression (75-92%), enabling volumetric 3D black-blood assessment of the whole heart with significantly improved visualization of the chamber walls. The MSDE-prep also allowed successful acquisition of black-blood images with isotropic spatial resolution. In the patient study, 3D black-blood MSDE-prep and DIR resulted in similar blood suppression in the LV and RV walls, but the MSDE-prep had superior myocardial signal and wall sharpness. Conclusion MSDE-prep allows volumetric black-blood imaging of the heart. PMID:22517477

  11. Inferred motion perception of light sources in 3D scenes is color-blind.

    PubMed

    Gerhard, Holly E; Maloney, Laurence T

    2013-01-01

    In everyday scenes, the illuminant can vary spatially in chromaticity and luminance, and change over time (e.g. sunset). Such variation generates dramatic image effects too complex for any contemporary machine vision system to overcome, yet human observers are remarkably successful at inferring object properties separately from lighting, an ability linked with estimation and tracking of light field parameters. Which information does the visual system use to infer light field dynamics? Here, we specifically ask whether color contributes to inferred light source motion. Observers viewed 3D surfaces illuminated by an out-of-view moving collimated source (sun) and a diffuse source (sky). In half of the trials, the two sources differed in chromaticity, thereby providing more information about motion direction. Observers discriminated light motion direction above chance, and only the least sensitive observer benefited slightly from the added color information, suggesting that color plays only a very minor role for inferring light field dynamics. PMID:23755354

  12. Towards robust 3D visual tracking for motion compensation in beating heart surgery.

    PubMed

    Richa, Rogério; Bó, Antônio P L; Poignet, Philippe

    2011-06-01

    In the context of minimally invasive cardiac surgery, active vision-based motion compensation schemes have been proposed for mitigating problems related to physiological motion. However, robust and accurate visual tracking remains a difficult task. The purpose of this paper is to present a robust visual tracking method that estimates the 3D temporal and spatial deformation of the heart surface using stereo endoscopic images. The novelty is the combination of a visual tracking method based on a Thin-Plate Spline (TPS) model for representing the heart surface deformations with a temporal heart motion model based on a time-varying dual Fourier series for overcoming tracking disturbances or failures. The considerable improvements in tracking robustness in the presence of specular reflections and occlusions are demonstrated through experiments using images of in vivo porcine and human beating hearts. PMID:21277821

  13. 3-D Human Action Recognition by Shape Analysis of Motion Trajectories on Riemannian Manifold.

    PubMed

    Devanne, Maxime; Wannous, Hazem; Berretti, Stefano; Pala, Pietro; Daoudi, Mohamed; Del Bimbo, Alberto

    2015-07-01

    Recognizing human actions in 3-D video sequences is an important open problem that is currently at the heart of many research domains including surveillance, natural interfaces and rehabilitation. However, the design and development of models for action recognition that are both accurate and efficient is a challenging task due to the variability of the human pose, clothing and appearance. In this paper, we propose a new framework to extract a compact representation of a human action captured through a depth sensor, and enable accurate action recognition. The proposed solution builds on fitting a human skeleton model to acquired data so as to represent the 3-D coordinates of the joints and their change over time as a trajectory in a suitable action space. Thanks to such a 3-D joint-based framework, the proposed solution can capture both the shape and the dynamics of the human body simultaneously. The action recognition problem is then formulated as the problem of computing the similarity between the shape of trajectories in a Riemannian manifold. Classification using k-nearest neighbors is finally performed on this manifold, taking advantage of Riemannian geometry in the open curve shape space. Experiments are carried out on four representative benchmarks to demonstrate the potential of the proposed solution in terms of accuracy and latency for low-latency action recognition. Comparative results with state-of-the-art methods are reported. PMID:25216492

  14. Integration of 3D Structure from Disparity into Biological Motion Perception Independent of Depth Awareness

    PubMed Central

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers’ depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception. PMID:24586622

  15. A Little Knowledge of Ground Motion: Explaining 3-D Physics-Based Modeling to Engineers

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2014-12-01

    Users of earthquake planning scenarios require the ground-motion map to be credible enough to justify costly planning efforts, but not all ground-motion maps are right for all uses. There are two common ways to create a map of ground motion for a hypothetical earthquake. One approach is to map the median shaking estimated by empirical attenuation relationships. The other uses 3-D physics-based modeling, in which one analyzes a mathematical model of the Earth's crust near the fault rupture and calculates the generation and propagation of seismic waves from source to ground surface by first principles. The two approaches produce different-looking maps. The more-familiar median maps smooth out variability and correlation. Using them in a planning scenario can lead to a systematic underestimation of damage and loss, and could leave a community underprepared for realistic shaking. The 3-D maps show variability, including some very high values that can disconcert non-scientists. So when the USGS Science Application for Risk Reduction's (SAFRR) Haywired scenario project selected 3-D maps, it was necessary to explain to scenario users—especially engineers who often use median maps—the differences, advantages, and disadvantages of the two approaches. We used authority, empirical evidence, and theory to support our choice. We prefaced our explanation with SAFRR's policy of using the best available earth science, and cited the credentials of the maps' developers and the reputation of the journal in which they published the maps. We cited recorded examples from past earthquakes of extreme ground motions that are like those in the scenario map. We explained the maps on theoretical grounds as well, explaining well-established causes of variability: directivity, basin effects, and source parameters. The largest mapped motions relate to potentially unfamiliar extreme-value theory, so we used analogies to human longevity and the average age of the oldest person in samples of

  16. Capturing natural-colour 3D models of insects for species discovery and diagnostics.

    PubMed

    Nguyen, Chuong V; Lovell, David R; Adcock, Matt; La Salle, John

    2014-01-01

    Collections of biological specimens are fundamental to scientific understanding and characterization of natural diversity-past, present and future. This paper presents a system for liberating useful information from physical collections by bringing specimens into the digital domain so they can be more readily shared, analyzed, annotated and compared. It focuses on insects and is strongly motivated by the desire to accelerate and augment current practices in insect taxonomy which predominantly use text, 2D diagrams and images to describe and characterize species. While these traditional kinds of descriptions are informative and useful, they cannot cover insect specimens "from all angles" and precious specimens are still exchanged between researchers and collections for this reason. Furthermore, insects can be complex in structure and pose many challenges to computer vision systems. We present a new prototype for a practical, cost-effective system of off-the-shelf components to acquire natural-colour 3D models of insects from around 3 mm to 30 mm in length. ("Natural-colour" is used to contrast with "false-colour", i.e., colour generated from, or applied to, gray-scale data post-acquisition.) Colour images are captured from different angles and focal depths using a digital single lens reflex (DSLR) camera rig and two-axis turntable. These 2D images are processed into 3D reconstructions using software based on a visual hull algorithm. The resulting models are compact (around 10 megabytes), afford excellent optical resolution, and can be readily embedded into documents and web pages, as well as viewed on mobile devices. The system is portable, safe, relatively affordable, and complements the sort of volumetric data that can be acquired by computed tomography. This system provides a new way to augment the description and documentation of insect species holotypes, reducing the need to handle or ship specimens. It opens up new opportunities to collect data for research

  17. Capturing Natural-Colour 3D Models of Insects for Species Discovery and Diagnostics

    PubMed Central

    Nguyen, Chuong V.; Lovell, David R.; Adcock, Matt; La Salle, John

    2014-01-01

    Collections of biological specimens are fundamental to scientific understanding and characterization of natural diversity—past, present and future. This paper presents a system for liberating useful information from physical collections by bringing specimens into the digital domain so they can be more readily shared, analyzed, annotated and compared. It focuses on insects and is strongly motivated by the desire to accelerate and augment current practices in insect taxonomy which predominantly use text, 2D diagrams and images to describe and characterize species. While these traditional kinds of descriptions are informative and useful, they cannot cover insect specimens “from all angles” and precious specimens are still exchanged between researchers and collections for this reason. Furthermore, insects can be complex in structure and pose many challenges to computer vision systems. We present a new prototype for a practical, cost-effective system of off-the-shelf components to acquire natural-colour 3D models of insects from around 3 mm to 30 mm in length. (“Natural-colour” is used to contrast with “false-colour”, i.e., colour generated from, or applied to, gray-scale data post-acquisition.) Colour images are captured from different angles and focal depths using a digital single lens reflex (DSLR) camera rig and two-axis turntable. These 2D images are processed into 3D reconstructions using software based on a visual hull algorithm. The resulting models are compact (around 10 megabytes), afford excellent optical resolution, and can be readily embedded into documents and web pages, as well as viewed on mobile devices. The system is portable, safe, relatively affordable, and complements the sort of volumetric data that can be acquired by computed tomography. This system provides a new way to augment the description and documentation of insect species holotypes, reducing the need to handle or ship specimens. It opens up new opportunities to collect data

  18. 3D Motion Planning Algorithms for Steerable Needles Using Inverse Kinematics

    PubMed Central

    Duindam, Vincent; Xu, Jijie; Alterovitz, Ron; Sastry, Shankar; Goldberg, Ken

    2010-01-01

    Steerable needles can be used in medical applications to reach targets behind sensitive or impenetrable areas. The kinematics of a steerable needle are nonholonomic and, in 2D, equivalent to a Dubins car with constant radius of curvature. In 3D, the needle can be interpreted as an airplane with constant speed and pitch rate, zero yaw, and controllable roll angle. We present a constant-time motion planning algorithm for steerable needles based on explicit geometric inverse kinematics similar to the classic Paden-Kahan subproblems. Reachability and path competitivity are analyzed using analytic comparisons with shortest path solutions for the Dubins car (for 2D) and numerical simulations (for 3D). We also present an algorithm for local path adaptation using null-space results from redundant manipulator theory. Finally, we discuss several ways to use and extend the inverse kinematics solution to generate needle paths that avoid obstacles. PMID:21359051
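
    The 3D kinematics described above (constant speed, constant curvature, zero yaw, controllable roll) can be integrated numerically as a sanity check. The sketch below uses a generic SE(3) Euler integrator with an exact SO(3) update, not the paper's closed-form inverse kinematics; parameter values are illustrative:

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix such that skew(w) @ v == cross(w, v)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy],
                     [wz, 0.0, -wx],
                     [-wy, wx, 0.0]])

def expm_so3(w):
    """Rodrigues' formula for the SO(3) matrix exponential of skew(w)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def step(g, v, kappa, omega, dt):
    """One integration step of bevel-tip needle kinematics in SE(3):
    insertion speed v along the tip z-axis, bending rate kappa*v about
    the tip x-axis (constant curvature), controllable roll rate omega."""
    g_new = g.copy()
    R = g[:3, :3]
    g_new[:3, :3] = R @ expm_so3(np.array([kappa * v, 0.0, omega]) * dt)
    g_new[:3, 3] = g[:3, 3] + R @ np.array([0.0, 0.0, v]) * dt
    return g_new

# Demo: with zero roll the tip traces a circle of radius 1/kappa, so after
# arc length 2*pi (kappa = 1) it should return near its starting pose.
g = np.eye(4)
dt = 2 * np.pi / 1000
for _ in range(1000):
    g = step(g, v=1.0, kappa=1.0, omega=0.0, dt=dt)
```

    With zero roll the trajectory closes into a planar circle of radius 1/kappa, matching the Dubins-car interpretation of the 2D case.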

  19. 3D delivered dose assessment using a 4DCT-based motion model

    SciTech Connect

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Mishra, Pankaj E-mail: jhlewis@lroc.harvard.edu; Lewis, John H. E-mail: jhlewis@lroc.harvard.edu; Seco, Joao

    2015-06-15

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images
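
    Steps (1) and (2) of the approach, a linear DVF motion model whose weights are fitted to a projection measurement, can be sketched with a toy PCA model. All sizes, the synthetic DVFs, and the stand-in linear projection operator below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Toy version of the two-stage approach: (1) build a linear motion model
# from "4DCT" DVF samples, (2) fit its weights so a linear measurement of
# the modeled DVF matches the observation.
rng = np.random.default_rng(1)
n_phases, n_dofs = 10, 3000              # breathing phases; 1000 voxels x 3
true_basis = rng.standard_normal((n_dofs, 2))
t = np.linspace(0, 2 * np.pi, n_phases, endpoint=False)
dvfs = (np.cos(t)[:, None] * true_basis[:, 0]
        + 0.5 * np.sin(t)[:, None] * true_basis[:, 1])  # (n_phases, n_dofs)

# (1) Motion model: mean DVF plus leading principal components.
mean_dvf = dvfs.mean(axis=0)
_, _, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
basis = Vt[:2].T                         # (n_dofs, 2) basis DVFs

# (2) Weights from a measurement: here a random linear operator stands in
# for the 2D projection of the deformed volume at one timepoint.
P = rng.standard_normal((200, n_dofs))
dvf_true = 0.7 * true_basis[:, 0] - 0.2 * true_basis[:, 1]
w, *_ = np.linalg.lstsq(P @ basis, P @ dvf_true - P @ mean_dvf, rcond=None)
dvf_est = mean_dvf + basis @ w           # reconstructed "fluoroscopic" DVF
```

    In the paper the measurement operator is the (nonlinear) projection of the deformed CT volume and the fit is an optimization over the basis weights; the linear stand-in above only illustrates the model structure.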

  20. 3D delivered dose assessment using a 4DCT-based motion model

    PubMed Central

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Seco, Joao; Mishra, Pankaj; Lewis, John H.

    2015-01-01

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images

  1. Computational optical-sectioning microscopy for 3D quantization of cell motion: results and challenges

    NASA Astrophysics Data System (ADS)

    McNally, James G.

    1994-09-01

    How cells move and navigate within a 3D tissue mass is of central importance in such diverse problems as embryonic development, wound healing and metastasis. This locomotion can now be visualized and quantified by using computational optical-sectioning microscopy. In this approach, a series of 2D images at different depths in a specimen is stacked to construct a 3D image, and then, with knowledge of the microscope's point-spread function, the actual distribution of fluorescent intensity in the specimen is estimated via computation. When coupled with wide-field optics and a cooled CCD camera, this approach permits non-destructive 3D imaging of living specimens over long time periods. With these techniques, we have observed a complex diversity of motile behaviors in a model embryonic system, the cellular slime mold Dictyostelium. To understand the mechanisms which control these various behaviors, we are examining motion in various Dictyostelium mutants with known defects in proteins thought to be essential for signal reception, cell-cell adhesion or locomotion. This application of computational techniques to analyze 3D cell locomotion raises several technical challenges. Image restoration techniques must be fast enough to process numerous 1 Gbyte time-lapse data sets (16 Mbytes per 3D image × 60 time points). Because some cells are weakly labeled and background intensity is often high due to unincorporated dye, the SNR in some of these images is poor. Currently, the images are processed by a regularized linear least-squares restoration method, and occasionally by a maximum-likelihood method. Also required for these studies are accurate automated tracking procedures to generate both 3D trajectories for individual cells and 3D flows for a group of cells. Tracking is currently done independently for each cell, using a cell's image as a template to search for a similar image at the next time point. Finally, sophisticated visualization techniques are needed to view the

  2. Broadband Near-Field Ground Motion Simulations in 3D Scattering Media

    NASA Astrophysics Data System (ADS)

    Imperatori, Walter; Mai, Martin

    2013-04-01

    The heterogeneous nature of Earth's crust is manifested in the scattering of propagating seismic waves. In recent years, different techniques have been developed to include this phenomenon in broadband ground-motion calculations, either considering scattering as a semi-stochastic or purely stochastic process. In this study, we simulate broadband (0-10 Hz) ground motions with a 3D finite-difference wave propagation solver in several 3D media characterized by Von Karman correlation functions with different correlation lengths and standard deviation values. Our goal is to investigate scattering characteristics and their influence on the seismic wave-field at short and intermediate distances from the source in terms of ground motion parameters. We also examine other relevant scattering-related phenomena, such as the loss of radiation pattern and the directivity breakdown. We first simulate broadband ground motions for a point-source characterized by a classic omega-squared spectrum model. Fault finiteness is then introduced by means of a Haskell-type source model presenting both sub-shear and super-shear rupture speed. Results indicate that scattering plays an important role in ground motion even at short distances from the source, where source effects are thought to be dominant. In particular, peak ground motion parameters can be affected even at relatively low frequencies, implying that earthquake ground-motion simulations should include scattering also for PGV calculations. At the same time, we find a gradual loss of the source signature in the 2-5 Hz frequency range, together with a distortion of the Mach cones in the case of super-shear rupture. For more complex source models and a truly heterogeneous Earth, these effects may occur even at lower frequencies. Our simulations suggest that Von Karman correlation functions with correlation length between several hundred meters and few kilometers, Hurst exponent around 0.3 and standard deviation in the 5-10% range
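
    A common way to build such Von Karman random media is spectral filtering of white noise; the abstract does not specify the authors' exact construction, so the 2D sketch below, with illustrative grid size, correlation length, Hurst exponent, and 5% standard deviation, is only a generic example of the technique:

```python
import numpy as np

# White noise shaped by a 2D Von Karman power spectrum
# ~ (1 + k^2 a^2)^-(H + E/2) with E = 2, then rescaled to the target
# standard deviation of the velocity perturbations.
rng = np.random.default_rng(42)
n, dx = 512, 100.0                    # grid points, spacing in metres
a, H, sigma = 2000.0, 0.3, 0.05      # correlation length, Hurst exp., std

k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
psd = (1.0 + (KX**2 + KY**2) * a**2) ** (-(H + 1.0))

white = np.fft.fft2(rng.standard_normal((n, n)))
field = np.real(np.fft.ifft2(white * np.sqrt(psd)))
field *= sigma / field.std()          # impose the 5% perturbation level
```

    The resulting field would be superimposed multiplicatively on a background velocity model before being fed to the finite-difference solver.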

  3. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  4. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    PubMed

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  5. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations were extracted consistently using the same motion estimation method for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset were then evaluated. Compared to ground truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In an MRI therapy-guidance scenario, imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.
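
    The evaluation protocol (downsample, re-estimate, compare to the original) can be illustrated with a toy 3D field. The sinusoidal deformation component and nearest-neighbour upsampling below are illustrative stand-ins for the paper's registration pipeline, not its actual method:

```python
import numpy as np

# Toy check of the hypothesis: a smooth deformation component sampled on a
# coarser grid still approximates the fine-grid field well.
n, f = 64, 2
z, y, x = np.meshgrid(*[np.linspace(0, 1, n)] * 3, indexing="ij")
dvf = 5.0 * np.sin(2 * np.pi * z) * np.cos(2 * np.pi * y)  # mm, one component

coarse = dvf[::f, ::f, ::f]                  # "acquired" at half resolution
up = np.repeat(np.repeat(np.repeat(coarse, f, axis=0), f, axis=1), f, axis=2)
rms_err = np.sqrt(np.mean((dvf - up) ** 2))
rel_err = rms_err / np.sqrt(np.mean(dvf ** 2))   # relative RMS error
```

    Because physiological deformation varies smoothly on the scale of a few millimetres, the relative error stays small under moderate downsampling, which is the effect the paper quantifies on real abdominal 4D data.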

  6. 3D motion tracking of the heart using Harmonic Phase (HARP) isosurfaces

    NASA Astrophysics Data System (ADS)

    Soliman, Abraam S.; Osman, Nael F.

    2010-03-01

    Tags are non-invasive features induced in the heart muscle that enable the tracking of heart motion. Each tag line, in fact, corresponds to a 3D tag surface that deforms with the heart muscle during the cardiac cycle. Tracking of tag surface deformation is useful for the analysis of left ventricular motion. Cardiac material markers (Kerwin et al, MIA, 1997) can be obtained from the intersections of orthogonal surfaces which can be reconstructed from short- and long-axis tagged images. The proposed method uses the Harmonic Phase (HARP) method for tracking tag lines corresponding to a specific harmonic phase value, and the reconstruction of grid tag surfaces is then achieved by a Delaunay triangulation-based interpolation of the sparse tag points. Having three different tag orientations from short- and long-axis images, the proposed method showed the deformation of 3D tag surfaces during the cardiac cycle. Previous work on tag surface reconstruction was restricted to the "dark" tag lines; however, the use of HARP as proposed enables the reconstruction of isosurfaces based on their harmonic phase values. The use of HARP also provides a fast and accurate way to identify and track tag lines, and hence to generate the surfaces.

  7. Interactive Motion Planning for Steerable Needles in 3D Environments with Obstacles

    PubMed Central

    Patil, Sachin; Alterovitz, Ron

    2011-01-01

    Bevel-tip steerable needles for minimally invasive medical procedures can be used to reach clinical targets that are behind sensitive or impenetrable areas and are inaccessible to straight, rigid needles. We present a fast algorithm that can compute motion plans for steerable needles to reach targets in complex, 3D environments with obstacles at interactive rates. The fast computation makes this method suitable for online control of the steerable needle based on 3D imaging feedback and allows physicians to interactively edit the planning environment in real-time by adding obstacle definitions as they are discovered or become relevant. We achieve this fast performance by using a Rapidly Exploring Random Tree (RRT) combined with a reachability-guided sampling heuristic to alleviate the sensitivity of the RRT planner to the choice of the distance metric. We also relax the constraint of constant-curvature needle trajectories by relying on duty-cycling to realize bounded-curvature needle trajectories. These characteristics enable us to achieve orders of magnitude speed-up compared to previous approaches; we compute steerable needle motion plans in under 1 second for challenging environments containing complex, polyhedral obstacles and narrow passages. PMID:22294214

  8. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking.

    PubMed

    Dettmer, Simon L; Keyser, Ulrich F; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces. PMID:24593372
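
    The local MSD-vs-time idea can be sketched as follows. The binning by starting position and the free-diffusion demo below are a simplified stand-in for the authors' analysis (which resolves diffusion in 3D within microfluidic channels); all parameter values are illustrative:

```python
import numpy as np

def local_msd(track, dt, lag_max, bin_edges):
    """Local MSD-vs-lag curves: squared displacements are binned by the
    particle's starting x-position, giving spatially resolved statistics."""
    x0 = track[:, 0]
    n_bins = len(bin_edges) - 1
    msd = np.zeros((n_bins, lag_max))
    counts = np.zeros((n_bins, lag_max), dtype=int)
    for lag in range(1, lag_max + 1):
        disp2 = np.sum((track[lag:] - track[:-lag]) ** 2, axis=1)
        bins = np.digitize(x0[:-lag], bin_edges) - 1
        for b in range(n_bins):
            sel = bins == b
            if sel.any():
                msd[b, lag - 1] = disp2[sel].mean()
                counts[b, lag - 1] = sel.sum()
    lags = dt * np.arange(1, lag_max + 1)
    return lags, msd, counts

# Demo: free 3D Brownian motion, where MSD(t) = 6*D*t in every region, so
# each spatial bin should recover the same diffusion coefficient D.
rng = np.random.default_rng(7)
D, dt, n = 0.5, 0.01, 200_000
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n, 3))
track = np.cumsum(steps, axis=0)
lags, msd, counts = local_msd(track, dt, lag_max=5,
                              bin_edges=np.array([-1e9, 0.0, 1e9]))
D_est = msd[:, 0].mean() / (6 * dt)   # close to D in this free-diffusion case
```

    In a hindered geometry the per-bin slopes differ, which is exactly the spatial dependence of the diffusion coefficient the local curves are designed to expose.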

  9. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  10. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  11. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average Dice similarity coefficient (DSC) of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697

  12. New method for detection of complex 3D fracture motion - Verification of an optical motion analysis system for biomechanical studies

    PubMed Central

    2012-01-01

    Background Fracture-healing depends on interfragmentary motion. For improved osteosynthesis and fracture-healing, the micromotion between fracture fragments is undergoing intensive research. The detection of 3D micromotions at the fracture gap still presents a challenge for conventional tactile measurement systems. Optical measurement systems may be easier to use than conventional systems, but, as yet, cannot guarantee accuracy. The purpose of this study was to validate the optical measurement system PONTOS 5M for use in biomechanical research, including measurement of micromotion. Methods A standardized transverse fracture model was created to detect interfragmentary motions under axial loadings of up to 200 N. Measurements were performed using the optical measurement system and compared with a conventional high-accuracy tactile system consisting of 3 standard digital dial indicators (1 μm resolution; 5 μm error limit). Results We found that the deviation in mean detected motion between the systems was at most 5.3 μm, indicating that detection of micromotion was possible with the optical measurement system. Furthermore, the optical measurement system offered two considerable advantages: only with the optical system could interfragmentary motion be analyzed directly at the fracture gap, and its calibration could be performed faster, more safely and more easily than that of the tactile system. Conclusion The PONTOS 5M optical measurement system appears to be a favorable alternative to previously used tactile measurement systems for biomechanical applications. Easy handling, combined with a high accuracy for 3D detection of micromotions (≤ 5 μm), suggests the likelihood of high user acceptance. This study was performed in the context of the deployment of a new implant (dynamic locking screw; Synthes, Oberdorf, Switzerland). PMID:22405047

  13. Capture and 3D culture of colonic crypts and colonoids in a microarray platform

    PubMed Central

    Wang, Yuli; Ahmad, Asad A.; Shah, Pavak K.; Sims, Christopher E.; Magness, Scott T.; Allbritton, Nancy L.

    2013-01-01

    Crypts are the basic structural and functional units of colonic epithelium and can be isolated from the colon and cultured in vitro into multi-cell spheroids termed “colonoids”. Both crypts and colonoids are ideal building blocks for construction of an in vitro tissue model of the colon. Here we propose and test a microengineered platform for capture and in vitro 3D culture of colonic crypts and colonoids. An integrated platform was fabricated from polydimethylsiloxane which contained two fluidic layers separated by an array of cylindrical microwells (150-μm diameter, 150-μm depth) with perforated bottoms (30-μm opening, 10-μm depth) termed “microstrainers”. As fluid moved through the array, crypts or colonoids were retained in the microstrainers with a >90% array-filling efficiency. Matrigel as an extracellular matrix was then applied to the microstrainers to generate isolated Matrigel pockets encapsulating the crypts or colonoids. After supplying the essential growth factors, epidermal growth factor, Wnt-3A, R-spondin 2 and noggin, 63±13% of the crypts and 77±8% of the colonoids cultured in the microstrainers over a 48–72 h period formed viable 3D colonoids. Thus colonoid growth on the array was similar to that under standard culture conditions (78±5%). Additionally the colonoids displayed the same morphology and similar numbers of stem and progenitor cells as those under standard culture conditions. Immunofluorescence staining confirmed that the differentiated cell-types of the colon, goblet cells, enteroendocrine cells and absorptive enterocytes, formed on the array. To demonstrate the utility of the array in tracking colonoid fate, quantitative fluorescence analysis was performed on the arrayed colonoids exposed to reagents such as Wnt-3A and the γ-secretase inhibitor LY-411575. The successful formation of viable, multi-cell type colonic tissue on the microengineered platform represents a first step in the building of a

  14. 3D cardiac motion reconstruction from CT data and tagged MRI.

    PubMed

    Wang, Xiaoxu; Mihalef, Viorel; Qian, Zhen; Voros, Szilard; Metaxas, Dimitris

    2012-01-01

    In this paper we present a novel method for left ventricle (LV) endocardium motion reconstruction using high resolution CT data and tagged MRI. High resolution CT data provide anatomic details on the LV endocardial surface, such as the papillary muscle and trabeculae carneae. Tagged MRI provides better time resolution. The combination of these two imaging techniques can give us a better understanding of left ventricle motion. The high resolution CT images are segmented with the mean shift method to generate the LV endocardium mesh. The meshless deformable model, built with the high resolution endocardium surface from CT data, is fitted to the tagged MRI of the same phase. 3D deformation of the myocardium is computed with Lagrangian dynamics and local Laplacian deformation. The segmented inner surface of the left ventricle is compared with images of the heart's inner surface and shows high agreement. The papillary muscles are attached to the inner surface by their roots. The free wall of the left ventricle inner surface is covered with trabeculae carneae. The deformation of the heart wall and the papillary muscle in the first half of the cardiac cycle is presented. The motion reconstruction results are very close to live heart video. PMID:23366825

  15. 3D Cardiac Motion Reconstruction from CT Data and Tagged MRI

    PubMed Central

    Wang, Xiaoxu; Mihalef, Viorel; Qian, Zhen; Voros, Szilard; Metaxas, Dimitris

    2016-01-01

    In this paper we present a novel method for left ventricle (LV) endocardium motion reconstruction using high resolution CT data and tagged MRI. High resolution CT data provide anatomic details on the LV endocardial surface, such as the papillary muscle and trabeculae carneae. Tagged MRI provides better time resolution. The combination of these two imaging techniques can give us a better understanding of left ventricle motion. The high resolution CT images are segmented with the mean shift method to generate the LV endocardium mesh. The meshless deformable model, built with the high resolution endocardium surface from CT data, is fitted to the tagged MRI of the same phase. 3D deformation of the myocardium is computed with Lagrangian dynamics and local Laplacian deformation. The segmented inner surface of the left ventricle is compared with images of the heart's inner surface and shows high agreement. The papillary muscles are attached to the inner surface by their roots. The free wall of the left ventricle inner surface is covered with trabeculae carneae. The deformation of the heart wall and the papillary muscle in the first half of the cardiac cycle is presented. The motion reconstruction results are very close to live heart video. PMID:23366825

  16. 3D hand motion trajectory prediction from EEG mu and beta bandpower.

    PubMed

    Korik, A; Sosnik, R; Siddique, N; Coyle, D

    2016-01-01

    A motion trajectory prediction (MTP) - based brain-computer interface (BCI) aims to reconstruct the three-dimensional (3D) trajectory of upper limb movement using electroencephalography (EEG). The most common MTP BCI employs a time series of bandpass-filtered EEG potentials (referred to here as the potential time-series, PTS, model) for reconstructing the trajectory of a 3D limb movement using multiple linear regression. These studies report the best accuracy when a 0.5-2Hz bandpass filter is applied to the EEG. In the present study, we show that the spatiotemporal power distributions of the theta (4-8Hz), mu (8-12Hz), and beta (12-28Hz) bands are more robust for movement trajectory decoding when the standard PTS approach is replaced with time-varying bandpower values of a specified EEG band, i.e., with a bandpower time-series (BTS) model. A comprehensive analysis comprising three subjects performing pointing movements with the dominant right arm toward six targets is presented. Our results show that the BTS model produces significantly higher MTP accuracy (R~0.45) compared to the standard PTS model (R~0.2). In the case of the BTS model, the highest accuracy was achieved across the three subjects typically in the mu (8-12Hz) and low-beta (12-18Hz) bands. Additionally, we highlight a limitation of the commonly used PTS model and illustrate how this model may be suboptimal for decoding motion trajectory relevant information. Although our results, showing that the mu and beta bands are prominent for MTP, are not in line with other MTP studies, they are consistent with the extensive literature on classical multiclass sensorimotor rhythm-based BCI studies (classification of limbs as opposed to motion trajectory prediction), which report the best accuracy of imagined limb movement classification using power values of mu and beta frequency bands. The methods proposed here provide a positive step toward noninvasive decoding of imagined 3D hand movements for movement-free BCIs

  17. Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions

    NASA Astrophysics Data System (ADS)

    Khoury, Mehdi; Liu, Honghai

    This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.

  18. Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models

    PubMed Central

    Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method

  19. Neural network techniques for invariant recognition and motion tracking of 3-D objects

    SciTech Connect

    Hwang, J.N.; Tseng, Y.H.

    1995-12-31

    Invariant recognition and motion tracking of 3-D objects under partial object viewing are difficult tasks. In this paper, we introduce a new neural network solution that is robust to noise corruption and partial viewing of objects. This method directly utilizes the acquired range data and requires no feature extraction. In the proposed approach, the object is first parametrically represented by a continuous distance transformation neural network (CDTNN) which is trained by the surface points of the exemplar object. When later presented with the surface points of an unknown object, this parametric representation allows the mismatch information to back-propagate through the CDTNN to gradually determine the best similarity transformation (translation and rotation) of the unknown object. The mismatch can be directly measured in the reconstructed representation domain between the model and the unknown object.

  20. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Lan, Tian; Ni, Guoqiang

    2013-07-01

    A motion error model of 3D coordinates is established and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence of laser point cloud spacing on the positioning errors is small. In the model, the positioning errors follow a simple harmonic vibration whose amplitude envelope gradually decreases as the vibration frequency increases. When the number of vibration periods is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is smaller than the in-plane error, and within the plane the error in the scanning direction is smaller than the error in the flight direction. These conclusions are verified through the analysis of flight test data.
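The qualitative claim that the error envelope shrinks as the vibration frequency grows can be illustrated with a simple averaging model: if a sinusoidal platform vibration is averaged over a fixed integration window, the net coordinate error carries a sinc-shaped envelope in frequency. The amplitude, window length and frequencies below are invented for illustration; this is not the paper's full error model.

```python
import math

def avg_vibration_error(freq, amp=0.05, tau=0.01):
    """Net coordinate error when a sinusoidal platform vibration of
    frequency `freq` (Hz) and amplitude `amp` (m) is averaged over an
    integration window of length `tau` (s). The |sin(x)/x| envelope
    falls off as the vibration frequency grows (illustrative model)."""
    x = math.pi * freq * tau
    return amp if x == 0 else abs(amp * math.sin(x) / x)

# envelope values at a few vibration frequencies (Hz)
envelopes = [avg_vibration_error(f) for f in (1, 10, 50, 100, 250)]
```

The envelope decays roughly as 1/frequency, matching the abstract's statement that the amplitude envelope gradually reduces as the vibration frequency increases.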

  1. Modelling the 3D morphology and proper motions of the planetary nebula NGC 6302

    NASA Astrophysics Data System (ADS)

    Uscanga, L.; Velázquez, P. F.; Esquivel, A.; Raga, A. C.; Boumis, P.; Cantó, J.

    2014-08-01

    We present 3D hydrodynamical simulations of an isotropic fast wind interacting with a previously ejected toroidally shaped slow wind in order to model both the observed morphology and the kinematics of the planetary nebula (PN) NGC 6302. This source, also known as the Butterfly nebula, presents one of the most complex morphologies ever observed in PNe. From our numerical simulations, we have obtained an intensity map for the Hα emission to make a comparison with the Hubble Space Telescope (HST) observations of this object. We have also carried out a proper motion (PM) study from our numerical results, in order to compare with previous observational studies. We have found that the two-interacting-winds model reproduces the morphology of NGC 6302 well, and while the PMs in the models are similar to the observations, our results suggest that an acceleration mechanism is needed to explain the Hubble-type expansion found in HST observations.

  2. DLP technology application: 3D head tracking and motion correction in medical brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Wilm, Jakob; Paulsen, Rasmus R.; Højgaard, Liselotte; Larsen, Rasmus

    2014-03-01

    In this paper we present a novel sensing system, robust Near-infrared Structured Light Scanning (NIRSL), for three-dimensional human model scanning applications. Human model scanning has long been a challenging task owing to the varied appearance of hair and clothing and to body motion. Previous structured light scanning methods typically emitted visible coded light patterns onto static, opaque objects to establish correspondence between a projector and a camera for triangulation. The success of these methods relies on scanning objects whose surfaces reflect visible light well, such as plaster or light-colored cloth, whereas for human model scanning, conventional methods suffer from a low signal-to-noise ratio caused by the low contrast of visible light over the human body. The proposed robust NIRSL, implemented with near-infrared light, is capable of recovering dark surfaces, such as hair, dark jeans and black shoes, under visible illumination. Moreover, a successful structured light scan relies on the assumption that the subject is static during scanning; given natural body motion, this assumption is difficult to maintain when scanning human models. The proposed sensing system, by utilizing the new near-infrared-capable high speed LightCrafter DLP projector, is robust to motion and provides an accurate, high resolution three-dimensional point cloud, making our system more efficient and robust for human model reconstruction. Experimental results demonstrate that our system effectively and efficiently scans real human models with dark hair, jeans and shoes, is robust to human body motion, and produces accurate, high resolution 3D point clouds.

  3. The representation of moving 3-D objects in apparent motion perception.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2009-08-01

    In the present research, we investigated the depth information contained in the representations of apparently moving 3-D objects. By conducting three experiments, we measured the magnitude of representational momentum (RM) as an index of the consistency of an object's representation. Experiment 1A revealed that RM magnitude was greater when shaded, convex, apparently moving objects shifted to a flat circle than when they shifted to a shaded, concave, hemisphere. The difference diminished when the apparently moving objects were concave hemispheres (Experiment 1B). Using luminance-polarized circles, Experiment 2 confirmed that these results were not due to the luminance information of shading. Experiment 3 demonstrated that RM magnitude was greater when convex apparently moving objects shifted to particular blurred convex hemispheres with low-pass filtering than when they shifted to concave hemispheres. These results suggest that the internal object's representation in apparent motion contains incomplete depth information intermediate between that of 2-D and 3-D objects, particularly with regard to convexity information with low-spatial-frequency components. PMID:19633345

  4. Velocity and Density Models Incorporating the Cascadia Subduction Zone for 3D Earthquake Ground Motion Simulations

    USGS Publications Warehouse

    Stephenson, William J.

    2007-01-01

    INTRODUCTION In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2°N to 50°N latitude, and from about -122°W to -129°W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper-mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.

  5. Capturing Motion and Depth Before Cinematography.

    PubMed

    Wade, Nicholas J

    2016-01-01

    Visual representations of biological states have traditionally faced two problems: they lacked motion and depth. Attempts were made to supply these wants over many centuries, but the major advances were made in the early-nineteenth century. Motion was synthesized by sequences of slightly different images presented in rapid succession and depth was added by presenting slightly different images to each eye. Apparent motion and depth were combined some years later, but they tended to be applied separately. The major figures in this early period were Wheatstone, Plateau, Horner, Duboscq, Claudet, and Purkinje. Others later in the century, like Marey and Muybridge, were stimulated to extend the uses to which apparent motion and photography could be applied to examining body movements. These developments occurred before the birth of cinematography, and significant insights were derived from attempts to combine motion and depth. PMID:26684420

  6. Probabilistic Seismic Hazard Maps for Seattle, Washington, Based on 3D Ground-Motion Simulations

    NASA Astrophysics Data System (ADS)

    Frankel, A. D.; Stephenson, W. J.; Carver, D. L.; Williams, R. A.; Odum, J. K.; Rhea, S.

    2007-12-01

    We have produced probabilistic seismic hazard maps for Seattle using over 500 3D finite-difference simulations of ground motions from earthquakes in the Seattle fault zone, Cascadia subduction zone, South Whidbey Island fault, and background shallow and deep source areas. The maps depict 1 Hz response spectral accelerations with 2, 5, and 10% probabilities of being exceeded in 50 years. The simulations were used to generate site and source dependent amplification factors that are applied to rock-site attenuation relations. The maps incorporate essentially the same fault sources and earthquake recurrence times as the 2002 national seismic hazard maps. The simulations included basin surface waves and basin-edge focusing effects from a 3D model of the Seattle basin. The 3D velocity model was validated by modeling several earthquakes in the region, including the 2001 M6.8 Nisqually earthquake, that were recorded by our Seattle Urban Seismic Network and the Pacific Northwest Seismic Network. The simulations duplicate our observation that earthquakes from the south and southwest typically produce larger amplifications in the Seattle basin than earthquakes from other azimuths, relative to rock sites outside the basin. Finite-fault simulations were run for earthquakes along the Seattle fault zone, with magnitudes ranging from 6.6 to 7.2, so that the effects of rupture directivity were included. Nonlinear amplification factors for soft-soil sites of fill and alluvium were also applied in the maps. For the Cascadia subduction zone, 3D simulations with point sources at different locations along the zone were used to determine amplification factors across Seattle expected for great subduction-zone earthquakes. These new urban seismic hazard maps are based on determinations of hazard for 7236 sites with a spacing of 280 m. The maps show that the highest hazard locations for this frequency band (around 1 Hz) are soft-soil sites (fill and alluvium) within the Seattle basin and

  7. 3D reconstruction for sinusoidal motion based on different feature detection algorithms

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Jin; Deng, Huaxia; Yu, Liandong

    2015-02-01

    The dynamic testing of structures and components is an important area of research. Methods using sensors to measure vibration parameters have been studied extensively for years. With the rapid development of industrial high-speed cameras and computer hardware, stereo vision has become a focus of dynamic testing research owing to its non-contact, full-field, high-resolution and high-accuracy nature. However, comparatively little has been published on the three-dimensional (3D) reconstruction of feature points under dynamic conditions, and accurate recovery of the target object's motion is essential for any subsequent analysis. In this paper, an object undergoing sinusoidal motion is tracked by stereo vision and the accuracy of different feature detection algorithms is investigated. Three different marks, a dot, a square and a circle, are attached to the object, which is driven sinusoidally by a vibration table. The speeded-up robust features (SURF) algorithm is used to detect the dot, Harris corner detection locates the square corners, and the Hough transform positions the circle's center. After the pixel coordinates of the feature points are obtained, the stereo calibration parameters are used to perform three-dimensional reconstruction via the triangulation principle. Trajectories along the specified direction are obtained according to the vibration frequency and the camera acquisition frequency. Finally, the reconstruction accuracy of the different feature detection algorithms is compared.
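After feature detection yields pixel coordinates in both views, the triangulation step is, for an ideal rectified stereo pair, a one-line disparity-to-depth relation. The sketch below uses Z = f·B/d with a hypothetical focal length and baseline; a real system would use the full calibrated projection matrices rather than this rectified simplification.

```python
def triangulate(x_left, x_right, y, f=1200.0, baseline=0.12):
    """Reconstruct a 3D point (camera frame, metres) from pixel
    coordinates in a rectified stereo pair. `f` (pixels) and `baseline`
    (metres) are hypothetical calibration values."""
    disparity = x_left - x_right          # horizontal pixel shift between views
    if disparity <= 0:
        raise ValueError("non-positive disparity")
    Z = f * baseline / disparity          # depth from disparity
    return (Z * x_left / f, Z * y / f, Z) # back-project to 3D

X, Y, Z = triangulate(240.0, 180.0, 60.0)   # disparity of 60 px -> Z = 2.4 m
```

Repeating this for each frame of the high-speed sequence yields the 3D trajectory of a feature point, from which the sinusoidal motion parameters can be fitted.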

  8. Numerical Benchmark of 3D Ground Motion Simulation in the Alpine valley of Grenoble, France.

    NASA Astrophysics Data System (ADS)

    Tsuno, S.; Chaljub, E.; Cornou, C.; Bard, P.

    2006-12-01

    Thanks to the use of sophisticated numerical methods and access to increasing computational resources, our predictions of strong ground motion become more and more realistic and need to be carefully compared. We report our effort of benchmarking numerical methods of ground motion simulation in the case of the valley of Grenoble in the French Alps. The Grenoble valley is typical of a moderate seismicity area where strong site effects occur. The benchmark consisted in computing the seismic response of the `Y'-shaped Grenoble valley to (i) two local earthquakes (Ml<=3) for which recordings were available; and (ii) two local hypothetical events (Mw=6) occurring on the so-called Belledonne Border Fault (BBF) [1]. A free-style prediction was also proposed, in which participants were allowed to vary the source and/or the model parameters and were asked to provide the resulting uncertainty in their estimation of ground motion. We received a total of 18 contributions from 14 different groups; 7 of these use 3D methods, among which 3 could handle surface topography, the other half comprises predictions based upon 1D (2 contributions), 2D (4 contributions) and empirical Green's function (EGF) (3 contributions) methods. The maximum frequency analysed ranged between 2.5 Hz for 3D calculations and 40 Hz for EGF predictions. We present a detailed comparison of the different predictions using raw indicators (e.g. peak values of ground velocity and acceleration, Fourier spectra, site over reference spectral ratios, ...) as well as sophisticated misfit criteria based upon previous works [2,3]. We further discuss the variability in estimating the importance of particular effects such as non-linear rheology, or surface topography. References: [1] Thouvenot F. et al., The Belledonne Border Fault: identification of an active seismic strike-slip fault in the western Alps, Geophys. J. Int., 155 (1), p. 174-192, 2003. [2] Anderson J., Quantitative measure of the goodness-of-fit of

  9. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie viewing) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score>15) were 54.8% of the total sample after the 3D movie compared to 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times baseline after the 2D movie). Multivariate modeling with visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie, history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie and show time. Conclusions Seeing 3D movies can increase ratings of nausea, oculomotor and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies including examination of clinical signs in viewers are needed to reach conclusive evidence on the effects of 3D vision on spectators. PMID:23418530

  10. Weigh-In-Motion Waveform Capture Systems

    2007-09-01

    Input data is generated from multiple weight sensor signals embedded in a thin weighing pad. This information is then reduced to the total weight and position of a wheel rolling over the pad. This produces a signal which includes both the wheel weight and its inertial effects due to vehicle bounce, engine noise, and other mechanical vibrations. In order to extract accurate weight information for the wheel from the extraneous information, it is necessary to first capture the waveform and then perform a form of modal analysis. This program captures the above data and formats it into a usable form for analysis.

  11. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy using an US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate an US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm) but simple regularisation methods could be used to improve RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating, with displacement thresholds of 2 mm and 5 mm exhibiting an RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach which also accounts for potential prostate rotations could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.
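    The normalised cross correlation block matching used above can be sketched as follows. This is a minimal 2D illustration, not the authors' implementation; the block size, search range and synthetic speckle image are assumptions.

```python
import numpy as np

def ncc(template, window):
    """Zero-normalised cross-correlation of two equally sized blocks."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t**2).sum() * (w**2).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

def track_block(prev_frame, next_frame, top_left, block=16, search=8):
    """Find the displacement of a speckle block between two frames
    by exhaustive NCC search over a +/-`search` pixel window."""
    r0, c0 = top_left
    tmpl = prev_frame[r0:r0 + block, c0:c0 + block]
    best_score, best_disp = -1.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if (r < 0 or c < 0 or
                    r + block > next_frame.shape[0] or
                    c + block > next_frame.shape[1]):
                continue
            score = ncc(tmpl, next_frame[r:r + block, c:c + block])
            if score > best_score:
                best_score, best_disp = score, (dr, dc)
    return best_disp, best_score

# Synthetic check: shift a speckle image by (2, 3) pixels and recover it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (2, 3), axis=(0, 1))
disp, score = track_block(img, shifted, (20, 20))
```

    Extending the search to 3D volumes, or adding the regularisation of the swept-axis estimates mentioned in the abstract, are variations of the same exhaustive search.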

  12. 3D Modelling of Inaccessible Areas using UAV-based Aerial Photography and Structure from Motion

    NASA Astrophysics Data System (ADS)

    Obanawa, Hiroyuki; Hayakawa, Yuichi; Gomez, Christopher

    2014-05-01

    In hardly accessible areas, the collection of 3D point clouds using TLS (Terrestrial Laser Scanner) can be very challenging, while an airborne equivalent would not give a correct account of subvertical features and concave geometries like caves. To solve this problem, the authors have experimented with an aerial-photography-based SfM (Structure from Motion) technique on a 'peninsular rock' surrounded on three sides by the sea on the Pacific coast of eastern Japan. The research was carried out using a UAS (Unmanned Aerial System) combining a commercial small UAV (Unmanned Aerial Vehicle) with a compact camera. The UAV is a DJI PHANTOM: it has four rotors (quadcopter), a weight of 1000 g, a payload of 400 g and a maximum flight time of 15 minutes. The camera is a GoPro 'HERO3 Black Edition': resolution 12 million pixels; weight 74 g; and 0.5 sec. interval shooting. The 3D model has been constructed by digital photogrammetry using a commercial SfM software package, Agisoft PhotoScan Professional®, which can generate sparse and dense point clouds, from which polygonal models and orthophotographs can be calculated. Using the 'flight log' and/or GCPs (Ground Control Points), the software can generate a digital surface model. As a result, high-resolution aerial orthophotographs and a 3D model were obtained. The results have shown that it was possible to survey the sea cliff and the wave-cut bench, which are unobservable from the land side. In detail, we could observe the complexity of the sea cliff, which is nearly vertical as a whole while slightly overhanging its thinner base. The wave-cut bench is nearly flat and develops extensively at the base of the cliff. Although there is some evidence of small rockfalls at the upper part of the cliff, there is no evidence of very recent activity, because no fallen rock lies on the wave-cut bench. This system has several merits: firstly, lower cost than existing measuring methods such as manned-flight survey and aerial laser

  13. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera

    PubMed Central

    Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio

    2016-01-01

    The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up to be strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera. PMID:27089344

  14. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera.

    PubMed

    Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio

    2016-01-01

    The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up to be strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera. PMID:27089344

  15. Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.
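    The target registration error (TRE) used for evaluation above is the residual distance between homologous fiducials after applying the estimated registration. A minimal sketch, with an invented rigid transform and fiducial coordinates standing in for an actual registration result:

```python
import numpy as np

def target_registration_error(fixed_fids, moving_fids, transform):
    """Mean Euclidean distance between transformed moving fiducials
    and their homologous fixed fiducials (both Nx3 arrays, in mm)."""
    mapped = np.array([transform(p) for p in moving_fids])
    return np.linalg.norm(mapped - fixed_fids, axis=1).mean()

# Illustrative rigid transform: small rotation about z plus a translation.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, -0.5, 0.2])

rng = np.random.default_rng(1)
fixed = rng.random((6, 3)) * 50        # fiducials in the baseline 3D TRUS frame
moving = (fixed - t) @ R               # same fiducials in the intra-procedural frame

# A perfect registration recovers R, t, so the TRE is ~0 here.
tre = target_registration_error(fixed, moving, lambda p: R @ p + t)
```

    With a real registration algorithm the transform is estimated from image data, and the TRE over manually identified micro-calcifications measures how well it compensates the motion.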

  16. Model-based lasso catheter tracking in monoplane fluoroscopy for 3D breathing motion compensation during EP procedures

    NASA Astrophysics Data System (ADS)

    Liao, Rui

    2010-02-01

    Radio-frequency catheter ablation (RFCA) of the pulmonary veins (PVs) attached to the left atrium (LA) is usually carried out under fluoroscopy guidance. Overlay of detailed anatomical structures from 3-D CT and/or MR volumes onto the fluoroscopy helps visualization and navigation in electrophysiology (EP) procedures. Unfortunately, respiratory motion may impair the utility of a static overlay of the volume with fluoroscopy for catheter navigation. In this paper, we propose a B-spline based method for tracking the circumferential catheter (lasso catheter) in monoplane fluoroscopy. The tracked motion can be used for the estimation of the 3-D trajectory of breathing motion and for subsequent motion compensation. A lasso catheter is typically used during EP procedures and is pushed against the ostia of the PVs to be ablated. Hence this method does not require additional instruments, and achieves motion estimation right at the site of ablation. The performance of the proposed tracking algorithm was evaluated on 340 monoplane frames with an average error of 0.68 +/- 0.36 mm. Our contributions in this work are twofold. First and foremost, we show how to design an effective, practical, and workflow-friendly 3-D motion compensation scheme for EP procedures in a monoplane setup. In addition, we develop an efficient and accurate method for model-based tracking of the circumferential lasso catheter in low-dose EP fluoroscopy.

  17. Motion of the Ca2+-pump captured.

    PubMed

    Yokokawa, Masatoshi; Takeyasu, Kunio

    2011-09-01

    Studies of ion pumps, such as ATP synthase and Ca(2+)-ATPase, have a long history. The crystal structures of several kinds of ion pump have been resolved, and provide static pictures of the mechanisms of ion transport. In this study, using fast-scanning atomic force microscopy, we have visualized conformational changes in the sarcoplasmic reticulum Ca(2+)-ATPase (SERCA) in real time at the single-molecule level. The analyses of individual SERCA molecules in the presence of both ATP and free Ca(2+) revealed up-down structural changes corresponding to the Albers-Post scheme. This fluctuation was strongly affected by the ATP and Ca(2+) concentrations, and was prevented by an inhibitor, thapsigargin. Interestingly, at physiological ATP concentrations, the up-down motion disappeared completely. These results indicate that SERCA does not transit through the shortest structure, and has a catalytic pathway different from the ordinary Albers-Post scheme under physiological conditions. PMID:21707923

  18. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1) the tumor localization error and the intensity differences in ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2) the tumor localization error and intensity differences were 2.85 and 0.15 respectively using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
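    The motion-model construction described above (deformable registration per phase, then PCA on the displacement vector fields) can be sketched as below. The toy 1D displacement fields and mode count are illustrative assumptions; the paper's iterative optimisation of the PCA coefficients against cone-beam projections is not reproduced.

```python
import numpy as np

def build_pca_motion_model(dvfs, n_modes=2):
    """dvfs: (n_phases, n_voxels*3) displacement vector fields from
    deformable registration of each 4DCBCT phase to a reference phase.
    Returns the mean field, the principal motion modes and per-phase weights."""
    mean = dvfs.mean(axis=0)
    centred = dvfs - mean
    # SVD of the centred data gives the principal motion modes.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    modes = vt[:n_modes]                 # (n_modes, n_voxels*3)
    coeffs = centred @ modes.T           # per-phase mode weights
    return mean, modes, coeffs

def reconstruct_dvf(mean, modes, w):
    """Deformation field for an arbitrary motion state from mode weights w."""
    return mean + w @ modes

# Toy example: 10 breathing phases of a rank-2 1D motion field.
phases = np.linspace(0, 2 * np.pi, 10, endpoint=False)
dvfs = (np.outer(np.sin(phases), np.ones(300)) +
        np.outer(np.cos(phases), np.linspace(0, 1, 300)))
mean, modes, coeffs = build_pca_motion_model(dvfs, n_modes=2)
recon = reconstruct_dvf(mean, modes, coeffs[0])
```

    In the paper's setting, the weights w are not read off from a known phase but optimised so that projections of the deformed reference image match the measured kV cone-beam projections.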

  19. Quantification of Ground Motion Reductions by Fault Zone Plasticity with 3D Spontaneous Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Roten, D.; Olsen, K. B.; Cui, Y.; Day, S. M.

    2015-12-01

    We explore the effects of fault zone nonlinearity on peak ground velocities (PGVs) by simulating a suite of surface rupturing earthquakes in a visco-plastic medium. Our simulations, performed with the AWP-ODC 3D finite difference code, cover magnitudes from 6.5 to 8.0, with several realizations of the stochastic stress drop for a given magnitude. We test three different models of rock strength, with friction angles and cohesions based on criteria which are frequently applied to fractured rock masses in civil engineering and mining. We use a minimum shear-wave velocity of 500 m/s and a maximum frequency of 1 Hz. In rupture scenarios with average stress drop (~3.5 MPa), plastic yielding reduces near-fault PGVs by 15 to 30% in pre-fractured, low-strength rock, but by less than 1% in massive, high-quality rock. These reductions are almost insensitive to the scenario earthquake magnitude. In the case of high stress drop (~7 MPa), however, plasticity reduces near-fault PGVs by 38 to 45% in rocks of low strength and by 5 to 15% in rocks of high strength. Because plasticity reduces slip rates and static slip near the surface, these effects can partially be captured by defining a shallow velocity-strengthening layer. We also perform a dynamic nonlinear simulation of a high stress drop M 7.8 earthquake rupturing the southern San Andreas fault along 250 km from Indio to Lake Hughes. With respect to the viscoelastic solution, nonlinearity in the fault damage zone and in near-surface deposits would reduce long-period (> 1 s) peak ground velocities in the Los Angeles basin by 15-50%, depending on the strength of crustal rocks and shallow sediments. These simulation results suggest that nonlinear effects may be relevant even at long periods, especially for earthquakes with high stress drop.

  20. Laas Geel (somaliland): 5000 Year-Old Paintings Captured in 3D

    NASA Astrophysics Data System (ADS)

    Grenier, L.; Antoniotti, P.; Hamon, G.; Happe, D.

    2013-07-01

    Discovered in 2002 by a French archaeology team led by Prof. X. Gutherz, Laas Geel (Somaliland) is probably one of the most remarkable archaeological sites in the Horn of Africa. Located in an isolated arid region, it is made of natural rocky shelters on which hundreds of colored paintings still remain in a particularly good state of conservation. The first studies carried out in the last decade suggest that they are 5000 years old. After several study and exploration expeditions, a 3D digitizing campaign was carried out by Art Graphique et Patrimoine, under the direction of X. Gutherz, with the support of the cultural service of the French Embassy in Djibouti. The project focused on three main goals: producing high-accuracy 3D documentation for scientific needs, archiving the 3D digital print recorded on site for the conservation and safeguarding of this heritage, and finally disseminating the results through various media to reveal the site to the public, insisting on its vulnerability.

  1. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and open source project OpenSfM, to assist in this process, however, it is well-known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAV) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.

  2. A 3D MR-acquisition scheme for nonrigid bulk motion correction in simultaneous PET-MR

    SciTech Connect

    Kolbitsch, Christoph Prieto, Claudia; Schaeffter, Tobias; Tsoumpas, Charalampos

    2014-08-15

    Purpose: Positron emission tomography (PET) is a highly sensitive medical imaging technique commonly used to detect and assess tumor lesions. Magnetic resonance imaging (MRI) provides high resolution anatomical images with different contrasts and a range of additional information important for cancer diagnosis. Recently, simultaneous PET-MR systems have been released with the promise to provide complementary information from both modalities in a single examination. Due to long scan times, nonrigid bulk motion of the subject during data acquisition, i.e., changes of the patient's position on the scanner table leading to nonrigid changes of the patient's anatomy, can impair image quality and tracer uptake quantification. A 3D MR-acquisition scheme is proposed to detect and correct for nonrigid bulk motion in simultaneously acquired PET-MR data. Methods: A respiratory navigated three dimensional (3D) MR-acquisition with Radial Phase Encoding (RPE) is used to obtain T1- and T2-weighted data with an isotropic resolution of 1.5 mm. Healthy volunteers were asked to move the abdomen two to three times during data acquisition, resulting in 19 movements overall at arbitrary time points. The acquisition scheme is used to retrospectively reconstruct dynamic 3D MR images with different temporal resolutions. Nonrigid bulk motion is detected and corrected in this image data. A simultaneous PET acquisition is simulated and the effect of motion correction is assessed on image quality and standardized uptake values (SUV) for lesions with different diameters. Results: Six respiratory gated 3D data sets with T1- and T2-weighted contrast have been obtained in healthy volunteers. All bulk motion shifts have successfully been detected and motion fields describing the transformation between the different motion states could be obtained with an accuracy of 1.71 ± 0.29 mm.
The PET simulation showed errors of up to 67% in measured SUV due to bulk motion which could be reduced to less than

  3. Transitional modes of motion and capture regions of vibroshock systems

    NASA Technical Reports Server (NTRS)

    Ragulskene, V. L.

    1973-01-01

    A numerical analysis of the transitional modes of motion for a vibroshock system was conducted, with emphasis on the capture regions of the system. The three initial parameters for a nonautonomous vibroshock system with one degree of freedom are identified as: (1) coordinate, (2) velocity, and (3) time. Mathematical models are developed to show the relationship of the parameters. Graphs are included to show the nature of the capture regions and to portray the trajectory of the mass over time, obtained by solving the differential equations for increasing and decreasing time.
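    Ragulskene's specific equations are not given in this record, so the following is only an illustrative sketch: a forced one-degree-of-freedom oscillator with a rigid stop and a restitution coefficient, integrated from a given initial coordinate, velocity and time (the three parameters named above). Sweeping those initial values and checking whether the trajectory settles into a steady impacting regime is one numerical way to map capture regions; all parameter values here are assumptions.

```python
import math

def simulate_impact_oscillator(x0, v0, t0, gap=1.0, e=0.8, omega=1.2,
                               f0=1.0, zeta=0.05, dt=1e-3, t_end=200.0):
    """Forced oscillator x'' + 2*zeta*x' + x = f0*cos(omega*t) with a
    rigid stop at x = gap; on impact the velocity reverses with
    restitution coefficient e. Returns the list of impact times."""
    x, v, t = x0, v0, t0
    impacts = []
    while t < t_end:
        a = f0 * math.cos(omega * t) - 2 * zeta * v - x
        v += a * dt            # semi-implicit Euler step
        x += v * dt
        t += dt
        if x >= gap and v > 0:  # impact with the stop
            x = gap
            v = -e * v
            impacts.append(t)
    return impacts

# With these parameters the unconstrained steady amplitude (~2.2) exceeds
# the gap (1.0), so the motion is captured into a repeated-impact regime.
imps = simulate_impact_oscillator(0.0, 0.0, 0.0)
```

    Classifying each (x0, v0, t0) grid point by the long-time impact pattern yields a picture analogous to the capture-region graphs described in the abstract.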

  4. Verification and validation of ShipMo3D ship motion predictions in the time and frequency domains

    NASA Astrophysics Data System (ADS)

    McTaggart, Kevin A.

    2011-03-01

    This paper compares frequency domain and time domain predictions from the ShipMo3D ship motion library with observed motions from model tests and sea trials. ShipMo3D evaluates hull radiation and diffraction forces using the frequency domain Green function for zero forward speed, which is a suitable approach for ships travelling at moderate speed (e.g., Froude numbers up to 0.4). Numerical predictions give generally good agreement with experiments. Frequency domain and linear time domain predictions are almost identical. Evaluation of nonlinear buoyancy and incident wave forces using the instantaneous wetted hull surface gives no improvement in numerical predictions. Consistent prediction of roll motions remains a challenge for seakeeping codes due to the associated viscous effects.

  5. Coordination between Understanding Historic Buildings and BIM Modelling: A 3D-Output Oriented and typological Data Capture Method

    NASA Astrophysics Data System (ADS)

    Li, K.; Li, S. J.; Liu, Y.; Wang, W.; Wu, C.

    2015-08-01

    At present, in the shift from the old 2D-output-oriented survey to a new 3D-output-oriented survey based on BIM technology, the corresponding working methods and workflows for data capture, processing, representation, etc. have to change. Based on case studies of two buildings in the Summer Palace of Beijing and of Jiayuguan Pass at the west end of the Great Wall (both World Heritage sites), this paper puts forward a "structure-and-type method" that draws on the typological method used in archaeology, the Revit family system, and the tectonic logic of buildings to achieve good coordination between the understanding of historic buildings and BIM modelling.

  6. Multi-flexible-body dynamics capturing motion-induced stiffness

    NASA Technical Reports Server (NTRS)

    Banerjee, Arun K.; Lemak, Mark E.; Dickens, John M.

    1989-01-01

    A multi-flexible-body dynamics formulation is presented, incorporating a recently developed theory for capturing motion-induced stiffness for an arbitrary structure undergoing large rotation and translation accompanied by small vibrations. In essence, the method consists of correcting prematurely linearized dynamical equations for an arbitrary flexible body with generalized active forces due to geometric stiffness corresponding to a system of twelve inertia forces and nine inertia couples distributed over the body. Equations of motion are derived by means of Kane's method. A useful feature of the formulation is its treatment of prescribed motions and interaction forces. Results of simulations of the motions of three flexible spacecraft, involving stiffening during spin-up motion, dynamic buckling, and a repositioning maneuver, demonstrate the validity and generality of the theory.

  7. Dynamics and cortical distribution of neural responses to 2D and 3D motion in human

    PubMed Central

    McKee, Suzanne P.; Norcia, Anthony M.

    2013-01-01

    The perception of motion-in-depth is important for avoiding collisions and for the control of vergence eye movements and other motor actions. Previous psychophysical studies have suggested that sensitivity to motion-in-depth has a lower temporal processing limit than the perception of lateral motion. The present study used functional MRI-informed EEG source imaging to study the spatiotemporal properties of the responses to lateral motion and motion-in-depth in human visual cortex. Lateral motion and motion-in-depth displays comprised stimuli whose only difference was interocular phase: monocular oscillatory motion was either in phase in the two eyes (lateral motion) or in antiphase (motion-in-depth). Spectral analysis was used to break the steady-state visually evoked potential responses down into even and odd harmonic components within five functionally defined regions of interest: V1, V4, lateral occipital complex, V3A, and hMT+. We also characterized the responses within two anatomically defined regions: the inferior and superior parietal cortex. Even harmonic components dominated the evoked responses and were a factor of approximately two larger for lateral motion than for motion-in-depth. These responses were slower for motion-in-depth and were largely independent of absolute disparity. In each of our regions of interest, responses at odd harmonics were relatively small, but were larger for motion-in-depth than lateral motion, especially in parietal cortex, and depended on absolute disparity. Taken together, our results suggest a plausible neural basis for reduced psychophysical sensitivity to rapid motion-in-depth. PMID:24198326
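    The decomposition of a steady-state response into even and odd harmonics of the stimulation frequency can be sketched with a plain FFT; the sampling rate, stimulus frequency and toy response below are assumptions, not values from the study.

```python
import numpy as np

def harmonic_amplitudes(signal, fs, f_stim, n_harmonics=4):
    """Amplitude of the response at each harmonic of the stimulus
    frequency, split into odd (1F, 3F, ...) and even (2F, 4F, ...)."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) * 2 / n   # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    amps = {}
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f_stim))
        amps[k] = spec[idx]
    odd = {k: a for k, a in amps.items() if k % 2 == 1}
    even = {k: a for k, a in amps.items() if k % 2 == 0}
    return odd, even

# Toy steady-state response: strong 2F (even) plus weak 1F (odd) component,
# mimicking an evoked response dominated by even harmonics.
fs, f_stim, dur = 500, 2.0, 10.0
t = np.arange(0, dur, 1 / fs)
resp = 0.3 * np.sin(2 * np.pi * f_stim * t) + 1.0 * np.sin(2 * np.pi * 2 * f_stim * t)
odd, even = harmonic_amplitudes(resp, fs, f_stim)
```

    In practice the recording is epoched to an integer number of stimulus cycles so that the harmonics fall exactly on FFT bins, as in the synthetic example here.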

  8. A Virtual Reality Dance Training System Using Motion Capture Technology

    ERIC Educational Resources Information Center

    Chan, J. C. P.; Leung, H.; Tang, J. K. T.; Komura, T.

    2011-01-01

    In this paper, a new dance training system based on the motion capture and virtual reality (VR) technologies is proposed. Our system is inspired by the traditional way to learn new movements-imitating the teacher's movements and listening to the teacher's feedback. A prototype of our proposed system is implemented, in which a student can imitate…

  9. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson’s Disease

    PubMed Central

    Piro, Neltje E.; Piro, Lennart K.; Kassubek, Jan; Blechschmidt-Trapp, Ronald A.

    2016-01-01

    Remote monitoring of Parkinson's Disease (PD) patients with inertial sensors is a relevant method for better assessment of symptoms. We present a new approach for symptom quantification based on motion data: automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of him. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recordings of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated on the UPDRS. The mean agreement between the video-based ratings and (b) the automatically classified UPDRS is 0.48, and with (c) the 3D avatar it is 0.47. The 3D avatar is similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400

  10. Health Problems Discovery from Motion-Capture Data of Elderly

    NASA Astrophysics Data System (ADS)

    Pogorelc, B.; Gams, M.

    Rapid aging of the population in developed countries could exceed society's capacity for taking care of the elderly. To help address this problem, we propose a system for automatic discovery of health problems from motion-capture data of the gait of elderly people. The gait of the user is captured with a motion capture system consisting of tags attached to the body and sensors situated in the apartment. The position of the tags is acquired by the sensors, and the resulting time series of position coordinates are analyzed with machine learning algorithms in order to identify the specific health problem. We propose novel features for training a machine learning classifier that classifies the user's gait as: i) normal, ii) with hemiplegia, iii) with Parkinson's disease, iv) with pain in the back, and v) with pain in the leg. Results show that naive Bayes needs more tags and less noise to reach a classification accuracy of 98% than support vector machines need to reach 99%.
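    The classification step can be sketched with a minimal Gaussian naive Bayes written from scratch; the two synthetic "gait features" and five class clusters below are invented stand-ins for the paper's features and its five gait classes.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: one mean/variance per class per feature."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(x|c) + log p(c), assuming feature independence per class
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None] +
                     (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

# Synthetic 2D gait features, one well-separated cluster per gait class
# (0 = normal, 1 = hemiplegia, 2 = Parkinson's, 3 = back pain, 4 = leg pain).
rng = np.random.default_rng(2)
centers = {0: (0, 0), 1: (4, 0), 2: (0, 4), 3: (4, 4), 4: (2, 8)}
X = np.vstack([rng.normal(c, 0.5, (40, 2)) for c in centers.values()])
y = np.repeat(list(centers.keys()), 40)
acc = (GaussianNB().fit(X, y).predict(X) == y).mean()
```

    A support vector machine would replace the per-class Gaussians with a maximum-margin boundary; the abstract's point is that the two classifiers trade accuracy against the number of tags and noise tolerance differently.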

  11. Development of motion capture system using alternating magnetic field

    NASA Astrophysics Data System (ADS)

    Kumagai, Masaaki; Akamatsu, Kazuyoshi

    2005-12-01

    Motion capture systems are widely used in virtual reality, motion acquisition for medical research, humanoid robots, video games, etc. Several types have been developed and applied with their respective advantages and restrictions in mind. This paper proposes another type of motion capture system, one that uses an alternating magnetic field. The system uses a field-exciting coil that covers the measuring area and a pickup coil attached to the target. Six alternating fields are generated simultaneously in the measuring area, and signals are induced in the pickup coil according to its attitude and position. These signals are processed to extract the amplitudes of the exciting components, and the state of the pickup coil is calculated from those components. The system can detect the attitude and displacement of the target with high resolution and fast response. The principles of detection and brief experimental results are described.
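
    The amplitude-extraction step, recovering each exciting component from the pickup-coil signal, can be sketched by quadrature demodulation; the sampling rate, tone frequencies and amplitudes below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def excitation_amplitude(signal, t, freq):
    """Recover the amplitude of one exciting-frequency component by
    quadrature demodulation (an integer number of cycles is assumed)."""
    ref = np.exp(-2j * np.pi * freq * t)
    return 2.0 * abs(np.mean(signal * ref))

fs = 10_000.0
t = np.arange(2000) / fs                     # 0.2 s window
# pickup-coil signal: two of the six exciting tones (hypothetical)
s = 0.7 * np.sin(2 * np.pi * 500 * t) + 0.3 * np.sin(2 * np.pi * 800 * t)
a500 = excitation_amplitude(s, t, 500.0)
a800 = excitation_amplitude(s, t, 800.0)
```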

  12. Scalable sensing electronics towards a motion capture suit

    NASA Astrophysics Data System (ADS)

    Xu, Daniel; Gisby, Todd A.; Xie, Shane; Anderson, Iain A.

    2013-04-01

    Being able to accurately record body motion allows complex movements to be characterised and studied. This is especially important in the film and sports coaching industries. The human body, however, has over 600 skeletal muscles, giving rise to many degrees of freedom. In order to accurately capture motion such as hand gestures or elbow and knee flexion and extension, vast numbers of sensors are required. Dielectric elastomer (DE) sensors are an emerging class of electroactive polymers (EAPs) that are soft, lightweight and compliant. These characteristics are ideal for a motion capture suit. One challenge is to design sensing electronics that can measure multiple sensors simultaneously. This paper describes a scalable capacitive sensing device that can measure up to 8 different sensors with an update rate of 20 Hz.
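
    The sensing principle rests on the parallel-plate capacitance of the elastomer, which changes as the membrane is stretched. A minimal sketch, assuming an incompressible membrane and hypothetical dimensions not taken from the paper:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(eps_r, area, thickness):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area / thickness

def stretched_capacitance(c0, lam_x, lam_y):
    """Incompressible DE membrane: area grows by lam_x*lam_y, thickness
    shrinks by the same factor, so C = C0 * (lam_x * lam_y)**2."""
    return c0 * (lam_x * lam_y) ** 2

c0 = plate_capacitance(3.0, 1e-4, 100e-6)   # hypothetical 1 cm^2, 100 um DE
c_stretched = stretched_capacitance(c0, 1.2, 1.0)
```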

  13. 3D papillary image capturing by the stereo fundus camera system for clinical diagnosis on retina and optic nerve

    NASA Astrophysics Data System (ADS)

    Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2014-03-01

    Glaucoma is the second leading cause of blindness in the world, and this number tends to increase with the rising life expectancy of the population. Glaucoma refers to eye conditions that damage the optic nerve. This nerve carries visual information from the eye to the brain, so damage to it compromises the visual quality of the patient. In most cases the damage to the optic nerve is irreversible and is caused by increased intraocular pressure. One of the main challenges is diagnosing the disease early, because no symptoms are present in the initial stage; when it is detected, it is already at an advanced stage. Currently the evaluation of the optic disc is made with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a specific fundus camera without fluorescein angiography or a red-free system to accomplish 3D imaging of the optic disc region. The innovation is a new simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurements of the excavation and topography of the optic nerve, something traditional fundus cameras do not do. Dedicated hardware and software are being developed for this ophthalmic instrument to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field-of-view) in mydriatic and nonmydriatic modes.

  14. Nonrigid motion correction in 3D using autofocusing with localized linear translations.

    PubMed

    Cheng, Joseph Y; Alley, Marcus T; Cunningham, Charles H; Vasanawala, Shreyas S; Pauly, John M; Lustig, Michael

    2012-12-01

    MR scans are sensitive to motion effects due to the scan duration. To properly suppress artifacts from nonrigid body motion, complex models with elements such as translation, rotation, shear, and scaling have been incorporated into the reconstruction pipeline. However, these techniques are computationally intensive and difficult to implement for online reconstruction. On a sufficiently small spatial scale, the different types of motion can be well approximated as simple linear translations. This formulation allows for a practical autofocusing algorithm that locally minimizes a given motion metric--more specifically, the proposed localized gradient-entropy metric. To reduce the vast search space for an optimal solution, possible motion paths are limited to the motion measured from multichannel navigator data. The novel navigation strategy is based on the so-called "Butterfly" navigators, which are modifications of the spin-warp sequence that provide intrinsic translational motion information with negligible overhead. With a 32-channel abdominal coil, a sufficient number of motion measurements was found to approximate possible linear motion paths for every image voxel. The correction scheme was applied to free-breathing abdominal patient studies. In these scans, a reduction in artifacts from complex, nonrigid motion was observed. PMID:22307933
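
    The focusing metric can be illustrated with a simple gradient-entropy function: sharp patches concentrate gradient energy in few pixels (low entropy), while motion-corrupted ones spread it (high entropy). This is a plausible reading of the localized metric, not the paper's exact definition:

```python
import numpy as np

def gradient_entropy(img, eps=1e-12):
    """Entropy of the normalized gradient-magnitude distribution; sharper
    (less motion-corrupted) patches score lower."""
    gx, gy = np.gradient(img.astype(float))
    g = np.sqrt(gx ** 2 + gy ** 2).ravel()
    p = g / (g.sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))

sharp = np.zeros((32, 32))
sharp[16:, :] = 1.0                          # one crisp edge
blurry = sharp.copy()
for _ in range(8):                           # crude smoothing imitates blur
    blurry = (np.roll(blurry, 1, 0) + np.roll(blurry, -1, 0) + blurry) / 3
e_sharp, e_blur = gradient_entropy(sharp), gradient_entropy(blurry)
```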

  15. Miniature low-power inertial sensors: promising technology for implantable motion capture systems.

    PubMed

    Lambrecht, Joris M; Kirsch, Robert F

    2014-11-01

    Inertial and magnetic sensors are valuable for untethered, self-contained human movement analysis. Very recently, the complete integration of inertial sensors, magnetic sensors, and processing into single packages has resulted in miniature, low-power devices that could feasibly be employed in an implantable motion capture system. We developed a wearable sensor system based on a commercially available system-in-package inertial and magnetic sensor. We characterized the accuracy of the system in measuring 3-D orientation-with and without magnetometer-based heading compensation-relative to a research-grade optical motion capture system. The root mean square error was less than 4° in dynamic and static conditions about all axes. Using four sensors, recording from seven degrees of freedom of the upper limb (shoulder, elbow, wrist) was demonstrated in one subject during reaching motions. Very high correlation and low error were found across all joints relative to the optical motion capture system. Findings were similar to previous publications using inertial sensors, but at a fraction of the power consumption and size of the sensors. Such ultra-small, low-power sensors provide exciting new avenues for movement monitoring for various movement disorders, movement-based command interfaces for assistive devices, and implementation of kinematic feedback systems for assistive interventions like functional electrical stimulation. PMID:24846651

  16. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    NASA Astrophysics Data System (ADS)

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework is given for simultaneously estimating the motion and structure parameters of a 3D object using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information. By decoupling the motion and structure information and employing rigid-body constraints, we develop the kinematic and measurement equations of the problem. Since the kinematic system is unobservable using only a single scan of HRR and GMTI measurements, we designed an architecture that runs the motion and structure filters in parallel using multi-scan measurements. Moreover, to improve the estimation accuracy in environments with large noise and/or many false alarms, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results show that the averaged root mean square errors for both motion and structure state vectors are significantly reduced by using the template information.

  17. Performance of ultrasound based measurement of 3D displacement using a curvilinear probe for organ motion tracking

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Evans, Phillip M.; Symonds-Tayler, J. Richard N.

    2007-09-01

    Three-dimensional (3D) soft tissue tracking is of interest for monitoring organ motion during therapy. Our goal is to assess the tracking performance of a curvilinear 3D ultrasound probe in terms of the accuracy and precision of measured displacements. The first aim was to examine the depth dependence of the tracking performance. This is of interest because the spatial resolution varies with distance from the elevational focus and because the curvilinear geometry of the transducer causes the spatial sampling frequency to decrease with depth. Our second aim was to assess tracking performance as a function of the spatial sampling setting (low, medium or high sampling). These settings are incorporated onto 3D ultrasound machines to allow the user to control the trade-off between spatial sampling and temporal resolution. Volume images of a speckle-producing phantom were acquired before and after the probe had been moved by a known displacement (1, 2 or 8 mm). This allowed us to assess the optimum performance of the tracking algorithm, in the absence of motion. 3D speckle tracking was performed using 3D cross-correlation and sub-voxel displacements were estimated. The tracking performance was found to be best for axial displacements and poorest for elevational displacements. In general, the performance decreased with depth, although the nature of the depth dependence was complex. Under certain conditions, the tracking performance was sufficient to be useful for monitoring organ motion. For example, at the highest sampling setting, for a 2 mm displacement, good accuracy and precision (an error and standard deviation of <0.4 mm) were observed at all depths and for all directions of displacement. The trade-off between spatial sampling, temporal resolution and size of the field of view (FOV) is discussed.
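
    The core of 3D speckle tracking, locating the cross-correlation peak between two volumes, can be sketched with an FFT-based correlation on synthetic speckle; this integer-voxel search omits the sub-voxel estimation step described in the paper:

```python
import numpy as np

def displacement_3d(vol_a, vol_b):
    """Integer-voxel displacement of vol_a relative to vol_b via FFT
    cross-correlation (circular shifts assumed; no sub-voxel fit)."""
    f = np.fft.fftn(vol_a) * np.conj(np.fft.fftn(vol_b))
    corr = np.fft.ifftn(f).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks past the midpoint to negative shifts
    return tuple(p - s if p > s // 2 else p
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(2)
speckle = rng.normal(size=(24, 24, 24))          # synthetic speckle volume
moved = np.roll(speckle, shift=(3, -2, 1), axis=(0, 1, 2))
shift = displacement_3d(moved, speckle)
```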

  18. A stroboscopic structured illumination system used in dynamic 3D visualization of high-speed motion object

    NASA Astrophysics Data System (ADS)

    Su, Xianyu; Zhang, Qican; Li, Yong; Xiang, Liqun; Cao, Yiping; Chen, Wenjing

    2005-04-01

    A stroboscopic structured illumination system, which can be used to measure the 3D shape and deformation of high-speed moving objects, is proposed and verified by experiments. The system presented in this paper can automatically detect the position of a high-speed moving object and synchronously control the flash of an LED, projecting a structured optical field onto the surface of the moving object while triggering the imaging system to acquire an image of the deformed fringe pattern; it can also generate a signal, set via software, to synchronously control the LED and the imaging system. We experimented on a household electric fan, successfully acquiring a series of instantaneous, sharp and clear images of the rotating blades and reconstructing their 3D shapes at different speeds of revolution.

  19. Change of Re dependency of single bubble 3D motion by surface slip condition in surfactant solution

    NASA Astrophysics Data System (ADS)

    Tagawa, Yoshiyuki; Funakubo, Ami; Takagi, Shu; Matsumoto, Yoichiro

    2009-11-01

    The path instability of a single bubble rising in water is sensitive to surfactant. One key effect of surfactant is to decrease the bubble rising velocity (i.e. increase drag) and change the bubble slip condition from free-slip to no-slip; this phenomenon is described as the Marangoni effect. However, the effect of surfactant on path instability has not been fully investigated. In this research, we measured 3D bubble trajectories and velocities in dilute surfactant solutions to reveal the relation between the 3D motion mode and the slip condition. The experimental parameters are the type of surfactant, its concentration and the bubble size. Bubble motions categorized as straight, spiral or zigzag are plotted on a two-dimensional field of bubble Reynolds number Re and normalized drag coefficient CD^*, which is strongly related to the surface slip condition. Re ranges from 200 to 1000 and CD^* from 0 to 1. Our results show that when CD^* equals 0 or 1 (free-slip or no-slip condition, respectively), the bubble motion mode changes with Re. However, when CD^* is 0.5, the bubble motion is always spiral. This means that the Re dependency of bubble motion is strongly affected by the slip condition. We will discuss the mechanism in detail in our presentation.

  20. Robust 2D/3D registration for fast-flexion motion of the knee joint using hybrid optimization.

    PubMed

    Ohnishi, Takashi; Suzuki, Masahiko; Kobayashi, Tatsuya; Naomoto, Shinji; Sukegawa, Tomoyuki; Nawata, Atsushi; Haneishi, Hideaki

    2013-01-01

    Previously, we proposed a 2D/3D registration method that uses Powell's algorithm to obtain the 3D motion of a knee joint from 3D computed tomography and bi-plane fluoroscopic images. The 2D/3D registration is performed consecutively and automatically for each frame of the fluoroscopic images. For each frame except the first, the method starts from the optimum parameters of the previous frame and searches for the next set of optimum parameters using Powell's algorithm. However, if the flexion motion of the knee joint is fast, Powell's algorithm is likely to produce a mismatch because the initial parameters are far from the correct ones. In this study, we applied a hybrid optimization algorithm (HPS) combining Powell's algorithm with the Nelder-Mead simplex (NM-simplex) algorithm to overcome this problem. The performance of the HPS was compared with that of Powell's algorithm and the NM-simplex algorithm separately, the Quasi-Newton algorithm, and a hybrid of the Quasi-Newton and NM-simplex algorithms on five patient data sets, in terms of root-mean-square error (RMSE), target registration error (TRE), success rate, and processing time. The RMSE, TRE, and success rate of the HPS were better than those of the other optimization algorithms, and its processing time was similar to that of Powell's algorithm alone. PMID:23138929
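
    The hybrid idea, a Powell pass followed by a Nelder-Mead simplex restart, can be sketched with SciPy's general-purpose optimizers; the hand-off details and the Rosenbrock stand-in cost are assumptions, since the paper's actual cost is an image-similarity measure:

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_powell_simplex(cost, x0):
    """Run Powell's method, then restart from its solution with
    Nelder-Mead, mirroring the HPS pairing (details guessed)."""
    r1 = minimize(cost, x0, method="Powell")
    r2 = minimize(cost, r1.x, method="Nelder-Mead")
    return r2.x, r2.fun

def rosenbrock(p):                      # stand-in for the registration cost
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

x_opt, f_opt = hybrid_powell_simplex(rosenbrock, np.array([-1.2, 1.0]))
```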

  1. Websim3d: A Web-based System for Generation, Storage and Dissemination of Earthquake Ground Motion Simulations.

    NASA Astrophysics Data System (ADS)

    Olsen, K. B.

    2003-12-01

    Synthetic time histories from large-scale 3D ground motion simulations generally constitute large 'data' sets which typically require hundreds of Mbytes or Gbytes of storage capacity. For the same reason, getting access to a researcher's simulation output, for example for an earthquake engineer to perform site analysis or a seismologist to perform seismic hazard analysis, can be a tedious procedure. To circumvent this problem we have developed a web-based "community model" (websim3D) for the generation, storage, and dissemination of ground motion simulation results. Websim3D allows user-friendly and fast access to view and download such simulation results for an earthquake-prone area. The user selects an earthquake scenario from a map of the region, which brings up a map of the area where simulation data are available. By clicking on an arbitrary site location, synthetic seismograms and/or soil parameters for the site can be displayed at fixed or variable scaling and/or downloaded. Websim3D relies on PHP scripts for the dynamic plots of synthetic seismograms and soil profiles. Although the system is not limited to a specific area, we illustrate the community model with simulation results from the Los Angeles basin, Wellington (New Zealand), and Mexico.

  2. 3D Finite-Difference Modeling of Strong Ground Motion in the Upper Rhine Graben - 1356 Basel Earthquake

    NASA Astrophysics Data System (ADS)

    Oprsal, I.; Faeh, D.; Giardini, D.

    2002-12-01

    The disastrous Basel earthquake of October 18, 1356 (I0=X, M ≈ 6.9) occurred in the Basel region (Upper Rhine Graben), which is seismically modest today. The lack of strong ground motion data can be effectively compensated for by numerical modeling. We applied a 3D finite-difference (FD) method to predict ground motions for use in microzonation and hazard assessment studies. The FD method is formulated for topography models on irregular rectangular grids. It is a 3D explicit FD formulation of the hyperbolic partial differential equation (PDE); the elastodynamic PDE is solved in the time domain. The isotropic, inhomogeneous Hookean medium contains discontinuities and a topographic free surface. The 3D elastic FD modeling is applied to a newly established P- and S-wave velocity structure model. This complex model contains the main interfaces and gradients inside some layers; it extends to the earth's surface and includes topography (Kind, Faeh and Giardini, 2002, A 3D Reference Model for the Area of Basel, in prep.). A first attempt was made with a double-couple point source and a relatively simple source function. Numerical tests are planned for several finite-extent source histories because the source features of the 1356 Basel earthquake have not yet been well determined. The presumed finite-extent source is adjacent to the free surface. The results are compared to the macroseismic information for the Basel area.
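
    The explicit time-domain FD scheme can be illustrated in one dimension; this toy scalar-wave stencil (with hypothetical grid and velocity values) shows the update structure and the CFL-limited time step, not the paper's 3D elastodynamic formulation:

```python
import numpy as np

def wave_1d(nx=300, nt=600, c=3000.0, dx=10.0, cfl=0.5):
    """Explicit second-order FD for the 1D scalar wave equation, a toy
    analogue of a 3D elastodynamic scheme."""
    dt = cfl * dx / c                       # CFL-stable time step
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    u[nx // 2] = 1.0                        # impulsive point source
    r2 = (c * dt / dx) ** 2
    for _ in range(nt):
        u_next = np.zeros(nx)               # fixed (Dirichlet) ends
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return u

u_final = wave_1d()
```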

  3. Inertial motion capture system for biomechanical analysis in pressure suits

    NASA Astrophysics Data System (ADS)

    Di Capua, Massimiliano

    A non-invasive system has been developed at the University of Maryland Space Systems Laboratory with the goal of providing a new capability for quantifying the motion of the human inside a space suit. Based on an array of six microprocessors and eighteen microelectromechanical (MEMS) inertial measurement units (IMUs), the Body Pose Measurement System (BPMS) allows the monitoring of the kinematics of the suit occupant in an unobtrusive, self-contained, lightweight and compact fashion, without requiring any external equipment such as that necessary with modern optical motion capture systems. BPMS measures and stores the accelerations, angular rates and magnetic fields acting upon each IMU, which are mounted on the head, the torso, and each segment of each limb. In order to convert the raw data into a more useful form, such as a set of body segment angles quantifying pose and motion, a series of geometrical models and a non-linear complementary filter were implemented. The first portion of this work focuses on assessing system performance, which was measured by comparing the BPMS filtered data against rigid-body angles measured through an external VICON optical motion capture system. This type of system is the industry standard and is used here for independent measurement of body pose angles. By comparing the two sets of data, performance metrics such as BPMS operational conditions, accuracy, and drift were evaluated and correlated against the VICON data. After the system and models were verified and their capabilities and limitations assessed, a series of pressure suit evaluations was conducted. Three different pressure suits were used to identify the relationship between usable range of motion and internal suit pressure. In addition to addressing range of motion, a series of exploration tasks was also performed, recorded, and analysed in order to identify different motion patterns and trajectories as suit pressure is increased and overall suit mobility is reduced.
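
    The complementary-filter idea, gyro integration for short-term accuracy corrected by an absolute angle reference for long-term stability, can be sketched in one dimension; the gain, bias and timing values are hypothetical, not those of BPMS:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update of a complementary filter: integrate the gyro for
    short-term accuracy, pull toward the accelerometer-derived angle to
    cancel drift. Illustrative only, not the BPMS non-linear filter."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# stationary segment at 30 deg with a 0.5 deg/s gyro bias (hypothetical)
angle, dt = 0.0, 0.01
for _ in range(5000):
    angle = complementary_filter(angle, 0.5, 30.0, dt)
```

    The filter converges near 30 degrees despite the gyro bias; the residual offset (alpha * bias * dt / (1 - alpha)) shows why the accelerometer correction term matters.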

  4. Reference equations of motion for automatic rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Henderson, David M.

    1992-01-01

    The analysis presented in this paper defines the reference coordinate frames, equations of motion, and control parameters necessary to model the relative motion and attitude of spacecraft in close proximity with another space system during the Automatic Rendezvous and Capture phase of an on-orbit operation. The relative docking port target position vector and the attitude control matrix are defined based upon an arbitrary spacecraft design. These translation and rotation control parameters could be used to drive the error signal input to the vehicle flight control system. Measurements for these control parameters would become the bases for an autopilot or feedback control system (FCS) design for a specific spacecraft.

  5. Nonlinear, nonlaminar - 3D computation of electron motion through the output cavity of a klystron.

    NASA Technical Reports Server (NTRS)

    Albers, L. U.; Kosmahl, H. G.

    1971-01-01

    The accurate computation of electron motion throughout the output cavity of a klystron amplifier is discussed. The assumptions on which the computation is based are defined, and the equations of motion are reviewed, along with the space charge fields derived from a Green's function potential of a solid cylinder. The integration process is then examined, with special attention to its most difficult and important aspect, namely the accurate treatment of the dynamic effect of space charge forces on the motion of individual cell rings of equal volume and charge. The correct treatment is demonstrated on four specific examples, and a few comments are given on the results obtained.

  6. Feasibility of Using Low-Cost Motion Capture for Automated Screening of Shoulder Motion Limitation after Breast Cancer Surgery

    PubMed Central

    Gritsenko, Valeriya; Dailey, Eric; Kyle, Nicholas; Taylor, Matt; Whittacre, Sean; Swisher, Anne K.

    2015-01-01

    Objective To determine if a low-cost, automated motion analysis system using Microsoft Kinect could accurately measure shoulder motion and detect motion impairments in women following breast cancer surgery. Design Descriptive study of motion measured via 2 methods. Setting Academic cancer center oncology clinic. Participants 20 women (mean age = 60 yrs) were assessed for active and passive shoulder motions during a routine post-operative clinic visit (mean = 18 days after surgery) following mastectomy (n = 4) or lumpectomy (n = 16) for breast cancer. Interventions Participants performed 3 repetitions of active and passive shoulder motions on the side of the breast surgery. Arm motion was recorded using motion capture by Kinect for Windows sensor and on video. Goniometric values were determined from video recordings, while motion capture data were transformed to joint angles using 2 methods (body angle and projection angle). Main Outcome Measure Correlation of motion capture with goniometry and detection of motion limitation. Results Active shoulder motion measured with low-cost motion capture agreed well with goniometry (r = 0.70–0.80), while passive shoulder motion measurements did not correlate well. Using motion capture, it was possible to reliably identify participants whose range of shoulder motion was reduced by 40% or more. Conclusions Low-cost, automated motion analysis may be acceptable to screen for moderate to severe motion impairments in active shoulder motion. Automatic detection of motion limitation may allow quick screening to be performed in an oncologist's office and trigger timely referrals for rehabilitation. PMID:26076031
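
    The "body angle" computation, measuring the arm segment against the trunk from tracked 3D joint positions, can be sketched as the angle between two vectors; the joint coordinates below are hypothetical, not Kinect output:

```python
import math

def angle_deg(v1, v2):
    """Angle between two 3D vectors, in degrees."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def shoulder_elevation(shoulder, elbow, hip):
    """'Body angle' sketch: arm segment (shoulder->elbow) against the
    trunk segment (shoulder->hip); joint names mimic Kinect skeleton
    joints but are assumptions here."""
    arm = [e - s for e, s in zip(elbow, shoulder)]
    trunk = [h - s for h, s in zip(hip, shoulder)]
    return angle_deg(arm, trunk)

# arm raised to horizontal: trunk points down, arm points sideways
elev = shoulder_elevation((0, 1.4, 0), (0.3, 1.4, 0), (0, 0.9, 0))
```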

  7. 3D simulation of interdendritic flow through a Al-18wt.%Cu structure captured with X-ray microtomography

    NASA Astrophysics Data System (ADS)

    Domitner, J.; Hölzl, C.; Kharicha, A.; Wu, M.; Ludwig, A.; Köhler, M.; Ratke, L.

    2012-01-01

    A central parameter to describe the formation of porosity and macrosegregation during casting processes is the permeability of the dendritic mushy zone. To determine this specific feature for a binary Al-18wt.%Cu alloy, flow simulations based on the Lattice Boltzmann (LB) method were performed. The LB method allows efficient solving of fluid flow problems dealing with complex shapes within an acceptable period of time. The 3D structure required as input for the simulations was captured with X-ray microtomography, which enables the generation of representative geometries for permeability investigations. Removing the eutectic phase from the measured dataset left a remaining network of solid primary dendrites. In the simulations, a pressure gradient was applied to force the liquid through the free interdendritic channels. The permeability of the structure was then calculated from the resulting flow velocity pattern using Darcy's law. To examine the influence of different boundary conditions on the results obtained, several simulations were conducted.
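
    The final step, recovering permeability from the computed flow field, follows Darcy's law for a pressure-driven flow; the numbers below are hypothetical and only illustrate the unit bookkeeping:

```python
def darcy_permeability(flux, viscosity, length, dp):
    """Darcy's law, K = q * mu * L / dp, with q the superficial
    (volume-averaged) flow velocity through the channels (m/s),
    mu the dynamic viscosity (Pa.s), L the sample length (m) and
    dp the applied pressure drop (Pa); K comes out in m^2."""
    return flux * viscosity * length / dp

# hypothetical values: 1 mm/s mean velocity, melt viscosity 1.3 mPa.s,
# 5 mm sample, 100 Pa applied pressure drop
K = darcy_permeability(1e-3, 1.3e-3, 5e-3, 100.0)
```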

  8. An eliminating method of motion-induced vertical parallax for time-division 3D display technology

    NASA Astrophysics Data System (ADS)

    Lin, Liyuan; Hou, Chunping

    2015-10-01

    In time-division 3D displays, the time difference between the left and right images makes a person perceive an alternating vertical parallax when an object moves vertically on a fixed depth plane. The perceived left and right images then no longer match, making viewers more prone to visual fatigue. This mismatch cannot be eliminated simply by precise synchronous control of the left and right images. Based on the principle of time-division 3D display technology and the characteristics of the human visual system, this paper establishes a model relating the true vertical motion velocity in reality to the vertical motion velocity on the screen, calculates the amount of vertical parallax caused by vertical motion, and then puts forward a motion compensation method to eliminate this parallax. Finally, subjective experiments are carried out to analyze how the time difference affects stereoscopic visual comfort by comparing the comfort scores of stereo image sequences before and after compensation with the proposed method. The theoretical analysis and experimental results show that the proposed method is reasonable and efficient.
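
    The size of the motion-induced vertical parallax follows directly from the field timing: a point moving vertically at v pixels per second separates the two eyes' images by v * dt, where dt is the left/right field lag. A minimal sketch, assuming a half-frame lag between fields (an assumption, not necessarily the paper's timing model):

```python
def vertical_parallax_px(v_screen_px_s, frame_rate_hz):
    """Vertical parallax (pixels) when the later eye's field lags the
    earlier one by half a frame period, as in page-flip time-division 3D."""
    dt = 0.5 / frame_rate_hz
    return v_screen_px_s * dt

def compensated_offset_px(v_screen_px_s, frame_rate_hz):
    """Shift applied to the later field to cancel the motion-induced
    vertical parallax (the compensation idea, simplified)."""
    return -vertical_parallax_px(v_screen_px_s, frame_rate_hz)

p = vertical_parallax_px(240.0, 120.0)      # 240 px/s motion at 120 Hz
```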

  9. Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

    PubMed

    Rauschecker, Andreas M; Solomon, Samuel G; Glennerster, Andrew

    2006-01-01

    In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues. PMID:17209749

  10. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  11. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  12. Lower body reaction testing using ultrasonic motion capture.

    PubMed

    Taylor, K; Lennon, O; Blake, C; Fitzgerald, D; Fox, D; Bleakley, C J

    2014-01-01

    This paper presents a lower body reaction test that utilizes a new portable ultrasound-based motion capture system (MobiFit) combined with a synchronized visual stimulus. This novel system was first tested for criterion validity and agreement against a gold-standard laboratory-based optical motion capture system (CODA). It was subsequently tested in the field during Gaelic football (GAA) team gym sessions with 35 subjects to demonstrate its utility and versatility. The lower body reaction test itself is novel in that it can be applied to a gross motor task. During testing, participants had sensors attached to their lower limbs and trunk. The speed of movement of each sensor was monitored at 500 Hz using the MobiFit motion capture system, and reaction time was measured as the elapsed time from the appearance of a green indicator on the screen to a sensor reaching a set threshold velocity as the participant raised the corresponding leg. Pearson's correlation coefficient tested criterion validity against the CODA system, and intraclass correlation coefficients and Bland-Altman plots assessed the agreement of velocity measures obtained from the MobiFit and CODA systems. Results indicate that the MobiFit system is an accurate device for assessing lower body reaction time and has advantages over standard laboratory measures in terms of portability and ease of set-up. PMID:25570017
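
    The reaction-time measurement described above, the time from stimulus onset until a sensor's speed first crosses the threshold velocity, can be reconstructed as a simple scan over the sampled speeds; the sample data and threshold below are invented for illustration:

```python
def reaction_time(times, speeds, stimulus_time, threshold):
    """First time at or after the stimulus at which the sensor speed
    reaches the threshold velocity, minus the stimulus time; None if the
    threshold is never reached. A reconstruction from the text, not the
    MobiFit implementation."""
    for t, v in zip(times, speeds):
        if t >= stimulus_time and v >= threshold:
            return t - stimulus_time
    return None

# 500 Hz samples: leg begins to move 0.3 s after the green indicator
ts = [i / 500.0 for i in range(500)]
vs = [0.0 if t < 0.42 else 2.0 for t in ts]
rt = reaction_time(ts, vs, stimulus_time=0.12, threshold=0.5)
```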

  13. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach

    PubMed Central

    de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José

    2015-01-01

    This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above-water and 88 underwater control points—with 8 common points at the water surface—and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1,600,000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers, respectively. Root Mean Square (RMS) error of control and validation points was lower with homography than without it for both surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, the RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04), and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction include homography to improve the accuracy of swimming movement analysis. PMID:26175796
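The direct linear transformation (DLT) step at the core of this kind of reconstruction can be sketched as a textbook two-view linear triangulation; this is an illustrative sketch, not the authors' six-camera, homography-rectified implementation:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]             # dehomogenise

# Two toy cameras: identity view and a view translated 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate_dlt(P1, P2, (0.1, 0.2), (0.0, 0.2))
```

With more cameras, the same construction simply stacks two rows per view into `A` before the SVD, which is how a multi-camera calibration volume such as the one above is exploited.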

  14. Atmospheric Motion Vectors from INSAT-3D: Initial quality assessment and its impact on track forecast of cyclonic storm NANAUK

    NASA Astrophysics Data System (ADS)

    Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham

    2016-03-01

    The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder and is placed at 82°E over the Indian Ocean region. Advances in retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for better understanding of tropical atmospheric processes over this region. The retrieval technique and accuracy of one such parameter, Atmospheric Motion Vectors (AMVs), have improved significantly with the availability of higher spatial resolution data and a wider choice of spectral channels in the INSAT-3D imager. The present work is mainly focused on providing a brief description of INSAT-3D data and the AMV derivation process using these data. It also discusses an initial quality assessment of INSAT-3D AMVs over a period of six months, from 01 February 2014 to 31 July 2014, against other independent observations: i) Meteosat-7 AMVs available over this region, ii) in-situ radiosonde wind measurements, iii) cloud-tracked winds from the Multi-angle Imaging Spectro-Radiometer (MISR), and iv) numerical model analysis. It is observed from this study that the quality of the newly derived INSAT-3D AMVs is comparable with that of the two existing versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, INSAT-3D AMVs were assimilated in the Weather Research and Forecasting (WRF) model, and it is found that the assimilation of the newly derived AMVs helped reduce the track forecast errors of the recent cyclonic storm NANAUK over the Arabian Sea. Although the present study is limited to one case, it provides some guidance to operational agencies for implementing this new AMV dataset in future Numerical Weather Prediction (NWP) applications over the south Asia region.

  15. Exercise Sensing and Pose Recovery Inference Tool (ESPRIT) - A Compact Stereo-based Motion Capture Solution For Exercise Monitoring

    NASA Technical Reports Server (NTRS)

    Lee, Mun Wai

    2015-01-01

    Crew exercise is important during long-duration space flight not only for maintaining health and fitness but also for preventing adverse health problems, such as losses in muscle strength and bone density. Monitoring crew exercise via motion capture and kinematic analysis aids understanding of the effects of microgravity on exercise and helps ensure that exercise prescriptions are effective. Intelligent Automation, Inc., has developed ESPRIT to monitor exercise activities, detect body markers, extract image features, and recover three-dimensional (3D) kinematic body poses. The system relies on prior knowledge and modeling of the human body and on advanced statistical inference techniques to achieve robust and accurate motion capture. In Phase I, the company demonstrated motion capture of several exercises, including walking, curling, and dead lifting. Phase II efforts focused on enhancing algorithms and delivering an ESPRIT prototype for testing and demonstration.

  16. A New Position Measurement System Using a Motion-Capture Camera for Wind Tunnel Tests

    PubMed Central

    Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok

    2013-01-01

    Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. The motion-capture system (MCS) outputs 3D coordinates computed from the two-dimensional image coordinates obtained from the cameras. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements. PMID:24064600
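The frequency domain decomposition (FDD) step used for system identification above can be sketched as an SVD of a cross-spectral estimate at each frequency line; this is a minimal illustration on synthetic two-channel data, not the authors' processing chain:

```python
import numpy as np

def fdd_peak(signals, fs):
    """Minimal FDD sketch: SVD of a rank-1 cross-spectral estimate at each
    frequency line; the peak of the first singular value marks a natural
    frequency and the matching singular vector approximates the mode shape."""
    n_ch, n = signals.shape
    F = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    s1 = np.empty(len(freqs))
    vecs = np.empty((len(freqs), n_ch), dtype=complex)
    for k in range(len(freqs)):
        G = np.outer(F[:, k], F[:, k].conj())   # cross-spectral density estimate
        U, S, _ = np.linalg.svd(G)
        s1[k], vecs[k] = S[0], U[:, 0]
    k_peak = np.argmax(s1[1:]) + 1              # ignore the DC line
    return freqs[k_peak], vecs[k_peak]

# Two sensors observing the same 5 Hz mode with amplitude ratio 0.5
t = np.arange(1000) / 100.0
sig = np.vstack([np.sin(2 * np.pi * 5.0 * t), 0.5 * np.sin(2 * np.pi * 5.0 * t)])
f_nat, mode = fdd_peak(sig, 100.0)
```

A production implementation would average the cross-spectral matrix over windowed segments (Welch-style) before the SVD; the rank-1 estimate here keeps the sketch short.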

  17. Sedimentary basin effects in Seattle, Washington: Ground-motion observations and 3D simulations

    USGS Publications Warehouse

    Frankel, Arthur; Stephenson, William; Carver, David

    2009-01-01

    Seismograms of local earthquakes recorded in Seattle exhibit surface waves in the Seattle basin and basin-edge focusing of S waves. Spectral ratios of S waves and later arrivals at 1 Hz for stiff-soil sites in the Seattle basin show a dependence on the direction to the earthquake, with earthquakes to the south and southwest producing higher average amplification. Earthquakes to the southwest typically produce larger basin surface waves relative to S waves than earthquakes to the north and northwest, probably because of the velocity contrast across the Seattle fault along the southern margin of the Seattle basin. S to P conversions are observed for some events and are likely converted at the bottom of the Seattle basin. We model five earthquakes, including the M 6.8 Nisqually earthquake, using 3D finite-difference simulations accurate up to 1 Hz. The simulations reproduce the observed dependence of amplification on the direction to the earthquake. The simulations generally match the timing and character of basin surface waves observed for many events. The 3D simulation for the Nisqually earthquake produces focusing of S waves along the southern margin of the Seattle basin near the area in west Seattle that experienced increased chimney damage from the earthquake, similar to the results of the higher-frequency 2D simulation reported by Stephenson et al. (2006). Waveforms from the 3D simulations show reasonable agreement with the data at low frequencies (0.2-0.4 Hz) for the Nisqually earthquake and an M 4.8 deep earthquake west of Seattle.

  18. Undersampled Cine 3D tagging for rapid assessment of cardiac motion

    PubMed Central

    2012-01-01

    Background CMR allows investigating cardiac contraction, rotation and torsion non-invasively by the use of tagging sequences. Three-dimensional tagging has been proposed to cover the whole-heart but data acquisition requires three consecutive breath holds and hence demands considerable patient cooperation. In this study we have implemented and studied k-t undersampled cine 3D tagging in conjunction with k-t PCA reconstruction to potentially permit for single breath-hold acquisitions. Methods The performance of undersampled cine 3D tagging was investigated using computer simulations and in-vivo measurements in 8 healthy subjects and 5 patients with myocardial infarction. Fully sampled data was obtained and compared to retrospectively and prospectively undersampled acquisitions. Fully sampled data was acquired in three consecutive breath holds. Prospectively undersampled data was obtained within a single breath hold. Based on harmonic phase (HARP) analysis, circumferential shortening, rotation and torsion were compared between fully sampled and undersampled data using Bland-Altman and linear regression analysis. Results In computer simulations, the error for circumferential shortening was 2.8 ± 2.3% and 2.7 ± 2.1% for undersampling rates of R = 3 and 4 respectively. Errors in ventricular rotation were 2.5 ± 1.9% and 3.0 ± 2.2% for R = 3 and 4. Comparison of results from fully sampled in-vivo data acquired with prospectively undersampled acquisitions showed a mean difference in circumferential shortening of −0.14 ± 5.18% and 0.71 ± 6.16% for R = 3 and 4. The mean differences in rotation were 0.44 ± 1.8° and 0.73 ± 1.67° for R = 3 and 4, respectively. In patients peak, circumferential shortening was significantly reduced (p < 0.002 for all patients) in regions with late gadolinium enhancement. Conclusion Undersampled cine 3D tagging enables significant reduction in scan time of whole-heart tagging and

  19. Prospective motion correction of 3D echo-planar imaging data for functional MRI using optical tracking

    PubMed Central

    Todd, Nick; Josephs, Oliver; Callaghan, Martina F.; Lutti, Antoine; Weiskopf, Nikolaus

    2015-01-01

    We evaluated the performance of an optical camera based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field-of-view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2 × 2 × 3 factorial design with the following factors: PMC on or off; 3.0 mm or 1.5 mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5 mm resolution comparing PMC on vs PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value for each image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no motion conditions and improved the time-series temporal signal-to-noise ratio (tSNR) by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p < 0.001, uncorrected) in both task-based experiments were comparable for the no motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow motion cases. The PMC system is a robust solution to decrease the motion sensitivity of multi-shot 3D EPI sequences and thereby overcome one of the main roadblocks to their widespread use in fMRI studies. PMID:25783205
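The tSNR metric behind the reported 30-40% improvement is simply the temporal mean of a voxel's time series divided by its temporal standard deviation; a minimal sketch on hypothetical time series (tuned here to give a gain in the reported range, not real data):

```python
import numpy as np

# Hypothetical single-voxel fMRI time series (a.u.) with and without PMC
ts_pmc_off = np.array([100.0, 104.0, 96.0, 103.0, 97.0])
ts_pmc_on = np.array([100.0, 103.0, 97.0, 102.2, 97.8])

def tsnr(ts):
    """Temporal SNR: mean over time divided by temporal standard deviation."""
    return ts.mean() / ts.std(ddof=1)

gain = tsnr(ts_pmc_on) / tsnr(ts_pmc_off) - 1.0   # fractional tSNR improvement
```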

  20. Multitemporal 3D data capturing and GIS analysis of fluvial processes and geomorphological changes with terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Hämmerle, Martin; Forbriger, Markus; Höfle, Bernhard

    2013-04-01

    LiDAR is a state-of-the-art method for directly capturing 3D geodata. A laser beam is emitted in a known direction. The time of flight of the laser pulse is recorded and transformed into the distance between sensor and scanned object. The result of the scanning process is a 3D laser point cloud densely covering the surveyed area. LiDAR is used in a vast variety of research fields. In this study, the focus is on the application of terrestrial laser scanning (TLS), the static, ground-based mode of LiDAR operation, in a multitemporal analysis of fluvial geomorphology. Within the framework of two study projects in 2011/2012, two TLS surveys were carried out. The surveys covered a gravel bar of about 150 m × 25 m size in a side branch of the Neckar River near Heidelberg (49°28'36''N, 8°34'32''E) located in a nature reserve with natural river characteristics. The first survey was performed in November 2011, the second in June 2012. Due to seasonally changing water levels, the gravel bar was flooded and the morphology changed. For the field campaigns, a Riegl VZ-400 was available. Height control points and tie points for registration and georeferencing were obtained with a total station and GPS equipment. The first survey was done from 6 scan positions (77 million points) and the second from 5 positions (89 million points). The point spacing for each single scan was set to 3 mm at 10 m distance. Co-registration of the individual campaigns was done via an Iterative Closest Point algorithm. Thereafter, co-registration and fine georeferencing of both epochs were performed using manually selected tie points and least-squares adjustment. After filtering of vegetation in the 3D point cloud in the software OPALS, a digital terrain model (DTM) with 0.25 m by 0.25 m cell size was generated for each epoch. A difference raster model of the two DTMs for assessing the changes was derived excluding water surface areas using the signal amplitude recorded for each echo. From the mean
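The DTM differencing described above amounts to subtracting the two elevation rasters cell by cell, masking water-surface cells, and summing the changes over the cell area; a minimal sketch on a hypothetical 2 × 2 raster corner, not the actual survey data:

```python
import numpy as np

# Hypothetical 2 x 2 corner of the 0.25 m DTM rasters (elevations in m)
dtm_2011 = np.array([[100.0, 100.2], [100.1, 100.3]])
dtm_2012 = np.array([[100.1, 100.0], [100.1, 100.5]])
water = np.array([[False, True], [False, False]])    # water-surface cells masked

diff = np.where(water, np.nan, dtm_2012 - dtm_2011)  # elevation-change raster
net_volume = np.nansum(diff) * 0.25 * 0.25           # net volume change (m^3)
```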

  1. Analysis of 3-D Tongue Motion from Tagged and Cine Magnetic Resonance Images

    ERIC Educational Resources Information Center

    Xing, Fangxu; Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2016-01-01

    Purpose: Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during…

  2. Description of a 3D display with motion parallax and direct interaction

    NASA Astrophysics Data System (ADS)

    Tu, J.; Flynn, M. F.

    2014-03-01

    We present a description of a time sequential stereoscopic display which separates the images using a segmented polarization switch and passive eyewear. Additionally, integrated tracking cameras and an SDK on the host PC allow us to implement motion parallax in real time.

  3. Prediction of 3D internal organ position from skin surface motion: results from electromagnetic tracking studies

    NASA Astrophysics Data System (ADS)

    Wong, Kenneth H.; Tang, Jonathan; Zhang, Hui J.; Varghese, Emmanuel; Cleary, Kevin R.

    2005-04-01

    An effective treatment method for organs that move with respiration (such as the lungs, pancreas, and liver) is a major goal of radiation medicine. In order to treat such tumors, we need (1) real-time knowledge of the current location of the tumor, and (2) the ability to adapt the radiation delivery system to follow this constantly changing location. In this study, we used electromagnetic tracking in a swine model to address the first challenge, and to determine if movement of a marker attached to the skin could accurately predict movement of an internal marker embedded in an organ. Under approved animal research protocols, an electromagnetically tracked needle was inserted into a swine liver and an electromagnetically tracked guidewire was taped to the abdominal skin of the animal. The Aurora (Northern Digital Inc., Waterloo, Canada) electromagnetic tracking system was then used to monitor the position of both of these sensors every 40 msec. Position readouts from the sensors were then tested to see if any of the movements showed correlation. The strongest correlations were observed between external anterior-posterior motion and internal inferior-superior motion, with many other axes exhibiting only weak correlation. We also used these data to build a predictive model of internal motion by taking segments from the data and using them to derive a general functional relationship between the internal needle and the external guidewire. For the axis with the strongest correlation, this model enabled us to predict internal organ motion to within 1 mm.
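The predictive model described above relates an external and an internal motion axis; the simplest such functional relationship is a least-squares linear fit, sketched here with hypothetical displacement samples, not the swine data:

```python
import numpy as np

# Hypothetical paired samples (mm): external anterior-posterior skin motion
# and internal superior-inferior needle motion, one sample every 40 ms
ext = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0])
internal = np.array([0.1, 2.1, 4.0, 6.1, 3.9, 2.0])

# Fit internal ≈ a * ext + b by least squares, then check prediction error
A = np.vstack([ext, np.ones_like(ext)]).T
(a, b), *_ = np.linalg.lstsq(A, internal, rcond=None)
worst_error = np.max(np.abs((a * ext + b) - internal))   # mm
```

On the strongly correlated axis pair reported in the abstract, a fit of this form is what allows internal organ motion to be predicted to within about 1 mm from the external sensor alone.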

  4. Experience affects the use of ego-motion signals during 3D shape perception

    PubMed Central

    Jain, Anshul; Backus, Benjamin T.

    2011-01-01

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the “stationarity prior,” is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers’ stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity. PMID:21191132

  5. Motion Controllers for Learners to Manipulate and Interact with 3D Objects for Mental Rotation Training

    ERIC Educational Resources Information Center

    Yeh, Shih-Ching; Wang, Jin-Liang; Wang, Chin-Yeh; Lin, Po-Han; Chen, Gwo-Dong; Rizzo, Albert

    2014-01-01

    Mental rotation is an important spatial processing ability and an important element in intelligence tests. However, the majority of past attempts at training mental rotation have used paper-and-pencil tests or digital images. This study proposes an innovative mental rotation training approach using magnetic motion controllers to allow learners to…

  6. Numerical scheme for riser motion calculation during 3-D VIV simulation

    NASA Astrophysics Data System (ADS)

    Huang, Kevin; Chen, Hamn-Ching; Chen, Chia-Rong

    2011-10-01

    This paper presents a numerical scheme for riser motion calculation and its application to riser VIV simulations. The discretisation of the governing differential equation is studied first. The top-tensioned risers are simplified as tensioned beams. A centered-space, forward-time finite difference scheme is derived from the governing equations of motion. Then an implicit method is adopted for better numerical stability. The method meets the von Neumann criterion and is shown to be unconditionally stable. The discretized linear algebraic equations are solved using an LU decomposition method. This approach is then applied to a series of benchmark cases with known solutions. The comparisons show good agreement. Finally the method is applied to practical riser VIV simulations. The studied cases cover a wide range of riser VIV problems, i.e., different riser outer diameters, lengths, tensioning conditions, and current profiles. Reasonable agreement is obtained between the numerical simulations and experimental data on riser motions and cross-flow VIV amplitude ratio (a/D). These validations and comparisons confirm that the present numerical scheme for riser motion calculation is valid and effective for long riser VIV simulation.
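The scheme's structure can be illustrated on a stripped-down version of the problem; the sketch below (illustrative parameters, tensioned-string model with bending and damping omitted) uses a centred-space, implicit-time discretisation whose tridiagonal system is solved at each step, the same pattern the paper solves with LU decomposition:

```python
import numpy as np

# Illustrative parameters: n interior nodes, tension T (N), mass per unit
# length m (kg/m), riser length L (m), time step dt (s)
n, T, m, L, dt = 50, 1.0e5, 100.0, 100.0, 0.01
dx = L / (n + 1)
r = T * dt**2 / (m * dx**2)

# Implicit step for m*w_tt = T*w_xx:
#   (I - r*D2) w_new = 2*w - w_old, with D2 the centred second difference
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1))
A = np.eye(n) - r * D2

w = np.sin(np.pi * np.linspace(dx, L - dx, n) / L)   # initial lateral shape
w_old = w.copy()                                     # zero initial velocity
for _ in range(200):                                 # advance 200 time steps
    w, w_old = np.linalg.solve(A, 2.0 * w - w_old), w
```

Because the time discretisation is implicit, the amplification factor of every spatial mode has magnitude below one regardless of `dt`, which is the unconditional stability property the von Neumann analysis establishes.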

  7. Motion-parallax smoothness of short-, medium-, and long-distance 3D image presentation using multi-view displays.

    PubMed

    Takaki, Yasuhiro; Urano, Yohei; Nishio, Hiroyuki

    2012-11-19

    The discontinuity of motion parallax offered by multi-view displays was assessed by subjective evaluation. A super multi-view head-up display, which provides dense viewing points and has short-, medium-, and long-distance display ranges, was used. The results showed that discontinuity perception depended on the ratio of an image shift between adjacent parallax images to a pixel pitch of three-dimensional (3D) images and the crosstalk between viewing points. When the ratio was less than 0.2 and the crosstalk was small, the discontinuity was not perceived. When the ratio was greater than 1 and the crosstalk was small, the discontinuity was perceived, and the resolution of the 3D images decreased by a factor of two. When the crosstalk was large, the discontinuity was not perceived even when the ratio was 1 or 2. However, the resolution decreased by a factor of two or more. PMID:23187574

  8. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become more and more important in recent years. Up to now, there are several well-established methods which already yield impressive results. However, even under quite common conditions such as object movement or complex shapes, most methods give unsatisfactory results. Thus, 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables a motion-robust 3D acquisition of complex-shaped object surfaces by a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor zoo for different object sizes is presented. Finally, an overview of current and future fields of investigation is given.

  9. Combining marker-less patient setup and respiratory motion monitoring using low cost 3D camera technology

    NASA Astrophysics Data System (ADS)

    Tahavori, F.; Adams, E.; Dabbs, M.; Aldridge, L.; Liversidge, N.; Donovan, E.; Jordan, T.; Evans, PM.; Wells, K.

    2015-03-01

    Patient set-up misalignment/motion can be a significant source of error within external beam radiotherapy, leading to unwanted dose to healthy tissues and sub-optimal dose to the target tissue. Such inadvertent displacement or motion of the target volume may be caused by treatment set-up error, respiratory motion or an involuntary movement, potentially decreasing therapeutic benefit. The conventional approach to managing abdominal-thoracic patient set-up is via skin markers (tattoos) and laser-based alignment. Alignment of the internal target volume with its position in the treatment plan can be achieved using Deep Inspiration Breath Hold (DIBH) in conjunction with marker-based respiratory motion monitoring. We propose a marker-less single system solution for patient set-up and respiratory motion management based on low cost 3D depth camera technology (such as the Microsoft Kinect). In this new work we assess this approach in a study group of six volunteer subjects. Separate simulated treatment "fractions" (set-ups) are compared for each subject, undertaken using conventional laser-based alignment and using the intrinsic depth images produced by Kinect. Microsoft Kinect is also compared with the well-known RPM system for respiratory motion management in terms of monitoring free-breathing and DIBH. Preliminary results suggest that Kinect is able to produce mm-level surface alignment and comparable DIBH respiratory motion management when compared to the popular RPM system. Such an approach may also yield significant benefits in terms of patient throughput, as marker alignment and respiratory motion can be automated in a single system.

  10. Assessment of spinal mobility in ankylosing spondylitis using a video-based motion capture system.

    PubMed

    Garrido-Castro, Juan L; Medina-Carnicer, Rafael; Schiottis, Ruxandra; Galisteo, Alfonso M; Collantes-Estevez, Eduardo; Gonzalez-Navas, Cristina

    2012-10-01

    This paper describes the use of a video-based motion capture system to assess spinal mobility in patients with ankylosing spondylitis (AS). The aim of the study is to assess the reliability of the system by comparing it with conventional metrology, in order to define and analyze new measurements that better reflect spinal mobility. A motion capture system (UCOTrack) was used to measure spinal mobility in forty AS patients and twenty healthy subjects, with a marker set defining 33 3D measurements, some already used in conventional metrology. Radiographic studies were scored using the modified Stoke Ankylosing Spondylitis Spine Score index (mSASSS). Test-retest reliability studies were performed on the same day and over a two-week period. Motion capture showed very high reliability, with Intraclass Correlation Coefficient values ranging from 0.89 to 0.99 and low Standard Error of the Measurement (0.37-1.33 cm and 1.58°-6.54°), correlating very well with the Bath Ankylosing Spondylitis Metrology Index (BASMI) (p < 0.001) and, in some individual measures (cervical flexion, cervical lateral flexion, back inclination, shoulder-hip angle and spinal rotation), with mSASSS (p < 0.01). mSASSS also added significantly to the variance in multivariate linear regression analysis for certain measures (back inclination, cervical flexion and cervical lateral flexion). Quantitative results obtained with the motion capture system, using the defined protocol, were shown to be highly reliable in patients with AS. This technique could be a useful tool for assessing the outcome of the disease and for monitoring the evolution of spinal mobility in AS patients. PMID:22560166

  11. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    PubMed Central

    2011-01-01

    Background The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Results Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. Conclusions The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study
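Once agent bees have been stereo-matched, tracked and triangulated, the three motion components reduce to frame-to-frame position differences; a minimal sketch with hypothetical thorax coordinates, not the study's tracking output:

```python
import numpy as np

# Hypothetical triangulated thorax positions (mm) of one agent bee over frames
pos = np.array([[10.0, 5.0, 2.0],
                [10.2, 5.1, 2.6],
                [10.3, 5.0, 1.9]])

d = np.diff(pos, axis=0)                  # frame-to-frame motion components
dx, dy, dz = d[:, 0], d[:, 1], d[:, 2]    # horizontal, vertical, comb-normal
z_deflection = np.ptp(pos[:, 2])          # peak-to-peak z excursion of thorax
```

Separating the comb-normal component `dz` from the in-plane components is what lets active abdominal flipping be distinguished from passive motion of the bee curtain.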

  12. Satellite attitude motion models for capture and retrieval investigations

    NASA Technical Reports Server (NTRS)

    Cochran, John E., Jr.; Lahr, Brian S.

    1986-01-01

    The primary purpose of this research is to provide mathematical models which may be used in the investigation of various aspects of the remote capture and retrieval of uncontrolled satellites. Emphasis has been placed on analytical models; however, to verify analytical solutions, numerical integration must be used. Also, for satellites of certain types, numerical integration may be the only practical or perhaps the only possible method of solution. First, to provide a basis for analytical and numerical work, uncontrolled satellites were categorized using criteria based on: (1) orbital motions, (2) external angular momenta, (3) internal angular momenta, (4) physical characteristics, and (5) the stability of their equilibrium states. Several analytical solutions for the attitude motions of satellite models were compiled, checked, corrected in some minor respects and their short-term prediction capabilities were investigated. Single-rigid-body, dual-spin and multi-rotor configurations are treated. To verify the analytical models and to see how the true motion of a satellite which is acted upon by environmental torques differs from its corresponding torque-free motion, a numerical simulation code was developed. This code contains a relatively general satellite model and models for gravity-gradient and aerodynamic torques. The spacecraft physical model for the code and the equations of motion are given. The two environmental torque models are described.

  13. Pitching motion control of a butterfly-like 3D flapping wing-body model

    NASA Astrophysics Data System (ADS)

    Suzuki, Kosuke; Minami, Keisuke; Inamuro, Takaji

    2014-11-01

    Free flights and a pitching motion control of a butterfly-like flapping wing-body model are numerically investigated by using an immersed boundary-lattice Boltzmann method. The model flaps downward for generating the lift force and backward for generating the thrust force. Although the model can go upward against gravity by the generated lift force, it generates a nose-up torque and consequently loses its balance. In this study, we discuss a way to control the pitching motion by flexing the body of the wing-body model like an actual butterfly. The body of the model is composed of two straight rigid rods connected by a rotary actuator. It is found that the pitching angle is suppressed to within ±5° by using proportional-plus-integral-plus-derivative (PID) control for the input torque of the rotary actuator.
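The PID loop for the actuator torque follows the standard form: the torque is a weighted sum of the pitching-angle error, its time integral, and its time derivative. A toy closed-loop sketch with illustrative gains and a crude first-order plant, not the paper's lattice Boltzmann model:

```python
def pid_torque(err, integ, prev_err, dt, kp, ki, kd):
    """One PID update for the body-joint actuator torque."""
    integ += err * dt                      # accumulate the integral term
    deriv = (err - prev_err) / dt          # finite-difference derivative term
    return kp * err + ki * integ + kd * deriv, integ

dt, kp, ki, kd = 0.01, 8.0, 1.0, 0.5       # illustrative gains
angle, integ, prev_err = 20.0, 0.0, 0.0    # start 20 degrees nose-up
for _ in range(2000):
    err = 0.0 - angle                      # target pitch angle: 0 degrees
    torque, integ = pid_torque(err, integ, prev_err, dt, kp, ki, kd)
    prev_err = err
    angle += torque * dt                   # crude plant: angle rate ∝ torque
```

With these gains the loop is overdamped and the pitch angle settles near the target, mirroring how the controlled model's pitching stays inside the ±5° band.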

  14. Stereo photography of neutral density He-filled bubbles for 3-D fluid motion studies in an engine cylinder.

    PubMed

    Kent, J C; Eaton, A R

    1982-03-01

A new technique has been developed for studies of fluid motion within the cylinder of a reciprocating piston engine during the air induction process. Helium-filled bubbles, serving as neutrally buoyant flow tracer particles, enter the cylinder along with the inducted air charge. The bubble motion is recorded by stereo cine photography through the transparent cylinder of a specially designed research engine. Quantitative data on the 3-D velocity field generated during induction are obtained from frame-to-frame analysis of the stereo images, taking into account refraction of the rays by the transparent cylinder. Other applications for which this technique appears suitable include measurements of velocity fields within intake ports and of flow-field dynamics within intake manifolds of multicylinder engines. PMID:20372559
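The frame-to-frame stereo analysis rests on triangulating each bubble from its two image positions. Below is a minimal sketch of that step using linear (DLT) triangulation for ideal pinhole cameras; the refraction correction for the transparent cylinder that the paper applies is omitted, and the camera matrices are synthetic.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices.
    uv1, uv2 : (u, v) image coordinates of the same bubble in each view.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([u1 * P1[2] - P1[0],
                   v1 * P1[2] - P1[1],
                   u2 * P2[2] - P2[0],
                   v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)       # null vector of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous -> Euclidean

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# two synthetic cameras observing a known point
X_true = np.array([0.1, -0.2, 2.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])])  # 0.3 baseline
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true, atol=1e-6))  # True
```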

  15. Hamiltonian model of capture into mean motion resonance

    NASA Astrophysics Data System (ADS)

    Mustill, Alexander J.; Wyatt, Mark C.

    2011-11-01

Mean motion resonances are a common feature of both our own Solar System and of extrasolar planetary systems. Bodies can be trapped in resonance when their orbital semi-major axes change, for instance when they migrate through a protoplanetary disc. We use a Hamiltonian model to thoroughly investigate the capture behaviour for first- and second-order resonances. Using this method, all resonances of the same order can be described by one equation, with applications to specific resonances by appropriate scaling. We focus on the limit where one body is a massless test particle and the other a massive planet. We quantify how the probability of capture into a resonance depends on the relative migration rate of the planet and particle, and on the particle's eccentricity. Resonant capture fails for high migration rates, and has decreasing probability for higher eccentricities, although for certain migration rates capture probability peaks at a finite eccentricity. We also calculate libration amplitudes and the offset of the libration centres for captured particles, and the change in eccentricity if capture does not occur. Libration amplitudes are higher for larger initial eccentricity. The model allows for a complete description of a particle's behaviour as it successively encounters several resonances. The model is applicable to many scenarios, including (i) planet migration through gas discs trapping other planets or planetesimals in resonances; (ii) planet migration through a debris disc; (iii) dust migration under Poynting-Robertson (PR) drag. The Hamiltonian model will allow quick interpretation of the resonant properties of extrasolar planets and Kuiper Belt Objects, and will allow synthetic images of debris disc structures to be quickly generated, which will be useful for predicting and interpreting disc images made with ALMA, Darwin/TPF or similar missions. Full details can be found in Mustill & Wyatt (2011).

  16. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses under variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances of up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. PMID:27590974

  17. Towards real-time 2D/3D registration for organ motion monitoring in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Gendrin, C.; Spoerk, J.; Bloch, C.; Pawiro, S. A.; Weber, C.; Figl, M.; Markelj, P.; Pernus, F.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2010-02-01

Nowadays, radiation therapy systems incorporate kV imaging units which allow for the real-time acquisition of intra-fractional X-ray images of the patient with high detail and contrast. An application of this technology is tumor motion monitoring during irradiation. For tumor tracking, implanted markers or position sensors are used, which requires an intervention. 2D/3D intensity-based registration is an alternative, non-invasive method, but the procedure must be accelerated to match the update rate of the imaging device, which lies in the range of 5 Hz. In this paper we investigate fast 2D/3D registration of a CT volume to a single kV X-ray image using a new porcine reference phantom with seven implanted fiducial markers. Several parameters influencing the speed and accuracy of the registrations are investigated. First, four intensity-based merit functions, namely cross-correlation, rank correlation, mutual information and correlation ratio, are compared. Secondly, wobbled splatting and ray casting rendering techniques are implemented on the GPU and the influence of each algorithm on the performance of 2D/3D registration is evaluated. Rendering times of 20 ms for a single DRR were achieved. Different thresholds of the CT volume were also examined for rendering to find the setting that achieves the best possible correspondence with the X-ray images. Registration times below 4 s became possible, with an in-plane accuracy down to 0.8 mm.
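Two of the four merit functions compared in the paper, cross-correlation and mutual information, are easy to sketch. The snippet below computes both for a rendered-DRR/X-ray pair; the images here are synthetic stand-ins, and the histogram bin count is an arbitrary choice.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in nats)."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                          # joint distribution
    px = p.sum(axis=1, keepdims=True)        # marginals
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
drr = rng.random((64, 64))            # stand-in for a rendered DRR
xray_same = 2.0 * drr + 1.0           # linearly related "X-ray" image
xray_noise = rng.random((64, 64))     # unrelated image
print(round(ncc(drr, xray_same), 3))           # 1.0 (perfect correlation)
print(abs(ncc(drr, xray_noise)) < 0.1)         # weak correlation: True
print(mutual_information(drr, xray_same) >
      mutual_information(drr, xray_noise))     # True
```

Mutual information is the usual choice when the two modalities are related non-linearly, since it does not assume a linear intensity mapping as cross-correlation does.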

  18. Continuous-scanning laser Doppler vibrometry: Extensions to arbitrary areas, multi-frequency and 3D capture

    SciTech Connect

    Weekes, B.; Ewins, D.; Acciavatti, F.

    2014-05-27

To date, differing implementations of continuous-scan laser Doppler vibrometry have been demonstrated by various academic institutions, but since the scan paths were defined using step or sine functions from function generators, the paths were typically limited to 1D line scans or 2D areas such as raster paths or Lissajous trajectories. The excitation was previously often limited to a single frequency due to the specific signal processing performed to convert the scan data into an operating deflection shape (ODS). In this paper, a configuration of continuous-scan laser Doppler vibrometry is demonstrated which permits scanning of arbitrary areas, with the benefit of allowing multi-frequency/broadband excitation. Various means of generating scan paths to inspect arbitrary areas are discussed and demonstrated. Further, full 3D vibration capture is demonstrated by the addition of a range-finding facility to the described configuration, and by iteratively relocating a single scanning laser head. Here, the range-finding facility was provided by a Microsoft Kinect, an inexpensive piece of consumer electronics.

  19. Continuous-scanning laser Doppler vibrometry: Extensions to arbitrary areas, multi-frequency and 3D capture

    NASA Astrophysics Data System (ADS)

    Weekes, B.; Ewins, D.; Acciavatti, F.

    2014-05-01

To date, differing implementations of continuous-scan laser Doppler vibrometry have been demonstrated by various academic institutions, but since the scan paths were defined using step or sine functions from function generators, the paths were typically limited to 1D line scans or 2D areas such as raster paths or Lissajous trajectories. The excitation was previously often limited to a single frequency due to the specific signal processing performed to convert the scan data into an operating deflection shape (ODS). In this paper, a configuration of continuous-scan laser Doppler vibrometry is demonstrated which permits scanning of arbitrary areas, with the benefit of allowing multi-frequency/broadband excitation. Various means of generating scan paths to inspect arbitrary areas are discussed and demonstrated. Further, full 3D vibration capture is demonstrated by the addition of a range-finding facility to the described configuration, and by iteratively relocating a single scanning laser head. Here, the range-finding facility was provided by a Microsoft Kinect, an inexpensive piece of consumer electronics.
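A classic function-generator scan path of the kind these papers move beyond is the Lissajous trajectory, produced by driving the two scan mirrors with sinusoids of different frequencies. A minimal sketch (the frequencies, amplitudes, and phase are arbitrary choices for the example):

```python
import numpy as np

def lissajous(fx, fy, t, ax=1.0, ay=1.0, phase=np.pi / 2):
    """Mirror-drive signals for a Lissajous scan trajectory."""
    x = ax * np.sin(2 * np.pi * fx * t)
    y = ay * np.sin(2 * np.pi * fy * t + phase)
    return x, y

# with integer frequencies the pattern repeats every 1/gcd(fx, fy) seconds
t = np.linspace(0.0, 1.0, 20001)       # fx=3 Hz, fy=4 Hz -> 1 s period
x, y = lissajous(3, 4, t)
print(np.allclose([x[0], y[0]], [x[-1], y[-1]]))   # closed path: True
print(x.max() <= 1.0 and y.min() >= -1.0)          # stays in unit square: True
```

Covering an arbitrary (non-rectangular) area, as in the paper, requires replacing such closed-form drives with paths generated to fill the region of interest.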

  20. 3D PET image reconstruction including both motion correction and registration directly into an MR or stereotaxic spatial atlas

    NASA Astrophysics Data System (ADS)

    Gravel, Paul; Verhaeghe, Jeroen; Reader, Andrew J.

    2013-01-01

    This work explores the feasibility and impact of including both the motion correction and the image registration transformation parameters from positron emission tomography (PET) image space to magnetic resonance (MR), or stereotaxic, image space within the system matrix of PET image reconstruction. This approach is motivated by the fields of neuroscience and psychiatry, where PET is used to investigate differences in activation patterns between different groups of participants, requiring all images to be registered to a common spatial atlas. Currently, image registration is performed after image reconstruction which introduces interpolation effects into the final image. Furthermore, motion correction (also requiring registration) introduces a further level of interpolation, and the overall result of these operations can lead to resolution degradation and possibly artifacts. It is important to note that performing such operations on a post-reconstruction basis means, strictly speaking, that the final images are not ones which maximize the desired objective function (e.g. maximum likelihood (ML), or maximum a posteriori reconstruction (MAP)). To correctly seek parameter estimates in the desired spatial atlas which are in accordance with the chosen reconstruction objective function, it is necessary to include the transformation parameters for both motion correction and registration within the system modeling stage of image reconstruction. Such an approach not only respects the statistically chosen objective function (e.g. ML or MAP), but furthermore should serve to reduce the interpolation effects. To evaluate the proposed method, this work investigates registration (including motion correction) using 2D and 3D simulations based on the high resolution research tomograph (HRRT) PET scanner geometry, with and without resolution modeling, using the ML expectation maximization (MLEM) reconstruction algorithm. The quality of reconstruction was assessed using bias
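The core idea, folding the motion/registration transformation into the system matrix so the reconstruction objective is maximized directly in atlas space, can be illustrated with a deliberately tiny 1D MLEM example. Here the "scanner" model A is trivial and the transformation T is a circular shift; both are stand-ins, not the HRRT system model, and the data are noise-free.

```python
import numpy as np

def mlem(H, y, n_iter=50):
    """MLEM reconstruction with the transformation folded into H."""
    x = np.ones(H.shape[1])
    sens = H.T @ np.ones(H.shape[0])          # sensitivity image
    for _ in range(n_iter):
        proj = H @ x                          # forward project
        x *= (H.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

n = 32
x_true = np.zeros(n)
x_true[10:14] = 1.0                 # "atlas-space" activity: one hot segment
T = np.roll(np.eye(n), 5, axis=0)   # motion/registration: circular shift
A = np.eye(n)                       # trivial scanner model
H = A @ T                           # transformation inside the system matrix
y = H @ x_true                      # noise-free "measured" data
x_hat = mlem(H, y)                  # estimate lives directly in atlas space
print(np.allclose(x_hat, x_true, atol=1e-3))  # True
```

Because T is inside H, the iterates maximize the likelihood of the atlas-space image directly; no post-reconstruction resampling (and its interpolation blur) is needed.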

  1. Modulated Magnetic Nanowires for Controlling Domain Wall Motion: Toward 3D Magnetic Memories.

    PubMed

    Ivanov, Yurii P; Chuvilin, Andrey; Lopatin, Sergei; Kosel, Jurgen

    2016-05-24

    Cylindrical magnetic nanowires are attractive materials for next generation data storage devices owing to the theoretically achievable high domain wall velocity and their efficient fabrication in highly dense arrays. In order to obtain control over domain wall motion, reliable and well-defined pinning sites are required. Here, we show that modulated nanowires consisting of alternating nickel and cobalt sections facilitate efficient domain wall pinning at the interfaces of those sections. By combining electron holography with micromagnetic simulations, the pinning effect can be explained by the interaction of the stray fields generated at the interface and the domain wall. Utilizing a modified differential phase contrast imaging, we visualized the pinned domain wall with a high resolution, revealing its three-dimensional vortex structure with the previously predicted Bloch point at its center. These findings suggest the potential of modulated nanowires for the development of high-density, three-dimensional data storage devices. PMID:27138460

  2. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
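The server-side merging the paper describes, in which named objects from several tracking systems are exposed as a single feed, might be sketched as follows. The class and method names are illustrative, not the authors' API, and the merge rule (keep the most recent report per object) is one simple possibility.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class Sample:
    position: Tuple[float, float, float]
    timestamp: float
    source: str                      # which tracking system reported it

class TrackerHub:
    """Merge named tracked objects from multiple systems into one feed.

    Illustrative sketch only; a real server would also handle coordinate
    frame alignment between systems, latency, and dropout smoothing.
    """
    def __init__(self):
        self._objects: Dict[str, Sample] = {}

    def update(self, source, name, position, timestamp):
        cur = self._objects.get(name)
        # keep whichever system reported this object most recently, so an
        # object transitions seamlessly between tracking areas
        if cur is None or timestamp >= cur.timestamp:
            self._objects[name] = Sample(position, timestamp, source)

    def get(self, name):
        return self._objects[name]

hub = TrackerHub()
hub.update("optical", "rifle", (0.0, 1.2, 0.5), timestamp=10.0)
hub.update("inertial", "rifle", (0.1, 1.2, 0.5), timestamp=10.5)  # newer
print(hub.get("rifle").source)     # inertial
print(hub.get("rifle").position)   # (0.1, 1.2, 0.5)
```

Clients query objects by name only, so the individual drivers and data formats stay hidden behind the hub, mirroring the abstraction the paper argues for.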

  3. Nonlinear, nonlaminar-3D computation of electron motion through the output cavity of a klystron

    NASA Technical Reports Server (NTRS)

    Albers, L. U.; Kosmahl, H. G.

    1971-01-01

The equations of motion used in the computation are discussed along with the space charge fields and the integration process. The following assumptions were used as a basis for the computation: (1) The beam is divided into N axisymmetric discs of equal charge and each disc into R rings of equal charge. (2) The velocity of each disc, its phase with respect to the gap voltage, and its radius at a specified position in the drift tunnel prior to the interaction gap are known from available large-signal one-dimensional programs. (3) The fringing rf fields are computed from exact analytical expressions derived from the wave equation assuming a known field shape between the tunnel tips at a radius a. (4) The beam is focused by an axisymmetric magnetic field; both components of B, that is B_z and B_r, are taken into account. (5) Since the integration does not start at the cathode but further downstream, prior to entering the output cavity, it is assumed that each electron moves along a laminar path from the cathode to the start of integration.

  4. 3D optical imagery for motion compensation in a limb ultrasound system

    NASA Astrophysics Data System (ADS)

    Ranger, Bryan J.; Feigin, Micha; Zhang, Xiang; Mireault, Al; Raskar, Ramesh; Herr, Hugh M.; Anthony, Brian W.

    2016-04-01

    Conventional processes for prosthetic socket fabrication are heavily subjective, often resulting in an interface to the human body that is neither comfortable nor completely functional. With nearly 100% of amputees reporting that they experience discomfort with the wearing of their prosthetic limb, designing an effective interface to the body can significantly affect quality of life and future health outcomes. Active research in medical imaging and biomechanical tissue modeling of residual limbs has led to significant advances in computer aided prosthetic socket design, demonstrating an interest in moving toward more quantifiable processes that are still patient-specific. In our work, medical ultrasonography is being pursued to acquire data that may quantify and improve the design process and fabrication of prosthetic sockets while greatly reducing cost compared to an MRI-based framework. This paper presents a prototype limb imaging system that uses a medical ultrasound probe, mounted to a mechanical positioning system and submerged in a water bath. The limb imaging is combined with three-dimensional optical imaging for motion compensation. Images are collected circumferentially around the limb and combined into cross-sectional axial image slices, resulting in a compound image that shows tissue distributions and anatomical boundaries similar to magnetic resonance imaging. In this paper we provide a progress update on our system development, along with preliminary results as we move toward full volumetric imaging of residual limbs for prosthetic socket design. This demonstrates a novel multi-modal approach to residual limb imaging.

  5. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network.

    PubMed

    Bukhari, W; Hong, S-M

    2016-03-01

The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient's breathing cycle. The algorithm, named EKF-GPRN+, first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+. The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating

  6. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit
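The per-coordinate EKF stage can be illustrated with a simplified linear stand-in: a constant-velocity Kalman filter predicting a 1D breathing trace several samples ahead. The state model, noise levels, sampling rate, and lookahead below are invented for the example, and the GPRN correction step is not modeled.

```python
import numpy as np

def kf_predict_trace(z, dt=0.1, lookahead_steps=4, q=1.0, r=1e-4):
    """Constant-velocity Kalman filter predicting a 1D breathing trace.

    A linear stand-in for the per-coordinate EKF stage of EKF-GPRN+;
    returns, for each sample, the position predicted lookahead_steps ahead.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])         # state transition
    H = np.array([[1.0, 0.0]])                    # observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])           # process noise
    R = np.array([[r]])                           # measurement noise
    x = np.array([[z[0]], [0.0]])
    P = np.eye(2)
    Fk = np.linalg.matrix_power(F, lookahead_steps)
    preds = []
    for zk in z:
        # measurement update
        y = zk - (H @ x)[0, 0]
        S = H @ P @ H.T + R
        K = P @ H.T / S
        x = x + K * y
        P = (np.eye(2) - K @ H) @ P
        # multi-step-ahead prediction (lookahead)
        preds.append((Fk @ x)[0, 0])
        # time update to the next sample
        x = F @ x
        P = F @ P @ F.T + Q
    return np.array(preds)

t = np.arange(0.0, 30.0, 0.1)
z = np.sin(2 * np.pi * t / 4.0)        # idealized 4 s breathing cycle
preds = kf_predict_trace(z)            # 0.4 s lookahead
err_pred = np.sqrt(np.mean((preds[50:-4] - z[54:]) ** 2))
err_none = np.sqrt(np.mean((z[50:-4] - z[54:]) ** 2))
print(err_pred < err_none)             # prediction beats "no prediction": True
```

The "no prediction" baseline simply reuses the latest sample, which is the comparison the paper's percentage figures are quoted against.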

  7. Effects of simple and complex motion patterns on gene expression of chondrocytes seeded in 3D scaffolds.

    PubMed

    Grad, Sibylle; Gogolewski, Sylwester; Alini, Mauro; Wimmer, Markus A

    2006-11-01

    This study investigated the effect of unidirectional and multidirectional motion patterns on gene expression and molecule release of chondrocyte-seeded 3D scaffolds. Resorbable porous polyurethane scaffolds were seeded with bovine articular chondrocytes and exposed to dynamic compression, applied with a ceramic hip ball, alone (group 1), with superimposed rotation of the scaffold around its cylindrical axis (group 2), oscillation of the ball over the scaffold surface (group 3), or oscillation of ball and scaffold in phase difference (group 4). Compared with group 1, the proteoglycan 4 (PRG4) and cartilage oligomeric matrix protein (COMP) mRNA expression levels were markedly increased by ball oscillation (groups 3 and 4). Furthermore, the collagen type II mRNA expression was enhanced in the groups 3 and 4, while the aggrecan and tissue inhibitor of metalloproteinase-3 (TIMP-3) mRNA expression levels were upregulated by multidirectional articular motion (group 4). Ball oscillation (groups 3 and 4) also increased the release of PRG4, COMP, and hyaluronan (HA) into the culture media. This indicates that the applied stimuli can contribute to the maintenance of the chondrocytic phenotype of the cells. The mechanical effects causing cell stimulation by applied surface motion might be related to fluid film buildup and/or frictional shear at the scaffold-ball interface. It is suggested that the oscillating ball drags the fluid into the joint space, thereby causing biophysical effects similar to those of fluid flow. PMID:17518631

  8. Simultaneous 3D imaging of sound-induced motions of the tympanic membrane and middle ear ossicles.

    PubMed

    Chang, Ernest W; Cheng, Jeffrey T; Röösli, Christof; Kobler, James B; Rosowski, John J; Yun, Seok Hyun

    2013-10-01

    Efficient transfer of sound by the middle ear ossicles is essential for hearing. Various pathologies can impede the transmission of sound and thereby cause conductive hearing loss. Differential diagnosis of ossicular disorders can be challenging since the ossicles are normally hidden behind the tympanic membrane (TM). Here we describe the use of a technique termed optical coherence tomography (OCT) vibrography to view the sound-induced motion of the TM and ossicles simultaneously. With this method, we were able to capture three-dimensional motion of the intact TM and ossicles of the chinchilla ear with nanometer-scale sensitivity at sound frequencies from 0.5 to 5 kHz. The vibration patterns of the TM were complex and highly frequency dependent with mean amplitudes of 70-120 nm at 100 dB sound pressure level. The TM motion was only marginally sensitive to stapes fixation and incus-stapes joint interruption; however, when additional information derived from the simultaneous measurement of ossicular motion was added, it was possible to clearly distinguish these different simulated pathologies. The technique may be applicable to clinical diagnosis in Otology and to basic research in audition and acoustics. PMID:23811181

  9. Validation of the Leap Motion Controller using markered motion capture technology.

    PubMed

    Smeragliuolo, Anna H; Hill, N Jeremy; Disla, Luis; Putrino, David

    2016-06-14

The Leap Motion Controller (LMC) is a low-cost, markerless motion capture device that tracks hand, wrist and forearm position. Integration of this technology into healthcare applications has begun to occur rapidly, making validation of the LMC's data output an important research goal. Here, we perform a detailed evaluation of the kinematic data output from the LMC, and validate this output against gold-standard, markered motion capture technology. We instructed subjects to perform three clinically-relevant wrist (flexion/extension, radial/ulnar deviation) and forearm (pronation/supination) movements. The movements were simultaneously tracked using both the LMC and a marker-based motion capture system from Motion Analysis Corporation (MAC). Adjusting for known inconsistencies in the LMC sampling frequency, we compared simultaneously acquired LMC and MAC data by performing Pearson's correlation (r) and root mean square error (RMSE). Wrist flexion/extension and radial/ulnar deviation showed good overall agreement (r=0.95; RMSE=11.6°, and r=0.92; RMSE=12.4°, respectively) with the MAC system. However, when tracking forearm pronation/supination, there were serious inconsistencies in reported joint angles (r=0.79; RMSE=38.4°). Hand posture significantly influenced the quality of wrist deviation (P<0.005) and forearm supination/pronation (P<0.001), but not wrist flexion/extension (P=0.29). We conclude that the LMC is capable of providing data that are clinically meaningful for wrist flexion/extension, and perhaps wrist deviation. It cannot yet return clinically meaningful data for measuring forearm pronation/supination. Future studies should continue to validate the LMC as updated versions of their software are developed. PMID:27102160
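The agreement statistics used in the study, Pearson's r and RMSE between simultaneously acquired traces, can be computed in a few lines. The traces below are synthetic stand-ins for resampled LMC and MAC joint angles; the noise level is chosen so the numbers land near the reported flexion/extension figures.

```python
import numpy as np

def agreement(angles_est, angles_ref):
    """Pearson's r and RMSE (degrees) between two joint-angle traces."""
    r = float(np.corrcoef(angles_est, angles_ref)[0, 1])
    rmse = float(np.sqrt(np.mean((angles_est - angles_ref) ** 2)))
    return r, rmse

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
mac = 40.0 * np.sin(0.8 * t)               # "gold standard" wrist angle
lmc = mac + rng.normal(0.0, 8.0, t.size)   # noisy markerless estimate
r, rmse = agreement(lmc, mac)
print(r > 0.9)          # strong agreement, as for flexion/extension: True
print(5 < rmse < 12)    # RMSE on the order of the reported ~11.6 deg: True
```

Note that r and RMSE answer different questions: r measures whether the two systems move together, while RMSE measures the absolute angular disagreement, which is what matters clinically.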

  10. 3-D ground motion modeling for M7 dynamic rupture earthquake scenarios on the Wasatch fault, Utah

    NASA Astrophysics Data System (ADS)

    Roten, D.; Olsen, K. B.; Cruz Atienza, V. M.; Pechmann, J. C.; Magistrale, H. W.

    2009-12-01

    The Salt Lake City segment of the Wasatch fault (WFSLC), located on the eastern edge of the Salt Lake Basin (SLB), is capable of producing M7 earthquakes and represents a serious seismic hazard to Salt Lake City, Utah. We simulate a series of rupture scenarios on the WFSLC to quantify the ground motion expected from such M7 events and to assess the importance of amplification effects from basin focusing and source directivity. We use the newly revised Wasatch Front community velocity model for our simulations, which is tested by simulating records of three local Mw 3.3-3.7 earthquakes in the frequency band 0.5 to 1.0 Hz. The M7 earthquake scenarios make use of a detailed 3-D model geometry of the WFSLC that we developed based on geological observations. To obtain a suite of realistic source representations for M7 WFSLC simulations we perform spontaneous-rupture simulations on a planar 43 km by 23 km fault with the staggered-grid split-node finite-difference (FD) method. We estimate the initial distribution of shear stress using models that assume depth-dependent normal stress for a dipping, normal fault as well as simpler models which use constant (depth-independent) normal stress. The slip rate histories from the spontaneous rupture scenarios are projected onto the irregular dipping geometry of the WFSLC and used to simulate 0-1 Hz wave propagation in the SLB area using a 4th-order, staggered-grid visco-elastic FD method. We find that peak ground velocities tend to be larger on the low-velocity sediments on the hanging wall side of the fault than on outcropping rock on the footwall side, confirming results of previous studies on normal faulting earthquakes. The simulated ground motions reveal strong along-strike directivity effects for ruptures nucleating towards the ends of the WFSLC. The 0-1 Hz FD simulations are combined with local scattering operators to obtain broadband (0-10 Hz) synthetics and maps of average peak ground motions. Finally we use broadband

  11. How Plates Pull Transforms Apart: 3-D Numerical Models of Oceanic Transform Fault Response to Changes in Plate Motion Direction

    NASA Astrophysics Data System (ADS)

    Morrow, T. A.; Mittelstaedt, E. L.; Olive, J. A. L.

    2015-12-01

    Observations along oceanic fracture zones suggest that some mid-ocean ridge transform faults (TFs) previously split into multiple strike-slip segments separated by short (<~50 km) intra-transform spreading centers and then reunited to a single TF trace. This history of segmentation appears to correspond with changes in plate motion direction. Despite the clear evidence of TF segmentation, the processes governing its development and evolution are not well characterized. Here we use a 3-D, finite-difference / marker-in-cell technique to model the evolution of localized strain at a TF subjected to a sudden change in plate motion direction. We simulate the oceanic lithosphere and underlying asthenosphere at a ridge-transform-ridge setting using a visco-elastic-plastic rheology with a history-dependent plastic weakening law and a temperature- and stress-dependent mantle viscosity. To simulate the development of topography, a low density, low viscosity 'sticky air' layer is present above the oceanic lithosphere. The initial thermal gradient follows a half-space cooling solution with an offset across the TF. We impose an enhanced thermal diffusivity in the uppermost 6 km of lithosphere to simulate the effects of hydrothermal circulation. An initial weak seed in the lithosphere helps localize shear deformation between the two offset ridge axes to form a TF. For each model case, the simulation is run initially with TF-parallel plate motion until the thermal structure reaches a steady state. The direction of plate motion is then rotated either instantaneously or over a specified time period, placing the TF in a state of trans-tension. Model runs continue until the system reaches a new steady state. Parameters varied here include: initial TF length, spreading rate, and the rotation rate and magnitude of spreading obliquity. We compare our model predictions to structural observations at existing TFs and records of TF segmentation preserved in oceanic fracture zones.

  12. 3-D or median map? Earthquake scenario ground-motion maps from physics-based models versus maps from ground-motion prediction equations

    NASA Astrophysics Data System (ADS)

    Porter, K.

    2015-12-01

    There are two common ways to create a ground-motion map for a hypothetical earthquake: using ground motion prediction equations (by far the more common of the two) and using 3-D physics-based modeling. The former is very familiar to engineers, the latter much less so, and the difference can present a problem because engineers tend to trust the familiar and distrust novelty. Maps for essentially the same hypothetical earthquake using the two different methods can look very different, while appearing to present the same information. Using one or the other can lead an engineer or disaster planner to very different estimates of damage and risk. The reasons have to do with depiction of variability, spatial correlation of shaking, the skewed distribution of real-world shaking, and the upward-curving relationship between shaking and damage. The scientists who develop the two kinds of map tend to specialize in one or the other and seem to defend their turf, which can aggravate the problem of clearly communicating with engineers. The USGS Science Application for Risk Reduction's (SAFRR) HayWired scenario has addressed the challenge of explaining to engineers the differences between the two maps, and why, in a disaster planning scenario, one might want to use the less-familiar 3-D map.
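    The statistical point at issue — a skewed, roughly lognormal distribution of real-world shaking combined with an upward-curving damage relationship — can be demonstrated numerically. The sketch below is illustrative only: the power-law fragility and the lognormal dispersion value are assumptions of ours, not the SAFRR scenario's actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Lognormal ground-motion variability about a median of 0.3 g
# (beta is an assumed logarithmic standard deviation).
median_sa, beta = 0.3, 0.6
sa = median_sa * np.exp(beta * rng.standard_normal(100_000))

# Upward-curving (convex) damage-vs-shaking relation: illustrative power law.
def damage(s):
    return s ** 2.5

# Damage evaluated on the median map underestimates the expected damage
# over realizations, by Jensen's inequality for a convex function.
d_from_median = damage(median_sa)
d_expected = damage(sa).mean()
assert d_expected > d_from_median
```

    This is one reason a single median-map estimate and an ensemble of physics-based realizations can imply very different risk.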

  13. Mapping motion from 4D-MRI to 3D-CT for use in 4D dose calculations: A technical feasibility study

    SciTech Connect

    Boye, Dirk; Lomax, Tony; Knopf, Antje

    2013-06-15

    Purpose: Target sites affected by organ motion require a time resolved (4D) dose calculation. Typical 4D dose calculations use 4D-CT as a basis. Unfortunately, 4D-CT images have the disadvantage of being a 'snap-shot' of the motion during acquisition and of assuming regularity of breathing. In addition, 4D-CT acquisitions involve a substantial additional dose burden to the patient, making many repeated 4D-CT acquisitions undesirable. Here the authors test the feasibility of an alternative approach to generate patient specific 4D-CT data sets. Methods: In this approach motion information is extracted from 4D-MRI. Simulated 4D-CT data sets [which the authors call 4D-CT(MRI)] are created by warping extracted deformation fields to a static 3D-CT data set. The employment of 4D-MRI sequences for this has the advantage that no assumptions on breathing regularity are made, irregularities in breathing can be studied and, if necessary, many repeat imaging studies (and consequently simulated 4D-CT data sets) can be performed on patients and/or volunteers. The accuracy of 4D-CT(MRI)s has been validated by 4D proton dose calculations. Our 4D dose algorithm takes into account displacements as well as deformations on the originating 4D-CT/4D-CT(MRI) by calculating the dose of each pencil beam based on an individual time stamp of when that pencil beam is applied. According to the corresponding displacement and density-variation maps, the position and the water equivalent range of the dose grid points are adjusted at each time instance. Results: 4D dose distributions, using 4D-CT(MRI) data sets as input were compared to results based on a reference conventional 4D-CT data set capturing similar motion characteristics. Almost identical 4D dose distributions could be achieved, even though scanned proton beams are very sensitive to small differences in the patient geometry. In addition, 4D dose calculations have been performed on the same patient, but using 4D-CT(MRI) data sets based on

  14. Bi-planar 2D-to-3D registration in Fourier domain for stereoscopic x-ray motion tracking

    NASA Astrophysics Data System (ADS)

    Zosso, Dominique; Le Callennec, Benoît; Bach Cuadra, Meritxell; Aminian, Kamiar; Jolles, Brigitte M.; Thiran, Jean-Philippe

    2008-03-01

    In this paper we present a new method to track bone movements in stereoscopic X-ray image series of the knee joint. The method is based on two different X-ray image sets: a rotational series of acquisitions of the still subject knee that allows the tomographic reconstruction of the three-dimensional volume (model), and a stereoscopic image series of orthogonal projections as the subject performs movements. Tracking the movements of bones throughout the stereoscopic image series means determining, for each frame, the best pose of every moving element (bone) previously identified in the 3D reconstructed model. The quality of a pose is reflected in the similarity between its theoretical projections and the actual radiographs. We use direct Fourier reconstruction to approximate the three-dimensional volume of the knee joint. Then, to avoid the expensive computation of digitally rendered radiographs (DRR) for pose recovery, we develop a corollary to the 3-dimensional central-slice theorem and reformulate the tracking problem in the Fourier domain. Under the hypothesis of parallel X-ray beams, the heavy 2D-to-3D registration of projections in the signal domain is replaced by efficient slice-to-volume registration in the Fourier domain. Focusing on rotational movements, the translation-relevant phase information can be discarded and we only consider scalar Fourier amplitudes. The core of our motion tracking algorithm can be implemented as a classical frame-wise slice-to-volume registration task. Results on both synthetic and real images confirm the validity of our approach.
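    The 2-D analogue of the central-slice (projection-slice) theorem that the authors' corollary builds on can be verified directly with NumPy: under parallel projection, the 1-D Fourier transform of a projection equals a central slice of the object's 2-D Fourier transform.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Parallel projection along the y-axis (line integrals collapse one dimension).
proj = img.sum(axis=0)

# Central-slice theorem: the 1-D FFT of the projection equals the
# zero-frequency slice (here: the first row) of the 2-D FFT.
slice_2d = np.fft.fft2(img)[0, :]
fft_proj = np.fft.fft(proj)

assert np.allclose(fft_proj, slice_2d)
```

    The same identity in 3-D (a projection's 2-D FFT equals a central plane of the volume's 3-D FFT) is what makes slice-to-volume registration in the Fourier domain a substitute for computing DRRs.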

  15. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose

    NASA Astrophysics Data System (ADS)

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-01

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data and the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that with our hybrid approach, a root-mean-square tracking error smaller than 1.2 mm or 1.5 mm can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required with the
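    The geometric core of the position estimate — taking the point on the MV projection line nearest to the principal-component line fitted to earlier trajectory points — reduces to a standard closest-point computation between two 3-D lines. A minimal sketch (the function name and the handling of the parallel case are ours, not from the paper):

```python
import numpy as np

def closest_point_on_line_to_line(p, d, c, u):
    """Return the point on line p + t*d nearest to line c + s*u.

    p, c: points on each line; d, u: direction vectors (normalized here).
    """
    d = d / np.linalg.norm(d)
    u = u / np.linalg.norm(u)
    w = p - c
    a, b, e = d @ d, d @ u, u @ u   # a = e = 1 after normalization
    denom = a * e - b * b
    if np.isclose(denom, 0.0):      # lines (nearly) parallel: project p - c onto d
        t = -(w @ d) / a
    else:
        t = (b * (w @ u) - e * (w @ d)) / denom
    return p + t * d
```

    In the paper's setting, p and d would come from the MV source and detected marker projection, while c and u would come from the first principal component of recent 3-D trajectory points.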

  16. Does fluid infiltration affect the motion of sediment grains? - A 3-D numerical modelling approach using SPH

    NASA Astrophysics Data System (ADS)

    Bartzke, Gerhard; Rogers, Benedict D.; Fourtakas, Georgios; Mokos, Athanasios; Huhn, Katrin

    2016-04-01

    The processes that cause the creation of a variety of sediment morphological features, e.g. laminated beds, ripples, or dunes, are based on the initial motion of individual sediment grains. However, with experimental techniques it is difficult to measure the flow characteristics, i.e., the velocity of the pore water flow in sediments, at a sufficient resolution and in a non-intrusive way. As a result, the role that fluid infiltration at the surface and in the interior plays in the initiation of motion of a sediment bed is not yet fully understood. Consequently, there is a strong need for numerical models, since these are capable of quantifying fluid driven sediment transport processes of complex sediment beds composed of irregular shapes. The numerical method Smoothed Particle Hydrodynamics (SPH) satisfies this need. As a meshless and Lagrangian technique, SPH is ideally suited to simulating flows in sediment beds composed of various grain shapes, but also flow around single grains at a high temporal and spatial resolution. The solver chosen is DualSPHysics (www.dual.sphysics.org) since it has been validated for a range of flow conditions. For the present investigation a 3-D numerical flume model was generated using SPH with a length of 4.0 cm, a width of 0.05 cm and a height of 0.2 cm where mobile sediment particles were deposited in a recess. An experimental setup was designed to test sediment configurations composed of irregular grain shapes (grain diameter, D50=1000 μm). Each bed consisted of 3500 mobile objects. After the bed generation process, the entire domain was flooded with 18 million fluid particles. To drive the flow, an oscillating motion perpendicular to the bed was applied to the fluid, reaching a peak value of 0.3 cm/s, simulating 4 seconds of real time. The model results showed that flow speeds decreased logarithmically from the top of the domain towards the surface of the beds, indicating a fully developed boundary layer. Analysis of the fluid
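    For readers unfamiliar with SPH, field quantities are interpolated from neighbouring particles through a compactly supported smoothing kernel. The cubic-spline kernel below is a common textbook choice shown for illustration; DualSPHysics offers several kernels and this is not necessarily the one used in the study.

```python
import math

def cubic_spline_w(r, h):
    """Standard 3-D cubic-spline SPH smoothing kernel W(r, h), support 2h."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)   # 3-D normalisation constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0                          # zero beyond the compact support
```

    A particle's density, for example, is estimated as the kernel-weighted sum of neighbour masses, which is why kernel choice and smoothing length h control the effective spatial resolution of the simulated pore-scale flow.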

  17. Capturing tumor complexity in vitro: Comparative analysis of 2D and 3D tumor models for drug discovery.

    PubMed

    Stock, Kristin; Estrada, Marta F; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph

    2016-01-01

    Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard carcinoma cell lines: MCF7, LNCaP, NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models. PMID:27364600

  18. Capturing tumor complexity in vitro: Comparative analysis of 2D and 3D tumor models for drug discovery

    PubMed Central

    Stock, Kristin; Estrada, Marta F.; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E.; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C.; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph

    2016-01-01

    Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard carcinoma cell lines: MCF7, LNCaP, NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models. PMID:27364600

  19. The Role of Motion Extrapolation in Amphibian Prey Capture

    PubMed Central

    2015-01-01

    Sensorimotor delays decouple behaviors from the events that drive them. The brain compensates for these delays with predictive mechanisms, but the efficacy and timescale over which these mechanisms operate remain poorly understood. Here, we assess how prediction is used to compensate for prey movement that occurs during visuomotor processing. We obtained high-speed video records of freely moving, tongue-projecting salamanders catching walking prey, emulating natural foraging conditions. We found that tongue projections were preceded by a rapid head turn lasting ∼130 ms. This motor lag, combined with the ∼100 ms phototransduction delay at photopic light levels, gave a ∼230 ms visuomotor response delay during which prey typically moved approximately one body length. Tongue projections, however, did not significantly lag prey position but were highly accurate instead. Angular errors in tongue projection accuracy were consistent with a linear extrapolation model that predicted prey position at the time of tongue contact using the average prey motion during a ∼175 ms period one visual latency before the head movement. The model explained successful strikes where the tongue hit the fly, and unsuccessful strikes where the fly turned and the tongue hit a phantom location consistent with the fly's earlier trajectory. The model parameters, obtained from the data, agree with the temporal integration and latency of retinal responses proposed to contribute to motion extrapolation. These results show that the salamander predicts future prey position and that prediction significantly improves prey capture success over a broad range of prey speeds and light levels. SIGNIFICANCE STATEMENT Neural processing delays cause actions to lag behind the events that elicit them. To cope with these delays, the brain predicts what will happen in the future. While neural circuits in the retina and beyond have been suggested to participate in such predictions, few behaviors have been

  20. Shoulder 3D range of motion and humerus rotation in two volleyball spike techniques: injury prevention and performance.

    PubMed

    Seminati, Elena; Marzari, Alessandra; Vacondio, Oreste; Minetti, Alberto E

    2015-06-01

    Repetitive stresses and movements on the shoulder in the volleyball spike expose this joint to overuse injuries that can threaten an athlete's career. Assuming that specific spike techniques play an important role in injury risk, we compared the kinematics of the traditional (TT) and the alternative (AT) techniques in 21 elite athletes, evaluating their safety with respect to performance. The glenohumeral joint was set as the centre of an imaginary sphere, intersected by the distal end of the humerus at different angles. Shoulder range of motion and angular velocities were calculated and compared to the joint limits. Ball speed and jump height were also assessed. Results indicated the trajectory of the humerus to be different for the TT, with maximal flexion of the shoulder reduced by 10 degrees, and horizontal abduction 15 degrees higher. No difference was found for external rotation angles, while axial rotation velocities were significantly higher in AT, with a 5% higher ball speed. The results suggest AT as a potential preventive solution for chronic shoulder pathologies, reducing shoulder flexion during spiking. The proposed method allows visualisation of the risks associated with different overhead manoeuvres, by depicting humerus angles and velocities with respect to joint limits in the same 3D space. PMID:26151344

  1. SU-E-J-80: Interplay Effect Between VMAT Intensity Modulation and Tumor Motion in Hypofractioned Lung Treatment, Investigated with 3D Pressage Dosimeter

    SciTech Connect

    Touch, M; Wu, Q; Oldham, M

    2014-06-01

    Purpose: To demonstrate an embedded tissue-equivalent Presage dosimeter for measuring 3D doses in moving tumors and to study the interplay effect between tumor motion and intensity modulation in hypofractionated Volumetric Modulated Arc Therapy (VMAT) lung treatment. Methods: Motion experiments were performed using cylindrical Presage dosimeters (5cm diameter by 7cm length) mounted inside the lung insert of a CIRS thorax phantom. Two different VMAT treatment plans were created and delivered in three different scenarios with the same prescribed dose of 18 Gy. Plan1, containing a 2 centimeter spherical CTV with an additional 2mm setup margin, was delivered on a stationary phantom. Plan2 used the same CTV except expanded by 1 cm in the Sup-Inf direction to generate the ITV and PTV, respectively. The dosimeters were irradiated in static and variable motion scenarios on a Truebeam system. After irradiation, high resolution 3D dosimetry was performed using the Duke Large Field-of-view Optical-CT Scanner, and compared to the calculated dose from Eclipse. Results: In the control case (no motion), good agreement was observed between the planned and delivered dose distributions as indicated by 100% 3D Gamma (3% of maximum planned dose and 3mm DTA) passing rates in the CTV. In the motion cases the gamma passing rate was 99% in the CTV. DVH comparisons also showed good agreement between the planned and delivered dose in the CTV for both control and motion cases. However, differences of 15% and 5% in dose to the PTV were observed in the motion and control cases, respectively. Conclusion: Given the very high dose per fraction of a hypofractionated treatment, a significant effect was observed only when motion was introduced to the target. This can result from the interplay between the moving target and the modulation of the MLC. 3D optical dosimetry can be of great advantage in hypofractionated treatment dose validation studies.
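    The 3%/3 mm gamma criterion quoted above combines a dose-difference test and a distance-to-agreement test (Low et al.). A simplified global 1-D version, written for illustration rather than as a reproduction of the authors' 3-D analysis software:

```python
import numpy as np

def gamma_1d(ref, ev, coords, dose_tol, dist_tol):
    """Simplistic global 1-D gamma index.

    ref, ev: reference / evaluated dose profiles on the same coords grid;
    dose_tol as a fraction of max(ref); dist_tol in the units of coords.
    """
    dd = dose_tol * ref.max()
    gammas = np.empty_like(ref)
    for i, (x_i, d_i) in enumerate(zip(coords, ref)):
        # Minimize the combined dose/distance metric over all evaluated points.
        g2 = ((ev - d_i) / dd) ** 2 + ((coords - x_i) / dist_tol) ** 2
        gammas[i] = np.sqrt(g2.min())
    return gammas

# Toy example: a Gaussian dose profile delivered with a 0.1 mm shift
x = np.linspace(0.0, 10.0, 101)            # position (mm)
ref = np.exp(-((x - 5.0) ** 2))
ev = np.exp(-((x - 5.1) ** 2))
passing = (gamma_1d(ref, ev, x, 0.03, 3.0) <= 1).mean() * 100
```

    A point "passes" when gamma ≤ 1; the paper's 100% and 99% figures are the fraction of CTV points meeting that condition in 3-D.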

  2. Calculating the Probability of Strong Ground Motions Using 3D Seismic Waveform Modeling - SCEC CyberShake

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Callaghan, S.; Graves, R.; Mehta, G.; Zhao, L.; Deelman, E.; Jordan, T. H.; Kesselman, C.; Okaya, D.; Cui, Y.; Field, E.; Gupta, V.; Vahi, K.; Maechling, P. J.

    2006-12-01

    Researchers from the SCEC Community Modeling Environment (SCEC/CME) project are utilizing the CyberShake computational platform and a distributed high performance computing environment that includes USC High Performance Computer Center and the NSF TeraGrid facilities to calculate physics-based probabilistic seismic hazard curves for several sites in the Southern California area. Traditionally, probabilistic seismic hazard analysis (PSHA) is conducted using intensity measure relationships based on empirical attenuation relationships. However, a more physics-based approach using waveform modeling could lead to significant improvements in seismic hazard analysis. Members of the SCEC/CME Project have integrated leading-edge PSHA software tools, SCEC-developed geophysical models, validated anelastic wave modeling software, and state-of-the-art computational technologies on the TeraGrid to calculate probabilistic seismic hazard curves using 3D waveform-based modeling. The CyberShake calculations for a single probabilistic seismic hazard curve require tens of thousands of CPU hours and multiple terabytes of disk storage. The CyberShake workflows are run on high performance computing systems including multiple TeraGrid sites (currently SDSC and NCSA), and the USC Center for High Performance Computing and Communications. To manage the extensive job scheduling and data requirements, CyberShake utilizes a grid-based scientific workflow system based on the Virtual Data System (VDS), the Pegasus meta-scheduler system, and the Globus toolkit. Probabilistic seismic hazard curves for spectral acceleration at 3.0 seconds have been produced for eleven sites in the Southern California region, including rock and basin sites. At low ground motion levels, there is little difference between the CyberShake and attenuation relationship curves. At higher ground motion (lower probability) levels, the curves are similar for some sites (downtown LA, I-5/SR-14 interchange) but different for

  3. Using a motion capture system for spatial localization of EEG electrodes

    PubMed Central

    Reis, Pedro M. R.; Lochmann, Matthias

    2015-01-01

    Electroencephalography (EEG) is often used in source analysis studies, in which the locations of the cortex regions responsible for a signal are determined. For this to be possible, accurate positions of the electrodes at the scalp surface must be determined, otherwise errors in the source estimation will occur. Today, several methods for acquiring these positions exist but they are often not satisfyingly accurate or take a long time to perform. Therefore, in this paper we describe a method capable of determining the positions accurately and quickly. This method uses an infrared light motion capture system (IR-MOCAP) with 8 cameras arranged around a human participant. It acquires 3D coordinates of each electrode and automatically labels them. Each electrode has a small reflector on top of it, thus allowing its detection by the cameras. We tested the accuracy of the presented method by acquiring the electrode positions on a rigid sphere model and comparing these with measurements from computer tomography (CT). The average Euclidean distance between the sphere model CT measurements and the presented method was 1.23 mm with an average standard deviation of 0.51 mm. We also tested the method with a human participant. The measurement was quickly performed and all positions were captured. These results indicate that, with this method, it is possible to acquire electrode positions with minimal error and little time and effort for the study participants and investigators. PMID:25941468
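    The reported accuracy metric — the per-electrode Euclidean distance between the motion-capture coordinates and the CT reference, summarized as a mean and standard deviation — is straightforward to compute. The sketch below uses made-up coordinates purely to show the calculation; the function name is ours.

```python
import numpy as np

def position_errors(measured, reference):
    """Per-electrode Euclidean distances between two (N, 3) coordinate sets."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.linalg.norm(measured - reference, axis=1)

# Hypothetical coordinates (mm): one electrode off by 1 mm, one exact.
errs = position_errors([[0, 0, 0], [1, 1, 1]],
                       [[0, 0, 1], [1, 1, 1]])
mean_err, sd_err = errs.mean(), errs.std()
```

    Note this assumes the two point sets are already expressed in a common coordinate frame; in practice a rigid registration between the motion-capture and CT frames would precede this step.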

  4. Using a motion capture system for spatial localization of EEG electrodes.

    PubMed

    Reis, Pedro M R; Lochmann, Matthias

    2015-01-01

    Electroencephalography (EEG) is often used in source analysis studies, in which the locations of the cortex regions responsible for a signal are determined. For this to be possible, accurate positions of the electrodes at the scalp surface must be determined, otherwise errors in the source estimation will occur. Today, several methods for acquiring these positions exist but they are often not satisfyingly accurate or take a long time to perform. Therefore, in this paper we describe a method capable of determining the positions accurately and quickly. This method uses an infrared light motion capture system (IR-MOCAP) with 8 cameras arranged around a human participant. It acquires 3D coordinates of each electrode and automatically labels them. Each electrode has a small reflector on top of it, thus allowing its detection by the cameras. We tested the accuracy of the presented method by acquiring the electrode positions on a rigid sphere model and comparing these with measurements from computer tomography (CT). The average Euclidean distance between the sphere model CT measurements and the presented method was 1.23 mm with an average standard deviation of 0.51 mm. We also tested the method with a human participant. The measurement was quickly performed and all positions were captured. These results indicate that, with this method, it is possible to acquire electrode positions with minimal error and little time and effort for the study participants and investigators. PMID:25941468

  5. Determining inter-fractional motion of the uterus using 3D ultrasound imaging during radiotherapy for cervical cancer

    NASA Astrophysics Data System (ADS)

    Baker, Mariwan; Jensen, Jørgen Arendt; Behrens, Claus F.

    2014-03-01

    Uterine positional changes can reduce the accuracy of radiotherapy for cervical cancer patients. The purpose of this study was to: 1) quantify the inter-fractional uterine displacement using a novel 3D ultrasound (US) imaging system, and 2) compare the result with the bone match shift determined by Cone-Beam CT (CBCT) imaging. Five cervical cancer patients were enrolled in the study. Three of them underwent weekly CBCT imaging prior to treatment and bone match shift was applied. After treatment delivery they underwent a weekly US scan. The transabdominal scans were conducted using a Clarity US system (Clarity® Model 310C00). Uterine positional shifts based on soft-tissue match using US were performed and compared to bone match shifts for the three directions. Mean values (±1 SD) of the US shifts were (mm): anterior-posterior (A/P): (3.8±5.5), superior-inferior (S/I): (-3.5±5.2), and left-right (L/R): (0.4±4.9). The variations were larger than the CBCT shifts. The largest inter-fractional displacement was from -2 mm to +14 mm in the A/P direction for patient 3. Thus, CBCT bone matching underestimates the uterine positional displacement because it neglects internal uterine positional change relative to the bone structures. Since the US images were significantly better than the CBCT images in terms of soft-tissue visualization, the US system can provide an optional image-guided radiation therapy (IGRT) system. US imaging might be a better IGRT system than CBCT, despite difficulty in capturing the entire uterus. Uterine shifts based on US imaging contain relative uterus-bone displacement, which is not taken into consideration using CBCT bone match.

  6. A specialized motion capture system for real-time analysis of mandibular movements using infrared cameras

    PubMed Central

    2013-01-01

    Background In the last years, several methods and devices have been proposed to record the human mandibular movements, since they provide quantitative parameters that support the diagnosis and treatment of temporomandibular disorders. The techniques currently employed suffer from a number of drawbacks including high price, unnatural to use, lack of support for real-time analysis and mandibular movements recording as a pure rotation. In this paper, we propose a specialized optical motion capture system, which causes a minimum obstruction and can support 3D mandibular movement analysis in real-time. Methods We used three infrared cameras together with nine reflective markers that were placed at key points of the face. Some classical techniques are suggested to conduct the camera calibration and three-dimensional reconstruction and we propose some specialized algorithms to automatically recognize our set of markers and track them along a motion capture session. Results To test the system, we developed a prototype software and performed a clinical experiment in a group of 22 subjects. They were instructed to execute several movements for the functional evaluation of the mandible while the system was employed to record them. The acquired parameters and the reconstructed trajectories were used to confirm the typical function of temporomandibular joint in some subjects and to highlight its abnormal behavior in others. Conclusions The proposed system is an alternative to the existing optical, mechanical, electromagnetic and ultrasonic-based methods, and intends to address some drawbacks of currently available solutions. Its main goal is to assist specialists in diagnostic and treatment of temporomandibular disorders, since simple visual inspection may not be sufficient for a precise assessment of temporomandibular joint and associated muscles. PMID:23433470

  7. Tissue reconstruction in 3D-spheroids from rodent retina in a motion-free, bioreactor-based microstructure.

    PubMed

    Rieke, Matthias; Gottwald, Eric; Weibezahn, Karl-Friedrich; Layer, Paul Gottlob

    2008-12-01

    While conventional rotation culture-based retinal spheroids are most useful to study basic processes of retinogenesis and tissue regeneration, they are less appropriate for an easy and inexpensive mass production of histotypic 3-dimensional tissue spheroids, which will be of utmost importance for future bioengineering, e.g. for replacement of animal experimentation. Here we compared conventionally reaggregated spheroids derived from dissociated retinal cells from neonatal gerbils (Meriones unguiculatus) with spheroids cultured on a novel microscaffold cell chip (called cf-chip) in a motion-free bioreactor. Reaggregation and developmental processes leading to tissue formation, e.g. proliferation, apoptosis and differentiation, were observed during the first 10 days in vitro (div). Remarkably, in each cf-chip micro-chamber, only one spheroid developed. In both culture systems, sphere sizes and proliferation rates were almost identical. However, apoptosis was comparably high only up to 5 div, after which it became negligible in the cf-chip while rising again in the conventional culture. In both systems, immunohistochemical characterisation revealed the presence of Müller glia cells and of ganglion, amacrine, bipolar and horizontal cells in a highly comparable arrangement. In both systems, photoreceptors were detected only in spheroids from P3 retinae. Benefits of the chip-based 3D cell culture were reliable sphere production at enhanced viability, the feasibility of single-sphere observation during the cultivation time, high reproducibility and easy control of culture conditions. Further development of this approach should allow high-throughput systems not only for retinal but also other types of histotypic spheroids, to become suitable for environmental monitoring and biomedical diagnostics. PMID:19023488

  8. Implementation of wireless 3D stereo image capture system and synthesizing the depth of region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Kwon, Hyeokjae; Badarch, Luubaatar

    2014-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We send the raw captured image data to the host computer over a WiFi wireless connection and then use GPU hardware and CUDA programming to implement real-time three-dimensional stereo imaging by synthesizing the depth of the ROI (region of interest). We also investigate the mechanism of deblurring of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI-emphasizing effect.
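    Depth synthesis from a rectified stereo pair such as this one ultimately rests on the pinhole relation Z = f·B/d between disparity and depth. A minimal sketch (the function name and parameter units are our own; the paper's GPU pipeline is not reproduced here):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo: depth Z = f * B / d for a rectified camera pair.

    f_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal pixel offset of the same point in both views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px
```

    Because depth varies inversely with disparity, per-segment disparity values, as analyzed in the paper, map directly onto the perceived depth ordering of the ROI against its background.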

  9. Projectile Motion on an Inclined Misty Surface: I. Capturing and Analysing the Trajectory

    ERIC Educational Resources Information Center

    Ho, S. Y.; Foong, S. K.; Lim, C. H.; Lim, C. C.; Lin, K.; Kuppan, L.

    2009-01-01

    Projectile motion is usually the first non-uniform two-dimensional motion that students will encounter in a pre-university physics course. In this article, we introduce a novel technique for capturing the trajectory of projectile motion on an inclined Perspex plane. This is achieved by coating the Perspex with a thin layer of fine water droplets…
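    The physics behind this experiment is ordinary projectile motion with gravity replaced by its in-plane component g·sin(θ) on the incline. A minimal sketch of the resulting kinematics (assuming a frictionless plane, which the misty surface only approximates):

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def incline_trajectory(v0, launch_angle_deg, incline_deg, t):
    """In-plane position (x, y) at time t for a projectile launched on an
    inclined plane: x across the slope, y up the slope.

    The only in-plane acceleration is gravity's component g*sin(incline)
    directed down-slope, so the path is a parabola exactly as in ordinary
    projectile motion with g replaced by g*sin(incline).
    """
    g_eff = G * math.sin(math.radians(incline_deg))
    a = math.radians(launch_angle_deg)
    x = v0 * math.cos(a) * t                        # across the slope
    y = v0 * math.sin(a) * t - 0.5 * g_eff * t ** 2 # up the slope
    return x, y
```

    At incline 90° this reduces to ordinary vertical-plane projectile motion, and at incline 0° to uniform straight-line motion, which makes the setup a convenient way to "slow down" the parabola for classroom measurement.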

  10. Creation of 3D digital anthropomorphic phantoms which model actual patient non-rigid body motion as determined from MRI and position tracking studies of volunteers

    NASA Astrophysics Data System (ADS)

    Connolly, C. M.; Konik, A.; Dasari, P. K. R.; Segars, P.; Zheng, S.; Johnson, K. L.; Dey, J.; King, M. A.

    2011-03-01

    Patient motion can cause artifacts, which can lead to difficulty in interpretation. The purpose of this study is to create 3D digital anthropomorphic phantoms which model the location of the structures of the chest and upper abdomen of human volunteers undergoing a series of clinically relevant motions. The 3D anatomy is modeled using the XCAT phantom and based on MRI studies. The NURBS surfaces of the XCAT are interactively adapted to fit the MRI studies. A detailed XCAT phantom is first developed from an EKG-triggered Navigator acquisition composed of sagittal slices with a 3 x 3 x 3 mm voxel dimension. Rigid body motion states are then acquired at breath-hold as sagittal slices partially covering the thorax, centered on the heart, with 9 mm gaps between them. For non-rigid body motion requiring greater sampling, modified Navigator sequences covering the entire thorax with 3 mm gaps between slices are obtained. The structures of the initial XCAT are then adapted to fit these different motion states. Simultaneous to MRI imaging, the positions of multiple reflective markers on stretchy bands about the volunteer's chest and abdomen are optically tracked in 3D via stereo imaging. These phantoms with combined position tracking will be used to investigate both imaging-data-driven and motion-tracking strategies to estimate and correct for patient motion. Our initial application will be to cardiac perfusion SPECT imaging where the XCAT phantoms will be used to create patient activity and attenuation distributions for each volunteer with corresponding motion tracking data from the markers on the body-surface. Monte Carlo methods will then be used to simulate SPECT acquisitions, which will be used to evaluate various motion estimation and correction strategies.

  11. Respiratory motion compensation for simultaneous PET/MR based on a 3D-2D registration of strongly undersampled radial MR data: a simulation study

    NASA Astrophysics Data System (ADS)

    Rank, Christopher M.; Heußer, Thorsten; Flach, Barbara; Brehm, Marcus; Kachelrieß, Marc

    2015-03-01

    We propose a new method for PET/MR respiratory motion compensation, which is based on a 3D-2D registration of strongly undersampled MR data and a) runs in parallel with the PET acquisition, b) can be interlaced with clinical MR sequences, and c) requires less than one minute of the total MR acquisition time per bed position. In our simulation study, we applied a 3D encoded radial stack-of-stars sampling scheme with 160 radial spokes per slice and an acquisition time of 38 s. Gated 4D MR images were reconstructed using a 4D iterative reconstruction algorithm. Based on these images, motion vector fields were estimated using our newly developed 3D-2D registration framework. A 4D PET volume of a patient with eight hot lesions in the lungs and upper abdomen was simulated, and motion-compensated (MoCo) 4D PET images were reconstructed based on the motion vector fields derived from MR. For evaluation, average SUVmean values of the artificial lesions were determined for a 3D, a gated 4D, a MoCo 4D and a reference (with ten-fold measurement time) gated 4D reconstruction. Compared to the reference, 3D reconstructions yielded an underestimation of SUVmean values due to motion blurring. In contrast, gated 4D reconstructions showed the highest variation of SUVmean due to low statistics. MoCo 4D reconstructions were only slightly affected by these two sources of uncertainty, resulting in a significant visual and quantitative improvement in terms of SUVmean values. Whereas temporal resolution was comparable to the gated 4D images, signal-to-noise ratio and contrast-to-noise ratio were close to the 3D reconstructions.

  12. Motion capture measures variability in laryngoscopic movement during endotracheal intubation: A preliminary report

    PubMed Central

    Carlson, Jestin N; Das, Samarjit; De la Torre, Fernando; Callaway, Clifton W; Phrampus, Paul E; Hodgins, Jessica

    2012-01-01

    Background: Success rates with emergent endotracheal intubation (ETI) improve with increasing provider experience. Few objective metrics exist to quantify differences in ETI technique between providers of various skill levels. We tested the feasibility of using motion capture videography to quantify variability in the motions of the left hand and the laryngoscope in providers with varying experience. Methods: Three providers with varying levels of experience (an attending physician (experienced), an Emergency Medicine resident (intermediate), and a postdoctoral student with no previous ETI experience (novice)) each performed ETI four times on a mannequin. A 16-camera Vicon system tracked the 3D orientation and movement of markers on the providers, the handle of the laryngoscope, and the mannequin. Attempt duration, path length of the left hand, and the inclination of the plane of the laryngoscope handle (mean squared angular deviation from vertical) were calculated for each laryngoscopy attempt. We compared inter-attempt and inter-provider variability of each measure. Results: All ETI attempts were successful. Mean (SD) duration of laryngoscopy attempts in seconds differed between experienced 5.50 (0.68), intermediate 6.32 (1.13) and novice 12.38 (1.06) providers (p=0.021). Mean path length of the left hand did not differ between providers (p=0.37). Variability of the plane of the laryngoscope (degrees²) differed between providers: 8.3 (experienced), 28.7 (intermediate) and 54.5 (novice). Conclusion: Motion analysis can detect inter-provider differences in hand and laryngoscope movements during ETI, which may be related to provider experience. This technology has the potential to objectively measure training and skill in ETI. PMID:22801254
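The two kinematic measures above, left-hand path length and mean squared angular deviation of the laryngoscope handle from vertical, can be computed directly from tracked marker data. A minimal sketch with hypothetical function names and toy trajectories (not the study's Vicon pipeline):

```python
import numpy as np

def path_length(positions):
    """Total 3D path length of a marker trajectory (N x 3 array, metres)."""
    return float(np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1)))

def mean_squared_deviation_from_vertical(handle_axes):
    """Mean squared angle (degrees²) between the handle axis and vertical.
    handle_axes: N x 3 array of unit vectors along the laryngoscope handle."""
    vertical = np.array([0.0, 0.0, 1.0])
    cosines = np.clip(handle_axes @ vertical, -1.0, 1.0)
    angles_deg = np.degrees(np.arccos(cosines))
    return float(np.mean(angles_deg ** 2))

# Toy data: a straight 10 cm hand move, handle tilted 10 degrees throughout
pos = np.linspace([0.0, 0.0, 0.0], [0.1, 0.0, 0.0], 50)
tilt = np.radians(10.0)
axes = np.tile([np.sin(tilt), 0.0, np.cos(tilt)], (50, 1))
print(round(path_length(pos), 3))                             # 0.1
print(round(mean_squared_deviation_from_vertical(axes), 1))   # 100.0
```

A perfectly vertical handle would score 0; larger values indicate more wobble of the handle plane, matching the inter-provider variability reported above.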

  13. Real-time motion- and B0-correction for LASER-localized spiral-accelerated 3D-MRSI of the brain at 3T

    PubMed Central

    Bogner, Wolfgang; Hess, Aaron T; Gagoski, Borjan; Tisdall, M. Dylan; van der Kouwe, Andre J.W.; Trattnig, Siegfried; Rosen, Bruce; Andronesi, Ovidiu C

    2013-01-01

    The full potential of magnetic resonance spectroscopic imaging (MRSI) is often limited by localization artifacts, motion-related artifacts, scanner instabilities, and long measurement times. Localized adiabatic selective refocusing (LASER) provides accurate B1-insensitive spatial excitation even at high magnetic fields. Spiral encoding accelerates MRSI acquisition and thus enables 3D coverage without compromising spatial resolution. Real-time position- and shim/frequency-tracking using MR navigators corrects motion- and scanner instability-related artifacts. Each of these three advanced MRI techniques provides superior MRSI data compared to commonly used methods. In this work, we integrated these three promising approaches into a single pulse sequence. Real-time correction of motion, shim, and frequency drifts using volumetric dual-contrast echo planar imaging-based navigators was implemented in an MRSI sequence that uses low-power gradient-modulated short-echo-time LASER localization and time-efficient spiral readouts, in order to provide fast and robust 3D-MRSI in the human brain at 3T. The proposed sequence was demonstrated to be insensitive to motion- and scanner drift-related degradations of MRSI data in both phantoms and volunteers. Motion and scanner drift artifacts were eliminated and excellent spectral quality was recovered in the presence of strong movement. Our results confirm the expected benefits of combining a spiral 3D-LASER-MRSI sequence with real-time correction. The new sequence provides accurate, fast, and robust 3D metabolic imaging of the human brain at 3T. This will further facilitate the use of 3D-MRSI for neuroscience and clinical applications. PMID:24201013

  14. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test-bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high-quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps involved in the calibration of the system, as well as the technique of generating human-readable holoscopic images from the recorded data, are discussed.

  15. 3D SMoSIFT: three-dimensional sparse motion scale invariant feature transform for activity recognition from RGB-D videos

    NASA Astrophysics Data System (ADS)

    Wan, Jun; Ruan, Qiuqi; Li, Wei; An, Gaoyun; Zhao, Ruizhen

    2014-03-01

    Human activity recognition based on RGB-D data has received increasing attention in recent years. We propose a spatiotemporal feature named three-dimensional (3D) sparse motion scale-invariant feature transform (SIFT) from RGB-D data for activity recognition. First, we build pyramids as scale space for each RGB and depth frame, and then use the Shi-Tomasi corner detector and sparse optical flow to quickly detect and track robust keypoints around the motion pattern in the scale space. Subsequently, local patches around keypoints, extracted from RGB-D data, are used to build 3D gradient and motion spaces. Then SIFT-like descriptors are calculated on both 3D spaces, respectively. The proposed feature is invariant to scale, transition, and partial occlusions. More importantly, the proposed feature is fast to compute, so that it is well suited for real-time applications. We have evaluated the proposed feature under a bag-of-words model on three public RGB-D datasets: the one-shot learning Chalearn Gesture Dataset, Cornell Activity Dataset-60, and the MSR Daily Activity 3D dataset. Experimental results show that the proposed feature outperforms other spatiotemporal features and is comparable to other state-of-the-art approaches, even though there is only one training sample for each class.
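The per-patch descriptor step, a histogram of gradient orientations weighted by gradient magnitude, can be illustrated in 2D. This is a simplified sketch of the SIFT-like idea only, ignoring the depth and motion channels the paper actually uses:

```python
import numpy as np

def gradient_orientation_histogram(patch, n_bins=8):
    """SIFT-like descriptor for one 2D patch: an L2-normalised histogram of
    gradient orientations weighted by gradient magnitude (a simplified,
    single-channel sketch of the per-patch step in 3D SMoSIFT)."""
    gy, gx = np.gradient(patch.astype(float))      # row- and column-direction gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude per pixel
    ang = np.arctan2(gy, gx) % (2 * np.pi)         # orientation in [0, 2*pi)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A horizontal intensity ramp: all gradient energy falls in one orientation bin
ramp = np.tile(np.arange(10.0), (10, 1))
print(gradient_orientation_histogram(ramp))
```

The full descriptor in the paper concatenates such histograms over subregions of both the 3D gradient space and the 3D motion space.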

  16. Substantiating Appropriate Motion Capture Techniques for the Assessment of Nordic Walking Gait and Posture in Older Adults.

    PubMed

    Dalton, Christopher M; Nantel, Julie

    2016-01-01

    Nordic walking (NW) has become a safe and simple form of exercise in recent years, and in studying this gait pattern, various data collection techniques have been employed, each with positives and negatives. The aim was to determine the effect of NW on older adult gait and posture and to determine the optimal use of different data collection systems in both short and long duration analysis. Gait and posture during NW and normal walking were assessed in 17 healthy older adults (age: 69 ± 7.3 years). Participants performed two trials of the 6 Minute Walk Test (6MWT) (1 with poles (WP) and 1 without poles (NP)) and 6 trials of a 5 m walk (3 WP and 3 NP). Motion was recorded using two systems, a 6-sensor accelerometry system and an 8-camera 3-dimensional motion capture system, in order to quantify spatial-temporal, kinematic, and kinetic parameters. With both systems, participants demonstrated increased stride length and double support time and decreased gait speed and cadence WP compared to NP (p < 0.05). Also, with motion capture, larger single support time was found WP (p < 0.05). With 3D capture, smaller hip power generation and moments of force were found at heel contact and pre-swing, as well as smaller knee power absorption at heel contact, pre-swing, and terminal swing WP compared to NP, when assessed over one cycle (p < 0.05). Also, WP yielded smaller moments of force at heel contact and terminal swing along with larger moments at mid-stance of a gait cycle (p < 0.05). No changes were found for posture. NW seems appropriate for promoting a normal gait pattern in older adults. Three-dimensional motion capture should primarily be used during short duration gait analysis (i.e. a single gait cycle), while accelerometry systems should primarily be employed in instances requiring longer duration analysis such as the 6MWT. PMID:27214263
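Spatial-temporal parameters such as cadence and gait speed follow directly from heel-strike timing and stride length, whichever system records them. A minimal sketch (hypothetical helper, not either system's software):

```python
import numpy as np

def gait_parameters(heel_strikes_s, stride_length_m):
    """Spatial-temporal gait parameters from successive heel-strike times of
    the same foot (seconds) and an average stride length (metres)."""
    stride_time = float(np.mean(np.diff(heel_strikes_s)))  # one full gait cycle
    return {
        "cadence_steps_per_min": 2 * 60.0 / stride_time,   # two steps per stride
        "gait_speed_m_per_s": stride_length_m / stride_time,
    }

# Toy example: heel strikes every 1.1 s, 1.25 m strides
params = gait_parameters([0.0, 1.1, 2.2, 3.3], 1.25)
print({k: round(v, 2) for k, v in params.items()})
```

Under the study's findings, pole walking would show a longer stride length together with a longer stride time, hence lower cadence and gait speed.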

  17. Effect of Task-Correlated Physiological Fluctuations and Motion in 2D and 3D Echo-Planar Imaging in a Higher Cognitive Level fMRI Paradigm

    PubMed Central

    Ladstein, Jarle; Evensmoen, Hallvard R.; Håberg, Asta K.; Kristoffersen, Anders; Goa, Pål E.

    2016-01-01

    Purpose: To compare 2D and 3D echo-planar imaging (EPI) in a higher cognitive level fMRI paradigm; in particular, to study the link between the presence of task-correlated physiological fluctuations and motion and the fMRI contrast estimates from either 2D EPI or 3D EPI datasets, with and without adding nuisance regressors to the model. A signal model in the presence of partly task-correlated fluctuations is derived, and predictions for contrast estimates with and without nuisance regressors are made. Materials and Methods: Thirty-one healthy volunteers were scanned using 2D EPI and 3D EPI during a virtual environmental learning paradigm. In a subgroup of 7 subjects, heart rate and respiration were logged, and the correlation with the paradigm was evaluated. fMRI analysis was performed using models with and without nuisance regressors. Differences in the mean contrast estimates were investigated by analysis of variance using Subject, Sequence, Day, and Run as factors. The distributions of group-level contrast estimates were compared. Results: Partially task-correlated fluctuations in respiration, heart rate and motion were observed. Statistically significant differences were found in the mean contrast estimates between 2D EPI and 3D EPI when using a model without nuisance regressors. The inclusion of nuisance regressors for cardiorespiratory effects and motion reduced the difference to a statistically non-significant level. Furthermore, the contrast estimate values shifted more when including nuisance regressors for 3D EPI compared to 2D EPI. Conclusion: The results are consistent with 3D EPI having a higher sensitivity to fluctuations compared to 2D EPI. In the presence of partially task-correlated physiological fluctuations or motion, proper correction is necessary to obtain expectation-correct contrast estimates when using 3D EPI. Since such task-correlated fluctuations and motion are difficult to avoid in paradigms exploring higher cognitive functions, this is particularly relevant for 3D EPI studies of higher cognition.
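The effect of nuisance regressors described above can be illustrated with a small ordinary-least-squares sketch (hypothetical synthetic data, not the study's actual GLM): omitting a regressor that is partly correlated with the task biases the task contrast estimate, while including it recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
task = (np.sin(np.linspace(0, 8 * np.pi, n)) > 0).astype(float)  # boxcar task regressor
nuisance = 0.6 * task + 0.4 * rng.standard_normal(n)             # partly task-correlated
y = 1.0 * task + 2.0 * nuisance + 0.5 * rng.standard_normal(n)   # true task effect = 1.0

def contrast_estimate(regressors, y):
    """OLS estimate of the first regressor's effect, with an intercept column."""
    X = np.column_stack(regressors + [np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[0])

print(contrast_estimate([task], y))            # inflated: absorbs the correlated nuisance
print(contrast_estimate([task, nuisance], y))  # close to the true effect of 1.0
```

The study's point is that the 3D EPI estimates move more than the 2D EPI ones when such regressors are added, consistent with a higher sensitivity to the fluctuations.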

  18. Computer numerical control (CNC) lithography: light-motion synchronized UV-LED lithography for 3D microfabrication

    NASA Astrophysics Data System (ADS)

    Kim, Jungkwun; Yoon, Yong-Kyu; Allen, Mark G.

    2016-03-01

    This paper presents a computer-numerical-controlled ultraviolet light-emitting diode (CNC UV-LED) lithography scheme for three-dimensional (3D) microfabrication. The CNC lithography scheme utilizes sequential multi-angled UV light exposures along with a synchronized switchable UV light source to create arbitrary 3D light traces, which are transferred into the photosensitive resist. The system comprises a switchable, movable UV-LED array as a light source, a motorized tilt-rotational sample holder, and a computer-control unit. System operation is such that the tilt-rotational sample holder moves in a pre-programmed routine, and the UV-LED is illuminated only at desired positions of the sample holder during the desired time period, enabling the formation of complex 3D microstructures. This facilitates easy fabrication of complex 3D structures, which otherwise would have required multiple manual exposure steps as in the previous multidirectional 3D UV lithography approach. Since it is batch processed, processing time is far less than that of the 3D printing approach at the expense of some reduction in the degree of achievable 3D structure complexity. In order to produce uniform light intensity from the arrayed LED light source, the UV-LED array stage has been kept rotating during exposure. UV-LED 3D fabrication capability was demonstrated through a plurality of complex structures such as V-shaped micropillars, micropanels, a micro-‘hi’ structure, a micro-‘cat’s claw,’ a micro-‘horn,’ a micro-‘calla lily,’ a micro-‘cowboy’s hat,’ and a micro-‘table napkin’ array.
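The light-motion synchronisation described above can be sketched as a simple control loop. The `Step` fields and the `move_stage`/`set_led` callbacks are hypothetical stand-ins for real stage and LED drivers, not the authors' control software:

```python
import time
from dataclasses import dataclass

@dataclass
class Step:
    tilt_deg: float
    rotation_deg: float
    exposure_s: float   # 0 means reposition without exposing

def run_cnc_exposure(steps, move_stage, set_led):
    """Light-motion synchronisation loop: the sample holder moves through a
    pre-programmed routine and the UV-LED is switched on only at the desired
    poses, for the desired durations."""
    for s in steps:
        move_stage(s.tilt_deg, s.rotation_deg)
        if s.exposure_s > 0:
            set_led(True)
            time.sleep(s.exposure_s)
            set_led(False)

# Dry run with stub hardware callbacks that just log events
events = []
routine = [Step(45.0, 0.0, 0.0), Step(45.0, 90.0, 0.001)]
run_cnc_exposure(routine,
                 lambda t, r: events.append(("move", t, r)),
                 lambda on: events.append(("led", on)))
print(events)
```

Sequencing many such steps with different tilt/rotation poses is what lets a single resist layer accumulate the arbitrary 3D light traces described in the abstract.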

  19. Filling gaps in visual motion for target capture

    PubMed Central

    Bosco, Gianfranco; Monache, Sergio Delle; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco

    2015-01-01

    A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637

  20. Capturing 3D resistivity of semi-arid karstic subsurface in varying moisture conditions using a wireless sensor network

    NASA Astrophysics Data System (ADS)

    Barnhart, K.; Oden, C. P.

    2012-12-01

    The dissolution of soluble bedrock results in surface and subterranean karst channels, which comprise 7-10% of the Earth's dry land surface. Karst serves as a preferential conduit to focus surface and subsurface water, but it is difficult to exploit as a water resource or protect from pollution because of its irregular structure and nonlinear hydrodynamic behavior. Geophysical characterization of karst commonly employs resistivity and seismic methods, but difficulties arise due to low resistivity contrast in arid environments and insufficient resolution of complex heterogeneous structures. To help reduce these difficulties, we employ a state-of-the-art wireless geophysical sensor array, which combines low-power radio telemetry and solar energy harvesting to enable long-term in-situ monitoring. The wireless aspect removes topological constraints common with standard wired resistivity equipment, which facilitates better coverage and/or sensor density to help improve aspect ratio and resolution. Continuous in-situ deployment allows data to be recorded on nature's time scale; measurements are made during infrequent precipitation events, which can increase resistivity contrast. The array is coordinated by a smart wireless bridge that continuously monitors local soil moisture content to detect when precipitation occurs, schedules resistivity surveys, and periodically relays data to the cloud via 3G cellular service. Traditional 2/3D gravity and seismic reflection surveys have also been conducted to clarify and corroborate results.

  1. Low-cost human motion capture system for postural analysis onboard ships

    NASA Astrophysics Data System (ADS)

    Nocerino, Erica; Ackermann, Sebastiano; Del Pizzo, Silvio; Menna, Fabio; Troisi, Salvatore

    2011-07-01

    The study of human equilibrium, also known as postural stability, concerns different research sectors (medicine, kinesiology, biomechanics, robotics, sport) and is usually performed employing motion analysis techniques for recording human movements and posture. A wide range of techniques and methodologies has been developed, but the choice of instrumentation and sensors depends on the requirements of the specific application. Postural stability is a topic of great interest for the maritime community, since ship motions can make maintaining an upright stance demanding and difficult, with hazardous consequences for the safety of people onboard. The need to capture the motion of an individual standing on a ship during its daily service does not permit the use of the optical systems commonly used for human motion analysis. These sensors are not designed to operate in disadvantageous environmental conditions (water, wetness, saltiness) or under suboptimal lighting. The solution proposed in this study consists of a motion acquisition system that can be easily used onboard ships. It makes use of two different methodologies: (I) motion capture with videogrammetry and (II) motion measurement with an Inertial Measurement Unit (IMU). The developed image-based motion capture system, made up of three low-cost, light and compact video cameras, was validated against a commercial optical system and then used for testing the reliability of the inertial sensors. In this paper, the whole process of planning, designing, calibrating, and assessing the accuracy of the motion capture system is reported and discussed. Results from laboratory tests and preliminary campaigns in the field are presented.
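A straightforward way to validate a low-cost system against a commercial optical reference is a per-sample RMSE between synchronised marker trajectories. A minimal sketch with toy data (not the authors' actual validation protocol):

```python
import numpy as np

def rmse(traj_a, traj_b):
    """Root-mean-square of the per-sample Euclidean distance between two
    synchronised N x 3 marker trajectories (same units, e.g. metres)."""
    d = np.linalg.norm(np.asarray(traj_a) - np.asarray(traj_b), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Toy data: reference trajectory vs. a system with a constant 2 mm offset per axis
ref = np.zeros((100, 3))
low_cost = np.full((100, 3), 0.002)
print(round(rmse(ref, low_cost) * 1000, 2), "mm")
```

In practice the two systems' streams would first be time-aligned and expressed in a common reference frame before computing the residuals.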

  2. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm.

    PubMed

    Molaei, Mehdi; Sheng, Jian

    2014-12-29

    Better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3D E. coli locations in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over a substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177
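The spirit of background suppression in a hologram stack can be shown with a much simpler temporal-median subtraction. This is only an illustrative stand-in, not the paper's correlation-based algorithm:

```python
import numpy as np

def remove_static_background(holograms):
    """Simplified background suppression for a hologram stack (T x H x W):
    subtracting the per-pixel temporal median removes the static background,
    while a swimming cell, present at any given pixel only briefly, survives.
    (Stand-in sketch; the paper's correlation-based de-noising is more
    sophisticated.)"""
    background = np.median(holograms, axis=0)
    return holograms - background

# Toy stack: constant background of 5 with one bright "cell" moving down column 0
stack = np.full((7, 4, 4), 5.0)
for t in range(7):
    stack[t, t % 4, 0] += 3.0
cleaned = remove_static_background(stack)
print(cleaned[0, :, 0])   # cell signal preserved, background removed
```

The weak scattering of bacteria is exactly why such background removal matters: the cell signal is small compared with the static fringes it rides on.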

  3. Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm

    PubMed Central

    Molaei, Mehdi; Sheng, Jian

    2014-01-01

    Better understanding of bacteria-environment interactions in the context of biofilm formation requires accurate 3-dimensional measurements of bacterial motility. Digital Holographic Microscopy (DHM) has demonstrated its capability in resolving the 3D distribution and mobility of particulates in a dense suspension. Due to their low scattering efficiency, bacteria are substantially more difficult to image with DHM. In this paper, we introduce a novel correlation-based de-noising algorithm to remove the background noise and enhance the quality of the hologram. Implemented in conjunction with DHM, we demonstrate that the method allows DHM to resolve 3D E. coli locations in a dense suspension (>10^7 cells/ml) with submicron resolution (<0.5 µm) over a substantial depth and to obtain thousands of 3D cell trajectories. PMID:25607177

  4. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model

    PubMed Central

    Song, Jin-Myoung; Cho, Jin-Hyoung

    2016-01-01

    Purpose: The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scans. Materials and Methods: Fifteen dry skulls were incorporated into a motion controller which simulated four types of head motion during the CBCT scan: 2 horizontal rotations (to the right/to the left) and 2 vertical rotations (upward/downward). Each movement was triggered by remote control to occur for 1 second at the start of the scan. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Results: Rendered surface models with head motion were similar to the control model in appearance; however, the landmark identification errors were larger in models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). A statistically significant difference in error between the right- and left-side landmarks was present for the left-side rotation, which was opposite in direction to the scanner rotation (P < .05). Conclusions: Patient movement during a CBCT scan can cause landmark identification errors on the 3D surface model in relation to the direction of the scanner rotation. Clinicians should take this into consideration and prevent patient movement during CBCT scans, particularly horizontal movement. PMID:27065238
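Landmark identification error here is the Euclidean distance between the same landmark's coordinates on a motion-affected surface model and on the control model. A minimal sketch with hypothetical landmark names and coordinates:

```python
import numpy as np

def landmark_errors(control, motion):
    """Per-landmark Euclidean distance between landmark coordinates identified
    on the control surface model and on a motion-affected model.
    Both inputs: dicts mapping landmark name -> (x, y, z), e.g. in mm."""
    return {name: float(np.linalg.norm(np.subtract(motion[name], control[name])))
            for name in control}

# Hypothetical coordinates (mm) for two landmarks on control vs. motion models
control = {"Porion_R": (65.0, -10.0, 30.0), "Nasion": (0.0, 80.0, 35.0)}
horizontal_rotation = {"Porion_R": (66.2, -10.9, 30.0), "Nasion": (0.3, 80.4, 35.0)}
errors = landmark_errors(control, horizontal_rotation)
print({k: round(v, 2) for k, v in errors.items()})
```

Aggregating such distances per landmark and per motion type is what supports the study's comparison between rotation directions.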

  5. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt-sensor to measure the probe orientation. Real-time 3D visualization was realized by developing an extended version of a PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues has been continuously generated and visualized as a projection image, together with the ultrasonic movie in the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the circle of Willis and the cerebellar arteries, in which pediatricians take great interest because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  6. Topology dictionary for 3D video understanding.

    PubMed

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary. PMID:22745004
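The Markov motion graph over topology change states can be estimated from per-frame topology class labels. A minimal sketch (the function name and integer label encoding are assumptions, not the paper's implementation):

```python
import numpy as np

def transition_matrix(labels, n_states):
    """Markov motion-graph estimate: row-normalised counts of transitions
    between consecutive per-frame topology class labels (integers)."""
    M = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        M[a, b] += 1
    rows = M.sum(axis=1, keepdims=True)
    return np.divide(M, rows, out=np.zeros_like(M), where=rows > 0)

# Toy sequence of topology classes (e.g. from Reeb-graph classification)
labels = [0, 0, 1, 1, 0, 2, 2, 0]
print(transition_matrix(labels, 3))
```

In the actual system the states come from Reeb-graph-based shape classification, and the resulting graph supports summarization and content-based event recognition.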

  7. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs

    PubMed Central

    Burns, JHR; Delparte, D; Gates, RD; Takabayashi, M

    2015-01-01

    The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190

  8. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs.

    PubMed

    Burns, Jhr; Delparte, D; Gates, R D; Takabayashi, M

    2015-01-01

    The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190
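One widely used 3D structural-complexity metric derivable from such an SfM digital elevation model is surface rugosity, the ratio of 3D surface area to planar area. A minimal sketch assuming a regular DEM grid (illustrative only, not the authors' geospatial toolchain):

```python
import numpy as np

def surface_rugosity(dem, cell=1.0):
    """Linear surface rugosity of a DEM: 3D surface area divided by planar
    area. dem: 2D array of elevations; cell: grid spacing (same units).
    Each grid square is split into two triangles whose areas are summed."""
    area3d = 0.0
    for i in range(dem.shape[0] - 1):
        for j in range(dem.shape[1] - 1):
            # the four corners of one grid cell, lifted to their elevations
            p00 = np.array([0.0, 0.0, dem[i, j]])
            p10 = np.array([cell, 0.0, dem[i, j + 1]])
            p01 = np.array([0.0, cell, dem[i + 1, j]])
            p11 = np.array([cell, cell, dem[i + 1, j + 1]])
            area3d += 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
            area3d += 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
    planar = (dem.shape[0] - 1) * (dem.shape[1] - 1) * cell ** 2
    return area3d / planar

print(round(surface_rugosity(np.zeros((5, 5))), 2))   # 1.0 for a flat surface
```

Values above 1.0 indicate increasing topographic complexity, which is the property the study links to reef biodiversity and function.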

  9. The next chapter in experimental petrology: Metamorphic dehydration of polycrystalline gypsum captured in 3D microtomographic time series datasets

    NASA Astrophysics Data System (ADS)

    Bedford, John; Fusseis, Florian; Leclere, Henry; Wheeler, John; Faulkner, Dan

    2016-04-01

    Nucleation and growth of new minerals in response to disequilibrium is the most fundamental metamorphic process. However, our current kinetic models of metamorphic reactions are largely based on inference from fossil mineral assemblages, rather than from direct observation. The experimental investigation of metamorphism has also been limited, typically to concealed vessels that restrict the possibility of direct microstructural monitoring. Here we present one of the first time series datasets that captures a metamorphic reaction, dehydration of polycrystalline gypsum to form hemihydrate, in a series of three dimensional x-ray microtomographic datasets. We achieved this by installing an x-ray transparent hydrothermal cell (Fusseis et al., 2014, J. Synchrotron Rad. 21, 251-253) in the microtomography beamline 2BM at the Advanced Photon Source (USA). In the cell, we heated a millimetre-sized sample of Volterra Alabaster to 388 K while applying an effective pressure of 5 MPa. Using hard x-rays that penetrate the pressure vessel, we imaged the specimen 40 times while it reacted for approximately 10 hours. Each microtomographic dataset was acquired in 300 seconds without interrupting the reaction. Our absorption microtomographic data have a voxel size of 1.3 μm, which suffices to analyse the reaction progress in 4D. Gypsum can clearly be distinguished from hemihydrate and pores, which form due to the large negative solid volume change. On the resolved scale, the first hemihydrate needles appear after about 2 hours. Our data allow tracking of individual needles throughout the entire experiment. We quantified their growth rates by measuring their circumference. While individual grains grow at different rates, they all start slowly during the initial nucleation stage, then accelerate and grow steadily between about 200 and 400 minutes before reaction rate decelerates again. Hemihydrate needles are surrounded by porous haloes, which grow with the needles, link up and

  10. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. Interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference method to solve the 2D depth-averaged linear and nonlinear shallow water equations (NSWE) for long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and the 2D depth-averaged NSWE equations, two benchmark problems are applied. The first benchmark problem investigates the runup of long waves over a complex 3D beach; the experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The second benchmark problem was presented at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA; it is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons show that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and the 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT
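
To make the depth-averaged approach concrete, here is a minimal 1D *linearized* shallow-water update on a staggered grid. This is only an illustration of the finite-difference principle; real tsunami codes such as NAMI DANCE solve the nonlinear 2D equations with proper boundary, bathymetry and friction terms, and all names and values below are illustrative.

```python
# One explicit time step of the linear shallow water equations in 1D:
#   du/dt  = -g * d(eta)/dx        (momentum)
#   d(eta)/dt = -h0 * du/dx        (continuity, flat bottom of depth h0)

G = 9.81  # gravitational acceleration, m/s^2

def step(eta, u, h0, dx, dt):
    """Advance surface elevation eta (cell centres) and velocity u
    (cell faces) by one time step on a staggered grid."""
    n = len(eta)
    new_u = u[:]
    for i in range(1, n):                 # momentum update
        new_u[i] = u[i] - G * dt / dx * (eta[i] - eta[i - 1])
    new_eta = eta[:]
    for i in range(n - 1):                # continuity update
        new_eta[i] = eta[i] - h0 * dt / dx * (new_u[i + 1] - new_u[i])
    return new_eta, new_u

# A small initial hump over a 100 m deep basin starts to spread symmetrically.
eta = [0.0, 0.0, 1.0, 0.0, 0.0]
u = [0.0] * 5
eta, u = step(eta, u, h0=100.0, dx=1000.0, dt=1.0)
print(eta)
```

The time step satisfies the CFL condition (sqrt(g*h0)*dt/dx ≈ 0.03), so the explicit scheme is stable.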

  11. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    NASA Astrophysics Data System (ADS)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H.; Meeks, Sanford L.; Kupelian, Patrick A.

    2010-09-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the dose delivered to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated with the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation dose deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and of the surrounding tissues was simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
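
The per-beam dose summation over discrete motion steps can be sketched as below. `dose_at_step` and the uniform weighting are illustrative stand-ins for the framework's GPU dose lookup, not the paper's implementation:

```python
# Toy sketch: accumulate delivered dose over discrete tumor-motion steps.

NUM_STEPS = 20  # discrete tumor positions per breath, as in the paper

def dose_at_step(step, beam_dose=0.1):
    """Illustrative per-step dose: the planned beam dose split uniformly
    across the motion steps of one breath (real weights would come from
    the treatment plan and the deformation model)."""
    return beam_dose / NUM_STEPS

def delivered_dose(num_breaths):
    """Sum the per-step dose over every motion step of every breath."""
    total = 0.0
    for _ in range(num_breaths):
        for step in range(NUM_STEPS):
            total += dose_at_step(step)
    return total

print(delivered_dose(num_breaths=5))  # 5 breaths x 0.1 per breath
```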

  12. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR), from computed tomography volume data with planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should provide data on organ position in less than a second, has not yet been achieved. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels from a CT dataset of 53 MB at a pace of almost 100 Hz. This rendering rate is achieved by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance, as well as their influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399
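
At its core, a DRR is a ray sum of attenuation through the CT volume. The orthographic pure-Python sketch below illustrates only that principle; the paper's GPU renderers use perspective projection, wobbled splatting and sub-sampling for speed, and everything here (names, the toy volume) is illustrative:

```python
# Minimal orthographic DRR: project a voxel volume along one axis,
# each image pixel being the sum of voxel values along its ray.

def make_volume(nx, ny, nz):
    """Toy CT volume: a dense cube embedded in air (zeros)."""
    vol = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for x in range(nx // 4, 3 * nx // 4):
        for y in range(ny // 4, 3 * ny // 4):
            for z in range(nz // 4, 3 * nz // 4):
                vol[x][y][z] = 1.0
    return vol

def drr(vol):
    """Project the volume along z: each pixel is the ray sum of voxels."""
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    return [[sum(vol[x][y][z] for z in range(nz)) for y in range(ny)]
            for x in range(nx)]

image = drr(make_volume(8, 8, 8))
print(image[4][4])  # the central ray crosses 4 dense voxels -> 4.0
```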

  13. Impact of assimilation of INSAT-3D retrieved atmospheric motion vectors on short-range forecast of summer monsoon 2014 over the South Asian region

    NASA Astrophysics Data System (ADS)

    Kumar, Prashant; Deb, Sanjib K.; Kishtawal, C. M.; Pal, P. K.

    2016-01-01

    The Weather Research and Forecasting (WRF) model and its three-dimensional variational data assimilation system are used in this study to assimilate atmospheric motion vectors (AMVs) derived from INSAT-3D, a recently launched Indian geostationary meteorological satellite, over the South Asian region during a peak Indian summer monsoon month (July 2014). A total of four experiments were performed daily, with and without assimilation of the INSAT-3D-derived AMVs and of the other AMVs available through the Global Telecommunication System (GTS), for the entire month of July 2014. Before assimilating these newly derived INSAT-3D AMVs in the numerical model, a preliminary evaluation of the AMVs was performed against National Centers for Environmental Prediction (NCEP) final model analyses. The preliminary validation shows that the root-mean-square vector difference (RMSVD) for INSAT-3D AMVs is ~3.95, 6.66, and 5.65 m s^-1 at low, mid, and high levels, respectively; slightly larger RMSVDs are found for GTS AMVs (~4.0, 8.01, and 6.43 m s^-1 at low, mid, and high levels, respectively). The assimilation of AMVs improved the WRF model-produced wind speed, temperature, and moisture analyses as well as the subsequent model forecasts over the Indian Ocean, Arabian Sea, Australia, and South Africa. Slightly larger improvements are noticed in the experiment where only the INSAT-3D AMVs are assimilated compared to the experiment where only GTS AMVs are assimilated. The results also show improvement in rainfall predictions over the Indian region after AMV assimilation. Overall, the assimilation of INSAT-3D AMVs improved the WRF model short-range predictions over the South Asian region as compared to the control experiments.
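
The RMSVD statistic used in the validation compares wind vectors component-wise. A minimal sketch (the sample winds are illustrative, not data from the study):

```python
import math

# Root-mean-square vector difference between AMV winds and reference
# (analysis) winds, each given as (u, v) components in m/s.

def rmsvd(amv, ref):
    sq = [(u1 - u2) ** 2 + (v1 - v2) ** 2
          for (u1, v1), (u2, v2) in zip(amv, ref)]
    return math.sqrt(sum(sq) / len(sq))

print(rmsvd([(10.0, 0.0), (0.0, 5.0)], [(7.0, 4.0), (0.0, 5.0)]))
```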

  14. Automated 3D architecture reconstruction from photogrammetric structure-and-motion: A case study of the One Pillar Pagoda, Hanoi, Vietnam

    NASA Astrophysics Data System (ADS)

    To, T.; Nguyen, D.; Tran, G.

    2015-04-01

    Vietnam's heritage sites have been declining because of poor conservation conditions. Sustainable development requires firm control, spatial planning and reasonable investment. In the field of Cultural Heritage, automated photogrammetric systems based on Structure from Motion (SfM) techniques are widely used. With the potential for high resolution, low cost, a large field of view, ease of use, rapidity and completeness, the derivation of 3D metric information from Structure-and-Motion images is receiving great attention. In addition, heritage objects are recorded as 3D physical models not only for documentation, but also for historical interpretation, restoration, and cultural and educational purposes. This study presents the archaeological documentation of the One Pillar Pagoda in Hanoi, the capital of Vietnam. The data were acquired with a Canon EOS 550D digital camera (CMOS APS-C sensor, 22.3 x 14.9 mm). Camera calibration and orientation were carried out with the VisualSFM, CMPMVS (Multi-View Reconstruction) and SURE (Photogrammetric Surface Reconstruction from Imagery) software. The final result is a scaled 3D model of the One Pillar Pagoda, displayed in different views in the MeshLab software.

  15. A 3D graphene oxide microchip and a Au-enwrapped silica nanocomposite-based supersandwich cytosensor toward capture and analysis of circulating tumor cells

    NASA Astrophysics Data System (ADS)

    Li, Na; Xiao, Tingyu; Zhang, Zhengtao; He, Rongxiang; Wen, Dan; Cao, Yiping; Zhang, Weiying; Chen, Yong

    2015-10-01

    Determination of the presence and number of circulating tumor cells (CTCs) in peripheral blood can provide clinically important data for prognosis and therapeutic response patterns. In this study, a versatile supersandwich cytosensor was successfully developed for the highly sensitive and selective analysis of CTCs using Au-enwrapped silica nanocomposites (Si/AuNPs) and three-dimensional (3D) microchips. First, 3D microchips were fabricated by a photolithography method. Then, the prepared substrate was applied to bind graphene oxide, streptavidin and biotinylated epithelial-cell adhesion-molecule antibody, resulting in high stability, bioactivity, and capability for CTCs capture. Furthermore, horseradish peroxidase and anti-CA153 were co-linked to the Si/AuNPs for signal amplification. The performance of the cytosensor was evaluated with MCF7 breast cancer cells. Under optimal conditions, the proposed supersandwich cytosensor showed high sensitivity with a wide range of 10^1 to 10^7 cells per mL and a detection limit of 10 cells per mL. More importantly, it could effectively distinguish CTCs from normal cells, which indicated the promising applications of our method for the clinical diagnosis and therapeutic monitoring of cancers.

  16. A 3D graphene oxide microchip and a Au-enwrapped silica nanocomposite-based supersandwich cytosensor toward capture and analysis of circulating tumor cells.

    PubMed

    Li, Na; Xiao, Tingyu; Zhang, Zhengtao; He, Rongxiang; Wen, Dan; Cao, Yiping; Zhang, Weiying; Chen, Yong

    2015-10-21

    Determination of the presence and number of circulating tumor cells (CTCs) in peripheral blood can provide clinically important data for prognosis and therapeutic response patterns. In this study, a versatile supersandwich cytosensor was successfully developed for the highly sensitive and selective analysis of CTCs using Au-enwrapped silica nanocomposites (Si/AuNPs) and three-dimensional (3D) microchips. First, 3D microchips were fabricated by a photolithography method. Then, the prepared substrate was applied to bind graphene oxide, streptavidin and biotinylated epithelial-cell adhesion-molecule antibody, resulting in high stability, bioactivity, and capability for CTCs capture. Furthermore, horseradish peroxidase and anti-CA153 were co-linked to the Si/AuNPs for signal amplification. The performance of the cytosensor was evaluated with MCF7 breast cancer cells. Under optimal conditions, the proposed supersandwich cytosensor showed high sensitivity with a wide range of 10^1 to 10^7 cells per mL and a detection limit of 10 cells per mL. More importantly, it could effectively distinguish CTCs from normal cells, which indicated the promising applications of our method for the clinical diagnosis and therapeutic monitoring of cancers. PMID:26391313

  17. An automated time and hand motion analysis based on planar motion capture extended to a virtual environment

    NASA Astrophysics Data System (ADS)

    Tinoco, Hector A.; Ovalle, Alex M.; Vargas, Carlos A.; Cardona, María J.

    2015-03-01

    In the context of industrial engineering, predetermined time systems (PTS) play an important role in workplaces because inefficiencies are found in assembly processes that require manual manipulation. In this study, an approach is proposed for analyzing time and motion in a manual process using a motion capture system embedded in a virtual environment. The motion capture system tracks passive IR markers located on the hands to record the position of each one. For our purpose, a real workplace is represented virtually by domains built from basic geometries. The captured motion data are combined with the virtual workplace to simulate the operations carried out in it, and a time and motion analysis is completed by means of an algorithm. To test the methodology, a case study was intentionally designed that both applies and violates the principles of motion economy. In the results, it was possible to observe where the hands never crossed as well as where the hands passed through the same place. In addition, the activities done in each zone were observed, and some known deficiencies in the layout of the workplace were identified by computational analysis. Using a frequency analysis of hand velocities, errors in the chosen assembly method were revealed, showing differences in the hand velocities. We see an opportunity to quantify aspects that are not easily identified in a traditional time and motion analysis; the automated analysis is considered the main contribution of this study. In the industrial context, the proposed methodology has clear applications in monitoring the workplace to analyze repeatability, PTS, and the redistribution of workplace and labor activities.
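
Deriving hand velocities from tracked marker positions, the input to the frequency analysis mentioned above, is a simple finite difference over frames. A sketch with illustrative names and values:

```python
import math

# Instantaneous hand speed from marker positions sampled at a fixed rate.

def speeds(positions, fps):
    """positions: list of (x, y) marker positions in metres per frame;
    fps: sampling rate. Returns the speed (m/s) between consecutive frames."""
    return [math.dist(a, b) * fps for a, b in zip(positions, positions[1:])]

track = [(0.0, 0.0), (0.1, 0.0), (0.3, 0.0)]  # toy 3-frame hand trajectory
print(speeds(track, fps=10))  # roughly [1.0, 2.0] m/s
```

A histogram of such speeds over a work cycle is then what reveals differences between assembly methods.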

  18. Flexible CNT-array double helices Strain Sensor with high stretchability for Motion Capture.

    PubMed

    Li, Cheng; Cui, Ya-Long; Tian, Gui-Li; Shu, Yi; Wang, Xue-Feng; Tian, He; Yang, Yi; Wei, Fei; Ren, Tian-Ling

    2015-01-01

    Motion capture is attracting more and more attention due to its potential wide applications in various fields. However, traditional methods for motion capture still have weaknesses such as high cost and space consumption. Based on these considerations, a flexible, highly stretchable strain sensor with a high gauge factor for motion capture is fabricated with carbon nanotube (CNT) array double helices as the main building block. Owing to the unique flexible double helical CNT-array matrix, the strain sensor is able to measure strain up to 410% with low hysteresis. Moreover, a demonstration of using this strain sensor to capture hand motion and to control a mechanical hand in real time is also achieved. A model based on the finite difference method is also developed to help understand the mechanism of the strain sensors. Our work demonstrates that strain sensors can measure very large strain while maintaining high sensitivity, and motion capture based on this strain sensor is expected to be less expensive, more convenient and more accessible. PMID:26530904

  19. Flexible CNT-array double helices Strain Sensor with high stretchability for Motion Capture

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Cui, Ya-Long; Tian, Gui-Li; Shu, Yi; Wang, Xue-Feng; Tian, He; Yang, Yi; Wei, Fei; Ren, Tian-Ling

    2015-11-01

    Motion capture is attracting more and more attention due to its potential wide applications in various fields. However, traditional methods for motion capture still have weaknesses such as high cost and space consumption. Based on these considerations, a flexible, highly stretchable strain sensor with a high gauge factor for motion capture is fabricated with carbon nanotube (CNT) array double helices as the main building block. Owing to the unique flexible double helical CNT-array matrix, the strain sensor is able to measure strain up to 410% with low hysteresis. Moreover, a demonstration of using this strain sensor to capture hand motion and to control a mechanical hand in real time is also achieved. A model based on the finite difference method is also developed to help understand the mechanism of the strain sensors. Our work demonstrates that strain sensors can measure very large strain while maintaining high sensitivity, and motion capture based on this strain sensor is expected to be less expensive, more convenient and more accessible.

  20. Flexible CNT-array double helices Strain Sensor with high stretchability for Motion Capture

    PubMed Central

    Li, Cheng; Cui, Ya-Long; Tian, Gui-Li; Shu, Yi; Wang, Xue-Feng; Tian, He; Yang, Yi; Wei, Fei; Ren, Tian-Ling

    2015-01-01

    Motion capture is attracting more and more attention due to its potential wide applications in various fields. However, traditional methods for motion capture still have weaknesses such as high cost and space consumption. Based on these considerations, a flexible, highly stretchable strain sensor with a high gauge factor for motion capture is fabricated with carbon nanotube (CNT) array double helices as the main building block. Owing to the unique flexible double helical CNT-array matrix, the strain sensor is able to measure strain up to 410% with low hysteresis. Moreover, a demonstration of using this strain sensor to capture hand motion and to control a mechanical hand in real time is also achieved. A model based on the finite difference method is also developed to help understand the mechanism of the strain sensors. Our work demonstrates that strain sensors can measure very large strain while maintaining high sensitivity, and motion capture based on this strain sensor is expected to be less expensive, more convenient and more accessible. PMID:26530904
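
The gauge factor mentioned in this record is the standard figure of merit for strain sensors, GF = (ΔR/R0)/ε. A minimal sketch; the numbers are illustrative, not measured values from the paper:

```python
# Gauge factor of a resistive strain sensor: relative resistance change
# divided by the applied strain.

def gauge_factor(r0, r, strain):
    """r0: unstrained resistance (ohm); r: strained resistance;
    strain: relative elongation dL/L0 (dimensionless)."""
    return ((r - r0) / r0) / strain

# e.g. a resistance that doubles at 100% strain gives GF = 1.0
print(gauge_factor(r0=100.0, r=200.0, strain=1.0))  # 1.0
```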

  1. Triggered optical coherence tomography for capturing rapid periodic motion

    NASA Astrophysics Data System (ADS)

    Chang, Ernest W.; Kobler, James B.; Yun, Seok H.

    2011-07-01

    Quantitative cross-sectional imaging of vocal folds during phonation is potentially useful for the diagnosis and treatment of laryngeal disorders. Optical coherence tomography (OCT) is a powerful technique, but its relatively low frame rate makes it challenging to visualize rapidly vibrating tissues. Here, we demonstrate a novel method based on triggered laser scanning to capture 4-dimensional (4D) images of samples in motu at audio frequencies over 100 Hz. As proof-of-concept experiments, we applied this technique to imaging the oscillations of biopolymer gels on acoustic vibrators and the aerodynamically driven vibrations of the vocal fold in an ex vivo calf larynx model. Our results suggest that triggered 4D OCT may be useful in understanding and assessing the function of vocal folds and in developing novel treatments in research and clinical settings.

  2. Analysis of accuracy in optical motion capture - A protocol for laboratory setup evaluation.

    PubMed

    Eichelberger, Patric; Ferraro, Matteo; Minder, Ursina; Denton, Trevor; Blasimann, Angela; Krause, Fabian; Baur, Heiner

    2016-07-01

    Validity and reliability as scientific quality criteria have to be considered when using optical motion capture (OMC) for research purposes. Literature and standards recommend individual laboratory setup evaluation. However, system characteristics such as trueness, precision and uncertainty are often not addressed in scientific reports on 3D human movement analysis. One reason may be the lack of simple and practical methods for evaluating accuracy parameters of OMC. A protocol was developed for investigating the accuracy of an OMC system (Vicon, volume 5.5 × 1.2 × 2.0 m^3) with standard laboratory equipment, by means of the trueness and uncertainty of marker distances. The study investigated the effects of the number of cameras (6, 8 and 10), measurement height (foot, knee and hip) and movement condition (static and dynamic) on accuracy. Number of cameras, height and movement condition affected system accuracy significantly. For lower body assessment during level walking, the most favorable setting (10 cameras, foot region) revealed a mean trueness and uncertainty of -0.08 and 0.33 mm, respectively. Dynamic accuracy cannot be predicted from static error assessments; dynamic procedures have to be used instead. The significant influence of the number of cameras and the measurement location suggests that instrumental errors should be evaluated in a laboratory- and task-specific manner. The use of standard laboratory equipment makes the proposed procedure widely applicable, and it supports the setup process of OMC through simple functional error assessment. Careful system configuration and thorough measurement process control are needed to produce high-quality data. PMID:27230474
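
Trueness (systematic deviation) and uncertainty (spread) of repeated marker-distance measurements against a known reference can be sketched as follows; the sample values are illustrative, not the study's data:

```python
import statistics

# Trueness = mean error against the reference length;
# uncertainty = standard deviation of those errors.

def trueness_and_uncertainty(measured_mm, reference_mm):
    errors = [m - reference_mm for m in measured_mm]
    return statistics.mean(errors), statistics.stdev(errors)

measured = [499.9, 500.1, 499.8, 500.0, 499.7]  # toy repeated measurements
t, u = trueness_and_uncertainty(measured, reference_mm=500.0)
print(f"trueness {t:.2f} mm, uncertainty {u:.2f} mm")
```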

  3. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    NASA Astrophysics Data System (ADS)

    Hurwitz, Martina; Williams, Christopher L.; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G.; Mak, Raymond H.; Lewis, John H.

    2015-01-01

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.
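
The correlation idea behind such a motion model can be sketched as a least-squares fit from the external surrogate amplitude to the internal tumor position, trained on 4DCT pairs. The paper's patient-specific model is richer than this; all names and numbers below are illustrative:

```python
# Fit a linear surrogate-to-tumor map on 4DCT training pairs, then
# predict the tumor position from a new surrogate reading.

def fit_linear(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

surrogate = [0.0, 1.0, 2.0, 3.0]   # external marker amplitude (cm), 4DCT
tumor = [0.0, 0.5, 1.0, 1.5]       # tumor displacement (cm), 4DCT
a, b = fit_linear(surrogate, tumor)
print(a * 2.5 + b)  # predicted tumor position for a new surrogate reading
```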

  4. Validation of 3D motion tracking of pulmonary lesions using CT fluoroscopy images for robotically assisted lung biopsy

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Fichtinger, Gabor; Taylor, Russell H.; Cleary, Kevin R.

    2005-04-01

    As recently proposed in our previous work, the two-dimensional CT fluoroscopy image series can be used to track the three-dimensional motion of a pulmonary lesion. The assumption is that the lung tissue is locally rigid, so that the real-time CT fluoroscopy image can be combined with a preoperative CT volume to infer the position of the lesion when the lesion is not in the CT fluoroscopy imaging plane. In this paper, we validate the basic properties of our tracking algorithm using a synthetic four-dimensional lung dataset. The motion tracking result is compared to the ground truth of the four-dimensional dataset. The optimal parameter configurations of the algorithm are discussed. The robustness and accuracy of the tracking algorithm are presented. The error analysis shows that the local rigidity error is the principal component of the tracking error. The error increases as the lesion moves away from the image region being registered. Using the synthetic four-dimensional lung data, the average tracking error over a complete respiratory cycle is 0.8 mm for target lesions inside the lung. As a result, the motion tracking algorithm can potentially alleviate the effect of respiratory motion in CT fluoroscopy-guided lung biopsy.

  5. Hybrid 3-D rocket trajectory program. Part 1: Formulation and analysis. Part 2: Computer programming and user's instruction. [computerized simulation using three dimensional motion analysis

    NASA Technical Reports Server (NTRS)

    Huang, L. C. P.; Cook, R. A.

    1973-01-01

    Models utilizing various subsets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time consuming for the computer to evaluate. A computer program is developed that uses three separate subsections to predict trajectories. A launch rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section which computes motion in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard three-linear-degrees-of-freedom model.
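
To illustrate why the linear-DOF model is computationally cheap, here is a point-mass reduction to a single linear degree of freedom, integrated with forward Euler. This is not the report's program; thrust, burn time and step size are arbitrary demonstration values:

```python
# Forward-Euler integration of vertical position and velocity under
# constant thrust acceleration and gravity, ignoring all angular motion.

G = 9.81  # gravitational acceleration, m/s^2

def simulate(thrust_accel, burn_time, dt=0.01):
    """Integrate one linear degree of freedom over the burn."""
    h, v = 0.0, 0.0
    for _ in range(round(burn_time / dt)):
        v += (thrust_accel - G) * dt   # net acceleration during the burn
        h += v * dt
    return h, v

h, v = simulate(thrust_accel=30.0, burn_time=2.0)
print(round(v, 1))  # close to (30 - 9.81) * 2 = 40.38 m/s
```

A full 3-D linear model repeats the same update for each of the three position components; the expensive angular equations are what the program avoids.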

  6. Development of the dynamic motion simulator of 3D micro-gravity with a combined passive/active suspension system

    NASA Technical Reports Server (NTRS)

    Yoshida, Kazuya; Hirose, Shigeo; Ogawa, Tadashi

    1994-01-01

    The establishment of in-orbit operations such as 'Rendez-Vous/Docking' and 'Manipulator Berthing', with the assistance of robotics or autonomous control technology, is essential for near-future space programs. In order to study control methods, develop flight models, and verify how the system works, we need a tool or testbed which enables us to mechanically simulate the micro-gravity environment. There have been many attempts to develop micro-gravity testbeds, but once the simulation involves docking and berthing operations with mechanical contact among multiple bodies, the requirements become critical. A group at the Tokyo Institute of Technology has proposed a method that can simulate 3D micro-gravity, producing a smooth response to impact phenomena with relatively simple apparatus. Recently the group successfully carried out basic experiments using a prototype hardware model of the testbed. This paper presents our idea of the 3D micro-gravity simulator and reports the results of our initial experiments.

  7. Using motion capture to assess colonoscopy experience level

    PubMed Central

    Svendsen, Morten Bo; Preisler, Louise; Hillingsoe, Jens Georg; Svendsen, Lars Bo; Konge, Lars

    2014-01-01

    AIM: To study the technical skills of colonoscopists using a Microsoft Kinect™ for motion analysis, in order to develop a tool to guide colonoscopy education. METHODS: Ten experienced endoscopists (gastroenterologists, n = 2; colorectal surgeons, n = 8) and 11 novices participated in the study. A Microsoft Kinect™ recorded the movements of the participants during the insertion of the colonoscope. We used a modified script from Microsoft to record skeletal data. Data were saved and later transferred to MatLab for analysis and the calculation of statistics. The test was performed on a physical model, specifically the "Kagaku Colonoscope Training Model" (Kyoto Kagaku Co. Ltd, Kyoto, Japan), after an introduction to the scope and the colonoscopy model. Seven metrics were analyzed to find discriminative motion patterns between the novice and experienced endoscopists: hand distance from the gurney, number of times the right hand was used to control the small wheel of the colonoscope, angulation of the elbows, position of the hands in relation to body posture, angulation of body posture in relation to the anus, mean distance between the hands, and percentage of time the hands were close to each other. RESULTS: Four of the seven metrics showed discriminatory ability: mean distance between hands [45 cm for experienced endoscopists (SD 2) vs 37 cm for novice endoscopists (SD 6)], percentage of time in which the two hands were within 25 cm of each other [5% for experienced endoscopists (SD 4) vs 12% for novice endoscopists (SD 9)], the level of the right hand below the sighting line (z-axis) (25 cm for experienced endoscopists vs 36 cm for novice endoscopists, P < 0.05) and the level of the left hand below the z-axis (6 cm for experienced endoscopists vs 15 cm for novice endoscopists, P < 0.05). By plotting the distributions of the percentages for each group, we determined the best discriminatory value between the groups. A pass score was set at the intersection of
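
Two of the discriminative metrics, mean inter-hand distance and percentage of time the hands are within 25 cm of each other, can be computed directly from the skeletal tracks. A sketch with toy positions (not study data):

```python
import math

# Hand-proximity metrics from per-frame 3D hand positions.

def hand_metrics(left, right, threshold_cm=25.0):
    """left, right: per-frame (x, y, z) hand positions in cm.
    Returns (mean inter-hand distance, % of frames below threshold)."""
    dists = [math.dist(l, r) for l, r in zip(left, right)]
    mean_dist = sum(dists) / len(dists)
    pct_close = 100.0 * sum(d < threshold_cm for d in dists) / len(dists)
    return mean_dist, pct_close

left = [(0, 0, 0)] * 4                                   # toy 4-frame track
right = [(40, 0, 0), (50, 0, 0), (20, 0, 0), (50, 0, 0)]
print(hand_metrics(left, right))  # (40.0, 25.0)
```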

  8. The 3-D motion of the centre of gravity of the human body during level walking. II. Lower limb amputees.

    PubMed

    Tesio, L; Lanzi, D; Detrembleur, C

    1998-03-01

    OBJECTIVE: To analyse the motion of the centre of gravity (CG) of the body during gait in unilateral lower limb amputees with good kinematic patterns. DESIGN: Three transtibial (below-knee, BK) and four transfemoral (above-knee, AK) amputees were required to perform successive walks over a 2.4 m long force plate, at freely chosen cadence and speed. BACKGROUND: Previous studies have shown that in unilateral lower limb amputee gait, the motion of the CG can be more asymmetric than might be suspected from kinematic analysis. METHODS: The mechanical energy changes of the CG due to its motion in the vertical, forward and lateral directions were measured. Gait speed ranged from 0.75 to 1.32 m s^-1 across subjects. This allowed calculation of (a) the positive work done by muscles to maintain the motion of the CG with respect to the ground ('external' work, W(ext)) and (b) the amount of the pendulum-like, energy-saving transfer between gravitational potential energy and kinetic energy of the CG during each step (percent recovery, R). Step length and vertical displacement of the CG were also measured. RESULTS: The recorded variables remained within the normal limits, calculated in a previous work, when the steps performed on the prosthetic (P) and on the normal (N) limb were averaged. Asymmetries were found, however, between the P and the N step. In BK amputees, the P step R was 5% greater and W(ext) was 21% lower than in the N step; in AK amputees, the P step R was 54% greater and W(ext) was 66% lower than in the N step. Asymmetries were also found in the relative magnitude of the external work provided by each lower limb during single stance as compared with double stance: a marked deficit of work occurred at the P-to-N transition. PMID:11415775
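
The percent recovery statistic has a standard definition (after Cavagna and colleagues): R = 100 · (Wf + Wv − Wext)/(Wf + Wv), where Wf and Wv are the positive works required for the forward and vertical CG motions taken separately and Wext is the positive external work of the combined motion. A sketch with illustrative inputs, not the paper's measurements:

```python
# Percent recovery of the pendulum-like energy exchange of the CG.

def percent_recovery(w_forward, w_vertical, w_ext):
    return 100.0 * (w_forward + w_vertical - w_ext) / (w_forward + w_vertical)

# A perfect pendulum would give R = 100; real gait is well below that.
print(percent_recovery(w_forward=0.30, w_vertical=0.25, w_ext=0.20))
```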

  9. Evaluation of Structure from Motion Software to Create 3D Models of Late Nineteenth Century Great Lakes Shipwrecks Using Archived Diver-Acquired Video Surveys

    NASA Astrophysics Data System (ADS)

    Mertes, J.; Thomsen, T.; Gulley, J.

    2014-12-01

    Here we demonstrate the ability to use archived video surveys to create photorealistic 3D models of submerged archaeological sites. We created 3D models of two nineteenth century Great Lakes shipwrecks using diver-acquired video surveys and Structure from Motion (SfM) software. The models were georeferenced using archived hand survey data. Comparison of hand survey measurements and digital measurements made using the models demonstrates that spatial analysis produces results with reasonable accuracy when wreck maps are available. The error associated with digital measurements displayed an inverse relationship to object size: measurement error ranged from a maximum of 18% (on a 0.37 m object) to a minimum of 0.56% (on a 4.21 m object). Our results demonstrate that SfM can generate models of large maritime archaeological sites that can be used for research, education and outreach purposes. Where site maps are available, these 3D models can be georeferenced to allow additional spatial analysis long after on-site data collection.

  10. Time Capture Tool (TimeCaT): Development of a Comprehensive Application to Support Data Capture for Time Motion Studies.

    PubMed Central

    Lopetegui, Marcelo; Yen, Po-Yin; Lai, Albert M.; Embi, Peter J.; Payne, Philip R.O.

    2012-01-01

    Time Motion Studies (TMS) have proved to be the gold standard method to measure and quantify clinical workflow, and have been widely used to assess the impact of health information systems implementation. Although there are tools available to conduct TMS, they provide different approaches for multitasking, interruptions, inter-observer reliability assessment and task taxonomy, making results not comparable across studies. We postulate that a significant contributing factor towards the standardization and spread of TMS would be the availability of an accessible, scalable and dynamic tool. We present the development of a comprehensive Time Capture Tool (TimeCaT): a web application developed to support data capture for TMS. Ongoing and continuous development of TimeCaT includes the development and validation of a realistic inter-observer reliability scoring algorithm, the creation of an online clinical tasks ontology, and a novel quantitative workflow comparison method. PMID:23304332

  11. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees-of-freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could accurately be extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 +/- 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.
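
The geometric reason a second (MV) projection resolves motion along the kV beam axis can be sketched as a small least-squares problem; the beam directions and the displacement vector below are hypothetical, not the paper's patient geometry:

```python
import numpy as np

def projection_basis(beam_axis):
    """Rows: two unit vectors spanning the imaging plane (perpendicular to the beam)."""
    a = np.asarray(beam_axis, dtype=float)
    a /= np.linalg.norm(a)
    helper = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(a, helper)
    u /= np.linalg.norm(u)
    v = np.cross(a, u)
    return np.vstack([u, v])              # 2x3: maps 3D motion to its 2D image

P_kv = projection_basis([0.0, 1.0, 0.0])  # kV beam along anterior-posterior (AP)
P_mv = projection_basis([1.0, 0.0, 0.0])  # MV beam along a lateral axis (assumed)

true_motion = np.array([2.0, 3.3, -1.5])  # mm; note the AP (y) component
obs = np.concatenate([P_kv @ true_motion, P_mv @ true_motion])

# One projection alone leaves the along-beam component unconstrained;
# stacking both imaging planes gives a rank-3 system with a unique solution.
A = np.vstack([P_kv, P_mv])
est, *_ = np.linalg.lstsq(A, obs, rcond=None)
```

Dropping the MV rows from `A` makes the system rank 2, which is exactly the AP ambiguity the abstract describes.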

  12. 3D Motions of Iron in Six-Coordinate {FeNO}(7) Hemes by Nuclear Resonance Vibration Spectroscopy.

    PubMed

    Peng, Qian; Pavlik, Jeffrey W; Silvernail, Nathan J; Alp, E Ercan; Hu, Michael Y; Zhao, Jiyong; Sage, J Timothy; Scheidt, W Robert

    2016-04-25

    The vibrational spectrum of a six-coordinate nitrosyl iron porphyrinate, monoclinic [Fe(TpFPP)(1-MeIm)(NO)] (TpFPP=tetra-para-fluorophenylporphyrin; 1-MeIm=1-methylimidazole), has been studied by oriented single-crystal nuclear resonance vibrational spectroscopy (NRVS). The crystal was oriented to give spectra perpendicular to the porphyrin plane and two in-plane spectra perpendicular or parallel to the projection of the FeNO plane. These enable assignment of the FeNO bending and stretching modes. The measurements reveal that the two in-plane spectra have substantial differences that result from the strongly bonded axial NO ligand. The direction of the in-plane iron motion is found to be largely parallel and perpendicular to the projection of the bent FeNO on the porphyrin plane. The out-of-plane Fe-N-O stretching and bending modes are strongly mixed with each other, as well as with porphyrin ligand modes. The stretch is mixed with ν50, as was also observed for dioxygen complexes. The frequency of the assigned stretching mode of eight Fe-X-O (X=N, C, and O) complexes is correlated with the Fe-XO bond lengths. The nature of the highest-frequency band at ≈560 cm(-1) has also been examined in two additional new derivatives. Previously assigned as the Fe-NO stretch (by resonance Raman), it is better described as the bend, as the motion of the central nitrogen atom of the FeNO group is very large. There is significant mixing of this mode. The results emphasize the importance of mode mixing; the extent of mixing must be related to the peripheral phenyl substituents. PMID:26999733

  13. The birth of a dinosaur footprint: Subsurface 3D motion reconstruction and discrete element simulation reveal track ontogeny

    PubMed Central

    2014-01-01

    Locomotion over deformable substrates is a common occurrence in nature. Footprints represent sedimentary distortions that provide anatomical, functional, and behavioral insights into trackmaker biology. The interpretation of such evidence can be challenging, however, particularly for fossil tracks recovered at bedding planes below the originally exposed surface. Even in living animals, the complex dynamics that give rise to footprint morphology are obscured by both foot and sediment opacity, which conceals animal–substrate and substrate–substrate interactions. We used X-ray reconstruction of moving morphology (XROMM) to image and animate the hind limb skeleton of a chicken-like bird traversing a dry, granular material. Foot movement differed significantly from walking on solid ground; the longest toe penetrated to a depth of ∼5 cm, reaching an angle of 30° below horizontal before slipping backward on withdrawal. The 3D kinematic data were integrated into a validated substrate simulation using the discrete element method (DEM) to create a quantitative model of limb-induced substrate deformation. Simulation revealed that despite sediment collapse yielding poor quality tracks at the air–substrate interface, subsurface displacements maintain a high level of organization owing to grain–grain support. Splitting the substrate volume along “virtual bedding planes” exposed prints that more closely resembled the foot and could easily be mistaken for shallow tracks. DEM data elucidate how highly localized deformations associated with foot entry and exit generate specific features in the final tracks, a temporal sequence that we term “track ontogeny.” This combination of methodologies fosters a synthesis between the surface/layer-based perspective prevalent in paleontology and the particle/volume-based perspective essential for a mechanistic understanding of sediment redistribution during track formation. PMID:25489092

  14. Quaternion-Based Gesture Recognition Using Wireless Wearable Motion Capture Sensors.

    PubMed

    Alavi, Shamir; Arsenault, Dennis; Whitehead, Anthony

    2016-01-01

    This work presents the development and implementation of a unified multi-sensor human motion capture and gesture recognition system that can distinguish between and classify six different gestures. Data was collected from eleven participants using a subset of five wireless motion sensors (inertial measurement units) attached to their arms and upper body from a complete motion capture system. We compare Support Vector Machines and Artificial Neural Networks on the same dataset under two different scenarios and evaluate the results. Our study indicates that near perfect classification accuracies are achievable for small gestures and that the speed of classification is sufficient to allow interactivity. However, such accuracies are more difficult to obtain when a participant does not participate in training, indicating that more work needs to be done in this area to create a system that can be used by the general population. PMID:27136554
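
A toy version of the quaternion-feature pipeline: the study used SVMs and artificial neural networks on IMU data, while this sketch substitutes a nearest-centroid classifier and synthetic quaternion trajectories (the 'wave' and 'tap' labels are invented) to keep it dependency-light:

```python
import numpy as np

rng = np.random.default_rng(0)

def quat_from_angle(theta, axis=(0.0, 0.0, 1.0)):
    """Unit quaternion (w, x, y, z) for a rotation of theta about axis."""
    ax = np.asarray(axis) / np.linalg.norm(axis)
    return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * ax])

def gesture(kind, n=20, noise=0.02):
    """Synthetic quaternion trajectory: a 'wave' sweeps further than a 'tap'."""
    sweep = np.pi / 2 if kind == "wave" else np.pi / 8
    angles = np.linspace(0, sweep, n) + noise * rng.standard_normal(n)
    return np.concatenate([quat_from_angle(a) for a in angles])

# "Training": per-class centroid of the flattened quaternion features.
centroids = {k: np.mean([gesture(k) for _ in range(10)], axis=0)
             for k in ("wave", "tap")}

def classify(features):
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

pred = classify(gesture("wave"))
```

Quaternions avoid the gimbal-lock and wrap-around artifacts of Euler-angle features, which is one reason orientation-based gesture features are usually kept in quaternion form.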

  15. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture

    PubMed Central

    Gao, Zhiquan; Yu, Yao; Zhou, Yu; Du, Sidan

    2015-01-01

    Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain. PMID:26402681

  16. Quaternion-Based Gesture Recognition Using Wireless Wearable Motion Capture Sensors

    PubMed Central

    Alavi, Shamir; Arsenault, Dennis; Whitehead, Anthony

    2016-01-01

    This work presents the development and implementation of a unified multi-sensor human motion capture and gesture recognition system that can distinguish between and classify six different gestures. Data was collected from eleven participants using a subset of five wireless motion sensors (inertial measurement units) attached to their arms and upper body from a complete motion capture system. We compare Support Vector Machines and Artificial Neural Networks on the same dataset under two different scenarios and evaluate the results. Our study indicates that near perfect classification accuracies are achievable for small gestures and that the speed of classification is sufficient to allow interactivity. However, such accuracies are more difficult to obtain when a participant does not participate in training, indicating that more work needs to be done in this area to create a system that can be used by the general population. PMID:27136554

  17. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture.

    PubMed

    Gao, Zhiquan; Yu, Yao; Zhou, Yu; Du, Sidan

    2015-01-01

    Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain. PMID:26402681

  18. 34/45-Mbps 3D HDTV digital coding scheme using modified motion compensation with disparity vectors

    NASA Astrophysics Data System (ADS)

    Naito, Sei; Matsumoto, Shuichi

    1998-12-01

    This paper describes a digital compression coding scheme for transmitting three-dimensional stereo HDTV signals with full resolution at bit-rates around 30 to 40 Mbps, to be adapted for PDH networks of the CCITT 3rd digital hierarchy (34 Mbps and 45 Mbps), SDH networks of 52 Mbps and ATM networks. In order to achieve satisfactory quality for stereo HDTV pictures, three advanced key technologies are introduced into the MPEG-2 Multi-View Profile: a modified motion compensation using disparity vectors estimated between the left and right pictures, an adaptive rate control using a common buffer memory for left- and right-picture encoding, and a discriminatory bit allocation which improves left-picture quality without any degradation of the right pictures. Results of a coding experiment conducted to evaluate this scheme confirm that it gives satisfactory picture quality even at 34 Mbps, including audio and FEC data.
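
Disparity-compensated prediction, the first of the three key technologies, predicts a right-view block from a horizontally shifted left-view region so that only a small residual needs coding; this sketch uses a synthetic image pair and an assumed block size and search range:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stereo pair: the right view equals the left view shifted by a
# horizontal disparity of 5 pixels (block size and search range are assumed).
left = rng.integers(0, 256, size=(16, 64)).astype(float)
true_disp = 5
right = np.roll(left, true_disp, axis=1)

block = right[:, 16:32]                          # block to be predicted
best = min(range(0, 9),                          # search disparities 0..8
           key=lambda d: np.sum(np.abs(left[:, 16 - d:32 - d] - block)))
residual = block - left[:, 16 - best:32 - best]  # only this needs coding
```

In a real encoder the sum-of-absolute-differences search runs per macroblock, and the residual plus the disparity vector replace the raw right-view samples in the bitstream.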

  19. Interaction of perceptual grouping and crossmodal temporal capture in tactile apparent-motion.

    PubMed

    Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J

    2011-01-01

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can "capture" visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from -75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs--one short (75 ms), one long (325 ms)--were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an

  20. Predation by the Dwarf Seahorse on Copepods: Quantifying Motion and Flows Using 3D High Speed Digital Holographic Cinematography - When Seahorses Attack!

    NASA Astrophysics Data System (ADS)

    Gemmell, Brad; Sheng, Jian; Buskey, Ed

    2008-11-01

    Copepods are an important planktonic food source for most of the world's fish species. This high predation pressure has led copepods to evolve an extremely effective escape response, with reaction times to hydrodynamic disturbances of less than 4 ms and escape speeds of over 500 body lengths per second. Using 3D high speed digital holographic cinematography (up to 2000 frames per second) we elucidate the role of entrainment flow fields generated by a natural visual predator, the dwarf seahorse (Hippocampus zosterae) during attacks on its prey, Acartia tonsa. Using phytoplankton as a tracer, we recorded and reconstructed 3D flow fields around the head of the seahorse and its prey during both successful and unsuccessful attacks to better understand how some attacks lead to capture with little or no detection from the copepod while others result in failed attacks. Attacks start with a slow approach to minimize the hydro-mechanical disturbance which is used by copepods to detect the approach of a potential predator. Successful attacks result in the seahorse using its pipette-like mouth to create suction faster than the copepod's response latency. As these characteristic scales of entrainment increase, a successful escape becomes more likely.

  1. 3D crustal structure and long-period ground motions from a M9.0 megathrust earthquake in the Pacific Northwest region

    NASA Astrophysics Data System (ADS)

    Olsen, Kim B.; Stephenson, William J.; Geisselmeyer, Andreas

    2008-04-01

    We have developed a community velocity model for the Pacific Northwest region from northern California to southern Canada and carried out the first 3D simulation of a Mw 9.0 megathrust earthquake rupturing along the Cascadia subduction zone using a parallel supercomputer. A long-period (<0.5 Hz) source model was designed by mapping the inversion results for the December 26, 2004 Sumatra-Andaman earthquake (Han et al., Science 313(5787):658-662, 2006) onto the Cascadia subduction zone. Representative peak ground velocities for the metropolitan centers of the region include 42 cm/s in the Seattle area and 8-20 cm/s in the Tacoma, Olympia, Vancouver, and Portland areas. Combined with an extended duration of the shaking up to 5 min, these long-period ground motions may inflict significant damage on the built environment, in particular on the highrises in downtown Seattle.

  2. 3D crustal structure and long-period ground motions from a M9.0 megathrust earthquake in the Pacific Northwest region

    USGS Publications Warehouse

    Olsen, K.B.; Stephenson, W.J.; Geisselmeyer, A.

    2008-01-01

    We have developed a community velocity model for the Pacific Northwest region from northern California to southern Canada and carried out the first 3D simulation of a Mw 9.0 megathrust earthquake rupturing along the Cascadia subduction zone using a parallel supercomputer. A long-period (<0.5 Hz) source model was designed by mapping the inversion results for the December 26, 2004 Sumatra–Andaman earthquake (Han et al., Science 313(5787):658–662, 2006) onto the Cascadia subduction zone. Representative peak ground velocities for the metropolitan centers of the region include 42 cm/s in the Seattle area and 8–20 cm/s in the Tacoma, Olympia, Vancouver, and Portland areas. Combined with an extended duration of the shaking up to 5 min, these long-period ground motions may inflict significant damage on the built environment, in particular on the highrises in downtown Seattle.

  3. Capture of planets into mean-motion resonances and the origins of extrasolar orbital architectures

    NASA Astrophysics Data System (ADS)

    Batygin, Konstantin

    2015-08-01

    The early stages of dynamical evolution of planetary systems are often shaped by dissipative processes that drive orbital migration. In multi-planet systems, convergent evolution of orbits inevitably leads to encounters with rational period ratios, which may result in establishment of mean-motion resonances. The success or failure of resonant capture yields exceedingly different subsequent evolutions, and thus plays a central role in determining the ensuing orbital architecture of planetary systems. In this work, we employ an integrable Hamiltonian formalism for first-order planetary resonances that allows both secondary bodies to have finite masses and eccentricities, and construct a comprehensive theory for resonant capture. Particularly, we derive conditions under which orbital evolution lies within the adiabatic regime, and provide a generalized criterion for guaranteed resonant locking as well as a procedure for calculating capture probabilities when capture is not certain. Subsequently, we utilize the developed analytical model to examine the evolution of Jupiter and Saturn within the protosolar nebula, and investigate the origins of the dominantly non-resonant orbital distribution of sub-Jovian extrasolar planets. Our calculations show that the commonly observed extrasolar orbital structure can be understood if planet pairs encounter mean-motion commensurabilities on slightly eccentric (e ˜ 0.02) orbits. Accordingly, we speculate that resonant capture among low-mass planets is typically rendered unsuccessful due to subtle axial asymmetries inherent to the global structure of protoplanetary discs.
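
How convergent migration drives a planet pair to a rational period ratio can be sketched with Kepler's third law (P proportional to a^(3/2)); the migration timescales below are assumed round numbers, and the capture criterion itself (adiabatic vs. non-adiabatic crossing) is beyond this sketch:

```python
# Two planets on circular orbits; Kepler's third law gives P2/P1 = (a2/a1)**1.5.
a1, a2 = 1.0, 1.8            # semi-major axes (AU), assumed
tau1, tau2 = 1.0e6, 2.0e5    # inward migration e-folding times (yr), assumed
dt, t = 100.0, 0.0           # time step and clock (yr)

ratio = (a2 / a1) ** 1.5
while ratio > 2.0:           # march until the 2:1 commensurability is reached
    a1 -= a1 / tau1 * dt     # slow inward drift of the inner planet
    a2 -= a2 / tau2 * dt     # faster drift of the outer planet (convergent)
    t += dt
    ratio = (a2 / a1) ** 1.5

crossing_time = t            # ~3e4 yr with these assumed rates
```

Whether the pair locks into the 2:1 resonance at this encounter depends on how slowly the ratio crosses the commensurability relative to the libration timescale, which is the adiabaticity condition the paper formalizes.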

  4. Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion

    PubMed Central

    Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J.

    2011-01-01

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can “capture” visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from −75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs—one short (75 ms), one long (325 ms)—were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an

  5. Onset of collective motion in locusts is captured by a minimal model

    NASA Astrophysics Data System (ADS)

    Dyson, Louise; Yates, Christian A.; Buhl, Jerome; McKane, Alan J.

    2015-11-01

    We present a minimal model to describe the onset of collective motion seen when a population of locusts are placed in an annular arena. At low densities motion is disordered, while at high densities locusts march in a common direction, which may reverse during the experiment. The data are well captured by an individual-based model, in which demographic noise leads to the observed density-dependent effects. By fitting the model parameters to equation-free coefficients, we give a quantitative comparison, showing time series, stationary distributions, and the mean switching times between states.

  6. Onset of collective motion in locusts is captured by a minimal model.

    PubMed

    Dyson, Louise; Yates, Christian A; Buhl, Jerome; McKane, Alan J

    2015-11-01

    We present a minimal model to describe the onset of collective motion seen when a population of locusts are placed in an annular arena. At low densities motion is disordered, while at high densities locusts march in a common direction, which may reverse during the experiment. The data are well captured by an individual-based model, in which demographic noise leads to the observed density-dependent effects. By fitting the model parameters to equation-free coefficients, we give a quantitative comparison, showing time series, stationary distributions, and the mean switching times between states. PMID:26651724
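
The density-dependent onset of order can be illustrated with an even cruder individual-based caricature than the paper's fitted model: pairwise alignment encounters whose rate grows with density, set against a fixed demographic-noise rate (all rates below are assumptions, not the authors' equation-free coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_order(density, n_locusts=50, steps=20000, eps=0.1, c=0.02):
    """Individual-based sketch: each locust marches left (-1) or right (+1).
    Encounters (rate grows with density) align a locust with a concurring
    pair of others; intrinsic noise randomizes its direction at a fixed rate."""
    s = rng.choice([-1, 1], size=n_locusts)
    order = []
    for _ in range(steps):
        i = rng.integers(n_locusts)
        if rng.random() < c * density:            # pairwise alignment encounter
            j, k = rng.integers(n_locusts, size=2)
            if s[j] == s[k]:
                s[i] = s[j]
        if rng.random() < eps:                    # demographic noise
            s[i] = rng.choice([-1, 1])
        order.append(abs(s.mean()))
    return float(np.mean(order[steps // 2:]))     # discard the transient

order_low = mean_order(density=1.0)               # sparse: disordered marching
order_high = mean_order(density=50.0)             # dense: collective motion
```

Even in this caricature the ordered state occasionally reverses direction in finite groups, the same noise-driven switching the abstract reports.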

  7. Identifying the origin of differences between 3D numerical simulations of ground motion in sedimentary basins: lessons from stringent canonical test models in the E2VP framework

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; Moczo, Peter; Kristek, Jozef; Priolo, Enrico; Klin, Peter; De Martin, Florent; Zhang, Zenghuo; Hollender, Fabrice; Bard, Pierre-Yves

    2013-04-01

    Numerical simulation is playing a role of increasing importance in the field of seismic hazard by providing quantitative estimates of earthquake ground motion, its variability, and its sensitivity to geometrical and mechanical properties of the medium. Continuous efforts to develop accurate and computationally efficient numerical methods, combined with increasing computational power, have made it technically feasible to calculate seismograms in 3D realistic configurations and for frequencies of interest in seismic design applications. Now, in order to foster the use of numerical simulations in practical prediction of earthquake ground motion, it is important to evaluate the accuracy of current numerical methods when applied to realistic 3D sites. This process of verification is a necessary prerequisite to confrontation of numerical predictions and observations. Through the ongoing Euroseistest Verification and Validation Project (E2VP), which focuses on the Mygdonian basin (northern Greece), we investigated the capability of numerical methods to predict earthquake ground motion for frequencies up to 4 Hz. Numerical predictions obtained by several teams using a wide variety of methods were compared using quantitative goodness-of-fit criteria. In order to better understand the cause of misfits between different simulations, initially performed for the realistic geometry of the Mygdonian basin, we defined five stringent canonical configurations. The canonical models allow for identifying sources of misfits and quantifying their importance. Detailed quantitative comparison of simulations in relation to dominant features of the models shows that even relatively simple heterogeneous models must be treated with maximum care in order to achieve a sufficient level of accuracy. One important conclusion is that the numerical representation of models with strong variations (e.g. discontinuities) may considerably vary from one method to the other, and may become a dominant source of

  8. Computer-generated hologram for 3D scene from multi-view images

    NASA Astrophysics Data System (ADS)

    Chang, Eun-Young; Kang, Yun-Suk; Moon, KyungAe; Ho, Yo-Sung; Kim, Jinwoong

    2013-05-01

    Recently, computer-generated holograms (CGHs) calculated from real existing objects have been more actively investigated to support holographic video and TV applications. In this paper, we propose a method of generating a hologram of a natural 3-D scene from multi-view images in order to provide motion-parallax viewing within a suitable navigation range. After a unified 3-D point-source set describing the captured 3-D scene is obtained from the multi-view images, a hologram pattern supporting motion parallax is calculated from the set using a point-based CGH method. We confirmed that 3-D scenes are faithfully reconstructed using numerical reconstruction.
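
A point-based CGH reduces to summing spherical-wave fringes from each 3-D point source over the hologram plane; this sketch uses an assumed wavelength, pixel pitch, and a two-point toy scene in place of the multi-view point set:

```python
import numpy as np

wavelength = 532e-9                    # assumed green laser (m)
k = 2.0 * np.pi / wavelength           # wavenumber
pitch = 8e-6                           # assumed hologram pixel pitch (m)
n = 256                                # hologram resolution (n x n)

# Toy stand-in for the unified 3-D point-source set: (x, y, z, amplitude)
points = [(0.0, 0.0, 0.10, 1.0), (2e-4, -1e-4, 0.12, 0.8)]

xs = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(xs, xs)

H = np.zeros((n, n))
for x, y, z, a in points:
    r = np.sqrt((X - x) ** 2 + (Y - y) ** 2 + z ** 2)   # distance to each pixel
    H += (a / r) * np.cos(k * r)       # spherical-wave fringe contribution

# Normalize to [0, 1] for writing to a display device
H_norm = (H - H.min()) / (H.max() - H.min())
```

The cost grows as (number of points) x (number of pixels), which is why practical point-based CGH implementations lean on lookup tables or GPUs.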

  9. Evaluation of Hand Motion Capture Protocol Using Static Computed Tomography Images: Application to an Instrumented Glove

    PubMed Central

    Buffi, James H.; Sancho Bru, Joaquín Luis; Crisco, Joseph J.; Murray, Wendy M.

    2014-01-01

    There has been a marked increase in the use of hand motion capture protocols in the past 20 yr. However, their absolute accuracies and precisions remain unclear. The purpose of this technical brief was to present a method for evaluating the accuracy and precision of the joint angles determined by a hand motion capture protocol using simultaneously collected static computed tomography (CT) images. The method consists of: (i) recording seven functional postures using both the motion capture protocol and a CT scanner; (ii) obtaining principal axes of the bones in each method; (iii) calculating the flexion angle at each joint for each method as the roll angle of the composite, sequential, roll-pitch-yaw rotations relating the orientation of the distal bone to the proximal bone; and (iv) comparing corresponding joint angle measurements. For demonstration, we applied the method to a Cyberglove protocol. Accuracy and precision of the instrumented-glove protocol were calculated as the mean and standard deviation, respectively, of the differences between the angles determined from the Cyberglove output and the CT images across the seven postures. Implementation in one subject highlighted substantial errors, especially for the distal joints of the fingers. This technical note both clearly demonstrates the need for future work and introduces a solid, technical approach with the potential to improve the current state of such assessments in our field. PMID:25203720
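
Step (iii) of the protocol, taking the flexion angle as the roll of the composite roll-pitch-yaw rotation relating the distal bone's axes to the proximal bone's, can be sketched as follows (the ZYX Euler extraction and the 40-degree test posture are illustrative assumptions):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def flexion_angle(R_prox, R_dist):
    """Roll (deg) of the composite roll-pitch-yaw rotation relating the distal
    bone's principal axes to the proximal bone's (ZYX Euler extraction)."""
    R = R_prox.T @ R_dist
    return np.degrees(np.arctan2(R[2, 1], R[2, 2]))

# Hypothetical posture: distal bone flexed 40 deg about the roll axis, with
# small out-of-plane pitch and yaw relative to the proximal bone.
R_prox = rot_z(0.1)
R_dist = R_prox @ rot_z(np.radians(5)) @ rot_y(np.radians(8)) @ rot_x(np.radians(40))
flex = flexion_angle(R_prox, R_dist)
```

Because the roll is extracted from the relative rotation, the recovered flexion is insensitive to the small pitch and yaw components, which is the property that makes the comparison against CT-derived bone axes well defined.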

  10. Elastic network models capture the motions apparent within ensembles of RNA structures

    PubMed Central

    Zimmermann, Michael T.; Jernigan, Robert L.

    2014-01-01

    The role of structure and dynamics in RNA mechanisms is becoming increasingly important. Computational approaches using simple dynamics models have been successful at predicting the motions of proteins and are often applied to ribonucleo-protein complexes but have not been thoroughly tested for well-packed nucleic acid structures. In order to characterize a true set of motions, we investigate the apparent motions from 16 ensembles of experimentally determined RNA structures. These indicate a relatively limited set of motions that are captured by a small set of principal components (PCs). These limited motions closely resemble the motions computed from low frequency normal modes from elastic network models (ENMs), either at atomic or coarse-grained resolution. Various ENM model types, parameters, and structure representations are tested here against the experimental RNA structural ensembles, exposing differences between models for proteins and for folded RNAs. Differences in performance are seen, depending on the structure alignment algorithm used to generate PCs, modulating the apparent utility of ENMs but not significantly impacting their ability to generate functional motions. The loss of dynamical information upon coarse-graining is somewhat larger for RNAs than for globular proteins, indicating, perhaps, the lower cooperativity of the less densely packed RNA. However, the RNA structures show less sensitivity to the elastic network model parameters than do proteins. These findings further demonstrate the utility of ENMs and the appropriateness of their application to well-packed RNA-only structures, justifying their use for studying the dynamics of ribonucleo-proteins, such as the ribosome and regulatory RNAs. PMID:24759093
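
Extracting principal components from a structural ensemble is a small SVD exercise; here a synthetic "hinge" ensemble stands in for the experimental RNA structures, and the mode shape and noise levels are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ensemble: 30 conformations of a 10-site coarse-grained chain
# fluctuating mainly along one hinge-like collective mode (an assumed shape).
n_conf, n_sites = 30, 10
mean_structure = np.column_stack([np.arange(n_sites, dtype=float),
                                  np.zeros(n_sites), np.zeros(n_sites)])
mode = np.zeros((n_sites, 3))
mode[n_sites // 2:, 1] = 1.0          # half the chain swings in y
mode /= np.linalg.norm(mode)

coords = np.array([mean_structure + rng.normal(0.0, 1.0) * mode
                   + 0.05 * rng.standard_normal((n_sites, 3))
                   for _ in range(n_conf)])

# PCA: center the flattened coordinates (real structures need structural
# superposition first, which is where alignment-algorithm choices enter).
X = coords.reshape(n_conf, -1)
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)   # variance fraction per principal component
```

Comparing such ensemble PCs against ENM low-frequency modes (e.g. via dot products or cumulative overlaps) is the quantitative test the paper performs.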

  11. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data is acquired in motion, thus it provides multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as angle of incidence, distance between the device and the subject, and environmental conditions or other factors influencing the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.
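
The reliability-aware combination of multiple viewpoints can be sketched as a confidence-weighted average; the confidence model below (cosine of the incidence angle times a range falloff) is an assumed toy, not the device's actual weighting:

```python
import numpy as np

def confidence(angle_deg, distance_m, max_range=2.0):
    """Assumed toy reliability: degrades with oblique incidence and distance."""
    return max(0.0, np.cos(np.radians(angle_deg))) * max(0.0, 1.0 - distance_m / max_range)

def fuse(temps, weights):
    """Confidence-weighted average of repeated readings of one surface point."""
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * np.asarray(temps)) / np.sum(w))

# Three viewpoints of the same skin patch (deg C); the oblique, distant view
# reads low and should be down-weighted rather than averaged in equally.
temps = [36.4, 36.9, 34.8]
weights = [confidence(10, 0.5), confidence(25, 0.6), confidence(70, 1.5)]
fused = fuse(temps, weights)
```

The fused estimate sits closer to the two high-confidence readings than a plain mean would, which is the point of weighting by measurement reliability.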

  12. Wearable motion capturing with the flexing and turning based on a hetero-core fiber optic stretching sensor

    NASA Astrophysics Data System (ADS)

    Koyama, Y.; Nishiyama, M.; Watanabe, K.

    2011-05-01

    In recent years, motion capture technologies have been applied to rehabilitation for physically challenged people and to sports practice in daily life. In these application fields, it is important that the measurement system does not prevent natural human activity, so that motion can be captured unrestricted in daily life. The hetero-core optic fiber sensor that we developed is well suited to unconstrained motion capture because it is based on optical intensity measurement with excellent stability and repeatability, uses single-mode transmission fibers and requires no compensation. In this paper, we propose a wearable sensor that enables unconstrained real-time motion capture using the hetero-core fiber optic stretching sensor, satisfying user requirements of comfort and ubiquity. Motion capture experiments were carried out by placing the hetero-core fiber optic stretching sensor on the elbow, the back of the body and the waist. As a result, the sensor detected expansion and contraction displacements as changes in optical loss during flexion of the arm and motion of the trunk. The optical loss of the hetero-core fiber optic stretching sensor varies monotonically with displacement. The optical loss changes at full scale of motion were 1.45 dB for anteflexion and 1.99 dB for turning. Real-time motion capture was demonstrated by means of the proposed hetero-core fiber optic stretching sensor without restricting natural human behavior.
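
    Because the optical loss varies monotonically with displacement, a simple calibration can map measured loss back to displacement; a hedged sketch assuming an approximately linear loss-displacement characteristic (values illustrative):

```python
import numpy as np

def calibrate(loss_db, displacement_mm):
    """Fit a linear map displacement = a*loss + b from calibration data,
    exploiting the sensor's monotonic loss-displacement characteristic."""
    a, b = np.polyfit(loss_db, displacement_mm, 1)
    return lambda loss: a * np.asarray(loss) + b
```

    After calibration against a few known bend displacements, the returned function converts live dB readings into displacement estimates.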

  13. Lifetime of inner-shell hole states of Ar (2p) and Kr (3d) using equation-of-motion coupled cluster method

    SciTech Connect

    Ghosh, Aryya; Vaval, Nayana; Pal, Sourav

    2015-07-14

    Auger decay is an efficient ultrafast relaxation process of core- or inner-shell excited atoms or molecules. Generally, it occurs in the femtosecond or even attosecond time domain. Direct measurement of the lifetimes of Auger decay from singly and doubly ionized inner-shell states of an atom or molecule is an extremely difficult task. In this paper, we have applied the highly correlated complex-absorbing-potential equation-of-motion coupled cluster (CAP-EOMCC) approach, which combines a complex absorbing potential with the EOMCC approach, to calculate the lifetimes of the states arising from 2p inner-shell ionization of the Ar atom and 3d inner-shell ionization of the Kr atom. We have also calculated the lifetimes of the Ar{sup 2+}(2p{sup −1}3p{sup −1}) {sup 1}D, Ar{sup 2+}(2p{sup −1}3p{sup −1}) {sup 1}S, and Ar{sup 2+}(2p{sup −1}3s{sup −1}) {sup 1}P doubly ionized states. The predicted results are compared with other theoretical results as well as experimental results available in the literature.

  14. A New Accurate 3D Measurement Tool to Assess the Range of Motion of the Tongue in Oral Cancer Patients: A Standardized Model.

    PubMed

    van Dijk, Simone; van Alphen, Maarten J A; Jacobi, Irene; Smeele, Ludwig E; van der Heijden, Ferdinand; Balm, Alfons J M

    2016-02-01

    In oral cancer treatment, function loss such as speech and swallowing deterioration can be severe, mostly due to reduced lingual mobility. Until now, there has been no standardized measurement tool for tongue mobility, and pre-operative prediction of function loss is based on expert opinion instead of evidence-based insight. The purpose of this study was to assess the reliability of a triple-camera setup for the measurement of tongue range of motion (ROM) in healthy adults and its feasibility in patients with partial glossectomy. A triple-camera setup was used, and 3D coordinates of the tongue in five standardized tongue positions were obtained in 15 healthy volunteers. Maximum distances between the tip of the tongue and the maxillary midline were calculated. Each participant was recorded twice, and each movie was analysed three times by two separate raters. Intrarater, interrater and test-retest reliability were the main outcome measures. Second, the feasibility of the method was tested in ten patients treated for oral tongue carcinoma. Intrarater, interrater and test-retest reliability all showed high correlation coefficients of >0.9 in both study groups. All healthy subjects showed perfectly symmetrical tongue ROM. In patients, significant differences in lateral tongue movements were found, due to restricted tongue mobility after surgery. This triple-camera setup is a reliable measurement tool to assess three-dimensional information of tongue ROM. It constitutes an accurate tool for objective grading of reduced tongue mobility after partial glossectomy. PMID:26516075
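
    The ROM measure, distances of the tongue tip from the maxillary midline, reduces to signed point-plane distances; a minimal sketch with an assumed midline plane (point plus normal):

```python
import numpy as np

def lateral_excursion(tip_traj, midline_point, midline_normal):
    """Signed lateral distance of the tongue tip from the maxillary
    midline plane; returns the maximum excursion to each side across
    a recorded movement (illustrative geometry, not the paper's setup)."""
    n = np.asarray(midline_normal, float)
    n = n / np.linalg.norm(n)
    d = (np.asarray(tip_traj, float) - np.asarray(midline_point, float)) @ n
    return d.max(), d.min()
```

    Asymmetry between the two returned extremes would flag the restricted lateral mobility reported in the post-glossectomy patients.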

  15. Accuracy and precision of gait events derived from motion capture in horses during walk and trot.

    PubMed

    Boye, Jenny Katrine; Thomsen, Maj Halling; Pfau, Thilo; Olsen, Emil

    2014-03-21

    This study aimed to create an evidence base for detection of stance-phase timings from motion capture in horses. The objective was to compare the accuracy (bias) and precision (SD) for five published algorithms for the detection of hoof-on and hoof-off using force plates as the reference standard. Six horses were walked and trotted over eight force plates surrounded by a synchronised 12-camera infrared motion capture system. The five algorithms (A-E) were based on: (A) horizontal velocity of the hoof; (B) fetlock angle and horizontal hoof velocity; (C) horizontal displacement of the hoof relative to the centre of mass; (D) horizontal velocity of the hoof relative to the centre of mass; and (E) vertical acceleration of the hoof. A total of 240 stance phases in walk and 240 stance phases in trot were included in the assessment. Method D provided the most accurate and precise results in walk for stance phase duration with a bias of 4.1% for front limbs and 4.8% for hind limbs. For trot we derived a combination of method A for hoof-on and method E for hoof-off resulting in a bias of -6.2% of stance in the front limbs and method B for the hind limbs with a bias of 3.8% of stance phase duration. We conclude that motion capture yields accurate and precise detection of gait events for horses walking and trotting over ground and the results emphasise a need for different algorithms for front limbs versus hind limbs in trot. PMID:24529754
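
    The velocity-threshold principle behind algorithm A can be sketched as follows; the threshold value and signal are illustrative, not the study's tuned implementation:

```python
import numpy as np

def detect_stance(hoof_vel, thresh=0.05):
    """Velocity-threshold gait-event detector in the spirit of method A:
    the hoof is in stance while its horizontal speed stays below a
    threshold. Returns (hoof_on_indices, hoof_off_indices)."""
    below = np.abs(np.asarray(hoof_vel)) < thresh
    edges = np.diff(below.astype(int))
    hoof_on = np.where(edges == 1)[0] + 1    # swing -> stance transitions
    hoof_off = np.where(edges == -1)[0] + 1  # stance -> swing transitions
    return hoof_on, hoof_off
```

    Comparing such detections against force-plate timings is exactly how the study quantifies bias and precision per algorithm.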

  16. Applied research of embedded WiFi technology in the motion capture system

    NASA Astrophysics Data System (ADS)

    Gui, Haixia

    2012-04-01

    Embedded WiFi is currently one of the hot spots in wireless network applications. This paper first introduces the definition and characteristics of WiFi. Drawing on WiFi's advantages, such as wiring-free installation, simple operation and stable transmission, the paper then presents a system design applying embedded wireless WiFi technology to a motion capture system. The effectiveness of the design is verified with WiFi-based wireless sensor hardware and software.

  17. Three-dimensional finite element analysis of unilateral mastication in malocclusion cases using cone-beam computed tomography and a motion capture system

    PubMed Central

    2016-01-01

    Purpose Stress distribution and mandible distortion during lateral movements are known to be closely linked to bruxism, dental implant placement, and temporomandibular joint disorder. The present study was performed to determine stress distribution and distortion patterns of the mandible during lateral movements in Class I, II, and III relationships. Methods Five Korean volunteers (one normal, two Class II, and two Class III occlusion cases) were selected. Finite element (FE) modeling was performed using information from cone-beam computed tomographic (CBCT) scans of the subjects’ skulls, scanned images of dental casts, and incisor movement captured by an optical motion-capture system. Results In the Class I and II cases, maximum stress load occurred at the condyle of the balancing side, but, in the Class III cases, the maximum stress was loaded on the condyle of the working side. Maximum distortion was observed on the menton at the midline in every case, regardless of loading force. The distortion was greatest in Class III cases and smallest in Class II cases. Conclusions The stress distribution along and accompanying distortion of a mandible seems to be affected by the anteroposterior position of the mandible. Additionally, 3-D modeling of the craniofacial skeleton using CBCT and an optical laser scanner and reproduction of mandibular movement by way of the optical motion-capture technique used in this study are reliable techniques for investigating the masticatory system. PMID:27127690

  18. Relationships of a Circular Singer Arm Gesture to Acoustical and Perceptual Measures of Singing: A Motion Capture Study

    ERIC Educational Resources Information Center

    Brunkan, Melissa C.

    2016-01-01

    The purpose of this study was to validate previous research that suggests using movement in conjunction with singing tasks can affect intonation and perception of the task. Singers (N = 49) were video and audio recorded, using a motion capture system, while singing a phrase from a familiar song, first with no motion, and then while doing a low,…

  19. Kinematics differences between the flat, kick, and slice serves measured using a markerless motion capture method.

    PubMed

    Sheets, Alison L; Abrams, Geoffrey D; Corazza, Stefano; Safran, Marc R; Andriacchi, Thomas P

    2011-12-01

    Tennis injuries have been associated with serving mechanics, but quantitative kinematic measurements in realistic environments are limited by current motion capture technologies. This study tested for kinematic differences at the lower back, shoulder, elbow, wrist, and racquet between the flat, kick, and slice serves using a markerless motion capture (MMC) system. Seven male NCAA Division 1 players were tested on an outdoor court in daylight conditions. Peak racquet and joint center speeds occurred sequentially and increased from proximal (back) to distal (racquet). Racquet speeds at ball impact were not significantly different between serve types. However, there were significant differences in the direction of the racquet velocity vector between serves: the kick serve had the largest lateral and smallest forward racquet velocity components, while the flat serve had the smallest vertical component (p < 0.01). The slice serve had lateral velocity, like the kick, and large forward velocity, like the flat. Additionally, the racquet in the kick serve was positioned 8.7 cm more posterior and 21.1 cm more medial than the shoulder compared with the flat, which could suggest an increased risk of shoulder and back injury associated with the kick serve. This study demonstrated the potential for MMC for testing sports performance under natural conditions. PMID:21984513

  20. Biomechanical model-based displacement estimation in micro-sensor motion capture

    NASA Astrophysics Data System (ADS)

    Meng, X. L.; Zhang, Z. Q.; Sun, S. Y.; Wu, J. K.; Wong, W. C.

    2012-05-01

    In micro-sensor motion capture systems, the estimation of the body displacement in the global coordinate system remains a challenge due to lack of external references. This paper proposes a self-contained displacement estimation method based on a human biomechanical model to track the position of walking subjects in the global coordinate system without any additional supporting infrastructures. The proposed approach makes use of the biomechanics of the lower body segments and the assumption that during walking there is always at least one foot in contact with the ground. The ground contact joint is detected based on walking gait characteristics and used as the external references of the human body. The relative positions of the other joints are obtained from hierarchical transformations based on the biomechanical model. Anatomical constraints are proposed to apply to some specific joints of the lower body to further improve the accuracy of the algorithm. Performance of the proposed algorithm is compared with an optical motion capture system. The method is also demonstrated in outdoor and indoor long distance walking scenarios. The experimental results demonstrate clearly that the biomechanical model improves the displacement accuracy within the proposed framework.
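
    The core idea, anchoring the kinematic chain at the detected ground-contact joint and propagating positions through hierarchical transformations, can be sketched in the sagittal plane; the two-segment leg and angle convention below are illustrative simplifications:

```python
import numpy as np

def pelvis_position(foot_pos, shank_angle, thigh_angle, shank_len, thigh_len):
    """Sagittal-plane sketch: with the stance foot as the external
    reference, the pelvis (hip) position follows from chaining segment
    vectors up the leg. Angles are measured from vertical, in radians."""
    ankle = np.asarray(foot_pos, float)
    knee = ankle + shank_len * np.array([np.sin(shank_angle), np.cos(shank_angle)])
    hip = knee + thigh_len * np.array([np.sin(thigh_angle), np.cos(thigh_angle)])
    return hip
```

    When the contact foot switches during gait, the new stance foot becomes the reference and the chain is re-rooted, which is how the self-contained method accumulates global displacement without external infrastructure.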

  1. FuryExplorer: visual-interactive exploration of horse motion capture data

    NASA Astrophysics Data System (ADS)

    Wilhelm, Nils; Vögele, Anna; Zsoldos, Rebeka; Licka, Theresia; Krüger, Björn; Bernard, Jürgen

    2015-01-01

    The analysis of equine motion has a long tradition. Equine biomechanics aims at detecting characteristics of horses indicative of good performance. In veterinary medicine especially, gait analysis plays an important role in diagnostics and in the emerging research on the long-term effects of athletic exercise. More recently, the incorporation of motion capture technology has contributed to an easier and faster analysis, with a trend from mere observation of horses towards the analysis of multivariate time-oriented data. However, because this topic has only recently been raised in an interdisciplinary context, there is as yet a lack of visual-interactive interfaces to facilitate time series data analysis and information discourse for the veterinary and biomechanics communities. In this design study, we bring visual analytics technology into the respective domains, which, to the best of our knowledge, has never been approached before. Based on requirements developed in the domain characterization phase, we present a visual-interactive system for the exploration of horse motion data. The system provides multiple views which enable domain experts to explore frequent poses and motions, but also to drill down to interesting subsets, possibly containing unexpected patterns. We show the applicability of the system in two exploratory use cases, one on the comparison of different gait motions, and one on the analysis of lameness recovery. Finally, we present the results of a summative user study conducted in the environment of the domain experts. The overall outcome was a significant improvement in effectiveness and efficiency in the analytical workflow of the domain experts.

  2. 3D RVE models able to capture and quantify the dispersion, agglomeration and orientation state of CNT in CNT/PP nanocomposites

    NASA Astrophysics Data System (ADS)

    Bhuiyan, Md; Pucha, Raghuram; Kalaitzidou, Kyriaki

    2016-02-01

    The focus of this study is to investigate the capabilities of 3D RVE models in predicting the tensile modulus of carbon nanotube polypropylene (CNT/PP) composites which differ slightly in the dispersion, agglomeration and orientation states of CNT within the PP matrix. The composites are made using melt mixing followed by either injection molding or melt spinning of fibers. The dispersion, agglomeration and orientation of CNT within the PP are experimentally altered by using a surfactant and by forcing the molten material to flow through a narrow orifice (melt spinning) that promotes alignment of CNT along the flow/drawing direction. An elaborate image analysis technique is used to quantify the CNT characteristics in terms of probability distribution functions (PDF). The PDF are then introduced to the 3D RVE models which also account for the CNT-PP interfacial interactions. It is concluded that the 3D RVE models can accurately distinguish among the different cases (dispersion, distribution, geometry and alignment of CNT) as the predicted tensile modulus is in good agreement with the experimentally determined one.

  3. Reaction null-space filter: extracting reactionless synergies for optimal postural balance from motion capture data.

    PubMed

    Nenchev, D N; Miyamoto, Y; Iribe, H; Takeuchi, K; Sato, D

    2016-06-01

    This paper introduces the notion of a reactionless synergy: a postural variation for a specific motion pattern/strategy, whereby the movements of the segments do not alter the force/moment balance at the feet. Given an optimal initial posture in terms of stability, a reactionless synergy can ensure optimality throughout the entire movement. Reactionless synergies are derived via a dynamical model wherein the feet are regarded as unfixed. In contrast with conventional fixed-feet models, this approach has the advantage of exhibiting the reactions at the feet explicitly. The dynamical model also facilitates a joint-space decomposition scheme yielding two motion components: the reactionless synergy and an orthogonal complement responsible for the dynamical coupling between the feet and the support. Since the reactionless synergy provides the basis (a feedforward control component) for optimal balance control, it may play an important role when evaluating balance abnormalities or when assessing optimality in balance control. We show how to apply the proposed method for analysis of motion capture data obtained from three voluntary movement patterns in the sagittal plane: squat, sway, and forward bend. PMID:26273732

  4. An effective attentional set for a specific colour does not prevent capture by infrequently presented motion distractors.

    PubMed

    Retell, James D; Becker, Stefanie I; Remington, Roger W

    2016-07-01

    An organism's survival depends on the ability to rapidly orient attention to unanticipated events in the world. Yet, the conditions needed to elicit such involuntary capture remain in doubt. Especially puzzling are spatial cueing experiments, which have consistently shown that involuntary shifts of attention to highly salient distractors are not determined by stimulus properties, but instead are contingent on attentional control settings induced by task demands. Do we always need to be set for an event to be captured by it, or is there a class of events that draw attention involuntarily even when unconnected to task goals? Recent results suggest that a task-irrelevant event will capture attention on first presentation, suggesting that salient stimuli that violate contextual expectations might automatically capture attention. Here, we investigated the role of contextual expectation by examining whether an irrelevant motion cue that was presented only rarely (∼3-6% of trials) would capture attention when observers had an active set for a specific target colour. The motion cue had no effect when presented frequently, but when rare produced a pattern of interference consistent with attentional capture. The critical dependence on the frequency with which the irrelevant motion singleton was presented is consistent with early theories of involuntary orienting to novel stimuli. We suggest that attention will be captured by salient stimuli that violate expectations, whereas top-down goals appear to modulate capture by stimuli that broadly conform to contextual expectations. PMID:26299891

  5. 3D graphene oxide-polymer hydrogel: near-infrared light-triggered active scaffold for reversible cell capture and on-demand release.

    PubMed

    Li, Wen; Wang, Jiasi; Ren, Jinsong; Qu, Xiaogang

    2013-12-10

    An active cell scaffold based on a graphene-polymer hydrogel has been successfully fabricated. The macroporous hydrogel can efficiently capture cells not only through the bioadhesive ligand RGD but also through on-demand release of cells with an NIR light stimulus. The latter process shows better dynamic control over cells than traditional passive-hydrogel-based cell depots. PMID:24123218

  6. 3D measurement of human upper body for gesture recognition

    NASA Astrophysics Data System (ADS)

    Wan, Khairunizam; Sawada, Hideyuki

    2007-10-01

    Measurement of human motion is widely required for various applications, and a significant part of this task is identifying motion in the process of human motion recognition. This research serves several application areas, such as surveillance, entertainment, medical treatment and traffic, as well as user interfaces that require the recognition of different parts of the human body to identify an action or a motion. The most challenging task in human motion recognition is achieving the ability and reliability of a motion capture system for tracking and recognizing dynamic movements, because the human body structure has many degrees of freedom. Many attempts at recognizing body actions have been reported so far, in which gestural motions are first measured by sensors and the obtained data are then processed in a computer. This paper introduces the 3D motion analysis of the human upper body using an optical motion capture system for the purpose of gesture recognition. In this study, an image processing technique to track optical markers attached at feature points of the human body is introduced for constructing a human upper body model and estimating its three-dimensional motion.
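
    In the simplest case, tracking an optical marker in an image frame reduces to an intensity-weighted centroid of bright pixels; a toy sketch (the threshold is an assumption, and one marker per region of interest is assumed):

```python
import numpy as np

def marker_centroid(frame, thresh=200):
    """Intensity-weighted centroid (x, y) of pixels brighter than a
    threshold: a minimal stand-in for optical marker localisation."""
    ys, xs = np.nonzero(frame > thresh)
    w = frame[ys, xs].astype(float)
    return np.array([(xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()])
```

    Repeating this per camera and triangulating across views yields the 3D feature-point trajectories that drive the upper-body model.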

  7. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
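
    The Convolvotron's core operation, filtering a source with ear-specific impulse responses so the sound appears to come from a fixed location, can be sketched with plain convolution; the two-tap HRIRs below are illustrative, not measured head-related responses:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Spatialise a mono source by convolving it with a left- and
    right-ear head-related impulse response (toy HRIRs, not measured)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right
```

    Delaying and attenuating one ear's response relative to the other shifts the perceived source laterally; the real system additionally updates the filters over time to compensate for head motion.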

  8. Using Averaging-Based Factorization to Compare Seismic Hazard Models Derived from 3D Earthquake Simulations with NGA Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Wang, F.; Jordan, T. H.

    2012-12-01

    Seismic hazard models based on empirical ground motion prediction equations (GMPEs) employ a model-based factorization to account for source, propagation, and path effects. An alternative is to simulate these effects directly using earthquake source models combined with three-dimensional (3D) models of Earth structure. We have developed an averaging-based factorization (ABF) scheme that facilitates the geographically explicit comparison of these two types of seismic hazard models. For any fault source k with epicentral position x, slip spatial and temporal distribution f, and moment magnitude m, we calculate the excitation functions G(s, k, x, m, f) for sites s in a geographical region R, such as 5% damped spectral acceleration at a particular period. Through a sequence of weighted-averaging and normalization operations following a certain hierarchy over f, m, x, k, and s, we uniquely factorize G(s, k, x, m, f) into six components: A, B(s), C(s, k), D(s, k, x), E(s, k, x, m), and F(s, k, x, m, f). Factors for a target model can be divided by those of a reference model to obtain six corresponding factor ratios, or residual factors: a, b(s), c(s, k), d(s, k, x), e(s, k, x, m), and f(s, k, x, m, f). We show that these residual factors characterize differences in basin effects primarily through b(s), distance scaling primarily through c(s, k), and source directivity primarily through d(s, k, x). We illustrate the ABF scheme by comparing the CyberShake Hazard Model (CSHM) for the Los Angeles region (Graves et al. 2010) with the Next Generation Attenuation (NGA) GMPEs modified according to the directivity relations of Spudich and Chiou (2008). Relative to CSHM, all NGA models underestimate the directivity and basin effects. In particular, the NGA models do not account for the coupling between source directivity and basin excitation that substantially enhances the low-frequency seismic hazards in the sedimentary basins of the Los Angeles region. Assuming Cyber
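
    A toy two-level analogue of the averaging-based factorization shows the mechanics of the hierarchical averaging and normalization (the real ABF uses six levels over f, m, x, k, and s; here a log-excitation matrix G[s, x] is split into a grand mean, a per-site factor, and a residual):

```python
import numpy as np

def abf_two_level(G):
    """Two-level averaging-based factorization of a matrix of
    log-excitations G[s, x]:
        G = A + B[:, None] + R
    with mean(B) = 0 and mean(R, axis=1) = 0, so each factor captures
    variance unexplained at the coarser level."""
    A = G.mean()                    # grand mean over sites and positions
    B = G.mean(axis=1) - A          # per-site deviation from the mean
    R = G - A - B[:, None]          # residual after removing both
    return A, B, R
```

    Dividing a target model's factors by a reference model's (differencing in log space) then gives residual factors analogous to a, b(s), and so on.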

  9. Quantitative Evaluation of 3D Mouse Behaviors and Motor Function in the Open-Field after Spinal Cord Injury Using Markerless Motion Tracking

    PubMed Central

    Sheets, Alison L.; Lai, Po-Lun; Fisher, Lesley C.; Basso, D. Michele

    2013-01-01

    Thousands of scientists strive to identify cellular mechanisms that could lead to breakthroughs in developing ameliorative treatments for debilitating neural and muscular conditions such as spinal cord injury (SCI). Most studies use rodent models to test hypotheses, and these are all limited by the methods available to evaluate animal motor function. This study’s goal was to develop a behavioral and locomotor assessment system in a murine model of SCI that enables quantitative kinematic measurements to be made automatically in the open-field by applying markerless motion tracking approaches. Three-dimensional movements of eight naïve, five mild, five moderate, and four severe SCI mice were recorded using 10 cameras (100 Hz). Background subtraction was used in each video frame to identify the animal’s silhouette, and the 3D shape at each time was reconstructed using shape-from-silhouette. The reconstructed volume was divided into front and back halves using k-means clustering. The animal’s front Center of Volume (CoV) height and whole-body CoV speed were calculated and used to automatically classify animal behaviors including directed locomotion, exploratory locomotion, meandering, standing, and rearing. More detailed analyses of CoV height, speed, and lateral deviation during directed locomotion revealed behavioral differences and functional impairments in animals with mild, moderate, and severe SCI when compared with naïve animals. Naïve animals displayed the widest variety of behaviors including rearing and crossing the center of the open-field, the fastest speeds, and tallest rear CoV heights. SCI reduced the range of behaviors, and decreased speed (r = .70 p<.005) and rear CoV height (r = .65 p<.01) were significantly correlated with greater lesion size. This markerless tracking approach is a first step toward fundamentally changing how rodent movement studies are conducted. By providing scientists with sensitive, quantitative measurement
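
    The front/back split of the reconstructed volume via k-means and the centre-of-volume computation can be sketched with a minimal 2-means on the body axis; the initialisation and the choice of the x axis as the body axis are assumptions:

```python
import numpy as np

def split_front_back(points, n_iter=20):
    """Split a reconstructed 3D point/voxel cloud into front and back
    halves with 2-means clustering on the x (body-axis) coordinate,
    then return each half's centre of volume (CoV)."""
    points = np.asarray(points, float)
    x = points[:, 0]
    c = np.array([x.min(), x.max()], float)          # initial centroids
    for _ in range(n_iter):
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    front = points[labels == np.argmax(c)]
    back = points[labels == np.argmin(c)]
    return front.mean(axis=0), back.mean(axis=0)
```

    Tracking the front CoV height and the whole-body CoV speed over frames is what drives the automatic behavior classification described above.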

  10. Analytical evaluation of the effects of inconsistent anthropometric measurements on joint kinematics in motion capturing.

    PubMed

    Krumm, Dominik; Cockcroft, John; Zaumseil, Falk; Odenwald, Stephan; Milani, Thomas L; Louw, Quinette

    2016-05-01

    Clinical decisions based on gait data obtained by optoelectronic motion capturing require profound knowledge about the repeatability of the measurement systems and methods used. The purpose of this study was to evaluate the effects of inconsistent anthropometric measurements on joint kinematics calculated with the Plug-in Gait model. Therefore, a sensitivity study was conducted to ascertain how the joint kinematics output is affected by different anthropometric data inputs. One previously examined gait session of a healthy male subject and his anthropometric data, assessed by two experienced examiners, served as the basis for this analytical evaluation. This sensitivity study yielded a maximum difference in joint kinematics between the two sets of anthropometrics of up to 1.2°. In conclusion, this study has shown that the reliability of subjects' anthropometrics assessed by experienced examiners has no considerable effect on joint kinematics. PMID:27131168

  11. Extraction of bowing parameters from violin performance combining motion capture and sensors.

    PubMed

    Schoonderwaldt, E; Demoucron, M

    2009-11-01

    A method is described for measurement of a complete set of bowing parameters in violin performance. Optical motion capture was combined with sensors for accurate measurement of the main bowing parameters (bow position, bow velocity, bow acceleration, bow-bridge distance, and bow force) as well as secondary control parameters (skewness, inclination, and tilt of the bow). In addition, other performance features (moments of on/off in bow-string contact, string played, and bowing direction) were extracted. Detailed descriptions of the calculations of the bowing parameters, features, and calibrations are given. The described system is capable of measuring all bowing parameters without disturbing the player, allowing for detailed studies of musically relevant aspects of bow control and coordination of bowing parameters in bowed-string instrument performance. PMID:19894846
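
    Bow velocity and acceleration follow from the captured bow position by numerical differentiation; a minimal sketch (a real pipeline would low-pass filter the position signal before differentiating):

```python
import numpy as np

def bow_kinematics(pos, fs):
    """Derive bow velocity and acceleration from sampled bow position
    (two of the main bowing parameters) by numerical differentiation.

    pos: 1D array of bow positions (m); fs: sampling rate (Hz)."""
    dt = 1.0 / fs
    vel = np.gradient(pos, dt)      # central differences, one-sided at ends
    acc = np.gradient(vel, dt)
    return vel, acc
```

    Sign changes of the velocity mark bowing-direction changes, one of the performance features the system extracts.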

  12. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  13. TU-F-17A-04: Respiratory Phase-Resolved 3D MRI with Isotropic High Spatial Resolution: Determination of the Average Breathing Motion Pattern for Abdominal Radiotherapy Planning

    SciTech Connect

    Deng, Z; Pang, J; Yang, W; Yue, Y; Tuli, R; Fraass, B; Li, D; Fan, Z

    2014-06-15

    Purpose: To develop a retrospective 4D-MRI technique (respiratory phase-resolved 3D-MRI) for providing an accurate assessment of tumor motion secondary to respiration. Methods: A 3D projection reconstruction (PR) sequence with self-gating (SG) was developed for 4D-MRI on a 3.0T MRI scanner. The respiration-induced shift of the imaging target was recorded by SG signals acquired in the superior-inferior direction every 15 radial projections (i.e. temporal resolution 98 ms). A total of 73000 radial projections obtained in 8-min were retrospectively sorted into 10 time-domain evenly distributed respiratory phases based on the SG information. Ten 3D image sets were then reconstructed offline. The technique was validated on a motion phantom (gadolinium-doped water-filled box, frequency of 10 and 18 cycles/min) and humans (4 healthy and 2 patients with liver tumors). The imaging protocol included 8-min 4D-MRI followed by 1-min 2D-realtime (498 ms/frame) MRI as a reference. Results: The multiphase 3D image sets with isotropic high spatial resolution (1.56 mm) permit flexible image reformatting and visualization. No intra-phase motion-induced blurring was observed. Compared with 2D-realtime imaging, 4D-MRI yielded similar motion range (phantom: 10.46 vs. 11.27 mm; healthy subject: 25.20 vs. 17.9 mm; patient: 11.38 vs. 9.30 mm), reasonable displacement difference averaged over the 10 phases (0.74mm; 3.63mm; 1.65mm), and excellent cross-correlation (0.98; 0.96; 0.94) between the two displacement series. Conclusion: Our preliminary study has demonstrated that the 4D-MRI technique can provide high-quality respiratory phase-resolved 3D images that feature: a) isotropic high spatial resolution, b) a fixed scan time of 8 minutes, c) an accurate estimate of average motion pattern, and d) minimal intra-phase motion artifact. This approach has the potential to become a viable alternative solution to assess the impact of breathing on tumor motion and determine appropriate treatment margins.
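
    The retrospective sorting step can be sketched as assigning each projection (timestamped by its self-gating sample) a respiratory phase from its fractional position within the enclosing breathing cycle; peak detection on the SG signal is assumed already done:

```python
import numpy as np

def sort_into_phases(sg_times, peak_times, n_phases=10):
    """Assign each projection time a respiratory phase in 0..n_phases-1
    based on its fractional position between consecutive breathing-cycle
    peaks; times outside any complete cycle get phase -1."""
    peak_times = np.asarray(peak_times, float)
    phases = np.full(len(sg_times), -1)
    for i, t in enumerate(sg_times):
        k = np.searchsorted(peak_times, t, side='right') - 1
        if 0 <= k < len(peak_times) - 1:
            frac = (t - peak_times[k]) / (peak_times[k + 1] - peak_times[k])
            phases[i] = int(frac * n_phases) % n_phases
    return phases
```

    Reconstructing one 3D image per phase bin from the projections so labelled yields the ten phase-resolved volumes described above.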

  14. A New Calibration Methodology for Thorax and Upper Limbs Motion Capture in Children Using Magneto and Inertial Sensors

    PubMed Central

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-01

    Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. Indeed, their reduced size and weight and their wireless connectivity meet the requirement of minimal obtrusiveness and allow scientists to analyze children's motion in daily-life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg–Marquardt algorithm to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children. PMID:24412901

  15. A new calibration methodology for thorax and upper limbs motion capture in children using magneto and inertial sensors.

    PubMed

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-01

    Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. Indeed, their reduced size and weight and their wireless connectivity meet the requirement of minimal obtrusiveness and allow scientists to analyze children's motion in daily-life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg-Marquardt algorithm to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children. PMID:24412901
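    The rotation-retrieval step can be illustrated with a minimal hand-rolled Levenberg-Marquardt fit. The residual below is a generic vector-alignment cost, not the novel cost function proposed in the paper; the function names and the rotation-vector parameterization are illustrative assumptions.

```python
import numpy as np

def rodrigues(w):
    """Rotation matrix from a rotation vector (axis * angle)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def fit_rotation_lm(v_sf, v_ff, iters=50, lam=1e-3):
    """Levenberg-Marquardt fit of the rotation that maps sensor-frame
    vectors v_sf (N x 3) onto functional-frame vectors v_ff (N x 3)."""
    w = np.zeros(3)
    def resid(w):
        return (v_sf @ rodrigues(w).T - v_ff).ravel()
    for _ in range(iters):
        r = resid(w)
        # numerical Jacobian of the residual w.r.t. the rotation vector
        J = np.empty((r.size, 3))
        for j in range(3):
            dw = np.zeros(3); dw[j] = 1e-6
            J[:, j] = (resid(w + dw) - r) / 1e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
        if np.linalg.norm(resid(w + step)) < np.linalg.norm(r):
            w = w + step; lam *= 0.5   # accept step, trust model more
        else:
            lam *= 10.0                # reject step, damp harder
    return rodrigues(w)
```

    Recovering the SF-to-FF rotation matrix this way requires at least two non-parallel calibration vectors per segment (e.g. gravity in static poses and a functional rotation axis).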

  16. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  17. Dynamic heterogeneity of DNA methylation and hydroxymethylation in embryonic stem cell populations captured by single-cell 3D high-content analysis

    SciTech Connect

    Tajbakhsh, Jian; Stefanovski, Darko; Tang, George; Wawrowsky, Kolja; Liu, Naiyou; Fair, Jeffrey H.

    2015-03-15

    Cell-surface markers and transcription factors are being used in the assessment of stem cell fate and therapeutic safety, but display significant variability in stem cell cultures. We assessed nuclear patterns of 5-hydroxymethylcytosine (5hmC, associated with pluripotency), a second important epigenetic mark, and its combination with 5-methylcytosine (5mC, associated with differentiation), also in comparison to more established markers of pluripotency (Oct-4) and endodermal differentiation (FoxA2, Sox17) in mouse embryonic stem cells (mESC) over a 10-day differentiation course in vitro, by means of confocal and super-resolution imaging together with 3D high-content analysis, an essential tool in single-cell screening. In summary: 1) We did not measure any significant correlation of putative markers with global 5mC or 5hmC. 2) While average Oct-4 levels stagnated on a cell-population base (0.015 lnIU/day), Sox17 and FoxA2 increased 22-fold and 3-fold faster, respectively (Sox17: 0.343 lnIU/day; FoxA2: 0.046 lnIU/day). In comparison, global DNA methylation levels increased 4-fold faster (0.068 lnIU/day), and global hydroxymethylation declined at 0.046 lnIU/day, both with a better explanation of the temporal profile. 3) This progression was concomitant with the occurrence of distinct nuclear codistribution patterns that represented a heterogeneous spectrum of states in differentiation, converging to three major coexisting 5mC/5hmC phenotypes by day 10: 5hmC+/5mC−, 5hmC+/5mC+, and 5hmC−/5mC+ cells. 4) Using optical nanoscopy we could delineate the respective topologies of 5mC/5hmC colocalization in subregions of nuclear DNA: in the majority of 5hmC+/5mC+ cells 5hmC and 5mC predominantly occupied mutually exclusive territories resembling euchromatic and heterochromatic regions, respectively. Simultaneously, in a smaller subset of cells we observed a tighter colocalization of the two cytosine variants, presumably

  18. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) often falls short for many applications, resulting in motion blur and failure to capture fast transient information. Here, we present a novel type of laser scanner that offers roughly three orders of magnitude higher scan rates than conventional methods. Our laser scanner, which we refer to as the hybrid dispersion laser scanner, performs inertia-free laser scanning by dispersing a train of broadband pulses both temporally and spatially. More specifically, each broadband pulse is temporally processed by time stretch dispersive Fourier transform and further dispersed into space by one or more diffractive elements such as prisms and gratings. As a proof-of-principle demonstration, we perform 1D line scans at a record high scan rate of 91 MHz and 2D raster scans and 3D volumetric scans at an unprecedented scan rate of 105 kHz. The method holds promise for a broad range of scientific, industrial, and biomedical applications. To show the utility of our method, we demonstrate imaging, nanometer-resolved surface vibrometry, and high-precision flow cytometry with real-time throughput that conventional laser scanners cannot offer due to their low scan rates.

  19. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
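    As a simplified stand-in for the estimation step described above, the sketch below fits a 2D similarity transform (in-plane roll, scale, and shift) to matched left/right keypoints by linear least squares and reports the residual vertical disparity per match. The paper's full algorithm also estimates pitch and yaw, which this toy model omits; all names are illustrative.

```python
import numpy as np

def similarity_from_matches(pts_r, pts_l):
    """Least-squares 2D similarity mapping right-image keypoints (N x 2)
    onto left-image keypoints: [xl, yl] = [a -c; c a] [xr, yr] + [tx, ty].
    Returns scale, roll, shift, and the residual vertical disparity."""
    xr, yr = pts_r[:, 0], pts_r[:, 1]
    xl, yl = pts_l[:, 0], pts_l[:, 1]
    n = len(xr)
    A = np.zeros((2 * n, 4))
    b = np.empty(2 * n)
    A[0::2] = np.c_[xr, -yr, np.ones(n), np.zeros(n)]   # xl equations
    b[0::2] = xl
    A[1::2] = np.c_[yr, xr, np.zeros(n), np.ones(n)]    # yl equations
    b[1::2] = yl
    (a, c, tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    scale = np.hypot(a, c)
    roll = np.arctan2(c, a)
    v_residual = yl - (c * xr + a * yr + ty)  # vertical disparity left over
    return scale, roll, (tx, ty), v_residual
```

    In this toy model, the remaining vertical disparity after correcting for the fitted roll, scale, and shift plays the role of the evaluation metric mentioned in the abstract.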

  20. QUANTIFYING UNCERTAINTIES IN GROUND MOTION SIMULATIONS FOR SCENARIO EARTHQUAKES ON THE HAYWARD-RODGERS CREEK FAULT SYSTEM USING THE USGS 3D VELOCITY MODEL AND REALISTIC PSEUDODYNAMIC RUPTURE MODELS

    SciTech Connect

    Rodgers, A; Xie, X

    2008-01-09

    This project seeks to compute ground motions for large (M>6.5) scenario earthquakes on the Hayward Fault using realistic pseudodynamic ruptures, the USGS three-dimensional (3D) velocity model and anelastic finite difference simulations on parallel computers. We will attempt to bound ground motions by performing simulations with suites of stochastic rupture models for a given scenario on a given fault segment. The outcome of this effort will provide the average, spread and range of ground motions that can be expected from likely large earthquake scenarios. The resulting ground motions will be based on first-principles calculations and include the effects of slip heterogeneity, fault geometry and directivity, however, they will be band-limited to relatively low-frequency (< 1 Hz).

  1. 3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading

    PubMed Central

    2011-01-01

    Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. The current solutions are often not very promising for the patient; it would therefore be valuable to measure the dynamic 3D-deformation of the whole pelvic bone in order to obtain a more realistic dataset for a better implant design. We therefore hypothesised that it would be possible to combine a material testing machine with a 3D video motion capturing system, as used in clinical gait analysis, to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capturing system, were fixed to the bony surface of the pelvis specimen. While a dynamic sinusoidal load was applied, the 3D-movement of the markers was recorded by the cameras and the 3D-deformation of the pelvis specimen was subsequently computed. The accuracy of the 3D-movement of the markers was verified against a 3D-displacement curve with a step function generated by a manually driven 3D micro-motion-stage. Results The resulting accuracy of the measurement system depended on the number of cameras tracking a marker. The noise level during the stationary phase of the calibration procedure was ±0.036 mm for a marker seen by two cameras, and ±0.022 mm if tracked by 6 cameras. The detectable 3D-movement performed by the 3D-micro-motion-stage was smaller than the noise level of the 3D video motion capturing system. The limiting factor of the setup was therefore the noise level, which resulted in a measurement accuracy for the dynamic test setup of ±0.036 mm. Conclusion This 3D test setup opens new possibilities in dynamic testing of a wide range of materials, like anatomical specimens, biomaterials, and their combinations. The resulting 3D-deformation dataset can be used for a better estimation of material
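    One plausible way to express the quoted ± noise figures for a nominally stationary marker (the exact statistic used in the study is not specified in the abstract) is the maximum absolute deviation of each 3D coordinate from its mean:

```python
import numpy as np

def marker_noise_level(positions):
    """Noise level of a nominally stationary marker, expressed as the
    maximum absolute deviation of any 3D coordinate from its mean
    (one plausible reading of the quoted +/- mm figures)."""
    p = np.asarray(positions, dtype=float)
    return np.abs(p - p.mean(axis=0)).max()
```
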

  2. Repeatability of three-dimensional thorax and pelvis kinematics in the golf swing measured using a field-based motion capture system.

    PubMed

    Evans, Kerrie; Horan, Sean A; Neal, Robert J; Barrett, Rod S; Mills, Peter M

    2012-06-01

    Field-based methods of evaluating three-dimensional (3D) swing kinematics offer coaches and researchers the opportunity to assess golfers in context-specific environments. The purpose of this study was to establish the inter-trial, between-tester, between-location, and between-day repeatability of thorax and pelvis kinematics during the downswing using an electromagnetic motion capture system. Two experienced testers measured swing kinematics in 20 golfers (handicap ≤14 strokes) on consecutive days in an indoor and outdoor location. Participants performed five swings with each of two clubs (five-iron and driver) at each test condition. Repeatability of 3D kinematic data was evaluated by computing the coefficient of multiple determination (CMD) and the systematic error (SE). With the exception of pelvis forward bend for between-day and between-tester conditions, CMDs exceeded 0.854 for all variables, indicating high levels of overall waveform repeatability across conditions. When repeatability was compared across conditions using MANOVA, the lowest CMDs and highest SEs were found for the between-tester and between-day conditions. The highest CMDs were for the inter-trial and between-location conditions. The absence of significant differences in CMDs between these two conditions supports this method of analysing pelvis and thorax kinematics in different environmental settings without unduly affecting repeatability. PMID:22900406

  3. A 3-D Generalization of the Budyko Framework Captures the Mutual Interdependence Between Long-Term Mean Annual Precipitation, Actual and Potential Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Carmona, A. M.; Poveda, G.

    2012-12-01

    We study the behavior of the 3-D parameter space defined by Φ =PET/P (so-called Aridity Index), Ψ =AET/P, and Ω =AET/PET, where P denotes mean annual precipitation, and PET and AET denote mean annual potential and actual evapotranspiration, respectively. Using information from the CLIMWAT 2.0 database (www.fao.org/nr/water/infores_databases_climwat.html) for P and PET, we estimate AET using both Budyko's and Turc's equations. Our results indicate that the well-known Budyko function that relates Φ vs. Ψ corresponds to a particular bi-dimensional cross-section of a broader coupling existing between Φ, Ψ and Ω (Figure 1a), and in turn of the mutual interdependence between P, PET and AET. The behavior of the three bi-dimensional projections is clearly parameterized by the remaining orthogonal parameter, such that: (i) the relation Φ vs. Ψ is defined by physically consistent varying values of Ω (Figure 1b); (ii) the relation Ω vs. Ψ is defined by physically consistent varying values of the Aridity Index, Φ (Figure 1c), and (iii) the relation Ω vs. Φ is defined by physically consistent varying values of Ψ (Figure 1d). Interestingly, we show that Φ and Ω are related by a power law, Φ ~ Ω^(−θ), with scaling exponent θ=1.15 (R2=0.91, n=3420) for the whole world (Figure 1d). Mathematical functions that model the three bi-dimensional projections and the surface defining the interdependence between Φ, Ψ and Ω will be presented. Our results provide a new framework to understand the coupling between the long-term mean annual water and energy balances in river basins, and the hydrological effects brought about by climate change, while taking into account the mutual interdependence between the three non-dimensional parameters Φ, Ψ and Ω, and in turn between P, PET and AET. Figure 1. (a) Three-dimensional rendering of sample values of Φ =PET/P (so-called Aridity Index), Ψ =AET/P, and Ω=AET/PET. Bi-dimensional projections of: (b) relation Φ vs.
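    The reported power law Φ ~ Ω^(−θ) can be fitted by ordinary least squares in log-log space; a minimal sketch on synthetic data (not the CLIMWAT dataset):

```python
import numpy as np

def fit_power_law(omega, phi):
    """Estimate theta and the prefactor k in phi = k * omega**(-theta)
    by ordinary least squares on log(phi) vs. log(omega)."""
    slope, intercept = np.polyfit(np.log(omega), np.log(phi), 1)
    return -slope, np.exp(intercept)
```
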

  4. Predictive error detection in pianists: a combined ERP and motion capture study

    PubMed Central

    Maidhof, Clemens; Pitkäniemi, Anni; Tervaniemi, Mari

    2013-01-01

    Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported a pre-error negativity already occurring approximately 70–100 ms before the error was committed and became audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted consequences with actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi. In addition, no data about the underlying movements were available. In the present study, we exploratively recorded the ERPs and 3D movement data of pianists' fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed with comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories defined by when a finger makes initial contact with the key surface, that is, at the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error-monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for the detection of an upcoming error. PMID

  5. Combining EEG, MIDI, and motion capture techniques for investigating musical performance.

    PubMed

    Maidhof, Clemens; Kästner, Torsten; Makkonen, Tommi

    2014-03-01

    This article describes a setup for the simultaneous recording of electrophysiological data (EEG), musical data (MIDI), and three-dimensional movement data. Previously, each of these three different kinds of measurements, conducted sequentially, has been proven to provide important information about different aspects of music performance as an example of a demanding multisensory motor skill. With the method described here, it is possible to record brain-related activity and movement data simultaneously, with accurate timing resolution and at relatively low cost. EEG and MIDI data were synchronized with a modified version of the FTAP software, sending synchronization signals to the EEG recording device simultaneously with keypress events. Similarly, a motion capture system sent synchronization signals simultaneously with each recorded frame. The setup can be used for studies investigating cognitive and motor processes during music performance and music-like tasks, for example in the domains of motor control, learning, music therapy, or musical emotions. Thus, this setup offers a promising possibility for a more behaviorally driven analysis of brain activity. PMID:23943580

  6. Accuracy and Precision of a Custom Camera-Based System for 2-D and 3-D Motion Tracking during Speech and Nonspeech Motor Tasks

    ERIC Educational Resources Information Center

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable…

  7. Cross-Modal Dynamic Capture: Congruency Effects in the Perception of Motion Across Sensory Modalities

    ERIC Educational Resources Information Center

    Soto-Faraco, Salvador; Spence, Charles; Kingstone, Alan

    2004-01-01

    This study investigated multisensory interactions in the perception of auditory and visual motion. When auditory and visual apparent motion streams are presented concurrently in opposite directions, participants often fail to discriminate the direction of motion of the auditory stream, whereas perception of the visual stream is unaffected by the…

  8. Validation of Attitude and Heading Reference System and Microsoft Kinect for Continuous Measurement of Cervical Range of Motion Compared to the Optical Motion Capture System

    PubMed Central

    2016-01-01

    Objective To compare optical motion capture system (MoCap), attitude and heading reference system (AHRS) sensor, and Microsoft Kinect for the continuous measurement of cervical range of motion (ROM). Methods Fifteen healthy adult subjects were asked to sit in front of the Kinect camera with optical markers and AHRS sensors attached to the body in a room equipped with optical motion capture camera. Subjects were instructed to independently perform axial rotation followed by flexion/extension and lateral bending. Each movement was repeated 5 times while being measured simultaneously with 3 devices. Using the MoCap system as the gold standard, the validity of AHRS and Kinect for measurement of cervical ROM was assessed by calculating correlation coefficient and Bland–Altman plot with 95% limits of agreement (LoA). Results MoCap and AHRS showed fair agreement (95% LoA<10°), while MoCap and Kinect showed less favorable agreement (95% LoA>10°) for measuring ROM in all directions. Intraclass correlation coefficient (ICC) values between MoCap and AHRS in –40° to 40° range were excellent for flexion/extension and lateral bending (ICC>0.9). ICC values were also fair for axial rotation (ICC>0.8). ICC values between MoCap and Kinect system in –40° to 40° range were fair for all motions. Conclusion Our study showed feasibility of using AHRS to measure cervical ROM during continuous motion with an acceptable range of error. AHRS and Kinect system can also be used for continuous monitoring of flexion/extension and lateral bending in ordinary range. PMID:27606262
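    The 95% limits of agreement follow the standard Bland-Altman construction: bias ± 1.96 times the standard deviation of the paired differences. A minimal sketch, assuming paired angle series from the two devices:

```python
import numpy as np

def bland_altman_loa(a, b):
    """Bland-Altman bias and 95% limits of agreement between two paired
    measurement series (e.g. MoCap vs. AHRS angles)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # sample SD of the differences
    return bias, (bias - half_width, bias + half_width)
```
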

  9. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multifaceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user, and what metadata is needed to support the semantic linking.

  10. Coherence analysis for movement disorder motion captured by six degree-of-freedom inertial sensing

    NASA Astrophysics Data System (ADS)

    Teskey, Wesley J. E.; Elhabiby, Mohamed; El-Sheimy, Naser; MacIntosh, Brian

    2012-06-01

    The use of inertial sensors (accelerometers and gyroscopes) for evaluation of movement disorder motion, including essential tremor (ET) and Parkinson's disease (PD), is becoming prevalent. This paper uses a novel combination of six degree-of-freedom motion analysis and coherence-based processing methodologies to uncover differences in the signature of motion for the ET and PD movement disorders. This is the first analysis of such motions utilizing the novel methodology outlined, and it displays a distinct motion profile differentiating between these two groups. Such an analysis can be used to assist medical professionals in diagnosing movement disorders, given the currently high error rate of diagnosis. The Kalman smoothing analysis performed in this paper can also be useful in any application where tracking of human motion is required. Another contribution of the work is the use of wavelets in zero-phase-lag filtering, which helped prepare the data for analysis by removing unwanted frequencies without introducing distortions into the data.
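    The zero-phase idea, i.e. filtering without introducing phase distortion, can be illustrated by running a causal filter forward and then backward so that the phase lags of the two passes cancel. The paper uses wavelets rather than the simple moving-average kernel below, so treat this purely as an illustration of the principle:

```python
import numpy as np

def zero_phase_lowpass(x, width=5):
    """Zero-phase smoothing: apply a causal moving-average filter forward,
    then backward over the reversed signal, so the delays of the two
    passes cancel and features (e.g. tremor peaks) are not shifted."""
    kernel = np.ones(width) / width
    def causal(sig):
        return np.convolve(sig, kernel, mode='full')[:len(sig)]
    return causal(causal(x)[::-1])[::-1]
```

    A single causal pass of the same kernel delays the signal by (width-1)/2 samples; the forward-backward scheme removes that shift, which is what "zero phase lag" buys you when aligning tremor records across sensors.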

  11. The 3-D motion of the centre of gravity of the human body during level walking. I. Normal subjects at low and intermediate walking speeds.

    PubMed

    Tesio, L; Lanzi, D; Detrembleur, C

    1998-03-01

    OBJECTIVE: To measure the mechanical energy changes of the centre of gravity (CG) of the body in the forward, lateral and vertical direction during normal level walking at intermediate and low speeds. DESIGN: Eight healthy adults performed successive walks at speeds ranging from 0.25 to 1.75 m s(-1) over a dedicated force platform system. BACKGROUND: In previous studies, it was shown that the motion of the CG during gait can be altered more than the motion of individual segments. However, more detailed normative data are needed for clinical analysis. METHODS: The positive work done during the step to accelerate the body CG in the forward direction, W(f), to lift it, W(v), to accelerate it in the lateral direction, W(l), and the actual work done by the muscles to maintain its motion with respect to the ground ('external' work), W(ext), were measured. This allowed the calculation of the pendulum-like transfer between gravitational potential energy and kinetic energy of the CG (percentage recovery, R). At the optimal speed of about 1.3 m s(-1), this transfer allows saving of as much as 65% of the muscular work which would otherwise have been needed to keep the body in motion with respect to the ground. The distance covered by the CG at each step either forward (step length, S(l)) or vertically (vertical displacement, S(v)) was also recorded. RESULTS: W(l) was, as a median, only 1.6-5.9% of W(ext). This ratio was higher, the lower the speed. At each step, W(ext) is needed to sustain two distinct increments of the total mechanical energy of the CG, E(tot). The increment a takes place during the double stance phase; the increment b takes place during the single stance phase. Both of these increments increased with speed. Over the speed range analyzed, the power spent to sustain the a increment was 2.8-3.9 times higher than the power spent to sustain the b increment. PMID:11415774
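    The percentage recovery R can be computed from the sums of positive increments of the CG's forward kinetic energy, its vertical (potential plus vertical kinetic) energy, and their sum, following Cavagna's definition R = 100 (W(f)+W(v)−W(ext))/(W(f)+W(v)). The sketch below assumes CG velocity and height time series as inputs and ignores lateral work, so it is an illustration rather than the force-platform pipeline of the paper.

```python
import numpy as np

def percentage_recovery(vf, vv, h, mass, g=9.81):
    """Pendulum-like energy exchange of the body CG: Wf and Wv are the
    sums of positive increments of forward kinetic and vertical
    (potential + kinetic) energy; Wext those of their sum.
    R = 100 * (Wf + Wv - Wext) / (Wf + Wv)."""
    def positive_work(e):
        inc = np.diff(e)
        return inc[inc > 0].sum()
    ef = 0.5 * mass * vf ** 2                  # forward kinetic energy
    ev = mass * g * h + 0.5 * mass * vv ** 2   # vertical energy
    wf, wv = positive_work(ef), positive_work(ev)
    wext = positive_work(ef + ev)
    return 100.0 * (wf + wv - wext) / (wf + wv)
```

    A perfect pendulum (forward and vertical energy exactly out of phase) gives R near 100%, while in-phase fluctuations give R near 0%.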

  12. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation

    PubMed Central

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-01-01

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator that has a multiple-shape memory effect and is able to perform complex motion under two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability only could be realized with existing actuator technologies by using multiple actuators or another robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator having multiple degrees-of-freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would be otherwise difficult to obtain with a single actuator. To the best of the authors’ knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability. PMID:27080134

  13. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation.

    PubMed

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-01-01

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator that has a multiple-shape memory effect and is able to perform complex motion under two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability only could be realized with existing actuator technologies by using multiple actuators or another robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator having multiple degrees-of-freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would be otherwise difficult to obtain with a single actuator. To the best of the authors' knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability. PMID:27080134

  14. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation

    NASA Astrophysics Data System (ADS)

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-04-01

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator that has a multiple-shape memory effect and is able to perform complex motion via two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability could only be realized with existing actuator technologies by using multiple actuators or another robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator having multiple degrees-of-freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would otherwise be difficult to obtain with a single actuator. To the best of the authors’ knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability.

  15. Advantages of fibre lasers in 3D metal cutting and welding applications supported by a 'beam in motion (BIM)' beam delivery system

    NASA Astrophysics Data System (ADS)

    Scheller, Torsten; Bastick, André; Griebel, Martin

    2012-03-01

    Modern laser technology is continuously opening up new fields of application. Driven by the development of increasingly efficient laser sources, the technology is successfully entering classical applications such as 3D cutting and welding of metals. Laser manufacturing is key especially for lightweight applications in the automotive industry: only with this technology could narrower welding widths be realised, along with efficient machining of aluminium and abrasion-free machining of hardened steel. The paper compares the operation of different laser types in metal machining with regard to wavelength, laser power, laser brilliance, process speed and welding depth, to give an estimate of the best use of single-mode or multi-mode lasers in this field of application. The experimental results are illustrated with samples of applied parts. In addition, a correlation between the process and the achieved mechanical properties is made. For this application JENOPTIK Automatisierungstechnik GmbH is using the BIM beam control system in its machines, which is the first to realize a fully integrated combination of beam control and robot. The wide performance and wavelength range of the laser radiation which can be transmitted opens up diverse application possibilities and makes BIM a universal tool.

  16. 3-D TRMM Flyby of Hurricane Amanda

    NASA Video Gallery

    The TRMM satellite flew over Hurricane Amanda on Tuesday, May 27 at 1049 UTC (6:49 a.m. EDT) and captured rainfall rate and cloud height data that were used to create this 3-D simulated flyby. Cred...

  17. Comparison of Markerless and Marker-Based Motion Capture Technologies through Simultaneous Data Collection during Gait: Proof of Concept

    PubMed Central

    Cobelli, Claudio

    2014-01-01

    During the last decade markerless motion capture techniques have gained increasing interest in the biomechanics community. In the clinical field, however, the application of markerless techniques is still debated. This is mainly due to the limited number of papers dedicated to comparison with the state of the art of marker-based motion capture, in terms of repeatability of three-dimensional joint kinematics. In the present work the application of a markerless technique to data acquired with a marker-based system was investigated. All videos and external data were recorded with the same motion capture system, which allowed markerless and marker-based methods to be used simultaneously. Three-dimensional markerless joint kinematics was estimated and compared with that determined with traditional marker-based systems, through evaluation of the root mean square distance between joint rotations. In order to compare the performance of the markerless and marker-based systems in terms of clinically relevant joint angle estimation, the same anatomical frames of reference were defined for both systems. Differences in calibration and synchronization of the cameras were excluded by applying the same wand calibration and lens distortion correction to both techniques. Best results were achieved for the knee flexion-extension angle, with an average root mean square distance of 11.75 deg, corresponding to 18.35% of the range of motion. Sagittal plane kinematics was estimated better than that in the other planes also for the hip and ankle (root mean square distances of 17.62 deg, i.e. 44.66%, and 7.17 deg, i.e. 33.12%), while estimates for the hip joint were the least accurate. This technique enables users of markerless technology to quantify differences from marker-based systems in order to define the degree of applicability of the markerless technique. PMID:24595273
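
The root-mean-square distance metric reported above (e.g. 11.75 deg for knee flexion-extension, corresponding to 18.35% of the range of motion) can be sketched as follows; this is a minimal illustration with synthetic toy curves, not the study's recordings:

```python
import math

def rmsd_percent_rom(reference_deg, estimate_deg):
    """RMS distance between two joint-angle curves (deg), and the same
    error expressed as a percentage of the reference range of motion."""
    diffs = [(a - b) ** 2 for a, b in zip(reference_deg, estimate_deg)]
    rmsd = math.sqrt(sum(diffs) / len(diffs))
    rom = max(reference_deg) - min(reference_deg)
    return rmsd, 100.0 * rmsd / rom

# Toy knee flexion-extension curve over one gait cycle (degrees)
t = [i / 100 for i in range(101)]
knee_ref = [30 * math.sin(2 * math.pi * x) + 30 for x in t]
knee_est = [angle + 2.0 for angle in knee_ref]   # constant 2-deg offset
rmsd, pct = rmsd_percent_rom(knee_ref, knee_est)
print(f"RMSD = {rmsd:.2f} deg ({pct:.2f}% of RoM)")  # → RMSD = 2.00 deg (3.33% of RoM)
```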

  18. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  19. Fluid Substitution Modeling to Determine Sensitivity of 3D Vertical Seismic Profile Data to Injected CO2 at an active Carbon Capture, Utilization and Storage Project, Farnsworth field, TX.

    NASA Astrophysics Data System (ADS)

    Haar, K. K.; Balch, R. S.

    2015-12-01

    The Southwest Regional Partnership on Carbon Sequestration monitors a CO2 capture, utilization and storage project at Farnsworth field, TX. The reservoir interval is a Morrowan-age fluvial sand deposited in an incised valley. The sands are between 10 and 25 m thick and located about 2800 m below the surface. Primary oil recovery began in 1958 and by the late 1960s secondary recovery through waterflooding was underway. In 2009, Chaparral Energy began tertiary recovery using 100% anthropogenic CO2 sourced from an ethanol plant and a fertilizer plant. This constitutes carbon sequestration and fulfills the DOE's initiative to determine the best approach to permanent carbon storage. One purpose of the study is to understand CO2 migration from injection wells. CO2 plume spatial distribution for this project is analyzed with time-lapse 3D vertical seismic profiles (VSPs) centered on CO2 injection wells; these monitor raypaths traveling in a single direction, compared to surface seismic surveys with raypaths traveling in both directions. 3D VSP surveys can image up to 1.5 km away from the well of interest, exceeding regulatory requirements for maximum plume extent by a factor of two. To optimize the timing of repeat VSP acquisition, the sensitivity of the 3D VSP surveys to CO2 injection was analyzed to determine at what injection volumes a seismic response to the injected CO2 will be observable. Static geologic models were generated for pre-CO2 and post-CO2 reservoir states through construction of fine-scale seismic-based geologic models, which were then history-matched via flow simulations. These static states of the model, in which CO2 replaces oil and brine in pore spaces, allow generation of impedance volumes which, when convolved with a representative wavelet, yield the synthetic seismic volumes used in the sensitivity analysis. Funding for the project is provided by DOE's National Energy Technology Laboratory (NETL) under Award No. DE-FC26-05NT42591.
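
The final modeling step the abstract describes, convolving an impedance-derived reflectivity series with a representative wavelet to obtain a synthetic trace, can be sketched in 1-D; the layer impedances and the 30 Hz Ricker wavelet below are illustrative assumptions, not values from the Farnsworth model:

```python
import math

def reflectivity(impedance):
    """Reflection coefficients from an acoustic-impedance log:
    r_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i)."""
    return [(impedance[i + 1] - impedance[i]) / (impedance[i + 1] + impedance[i])
            for i in range(len(impedance) - 1)]

def ricker(f_peak, dt, n):
    """Zero-phase Ricker wavelet, a common 'representative wavelet' choice."""
    return [(1 - 2 * (math.pi * f_peak * (i - n // 2) * dt) ** 2)
            * math.exp(-(math.pi * f_peak * (i - n // 2) * dt) ** 2)
            for i in range(n)]

def convolve(r, w):
    """Full discrete convolution of reflectivity r with wavelet w."""
    out = [0.0] * (len(r) + len(w) - 1)
    for i, ri in enumerate(r):
        for j, wj in enumerate(w):
            out[i + j] += ri * wj
    return out

# Pre- vs post-injection logs: CO2 lowers impedance in the reservoir layer
baseline = [6.0e6] * 20 + [9.0e6] * 10 + [7.0e6] * 20   # kg/(m^2 s)
monitor  = [6.0e6] * 20 + [8.0e6] * 10 + [7.0e6] * 20
w = ricker(30.0, 0.002, 41)
diff = [a - b for a, b in zip(convolve(reflectivity(monitor), w),
                              convolve(reflectivity(baseline), w))]
print(f"max |time-lapse difference| = {max(abs(d) for d in diff):.3f}")
```

The nonzero time-lapse difference is what a repeat VSP must be sensitive enough to detect.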

  20. A cross-platform solution for light field based 3D telemedicine.

    PubMed

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their ability to provide a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the need for complicated multi-camera setups, calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide a glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. To the best of our knowledge from the literature, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for an LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient and provide an immersive 3D display at the doctor's site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and editing, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high-quality immersive tele-consultation experience. PMID:26689324

  1. Performance assessment of HIFU lesion detection by Harmonic Motion Imaging for Focused Ultrasound (HMIFU): A 3D finite-element-based framework with experimental validation

    PubMed Central

    Hou, Gary Y.; Luo, Jianwen; Marquet, Fabrice; Maleke, Caroline; Vappou, Jonathan; Konofagou, Elisa E.

    2014-01-01

    Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a novel high-intensity focused ultrasound (HIFU) therapy monitoring method whose feasibility has been demonstrated in vitro, ex vivo and in vivo. Its principle is based on amplitude-modulated (AM) Harmonic Motion Imaging (HMI), an oscillatory radiation force used for imaging the tissue mechanical response during thermal ablation. In this study, a theoretical framework of HMIFU is presented, comprising a customized nonlinear wave propagation model, a finite-element (FE) analysis module, and an image-formation model. The objective of this study is to develop such a framework in order to 1) assess the fundamental performance of HMIFU in detecting HIFU lesions based on the change in tissue apparent elasticity, i.e., the increasing Young's modulus, and the HIFU lesion size with respect to the HIFU exposure time, and 2) validate the simulation findings ex vivo. The same HMI and HMIFU parameters as in the experimental studies were used, i.e., 4.5-MHz HIFU frequency and 25-Hz AM frequency. For lesion-to-background Young's modulus ratios of 3, 6, and 9, the FE and estimated HMI displacement ratios were equal to 1.83, 3.69, 5.39 and 1.65, 3.19, 4.59, respectively. In experiments, the HMI displacement followed a similar increasing trend of 1.19, 1.28, and 1.78 at 10-s, 20-s, and 30-s HIFU exposures, respectively. In addition, moderate agreement in lesion size growth was found in both simulations (16.2, 73.1 and 334.7 mm2) and experiments (26.2, 94.2 and 206.2 mm2). Therefore, the feasibility of HMIFU for HIFU lesion detection based on the underlying tissue elasticity changes was verified through the developed theoretical framework: the fundamental performance of the HMIFU system for lesion detection, localization and quantification was demonstrated both theoretically and ex vivo. PMID:22036637

  2. 3D surface digitizing and modeling development at ITRI

    NASA Astrophysics Data System (ADS)

    Hsueh, Wen-Jean

    2000-06-01

    This paper gives an overview of the research and development activities in 3D surface digitizing and modeling conducted at the Industrial Technology Research Institute (ITRI) of Taiwan over the past decade. As a major technology and consulting service provider in the area, ITRI has developed 3D laser scanning digitizers ranging from low-cost compact units and industrial CAD/CAM digitizing systems to a large human-body scanner, with in-house 3D surface modeling software that provides a total solution in reverse engineering requiring the processing of large amounts of 3D data. Building on hardware and software technologies for scanning, merging, registration, surface fitting, reconstruction, and compression, ITRI is now exploring innovative methodologies that provide higher performance, including hardware-based correlation algorithms with advanced camera designs, animation surface model reconstruction, and optical tracking for motion capture. It is expected that the need for easy and fast high-quality 3D information will grow exponentially in the near future, at the same amazing rate as the internet and the human desire for realistic and natural images.

  3. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy

    PubMed Central

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure the three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure fitted with targets is taken at specified intervals using two ordinary consumer cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation matching with least-square image matching so that sub-pixel targeting can be achieved to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure the displacements of a cantilevered beam excited by an electrodynamic shaker, vibrating in a complex configuration with mixed bending and torsional motions at multiple frequencies simultaneously. The results of the present method showed good agreement with measurements by two laser displacement sensors. The proposed methodology requires only inexpensive consumer cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape with sub-pixel accuracy. It has abundant potential applications in various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366
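
Sub-pixel targeting is the key accuracy step. As a minimal illustration, a three-point parabolic fit to correlation scores is a common, simpler alternative to the paper's combined correlation and least-square image matching (the scores below are invented):

```python
def subpixel_peak(c_left, c_peak, c_right):
    """Three-point parabolic interpolation of a correlation peak:
    returns the sub-pixel offset (within -0.5..0.5) from the integer peak."""
    denom = c_left - 2 * c_peak + c_right
    if denom == 0:        # flat neighborhood: no refinement possible
        return 0.0
    return 0.5 * (c_left - c_right) / denom

# Correlation scores at pixels x-1, x, x+1 around the best integer match
offset = subpixel_peak(0.80, 0.95, 0.90)
print(f"sub-pixel offset = {offset:+.3f} px")  # → sub-pixel offset = +0.250 px
```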

  4. Accuracy Assessment in Structure from Motion 3d Reconstruction from Uav-Born Images: the Influence of the Data Processing Methods

    NASA Astrophysics Data System (ADS)

    Caroti, G.; Martínez-Espejo Zaragoza, I.; Piemonte, A.

    2015-08-01

    The evolution of Structure from Motion (SfM) techniques and their integration with the established procedures of classic stereoscopic photogrammetric survey have provided a very effective tool for the production of three-dimensional textured models. Such models are not only aesthetically pleasing but can also contain metric information, the quality of which depends on both the survey type and the applied processing methodologies. An open research topic in this area is checking the attainable accuracy levels. Knowledge of this accuracy is essential, especially when integrating models obtained through SfM with models derived from other sensors or methods (laser scanning, classic photogrammetry, etc.). Accuracy checks may be conducted either by comparing SfM models against a reference model or by measuring the deviation of control points identified on the models and measured with classic topographic instrumentation and methodologies. This paper presents an analysis of attainable accuracy levels according to different approaches to survey and data processing. For this purpose, a survey of the Church of San Miniato in Marcianella (Pisa, Italy) has been used. The dataset integrates laser scanning with terrestrial and UAV-borne photogrammetric surveys; in addition, a high-precision topographic network was established for the specific purpose. In particular, laser scanning has been used for the interior and the exterior of the church, with the exclusion of the roof, while UAVs have been used for the photogrammetric survey of both the roof, with horizontal strips, and the façade, with vertical strips.

  5. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy.

    PubMed

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines the correlation and the least-square image matching so that the sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional defection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366

  6. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She presents some of the tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of our time. She also raises many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How should one handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  7. Modeling moving systems with RELAP5-3D

    DOE PAGES Beta

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; Kyle, Matt R.

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occur in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating crafts during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft. However, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed. As a result, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.

  8. Modeling moving systems with RELAP5-3D

    SciTech Connect

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; Kyle, Matt R.

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occur in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating crafts during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft. However, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed. As a result, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.
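
The "careful treatment of the rotational and translational kinematics" mentioned above amounts to the standard apparent-force terms for a frame translating with acceleration \(\mathbf{A}\) and rotating with angular velocity \(\boldsymbol{\Omega}\); this is a textbook result quoted for context, not taken from the RELAP5-3D documentation:

```latex
\mathbf{a}_{\mathrm{rel}} \;=\; \mathbf{a}_{\mathrm{inertial}}
  \;-\; \mathbf{A}
  \;-\; 2\,\boldsymbol{\Omega}\times\mathbf{v}_{\mathrm{rel}}
  \;-\; \boldsymbol{\Omega}\times\left(\boldsymbol{\Omega}\times\mathbf{r}\right)
  \;-\; \dot{\boldsymbol{\Omega}}\times\mathbf{r}
```

The Coriolis, centrifugal, and Euler terms appear as transient body forces in the accelerating-frame momentum equation, which is why purely gravitational-vector adjustments do not suffice for rotating or shaking frames.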

  9. 3-D Flow Visualization with a Light-field Camera

    NASA Astrophysics Data System (ADS)

    Thurow, B.

    2012-12-01

    Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer.

    [Figure: schematic illustrating the concept of a plenoptic camera, where each pixel records both the position and angle of light rays entering the camera; this information can be used to computationally refocus an image after acquisition. Figure: instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.]
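
The final velocimetry step, cross-correlating a pair of reconstructed particle fields, can be illustrated in 1-D; this integer-shift search is a deliberately stripped-down stand-in for the conventional 3D cross-correlation algorithms the abstract mentions, with an invented intensity pattern:

```python
def best_shift(frame_a, frame_b, max_shift):
    """1-D cross-correlation: the shift of frame_b that best matches frame_a."""
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(frame_a[i] * frame_b[i + s]
                    for i in range(max_shift, len(frame_a) - max_shift))
        if score > best_score:
            best, best_score = s, score
    return best

# A particle intensity pattern displaced by 3 samples between exposures
a = [0, 0, 1, 4, 1, 0, 0, 2, 5, 2, 0, 0, 0, 0, 0]
b = [0] * 3 + a[:-3]
dx = best_shift(a, b, 4)
print(f"estimated displacement = {dx} samples")  # → estimated displacement = 3 samples
```

Dividing the recovered displacement by the inter-exposure time gives one velocity component; the real method does this per interrogation volume in 3D.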

  10. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because 3D video data contain depth as well as color information. Our goal is to explore a different region of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserve enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
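
The second scheme's depth pipeline (run-length encoding, shown here without the subsequent Huffman stage) and the PSNR quality metric can be sketched as follows; the depth row is invented for illustration:

```python
import math

def rle_encode(row):
    """Run-length encode one row of depth values: [(value, run_length), ...]."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length signals."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

depth_row = [500] * 12 + [620] * 6 + [500] * 14   # flat regions compress well
runs = rle_encode(depth_row)
ratio = len(depth_row) / (2 * len(runs))          # one (value, length) pair per run
print(runs, f"ratio = {ratio:.1f}:1")
```

Depth maps have large flat regions, which is why run-length coding pays off here even before entropy coding.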

  11. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  12. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between the lenses of the 3D camera and the intrinsic parameters are known. The proposed method reduces the camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method, and from these matched features we compute disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via Essential matrices, which are computed from the Fundamental matrix obtained with the normalized 8-point algorithm under a RANSAC scheme. We then determine the scale factor of the translation matrix by d-motion; this is required because the camera motion obtained from the Essential matrix is only defined up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D cameras, and surveillance systems that need not only depth information but also camera motion parameters in real time.
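
The step from the Fundamental to the Essential matrix is the standard relation E = K^T F K (for two lenses sharing intrinsics K); the numeric K and F below are placeholders for illustration, not calibration data from the paper, and the 8-point/RANSAC estimation of F is assumed already done:

```python
def matmul(A, B):
    """Naive matrix product of nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def essential_from_fundamental(F, K):
    """E = K^T F K for a stereo pair sharing the same intrinsic matrix K."""
    return matmul(transpose(K), matmul(F, K))

K = [[800.0, 0.0, 320.0],     # hypothetical pinhole intrinsics
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]
F = [[0.0, -1e-6, 1e-3],      # hypothetical fundamental matrix
     [1e-6, 0.0, -2e-3],
     [-1e-3, 2e-3, 0.0]]
E = essential_from_fundamental(F, K)
print([[round(v, 4) for v in row] for row in E])
```

Decomposing E (e.g. via SVD) then yields the rotation and the up-to-scale translation that the abstract resolves with its d-motion constraint.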

  13. 3D palmprint data fast acquisition and recognition

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxu; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2014-11-01

    This paper presents a fast 3D (three-dimensional) palmprint capturing system and develops an efficient 3D palmprint feature extraction and recognition method. In order to quickly acquire the accurate 3D shape and texture of a palmprint, a DLP projector triggers a CCD camera to realize synchronization. By generating and projecting green fringe pattern images onto the measured palm surface, 3D palmprint data are calculated from the fringe pattern images. A periodic feature vector can be derived from the calculated 3D palmprint data, so undistorted 3D biometrics are obtained. Using the obtained 3D palmprint data, feature matching tests have been carried out using a Gabor filter, competition rules, and the mean curvature. Experimental results show that the proposed acquisition method can quickly capture the 3D shape information of a palmprint, and initial recognition experiments show that the proposed method using 3D palmprint data is efficient.
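
A common way projected fringe patterns yield 3D data is phase-shifting profilometry; the four-step scheme below is a standard textbook method shown only as an illustration of how fringes encode shape (the abstract does not specify the authors' phase-retrieval algorithm), with an invented pixel:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 90 deg each:
    phi = atan2(I4 - I2, I1 - I3)."""
    return math.atan2(i4 - i2, i1 - i3)

# One pixel under fringes: background 100, modulation 50, true phase 1.0 rad
true_phi = 1.0
imgs = [100 + 50 * math.cos(true_phi + k * math.pi / 2) for k in range(4)]
phi = four_step_phase(*imgs)
print(f"recovered phase = {phi:.3f} rad")  # → recovered phase = 1.000 rad
```

Unwrapping the per-pixel phase and applying the projector-camera calibration then converts phase to height.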

  14. Toward an affordable and user-friendly visual motion capture system.

    PubMed

    Bonnet, V; Sylla, N; Cherubini, A; Gonzáles, A; Azevedo Coste, C; Fraisse, P; Venture, G

    2014-01-01

    The present study aims at designing and evaluating a low-cost, simple and portable system for arm joint angle estimation during grasping-like motions. The system is based on a single RGB-D camera and three customized markers. The automatically detected and tracked marker positions were used as inputs to an offline inverse kinematic process based on bio-mechanical constraints to reduce noise effects and handle marker occlusion. The method was validated on 4 subjects performing different motions. The joint angles were estimated both with the proposed low-cost system and with a stereophotogrammetric system. Comparative analysis shows good accuracy, with a high correlation coefficient (r = 0.92) and a low average RMS error (3.8 deg). PMID:25570778
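
The two reported validation statistics, the correlation coefficient and the RMS error between the low-cost and reference joint angles, can be computed as below; the elbow-angle samples are invented toy values, not data from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rmse(x, y):
    """Root-mean-square error between two equal-length samples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

# Elbow flexion (deg): reference system vs low-cost estimate (toy values)
ref = [10, 25, 40, 60, 75, 90, 75, 55, 30, 12]
est = [12, 24, 43, 57, 78, 88, 77, 52, 33, 10]
print(f"r = {pearson_r(ref, est):.3f}, RMSE = {rmse(ref, est):.1f} deg")
```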

  15. Optimization of inertial sensor-based motion capturing for magnetically distorted field applications.

    PubMed

    Schiefer, Christoph; Ellegast, Rolf P; Hermanns, Ingo; Kraus, Thomas; Ochsmann, Elke; Larue, Christian; Plamondon, André

    2014-12-01

    Inertial measurement units (IMUs) are gaining increasing importance for human motion tracking in a large variety of applications. IMUs consist of gyroscopes, accelerometers, and magnetometers, which provide angular rate, acceleration, and magnetic field information, respectively. In scenarios with a permanently distorted magnetic field, orientation estimation algorithms revert to using only angular rate and acceleration information. The result is an increasing drift error in the heading information. This article describes a method to compensate the orientation drift of IMUs using angular rate and acceleration readings in a quaternion-based algorithm. Zero points (ZPs) were introduced, which provide additional heading and gyroscope bias information, and were combined with bidirectional orientation computation. The frequency of ZPs necessary to achieve an acceptable error level is derived in this article. In a laboratory environment the method and the effect of varying the interval length between ZPs were evaluated. Eight subjects were equipped with seven IMUs at the trunk, head and upper extremities. They performed a predefined course of box handling for 40 min at different motion speeds and ranges of motion. The orientation estimation was compared to an optical motion tracking system. The resulting mean root mean squared error (RMSE) of all measurements ranged from 1.7 deg to 7.6 deg (roll and pitch) and from 3.5 deg to 15.0 deg (heading) depending on the measured segment, at a mean interval length of 1.1 min between two ZPs without magnetometer usage. The 95% limits of agreement (LOA) ranged in the best case from -2.9 deg to 3.6 deg for the hip roll angle and in the worst case from -19.3 deg to 18.9 deg for the forearm heading angle. This study demonstrates that combining ZPs and bidirectional computation can reduce the orientation error of IMUs in environments with magnetic field distortion. PMID:25321344
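
Why gyroscope-only heading drifts, which is what motivates the zero-point corrections, can be seen from a minimal strap-down quaternion integration; the 0.01 rad/s bias and 100 Hz rate are assumed values for illustration, not parameters from the study:

```python
import math

def quat_mult(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(q, omega, dt):
    """One strap-down step: rotate q by angular rate omega (rad/s, body frame) over dt."""
    wx, wy, wz = omega
    angle = math.sqrt(wx*wx + wy*wy + wz*wz) * dt
    if angle == 0:
        return q
    ax, ay, az = wx*dt/angle, wy*dt/angle, wz*dt/angle
    dq = (math.cos(angle/2), ax*math.sin(angle/2),
          ay*math.sin(angle/2), az*math.sin(angle/2))
    return quat_mult(q, dq)

# An uncorrected 0.01 rad/s gyro bias about z, integrated for 10 s at 100 Hz
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(1000):
    q = integrate_gyro(q, (0.0, 0.0, 0.01), 0.01)
heading = math.degrees(2 * math.atan2(q[3], q[0]))
print(f"accumulated heading drift = {heading:.2f} deg")
```

With no magnetometer, nothing observes this error; a ZP supplies the absolute heading and bias estimate that resets it.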

  16. 3D laser tracking of a particle in 3DFM

    NASA Astrophysics Data System (ADS)

    Desai, Kalpit; Welch, Gregory; Bishop, Gary; Taylor, Russell; Superfine, Richard

    2003-11-01

    The principal goal of 3D tracking in our home-built 3D Magnetic Force Microscope is to monitor the movement of the particle with respect to the laser beam waist and to keep the particle at the center of the beam. The sensory element is a Quadrant Photo Diode (QPD), which captures the scattering of light caused by particle motion with a bandwidth up to 40 kHz. An XYZ translation stage is the driver element, which moves the particle back to the center of the laser with an accuracy of a couple of nanometers and a bandwidth up to 300 Hz. Since our particles vary in size, composition, and shape, instead of using an a priori model we use standard system identification techniques to obtain an optimal approximation of the relationship between particle motion and QPD response. We have developed position feedback control software that is capable of three-dimensional tracking of beads attached to cilia on living cells beating at up to 15 Hz. We have also modeled the control system of the instrument to simulate the performance of 3D particle tracking under different experimental conditions. Given the nanometer scale of operation, noise poses a great challenge for the tracking system. We propose to use stochastic control theory approaches to increase the robustness of tracking.
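    The system identification step described above, relating particle displacement to QPD response, can be sketched as an ordinary least-squares calibration. The gain, offset, and readings below are hypothetical illustration values, not measurements from the instrument.

```python
def fit_linear(x, y):
    """Ordinary least-squares fit y ~ a*x + b: a minimal stand-in for the
    system identification relating particle motion to QPD response."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Synthetic calibration: QPD voltage vs nanometre displacement,
# assumed gain 0.002 V/nm and offset 0.05 V
disp = [i * 10.0 for i in range(21)]        # 0..200 nm
volts = [0.002 * d + 0.05 for d in disp]
gain, offset = fit_linear(disp, volts)

# Invert the fitted model to recover position from a new QPD reading
reading = 0.25                              # V
position = (reading - offset) / gain        # nm
```

    In the real instrument this inversion runs inside the feedback loop, converting each QPD sample to a position error that drives the XYZ stage.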

  17. EFFECTS OF TURBULENCE, ECCENTRICITY DAMPING, AND MIGRATION RATE ON THE CAPTURE OF PLANETS INTO MEAN MOTION RESONANCE

    SciTech Connect

    Ketchum, Jacob A.; Adams, Fred C.; Bloch, Anthony M.

    2011-01-01

    Pairs of migrating extrasolar planets often lock into mean motion resonance as they drift inward. This paper studies the convergent migration of giant planets (driven by a circumstellar disk) and determines the probability that they are captured into mean motion resonance. The probability that such planets enter resonance depends on the type of resonance, the migration rate, the eccentricity damping rate, and the amplitude of the turbulent fluctuations. This problem is studied both through direct integrations of the full three-body problem and via semi-analytic model equations. In general, the probability of resonance decreases with increasing migration rate and with increasing levels of turbulence, but increases with eccentricity damping. Previous work has shown that the distributions of orbital elements (eccentricity and semimajor axis) for observed extrasolar planets can be reproduced by migration models with multiple planets. However, these results depend on resonance locking, and this study shows that entry into, and maintenance of, mean motion resonance depends sensitively on the migration rate, eccentricity damping, and turbulence.

  18. Vacant Lander in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D image captured by the Mars Exploration Rover Opportunity's rear hazard-identification camera shows the now-empty lander that carried the rover 283 million miles to Meridiani Planum, Mars. Engineers received confirmation that Opportunity's six wheels successfully rolled off the lander and onto martian soil at 3:01 a.m. PST, January 31, 2004, on the seventh martian day, or sol, of the mission. The rover is approximately 1 meter (3 feet) in front of the lander, facing north.

  19. Measuring Actin Flow in 3D Cell Protrusions

    PubMed Central

    Chiu, Chi-Li; Digman, Michelle A.; Gratton, Enrico

    2013-01-01

    Actin dynamics is important in determining cell shape, tension, and migration. Methods such as fluorescent speckle microscopy and spatial temporal image correlation spectroscopy have been used to capture high-resolution actin turnover dynamics within cells in two dimensions. However, these methods are not directly applicable in 3D due to lower resolution and poor contrast. Here, we propose to capture actin flow in 3D with high spatial-temporal resolution by combining nanoscale precise imaging by rapid beam oscillation and fluctuation spectroscopy techniques. To measure the actin flow along cell protrusions in cells expressing actin-eGFP cultured in a type I collagen matrix, the laser was orbited around the protrusion and its trajectory was modulated in a clover-shaped pattern perpendicular to the protrusion. Orbits were also alternated at two positions closely spaced along the protrusion axis. The pair cross-correlation function was applied to the fluorescence fluctuations from these two positions to capture the flow of actin. Measurements done on nonmoving cellular protrusion tips showed no pair-correlation at the two orbital positions, indicating a lack of flow of F-actin bundles. However, in some protrusions, the pair-correlation approach revealed directional flow of F-actin bundles near the protrusion surface with flow rates in the range of ∼1 μm/min, comparable to results in two dimensions using fluorescent speckle microscopy. Furthermore, we found that the actin flow rate is related to the distance to the protrusion tip. We also observed collagen deformation by concomitantly detecting collagen fibers with reflectance detection during these actin motions. The implementation of the nanoscale precise imaging by rapid beam oscillation method with a cloverleaf-shaped trajectory in conjunction with the pair cross-correlation function method provides a quantitative way of capturing dynamic flows and organization of proteins during cell migration in 3D in conditions of
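    The pair cross-correlation idea above, finding the delay with which material detected at one orbit position reappears at the other, can be sketched with a simple mean-subtracted cross-correlation on toy intensity traces (synthetic data, not the paper's measurements).

```python
def cross_correlation(a, b, max_lag):
    """Mean-subtracted cross-correlation of two intensity traces for lags
    0..max_lag; the lag of the peak estimates the transit time between the
    two orbit positions."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    corr = []
    for lag in range(max_lag + 1):
        num = sum((a[i] - ma) * (b[i + lag] - mb) for i in range(n - lag))
        corr.append(num / (n - lag))
    return corr

# Toy traces: a fluorescent feature passes orbit 1, then orbit 2 five samples later
pulse = [0, 1, 3, 6, 3, 1, 0]
trace1 = pulse + [0] * 40
trace2 = [0] * 5 + pulse + [0] * 35
corr = cross_correlation(trace1, trace2, max_lag=15)
delay = corr.index(max(corr))   # with sampling interval dt and orbit spacing dx,
                                # flow speed ~ dx / (delay * dt)
```

    Dividing the known spacing between the two orbit positions by the peak lag (times the sampling interval) yields the flow speed estimate.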

  20. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  1. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    PubMed Central

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  2. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.
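    TRACE 3-D transports the beam's second-moment (sigma) matrix through each element linearly, as sigma' = R sigma R^T, where R is the element's transfer matrix. A minimal single-plane sketch for a field-free drift follows; the beam parameters are hypothetical, and the code deliberately omits the space-charge forces the program actually includes.

```python
import math

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def drift(L):
    """Transfer matrix of a field-free drift of length L (one transverse plane)."""
    return [[1.0, L], [0.0, 1.0]]

# Hypothetical beam at a waist: 1 mm rms size, 1 mrad rms divergence, no correlation
sigma = [[1e-6, 0.0], [0.0, 1e-6]]          # units: m^2, m*rad, rad^2
R = drift(2.0)                              # 2 m drift
sigma2 = matmul(matmul(R, sigma), transpose(R))
rms_size = math.sqrt(sigma2[0][0])          # beam envelope after the drift
```

    The envelope plotted by the program is exactly this square root of the (1,1) sigma-matrix element, evaluated element by element along the transport line.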

  3. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32  ×  32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  4. 3-D SAR image formation from sparse aperture data using 3-D target grids

    NASA Astrophysics Data System (ADS)

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.
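    The matched-filter role of the 3-D target grids described above can be sketched as a normalised inner product between a measurement vector and stored target templates; the template values and target names below are purely illustrative, not data from the challenge problem.

```python
import math

def matched_filter_score(measurement, template):
    """Normalised inner product (cosine similarity) between measured
    backscatter and a target-grid template acting as a matched filter."""
    num = sum(m * t for m, t in zip(measurement, template))
    nm = math.sqrt(sum(m * m for m in measurement))
    nt = math.sqrt(sum(t * t for t in template))
    return num / (nm * nt)

# Hypothetical scattering templates for two targets and one noisy measurement
templates = {"backhoe": [1.0, 0.2, 0.7, 0.1], "truck": [0.1, 0.9, 0.2, 0.8]}
measured = [0.9, 0.25, 0.65, 0.15]
best = max(templates, key=lambda k: matched_filter_score(measured, templates[k]))
```

    Ranking targets by this score is the essence of using precomputed target grids as matched filters; the real formation step additionally accounts for the sparse, wide-angle aperture geometry.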

  5. Rapid 360 degree imaging and stitching of 3D objects using multiple precision 3D cameras

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Yin, Stuart; Zhang, Jianzhong; Li, Jiangan; Wu, Frank

    2008-02-01

    In this paper, we present the system architecture of a 360 degree view 3D imaging system. The system consists of multiple 3D sensors synchronized to take 3D images around the object. Each 3D camera employs a single high-resolution digital camera and a color-coded light projector. The cameras are synchronized to rapidly capture the 3D and color information of a static object or a live person. The color-encoded structured lighting ensures precise reconstruction of the depth of the object. A 3D imaging system architecture is presented. The architecture employs the displacement of the camera and the projector to triangulate the depth information. The 3D camera system has achieved a depth resolution down to 0.1 mm on a human-head-sized object and 360 degree imaging capability.

  6. Intra-event and Inter-event Ground Motion Variability from 3-D Broadband (0-8 Hz) Ensemble Simulations of Mw 6.7 Thrust Events Including Rough Fault Descriptions, Small-Scale Heterogeneities and Q(f)

    NASA Astrophysics Data System (ADS)

    Withers, K.; Olsen, K. B.; Shi, Z.; Day, S. M.

    2015-12-01

    We model blind thrust scenario earthquakes matching the fault geometry of the 1994 Mw 6.7 Northridge earthquake up to 8 Hz by first performing dynamic rupture propagation using a support operator method (SORD). We extend the ground motion by converting the slip-rate data to a kinematic source for the finite difference wave propagation code AWP-ODC, which incorporates an improved frequency-dependent attenuation approach. This technique has high accuracy for Q values down to 15. The desired Q function is fit to the 'effective' Q over the coarse-grained cell for low Q, and a simple interpolation formula is used to interpolate the weights for arbitrary Q. Here, we use a power-law model for Q above a reference frequency, of the form Q_0 f^n with exponents ranging from 0.0-0.9. We find envelope and phase misfits only slightly larger than those of the elastic case when compared with the frequency-wavenumber solution, for both a homogeneous model and a layered model with a large velocity contrast. We also include small-scale medium complexity in both a 1D layered model and a 3D medium extracted from SCEC CVM-S4, including a surface geotechnical layer (GTL). We model additional realizations of the scenario by varying the hypocenter location, and find that similar moment magnitudes are generated. We observe that while the ground motion pattern changes, the median ground motion is not affected significantly, when binned as a function of distance, and is within 1 interevent standard deviation of the median GMPEs. We find that intra-event variability for the layered model simulations is similar to observed values of single-station standard deviation. We show that small-scale heterogeneity can significantly affect the intra-event variability at frequencies greater than ~1 Hz, becoming increasingly important at larger distances from the source. We perform a parameter space study by varying statistical parameters and find that the variability is fairly independent of the correlation length
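    The attenuation model described above, constant Q below a reference frequency and a power law above it, can be written directly (Q_0, n, and the reference frequency below are illustrative values within the stated 0.0-0.9 exponent range, not the study's calibration):

```python
def q_of_f(f, q0, n, f0=1.0):
    """Frequency-dependent quality factor: constant Q0 at or below the
    reference frequency f0, power law Q0 * (f / f0)**n above it."""
    return q0 if f <= f0 else q0 * (f / f0) ** n
```

    For example, with Q0 = 100 and n = 0.5, Q doubles between the reference frequency and four times that frequency, so high-frequency energy is attenuated less strongly than a constant-Q model would predict.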

  7. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement to develop the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical, or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and for 3D dose measurement in general.

  8. CONDITION FOR CAPTURE INTO FIRST-ORDER MEAN MOTION RESONANCES AND APPLICATION TO CONSTRAINTS ON THE ORIGIN OF RESONANT SYSTEMS

    SciTech Connect

    Ogihara, Masahiro; Kobayashi, Hiroshi E-mail: hkobayas@nagoya-u.jp

    2013-09-20

    We investigate the condition for capture into first-order mean motion resonances using numerical simulations over a wide range of parameters. In particular, we focus on deriving the critical migration timescale for capture into the 2:1 resonance; additional numerical experiments for closely spaced resonances (e.g., 3:2) are also performed. We find that the critical migration timescale is determined by the planet-to-stellar mass ratio, and its dependence exhibits power-law behavior with index -4/3. This dependence is also supported by simple analytic arguments. We also find that the critical migration timescale for systems with equal-mass bodies is shorter than that in the restricted problem; for instance, for the 2:1 resonance between two equal-mass bodies, the critical timescale decreases by a factor of 10. In addition, using the obtained formula, the origin of observed systems that include first-order commensurabilities is constrained. Assuming that pairs of planets originally form well separated from each other and then undergo convergent migration and are captured in resonances, it is possible that a number of exoplanets experienced rapid orbital migration. For systems in closely spaced resonances, the differential migration timescale between the resonant pair can be constrained well; it is further suggested that several exoplanets underwent migration that can equal or even exceed the type I migration rate predicted by linear theory. This implies that some of them may have formed in situ. Future observations and the use of our model will allow us to statistically determine the typical migration speed in a protoplanetary disk.
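    The two scaling results quoted above can be combined into a small helper. This is only a sketch of the stated power-law dependence: the reference timescale and reference mass ratio are hypothetical calibration numbers, not the paper's fitted constants.

```python
def critical_migration_timescale(mass_ratio, tau_ref, ratio_ref=3e-6,
                                 equal_mass=False):
    """Scale a reference critical migration timescale for 2:1 capture using
    the reported -4/3 power law in planet-to-star mass ratio. tau_ref is the
    critical timescale at mass ratio ratio_ref (both hypothetical calibration
    values); equal-mass pairs have a ~10x shorter critical timescale."""
    tau = tau_ref * (mass_ratio / ratio_ref) ** (-4.0 / 3.0)
    return tau / 10.0 if equal_mass else tau
```

    Doubling the mass ratio therefore shortens the critical timescale by a factor 2^(4/3) ~ 2.52: more massive planets tolerate faster migration and still capture into resonance.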

  9. Motion capture and manipulation of a single synthetic molecular rotor by optical microscopy.

    PubMed

    Ikeda, Tomohiro; Tsukahara, Takahiro; Iino, Ryota; Takeuchi, Masayuki; Noji, Hiroyuki

    2014-09-15

    Single-molecule imaging and manipulation with optical microscopy have become essential methods for studying biomolecular machines; however, only a few efforts have been directed towards synthetic molecular machines. Here, single-molecule optical microscopy was applied to a synthetic molecular rotor, a double-decker porphyrin (DD). By attaching a magnetic bead (ca. 200 nm) to the DD, its rotational dynamics were captured with a time resolution of 0.5 ms. The DD showed rotational diffusion with 90° steps, consistent with its four-fold structural symmetry. Kinetic analysis revealed first-order kinetics of the 90° step with a rate constant of 2.8 s(-1). The barrier height of the rotational potential was estimated to be greater than 7.4 kJ mol(-1) at 298 K. The DD was also forcibly rotated with magnetic tweezers, and again, four stable pausing angles separated by 90° were observed. These results demonstrate the potency of single-molecule optical microscopy for elucidating the elementary properties of synthetic molecular machines. PMID:24989127
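    The first-order kinetics of the 90° step can be illustrated by recovering a rate constant from dwell times between steps: for an exponential dwell-time distribution, the maximum-likelihood rate constant is simply the reciprocal of the mean dwell. The dwell times below are synthetic (seeded for reproducibility), not the paper's data.

```python
import random

def rate_from_dwells(dwells):
    """First-order (exponential) kinetics: the maximum-likelihood rate
    constant is the reciprocal of the mean dwell time between steps."""
    return len(dwells) / sum(dwells)

random.seed(42)                                   # reproducible synthetic data
k_true = 2.8                                      # s^-1, the reported step rate
dwells = [random.expovariate(k_true) for _ in range(20000)]
k_est = rate_from_dwells(dwells)                  # lands close to k_true
```

    In practice the same estimate is obtained by fitting an exponential to the dwell-time histogram, which additionally lets one check that the distribution really is single-exponential (i.e., a single rate-limiting barrier).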

  10. 3D dynamic roadmapping for abdominal catheterizations.

    PubMed

    Bender, Frederik; Groher, Martin; Khamene, Ali; Wein, Wolfgang; Heibel, Tim Hauke; Navab, Nassir

    2008-01-01

    Despite rapid advances in interventional imaging, the navigation of a guide wire through abdominal vasculature remains, not only for novice radiologists, a difficult task. Since this navigation is mostly based on 2D fluoroscopic image sequences from one view, the process is slowed down significantly by missing depth information and patient motion. We propose a novel approach for 3D dynamic roadmapping in deformable regions by predicting the location of the guide wire tip in a 3D vessel model from the tip's 2D location, respiratory motion analysis, and view geometry. In a first step, the method compensates for the apparent respiratory motion in 2D space before backprojecting the 2D guide wire tip into three-dimensional space using a given projection matrix. To counteract the errors associated with the projection parameters and the motion compensation, as well as the ambiguity caused by vessel deformation, we establish a statistical framework, which computes a reliable estimate of the guide wire tip location within the 3D vessel model. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence. Tests on a realistic breathing phantom and on synthetic data with a known ground truth clearly reveal the superiority of our approach compared to naive methods for 3D roadmapping. The concepts and information presented in this paper are based on research and are not commercially available. PMID:18982662
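    The geometric core of the 2D-to-3D transfer, backprojecting the detected tip into a viewing ray and snapping it to the vessel model, can be sketched for an idealized pinhole camera. The intrinsics and centerline points are hypothetical, and the nearest-point rule below is a deterministic stand-in for the paper's statistical framework.

```python
import math

def ray_direction(u, v, fx, fy, cx, cy):
    """Unit viewing ray of pixel (u, v) for a pinhole camera at the origin
    (fx, fy, cx, cy are hypothetical calibration values)."""
    d = ((u - cx) / fx, (v - cy) / fy, 1.0)
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

def closest_centerline_point(ray, centerline):
    """Choose the vessel-centerline point nearest to the viewing ray; a
    deterministic stand-in for the paper's statistical estimate."""
    def dist_to_ray(p):
        t = sum(p[i] * ray[i] for i in range(3))   # scalar projection onto ray
        foot = tuple(t * ray[i] for i in range(3))
        return math.dist(p, foot)
    return min(centerline, key=dist_to_ray)

# Toy vessel centerline (mm) and a tip detected at the principal point
centerline = [(0.0, 0.0, 50.0), (5.0, 0.0, 60.0), (10.0, 2.0, 70.0)]
ray = ray_direction(320, 240, fx=1000.0, fy=1000.0, cx=320.0, cy=240.0)
tip3d = closest_centerline_point(ray, centerline)
```

    The full method additionally shifts the 2D tip by the estimated respiratory motion before backprojection, and weighs candidate centerline points statistically rather than taking the single nearest one.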

  11. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  12. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  13. On Fundamental Evaluation Using Uav Imagery and 3d Modeling Software

    NASA Astrophysics Data System (ADS)

    Nakano, K.; Suzuki, H.; Tamino, T.; Chikatsu, H.

    2016-06-01

    Unmanned aerial vehicles (UAVs), which have been widely used in recent years, can acquire high-resolution images with resolutions in millimeters; such images cannot be acquired with manned aircraft. Moreover, it has become possible to obtain a realistic 3D surface reconstruction using high-overlap images and 3D modeling software such as ContextCapture, Pix4Dmapper, and PhotoScan, based on computer vision techniques such as structure from motion and multi-view stereo. 3D modeling software has many applications. However, most workflows do not seem to apply appropriate accuracy control in accordance with the knowledge of photogrammetry and/or computer vision. Therefore, we performed flight tests in a test field using a UAV equipped with a gimbal stabilizer and a consumer-grade digital camera. Our UAV is a hexacopter that can fly along waypoints for autonomous flight and can record flight logs. We acquired images from different altitudes such as 10 m, 20 m, and 30 m. We obtained 3D reconstruction results of orthoimages, point clouds, and textured TIN models for accuracy evaluation in several cases with different image scale conditions using 3D modeling software. Moreover, accuracy was evaluated for different units of input images: course unit and flight unit. This paper describes the fundamental accuracy evaluation for 3D modeling using UAV imagery and 3D modeling software from the viewpoint of close-range photogrammetry.
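    The effect of flying height on image scale, central to the altitude comparison above, is captured by the ground sample distance of a nadir image: GSD = pixel pitch × flying height / focal length. The sensor values below are hypothetical consumer-camera numbers, not the study's equipment.

```python
def ground_sample_distance(pixel_pitch_m, focal_m, altitude_m):
    """Ground sample distance of a nadir image: GSD = pixel pitch * H / f."""
    return pixel_pitch_m * altitude_m / focal_m

# Hypothetical camera: 4.4 um pixels, 16 mm lens
gsd10 = ground_sample_distance(4.4e-6, 0.016, 10.0)   # flight at 10 m
gsd30 = ground_sample_distance(4.4e-6, 0.016, 30.0)   # flight at 30 m
```

    Tripling the flying height from 10 m to 30 m triples the GSD, which is why the lower flights in the test resolve finer ground detail at the cost of more flight lines per unit area.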

  14. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric, and remote sensing applications. This experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning, with the laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary source, using 3ds Max software as the basic tool for 3D scene reconstruction. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed 3D scene is realistic, and its accuracy meets the needs of 3D scene construction.

  15. Development of a 3D particle treecode for plasma simulations

    NASA Astrophysics Data System (ADS)

    Ong, Benjamin; Christlieb, Andrew; Krasny, Robert

    2008-11-01

    In this work we present a fully 3-D Boundary Integral Treecode (BIT). We apply the method to several classic problems such as sheath formation and 3D simulations of a Penning trap. In addition, we investigate the ability of the solver to naturally capture Coulomb scattering. A key point in the investigation is to understand the effect of different types of regularizations, and how to appropriately incorporate the regularization in the BIT framework. This work builds on substantial efforts in 1- and 2-D. [1] R. Krasny and K. Lindsay, A particle method and adaptive treecode for vortex sheet motion in 3-D flow, JCP, Vol. 172, No. 2, 879-907 [2] K. Matyash, R. Schneider, R. Sydora, and F. Taccogna, Application of a Grid-Free Kinetic Model to the Collisionless Sheath, Contrib. Plasma Phys, Vol. 48, No. 1-3, 116-120 (2008) [3] K. Cartwright and A. Christlieb, Boundary Integral Corrected Particle in Cell, SIAM Journal on Sci. Comput., submitted [4] A. Christlieb, R. Krasny, B. Ong and J. Qiu, A Step Towards Addressing Temporal Multi-scale Problems in Plasma Physics, in prep.

  16. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  17. 3-D MAPPING TECHNOLOGIES FOR HIGH LEVEL WASTE TANKS

    SciTech Connect

    Marzolf, A.; Folsom, M.

    2010-08-31

    This research investigated four techniques that could be applicable for mapping of solids remaining in radioactive waste tanks at the Savannah River Site: stereo vision, LIDAR, flash LIDAR, and Structure from Motion (SfM). Stereo vision is the least appropriate technique for the solids mapping application. Although the equipment cost is low and repackaging would be fairly simple, the algorithms to create a 3D image from stereo vision would require significant further development and may not even be applicable since stereo vision works by finding disparity in feature point locations from the images taken by the cameras. When minimal variation in visual texture exists for an area of interest, it becomes difficult for the software to detect correspondences for that object. SfM appears to be appropriate for solids mapping in waste tanks. However, equipment development would be required for positioning and movement of the camera in the tank space to enable capturing a sequence of images of the scene. Since SfM requires the identification of distinctive features and associates those features to their corresponding instantiations in the other image frames, mockup testing would be required to determine the applicability of SfM technology for mapping of waste in tanks. There may be too few features to track between image frame sequences to employ the SfM technology since uniform appearance may exist when viewing the remaining solids in the interior of the waste tanks. Although scanning LIDAR appears to be an adequate solution, the expense of the equipment ($80,000-$120,000) and the need for further development to allow tank deployment may prohibit utilizing this technology. The development would include repackaging of equipment to permit deployment through the 4-inch access ports and to keep the equipment relatively uncontaminated to allow use in additional tanks. 3D flash LIDAR has a number of advantages over stereo vision, scanning LIDAR, and SfM, including full frame

  18. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics, and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
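    For the stereo vision method mentioned above, depth recovery from a rectified camera pair reduces to the classic triangulation relation Z = f B / d, with f the focal length in pixels, B the baseline, and d the disparity. The numbers below are illustrative, not from the presentation.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified stereo triangulation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

# Hypothetical rig: 1200 px focal length, 0.3 m baseline, 36 px disparity
depth = depth_from_disparity(1200.0, 0.3, 36.0)    # metres
```

    Because depth is inversely proportional to disparity, depth resolution degrades quadratically with distance: a one-pixel disparity error matters far more for distant surfaces than for near ones.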

  19. A Quasi-Static Method for Determining the Characteristics of a Motion Capture Camera System in a "Split-Volume" Configuration

    NASA Technical Reports Server (NTRS)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2001-01-01

    To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability, and resolution. Many researchers have examined these characteristics of motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one in y and z). Richards compared the characteristics of different commercially available systems set up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability, and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasi-static methodology.
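    The accuracy and repeatability characteristics named above are commonly computed from repeated measurements of a known reference, for example a fixed inter-marker distance. A minimal sketch with toy numbers (not this study's protocol or data):

```python
import math

def characterize(measured, true_value):
    """Accuracy = deviation of the mean from the known reference value;
    repeatability = sample standard deviation of repeated measurements."""
    n = len(measured)
    mean = sum(measured) / n
    accuracy = abs(mean - true_value)
    repeatability = math.sqrt(sum((m - mean) ** 2 for m in measured) / (n - 1))
    return accuracy, repeatability

# Repeated measurements (mm) of a known 500.0 mm inter-marker distance
trials = [500.2, 499.9, 500.1, 500.0, 500.3, 499.8]
acc, rep = characterize(trials, 500.0)
```

    In a split-volume configuration the same calculation would be repeated per volume and across the boundary between volumes, where calibration errors of the two camera groups combine.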

  20. Image-based indoor localization system based on 3D SfM model

    NASA Astrophysics Data System (ADS)

    Lu, Guoyu; Kambhamettu, Chandra

    2013-12-01

    Indoor localization is an important research topic for both the robotics and signal processing communities. In recent years, image-based localization has also been employed in indoor environments owing to the easy availability of the necessary equipment. After an image is captured and matched against an image database, the best-matching image is returned along with navigation information. Because it additionally allows camera pose estimation, an image-based localization system built on a Structure-from-Motion (SfM) reconstruction model can achieve higher accuracy than methods that search through a 2D image database. However, this emerging technique has so far been applied only to outdoor environments. In this paper, we introduce the 3D SfM model based image-based localization system into the indoor localization task. We capture images of the indoor environment and reconstruct the 3D model. For the localization task, we simply match images captured by a mobile device against the 3D reconstructed model to localize the image. In this process, we use visual words and approximate nearest neighbor methods to accelerate the process of finding the query feature's correspondences. Within each visual word, we conduct a linear search to detect correspondences. From the experiments, we find that the image-based localization method based on the 3D SfM model gives good localization results in terms of both accuracy and speed.
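
The visual-word indexing with a linear search inside each word, as described above, can be sketched roughly as follows (all names and the synthetic descriptors are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def assign_word(desc, centers):
    """Quantize a descriptor to its nearest visual-word center."""
    return int(np.argmin(np.linalg.norm(centers - desc, axis=1)))

def build_index(db_descs, centers):
    """Bucket database descriptors by visual word (an inverted file)."""
    index = {}
    for i, d in enumerate(db_descs):
        index.setdefault(assign_word(d, centers), []).append(i)
    return index

def query(desc, db_descs, centers, index):
    """Find the query feature's correspondence: quantize to a word,
    then linear-search only the descriptors sharing that word."""
    cand = index.get(assign_word(desc, centers), [])
    if not cand:
        return None
    dists = [np.linalg.norm(db_descs[i] - desc) for i in cand]
    return cand[int(np.argmin(dists))]

rng = np.random.default_rng(1)
centers = rng.normal(size=(8, 16))  # assumed k-means word centers
db = centers[rng.integers(0, 8, 200)] + 0.05 * rng.normal(size=(200, 16))
idx = build_index(db, centers)
match = query(db[42] + 0.01, db, centers, idx)
print(match)
```

Restricting the search to one bucket is what makes the lookup fast; the trade-off is that a query quantized to the wrong word misses its true correspondence.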

  1. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated
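
The voxel opacity filter described above can be sketched as front-to-back alpha compositing along a single ray, with sub-threshold voxels rejected (a hypothetical minimal example, not the authors' visualization code):

```python
import numpy as np

def composite_ray(values, opacity, threshold=0.0):
    """Front-to-back compositing of one ray through a voxel column.
    Voxels whose datum falls below `threshold` are rejected (made
    fully transparent), letting the viewer peer into the volume."""
    color, alpha = 0.0, 0.0
    for v in values:
        a = opacity(v) if v >= threshold else 0.0
        color += (1.0 - alpha) * a * v   # datum doubles as grey level
        alpha += (1.0 - alpha) * a
        if alpha >= 0.999:               # early ray termination
            break
    return color, alpha

# Weak background reflectivity around a bright melt-lens pair.
ray = np.array([0.1, 0.1, 0.9, 0.8, 0.1])
col, a = composite_ray(ray, opacity=lambda v: 0.6 * v, threshold=0.5)
print(round(col, 3), round(a, 3))  # -> 0.663 0.761
```

Because the two 0.1 voxels are rejected by the threshold, only the bright reflectors contribute, which is the "peer into the data volume" effect the abstract describes.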

  2. Multifunctional Setup for Studying Human Motor Control Using Transcranial Magnetic Stimulation, Electromyography, Motion Capture, and Virtual Reality.

    PubMed

    Talkington, William J; Pollard, Bradley S; Olesh, Erienne V; Gritsenko, Valeriya

    2015-01-01

    The study of neuromuscular control of movement in humans is accomplished with numerous technologies. Non-invasive methods for investigating neuromuscular function include transcranial magnetic stimulation, electromyography, and three-dimensional motion capture. The advent of readily available and cost-effective virtual reality solutions has expanded the capabilities of researchers in recreating "real-world" environments and movements in a laboratory setting. Naturalistic movement analysis will not only garner a greater understanding of motor control in healthy individuals, but also permit the design of experiments and rehabilitation strategies that target specific motor impairments (e.g. stroke). The combined use of these tools will lead to an increasingly deeper understanding of the neural mechanisms of motor control. A key requirement when combining these data acquisition systems is fine temporal correspondence between the various data streams. This protocol describes a multifunctional system's overall connectivity, intersystem signaling, and the temporal synchronization of recorded data. Synchronization of the component systems is primarily accomplished through the use of a customizable circuit, readily made with off-the-shelf components and minimal electronics assembly skills. PMID:26384034
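
The temporal-synchronization idea above (a shared trigger recorded by every system) reduces to a clock-offset correction once each system has logged the same pulse; a minimal sketch, with hypothetical values and names:

```python
import numpy as np

def align_to_reference(event_times, pulse_time, ref_pulse_time):
    """Shift one system's event times so its recorded sync pulse
    coincides with the reference system's recording of that pulse."""
    return np.asarray(event_times) + (ref_pulse_time - pulse_time)

# Hypothetical offsets: the motion-capture clock started 2.35 s early.
emg_pulse_t = 0.120    # shared TTL edge on the EMG time base (s)
mocap_pulse_t = 2.470  # the same edge on the motion-capture base (s)
mocap_events = [2.470, 3.470, 4.470]
aligned = align_to_reference(mocap_events, mocap_pulse_t, emg_pulse_t)
print(aligned)         # events expressed on the EMG time base
```

A single shared edge corrects only the offset; if the systems' clocks also drift, two or more pulses are needed to estimate a scale factor as well.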

  3. 3D Holographic Observatory for Long-term Monitoring of Complex Behaviors in Drosophila.

    PubMed

    Kumar, S Santosh; Sun, Yaning; Zou, Sige; Hong, Jiarong

    2016-01-01

    Drosophila is an excellent model organism towards understanding the cognitive function, aging and neurodegeneration in humans. The effects of aging and other long-term dynamics on the behavior serve as important biomarkers in identifying such changes to the brain. In this regard, we are presenting a new imaging technique for lifetime monitoring of Drosophila in 3D at spatial and temporal resolutions capable of resolving the motion of limbs and wings using holographic principles. The developed system is capable of monitoring and extracting various behavioral parameters, such as ethograms and spatial distributions, from a group of flies simultaneously. This technique can image complicated leg and wing motions of flies at a resolution, which allows capturing specific landing responses from the same data set. Overall, this system provides a unique opportunity for high throughput screenings of behavioral changes in 3D over a long term in Drosophila. PMID:27605243

  4. 3D Holographic Observatory for Long-term Monitoring of Complex Behaviors in Drosophila

    PubMed Central

    Kumar, S. Santosh; Sun, Yaning; Zou, Sige; Hong, Jiarong

    2016-01-01

    Drosophila is an excellent model organism towards understanding the cognitive function, aging and neurodegeneration in humans. The effects of aging and other long-term dynamics on the behavior serve as important biomarkers in identifying such changes to the brain. In this regard, we are presenting a new imaging technique for lifetime monitoring of Drosophila in 3D at spatial and temporal resolutions capable of resolving the motion of limbs and wings using holographic principles. The developed system is capable of monitoring and extracting various behavioral parameters, such as ethograms and spatial distributions, from a group of flies simultaneously. This technique can image complicated leg and wing motions of flies at a resolution, which allows capturing specific landing responses from the same data set. Overall, this system provides a unique opportunity for high throughput screenings of behavioral changes in 3D over a long term in Drosophila. PMID:27605243

  5. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  6. Whole-body 3D scanner and scan data report

    NASA Astrophysics Data System (ADS)

    Addleman, Stephen R.

    1997-03-01

    With the first whole-body 3D scanner now available the next adventure confronting the user is what to do with all of the data. While the system was built for anthropologists, it has created interest among users from a wide variety of fields. Users with applications in the fields of anthropology, costume design, garment design, entertainment, VR and gaming have a need for the data in formats unique to their fields. Data from the scanner is being converted to solid models for art and design and NURBS for computer graphics applications. Motion capture has made scan data move and dance. The scanner has created a need for advanced application software just as other scanners have in the past.

  7. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  8. Motion.

    ERIC Educational Resources Information Center

    Gerhart, James B.; Nussbaum, Rudi H.

    This monograph was written for the Conference on the New Instructional Materials in Physics held at the University of Washington in summer, 1965. It is intended for use in an introductory course in college physics. It consists of an extensive qualitative discussion of motion followed by a detailed development of the quantitative methods needed to…

  9. Motion.

    ERIC Educational Resources Information Center

    Brand, Judith, Ed.

    2002-01-01

    This issue of Exploratorium Magazine focuses on the topic of motion. Contents include: (1) "First Word" (Zach Tobias); (2) "Cosmic Collisions" (Robert Irion); (3) "The Mobile Cell" (Karen E. Kalumuck); (4) "The Paths of Paths" (Steven Vogel); (5) "Fragments" (Pearl Tesler); (6) "Moving Pictures" (Amy Snyder); (7) "Plants on the Go" (Katharine…

  10. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringes method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck Camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images on a computer, we can use the data to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  11. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.
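
The "weighted quadratic average" mentioned above can be written explicitly. The form below is reconstructed from the standard definitions of the indices of polarimetric purity for a 3×3 coherency matrix, so it should be checked against the cited paper:

```latex
% For a trace-normalized 3x3 coherency matrix with ordered eigenvalues
% \lambda_1 \ge \lambda_2 \ge \lambda_3 (\lambda_1+\lambda_2+\lambda_3 = 1):
%   P_1 = \lambda_1 - \lambda_2              (degree of polarization)
%   P_2 = \lambda_1 + \lambda_2 - 2\lambda_3 (degree of directionality)
% The overall degree of polarimetric purity is then the weighted
% quadratic average
\[
  P_\Delta \;=\; \sqrt{\tfrac{3}{4}\,P_1^{2} \;+\; \tfrac{1}{4}\,P_2^{2}},
\]
% which is algebraically identical to
% P_\Delta^2 = \tfrac{3}{2}\sum_i \bigl(\lambda_i - \tfrac{1}{3}\bigr)^2.
```

The weights 3/4 and 1/4 follow from expanding both expressions in the centered eigenvalues; both vanish for fully unpolarized 3D light (all eigenvalues equal) and reach 1 for a pure state.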

  12. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r,{theta},z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  13. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  14. fMRI brain mapping during motion capture and FES induced motor tasks: signal to noise ratio assessment.

    PubMed

    Gandolla, Marta; Ferrante, Simona; Casellato, Claudia; Ferrigno, Giancarlo; Molteni, Franco; Martegani, Alberto; Frattini, Tiziano; Pedrocchi, Alessandra

    2011-10-01

    Functional Electrical Stimulation (FES) is a well-known clinical rehabilitation procedure; however, the neural mechanisms that underlie this treatment at the Central Nervous System (CNS) level are still not completely understood. Functional magnetic resonance imaging (fMRI) is a suitable tool to investigate the effects of rehabilitative treatments on brain plasticity. Moreover, monitoring the movement actually executed is needed to correctly interpret activation maps, above all in neurological patients, where required motor tasks may be only partially accomplished. The proposed experimental set-up includes a 1.5 T fMRI scanner, a motion capture system to acquire kinematic data, and an electro-stimulation device. The introduction of metallic devices and of stimulation current into the MRI room could affect fMRI acquisitions to the point of preventing reliable analysis of activation maps. Our interest is in whether the Blood Oxygenation Level Dependent (BOLD) signal, a marker of neural activity, can be detected within a given experimental condition and set-up. In this paper we assess temporal Signal to Noise Ratio (SNR) as an image quality index. BOLD signal change is about 1-2% as revealed by a 1.5 T scanner. This work demonstrates that, with this innovative set-up, a 1% BOLD signal change can be detected in at least 93% of the sub-volumes in the main cortical sensorimotor regions, and almost 100% of the sub-volumes are suitable for 2% signal change detection. The integrated experimental set-up will therefore allow fMRI maps of FES-induced movements to be acquired simultaneously with kinematic data, so as to investigate the contribution of FES-based rehabilitation treatments at the CNS level. PMID:21550290
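
The temporal-SNR criterion above (whether a given percent BOLD change is detectable in a sub-volume) can be sketched as follows; the synthetic data and the one-standard-deviation detectability threshold are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np

def detectable_fraction(timeseries, pct_change, z=1.0):
    """Fraction of voxels whose temporal SNR (mean/std over time) is
    high enough that a `pct_change` percent signal change exceeds z
    noise standard deviations: detectable when tSNR >= z*100/pct."""
    mean = timeseries.mean(axis=0)
    std = timeseries.std(axis=0)
    tsnr = np.divide(mean, std, out=np.zeros_like(mean), where=std > 0)
    return float(np.mean(tsnr >= z * 100.0 / pct_change))

# Synthetic data: 200 volumes x 500 voxels, baseline 1000, noise
# sigma ~ 5, i.e. tSNR ~ 200, so a 1% change (10 units) is detectable.
rng = np.random.default_rng(2)
data = 1000.0 + 5.0 * rng.normal(size=(200, 500))
print(detectable_fraction(data, pct_change=1.0))  # -> 1.0
```

The same function applied with a much smaller `pct_change` (e.g. 0.1%) returns 0.0 for this noise level, mirroring how the paper maps which sub-volumes support 1% versus 2% detection.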

  15. Stereo-vision based 3D modeling for unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Jasiobedzki, Piotr

    2007-04-01

    Instant Scene Modeler (iSM) is a vision system for quickly generating calibrated photo-realistic 3D models of unknown environments using stereo image sequences. Equipped with iSM, Unmanned Ground Vehicles (UGVs) can capture stereo images and create 3D models to be sent back to the base station while they explore unknown environments. Rapid access to 3D models will increase the operator's situational awareness and allow better mission planning and execution, as the models can be visualized from different views and used for relative measurements. Current military operations of UGVs under urban warfare threats involve the operator hand-sketching the environment from a live video feed. iSM eliminates the need for an additional operator, as the 3D model is generated automatically. The photo-realism of the models enhances the situational awareness of the mission, and the models can also be used for change detection. iSM has been tested on our autonomous vehicle to create photo-realistic 3D models while the rover traverses unknown environments. Moreover, a proof-of-concept iSM payload has been mounted on an iRobot PackBot with Wayfarer technology, which is equipped with autonomous urban reconnaissance capabilities. The Wayfarer PackBot UGV uses wheel odometry for localization and builds 2D occupancy grid maps from a laser sensor. While the UGV is following walls and avoiding obstacles, iSM captures and processes images to create photo-realistic 3D models. Experimental results show that iSM can complement the Wayfarer PackBot's autonomous navigation in two ways. The photo-realistic 3D models provide better situational awareness than 2D grid maps. Moreover, iSM also recovers the camera motion, known as visual odometry. As wheel odometry error grows over time, this can help improve the wheel odometry for better localization.

  16. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  17. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  18. Ames Lab 101: 3D Metals Printer

    SciTech Connect

    Ott, Ryan

    2014-02-13

    To meet one of the biggest energy challenges of the 21st century - finding alternatives to rare-earth elements and other critical materials - scientists will need new and advanced tools. The Critical Materials Institute at the U.S. Department of Energy's Ames Laboratory has a new one: a 3D printer for metals research. 3D printing technology, which has captured the imagination of both industry and consumers, enables ideas to move quickly from the initial design phase to final form using materials including polymers, ceramics, paper and even food. But the Critical Materials Institute (CMI) will apply the advantages of the 3D printing process in a unique way: for materials discovery.

  19. Ames Lab 101: 3D Metals Printer

    ScienceCinema

    Ott, Ryan

    2014-06-04

    To meet one of the biggest energy challenges of the 21st century - finding alternatives to rare-earth elements and other critical materials - scientists will need new and advanced tools. The Critical Materials Institute at the U.S. Department of Energy's Ames Laboratory has a new one: a 3D printer for metals research. 3D printing technology, which has captured the imagination of both industry and consumers, enables ideas to move quickly from the initial design phase to final form using materials including polymers, ceramics, paper and even food. But the Critical Materials Institute (CMI) will apply the advantages of the 3D printing process in a unique way: for materials discovery.

  20. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and the increased demand for smart phones, there has been significant growth in mobile TV markets. The rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices, so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even as mobile 3D technology drives the current market growth, one important issue must be considered for consistent development and growth of the display market. To put it briefly, human factors linked to mobile 3D viewing should be taken into consideration before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system when exposed to mobile 3D environments. Recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and support gradual progress towards human-friendly mobile 3D viewing.

  1. GestAction3D: A Platform for Studying Displacements and Deformations of 3D Objects Using Hands

    NASA Astrophysics Data System (ADS)

    Lingrand, Diane; Renevier, Philippe; Pinna-Déry, Anne-Marie; Cremaschi, Xavier; Lion, Stevens; Rouel, Jean-Guilhem; Jeanne, David; Cuisinaud, Philippe; Soula*, Julien

    We present a low-cost hand-based device coupled with a 3D motion recovery engine and 3D visualization. This platform aims at studying ergonomic 3D interactions in order to manipulate and deform 3D models by interacting with hands on 3D meshes. Deformations are done using different modes of interaction that we will detail in the paper. Finger extremities are attached to vertices, edges or facets. Switching from one mode to another or changing the point of view is done using gestures. The determination of the more adequate gestures is part of the work

  2. Numerical simulation of 3D breaking waves

    NASA Astrophysics Data System (ADS)

    Fraunie, Philippe; Golay, Frederic

    2015-04-01

    Numerical methods dealing with two-phase flows can basically be classified in two ways: "interface tracking" methods, in which the two phases are resolved separately with boundary conditions imposed at the interface, and "interface capturing" methods, in which a single flow with variable density is considered. Physical and numerical properties of the two approaches are discussed, based on numerical experiments performed on 3D breaking waves. Acknowledgements: This research was supported by the Modtercom program of Region PACA.

  3. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard requirements for metrological parameters able to identify the capability of capturing a real scene. For this reason, several national and international organizations have in the last ten years been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, Phase Shift, FM-CW, flash LADAR), this paper shows the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be directly analyzed, or some derived parameters can be obtained (e.g. angles between planes, distances between barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper shows theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.
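
The comparison between measured points and a certified shape can be sketched for the simplest case of a certified sphere (the artifact values are hypothetical; real protocols derive further parameters, such as sphere-spacing errors, from these deviations):

```python
import numpy as np

def sphere_deviations(points, center, radius):
    """Signed deviations of measured points from a certified sphere:
    distance to the certified center minus the certified radius."""
    return np.linalg.norm(points - center, axis=1) - radius

# Hypothetical artifact: certified 25 mm sphere, scanner noise ~0.05 mm.
rng = np.random.default_rng(3)
center, radius = np.array([0.0, 0.0, 0.0]), 25.0
dirs = rng.normal(size=(1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = dirs * (radius + 0.05 * rng.normal(size=1000))[:, None]
dev = sphere_deviations(pts, center, radius)
print(round(float(np.sqrt((dev ** 2).mean())), 3))  # RMS deviation, ~0.05 mm
```

The RMS of these signed deviations is one common summary of range-measurement uncertainty; systematic (non-zero-mean) deviations would instead point to calibration error.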

  4. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For this particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used a laser scanner as well as DSLR and smart phone photography to derive reference values, enabling verification of the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  5. Dynamical Systems Analysis of Fully 3D Ocean Features

    NASA Astrophysics Data System (ADS)

    Pratt, L. J.

    2011-12-01

    Dynamical systems analysis of transport and stirring processes has been developed most thoroughly for 2D flow fields. The calculation of manifolds, turnstile lobes, transport barriers, etc. based on observations of the ocean is most often conducted near the sea surface, whereas analysis at depth, usually carried out with model output, is normally confined to constant-z surfaces. At the mesoscale and larger, ocean flows are quasi-2D, but smaller-scale (submesoscale) motions, including mixed-layer phenomena with significant vertical velocity, may be predominantly 3D. The zoology of hyperbolic trajectories becomes richer in such cases and their attendant manifolds are much more difficult to calculate. I will describe some of the basic geometrical features and corresponding Lagrangian Coherent Structures expected to arise in upper ocean fronts, eddies, and Langmuir circulations. Traditional GFD models such as the rotating can flow may capture the important generic features. The dynamical systems approach is most helpful when these features are coherent and persistent; the implications and difficulties of this requirement in fully 3D flows will also be discussed.

  6. Computation of supersonic flow fields about bodies in coning motion using a shock-capturing finite-difference technique.

    NASA Technical Reports Server (NTRS)

    Schiff, L. B.

    1972-01-01

    A numerical method for computing the nonlinear inviscid flow field surrounding a body performing coning motion is described. The method permits accurate computation of the aerodynamic moment due to one of the four motions characterizing an arbitrary nonplanar motion. Results of computations for a slender circular cone in coning motion are presented, and show good agreement with experiment for angles of attack up to twice the cone half angle. The computational results display significant departure of the side moment from the linear theory value with increasing angle of attack, but agree well with experimental measurements. This indicates that the initial nonlinear behavior of the aerodynamic moment is determined primarily by the inviscid flow.

  7. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscope imaging system measuring 35×35×105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, a biological specimen's image can be captured in a single shot. From the raw light-field data, the focal plane can be shifted digitally and the 3-D image reconstructed after the image has been taken. To localize an object in a 3-D volume, an automated data-analysis algorithm that can precisely resolve depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light-field microscope algorithm to these focal stacks produces a set of cross sections that can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel-use efficiency and reduce crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two fluorescent particles of different colors, separated by a cover glass, over a 600 µm range, and we show its focal stacks and 3-D positions.

  8. 3D Spray Droplet Distributions in Sneezes

    NASA Astrophysics Data System (ADS)

    Techet, Alexandra; Scharfman, Barry; Bourouiba, Lydia

    2015-11-01

    3D spray droplet clouds generated during human sneezing are investigated using the Synthetic Aperture Feature Extraction (SAFE) method, which relies on light field imaging (LFI) and synthetic aperture (SA) refocusing computational photographic techniques. An array of nine high-speed cameras is used to image sneeze droplets and track them in 3D space and time (3D + T). An additional high-speed camera is utilized to track the motion of the head during sneezing. In the SAFE method, the raw images recorded by each camera in the array are preprocessed and binarized, simplifying post-processing after image refocusing and enabling the extraction of feature sizes and positions in 3D + T. These binary images are refocused using either additive or multiplicative methods, combined with thresholding. Sneeze droplet centroids, radii, distributions and trajectories are determined and compared with existing data. The reconstructed 3D droplet centroids and radii enable a more complete understanding of the physical extent and fluid dynamics of sneeze ejecta. These measurements are important for understanding the infectious disease transmission potential of sneezes in various indoor environments.
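    The additive refocusing step described above can be sketched in a few lines. The shift-and-add scheme below is a simplified illustration (integer pixel shifts and an arbitrary threshold are assumptions), not the SAFE implementation itself:

```python
import numpy as np

def refocus_additive(images, shifts, threshold=0.5):
    """Additive synthetic-aperture refocus: shift each camera's
    binarized image onto a chosen focal plane and average.

    images : list of 2D binary arrays (one per camera)
    shifts : list of (dy, dx) integer pixel shifts aligning each
             view onto the target focal plane (from calibration)
    """
    stack = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, shifts):
        stack += np.roll(np.roll(img.astype(float), dy, axis=0), dx, axis=1)
    stack /= len(images)
    # Features in focus reinforce across views; thresholding keeps them.
    return stack >= threshold

# Toy example: a 'droplet' at pixel (5, 5) seen by three aligned cameras.
views = [np.zeros((10, 10)) for _ in range(3)]
for v in views:
    v[5, 5] = 1.0
focused = refocus_additive(views, [(0, 0), (0, 0), (0, 0)])
print(focused[5, 5])  # True: in-focus droplet survives thresholding
```

    A multiplicative variant would replace the sum with a product, which suppresses features not seen in every view more aggressively.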

  9. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat, and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions, such as 3D scanning or manual design, that scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  10. Visual inertia of rotating 3-D objects.

    PubMed

    Jiang, Y; Pantle, A J; Mark, L S

    1998-02-01

    Five experiments were designed to determine whether a rotating, transparent 3-D cloud of dots (simulated sphere) could influence the perceived direction of rotation of a subsequent sphere. Experiment 1 established conditions under which the direction of rotation of a virtual sphere was perceived unambiguously. When a near-far luminance difference and perspective depth cues were present, observers consistently saw the sphere rotate in the intended direction. In Experiment 2, a near-far luminance difference was used to create an unambiguous rotation sequence that was followed by a directionally ambiguous rotation sequence that lacked both the near-far luminance cue and the perspective cue. Observers consistently saw the second sequence as rotating in the same direction as the first, indicating the presence of 3-D visual inertia. Experiment 3 showed that 3-D visual inertia was sufficiently powerful to bias the perceived direction of a rotation sequence made unambiguous by a near-far luminance cue. Experiment 5 showed that 3-D visual inertia could be obtained using an occlusion depth cue to create an unambiguous inertia-inducing sequence. Finally, Experiments 2, 4, and 5 all revealed a fast-decay phase of inertia that lasted for approximately 800 msec, followed by an asymptotic phase that lasted for periods as long as 1,600 msec. The implications of these findings are examined with respect to motion mechanisms of 3-D visual inertia. PMID:9529911

  11. Application of 3D Photo-reconstruction techniques in Geomorphology: Examples through different landforms and scales

    NASA Astrophysics Data System (ADS)

    Gómez-Gutiérrez, Álvaro; Schnabel, Susanne; Conoscenti, Christian; Caraballo-Arias, Nathalie A.; Ferro, Vito; di Stefano, Constanza; Juan de Sanjosé, José; Berenguer-Sempere, Fernando; de Matías, Javier

    2014-05-01

    Recent developments in tri-dimensional photo-reconstruction techniques (3D-PR), such as the combined use of Structure from Motion (SfM) and MultiView Stereo (MVS), have made it possible to obtain high-resolution 3D point clouds. These techniques require only oblique images from uncalibrated, non-metric consumer cameras. Here, they are used to measure, monitor and quantify geomorphological features and processes; three applications spanning a range of scales and landforms are presented. Firstly, five small gully headcuts located in a small catchment in SW Spain were monitored with the aim of estimating headcut retreat rates. During this field work, 3D models obtained by means of a Terrestrial Laser Scanner (TLS) were captured and used as benchmarks to assess the accuracy of the 3D-PR method. This analysis showed centimeter-level accuracies, with average distances between the 3D-PR model and the TLS model ranging from 0.009 to 0.025 m. Estimated soil loss ranged from -0.246 m³ to 0.114 m³ for a wet period (289 mm) of 54 days in 2013. Secondly, a calanchi-type badland in Sicily (Italy) was photo-reconstructed and the quality of the 3D-PR model was analyzed using a Digital Elevation Model produced by classic digital photogrammetry with photos captured by an Unmanned Aerial Vehicle (UAV). In this case, sub-meter calculated accuracies (0.30 m) showed that it is possible to describe badland morphology using 3D-PR models, but it is not feasible to use these models to quantify annual rates of soil erosion in badlands (10 mm eroded per year). Finally, a high-resolution model of the Veleta rock glacier (in SE Spain) was elaborated with 3D-PR techniques and compared with a 3D model obtained by means of a TLS. Results indicated that the 3D-PR method can be applied to the micro-scale study of glacier morphologies and processes, with average distances to the TLS point cloud of 0.21 m.
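    The accuracy figures quoted above (average distances between the 3D-PR and TLS point clouds) are essentially mean nearest-neighbour distances between two clouds. A brute-force sketch, using illustrative random data rather than the study's point clouds:

```python
import numpy as np

def mean_cloud_distance(cloud_a, cloud_b):
    """Mean nearest-neighbour distance from each point in cloud_a
    to cloud_b (brute force; adequate for small clouds, a k-d tree
    would be used for real survey-sized data)."""
    # (Na, Nb) matrix of pairwise Euclidean distances
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy check: cloud_b is cloud_a shifted ~0.01 m along x,
# so the mean cloud-to-cloud distance should come out near 0.01.
rng = np.random.default_rng(0)
a = rng.random((100, 3))
b = a + np.array([0.01, 0.0, 0.0])
print(mean_cloud_distance(a, b))  # close to the 0.01 shift
```

    Note this measure is asymmetric; comparisons like those in the study are often reported in both directions or via a symmetric variant.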

  12. 3D-GNOME: an integrated web service for structural modeling of the 3D genome

    PubMed Central

    Szalaj, Przemyslaw; Michalski, Paul J.; Wróblewski, Przemysław; Tang, Zhonghui; Kadlof, Michal; Mazzocco, Giovanni; Ruan, Yijun; Plewczynski, Dariusz

    2016-01-01

    Recent advances in high-throughput chromosome conformation capture (3C) technology, such as Hi-C and ChIA-PET, have demonstrated the importance of 3D genome organization in development, cell differentiation and transcriptional regulation. There is now a widespread need for computational tools to generate and analyze 3D structural models from 3C data. Here we introduce our 3D GeNOme Modeling Engine (3D-GNOME), a web service which generates 3D structures from 3C data and provides tools to visually inspect and annotate the resulting structures, in addition to a variety of statistical plots and heatmaps which characterize the selected genomic region. Users submit a bedpe (paired-end BED format) file containing the locations and strengths of long range contact points, and 3D-GNOME simulates the structure and provides a convenient user interface for further analysis. Alternatively, a user may generate structures using published ChIA-PET data for the GM12878 cell line by simply specifying a genomic region of interest. 3D-GNOME is freely available at http://3dgnome.cent.uw.edu.pl/. PMID:27185892
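    As a rough illustration of the input described above, the sketch below parses BEDPE-style records into anchor pairs with a contact strength. The column layout follows the standard BEDPE convention (two chrom/start/end triplets, then name and score); 3D-GNOME's exact parsing rules may differ:

```python
def parse_bedpe(lines):
    """Parse BEDPE records into (anchor1, anchor2, strength) tuples.

    Columns: chrom1 start1 end1 chrom2 start2 end2 [name score ...]
    Column 8 (score) is read as the contact strength, per the
    standard BEDPE layout; this is an assumption, not 3D-GNOME's spec.
    """
    contacts = []
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue  # skip blanks and comments
        f = line.rstrip("\n").split("\t")
        anchor1 = (f[0], int(f[1]), int(f[2]))
        anchor2 = (f[3], int(f[4]), int(f[5]))
        strength = float(f[7]) if len(f) > 7 else 1.0
        contacts.append((anchor1, anchor2, strength))
    return contacts

# One hypothetical long-range contact between two loci on chr1
records = ["chr1\t10000\t20000\tchr1\t500000\t510000\tloop1\t42"]
print(parse_bedpe(records)[0][2])  # 42.0
```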

  13. 3D-GNOME: an integrated web service for structural modeling of the 3D genome.

    PubMed

    Szalaj, Przemyslaw; Michalski, Paul J; Wróblewski, Przemysław; Tang, Zhonghui; Kadlof, Michal; Mazzocco, Giovanni; Ruan, Yijun; Plewczynski, Dariusz

    2016-07-01

    Recent advances in high-throughput chromosome conformation capture (3C) technology, such as Hi-C and ChIA-PET, have demonstrated the importance of 3D genome organization in development, cell differentiation and transcriptional regulation. There is now a widespread need for computational tools to generate and analyze 3D structural models from 3C data. Here we introduce our 3D GeNOme Modeling Engine (3D-GNOME), a web service which generates 3D structures from 3C data and provides tools to visually inspect and annotate the resulting structures, in addition to a variety of statistical plots and heatmaps which characterize the selected genomic region. Users submit a bedpe (paired-end BED format) file containing the locations and strengths of long range contact points, and 3D-GNOME simulates the structure and provides a convenient user interface for further analysis. Alternatively, a user may generate structures using published ChIA-PET data for the GM12878 cell line by simply specifying a genomic region of interest. 3D-GNOME is freely available at http://3dgnome.cent.uw.edu.pl/. PMID:27185892

  14. New 3D Bolton standards: coregistration of biplane x rays and 3D CT

    NASA Astrophysics Data System (ADS)

    Dean, David; Subramanyan, Krishna; Kim, Eun-Kyung

    1997-04-01

    The Bolton Standards 'normative' cohort (16 males, 16 females) have been invited back to the Bolton-Brush Growth Study Center for new biorthogonal plain film head x-rays and 3D (three-dimensional) head CT scans. A set of 29 3D landmarks was identified on both their biplane head film and 3D CT images. The current 3D CT image is then superimposed onto the landmarks collected from the current biplane head films. Three post-doctoral fellows have collected 37 3D landmarks from the Bolton Standards' 40-70-year-old biplane head films. These films were captured annually during their growing period (ages 3-18). Using 29 of these landmarks, the current 3D CT image is next warped (via thin plate spline) to landmarks taken from each participant's 18th-year biplane head films, a process that is successively reiterated back to age 3. This process is demonstrated here for one of the Bolton Standards. The outer skull surfaces will be extracted from each warped 3D CT image and an average will be generated for each age/sex group. The resulting longitudinal series of average 'normative' bony skull surface images may be useful for craniofacial patient diagnosis, treatment planning, stereotactic procedures, and outcomes assessment.

  15. Applying Augmented Reality to a Mobile-Assisted Learning System for Martial Arts Using Kinect Motion Capture

    ERIC Educational Resources Information Center

    Hsu, Wen-Chun; Shih, Ju-Ling

    2016-01-01

    In this study, the routine of Tantui, a branch of martial arts, was taken as the object of research. Fitts' stages of motor learning and augmented reality (AR) were applied to a 3D mobile-assisted learning system for martial arts, which was characterized by free viewing angles. With the new system, learners could rotate the viewing angle of…

  16. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  17. USM3D Predictions of Supersonic Nozzle Flow

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Elmiligui, Alaa A.; Campbell, Richard L.; Nayani, Sudheer N.

    2014-01-01

    This study focused on the capability of the NASA Tetrahedral Unstructured Software System CFD code (USM3D) to predict supersonic plume flow. Previous studies, published in 2004 and 2009, compared USM3D's results with historical experimental data. The current study continued that comparison, focusing on the use of volume sourcing to capture the shear layers and internal shock structure of the plume. The study was conducted using two benchmark axisymmetric supersonic jet experimental data sets, and showed that with the use of volume sourcing, USM3D was able to capture and model a jet plume's shear layer and internal shock structure.

  18. Testing long-period ground-motion simulations of scenario earthquakes using the Mw 7.2 El Mayor-Cucapah mainshock: Evaluation of finite-fault rupture characterization and 3D seismic velocity models

    USGS Publications Warehouse

    Graves, Robert W.; Aagaard, Brad T.

    2011-01-01

    Using a suite of five hypothetical finite-fault rupture models, we test the ability of long-period (T>2.0 s) ground-motion simulations of scenario earthquakes to produce waveforms throughout southern California consistent with those recorded during the 4 April 2010 Mw 7.2 El Mayor-Cucapah earthquake. The hypothetical ruptures are generated using the methodology proposed by Graves and Pitarka (2010) and require, as inputs, only a general description of the fault location and geometry, event magnitude, and hypocenter, as would be done for a scenario event. For each rupture model, two Southern California Earthquake Center three-dimensional community seismic velocity models (CVM-4m and CVM-H62) are used, resulting in a total of 10 ground-motion simulations, which we compare with recorded ground motions. While the details of the motions vary across the simulations, the median levels match the observed peak ground velocities reasonably well, with the standard deviation of the residuals generally within 50% of the median. Simulations with the CVM-4m model yield somewhat lower variance than those with the CVM-H62 model. Both models tend to overpredict motions in the San Diego region and underpredict motions in the Mojave desert. Within the greater Los Angeles basin, the CVM-4m model generally matches the level of observed motions, whereas the CVM-H62 model tends to overpredict the motions, particularly in the southern portion of the basin. The variance in the peak velocity residuals is lowest for a rupture that has significant shallow slip (<5 km depth), whereas the variance in the residuals is greatest for ruptures with large asperities below 10 km depth. Overall, these results are encouraging and provide confidence in the predictive capabilities of the simulation methodology, while also suggesting some regions in which the seismic velocity models may need improvement.
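    The residual statistics quoted above (median levels matching observations, standard deviation of residuals within 50% of the median) can be illustrated with a short sketch. The natural-log residual ln(obs/sim) used here is a common ground-motion convention, not necessarily the paper's exact definition, and the peak-ground-velocity values are made up:

```python
import numpy as np

def pgv_residual_stats(observed, simulated):
    """Bias (mean) and spread (std) of natural-log residuals
    ln(obs/sim) of peak ground velocity, a common way to summarize
    how well simulations match recordings."""
    r = np.log(np.asarray(observed) / np.asarray(simulated))
    return r.mean(), r.std()

# Hypothetical PGV values (cm/s) at four stations
obs = [12.0, 8.0, 15.0, 6.0]
sim = [10.0, 9.0, 14.0, 7.0]
bias, sigma = pgv_residual_stats(obs, sim)
print(bias, sigma)  # small bias (≈ -0.005), sigma ≈ 0.14
```

    A near-zero bias with modest sigma corresponds to the paper's finding that median simulated levels track the observations while station-to-station scatter remains.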

  19. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  20. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  1. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  2. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fail to provide target distance and a three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information-gathering capability are discussed.

  3. Holography of incoherently illuminated 3D scenes

    NASA Astrophysics Data System (ADS)

    Shaked, Natan T.; Rosen, Joseph

    2008-04-01

    We review several methods of generating holograms of 3D realistic objects illuminated by incoherent white light. Using these methods, it is possible to obtain holograms with a simple digital camera operating in regular light conditions. Thus, most disadvantages characterizing conventional holography, namely the need for a powerful, highly coherent laser and meticulous stability of the optical system, are avoided. These holograms can be reconstructed optically by illuminating them with a coherent plane wave, or alternatively by using a digital reconstruction technique. In order to generate the proposed hologram, the 3D scene is captured from multiple points of view by a simple digital camera. Then, the acquired projections are digitally processed to yield the final hologram of the 3D scene. Based on this principle, we can generate Fourier, Fresnel, image or other types of holograms. To obtain certain advantages over the regular holograms, we also propose new digital holograms, such as modified Fresnel holograms and protected correlation holograms. Instead of shifting the camera mechanically to acquire a different projection of the 3D scene each time, it is possible to use a microlens array for acquiring the entire set of projections in a single camera shot. Alternatively, only the extreme projections can be acquired experimentally, while the middle projections are predicted digitally by using the view synthesis algorithm. The prospective goal of these methods is to facilitate the design of a simple, portable digital holographic camera which can be useful for a variety of practical applications.

  4. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  5. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  6. LLNL-Earth3D

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  7. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a further development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated by 2D sonography and 2D computer tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Large differences were found between the volumes of liver findings estimated by the three different techniques. 3D ultrasound represents a valuable method to judge morphological appearance in abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible. PMID:7919882

  8. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  9. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  10. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D
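    The median-combination and view-synthesis steps described above can be sketched as follows. This is a simplified illustration (integer disparities, no occlusion handling), not the authors' implementation:

```python
import numpy as np

def fuse_disparities(disparity_fields):
    """Combine candidate disparity fields with a pixelwise median,
    as the paper does, to suppress outliers among retrieved stereopairs."""
    return np.median(np.stack(disparity_fields), axis=0)

def synthesize_right_view(left, disparity):
    """Warp the left image by integer disparities to form a right view.
    Naive forward warp; holes and occlusions are left at zero, whereas
    the paper handles newly-exposed areas explicitly."""
    h, w = left.shape
    right = np.zeros_like(left)
    cols = np.arange(w)
    for y in range(h):
        target = cols - disparity[y].astype(int)  # shift left by disparity
        valid = (target >= 0) & (target < w)
        right[y, target[valid]] = left[y, cols[valid]]
    return right

# Toy: three candidate disparity fields, one of them an outlier
fields = [np.full((4, 6), 2.0), np.full((4, 6), 2.0), np.full((4, 6), 5.0)]
d = fuse_disparities(fields)
print(d[0, 0])  # 2.0 -- the median rejects the outlier field
```

    In the actual method the candidate fields come from on-line stereopairs whose left images photometrically match the 2D query.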

  11. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. 
Several talks were devoted to reporting recent observations with newly

  12. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  13. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD/CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  14. Fabricating 3D figurines with personalized faces.

    PubMed

    Tena, J Rafael; Mahler, Moshe; Beeler, Thabo; Grosse, Max; Hengchin Yeh; Matthews, Iain

    2013-01-01

    We present a semi-automated system for fabricating figurines with faces that are personalised to the individual likeness of the customer. The efficacy of the system has been demonstrated by commercial deployments at Walt Disney World Resort and Star Wars Celebration VI in Orlando, Florida. Although the system is semi-automated, human intervention is limited to a few simple tasks to maintain the high throughput and consistent quality required for commercial application. In contrast to existing systems that fabricate custom heads that are assembled to pre-fabricated plastic bodies, our system seamlessly integrates 3D facial data with a predefined figurine body into a unique and continuous object that is fabricated as a single piece. The combination of state-of-the-art 3D capture, modelling, and printing that forms the core of our system provides the flexibility to fabricate figurines whose complexity is only limited by the creativity of the designer. PMID:24808129

  15. 3D scene reconstruction from multi-aperture images

    NASA Astrophysics Data System (ADS)

    Mao, Miao; Qin, Kaihuai

    2014-04-01

    With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. Firstly, images with different apertures are captured via a programmable aperture. Secondly, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and the 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scenes.
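    The sparse-model step above — recovering a 3D point from a pair of matched features seen by two calibrated cameras — can be sketched with standard linear (DLT) triangulation. This is a generic textbook illustration, not the paper's implementation; the camera matrices and point below are made-up toy values.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen in two views.

        P1, P2 : 3x4 camera projection matrices
        x1, x2 : (u, v) pixel coordinates of the matched feature in each view
        Returns the 3D point in world coordinates.
        """
        # Each view contributes two linear constraints on the homogeneous point.
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # Solution is the right singular vector with the smallest singular value.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]  # de-homogenise

    # Two toy cameras: identical intrinsics, second camera offset along x (baseline).
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])

    X_true = np.array([0.2, -0.1, 3.0])
    x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

    print(triangulate(P1, P2, x1, x2))  # recovers [0.2, -0.1, 3.0]
    ```

    With noise-free matches the linear solution is exact; in practice the matched SIFT coordinates are noisy and the result is typically refined by bundle adjustment.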

  16. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
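    A derived quantity such as surface pressure can be computed from the five conserved variables the solution file stores per grid point (density, the three momentum components, and stagnation energy) via the ideal-gas relation. A minimal sketch of that computation, assuming a perfect gas with γ = 1.4 (this illustrates the relation, not PLOT3D's own source code):

    ```python
    import numpy as np

    GAMMA = 1.4  # ratio of specific heats, assumed (air / perfect gas)

    def pressure(rho, rho_u, rho_v, rho_w, e0):
        """Static pressure from the five conserved variables stored per grid
        point in a solution file: p = (gamma - 1) * (e0 - kinetic energy),
        where kinetic energy = 0.5 * (rho_u^2 + rho_v^2 + rho_w^2) / rho."""
        kinetic = 0.5 * (rho_u**2 + rho_v**2 + rho_w**2) / rho
        return (GAMMA - 1.0) * (e0 - kinetic)

    # Still gas with rho = 1 and e0 = 1/(gamma - 1) gives unit pressure.
    print(pressure(1.0, 0.0, 0.0, 0.0, 1.0 / (GAMMA - 1.0)))  # -> 1.0
    ```

    The same arrays applied over a whole 50-grid input would vectorize directly with numpy, one call per grid block.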

  17. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  18. Movement Behaviour of Traditionally Managed Cattle in the Eastern Province of Zambia Captured Using Two-Dimensional Motion Sensors

    PubMed Central

    Lubaba, Caesar H.; Hidano, Arata; Welburn, Susan C.; Revie, Crawford W.; Eisler, Mark C.

    2015-01-01

    Two-dimensional motion sensors use electronic accelerometers to record the lying, standing and walking activity of cattle. Movement behaviour data collected automatically using these sensors over prolonged periods of time could be of use to stakeholders making management and disease control decisions in rural sub-Saharan Africa, leading to potential improvements in animal health and production. Motion sensors were used in this study with the aim of monitoring and quantifying the movement behaviour of traditionally managed Angoni cattle in Petauke District in the Eastern Province of Zambia. This study was designed to assess whether motion sensors were suitable for use on traditionally managed cattle in two veterinary camps in the district. In each veterinary camp, twenty cattle were selected for study. Each animal had a motion sensor placed on its hind leg to continuously measure and record its movement behaviour over a two-week period. Analysing the sensor data using principal components analysis (PCA) revealed that the majority of variability in behaviour among studied cattle could be attributed to their behaviour at night and in the morning. The behaviour at night was markedly different between veterinary camps, while differences in the morning appeared to reflect varying behaviour across all animals. The study results validate the use of such motion sensors in the chosen setting and highlight the importance of appropriate data summarisation techniques to adequately describe and compare animal movement behaviours if associations with other factors, such as location, breed or health status, are to be assessed. PMID:26366728
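    The PCA step described above — summarising each animal's sensor trace and then finding the components that separate the herds — can be sketched as follows. The data are synthetic and purely illustrative (40 hypothetical animals, hourly activity fractions, one group more active at night and one in the morning); this is not the study's dataset or exact pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-animal summaries: fraction of each hour spent active,
    # as a 40 x 24 matrix (20 animals per "camp").
    night = rng.normal(0.2, 0.05, (20, 24)); night[:, 0:6] += 0.4   # camp A: night-active
    day   = rng.normal(0.2, 0.05, (20, 24)); day[:, 8:12] += 0.4    # camp B: morning-active
    X = np.vstack([night, day])

    # PCA via SVD of the mean-centred data matrix.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * S                    # principal-component scores per animal
    explained = S**2 / np.sum(S**2)   # fraction of variance per component

    # The first component is dominated by the night-vs-morning contrast,
    # so its scores separate the two camps.
    print(explained[:2])
    ```

    Looking at which hours load most heavily on the leading components is what lets the analysis attribute the between-camp variability to night-time and morning behaviour.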

  19. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, by using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in a suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They have the ability to be fabricated in customized shapes with various material properties and with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ-on-a-chip models. PMID:26066320

  20. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

    Background Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion Arena3D is a user-friendly visualization tool that is able to visualize biological or any other network in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly from . Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715
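    One of the per-layer clustering options named above, k-means, can be sketched in a few lines. This is a minimal generic implementation with farthest-point seeding (Arena3D itself is Java-based; the node positions below are made-up 2D "layer" coordinates):

    ```python
    import numpy as np

    def kmeans(points, k, iters=50):
        """Plain k-means with farthest-point initialisation: a minimal sketch
        of the clustering idea, not Arena3D's own implementation."""
        # Seed: start from points[0], then repeatedly add the point farthest
        # from every centre chosen so far (keeps seeds in separate clusters).
        centres = [points[0]]
        for _ in range(1, k):
            dist = np.min([np.linalg.norm(points - c, axis=1) for c in centres], axis=0)
            centres.append(points[dist.argmax()])
        centres = np.array(centres)
        for _ in range(iters):
            # Assign each node to its nearest centre...
            d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # ...then move each centre to the mean of its members.
            for j in range(k):
                if np.any(labels == j):
                    centres[j] = points[labels == j].mean(axis=0)
        return labels, centres

    # Two well-separated blobs of "nodes" on one layer (2D positions).
    rng = np.random.default_rng(1)
    pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
    labels, _ = kmeans(pts, 2)
    ```

    Grouping a layer's nodes this way before applying a layout is what keeps related entities spatially close on their layer.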

  1. The 3D rocket combustor acoustics model

    NASA Technical Reports Server (NTRS)

    Priem, Richard J.; Breisacher, Kevin J.

    1992-01-01

    The theory and procedures for determining the characteristics of pressure oscillations in rocket engines with prescribed burning rate oscillations are presented. Analyses including radial and hub baffles and absorbers can be performed in one, two, and three dimensions. Pressure and velocity oscillations calculated using this procedure are presented for the SSME to show the influence of baffles and absorbers on the burning rate oscillations required to achieve neutral stability. Comparisons are made between the results obtained utilizing 1-D, 2-D, and 3-D assumptions with regard to capturing the physical phenomena of interest and computational requirements.
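    The simplest 1-D case of the chamber acoustics involved here is the set of longitudinal standing-wave frequencies of a closed duct, f_n = n·c/(2L). A textbook illustration of that relation, with purely illustrative numbers (not SSME values, and not the paper's 3D model):

    ```python
    import numpy as np

    def longitudinal_modes(c, L, n_modes=3):
        """Closed-closed longitudinal acoustic mode frequencies of a chamber
        idealised as a 1-D duct of length L and sound speed c:
        f_n = n * c / (2 * L) for n = 1, 2, ..."""
        n = np.arange(1, n_modes + 1)
        return n * c / (2.0 * L)

    # Illustrative hot-gas sound speed 1000 m/s, 1 m chamber:
    print(longitudinal_modes(1000.0, 1.0))  # -> [ 500. 1000. 1500.]
    ```

    Tangential and radial modes of a cylindrical chamber follow an analogous relation involving Bessel-function roots, which is where 2-D and 3-D analyses diverge from the 1-D picture.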

  2. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age or facial expression. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553
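    The triangulation imaging principle mentioned above reduces, in the rectified two-view case, to the classic relation depth = focal length × baseline / disparity. A generic one-line illustration of that principle, with made-up numbers rather than the authors' actual system geometry:

    ```python
    def depth_from_disparity(f_px, baseline_m, disparity_px):
        """Rectified two-view triangulation: z = f * b / d, where f is the
        focal length in pixels, b the baseline in metres, and d the disparity
        in pixels. All values in the example below are illustrative."""
        return f_px * baseline_m / disparity_px

    # 800 px focal length, 10 cm baseline, 20 px disparity -> 4 m depth.
    print(depth_from_disparity(800, 0.10, 20))  # -> 4.0
    ```

    The inverse dependence on disparity is why depth resolution degrades with distance in any triangulation-based acquisition system.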

  3. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age or facial expression. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553

  4. Evaluation of two approaches for aligning data obtained from