Sample records for camera three-dimensional motion

  1. Holographic motion picture camera with Doppler shift compensation

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1976-01-01

    A holographic motion picture camera for producing three-dimensional images by employing an elliptical optical system is reported. A motion compensator provided in one of the beam paths (the object or reference beam path) enables the camera to photograph faster-moving objects.

  2. The application of holography as a real-time three-dimensional motion picture camera

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L.

    1973-01-01

    A historical introduction to holography is presented, as well as a basic description of sideband holography for stationary objects. A brief theoretical development of both time-dependent and time-independent holography is also provided, along with an analytical and intuitive discussion of a unique holographic arrangement which allows the resolution of front surface detail from an object moving at high speeds. As an application of such a system, a real-time three-dimensional motion picture camera system is discussed and the results of a recent demonstration of the world's first true three-dimensional motion picture are given.

  3. Dimensional coordinate measurements: application in characterizing cervical spine motion

    NASA Astrophysics Data System (ADS)

    Zheng, Weilong; Li, Linan; Wang, Shibin; Wang, Zhiyong; Shi, Nianke; Xue, Yuan

    2014-06-01

    The cervical spine is a complicated part of the human body, and its movements take diverse forms. The motion of each vertebral segment is three-dimensional, reflected in changes of the angle between adjacent joints and in displacements along different directions. Under normal conditions, the cervical spine can flex, extend, laterally flex, and rotate. Because there is no relative motion between measurement marks fixed on one segment of a cervical vertebra, a vertebra carrying three marked points can be treated as a rigid body, and a rigid body's motion in space can be decomposed into a translation plus a rotation about a base point. This study concerns the calculation of the three-dimensional coordinates of marked points attached to the cervical spine by an optical method; from these measurements, motion parameters for every spinal segment can then be calculated. We chose a three-dimensional measurement method based on binocular stereo vision: the object carrying the marked points is placed in front of two CCD cameras erected in parallel, and each shot yields two parallax images taken from the different cameras, from which three-dimensional measurements can be made according to the principle of binocular vision. This paper describes the layout of the experimental system and the mathematical model used to obtain the coordinates.
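
The binocular triangulation this record describes can be sketched, for ideally parallel (rectified) pinhole cameras, as depth-from-parallax followed by back-projection. All names and numbers below are illustrative, not the paper's actual setup:

```python
import numpy as np

def triangulate_parallel(uv_left, uv_right, f, baseline, cx, cy):
    """Recover a 3-D point from a matched pixel pair seen by two
    parallel, rectified pinhole cameras.

    uv_left, uv_right: (u, v) pixel coordinates in each image.
    f: focal length in pixels; baseline: camera separation (m);
    cx, cy: principal point in pixels.
    """
    disparity = uv_left[0] - uv_right[0]      # horizontal parallax
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    Z = f * baseline / disparity              # depth from parallax
    X = (uv_left[0] - cx) * Z / f             # back-project x
    Y = (uv_left[1] - cy) * Z / f             # back-project y
    return np.array([X, Y, Z])

# A point 2 m away seen by cameras 0.1 m apart with f = 800 px:
p = triangulate_parallel((420.0, 300.0), (380.0, 300.0),
                         f=800.0, baseline=0.1, cx=400.0, cy=300.0)
```

In practice the cameras are calibrated first, and lens distortion and rectification errors dominate the accuracy; the formula above is only the ideal-geometry core.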

  4. Experimental investigation of strain errors in stereo-digital image correlation due to camera calibration

    NASA Astrophysics Data System (ADS)

    Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan

    2018-03-01

    The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.

  5. Computer aided photographic engineering

    NASA Technical Reports Server (NTRS)

    Hixson, Jeffrey A.; Rieckhoff, Tom

    1988-01-01

    High speed photography is an excellent source of engineering data but only provides a two-dimensional representation of a three-dimensional event. Multiple cameras can be used to provide data for the third dimension but camera locations are not always available. A solution to this problem is to overlay three-dimensional CAD/CAM models of the hardware being tested onto a film or photographic image, allowing the engineer to measure surface distances, relative motions between components, and surface variations.

  6. Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects

    DOEpatents

    Lu, Shin-Yee

    1998-01-01

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360.degree. all around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120.degree. apart from one another.
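
The core geometric step in a structured-light system of this kind can be sketched as a ray-plane intersection: the camera ray through a lit pixel is intersected with the known plane swept by one projected stripe. This is a generic illustration under assumed geometry, not the patent's exact formulation:

```python
import numpy as np

def ray_plane_intersection(cam_center, ray_dir, plane_point, plane_normal):
    """Intersect the camera ray (cam_center + t * ray_dir) with the
    plane through plane_point with normal plane_normal, returning the
    3-D surface point illuminated by one projected stripe."""
    cam_center = np.asarray(cam_center, float)
    ray_dir = np.asarray(ray_dir, float)
    w = np.asarray(plane_point, float) - cam_center
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the light plane")
    t = np.dot(plane_normal, w) / denom       # parametric distance along ray
    return cam_center + t * ray_dir

# Stripe plane x = 0.5 (normal along x); camera at the origin with a
# ray tilted in x. The surface point lies where the ray meets the plane:
pt = ray_plane_intersection([0, 0, 0], [0.25, 0.0, 1.0],
                            [0.5, 0, 0], [1.0, 0.0, 0.0])
```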

  7. Image system for three dimensional, 360{degree}, time sequence surface mapping of moving objects

    DOEpatents

    Lu, S.Y.

    1998-12-22

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360{degree} all around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120{degree} apart from one another. 20 figs.

  8. 3-D Velocimetry of Strombolian Explosions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Gaudin, D.; Orr, T. R.; Scarlato, P.; Houghton, B. F.; Del Bello, E.

    2014-12-01

    Using two synchronized high-speed cameras we were able to reconstruct the three-dimensional displacement and velocity field of bomb-sized pyroclasts in Strombolian explosions at Stromboli Volcano. Relatively low-intensity Strombolian-style activity offers a rare opportunity to observe volcanic processes that remain hidden from view during more violent explosive activity. Such processes include the ejection and emplacement of bomb-sized clasts along pure or drag-modified ballistic trajectories, in-flight bomb collision, and gas liberation dynamics. High-speed imaging of Strombolian activity has already opened new windows for the study of the abovementioned processes, but to date has only utilized two-dimensional analysis with limited motion detection and ability to record motion towards or away from the observer. To overcome this limitation, we deployed two synchronized high-speed video cameras at Stromboli. The two cameras, located sixty meters apart, filmed Strombolian explosions at 500 and 1000 frames per second and with different resolutions. Frames from the two cameras were pre-processed and combined into a single video showing frames alternating from one to the other camera. Bomb-sized pyroclasts were then manually identified and tracked in the combined video, together with fixed reference points located as close as possible to the vent. The results from manual tracking were fed to a custom software routine that, knowing the relative position of the vent and cameras, and the field of view of the latter, provided the position of each bomb relative to the reference points. By tracking tens of bombs over five to ten frames at different intervals during one explosion, we were able to reconstruct the three-dimensional evolution of the displacement and velocity fields of bomb-sized pyroclasts during individual Strombolian explosions. Shifting jet directivity and dispersal angle clearly appear from the three-dimensional analysis.

  9. Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras

    DTIC Science & Technology

    1990-04-01

    poor resolution and a very limited working volume [Wan90]. 4 OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. Each...Nor88] Northern Digital. Trade literature on Optotrak - Northern Digital’s Three Dimensional Optical Motion Tracking and Analysis System. Northern Digital

  10. Settling dynamics of asymmetric rigid fibers

    Treesearch

    E.J. Tozzi; C Tim Scott; David Vahey; D.J. Klingenberg

    2011-01-01

    The three-dimensional motion of asymmetric rigid fibers settling under gravity in a quiescent fluid was experimentally measured using a pair of cameras located on a movable platform. The particle motion typically consisted of an initial transient after which the particle approached a steady rate of rotation about an axis parallel to the acceleration of gravity, with...

  11. Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.

    PubMed

    Quesada, Luis; León, Alejandro J

    2012-10-01

    Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three-degrees-of-freedom motion tracking system that needs no knowledge of the target object and requires only a single low-budget camera of the kind installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled, unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, provided that it is opaque, evenly colored, contrasts sufficiently with the background in each frame, and does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal imposes no additional constraints, and therefore allows a market-wide implementation of applications that require estimating the three positional degrees of freedom of an object.

  12. A Soft Sensor-Based Three-Dimensional (3-D) Finger Motion Measurement System

    PubMed Central

    Park, Wookeun; Ro, Kyongkwan; Kim, Suin; Bae, Joonbum

    2017-01-01

    In this study, a soft sensor-based three-dimensional (3-D) finger motion measurement system is proposed. The sensors, made of the soft material Ecoflex, comprise embedded microchannels filled with a conductive liquid metal (EGaIn). The superior elasticity, light weight, and sensitivity of soft sensors allow them to be embedded in environments in which conventional sensors cannot. Complicated finger joints, such as the carpometacarpal (CMC) joint of the thumb, are modeled to specify the locations of the sensors. Algorithms to decouple the signals from the soft sensors are proposed to extract the pure flexion, extension, abduction, and adduction joint angles. The performance of the proposed system and algorithms is verified by comparison with a camera-based motion capture system. PMID:28241414

  13. Automated Reconstruction of Three-Dimensional Fish Motion, Forces, and Torques

    PubMed Central

    Voesenek, Cees J.; Pieters, Remco P. M.; van Leeuwen, Johan L.

    2016-01-01

    Fish can move freely through the water column and make complex three-dimensional motions to explore their environment, escape, or feed. Nevertheless, the majority of swimming studies are currently limited to two-dimensional analyses. Accurate experimental quantification of changes in body shape, position and orientation (swimming kinematics) in three dimensions is therefore essential to advance biomechanical research of fish swimming. Here, we present a validated method that automatically tracks a swimming fish in three dimensions from multi-camera high-speed video. We use an optimisation procedure to fit a parameterised, morphology-based fish model to each set of video images. This results in a time sequence of position, orientation and body curvature. We post-process these data to derive additional kinematic parameters (e.g. velocities, accelerations) and propose an inverse-dynamics method to compute the resultant forces and torques during swimming. The presented method for quantifying 3D fish motion paves the way for future analyses of swimming biomechanics. PMID:26752597

  14. An overview of the stereo correlation and triangulation formulations used in DICe.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Daniel Z.

    This document provides a detailed overview of the stereo correlation algorithm and triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three-dimensional motion in space given the image coordinates and camera calibration parameters.

  15. 3D reconstruction based on light field images

    NASA Astrophysics Data System (ADS)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposes a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work was carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure-from-motion (SFM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, and a 3D sparse point cloud is obtained in the end. The method shows that 3D reconstruction can be implemented from only two light field captures, rather than the dozen or more captures required by traditional cameras. This can effectively address the time-consuming and laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
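
The triangulation at the heart of such an SfM pipeline can be sketched as textbook linear (DLT) triangulation of a point from two known 3x4 projection matrices; the matrices and the point below are illustrative, not the paper's calibration:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3-D point from its
    normalized image projections x1, x2 under two 3x4 projection
    matrices P1, P2, via the SVD null vector of the stacked system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # right null vector (homogeneous point)
    return X[:3] / X[3]            # dehomogenize

# Two cameras: identity pose, and a 1-unit translation along x:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 3.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]   # project into cam 1
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]   # project into cam 2
X_rec = triangulate_dlt(P1, P2, x1, x2)
```

With noise-free projections the reconstruction is exact; in a real SfM pipeline this step runs after feature matching and pose recovery, for every matched track.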

  16. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. Causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, and fast motion of objects. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and the comfort zone are analyzed. According to the human visual system (HVS), viewers need only converge their eyes on specific objects when the camera and background are static; for other camera conditions, relative motion should be considered, which determines different factor coefficients and weights. A novel visual fatigue prediction model, improving on the traditional one, is presented: the visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and a total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.

  17. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
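
A least-squares motion fit of the kind this record describes can be sketched, under generic assumptions, as the SVD-based Kabsch/Procrustes solution that recovers the rigid rotation and translation mapping one 3D point set onto another. This is an illustrative sketch, not the system's actual code:

```python
import numpy as np

def rigid_motion_lsq(P, Q):
    """Least-squares estimate of rotation R and translation t such that
    Q ~= P @ R.T + t, for Nx3 arrays of matched 3-D points, via the
    SVD-based Kabsch solution."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cp).T @ (Q - cq)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 90-degree yaw plus translation from four point pairs:
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_motion_lsq(P, Q)
```

In a visual-odometry setting the inputs would be stereo-derived 3D points in consecutive frames, and outlier pixels belonging to independently moving objects would first have to be excluded, as the article notes.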

  18. Observation of the wing deformation and the CFD study of cicada

    NASA Astrophysics Data System (ADS)

    Dai, Hu; Mohd Adam Das, Shahrizan; Luo, Haoxiang

    2011-11-01

    We studied the wing properties and kinematics of cicadas when the 13-year species emerged in amazingly large numbers in middle Tennessee during May 2011. Using a high-speed camera, we recorded the wing motion of the insect and then reconstructed the three-dimensional wing kinematics using video digitization software. As in many other insects, the deformation of the cicada wing is asymmetric between the downstroke and upstroke half-cycles, and this particular deformation pattern should benefit the production of lift and propulsive forces. Both two-dimensional and three-dimensional CFD studies are carried out based on the reconstructed wing motion. The implications of the study for the role of aerodynamic forces in wing deformation will be discussed. This work is sponsored by the NSF.

  19. Real-time marker-free motion capture system using blob feature analysis

    NASA Astrophysics Data System (ADS)

    Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho

    2005-02-01

    This paper presents a real-time marker-free motion capture system that can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The proposed system captures human motion using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. The 3-dimensional positions of the end-effectors are then restored and tracked using a Kalman filter. Finally, the positions of the intermediate joints are reconstructed using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct, in real time, the motions of many people wearing various clothes.
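
The Kalman-filter tracking step mentioned above can be sketched, for one coordinate of an end-effector under a constant-velocity model, as a single predict-update cycle. The noise levels q and r are illustrative placeholders, not the paper's tuning:

```python
import numpy as np

def kalman_cv_step(x, P, z, dt=1.0, q=1e-3, r=1e-2):
    """One predict+update step of a constant-velocity Kalman filter.
    State x = [position, velocity]; z is a scalar position measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.eye(2)                          # process noise
    R = np.array([[r]])                        # measurement noise
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([z]) - H @ x                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a point moving at a constant 1 unit/frame from a cold start:
x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(1, 30):
    x, P = kalman_cv_step(x, P, z=float(k))
```

After a few dozen frames the state estimate settles close to the true position (29) and velocity (1); in the paper's system one such filter would run per tracked coordinate of each end-effector.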

  20. Three-dimensional cinematography with control object of unknown shape.

    PubMed

    Dapena, J; Harman, E A; Miller, J A

    1982-01-01

    A technique for reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which includes the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5 m x 5 m x 1.5 m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.

  1. Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera

    NASA Astrophysics Data System (ADS)

    Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.

    2004-01-01

    We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.

  2. Motion analysis report

    NASA Technical Reports Server (NTRS)

    Badler, N. I.

    1985-01-01

    Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-degree-of-freedom tracking systems, and image processing systems based on multiple views and photogrammetric calculations.

  3. Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Zuo, Chao; Tao, Tianyang; Hu, Yan; Zhang, Minliang; Chen, Qian; Gu, Guohua

    2018-04-01

    Phase-shifting profilometry (PSP) is a widely used approach to high-accuracy three-dimensional shape measurements. However, when it comes to moving objects, phase errors induced by the movement often result in severe artifacts even though a high-speed camera is in use. From our observations, there are three kinds of motion artifacts: motion ripples, motion-induced phase unwrapping errors, and motion outliers. We present a novel motion-compensated PSP to remove the artifacts for dynamic measurements of rigid objects. The phase error of motion ripples is analyzed for the N-step phase-shifting algorithm and is compensated using the statistical nature of the fringes. The phase unwrapping errors are corrected exploiting adjacent reliable pixels, and the outliers are removed by comparing the original phase map with a smoothed phase map. Compared with the three-step PSP, our method can improve the accuracy by more than 95% for objects in motion.
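
The N-step phase retrieval that this work builds on can be sketched with the standard arctangent formula; the motion compensation itself is a correction applied on top of this and is not reproduced here:

```python
import numpy as np

def n_step_phase(intensities):
    """Wrapped phase from N equally shifted fringe intensities
    I_n = A + B*cos(phi + 2*pi*n/N), using the standard N-step
    phase-shifting formula. Returns phi wrapped to (-pi, pi]."""
    I = np.asarray(intensities, float)
    N = len(I)
    delta = 2.0 * np.pi * np.arange(N) / N       # the N phase shifts
    num = np.sum(I * np.sin(delta))
    den = np.sum(I * np.cos(delta))
    return np.arctan2(-num, den)                 # wrapped phase

# Synthetic 4-step fringes with true phase 0.7 rad:
phi_true = 0.7
I = [1.0 + 0.5 * np.cos(phi_true + 2 * np.pi * n / 4) for n in range(4)]
phi = n_step_phase(I)
```

For a moving object the phase shifts no longer equal their nominal values between frames, which is exactly what produces the motion ripples the paper analyzes and compensates.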

  4. Non Contacting Evaluation of Strains and Cracking Using Optical and Infrared Imaging Techniques

    DTIC Science & Technology

    1988-08-22

    Compatible Zenith Z-386 microcomputer with plotter II. 3-D Motion Measuring System 1. Complete OPTOTRAK three dimensional digitizing system. System includes...acquisition unit - 16 single ended analog input channels 3. Data Analysis Package software (KINEPLOT) 4. Extra OPTOTRAK Camera (max 224 per system

  5. Science with a selfie stick: Plant biomass estimation using smartphone based ‘Structure From Motion’ photogrammetry

    USDA-ARS?s Scientific Manuscript database

    Significant advancements in photogrammetric Structure-from-Motion (SfM) software, coupled with improvements in the quality and resolution of smartphone cameras, has made it possible to create ultra-fine resolution three-dimensional models of physical objects using an ordinary smartphone. Here we pre...

  6. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy.

    PubMed

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-03-11

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets is taken at specified intervals using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be achieved to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments were performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, vibrating in a complex configuration with mixed bending and torsional motions at multiple frequencies simultaneously. The results of the present method showed good agreement with measurements by two laser displacement sensors. The proposed methodology requires only inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape to sub-pixel accuracy. It has abundant potential applications in various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility.
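
The sub-pixel targeting idea can be illustrated with a minimal 1-D stand-in: parabolic interpolation of a correlation peak. The paper's correlation plus least-square image matching is a more elaborate 2-D version of the same refinement:

```python
import numpy as np

def subpixel_peak(corr):
    """Sub-pixel location of the maximum of a 1-D correlation curve,
    obtained by fitting a parabola through the peak sample and its
    two neighbours and returning the vertex position."""
    corr = np.asarray(corr, float)
    i = int(np.argmax(corr))
    if i == 0 or i == len(corr) - 1:
        return float(i)                               # no neighbours to fit
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # parabola vertex
    return i + offset

# Samples of a parabola peaking at 3.4 are recovered to sub-pixel accuracy:
xs = np.arange(7)
corr = -(xs - 3.4) ** 2
peak = subpixel_peak(corr)
```

The integer argmax alone would report 3; the parabolic fit recovers the true peak at 3.4, which is the accuracy gain that makes target displacements measurable below one pixel.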

  7. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy

    PubMed Central

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets is taken at specified intervals using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be achieved to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments were performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, vibrating in a complex configuration with mixed bending and torsional motions at multiple frequencies simultaneously. The results of the present method showed good agreement with measurements by two laser displacement sensors. The proposed methodology requires only inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape to sub-pixel accuracy. It has abundant potential applications in various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366

  8. Spatio-temporal patterns of sediment particle movement on 2D and 3D bedforms

    NASA Astrophysics Data System (ADS)

    Tsubaki, Ryota; Baranya, Sándor; Muste, Marian; Toda, Yuji

    2018-06-01

    An experimental study was conducted to explore sediment particle motion in an open channel and its relationship to bedform characteristics. High-definition submersed video cameras were utilized to record images of particle motion over a dune's length scale. Image processing was conducted to account for illumination heterogeneity due to bedform geometric irregularity and light reflection at the water's surface. Moving particles were then identified using a customized algorithm, and the instantaneous velocity distribution of sediment particles was evaluated using particle image velocimetry. The experimental results indicate that the motion of sediment particles atop dunes differs depending on dune geometry (i.e., two-dimensional or three-dimensional). Sediment motion and its relationship to dune shape and dynamics are also discussed.

  9. [Segmental wall movement of the left ventricle in healthy persons and myocardial infarct patients studied by a catheter-less nuclear medical method (camera-cinematography of the heart)].

    PubMed

    Geffers, H; Sigel, H; Bitter, F; Kampmann, H; Stauch, M; Adam, W E

    1976-08-01

    Camera-kinematography is a nearly noninvasive method of investigating regional motion of the myocardium and allows evaluation of the function of the heart. About 20 min after injection of 15-20 mCi of 99mTc human serum albumin, when the tracer is distributed homogeneously within the blood pool, data acquisition starts. Myocardial wall motion is represented in an appropriate quasi-three-dimensional form. In this representation, scars are revealed as "silent" (akinetic) regions and aneurysms by asynchronous motion. Time-activity curves for arbitrarily chosen regions can be calculated and provide an equivalent of regional volume changes. Sixteen patients with an old infarction were investigated. In fourteen cases the location and extent of regions with abnormal motion could be evaluated. Only two cases of a small posterior wall infarction did not show deviations from the normal contraction pattern.

  10. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. That is, the position of the cameras relative to each other (i.e., separation distance, camera angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.
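
    As a rough illustration of the idea (not the authors' implementation), the sketch below builds a stereo pair's extrinsic rotation and translation from an IMU-reported relative yaw and a radar-reported baseline distance; the single-axis rotation and the baseline direction are simplifying assumptions:

```python
import numpy as np

def extrinsics_from_imu_radar(yaw_deg, baseline_m):
    """Build the second camera's rotation and translation relative to
    the first, assuming the IMU reports only a relative yaw and the
    radar reports the separation distance along the x baseline."""
    th = np.radians(yaw_deg)
    R = np.array([[np.cos(th), 0.0, np.sin(th)],
                  [0.0,        1.0, 0.0],
                  [-np.sin(th), 0.0, np.cos(th)]])  # rotation about the vertical axis
    t = np.array([baseline_m, 0.0, 0.0])            # cameras separated along x
    return R, t

R, t = extrinsics_from_imu_radar(10.0, 1.5)
```

A full calibration would also need the remaining IMU attitude angles and the intrinsic parameters of each camera; this fragment only shows how the two sensor readings map onto the extrinsic parameters.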

  11. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    PubMed Central

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in the acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging, in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and a laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model, in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds with high accuracy, allowing the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
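
    Stereo triangulation of a single laser point from two calibrated views can be sketched with the standard linear (DLT) solve; the toy projection matrices and point below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 projection
    matrices and its image coordinates in each view."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]

# two unit-focal cameras, the second translated 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true + np.array([-1.0, 0.0, 0.0]))[:2] / X_true[2]
X_hat = triangulate(P1, P2, x1, x2)
```

With every visible laser point triangulated this way frame by frame, the surface reconstruction reduces to interpolating the recovered 3D point cloud.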

  12. Dynamic Estimation of Rigid Motion from Perspective Views via Recursive Identification of Exterior Differential Systems with Parameters on a Topological Manifold

    DTIC Science & Technology

    1994-02-15

    O. Faugeras. Three dimensional vision, a geometric viewpoint. MIT Press, 1993. [19] O. D. Faugeras and S. Maybank. Motion from point matches...multiplicity of solutions. Int. J. of Computer Vision, 1990. [20] O. D. Faugeras, Q. T. Luong, and S. J. Maybank. Camera self-calibration: theory and...Kalman filter-based algorithms for estimating depth from image sequences. Int. J. of Computer Vision, 1989. [41] S. Maybank. Theory of

  13. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and spectators moving behind the flying shuttlecock are a kind of background noise that makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
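
    The landing-point prediction can be illustrated with a drag-free ballistic model; this is a strong simplification for a shuttlecock, whose aerodynamic drag is large, and the numbers are hypothetical:

```python
import numpy as np

def predict_landing(p, v, g=9.81):
    """Given position p = (x, y, z) and velocity v (z up, metres),
    solve z + vz*t - g*t**2/2 = 0 for the fall time t and return the
    (x, y) ground point. Air drag is ignored in this sketch."""
    z, vz = p[2], v[2]
    t = (vz + np.sqrt(vz**2 + 2.0 * g * z)) / g   # positive root
    return p[0] + v[0] * t, p[1] + v[1] * t

# shuttlecock at 4.905 m height moving horizontally at 3 m/s
x, y = predict_landing(np.array([0.0, 0.0, 4.905]),
                       np.array([3.0, 0.0, 0.0]))
```

A robot controller would re-run a prediction like this on every stereo measurement, refining the target point as the trajectory (including drag) unfolds.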

  14. Three-dimensional quantification of cardiac surface motion: a newly developed three-dimensional digital motion-capture and reconstruction system for beating heart surgery.

    PubMed

    Watanabe, Toshiki; Omata, Sadao; Odamura, Motoki; Okada, Masahumi; Nakamura, Yoshihiko; Yokoyama, Hitoshi

    2006-11-01

    This study aimed to evaluate our newly developed 3-dimensional digital motion-capture and reconstruction system in an animal experiment setting and to characterize quantitatively the three regional cardiac surface motions, in the left anterior descending artery, right coronary artery, and left circumflex artery regions, before and after stabilization using a stabilizer. Six pigs underwent a full sternotomy. Three tiny metallic markers (diameter 2 mm) coated with a reflective material were attached to three regional cardiac surfaces (left anterior descending, right coronary, and left circumflex coronary artery regions). These markers were captured by two high-speed digital video cameras (955 frames per second) as 2-dimensional coordinates and reconstructed to 3-dimensional data points (about 480 xyz-position data per second) by a newly developed computer program. The remaining motion after stabilization ranged from 0.4 to 1.01 mm at the left anterior descending, 0.91 to 1.52 mm at the right coronary artery, and 0.53 to 1.14 mm at the left circumflex regions. Significant differences before and after stabilization were observed in maximum moving velocity (left anterior descending 456.7 +/- 178.7 vs 306.5 +/- 207.4 mm/s; right coronary artery 574.9 +/- 161.7 vs 446.9 +/- 170.7 mm/s; left circumflex 578.7 +/- 226.7 vs 398.9 +/- 192.6 mm/s; P < .0001) and maximum acceleration (left anterior descending 238.8 +/- 137.4 vs 169.4 +/- 132.7 m/s2; right coronary artery 315.0 +/- 123.9 vs 242.9 +/- 120.6 m/s2; left circumflex 307.9 +/- 151.0 vs 217.2 +/- 132.3 m/s2; P < .0001). This system is useful for precise quantification of heart surface movement. It helps us better understand the complexity of the heart, its motion, and the need for developing a better stabilizer for beating heart surgery.

  15. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvement, together with decreasing costs, is opening them up for quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. In contrast to traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since land and underwater cameras are mandatory. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
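
    The rigid-bar accuracy check described above amounts to comparing reconstructed inter-marker distances against the bar's known length; a hedged sketch with synthetic marker positions and noise standing in for real reconstructions:

```python
import numpy as np

def reconstruction_error(pts_a, pts_b, true_length):
    """Mean absolute error between the reconstructed distance of two
    bar markers and the bar's known length, over many bar poses."""
    d = np.linalg.norm(pts_a - pts_b, axis=1)
    return np.mean(np.abs(d - true_length))

rng = np.random.default_rng(0)
a = rng.uniform(0.0, 2.0, (100, 3))              # marker A poses (metres)
b = a + np.array([0.25, 0.0, 0.0])               # marker B, 250 mm away
b_noisy = b + rng.normal(0.0, 0.001, b.shape)    # ~1 mm reconstruction noise
err = reconstruction_error(a, b_noisy, 0.25)     # should come out well under 2.5 mm
```

The paper's reported figures (2.5 mm and 1.5 mm at the two resolutions) are exactly this kind of statistic computed over bar poses spanning the working volume.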

  16. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvement, together with decreasing costs, is opening them up for quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. In contrast to traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since land and underwater cameras are mandatory. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  17. A Three-Dimensional Kinematic and Kinetic Study of the College-Level Female Softball Swing

    PubMed Central

    Milanovich, Monica; Nesbit, Steven M.

    2014-01-01

    This paper quantifies and discusses the three-dimensional kinematic and kinetic characteristics of the female softball swing as performed by fourteen female collegiate amateur subjects. The analyses were performed using a three-dimensional computer model. The model was driven kinematically from subject swings data that were recorded with a multi-camera motion analysis system. Each subject used two distinct bats with significantly different inertial properties. Model output included bat trajectories, subject/bat interaction forces and torques, work, and power. These data formed the basis for a detailed analysis and description of fundamental swing kinematic and kinetic quantities. The analyses revealed that the softball swing is a highly coordinated and individual three-dimensional motion and subject-to-subject variations were significant in all kinematic and kinetic quantities. In addition, the potential effects of bat properties on swing mechanics are discussed. The paths of the hands and the centre-of-curvature of the bat relative to the horizontal plane appear to be important trajectory characteristics of the swing. Descriptions of the swing mechanics and practical implications are offered based upon these findings. Key Points The female softball swing is a highly coordinated and individual three-dimensional motion and subject-to-subject variations were significant in all kinematic and kinetic quantities. The paths of the grip point, bat centre-of-curvature, CG, and COP are complex yet reveal consistent patterns among subjects indicating that these patterns are fundamental components of the swing. The most important mechanical quantity relative to generating bat speed is the total work applied to the bat from the batter. Computer modeling of the softball swing is a viable means for study of the fundamental mechanics of the swing motion, the interactions between the batter and the bat, and the energy transfers between the two. PMID:24570623

  18. A three-dimensional kinematic and kinetic study of the college-level female softball swing.

    PubMed

    Milanovich, Monica; Nesbit, Steven M

    2014-01-01

    This paper quantifies and discusses the three-dimensional kinematic and kinetic characteristics of the female softball swing as performed by fourteen female collegiate amateur subjects. The analyses were performed using a three-dimensional computer model. The model was driven kinematically from subject swings data that were recorded with a multi-camera motion analysis system. Each subject used two distinct bats with significantly different inertial properties. Model output included bat trajectories, subject/bat interaction forces and torques, work, and power. These data formed the basis for a detailed analysis and description of fundamental swing kinematic and kinetic quantities. The analyses revealed that the softball swing is a highly coordinated and individual three-dimensional motion and subject-to-subject variations were significant in all kinematic and kinetic quantities. In addition, the potential effects of bat properties on swing mechanics are discussed. The paths of the hands and the centre-of-curvature of the bat relative to the horizontal plane appear to be important trajectory characteristics of the swing. Descriptions of the swing mechanics and practical implications are offered based upon these findings. Key Points: The female softball swing is a highly coordinated and individual three-dimensional motion and subject-to-subject variations were significant in all kinematic and kinetic quantities. The paths of the grip point, bat centre-of-curvature, CG, and COP are complex yet reveal consistent patterns among subjects indicating that these patterns are fundamental components of the swing. The most important mechanical quantity relative to generating bat speed is the total work applied to the bat from the batter. Computer modeling of the softball swing is a viable means for study of the fundamental mechanics of the swing motion, the interactions between the batter and the bat, and the energy transfers between the two.

  19. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
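
    A generic Gauss-Newton loop of the kind referred to above can be sketched as follows; the exponential-fit toy problem is a stand-in for the paper's surface tracking objective, not its actual cost function:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=10):
    """Generic Gauss-Newton: linearize the residual at the current
    estimate and solve the normal equations for the update."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x

# toy problem: fit (a, b) in y = a * exp(b * t) to noiseless data
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p_hat = gauss_newton(res, jac, [1.0, -1.0])
```

In the FlyCap setting the unknowns would be the non-rigid surface deformation and camera poses rather than two scalars, but the linearize-and-solve structure is the same.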

  20. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras

    PubMed Central

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-01-01

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in the fast measurement of large-scale or high-speed moving objects. The innovative line-scan technology opens up new possibilities owing to its ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731

  1. The phantom robot - Predictive displays for teleoperation with time delay

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Kim, Won S.; Venema, Steven C.

    1990-01-01

    An enhanced teleoperation technique for time-delayed bilateral teleoperator control is discussed. The control technique selected for time delay is based on the use of a high-fidelity graphics phantom robot that is being controlled in real time (without time delay) against the static task image. Thus, the motion of the phantom robot image on the monitor predicts the motion of the real robot. The real robot's motion will follow the phantom robot's motion on the monitor with the communication time delay implied in the task. Real-time high-fidelity graphics simulation of a PUMA arm is generated and overlaid on the actual camera view of the arm. A simple camera calibration technique is used for calibrated graphics overlay. A preliminary experiment is performed with the predictive display by using a very simple tapping task. The results with this simple task indicate that predictive display enhances the human operator's telemanipulation task performance significantly during free motion when there is a long time delay. It appears, however, that either two-view or stereoscopic predictive displays are necessary for general three-dimensional tasks.

  2. Use of three-dimensional computer graphic animation to illustrate cleft lip and palate surgery.

    PubMed

    Cutting, C; Oliker, A; Haring, J; Dayan, J; Smith, D

    2002-01-01

    Three-dimensional (3D) computer animation is not commonly used to illustrate surgical techniques. This article describes the surgery-specific processes that were required to produce animations to teach cleft lip and palate surgery. Three-dimensional models were created using CT scans of two Chinese children with unrepaired clefts (one unilateral and one bilateral). We programmed several custom software tools, including an incision tool, a forceps tool, and a fat tool. Three-dimensional animation was found to be particularly useful for illustrating surgical concepts. Positioning the virtual "camera" made it possible to view the anatomy from angles that are impossible to obtain with a real camera. Transparency allows the underlying anatomy to be seen during surgical repair while maintaining a view of the overlaying tissue relationships. Finally, the representation of motion allows modeling of anatomical mechanics that cannot be done with static illustrations. The animations presented in this article can be viewed on-line at http://www.smiletrain.org/programs/virtual_surgery2.htm. Sophisticated surgical procedures are clarified with the use of 3D animation software and customized software tools. The next step in the development of this technology is the creation of interactive simulators that recreate the experience of surgery in a safe, digital environment. Copyright 2003 Wiley-Liss, Inc.

  3. Dynamic motion analysis of dart throwers motion visualized through computerized tomography and calculation of the axis of rotation.

    PubMed

    Edirisinghe, Y; Troupis, J M; Patel, M; Smith, J; Crossett, M

    2014-05-01

    We used a dynamic three-dimensional (3D) mapping method to model the wrist in dynamic unrestricted dart throwers motion in three men and four women. With the aid of precision landmark identification, a 3D coordinate system was applied to the distal radius and the movement of the carpus was described. Subsequently, with dynamic 3D reconstructions and freedom to position the camera viewpoint anywhere in space, we observed the motion pathways of all carpal bones in dart throwers motion and calculated its axis of rotation. This was calculated to lie in 27° of anteversion from the coronal plane and 44° of varus angulation relative to the transverse plane. This technique is a safe and a feasible carpal imaging method to gain key information for decision making in future hand surgical and rehabilitative practices.

  4. A Novel Method for Tracking Individuals of Fruit Fly Swarms Flying in a Laboratory Flight Arena.

    PubMed

    Cheng, Xi En; Qian, Zhi-Ming; Wang, Shuo Hong; Jiang, Nan; Guo, Aike; Chen, Yan Qiu

    2015-01-01

    The growing interest in studying the social behaviours of swarming fruit flies, Drosophila melanogaster, has heightened the need for tools that provide quantitative motion data. To achieve such a goal, multi-camera three-dimensional tracking technology is the key experimental gateway. We have developed a novel tracking system for tracking hundreds of fruit flies flying in a confined cubic flight arena. In addition to the proposed tracking algorithm, this work offers contributions in three further aspects: body detection, orientation estimation, and data validation. To demonstrate the opportunities the proposed system offers for generating high-throughput quantitative motion data, we conducted experiments on five experimental configurations. We also performed quantitative analysis of the kinematics, spatial structure, and motion patterns of fruit fly swarms. We found that there exists an asymptotic distance between fruit flies in swarms as the population density increases. Further, we discovered evidence of a repulsive response when the distance between fruit flies approached the asymptotic distance. Overall, the proposed tracking system presents a powerful method for studying the flight behaviours of fruit flies in a three-dimensional environment.
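
    The swarm-structure statistic discussed above, how the distance between individuals behaves as density varies, can be sketched as a mean nearest-neighbour distance; the uniform random points are synthetic stand-ins for tracked fly positions:

```python
import numpy as np

def mean_nearest_neighbour_distance(points):
    """Average distance from each individual to its nearest neighbour,
    a basic swarm-structure statistic."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
sparse = rng.uniform(0.0, 1.0, (50, 3))   # low-density "swarm" in a unit cube
dense = rng.uniform(0.0, 1.0, (500, 3))   # high-density "swarm"
# for ideal (non-interacting) points this keeps shrinking with density;
# the paper's finding is that real flies level off at an asymptotic distance
print(mean_nearest_neighbour_distance(sparse),
      mean_nearest_neighbour_distance(dense))
```

Plotting this statistic against population density is one way to expose the asymptotic distance the authors report.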

  5. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.

  6. Individual tree detection from Unmanned Aerial Vehicle (UAV) derived canopy height model in an open canopy mixed conifer forest

    Treesearch

    Midhun Mohan; Carlos Alberto Silva; Carine Klauberg; Prahlad Jat; Glenn Catts; Adrian Cardil; Andrew Thomas Hudak; Mahendra Dia

    2017-01-01

    Advances in Unmanned Aerial Vehicle (UAV) technology and data processing capabilities have made it feasible to obtain high-resolution imagery and three dimensional (3D) data which can be used for forest monitoring and assessing tree attributes. This study evaluates the applicability of low consumer grade cameras attached to UAVs and structure-from-motion (SfM)...

  7. Real time markerless motion tracking using linked kinematic chains

    DOEpatents

    Luck, Jason P [Arvada, CO; Small, Daniel E [Albuquerque, NM

    2007-08-14

    A markerless method is described for tracking the motion of subjects in a three dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real-time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed of the subject and tracked using three dimensional volumetric data collected by a multiple camera video imaging system. A physics based method is then used to compute forces to align the model with subsequent volumetric data sets in real-time. The method is able to handle occlusion of segments and accommodates joint limits, velocity constraints, and collision constraints and provides for error recovery. The method further provides for elimination of singularities in Jacobian based calculations, which has been problematic in alternative methods.

  8. Analysis of motion in speed skating

    NASA Astrophysics Data System (ADS)

    Koga, Yuzo; Nishimura, Tetsu; Watanabe, Naoki; Okamoto, Kousuke; Wada, Yuhei

    1997-03-01

    Motion in sports has been studied by many researchers from medical, psychological, and mechanical perspectives. Here, we analyze the speed skating motion dynamically with the aim of achieving the best record. Since official speed skating competitions are held on an oval rink, the skating motion must be studied in three phases: the starting phase, and the straight-course and curved-course skating phases. Visual data of the skating motion are indispensable for kinematic analysis, so we recorded several subjects' skating motions with 8 mm video cameras to obtain three-dimensional data. As a first step, this paper discusses the movement of the skater's center of gravity (abbreviated C. G.), because the skating motion is very complicated. The movement of the C. G. gives information on the reaction force on the skate blade from the ice surface. We discuss the differences among the skating motions of the studied subjects. Our final goal is to suggest the best skating form for achieving the finest record.
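
    The skater's center of gravity can be estimated from tracked body-segment positions as a mass-weighted mean of segment centers of mass; the segment data and mass fractions below are hypothetical, not from the study:

```python
import numpy as np

def body_centre_of_gravity(segment_coms, segment_mass_fractions):
    """Whole-body centre of gravity as the mass-weighted mean of
    segment centres of mass, each tracked from camera marker data."""
    w = np.asarray(segment_mass_fractions, dtype=float)
    return (w[:, None] * np.asarray(segment_coms)).sum(axis=0) / w.sum()

# two hypothetical segments with equal mass fractions, 1 m apart vertically
coms = np.array([[0.0, 0.0, 1.0],
                 [0.0, 0.0, 0.0]])
cg = body_centre_of_gravity(coms, [0.5, 0.5])   # midpoint of the two segments
```

Differentiating the C. G. trajectory twice with respect to time then gives the acceleration, from which the net reaction force on the blade can be inferred.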

  9. Three-dimensional motion-picture imaging of dynamic object by parallel-phase-shifting digital holographic microscopy using an inverted magnification optical system

    NASA Astrophysics Data System (ADS)

    Fukuda, Takahito; Shinomura, Masato; Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Matoba, Osamu

    2017-04-01

    We constructed a parallel-phase-shifting digital holographic microscopy (PPSDHM) system using an inverted magnification optical system and succeeded in three-dimensional (3D) motion-picture imaging of the 3D displacement of a microscopic object. In the PPSDHM system, the inverted and afocal magnification optical system consisted of a microscope objective (16.56 mm focal length, 0.25 numerical aperture) and a convex lens (300 mm focal length, 82 mm aperture diameter). A polarization-imaging camera was used to record multiple phase-shifted holograms with a single-shot exposure. We recorded an alum crystal sinking in an aqueous alum solution with the constructed PPSDHM system at 60 frames/s for about 20 s and reconstructed a high-quality 3D motion picture of the crystal. We then calculated the displacement of the crystal from its displacement in the focus plane and the magnification of the optical system, and thereby obtained the 3D trajectory of the crystal.

  10. Three dimensional measurement with an electrically tunable focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng

    2017-03-01

    We present a liquid crystal microlens array (LCMLA) with an arrayed microhole-pattern electrode, based on nematic liquid crystal materials and fabricated by traditional UV photolithography and wet etching. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experimental outcome shows that the focal length of the LCMLA can be tuned easily by changing only the root-mean-square value of the applied voltage signal. The developed LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters, such as three-dimensional (3D) depth, positioning, and motion expression, are given. The depth resolution is discussed in detail. Experiments are carried out to obtain static and dynamic 3D information about the chosen objects.

  11. Three dimensional measurement with an electrically tunable focused plenoptic camera.

    PubMed

    Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng

    2017-03-01

    We present a liquid crystal microlens array (LCMLA) with an arrayed microhole-pattern electrode, based on nematic liquid crystal materials and fabricated by traditional UV photolithography and wet etching. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experimental outcome shows that the focal length of the LCMLA can be tuned easily by changing only the root-mean-square value of the applied voltage signal. The developed LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters, such as three-dimensional (3D) depth, positioning, and motion expression, are given. The depth resolution is discussed in detail. Experiments are carried out to obtain static and dynamic 3D information about the chosen objects.

  12. The Dynamics of Flow and Three-dimensional Motion Around a Morphologically Complex Aquatic Plant

    NASA Astrophysics Data System (ADS)

    Boothroyd, R.; Hardy, R. J.; Warburton, J.; Marjoribanks, T.

    2016-12-01

    Aquatic vegetation has a significant impact on the hydraulic functioning of river systems. The morphology of an individual plant can influence the mean and turbulent properties of the flow, and the plant posture reconfigures to minimise drag. We report findings from flume and numerical experiments investigating the dynamics of motion and three-dimensional flow around an isolated Hebe odora plant over a range of flow conditions. In the flume experiment, a high-definition video camera recorded plant motion dynamics, and three-dimensional velocity profiles were measured using an acoustic Doppler velocimeter. By producing a binary image of the plant in each frame, the plant dynamics can be quantified. Zones of greatest plant motion are on the upper and leeward sides of the plant. With increasing flow, the plant is compressed and deflected downwards by up to 18% of the unstressed height. Plant tip motions are tracked and shown to lengthen with increasing flow, transitioning from horizontally dominated to vertically dominated motion. The plant acts as a porous blockage to flow, producing spatially heterogeneous downstream velocity fields, with the measured wake length decreasing by 20% with increasing flow. These measurements are then used as boundary conditions and to validate a computational fluid dynamics (CFD) model. By explicitly accounting for the time-averaged plant posture, good agreement is found between flume measurements and model predictions. The flow structures demonstrate characteristics of a junction vortex system, with plant shear layer turbulence dominated by Kelvin-Helmholtz and Görtler-type vortices generated through shear instability. With increasing flow, drag coefficients decrease by up to 8%, from 1.45 to 1.34. This is equivalent to a change in the Manning's n term from 0.086 to 0.078.
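
    The binary-image analysis described above can be sketched as follows; `plant_height` and `motion_fraction` are illustrative names, not the authors' code, and the masks are assumed to be boolean arrays with plant pixels marked True.

```python
import numpy as np

def plant_height(mask: np.ndarray) -> int:
    """Vertical extent (in pixel rows) of the plant in one binary silhouette."""
    rows = np.flatnonzero(mask.any(axis=1))
    return int(rows[-1] - rows[0] + 1) if rows.size else 0

def motion_fraction(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of plant pixels that changed between two consecutive frames."""
    changed = np.logical_xor(mask_a, mask_b).sum()
    extent = np.logical_or(mask_a, mask_b).sum()
    return float(changed) / extent if extent else 0.0
```

    A deflection like the reported 18% compression would then be `1 - plant_height(flow_mask) / plant_height(unstressed_mask)`.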

  13. A Novel Method for Tracking Individuals of Fruit Fly Swarms Flying in a Laboratory Flight Arena

    PubMed Central

    Cheng, Xi En; Qian, Zhi-Ming; Wang, Shuo Hong; Jiang, Nan; Guo, Aike; Chen, Yan Qiu

    2015-01-01

    The growing interest in studying social behaviours of swarming fruit flies, Drosophila melanogaster, has heightened the need for tools that provide quantitative motion data. To achieve such a goal, multi-camera three-dimensional tracking technology is the key experimental gateway. We have developed a novel tracking system for tracking hundreds of fruit flies flying in a confined cubic flight arena. In addition to the proposed tracking algorithm, this work offers contributions in three further aspects: body detection, orientation estimation, and data validation. To demonstrate the opportunities that the proposed system offers for generating high-throughput quantitative motion data, we conducted experiments on five experimental configurations. We also performed quantitative analysis on the kinematics, spatial structure, and motion patterns of fruit fly swarms. We found that there exists an asymptotic distance between fruit flies in swarms as the population density increases. Further, we discovered evidence of a repulsive response when the distance between fruit flies approached the asymptotic distance. Overall, the proposed tracking system presents a powerful method for studying flight behaviours of fruit flies in a three-dimensional environment. PMID:26083385
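
    A quantity like the asymptotic inter-fly distance can be probed with a simple nearest-neighbour statistic over the tracked 3D positions; this is a generic sketch of such a measurement, not the authors' pipeline.

```python
import numpy as np

def mean_nearest_neighbour_distance(points: np.ndarray) -> float:
    """Mean distance from each individual to its nearest neighbour.

    points: (N, 3) array of 3D positions for one frame, N >= 2.
    """
    # Pairwise Euclidean distances via broadcasting (N x N matrix).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # exclude self-distances
    return float(d.min(axis=1).mean())
```

    Plotting this statistic against population density would reveal the plateau (asymptotic distance) reported above.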

  14. Vision sensing techniques in aeronautics and astronautics

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1988-01-01

    The close relationship between sensing and other tasks in orbital space, and the integral role of vision sensing in practical aerospace applications, are illustrated. Typical space mission-vision tasks encompass the docking of space vehicles, the detection of unexpected objects, the diagnosis of spacecraft damage, and the inspection of critical spacecraft components. Attention is presently given to image functions, the 'windowing' of a view, the number of cameras required for inspection tasks, the choice of incoherent or coherent (laser) illumination, three-dimensional-to-two-dimensional model-matching, edge- and region-segmentation techniques, and motion analysis for tracking.

  15. Camera pose estimation for augmented reality in a small indoor dynamic scene

    NASA Astrophysics Data System (ADS)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows, on the one hand, rendering virtual objects in a meaningful way and, on the other, improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigidity of the 3-D planes' motion in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.
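
    The planar-scene assumption rests on fitting planes to clusters of 3-D map points; a minimal least-squares plane fit via SVD (a standard construction, not the authors' implementation) looks like:

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through N >= 3 3-D points.

    Returns (unit normal, centroid); the plane is the set of X with
    (X - centroid) . normal = 0.
    """
    c = points.mean(axis=0)
    # The normal is the direction of least variance of the centred points.
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c

def point_plane_distances(points: np.ndarray, n: np.ndarray, c: np.ndarray):
    """Absolute distances of points from the fitted plane."""
    return np.abs((points - c) @ n)
```

    In a planar SLAM back end, residuals like these would enter the optimization as extra constraints on map points.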

  16. In-vivo confirmation of the use of the dart thrower's motion during activities of daily living.

    PubMed

    Brigstocke, G H O; Hearnden, A; Holt, C; Whatling, G

    2014-05-01

    The dart thrower's motion is a wrist rotation along an oblique plane from radial extension to ulnar flexion. We report an in-vivo study to confirm the use of the dart thrower's motion during activities of daily living. Global wrist motion in ten volunteers was recorded using a three-dimensional optoelectronic motion capture system, in which digital infra-red cameras track the movement of retro-reflective marker clusters. Global wrist motion has been approximated to the dart thrower's motion when hammering a nail, throwing a ball, drinking from a glass, pouring from a jug and twisting the lid of a jar, but not when combing hair or manipulating buttons. The dart thrower's motion is the plane of global wrist motion used during most activities of daily living. Arthrodesis of the radiocarpal joint instead of the midcarpal joint will allow better wrist function during most activities of daily living by preserving the dart thrower's motion.
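
    The dart thrower's motion is characterised by the orientation of the dominant oblique plane of wrist rotation. As a hedged sketch (illustrative, not the authors' protocol), the obliquity of a recorded angle trajectory can be estimated with a principal-axis fit over flexion/extension and radial/ulnar-deviation angles:

```python
import numpy as np

def motion_plane_obliquity(flex_ext_deg, rad_uln_deg) -> float:
    """Orientation (deg, in [0, 180)) of the dominant wrist-motion plane.

    Inputs are time series of flexion/extension and radial/ulnar-deviation
    angles; the orientation is the first principal axis of the trajectory
    in that 2-D angle space.
    """
    X = np.column_stack([flex_ext_deg, rad_uln_deg])
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X)
    vx, vy = vt[0]                      # direction of greatest variance
    return float(np.degrees(np.arctan2(vy, vx)) % 180.0)
```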

  17. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    Binocular stereo vision is an important and challenging form of computer vision with broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis, and industrial inspection. In this paper, research is carried out into binocular stereo camera calibration, image feature extraction, and stereo matching. In the calibration module, the internal parameters of each camera are obtained using Zhang Zhengyou's checkerboard calibration method. For image feature extraction and stereo matching, the SURF local feature operator and the SGBM global matching algorithm are adopted respectively, and their performance is compared. After feature point matching is completed, the correspondence between matched image points and 3D object points can be established using the calibrated camera parameters, recovering the 3D information.
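
    Once the cameras are calibrated and a disparity is obtained (e.g. from SGBM), metric depth follows from the standard rectified-binocular relation Z = f·B/d; this sketch assumes focal length in pixels and baseline in metres.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth Z = f * B / d for a matched point in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```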

  18. Generation of animation sequences of three dimensional models

    NASA Technical Reports Server (NTRS)

    Poi, Sharon (Inventor); Bell, Brad N. (Inventor)

    1990-01-01

    The invention is directed toward a method and apparatus for generating an animated sequence through the movement of three-dimensional graphical models. A plurality of pre-defined graphical models are stored and manipulated in response to interactive commands or by means of a pre-defined command file. The models may be combined as part of a hierarchical structure to represent physical systems without need to create a separate model which represents the combined system. System motion is simulated through the introduction of translation, rotation and scaling parameters upon a model within the system. The motion is then transmitted down through the system hierarchy of models in accordance with hierarchical definitions and joint movement limitations. The present invention also calls for a method of editing hierarchical structure in response to interactive commands or a command file such that a model may be included, deleted, copied or moved within multiple system model hierarchies. The present invention also calls for the definition of multiple viewpoints or cameras which may exist as part of a system hierarchy or as an independent camera. The simulated movement of the models and systems is graphically displayed on a monitor and a frame is recorded by means of a video controller. Multiple movement and hierarchy manipulations are then recorded as a sequence of frames which may be played back as an animation sequence on a video cassette recorder.
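
    The hierarchical transmission of motion described above can be sketched as composing each model's local 4x4 transform with those of all its ancestors; the names and the simple dictionary hierarchy here are illustrative, not the patented implementation.

```python
import numpy as np

def translation(tx: float, ty: float, tz: float) -> np.ndarray:
    """Homogeneous 4x4 translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rot_z(deg: float) -> np.ndarray:
    """Homogeneous 4x4 rotation about the z axis."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[0, 0], m[0, 1], m[1, 0], m[1, 1] = c, -s, s, c
    return m

def world_transform(node, transforms, parent) -> np.ndarray:
    """Compose a node's local transform with its ancestors', root first."""
    m = transforms[node]
    while parent.get(node) is not None:
        node = parent[node]
        m = transforms[node] @ m
    return m
```

    Applying a rotation at a parent then moves every child, which is exactly how motion propagates down the model hierarchy.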

  19. Effects of Different Camera Motions on the Error in Estimates of Epipolar Geometry between Two Dimensional Images in Order to Provide a Framework for Solutions to Vision Based Simultaneous Localization and Mapping (SLAM)

    DTIC Science & Technology

    2007-09-01

    the projective camera matrix (P), which is a 3x4 matrix that represents both the intrinsic and extrinsic parameters of a camera. It is used to... K contains the intrinsic parameters of the camera and [R | t] represents the extrinsic parameters of the camera. By definition, the extrinsic... extrinsic parameters are known then the camera is said to be calibrated. If only the intrinsic parameters are known, then the projective camera can
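
    The snippet above describes the standard decomposition P = K [R | t]; a minimal sketch of building the projective camera matrix and projecting a world point with it:

```python
import numpy as np

def camera_matrix(K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Projective camera P = K [R | t] (3x4), K intrinsic, (R, t) extrinsic."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P: np.ndarray, X) -> np.ndarray:
    """Project a 3-D world point to inhomogeneous pixel coordinates."""
    x = P @ np.append(X, 1.0)           # homogeneous image point
    return x[:2] / x[2]
```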

  20. Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

    PubMed

    Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2009-06-15

    Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To answer this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real-time and up to 100fps if video recordings are captured to be later analyzed off-line. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real-time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.
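
    The paper combines per-camera 2-D coordinates into 3-D with a quadratic fitting algorithm; as a hedged sketch of the same reconstruction step, a common alternative is linear (DLT) triangulation from two calibrated cameras:

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """Linear (DLT) triangulation of one point seen by two 3x4 cameras.

    x1, x2 are the (u, v) image coordinates in each view.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                          # homogeneous solution (null space of A)
    return X[:3] / X[3]
```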

  1. 3D surface pressure measurement with single light-field camera and pressure-sensitive paint

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth

    2018-05-01

    A novel technique that simultaneously measures three-dimensional model geometry, as well as surface pressure distribution, with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off, and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for relatively large-curvature models, and the pressure results compare well with the Schlieren tests, analytical calculations, and numerical simulations.
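
    Intensity-based PSP typically converts the wind-off/wind-on intensity ratio to pressure through a Stern-Volmer-type calibration, I_ref/I = A + B·(p/p_ref). The sketch below assumes that form; the coefficients `a` and `b` are hypothetical placeholders, not values from this study.

```python
def pressure_from_intensity(i_ref: float, i_on: float,
                            p_ref: float = 101325.0,
                            a: float = 0.15, b: float = 0.85) -> float:
    """Invert a Stern-Volmer calibration: p = p_ref * (I_ref/I - A) / B.

    a, b are illustrative calibration coefficients obtained, in practice,
    from a pressure-controlled calibration chamber.
    """
    return p_ref * (i_ref / i_on - a) / b
```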

  2. Concurrent validation of Xsens MVN measurement of lower limb joint angular kinematics.

    PubMed

    Zhang, Jun-Tian; Novak, Alison C; Brouwer, Brenda; Li, Qingguo

    2013-08-01

    This study aims to validate a commercially available inertial-sensor-based motion capture system, Xsens MVN BIOMECH, using its native protocols, against a camera-based motion capture system for the measurement of joint angular kinematics. Performance was evaluated by comparing waveform similarity using range of motion, mean error, and a new formulation of the coefficient of multiple correlation (CMC). Three-dimensional joint angles of the lower limbs were determined for ten healthy subjects while they performed three daily activities: level walking, stair ascent, and stair descent. Under all three walking conditions, the Xsens system most accurately determined the flexion/extension joint angle (CMC > 0.96) for all joints. The joint angle measurements associated with the other two joint axes had lower correlation, including complex CMC values. The poor correlation in the other two joint axes is most likely due to differences in the anatomical frame definitions of limb segments used by the Xsens and Optotrak systems. Implementation of a protocol to align these two systems is necessary when comparing joint angle waveforms measured by the Xsens and other motion capture systems.
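
    A basic form of the CMC for comparing G recordings of an F-frame waveform can be sketched as below; when the between-system disagreement at each frame exceeds the overall variance, the radicand goes negative, which is one way the "complex CMC values" mentioned above arise. This is a generic formulation, not necessarily the paper's new one.

```python
import numpy as np

def cmc(waveforms) -> complex:
    """Coefficient of multiple correlation for a (G, F) array of waveforms."""
    Y = np.asarray(waveforms, dtype=float)
    G, F = Y.shape
    frame_mean = Y.mean(axis=0)         # mean across recordings at each frame
    grand_mean = Y.mean()
    num = ((Y - frame_mean) ** 2).sum() / (F * (G - 1))
    den = ((Y - grand_mean) ** 2).sum() / (G * F - 1)
    # scimath.sqrt returns a complex value when the radicand is negative.
    return np.lib.scimath.sqrt(1.0 - num / den)
```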

  3. Kinetics of throwing arm joints and the trunk motion during an overarm distance throw by skilled Japanese elementary school boys.

    PubMed

    Kobayashi, Yasuto; Ae, Michiyoshi; Miyazaki, Akiyo; Fujii, Norihisa; Iiboshi, Akira; Nakatani, Hideki

    2016-09-01

    The purpose of this study was to investigate joint kinetics of the throwing arms and role of trunk motion in skilled elementary school boys during an overarm distance throw. Throwing motions of 42 boys from second, fourth, and sixth grade were videotaped with three high-speed cameras operating at 300 fps. Seven skilled boys from each grade were selected on the basis of throwing distance for three-dimensional kinetic analysis. Joint forces, torques, and torque powers of the throwing arm joints were calculated from reconstructed three-dimensional coordinate data smoothed at cut-off frequencies of 10.5-15 Hz and by the inverse dynamics method. Throwing distance and ball velocity significantly increased with school grade. The angular velocity of elbow extension before ball release increased with school grade, although no significant increase between the grades was observed in peak extension torque of elbow joint. The joint torque power of shoulder internal/external rotation tended to increase with school grade. When teaching the overarm throw, elementary school teachers should observe large backward twisting of trunk during the striding phase and should keep in mind that young children, such as second graders (age 8 years), will be unable to effectively utilise shoulder external/internal rotation during the throwing phase.

  4. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869

  5. A reduced-dimensionality approach to uncovering dyadic modes of body motion in conversations.

    PubMed

    Gaziv, Guy; Noy, Lior; Liron, Yuvalal; Alon, Uri

    2017-01-01

    Face-to-face conversations are central to human communication and a fascinating example of joint action. Beyond verbal content, one of the primary ways in which information is conveyed in conversations is body language. Body motion in natural conversations has been difficult to study precisely due to the large number of coordinates at play. There is need for fresh approaches to analyze and understand the data, in order to ask whether dyads show basic building blocks of coupled motion. Here we present a method for analyzing body motion during joint action using depth-sensing cameras, and use it to analyze a sample of scientific conversations. Our method consists of three steps: defining modes of body motion of individual participants, defining dyadic modes made of combinations of these individual modes, and lastly defining motion motifs as dyadic modes that occur significantly more often than expected given the single-person motion statistics. As a proof-of-concept, we analyze the motion of 12 dyads of scientists measured using two Microsoft Kinect cameras. In our sample, we find that out of many possible modes, only two were motion motifs: synchronized parallel torso motion in which the participants swayed from side to side in sync, and still segments where neither person moved. We find evidence of dyad individuality in the use of motion modes. For a randomly selected subset of 5 dyads, this individuality was maintained for at least 6 months. The present approach to simplify complex motion data and to define motion motifs may be used to understand other joint tasks and interactions. The analysis tools developed here and the motion dataset are publicly available.
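
    The motif test described above compares observed dyadic-mode frequencies with those expected if the two participants moved independently; a minimal sketch of that enrichment ratio (illustrative, not the authors' full statistics, which also assess significance):

```python
import numpy as np

def dyadic_mode_enrichment(modes_a, modes_b) -> dict:
    """Observed / expected frequency for each (mode_a, mode_b) pair.

    modes_a, modes_b: per-frame mode labels for the two participants.
    Expected frequencies assume the participants move independently
    (product of single-person marginals). Ratios well above 1 flag
    candidate motion motifs.
    """
    a, b = np.asarray(modes_a), np.asarray(modes_b)
    enrich = {}
    for la in np.unique(a):
        for lb in np.unique(b):
            observed = np.mean((a == la) & (b == lb))
            expected = np.mean(a == la) * np.mean(b == lb)
            enrich[(la, lb)] = observed / expected if expected else np.nan
    return enrich
```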

  6. A reduced-dimensionality approach to uncovering dyadic modes of body motion in conversations

    PubMed Central

    Noy, Lior; Liron, Yuvalal; Alon, Uri

    2017-01-01

    Face-to-face conversations are central to human communication and a fascinating example of joint action. Beyond verbal content, one of the primary ways in which information is conveyed in conversations is body language. Body motion in natural conversations has been difficult to study precisely due to the large number of coordinates at play. There is need for fresh approaches to analyze and understand the data, in order to ask whether dyads show basic building blocks of coupled motion. Here we present a method for analyzing body motion during joint action using depth-sensing cameras, and use it to analyze a sample of scientific conversations. Our method consists of three steps: defining modes of body motion of individual participants, defining dyadic modes made of combinations of these individual modes, and lastly defining motion motifs as dyadic modes that occur significantly more often than expected given the single-person motion statistics. As a proof-of-concept, we analyze the motion of 12 dyads of scientists measured using two Microsoft Kinect cameras. In our sample, we find that out of many possible modes, only two were motion motifs: synchronized parallel torso motion in which the participants swayed from side to side in sync, and still segments where neither person moved. We find evidence of dyad individuality in the use of motion modes. For a randomly selected subset of 5 dyads, this individuality was maintained for at least 6 months. The present approach to simplify complex motion data and to define motion motifs may be used to understand other joint tasks and interactions. The analysis tools developed here and the motion dataset are publicly available. PMID:28141861

  7. A system for extracting 3-dimensional measurements from a stereo pair of TV cameras

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.; Cunningham, R.

    1976-01-01

    Obtaining accurate three-dimensional (3-D) measurements from a stereo pair of TV cameras is a task requiring camera modeling, calibration, and the matching of the two images of a real 3-D point in the two TV pictures. A system which models and calibrates the cameras and pairs the two images of a real-world point in the two pictures, either manually or automatically, was implemented. This system is operational and provides a three-dimensional measurement resolution of + or - mm at distances of about 2 m.

  8. Single-Camera Stereoscopy Setup to Visualize 3D Dusty Plasma Flows

    NASA Astrophysics Data System (ADS)

    Romero-Talamas, C. A.; Lemma, T.; Bates, E. M.; Birmingham, W. J.; Rivera, W. F.

    2016-10-01

    A setup to visualize and track individual particles in multi-layered dusty plasma flows is presented. The setup consists of a single camera with variable frame rate, and a pair of adjustable mirrors that project the same field of view from two different angles to the camera, allowing for three-dimensional tracking of particles. Flows are generated by inclining the plane in which the dust is levitated using a specially designed setup that allows for external motion control without compromising vacuum. Dust illumination is achieved with an optics arrangement that includes a Powell lens that creates a laser fan with adjustable thickness and with approximately constant intensity everywhere. Both the illumination and the stereoscopy setup allow for the camera to be placed at right angles with respect to the levitation plane, in preparation for magnetized dusty plasma experiments in which there will be no direct optical access to the levitation plane. Image data and analysis of unmagnetized dusty plasma flows acquired with this setup are presented.

  9. Seafloor Topographic Analysis in Staged Ocean Resource Exploration

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Okawa, M.; Osawa, K.; Kadoshima, K.; Asakawa, E.; Sumi, T.

    2017-12-01

    J-MARES (Research and Development Partnership for Next Generation Technology of Marine Resources Survey, JAPAN) has been designing a low-expense, high-efficiency exploration system for seafloor hydrothermal massive sulfide deposits under the "Cross-ministerial Strategic Innovation Promotion Program (SIP)" granted by the Cabinet Office, Government of Japan, since 2014. We designed a method to focus the mineral deposit prospective area in multiple stages (regional survey, semi-detail survey, and detail survey) by extracting the topographic features of some well-known seafloor massive sulfide deposits from seafloor topographic analysis of data acquired by bathymetric survey. We applied this procedure to an area of interest of more than 100 km x 100 km over the Okinawa Trough, including some known seafloor massive sulfide deposits. In addition, we created a three-dimensional model of seafloor topography by the SfM (Structure from Motion) technique, using multiple images of chimneys distributed around a well-known seafloor massive sulfide deposit, taken with a Hi-Vision camera mounted on an ROV during detail surveys such as geophysical exploration. The topographic features of the chimneys were extracted by measuring the created three-dimensional model. As a result, it was possible to estimate the shape of seafloor sulfides, such as the chimneys to be mined, from the three-dimensional model created from image data taken with the camera mounted on the ROV. In this presentation, we discuss focusing the mineral deposit prospective area in multiple stages by seafloor topographic analysis using bathymetric data, and the three-dimensional model of seafloor topography created from seafloor image data taken with the ROV.

  10. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. The program could serve as a core for building application programs for systems that require coordination of vision and robotic motion.
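
    The pan/tilt computation that centres the target can be sketched directly from the target's 3-D position expressed in the mast frame; the axis conventions below (x forward, y left, z up) are assumptions for illustration, not the rover's actual kinematic model.

```python
import math

def mast_pan_tilt(target_xyz) -> tuple:
    """Pan and tilt angles (deg) that point the camera at a target.

    target_xyz: target position in the mast frame, x forward, y left, z up.
    """
    x, y, z = target_xyz
    pan = math.degrees(math.atan2(y, x))                 # rotate left/right
    tilt = math.degrees(math.atan2(z, math.hypot(x, y))) # rotate up/down
    return pan, tilt
```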

  11. Three-dimensional trunk kinematics in golf: between-club differences and relationships to clubhead speed.

    PubMed

    Joyce, Christopher; Burnett, Angus; Cochrane, Jodie; Ball, Kevin

    2013-06-01

    The aims of this study were (i) to determine whether significant three-dimensional (3D) trunk kinematic differences existed between a driver and a five-iron during a golf swing; and (ii) to determine the anthropometric, physiological, and trunk kinematic variables associated with clubhead speed. Trunk range of motion and golf swing kinematic data were collected from 15 low-handicap male golfers (handicap = 2.5 ± 1.9). Data were collected using a 10-camera motion capture system operating at 250 Hz. Data on clubhead speed and ball velocity were collected using a real-time launch monitor. Paired t-tests revealed nine significant (p ≤ 0.0019) between-club differences for golf swing kinematics, namely trunk and lower trunk flexion/extension and lower trunk axial rotation. Multiple regression analyses explained 33.7-66.7% of the variance in clubhead speed for the driver and five-iron, respectively, with both trunk and lower trunk variables showing associations with clubhead speed. Future studies should consider the role of the upper limbs and modifiable features of the golf club in developing clubhead speed for the driver in particular.

  12. Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)

    1999-01-01

    A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
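
    Once the particles are stereo-matched into 3-D tracks as functions of time, velocities follow from finite differences; a minimal central-difference sketch (illustrative, not the patented processing chain):

```python
import numpy as np

def velocities(tracks, dt: float) -> np.ndarray:
    """Central-difference velocities from per-frame 3-D particle positions.

    tracks: (T, N, 3) array, T frames of N matched particles; dt in seconds.
    Returns a (T-2, N, 3) array of velocity vectors.
    """
    p = np.asarray(tracks, dtype=float)
    return (p[2:] - p[:-2]) / (2.0 * dt)
```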

  13. The MicronEye Motion Monitor: A New Tool for Class and Laboratory Demonstrations.

    ERIC Educational Resources Information Center

    Nissan, M.; And Others

    1988-01-01

    Describes a special camera that can be directly linked to a computer that has been adapted for studying movement. Discusses capture, processing, and analysis of two-dimensional data with either IBM PC or Apple II computers. Gives examples of a variety of mechanical tests including pendulum motion, air track, and air table. (CW)

  14. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.

  15. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras is placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of the segmentation boundaries and depth, are repeated until convergence.

  16. Three-dimensional reconstruction of the fast-start swimming kinematics of densely schooling fish

    PubMed Central

    Paley, Derek A.

    2012-01-01

    Information transmission via non-verbal cues such as a fright response can be quantified in a fish school by reconstructing individual fish motion in three dimensions. In this paper, we describe an automated tracking framework to reconstruct the full-body trajectories of densely schooling fish using two-dimensional silhouettes in multiple cameras. We model the shape of each fish as a series of elliptical cross sections along a flexible midline. We estimate the size of each ellipse using an iterated extended Kalman filter. The shape model is used in a model-based tracking framework in which simulated annealing is applied at each step to estimate the midline. Results are presented for eight fish with occlusions. The tracking system is currently being used to investigate fast-start behaviour of schooling fish in response to looming stimuli. PMID:21642367

  17. Development of robots and application to industrial processes

    NASA Technical Reports Server (NTRS)

    Palm, W. J.; Liscano, R.

    1984-01-01

    An algorithm is presented for using a robot system with a single camera to position in three-dimensional space a slender object for insertion into a hole; for example, an electrical pin-type termination into a connector hole. The algorithm relies on a control-configured end effector to achieve the required horizontal translations and rotational motion, and it does not require camera calibration. A force sensor in each fingertip is integrated with the vision system to allow the robot to teach itself new reference points when different connectors and pins are used. Variability in the grasped orientation and position of the pin can be accommodated with the sensor system. Performance tests show that the system is feasible. More work is needed to determine more precisely the effects of lighting levels and lighting direction.

  18. Teasing Apart Complex Motions using VideoPoint

    NASA Astrophysics Data System (ADS)

    Fischer, Mark

    2002-10-01

    Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the two-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane, will be discussed. Methods for extracting the desired object motion will be given, as well as suggestions for shooting more easily analyzable video clips.

  19. Depth measurements through controlled aberrations of projected patterns.

    PubMed

    Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim

    2012-03-12

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments, without major modifications to current cameras, is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require that such an imaging system have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for the horizontal and vertical features of the pattern, thereby encoding depth. This differential focus is then exploited in post-processing tailored to the projected pattern and the optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present our information regarding the construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.

  20. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was assessed in terms of the position accuracy of the tracking trajectory in the x, y and z directions in the camera space, and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking. However, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
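As a rough illustration of the "simple version of the mean-shift algorithm" evaluated here, the sketch below repeatedly moves a rectangular search window to the centroid of a weight image (e.g. a back-projected likelihood of "hand" pixels) until it stops moving. The window parameterization and the 0.5-pixel convergence threshold are assumptions for illustration, not details from the paper:

```python
import numpy as np

def mean_shift(weights, window, n_iter=20):
    """Shift window = (row, col, height, width) to the local centroid of
    the weight image until convergence; returns the final (row, col)."""
    r, c, h, w = window
    for _ in range(n_iter):
        patch = weights[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break                                    # no mass under the window
        rows, cols = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        dr = (rows * patch).sum() / total - (h - 1) / 2   # centroid offset, rows
        dc = (cols * patch).sum() / total - (w - 1) / 2   # centroid offset, cols
        if abs(dr) < 0.5 and abs(dc) < 0.5:
            break                                    # converged to sub-pixel shift
        r = int(round(r + dr))
        c = int(round(c + dc))
    return r, c
```

In a range-camera pipeline the same update would run per frame, with the depth channel available to reject background pixels before forming the weight image.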

  1. Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.

    PubMed

    Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo

    2011-01-01

    In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, and thus the motion of a targeted area may cause side effects to normal organs. This paper describes our motion detection system with three USB cameras. To reduce the effect of change in illuminance in a tracking area we used an infrared light and USB cameras that were sensitive to the infrared light. The motion detection of a patient was performed by tracking his/her ears and nose with three USB cameras, where pattern matching between a predefined template image for each view and acquired images was done by an exhaustive search method with a general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement accuracy of our system was less than 0.7 mm, amounting to less than half of that of our previous system.
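The exhaustive template search described above can be illustrated with a normalized cross-correlation scan over every window position. This is a generic single-threaded sketch, not the authors' GPGPU implementation:

```python
import numpy as np

def match_template(image, template):
    """Exhaustive search: score every window with normalized cross-correlation
    (NCC) and return the (row, col) of the best-matching position."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            wn = np.sqrt((w * w).sum())
            if wn == 0 or tn == 0:
                continue                      # flat window: NCC undefined
            score = (w * t).sum() / (wn * tn)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

The appeal of a GPU here is clear from the structure: every (r, c) score is independent, so the double loop parallelizes trivially.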

  2. Depth-tunable three-dimensional display with interactive light field control

    NASA Astrophysics Data System (ADS)

    Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chenyu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    A software-defined depth-tunable three-dimensional (3D) display with interactive 3D depth control is presented. With the proposed post-processing system, the disparity of the multi-view media can be freely adjusted. Benefiting from the wealth of information inherently contained in dense multi-view images captured with a parallel-arrangement camera array, the 3D light field is built, and the light field structure is controlled to adjust the disparity without additionally acquired depth information, since the light field structure itself contains depth information. A statistical analysis based on least squares is carried out to extract the depth information inherent in the light field structure, and the accurate depth information can be used to re-parameterize light fields for the autostereoscopic display, so that a smooth motion parallax can be guaranteed. Experimental results show that the system is convenient and effective for adjusting the 3D scene performance in the 3D display.
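A minimal sketch of the least-squares idea mentioned above: for a parallel camera array, a feature's image position varies linearly with view index, so a least-squares line fit across views yields the per-view disparity, which encodes depth. The pinhole depth formula and the parameter values below are textbook assumptions for illustration, not details from the paper:

```python
import numpy as np

def disparity_from_views(x_positions):
    """Least-squares fit of a feature's image x-position across a parallel
    camera array; the fitted slope is the per-view disparity (pixels/view)."""
    views = np.arange(len(x_positions))
    slope, _intercept = np.polyfit(views, x_positions, 1)
    return slope

def depth_from_disparity(slope, focal_px, baseline):
    # Pinhole model with parallel cameras: depth = f * B / d
    return focal_px * baseline / slope
```

With the fit done over many views instead of a single stereo pair, pixel noise in any one view is averaged out, which is the benefit of a dense camera array.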

  3. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three-dimensional viewing, and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones, each of which corresponds to unique camera movement parameters such as speed of movement. Speed parameters include constant, increasing, or decreasing speed. Other parameters include pan, tilt, slide, and raising or lowering of the cameras. Other user interface devices are provided to improve the three-dimensional control capabilities of an operator in a local operating environment. Such devices include a pair of visual display glasses, a microphone and a remote actuator. The visual display glasses are provided to facilitate three-dimensional viewing, and hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.

  4. Full High-definition three-dimensional gynaecological laparoscopy--clinical assessment of a new robot-assisted device.

    PubMed

    Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus

    2014-01-01

    To investigate the clinical assessment of a full high-definition (HD) three-dimensional robot-assisted laparoscopic device in gynaecological surgery. This study included 70 women who underwent gynaecological laparoscopic procedures. Demographic parameters, type and duration of surgery and perioperative complications were analyzed. Fifteen surgeons were postoperatively interviewed regarding their assessment of this new system with a standardized questionnaire. The clinical assessment revealed that three-dimensional full-HD visualisation is comfortable and improves spatial orientation and hand-to-eye coordination. The majority of the surgeons stated they would prefer a three-dimensional system to a conventional two-dimensional device and stated that the robotic camera arm led to more relaxed working conditions. Three-dimensional laparoscopy is feasible, comfortable and well-accepted in daily routine. The three-dimensional visualisation improves surgeons' hand-to-eye coordination, intracorporeal suturing and fine dissection. The combination of full-HD three-dimensional visualisation with the robotic camera arm results in very high image quality and stability.

  5. Does a 3D Image Improve Laparoscopic Motor Skills?

    PubMed

    Folaranmi, Semiu Eniola; Partridge, Roland W; Brennan, Paul M; Hennessey, Iain A M

    2016-08-01

    To quantitatively determine whether a three-dimensional (3D) image improves laparoscopic performance compared with a two-dimensional (2D) image. This is a prospective study with two groups of participants: novices (5) and experts (5). Individuals within each group undertook a validated laparoscopic task on a box simulator, alternating between 2D and a 3D laparoscopic image until they had repeated the task five times with each imaging modality. A dedicated motion capture camera was used to determine the time taken to complete the task (seconds) and instrument distance traveled (meters). Among the experts, the mean time taken to perform the task on the 3D image was significantly quicker than on the 2D image, 40.2 seconds versus 51.2 seconds, P < .0001. Among the novices, the mean task time again was significantly quicker on the 3D image, 56.4 seconds versus 82.7 seconds, P < .0001. There was no significant difference in the mean time it took a novice to perform the task using a 3D camera compared with an expert on a 2D camera, 56.4 seconds versus 51.3 seconds, P = .3341. The use of a 3D image confers a significant performance advantage over a 2D camera in quantitatively measured laparoscopic skills for both experts and novices. The use of a 3D image appears to improve a novice's performance to the extent that it is not statistically different from an expert using a 2D image.

  6. Muscle forces analysis in the shoulder mechanism during wheelchair propulsion.

    PubMed

    Lin, Hwai-Ting; Su, Fong-Chin; Wu, Hong-Wen; An, Kai-Nan

    2004-01-01

    This study combines an ergometric wheelchair, a six-camera video motion capture system and a prototype computer graphics based musculoskeletal model (CGMM) to predict shoulder joint loading, muscle contraction force per muscle and the sequence of muscular actions during wheelchair propulsion, and also to provide an animated computer graphics model of the relative interactions. Five healthy male subjects with no history of upper extremity injury participated. A conventional manual wheelchair was equipped with a six-component load cell to collect three-dimensional forces and moments experienced by the wheel, allowing real-time measurement of hand/rim force applied by subjects during normal wheelchair operation. An ExpertVision six-camera video motion capture system collected trajectory data of markers attached on anatomical positions. The CGMM was used to simulate and animate muscle action by using an optimization technique combining observed muscular motions with physiological constraints to estimate muscle contraction forces during wheelchair propulsion. The CGMM provides results that satisfactorily match the predictions of previous work, disregarding minor differences which presumably result from differing experimental conditions, measurement technologies and subjects. Specifically, the CGMM shows that the supraspinatus, infraspinatus, anterior deltoid, pectoralis major and biceps long head are the prime movers during the propulsion phase. The middle and posterior deltoid and supraspinatus muscles are responsible for arm return during the recovery phase. CGMM modelling shows that the rotator cuff and pectoralis major play an important role during wheelchair propulsion, confirming the known risk of injury for these muscles during wheelchair propulsion. The CGMM successfully transforms six-camera video motion capture data into a technically useful and visually interesting animated video model of the shoulder musculoskeletal system. 
The CGMM further yields accurate estimates of muscular forces during motion, indicating that this prototype modelling and analysis technique will aid in study, analysis and therapy of the mechanics and underlying pathomechanics involved in various musculoskeletal overuse syndromes.

  7. A Tool for the Automated Collection of Space Utilization Data: Three Dimensional Space Utilization Monitor

    NASA Technical Reports Server (NTRS)

    Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.

    2015-01-01

    The Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP), in collaboration with the Behavioral Health and Performance (BHP) Element, is conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to the crew for required activities, as well as the layout and accommodations within that volume. NASA is looking for innovative methods to unobtrusively collect NHV data without impacting crew time. Required data include metrics such as the location and orientation of crew, the volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments, methods for collecting such data exist, yet many are obtrusive and require significant post-processing. Example technologies used in terrestrial settings include infrared (IR) retro-reflective marker-based motion capture, GPS sensor tracking, inertial tracking, and multiple-camera filmography. However, due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. Meanwhile, multiple technologies have not yet been applied to space operations for these explicit purposes. Two of these are 3-Dimensional Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems that allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect, or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).

  8. Stereo imaging velocimetry for microgravity applications

    NASA Technical Reports Server (NTRS)

    Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.

    1994-01-01

    Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.

  9. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's three-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade-Lucas-Tomasi (KLT) algorithm. A circle-matching step follows to remove the outliers caused by mismatching in the KLT algorithm. A space-position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
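The RANSAC refinement step described above can be illustrated with a toy version in which the dominant image motion is a single translation and flow vectors far from the consensus are rejected as outliers (mismatches or independently moving objects). The one-point minimal sample and the fixed tolerance are simplifying assumptions; the paper's actual model is the full 6-DoF ego-motion:

```python
import numpy as np

def ransac_translation(flows, n_iter=200, tol=1.0, seed=0):
    """Toy RANSAC: flows is an (N, 2) array of optical-flow vectors.
    Hypothesize a dominant translation from a random sample, count the
    vectors within tol of it, and keep the largest consensus set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(flows), dtype=bool)
    for _ in range(n_iter):
        candidate = flows[rng.integers(len(flows))]   # minimal sample: 1 vector
        inliers = np.linalg.norm(flows - candidate, axis=1) <= tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    model = flows[best_inliers].mean(axis=0)          # refit on the inliers
    return model, best_inliers
```

In the paper's setting the hypothesized model would be the six motion parameters and the residual would be the reprojected flow error, but the sample-score-refit loop is the same.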

  10. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  11. Single exposure three-dimensional imaging of dusty plasma clusters.

    PubMed

    Hartmann, Peter; Donkó, István; Donkó, Zoltán

    2013-02-01

    We have worked out the details of a single camera, single exposure method to perform three-dimensional imaging of a finite particle cluster. The procedure is based on the plenoptic imaging principle and utilizes a commercial Lytro light field still camera. We demonstrate the capabilities of our technique on a single layer particle cluster in a dusty plasma, where the camera is aligned and inclined at a small angle to the particle layer. The reconstruction of the third coordinate (depth) is found to be accurate and even shadowing particles can be identified.

  12. High-accuracy optical extensometer based on coordinate transform in two-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan

    2018-01-01

    In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) a slanted optical axis (misalignment between the camera's optical axis and the object surface) and (2) out-of-plane motions (both translations and rotations) of the specimen. These introduce measurement errors into the results obtained by 2D DIC, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: (1) a pre-calibration step is used to determine the intrinsic parameters and lens distortions; (2) a compensation panel (a rigid panel with several markers located at known positions) is mounted on the specimen to track the specimen's motion, so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; (3) three-dimensional world coordinates of measuring points on the specimen are reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. The results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method gives good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has also been applied in tensile experiments to obtain high-accuracy results.
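The need for such compensation can be seen from the well-known pseudo-strain that out-of-plane translation induces in a single-camera DIC setup: moving a rigid specimen changes the magnification, so the image appears to stretch or shrink even though no deformation occurred. A minimal sketch of this textbook relation (sign convention assumed here: delta_z positive away from the camera):

```python
def apparent_strain(delta_z, object_distance):
    """Apparent (pseudo) strain seen by a 2D DIC camera when a flat, rigid
    specimen translates out of plane by delta_z, from the change in pinhole
    magnification m = f/Z:  eps = Z/(Z + delta_z) - 1 = -delta_z/(Z + delta_z)."""
    return -delta_z / (object_distance + delta_z)
```

A translation of 1/200 of the object distance thus produces an apparent strain near 5000 με, orders of magnitude above the 10 με accuracy quoted above, which is why the panel-based compensation is required.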

  13. A Three Dimensional Kinematic and Kinetic Study of the Golf Swing

    PubMed Central

    Nesbit, Steven M.

    2005-01-01

    This paper discusses the three-dimensional kinematics and kinetics of a golf swing as performed by 84 male and one female amateur subjects of various skill levels. The analysis was performed using a variable full-body computer model of a human coupled with a flexible model of a golf club. Data to drive the model was obtained from subject swings recorded using a multi-camera motion analysis system. Model output included club trajectories, golfer/club interaction forces and torques, work and power, and club deflections. These data formed the basis for a statistical analysis of all subjects, and a detailed analysis and comparison of the swing characteristics of four of the subjects. The analysis generated much new data concerning the mechanics of the golf swing. It revealed that a golf swing is a highly coordinated and individual motion and subject-to-subject variations were significant. The study highlighted the importance of the wrists in generating club head velocity and orienting the club face. The trajectory of the hands and the ability to do work were the factors most closely related to skill level. Key Points: Full-body model of the golf swing. Mechanical description of the golf swing. Statistical analysis of golf swing mechanics. Comparisons of subject swing mechanics. PMID:24627665

  14. A three dimensional kinematic and kinetic study of the golf swing.

    PubMed

    Nesbit, Steven M

    2005-12-01

    This paper discusses the three-dimensional kinematics and kinetics of a golf swing as performed by 84 male and one female amateur subjects of various skill levels. The analysis was performed using a variable full-body computer model of a human coupled with a flexible model of a golf club. Data to drive the model was obtained from subject swings recorded using a multi-camera motion analysis system. Model output included club trajectories, golfer/club interaction forces and torques, work and power, and club deflections. These data formed the basis for a statistical analysis of all subjects, and a detailed analysis and comparison of the swing characteristics of four of the subjects. The analysis generated much new data concerning the mechanics of the golf swing. It revealed that a golf swing is a highly coordinated and individual motion and subject-to-subject variations were significant. The study highlighted the importance of the wrists in generating club head velocity and orienting the club face. The trajectory of the hands and the ability to do work were the factors most closely related to skill level. Key Points: Full-body model of the golf swing. Mechanical description of the golf swing. Statistical analysis of golf swing mechanics. Comparisons of subject swing mechanics.

  15. Three-dimensional kinematics of the lower limbs during forward ice hockey skating.

    PubMed

    Upjohn, Tegan; Turcotte, René; Pearsall, David J; Loh, Jonathan

    2008-05-01

    The objectives of the study were to describe lower limb kinematics in three dimensions during the forward skating stride in hockey players and to contrast skating techniques between low- and high-calibre skaters. Participant motions were recorded with four synchronized digital video cameras while wearing reflective marker triads on the thighs, shanks, and skates. Participants skated on a specialized treadmill with a polyethylene slat bed at a self-selected speed for 1 min. Each participant completed three 1-min skating trials separated by 5 min of rest. Joint and limb segment angles were calculated within the local (anatomical) and global reference planes. Similar gross movement patterns and stride rates were observed; however, high-calibre participants showed a greater range and rate of joint motion in both the sagittal and frontal planes, contributing to greater stride length for high-calibre players. Furthermore, consequent postural differences led to greater lateral excursion during the power stroke in high-calibre skaters. In conclusion, specific kinematic differences in both joint and limb segment angle movement patterns were observed between low- and high-calibre skaters.

  16. Application of Least-Squares Adjustment Technique to Geometric Camera Calibration and Photogrammetric Flow Visualization

    NASA Technical Reports Server (NTRS)

    Chen, Fang-Jenq

    1997-01-01

    Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system were perfect, the transformation equations between the two-dimensional image and the three-dimensional object space would be linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of the transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of 1 part in 40,000 is achievable without tedious laboratory calibrations of the camera.
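A common inner step in such iterative least-squares adjustments is inverting a radial lens-distortion model, which has no closed-form solution and is typically handled by fixed-point iteration. The one-parameter sketch below is a generic illustration of that step, not the algorithm of this paper:

```python
def undistort(xd, yd, k1, n_iter=10):
    """Invert the one-parameter radial distortion model
        x_d = x_u * (1 + k1 * r_u**2),  r_u**2 = x_u**2 + y_u**2
    by fixed-point iteration, starting from the distorted coordinates."""
    xu, yu = xd, yd
    for _ in range(n_iter):
        r2 = xu * xu + yu * yu          # radius estimate from current iterate
        xu = xd / (1.0 + k1 * r2)
        yu = yd / (1.0 + k1 * r2)
    return xu, yu
```

For moderate distortion the iteration contracts quickly, so a handful of passes recovers the undistorted coordinates to machine precision; the full calibration then alternates such corrections with linear least-squares updates of the transformation parameters.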

  17. Measurement of Zeta-Potential at Microchannel Wall by a Nanoscale Laser Induced Fluorescence Imaging

    NASA Astrophysics Data System (ADS)

    Kazoe, Yutaka; Sato, Yohei

    A nanoscale laser induced fluorescence imaging was proposed by using fluorescent dye and the evanescent wave with total internal reflection of a laser beam. The present study focused on the two-dimensional measurement of zeta-potential at the microchannel wall, which is an electrostatic potential at the wall surface and a dominant parameter of electroosmotic flow. The evanescent wave, which decays exponentially from the wall, was used as an excitation light of the fluorescent dye. The fluorescent intensity detected by a CCD camera is closely related to the zeta-potential. Two kinds of fluorescent dye solution at different ionic concentrations were injected into a T-shaped microchannel, and formed a mixing flow field in the junction area. The two-dimensional distribution of zeta-potential at the microchannel wall in the pressure-driven flow field was measured. The obtained zeta-potential distribution has a transverse gradient toward the mixing flow field and was changed by the difference in the averaged velocity of pressure-driven flow. To understand the ion motion in the mixing flow field, the three-dimensional flow structure was analyzed by the velocity measurement using micron-resolution particle image velocimetry and the numerical simulation. It is concluded that the two-dimensional distribution of zeta-potential at the microchannel wall was dependent on the ion motion in the flow field, which was governed by the convection and molecular diffusion.

  18. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
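    The reason depth variation makes the blur space-variant can be sketched with simple pinhole geometry: for a camera translating parallel to the image plane, a point's image shifts by an amount inversely proportional to its depth. The focal length and translation below are illustrative assumptions, not values from the paper.

```python
# Sketch: why translational camera motion produces space-variant blur.
# For a pinhole camera translating parallel to the image plane by T during
# the exposure, a point at depth Z shifts on the sensor by f_px * T / Z,
# so nearer scene points are smeared over more pixels.

def blur_extent_px(focal_px, translation_m, depth_m):
    """Length of the motion-blur streak, in pixels, for a point at given depth."""
    return focal_px * translation_m / depth_m

f_px = 1200.0   # assumed focal length in pixels
T = 0.02        # assumed camera translation during exposure (m)
near = blur_extent_px(f_px, T, 1.0)    # point 1 m away
far = blur_extent_px(f_px, T, 10.0)    # point 10 m away
print(near, far)
```

A single shift-invariant deconvolution cannot undo both streak lengths at once, which is why the algorithm must estimate a depth map alongside the sharp image.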

  19. Aerodynamics of a beetle in take-off flights

    NASA Astrophysics Data System (ADS)

    Lee, Boogeon; Park, Hyungmin; Kim, Sun-Tae

    2015-11-01

    In the present study, we investigate the aerodynamics of a beetle in its take-off flights based on the three-dimensional kinematics of its inner (hindwing) and outer (elytron) wings and body postures, measured with three high-speed cameras at 2000 fps. To track the highly deformable wing motions, we distribute 21 morphological markers and use the modified direct linear transform algorithm to reconstruct the measured wing motions. To realize different take-off conditions, we consider two types of take-off: from flat ground and from a vertical rod mimicking a tree branch. It is first found that the elytron, which flaps passively due to the hindwing motion, also has non-negligible wing-kinematic parameters. With the ground, the flapping amplitude of the elytron is reduced and the hindwing changes its flapping angular velocity during the up- and downstrokes. On the other hand, the angle of attack on the elytron and hindwing increases and decreases, respectively, due to the ground. These changes in the wing motion are critically related to the aerodynamic force generation, which will be discussed in detail. Supported by the grant to Bio-Mimetic Robot Research Center funded by Defense Acquisition Program Administration (UD130070ID).
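    The direct linear transform (DLT) reconstruction used for the markers can be sketched in miniature. The camera matrices below are synthetic stand-ins for a real multi-camera calibration, and the paper's "modified" DLT differs in its details; this is only the core triangulation idea.

```python
# Sketch: DLT triangulation of one marker from two calibrated cameras,
# written in pure Python.  Each pixel observation (u, v) of a 3x4 camera
# matrix P yields two linear equations u*(p3.X) - p1.X = 0, v*(p3.X) - p2.X = 0.

def project(P, X):
    """Project 3D point X through 3x4 camera matrix P to pixel (u, v)."""
    h = [sum(P[r][c] * (X + [1.0])[c] for c in range(4)) for r in range(3)]
    return h[0] / h[2], h[1] / h[2]

def solve3x3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def triangulate(P1, uv1, P2, uv2):
    """Least-squares DLT: 4 linear equations, 3 unknowns, via normal equations."""
    rows, rhs = [], []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        for coord, prow in ((u, P[0]), (v, P[1])):
            rows.append([coord * P[2][c] - prow[c] for c in range(3)])
            rhs.append(prow[3] - coord * P[2][3])
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * bi for r, bi in zip(rows, rhs)) for i in range(3)]
    return solve3x3(AtA, Atb)

f, cx, cy = 1000.0, 640.0, 480.0
P1 = [[f, 0, cx, 0], [0, f, cy, 0], [0, 0, 1, 0]]          # reference camera
P2 = [[f, 0, cx, -0.2 * f], [0, f, cy, 0], [0, 0, 1, 0]]   # translated 0.2 m in x
X_true = [0.05, -0.03, 1.5]
X_est = triangulate(P1, project(P1, X_true), P2, project(P2, X_true))
print(X_est)
```

With three cameras, as in the study, each marker simply contributes two more rows to the same least-squares system, which improves robustness to occlusion and noise.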

  20. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a method for measuring the three-dimensional (3D) movement of the forearm and upper arm during the pitching motion of baseball using inertial sensors, without requiring precise sensor placement. Although optical motion capture currently achieves high-accuracy measurement of sports motion, it has disadvantages such as camera calibration and restriction to a fixed measurement place. In contrast, the proposed method for 3D measurement of pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm corresponds to that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity, and posture of the upper limb agree with the actual ones. Experimental measurements of pitching motion show that the trajectories of the shoulder, elbow, and wrist estimated by the proposed method are highly correlated with those from a motion capture system, with an estimation error of about 10%.
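    The drift-correction step, where integration error is removed so the estimated end state matches known conditions, can be sketched in one dimension. The sensor bias, known end condition (at rest), and linear error model below are illustrative assumptions, not the paper's full procedure.

```python
# Sketch of drift correction: integrate acceleration twice, then remove the
# accumulated error linearly in time so the estimated final velocity matches
# a known end condition (here: at rest, an assumption for illustration).

def integrate(samples, dt):
    """Trapezoidal cumulative integral of a sampled signal."""
    out = [0.0]
    for a0, a1 in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a0 + a1) * dt)
    return out

dt = 0.01
n = 200
# Synthetic 1D acceleration: speed up, then slow down, plus a constant
# sensor bias that causes drift when integrated.
accel = [1.0 if i < n // 2 else -1.0 for i in range(n)]
accel = [a + 0.05 for a in accel]

vel = integrate(accel, dt)
# Linear drift removal: force the final velocity back to the known value 0.
vel_corr = [v - vel[-1] * i / (len(vel) - 1) for i, v in enumerate(vel)]
pos = integrate(vel_corr, dt)
print(vel[-1], vel_corr[-1], pos[-1])
```

The paper applies the same idea jointly to position, velocity, and posture; here only the velocity endpoint is constrained, which already removes most of the bias-induced drift.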

  1. Conceptual Design and Dynamics Testing and Modeling of a Mars Tumbleweed Rover

    NASA Technical Reports Server (NTRS)

    Calhoun Philip C.; Harris, Steven B.; Raiszadeh, Behzad; Zaleski, Kristina D.

    2005-01-01

    The NASA Langley Research Center has been developing a novel concept for a Mars planetary rover called the Mars Tumbleweed. This concept utilizes the wind to propel the rover along the Mars surface, giving it the potential to cover vast distances not possible with current Mars rover technology. This vehicle, in its deployed configuration, must be large and lightweight to provide the ratio of drag force to rolling resistance necessary to initiate motion from rest on the Mars surface. One Tumbleweed design concept that satisfies these considerations is called the Eggbeater-Dandelion. This paper describes the basic design considerations and a proposed dynamics model of the concept for use in simulation studies. It includes a summary of rolling/bouncing dynamics tests that used videogrammetry to better understand, characterize, and validate the dynamics model assumptions, especially the effective rolling resistance in bouncing/rolling dynamic conditions. The dynamics test used cameras to capture the motion of 32 targets affixed to the test article's outer structure. Proper placement of the cameras and alignment of their respective fields of view provided adequate image resolution of multiple targets along the trajectory as the test article proceeded down the ramp. Image processing of the frames from multiple cameras was used to determine the target positions. Position data from a set of these test runs were compared with results of a three-dimensional, flexible dynamics model, and model input parameters were adjusted to match the test data. The process presented herein provided the means to characterize the dynamics and validate the simulation of the Eggbeater-Dandelion concept. The simulation model was used to demonstrate full-scale Tumbleweed motion from a stationary condition on a flat, sloped terrain using representative Mars environment parameters.

  2. Application of a laser scanner to three dimensional visual sensing tasks

    NASA Technical Reports Server (NTRS)

    Ryan, Arthur M.

    1992-01-01

    The issues associated with using a laser scanner for visual sensing are described, along with the methods developed by the author to address them. A laser scanner is a device that controls the direction of a laser beam by deflecting it through a pair of orthogonal mirrors, the orientations of which are specified by a computer. If a calibrated laser scanner is combined with a calibrated camera, it is possible to perform three-dimensional sensing by directing the laser at objects within the field of view of the camera. Several issues must be addressed in order to use the laser scanner effectively. First, methods are needed to calibrate the laser scanner. Second, methods are required to estimate three-dimensional points using the calibrated camera and laser scanner. Third, methods are required for locating the laser spot in a cluttered image. Fourth, mathematical models are necessary to predict the laser scanner's performance and provide structure for three-dimensional data points. Several methods were developed to address each of these issues and were evaluated to determine how and when they should be applied. The theoretical development, implementation, and results of their use in a dual-arm, eighteen-degree-of-freedom robotic system for space assembly are described.

  3. A novel optical investigation technique for railroad track inspection and assessment

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Beale, Christopher H.; Niezrecki, Christopher

    2017-04-01

    Track failures due to cross-tie degradation or loss of ballast support may result in problems ranging from simple service interruptions to derailments. Structural Health Monitoring (SHM) of railway track is important for safety reasons and to reduce downtime and maintenance costs, yet current track inspection technologies are insufficient, so novel and cost-effective methods for assessing track health are needed. Advancements achieved in recent years in camera technology, optical sensors, and image-processing algorithms have made machine vision, Structure from Motion (SfM), and three-dimensional (3D) Digital Image Correlation (DIC) systems extremely appealing techniques for extracting structural deformations and geometry profiles. Therefore, optically based, non-contact measurement techniques may be used for assessing surface defects, rail and tie deflection profiles, and ballast condition. In this study, the design of two camera-based measurement systems is proposed for cross-tie and ballast condition assessment and track examination. The first consists of four pairs of cameras installed on the underside of a rail car to detect the induced deformation and displacement along the whole length of the track's cross-ties using 3D DIC measurement techniques. The second consists of another set of cameras using SfM techniques to obtain a 3D rendering of the infrastructure from a series of two-dimensional (2D) images, to evaluate the state of the track qualitatively. The feasibility of the proposed optical systems is evaluated through extensive laboratory tests, demonstrating their ability to measure parameters of interest (e.g. a cross-tie's full-field displacement, vertical deflection, shape, etc.) for assessment and SHM of railroad track.

  4. Augmented reality glass-free three-dimensional display with the stereo camera

    NASA Astrophysics Data System (ADS)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera and a lenticular lens array presenting parallax content from different angles, is proposed. Compared with previous implementations of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method realizes glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can obtain rich 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that the improved method can realize AR glass-free 3D display, and that both the virtual objects and the real scene exhibit realistic and clearly perceptible stereo performance.

  5. Single camera volumetric velocimetry in aortic sinus with a percutaneous valve

    NASA Astrophysics Data System (ADS)

    Clifford, Chris; Thurow, Brian; Midha, Prem; Okafor, Ikechukwu; Raghav, Vrishank; Yoganathan, Ajit

    2016-11-01

    Cardiac flows have long been understood to be highly three dimensional, yet traditional in vitro techniques used to capture these complexities are costly and cumbersome. Thus, two dimensional techniques are primarily used for heart valve flow diagnostics. The recent introduction of plenoptic camera technology allows for traditional cameras to capture both spatial and angular information from a light field through the addition of a microlens array in front of the image sensor. When combined with traditional particle image velocimetry (PIV) techniques, volumetric velocity data may be acquired with a single camera using off-the-shelf optics. Particle volume pairs are reconstructed from raw plenoptic images using a filtered refocusing scheme, followed by three-dimensional cross-correlation. This technique was applied to the sinus region (known for having highly three-dimensional flow structures) of an in vitro aortic model with a percutaneous valve. Phase-locked plenoptic PIV data was acquired at two cardiac outputs (2 and 5 L/min) and 7 phases of the cardiac cycle. The volumetric PIV data was compared to standard 2D-2C PIV. Flow features such as recirculation and stagnation were observed in the sinus region in both cases.
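    The cross-correlation step at the heart of PIV can be sketched in one dimension. The signals and shift below are synthetic; a real plenoptic PIV pipeline correlates reconstructed 3D particle volumes, not 1D traces.

```python
import random

# Sketch: the cross-correlation step of PIV, reduced to one dimension.
# The displacement of a particle pattern between two exposures is taken as
# the lag that maximizes the cross-correlation (direct sum, pure Python).

def cross_correlation_peak(f, g, max_lag):
    """Return the integer lag of g relative to f with maximum correlation."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(f[i] * g[i + lag]
                  for i in range(len(f))
                  if 0 <= i + lag < len(g))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

random.seed(0)
frame1 = [random.random() for _ in range(128)]
shift = 7                                  # true particle displacement (pixels)
frame2 = [0.0] * shift + frame1[:-shift]   # frame1 shifted right by 7
peak = cross_correlation_peak(frame1, frame2, 16)
print(peak)
```

In the volumetric case the same search runs over 3D lags within each interrogation volume, yielding all three velocity components from a single camera.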

  6. A trillion frames per second: the techniques and applications of light-in-flight photography.

    PubMed

    Faccio, Daniele; Velten, Andreas

    2018-06-14

    Cameras capable of capturing videos at a trillion frames per second make it possible to freeze light in motion, a very counterintuitive capability given our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as three-dimensional imaging of scenes hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media, and imaging of ultrafast optical processes such as laser ablation, supercontinuum and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high-speed photography, with a particular focus on `light-in-flight' imaging, i.e. applications where the key element is the imaging of light itself at frame rates that allow its motion to be frozen, thereby extracting information that would otherwise be blurred out and lost. © 2018 IOP Publishing Ltd.

  7. Reconstruction of measurable three-dimensional point cloud model based on large-scene archaeological excavation sites

    NASA Astrophysics Data System (ADS)

    Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing

    2017-01-01

    This paper outlines a low-cost, user-friendly photogrammetric technique that uses nonmetric cameras to obtain digital image sequences of excavation sites, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, combined with a number of global control points at the excavation site, to reconstruct high-precision, measurable three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of small base-to-height ratios, high inclination, unstable altitudes, and significant ground elevation changes that affect image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method maintains the same visual result and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes but has lower accuracy when reconstructing a small scene at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for accurate location of cultural relics, archaeological excavation, investigation, and site protection planning. The proposed method has comprehensive application value.

  8. Docking alignment system

    NASA Technical Reports Server (NTRS)

    Monford, Leo G. (Inventor)

    1990-01-01

    Improved techniques are provided for alignment of two objects. The present invention is particularly suited for three-dimensional translation and three-dimensional rotational alignment of objects in outer space. A camera 18 is fixedly mounted to one object, such as a remote manipulator arm 10 of the spacecraft, while the planar reflective surface 30 is fixed to the other object, such as a grapple fixture 20. A monitor 50 displays in real-time images from the camera, such that the monitor displays both the reflected image of the camera and visible markings on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm 10 manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.

  9. A three-dimensional quality-guided phase unwrapping method for MR elastography

    NASA Astrophysics Data System (ADS)

    Wang, Huifang; Weaver, John B.; Perreard, Irina I.; Doyley, Marvin M.; Paulsen, Keith D.

    2011-07-01

    Magnetic resonance elastography (MRE) uses accumulated phases that are acquired at multiple, uniformly spaced relative phase offsets, to estimate harmonic motion information. Heavily wrapped phase occurs when the motion is large and unwrapping procedures are necessary to estimate the displacements required by MRE. Two unwrapping methods were developed and compared in this paper. The first method is a sequentially applied approach. The three-dimensional MRE phase image block for each slice was processed by two-dimensional unwrapping followed by a one-dimensional phase unwrapping approach along the phase-offset direction. This unwrapping approach generally works well for low noise data. However, there are still cases where the two-dimensional unwrapping method fails when noise is high. In this case, the baseline of the corrupted regions within an unwrapped image will not be consistent. Instead of separating the two-dimensional and one-dimensional unwrapping in a sequential approach, an interleaved three-dimensional quality-guided unwrapping method was developed to combine both the two-dimensional phase image continuity and one-dimensional harmonic motion information. The quality of one-dimensional harmonic motion unwrapping was used to guide the three-dimensional unwrapping procedures and it resulted in stronger guidance than in the sequential method. In this work, in vivo results generated by the two methods were compared.
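    The one-dimensional unwrapping along the phase-offset direction that the sequential method relies on can be sketched directly: whenever the jump between neighbouring samples exceeds π, a multiple of 2π is added or removed. The linear phase ramp below is a synthetic test signal, not MRE data.

```python
import math

# Sketch: one-dimensional phase unwrapping, the building block of the
# sequential method described above.

def unwrap_1d(phase):
    """Unwrap a sequence of phases given in radians."""
    out = [phase[0]]
    for p in phase[1:]:
        d = p - out[-1]
        d -= 2.0 * math.pi * round(d / (2.0 * math.pi))  # wrap jump into (-pi, pi]
        out.append(out[-1] + d)
    return out

# A linear phase ramp, wrapped into (-pi, pi], is recovered up to a constant.
true_phase = [0.4 * i for i in range(40)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
unwrapped = unwrap_1d(wrapped)
max_err = max(abs(u - t) for u, t in zip(unwrapped, true_phase))
print(max_err)
```

The limitation the paper addresses is visible here: the method assumes true jumps between neighbours stay below π, which noisy two-dimensional data can violate, motivating the quality-guided three-dimensional approach.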

  10. Digital stereophotogrammetry based on circular markers and zooming cameras: evaluation of a method for 3D analysis of small motions in orthopaedic research

    PubMed Central

    2011-01-01

    Background Orthopaedic research projects focusing on small displacements in a small measurement volume require a radiation-free, three-dimensional motion analysis system. A stereophotogrammetric motion analysis system can track wireless, small, lightweight markers attached to the objects, keeping the disturbance of the measured objects by the marker tracking at a minimum. The purpose of this study was to develop and evaluate a non-position-fixed, compact motion analysis system configured for a small measurement volume and able to zoom while tracking small round flat markers with respect to a fiducial marker used for camera pose estimation. Methods The system consisted of two web cameras and the fiducial marker placed in front of them. The markers to track were black circles on a white background. The algorithm to detect the centre of a projected circle on the image plane was described and applied. In order to evaluate the accuracy (mean measurement error) and precision (standard deviation of the measurement error) of the optical measurement system, two experiments were performed: 1) inter-marker distance measurement and 2) marker displacement measurement. Results The first experiment, measuring 10 mm distances, showed a total accuracy of 0.0086 mm and a precision of ± 0.1002 mm. In the second experiment, translations from 0.5 mm to 5 mm were measured with a total accuracy of 0.0038 mm and a precision of ± 0.0461 mm. Rotations of 2.25° were measured with an accuracy of 0.058° and a precision of ± 0.172°. Conclusions The description of this non-proprietary measurement device with very good levels of accuracy and precision may provide opportunities for new, cost-effective applications of stereophotogrammetric analysis in musculoskeletal research projects focusing on kinematics of small displacements in a small measurement volume. PMID:21284867
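    The two figures of merit defined in this record, accuracy as the mean measurement error and precision as its standard deviation, can be computed in a few lines. The sample measurements below are made up for illustration, not data from the study.

```python
import statistics

# Sketch: accuracy = mean of the measurement errors, precision = their
# standard deviation, as defined in the record above.

def accuracy_and_precision(measured, true_value):
    errors = [m - true_value for m in measured]
    return statistics.mean(errors), statistics.stdev(errors)

# Illustrative repeated measurements of a known 10 mm distance.
measured_mm = [10.05, 9.98, 10.12, 9.91, 10.02, 9.95]
acc, prec = accuracy_and_precision(measured_mm, 10.0)
print(acc, prec)
```

This separation explains why the study can report an accuracy far smaller than its precision: a system can be nearly unbiased on average while individual measurements still scatter.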

  11. 3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading

    PubMed Central

    2011-01-01

    Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. Current solutions are often not very promising for the patient; it would therefore be valuable to measure the dynamic 3D deformation of the whole pelvic bone in order to obtain a more realistic dataset for better implant design. We therefore hypothesized that a material testing machine could be combined with a 3D video motion capturing system, as used in clinical gait analysis, to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capturing system, were fixed to the bony surface of the pelvis specimen. While a dynamic sinusoidal load was applied, the 3D movement of the markers was recorded by the cameras, and the 3D deformation of the pelvis specimen was then computed. The accuracy of the 3D marker movement was verified against a step-function 3D displacement generated by a manually driven 3D micro-motion stage. Results The resulting accuracy of the measurement system depended on the number of cameras tracking a marker. During the stationary phase of the calibration procedure, the noise level was ± 0.036 mm for a marker seen by two cameras and ± 0.022 mm for one tracked by 6 cameras. The detectable 3D movement performed by the 3D micro-motion stage was smaller than the noise level of the 3D video motion capturing system. The limiting factor of the setup was therefore the noise level, which resulted in a measurement accuracy for the dynamic test setup of ± 0.036 mm. Conclusion This 3D test setup opens new possibilities in dynamic testing of a wide range of materials, such as anatomical specimens, biomaterials, and their combinations.
The resulting 3D-deformation dataset can be used for a better estimation of material characteristics of the underlying structures. This is an important factor in a reliable biomechanical modelling and simulation as well as in a successful design of complex implants. PMID:21762533

  12. A simple method for in vivo measurement of implant rod three-dimensional geometry during scoliosis surgery.

    PubMed

    Salmingo, Remel A; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu

    2012-05-01

    Scoliosis is defined as a spinal pathology characterized by a three-dimensional deformity of the spine combined with vertebral rotation. Treatment for severe scoliosis is achieved when the scoliotic spine is surgically corrected and fixed using implanted rods and screws. Several studies have performed biomechanical modeling and corrective force measurements of scoliosis correction. These studies were able to predict the clinical outcome and measure the corrective forces acting on screws; however, they were not able to measure the intraoperative three-dimensional geometry of the spinal rod. As a result, the biomechanical models may not be fully realistic, and the corrective forces during the surgical correction procedure were difficult to measure intra-operatively. Projective geometry has been shown to be successful in the reconstruction of a three-dimensional structure from a series of images obtained from different views. In this study, we propose a new method to measure the three-dimensional geometry of an implant rod using two cameras. The reconstruction method requires only a few parameters: the included angle θ between the two cameras, the actual length of the rod in mm, and the locations of points for curve fitting. An implant rod of the type utilized in spine surgery was used to evaluate the accuracy of the method. The three-dimensional geometry of the rod measured from a scanner image was compared to that from the proposed two-camera method. The mean error in the reconstruction measurements ranged from 0.32 to 0.45 mm. The method presented here demonstrates the possibility of intra-operatively measuring the three-dimensional geometry of a spinal rod, and could be used in surgical procedures to better understand the biomechanics of scoliosis correction through real-time measurement of three-dimensional implant rod geometry in vivo.
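    The geometric role of the included angle θ can be sketched under a simplifying assumption of orthographic views, which is not the paper's calibrated perspective setup: each camera supplies one horizontal coordinate of a rod point at a given height, and the two coordinates combine into a 3D position.

```python
import math

# Geometric sketch: combining two views with a known included angle theta.
# View A projects onto the x-z plane; view B is rotated by theta about the
# vertical axis.  Orthographic projection is an assumption for illustration.

def reconstruct(x_a, x_b, theta_rad):
    """Combine horizontal coordinates from the two views into (x, y)."""
    x = x_a
    y = (x_b - x_a * math.cos(theta_rad)) / math.sin(theta_rad)
    return x, y

theta = math.radians(60.0)                 # assumed included angle
x_true, y_true, z = 12.0, 5.0, 80.0        # a point on a synthetic rod (mm)
x_a = x_true                               # seen by view A
x_b = x_true * math.cos(theta) + y_true * math.sin(theta)  # seen by view B
x_r, y_r = reconstruct(x_a, x_b, theta)
print(x_r, y_r)
```

Repeating this at many heights along the rod, followed by curve fitting, yields the full three-dimensional rod geometry described in the record.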

  13. Effect of thong style flip-flops on children's barefoot walking and jogging kinematics.

    PubMed

    Chard, Angus; Greene, Andrew; Hunt, Adrienne; Vanwanseele, Benedicte; Smith, Richard

    2013-03-05

    Thong style flip-flops are a popular form of footwear for children. Health professionals relate the wearing of thongs to foot pathology and deformity despite the lack of quantitative evidence to support or refute the benefits or disadvantages of children wearing thongs. The purpose of this study was to compare the effect of thong footwear on children's barefoot three-dimensional foot kinematics during walking and jogging. Thirteen healthy children (age 10.3 ± 1.6 SD years) were recruited from the metropolitan area of Sydney, Australia, following a national press release. Kinematic data were recorded at 200 Hz using a 14-camera motion analysis system (Cortex, Motion Analysis Corporation, Santa Rosa, USA) and simultaneous ground reaction forces were measured using a force platform (Model 9281B, Kistler, Winterthur, Switzerland). A three-segment foot model was used to describe three-dimensional ankle and midfoot and one-dimensional hallux kinematics during the stance sub-phases of contact, midstance and propulsion. Thongs resulted in increased ankle dorsiflexion during contact (by 10.9°, p = 0.005 walk and by 8.1°, p = 0.005 jog); increased midfoot plantarflexion during midstance (by 5.0°, p = 0.037 jog) and propulsion (by 6.7°, p = 0.044 walk and by 5.4°, p = 0.020 jog); increased midfoot inversion during contact (by 3.8°, p = 0.042 jog); and reduced hallux dorsiflexion during walking 10% prior to heel strike (by 6.5°, p = 0.005), at heel strike (by 4.9°, p = 0.031) and 10% post toe-off (by 10.7°, p = 0.001). Ankle dorsiflexion during the contact phase of walking and jogging, combined with reduced hallux dorsiflexion during walking, suggests a mechanism to retain the thong during weight acceptance. Greater midfoot plantarflexion throughout midstance while walking and throughout midstance and propulsion while jogging may indicate a gripping action to sustain the thong during stance. 
While these compensations exist, the overall findings suggest that foot motion whilst wearing thongs may be more replicable of barefoot motion than originally thought.

  14. Effect of thong style flip-flops on children’s barefoot walking and jogging kinematics

    PubMed Central

    2013-01-01

    Background Thong style flip-flops are a popular form of footwear for children. Health professionals relate the wearing of thongs to foot pathology and deformity despite the lack of quantitative evidence to support or refute the benefits or disadvantages of children wearing thongs. The purpose of this study was to compare the effect of thong footwear on children’s barefoot three-dimensional foot kinematics during walking and jogging. Methods Thirteen healthy children (age 10.3 ± 1.6 SD years) were recruited from the metropolitan area of Sydney, Australia, following a national press release. Kinematic data were recorded at 200 Hz using a 14-camera motion analysis system (Cortex, Motion Analysis Corporation, Santa Rosa, USA) and simultaneous ground reaction forces were measured using a force platform (Model 9281B, Kistler, Winterthur, Switzerland). A three-segment foot model was used to describe three-dimensional ankle and midfoot and one-dimensional hallux kinematics during the stance sub-phases of contact, midstance and propulsion. Results Thongs resulted in increased ankle dorsiflexion during contact (by 10.9°, p = 0.005 walk and by 8.1°, p = 0.005 jog); increased midfoot plantarflexion during midstance (by 5.0°, p = 0.037 jog) and propulsion (by 6.7°, p = 0.044 walk and by 5.4°, p = 0.020 jog); increased midfoot inversion during contact (by 3.8°, p = 0.042 jog); and reduced hallux dorsiflexion during walking 10% prior to heel strike (by 6.5°, p = 0.005), at heel strike (by 4.9°, p = 0.031) and 10% post toe-off (by 10.7°, p = 0.001). Conclusions Ankle dorsiflexion during the contact phase of walking and jogging, combined with reduced hallux dorsiflexion during walking, suggests a mechanism to retain the thong during weight acceptance. Greater midfoot plantarflexion throughout midstance while walking and throughout midstance and propulsion while jogging may indicate a gripping action to sustain the thong during stance. 
While these compensations exist, the overall findings suggest that foot motion whilst wearing thongs may be more replicable of barefoot motion than originally thought. PMID:23497571

  15. Effects of camera location on the reconstruction of 3D flare trajectory with two cameras

    NASA Astrophysics Data System (ADS)

    Özsaraç, Seçkin; Yeşilkaya, Muhammed

    2015-05-01

    Flares are valuable electronic warfare assets in the battle against infrared-guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point seen by multiple cameras is a common problem. Camera placement, camera calibration, corresponding-pixel determination between the images of different cameras, and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we investigate by simulation the effects of camera placement on flare trajectory estimation performance. First, the 3D trajectories of a flare and of the aircraft that dispenses it are generated with simple motion models. Then, two virtual ideal pinhole camera models are placed at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, the image-plane coordinates of the flare on both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, two sources of error are used: one models the uncertainties in the determination of the camera view vectors, i.e., the camera orientations are measured with noise; the second models the imperfections of corresponding-pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated by triangulation using the corresponding pixel indices, the view vectors, and the FOV of the cameras. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation-error performance is found for the given aircraft and flare trajectories.
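    The triangulation step can be sketched with the classic midpoint method: each camera contributes a ray, and the estimate is the midpoint of the shortest segment joining the two rays. The geometry below is synthetic and noise-free; the paper's simulation additionally perturbs the view vectors and pixel correspondences.

```python
# Sketch: triangulation by the midpoint method, pure Python.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def add_scaled(a, s, d): return [x + s * y for x, y in zip(a, d)]

def midpoint_triangulate(o1, d1, o2, d2):
    """Midpoint of the closest points of rays o1 + t*d1 and o2 + s*d2."""
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = add_scaled(o1, t, d1)
    p2 = add_scaled(o2, s, d2)
    return [(x + y) / 2.0 for x, y in zip(p1, p2)]

target = [100.0, 50.0, 1000.0]               # true flare position (m), assumed
o1, o2 = [0.0, 0.0, 0.0], [500.0, 0.0, 0.0]  # two camera locations, assumed
d1, d2 = sub(target, o1), sub(target, o2)    # noise-free view vectors
est = midpoint_triangulate(o1, d1, o2, d2)
print(est)
```

With noisy view vectors the rays no longer intersect, and the midpoint's sensitivity to that noise is exactly what varies with camera placement in the study.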

  16. Method for separating video camera motion from scene motion for constrained 3D displacement measurements

    NASA Astrophysics Data System (ADS)

    Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.

    2014-09-01

    Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases, it is practically impossible to prevent camera motion, as for instance, when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to remove the camera movement from the scene measurements.
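    The compensation idea can be sketched with synthetic numbers: during calibration, when the scene object is known to be stationary, all apparent scene motion is camera-induced, so a mapping from reference-camera motion to apparent scene motion can be fitted and later subtracted. The per-axis scalar gain below is an illustrative simplification of that mapping.

```python
# Sketch: removing camera-induced apparent motion using a reference camera.
# A scalar gain per axis is an assumed, simplified form of the pixel-space
# mapping described above; all data here are synthetic.

def fit_gain(ref, scene):
    """Least-squares scalar gain g minimizing sum of (scene - g*ref)^2."""
    return sum(r * s for r, s in zip(ref, scene)) / sum(r * r for r in ref)

# Calibration: the camera shakes while the scene object is stationary, so
# all apparent scene motion is camera-induced (assumed gain 1.5 px/px).
ref_cal = [0.5, -1.2, 2.0, -0.7, 1.1]
scene_cal = [1.5 * r for r in ref_cal]
g = fit_gain(ref_cal, scene_cal)

# Measurement: the object really moves by 3.0 px while the camera shakes.
ref_meas = 0.8
scene_meas = 1.5 * ref_meas + 3.0
corrected = scene_meas - g * ref_meas
print(g, corrected)
```

The corrected value recovers the true object displacement even though the raw scene measurement mixed object and camera motion.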

  17. Utilizing Commercial Hardware and Open Source Computer Vision Software to Perform Motion Capture for Reduced Gravity Flight

    NASA Technical Reports Server (NTRS)

    Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth

    2016-01-01

    Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and National Space Biomedical Research Institute (NSBRI) funded researchers by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed utilizing open source computer vision code with commercial off the shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means to produce quantitative comparison motion capture kinematic data. Additionally, data such as the exercise volume required for small spaces such as the Orion capsule can be determined.
METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera three-dimensional reconstruction. Utilizing OpenCV, via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces using commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove the geometric distortion of the lens and sensor (specific to each individual camera). A set of high contrast markers was placed on the exercising subject; safety also necessitated that they be soft in case they became detached during parabolic flight, so small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration, sweeping a wand through all camera scenes simultaneously, was also performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in side-by-side comparison with a traditional motion capture system and also on a parabolic flight.
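    One step of the pipeline, computing a marker's 2D centroid after thresholding, can be illustrated with plain NumPy (OpenCV offers equivalents via image moments). The frame, blob and threshold below are made-up values for demonstration.

    ```python
    import numpy as np

    def marker_centroid(gray, threshold):
        """Intensity-weighted centroid (row, col) of pixels above threshold."""
        mask = gray > threshold
        w = gray * mask                      # keep only bright marker pixels
        total = w.sum()
        rows, cols = np.indices(gray.shape)  # per-pixel coordinate grids
        return (rows * w).sum() / total, (cols * w).sum() / total

    frame = np.zeros((8, 8))
    frame[2:4, 5:7] = 1.0                    # a synthetic 2x2 "marker" blob
    r, c = marker_centroid(frame, 0.5)
    ```

    Centroids from each camera view, paired with the calibration parameters, feed the multi-camera 3D reconstruction the abstract describes.
    
    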

  18. A new method to acquire 3-D images of a dental cast

    NASA Astrophysics Data System (ADS)

    Li, Zhongke; Yi, Yaxing; Zhu, Zhen; Li, Hua; Qin, Yongyuan

    2006-01-01

    This paper introduces our newly developed method to acquire three-dimensional images of a dental cast. A rotatable table, a laser knife, a mirror, a CCD camera and a personal computer make up the three-dimensional data acquisition system. The dental cast is placed on the table and the mirror is installed beside it; a linear laser is projected onto the dental cast. The CCD camera is mounted above the dental cast and captures images of the cast and its reflection in the mirror. While the table rotates, the camera records the shape of the laser streak projected on the dental cast and transmits the data to the computer. After the table completes one revolution, the computer processes the data and calculates the three-dimensional coordinates of the dental cast's surface. In the data processing procedure, artificial neural networks are employed to calibrate the lens distortion and map coordinates from the screen coordinate system to the world coordinate system. From the three-dimensional coordinates, the computer reconstructs a stereo image of the dental cast, which is essential for computer-aided diagnosis and treatment planning in orthodontics. In comparison with other systems in service, for example laser beam three-dimensional scanning systems, the characteristics of this three-dimensional data acquisition system are: a. celerity, it takes only 1 minute to scan a dental cast; b. compactness, the machinery is simple and compact; c. no blind zone, a mirror being introduced to reduce the blind zone.
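    The underlying geometry can be sketched simply: each laser profile gives radii at heights z for the current table angle θ, and stacking profiles over a full revolution yields surface points in Cartesian coordinates. The calibration details (mirror view, neural-network lens correction) are omitted here; this is only the cylindrical-to-Cartesian conversion.

    ```python
    import math

    def profile_to_points(radii_z, theta):
        """Convert one laser profile [(r, z), ...] measured at table angle
        theta (radians) into 3D Cartesian surface points (x, y, z)."""
        return [(r * math.cos(theta), r * math.sin(theta), z) for r, z in radii_z]
    ```
    
    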

  19. Three Dimensional Gait Analysis Using Wearable Acceleration and Gyro Sensors Based on Quaternion Calculations

    PubMed Central

    Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki

    2013-01-01

    This paper proposes a method for three dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial accelerometer and tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during an upright standing position, and the angular displacements were estimated afterwards using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used to construct a three dimensional wire frame animation of the volunteers during gait. Gait analysis was conducted on five volunteers, and the results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128
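    Quaternion-based integration of gyro data, the general approach the paper describes, can be sketched as below. The first-order axis-angle update is a common textbook form and an assumption here, not the authors' exact algorithm.

    ```python
    import numpy as np

    def quat_multiply(q, p):
        """Hamilton product of quaternions (w, x, y, z)."""
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = p
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def integrate_gyro(q, omega, dt):
        """One step: rotate q by the axis-angle increment omega * dt."""
        angle = np.linalg.norm(omega) * dt
        if angle < 1e-12:
            return q
        axis = omega / np.linalg.norm(omega)
        dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
        q_new = quat_multiply(q, dq)
        return q_new / np.linalg.norm(q_new)   # renormalize against drift

    q0 = np.array([1.0, 0.0, 0.0, 0.0])        # identity orientation
    q1 = integrate_gyro(q0, np.array([0.0, 0.0, np.pi / 2]), 1.0)
    ```

    Starting from the standing-pose orientation estimated from the accelerometers, repeated application of this step tracks each sensor unit's orientation through the gait cycle.
    
    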

  20. Plasma crystal dynamics measured with a three-dimensional plenoptic camera

    NASA Astrophysics Data System (ADS)

    Jambor, M.; Nosenko, V.; Zhdanov, S. K.; Thomas, H. M.

    2016-03-01

    Three-dimensional (3D) imaging of a single-layer plasma crystal was performed using a commercial plenoptic camera. To enhance the out-of-plane oscillations of particles in the crystal, the mode-coupling instability (MCI) was triggered in it by lowering the discharge power below a threshold. 3D coordinates of all particles in the crystal were extracted from the recorded videos. All three fundamental wave modes of the plasma crystal were calculated from these data. In the out-of-plane spectrum, only the MCI-induced hot spots (corresponding to the unstable hybrid mode) were resolved. The results are in agreement with theory and show that plenoptic cameras can be used to measure the 3D dynamics of plasma crystals.

  1. Plasma crystal dynamics measured with a three-dimensional plenoptic camera.

    PubMed

    Jambor, M; Nosenko, V; Zhdanov, S K; Thomas, H M

    2016-03-01

    Three-dimensional (3D) imaging of a single-layer plasma crystal was performed using a commercial plenoptic camera. To enhance the out-of-plane oscillations of particles in the crystal, the mode-coupling instability (MCI) was triggered in it by lowering the discharge power below a threshold. 3D coordinates of all particles in the crystal were extracted from the recorded videos. All three fundamental wave modes of the plasma crystal were calculated from these data. In the out-of-plane spectrum, only the MCI-induced hot spots (corresponding to the unstable hybrid mode) were resolved. The results are in agreement with theory and show that plenoptic cameras can be used to measure the 3D dynamics of plasma crystals.

  2. Design of an open-ended plenoptic camera for three-dimensional imaging of dusty plasmas

    NASA Astrophysics Data System (ADS)

    Sanpei, Akio; Tokunaga, Kazuya; Hayashi, Yasuaki

    2017-08-01

    Herein, the design of a plenoptic imaging system for three-dimensional reconstructions of dusty plasmas using an integral photography technique has been reported. This open-ended system is constructed with a multi-convex lens array and a typical reflex CMOS camera. We validated the design of the reconstruction system using known target particles. Additionally, the system has been applied to observations of fine particles floating in a horizontal, parallel-plate radio-frequency plasma. Furthermore, the system works well in the range of our dusty plasma experiment. We can identify the three-dimensional positions of dust particles from a single-exposure image obtained from one viewing port.

  3. 3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform

    NASA Astrophysics Data System (ADS)

    Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul

    2018-03-01

    This paper describes techniques and a strategy for non-rigid three-dimensional surface reconstruction of objects with cylinder-based shapes from uncalibrated two-dimensional image sequences, using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, with the aid of several digital image processing algorithms performed on the two-dimensional images captured by a digital camera in thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions, and each object is placed on a manually rotated turntable. The results show that the proposed method successfully reconstructs the three-dimensional surfaces of the objects and is practicable. The shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions are supported by an error analysis: the maximum percent error is approximately 1.4% for the height, and 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.

  4. Quantifying frontal plane knee motion during single limb squats: reliability and validity of 2-dimensional measures.

    PubMed

    Gwynne, Craig R; Curran, Sarah A

    2014-12-01

    Clinical assessment of lower limb kinematics during dynamic tasks may identify individuals who demonstrate abnormal movement patterns that may contribute to the etiology or exacerbation of knee conditions such as patellofemoral joint (PFJt) pain. The purpose of this study was to determine the reliability, validity and associated measurement error of a clinically appropriate two-dimensional (2-D) procedure for quantifying frontal plane knee alignment during single limb squats. Nine female and nine male recreationally active subjects with no history of PFJt pain had frontal plane limb alignment assessed using three-dimensional (3-D) motion analysis and digital video cameras (2-D analysis) while performing single limb squats. The association between 2-D and 3-D measures was quantified using Pearson's product correlation coefficients. Intraclass correlation coefficients (ICCs) were determined for within- and between-session reliability of 2-D data, and the standard error of measurement (SEM) was used to establish measurement error. Frontal plane limb alignment assessed with 2-D analysis demonstrated good correlation with 3-D methods (r = 0.64 to 0.78, p < 0.001). Within-session (0.86) and between-session ICCs (0.74) demonstrated good reliability for 2-D measures, and SEM scores ranged from 2° to 4°. 2-D measures have good consistency and may provide a valid measure of lower limb alignment when compared to existing 3-D methods. Assessment of lower limb kinematics using 2-D methods may be an accurate and clinically useful alternative to 3-D motion analysis when identifying individuals who demonstrate abnormal movement patterns associated with PFJt pain. Level of evidence: 2b.
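    A 2-D frontal-plane angle of the kind digitized from video can be computed from three marker coordinates. The marker set (hip, knee, ankle) and the included-angle convention below are illustrative assumptions, not the study's exact protocol.

    ```python
    import math

    def frontal_knee_angle(hip, knee, ankle):
        """Included angle (degrees) between the thigh and shank segments at
        the knee, from 2-D (x, y) marker coordinates in the frontal plane."""
        v1 = (hip[0] - knee[0], hip[1] - knee[1])     # knee -> hip (thigh)
        v2 = (ankle[0] - knee[0], ankle[1] - knee[1])  # knee -> ankle (shank)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n1 = math.hypot(*v1)
        n2 = math.hypot(*v2)
        return math.degrees(math.acos(dot / (n1 * n2)))
    ```

    A perfectly aligned limb gives 180°; frontal-plane deviation during the squat shows up as a departure from that value.
    
    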

  5. The Use Of Videography For Three-Dimensional Motion Analysis

    NASA Astrophysics Data System (ADS)

    Hawkins, D. A.; Hawthorne, D. L.; DeLozier, G. S.; Campbell, K. R.; Grabiner, M. D.

    1988-02-01

    Special video path editing capabilities, with custom hardware and software, have been developed for use in conjunction with existing video acquisition hardware and firmware. This system has simplified the task of quantifying the kinematics of human movement. A set of retro-reflective markers is secured to a subject performing a given task (i.e., walking, throwing, swinging a golf club, etc.). Multiple cameras, a video processor, and a computer workstation collect video data while the task is performed. Software has been developed to edit video files, create centroid data, and identify marker paths. Multi-camera path files are combined to form a 3D path file using the DLT (direct linear transformation) method of cinematography. A separate program converts the 3D path file into kinematic data by creating a set of local coordinate axes and performing a series of coordinate transformations from one local system to the next. The kinematic data are then displayed for appropriate review and/or comparison.
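    The DLT reconstruction step can be sketched as a small linear least-squares problem: each camera contributes two equations in the unknown 3D point, built from its 11 DLT coefficients and the observed 2-D marker coordinates. The toy cameras and coefficients below are illustrative.

    ```python
    import numpy as np

    def dlt_reconstruct(dlt_params, uv):
        """3D point from >=2 cameras. dlt_params: per-camera sequence of the
        11 DLT coefficients L1..L11 (indices 0..10); uv: per-camera (u, v)."""
        rows, rhs = [], []
        for L, (u, v) in zip(dlt_params, uv):
            # u*(L9 X + L10 Y + L11 Z + 1) = L1 X + L2 Y + L3 Z + L4, etc.
            rows.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
            rhs.append(u - L[3])
            rows.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
            rhs.append(v - L[7])
        xyz, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return xyz

    L1 = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # toy camera: u = X, v = Y
    L2 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]   # toy camera: u = X, v = Z
    xyz = dlt_reconstruct([L1, L2], [(1.0, 2.0), (1.0, 3.0)])
    ```
    
    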

  6. Cinematic camera emulation using two-dimensional color transforms

    NASA Astrophysics Data System (ADS)

    McElvain, Jon S.; Gish, Walter

    2015-02-01

    For cinematic and episodic productions, on-set look management is an important component of the creative process, and involves iterative adjustments of the set, actors, lighting and camera configuration. Instead of using the professional motion capture device to establish a particular look, the use of a smaller form factor DSLR is considered for this purpose due to its increased agility. Because the spectral response characteristics will be different between the two camera systems, a camera emulation transform is needed to approximate the behavior of the destination camera. Recently, two-dimensional transforms have been shown to provide high-accuracy conversion of raw camera signals to a defined colorimetric state. In this study, the same formalism is used for camera emulation, whereby a Canon 5D Mark III DSLR is used to approximate the behavior of a Red Epic cinematic camera. The spectral response characteristics of both cameras were measured and used to build 2D as well as 3x3 matrix emulation transforms. When tested on multispectral image databases, the 2D emulation transforms outperform their matrix counterparts, particularly for images containing highly chromatic content.
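    The baseline 3x3 matrix emulation transform can be sketched as a least-squares fit: find M so that the destination camera's RGB responses are approximated by the source camera's responses times M. The 2D transforms the paper favors are lookup-based and beyond this snippet; the data here is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    src = rng.random((500, 3))               # source-camera raw RGB samples
    M_true = np.array([[0.90, 0.10, 0.00],   # hypothetical spectral coupling
                       [0.05, 0.85, 0.10],
                       [0.00, 0.10, 0.90]])
    dst = src @ M_true.T                     # destination-camera responses

    # Least-squares fit of the 3x3 emulation matrix: dst ≈ src @ M.T
    M_fit, *_ = np.linalg.lstsq(src, dst, rcond=None)
    M_fit = M_fit.T
    ```

    Because real camera responses are not exactly related by a 3x3 matrix, residual error remains for chromatic content, which is the gap the 2D transforms close.
    
    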

  7. A three-dimensional printed patient-specific scaphoid replacement: a cadaveric study.

    PubMed

    Honigmann, Philipp; Schumacher, Ralf; Marek, Romy; Büttner, Franz; Thieringer, Florian; Haefeli, Mathias

    2018-05-01

    We present our first cadaveric test results of a three-dimensional printed patient-specific scaphoid replacement with tendon suspension, which showed normal motion behaviour and preservation of a stable scapholunate interval during physiological range of motion.

  8. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
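    The eight-parameter homography model the paper contrasts with can be sketched as a 3x3 matrix (with the last entry fixed to 1) acting on homogeneous pixel coordinates; the matrix values here are a toy example.

    ```python
    import numpy as np

    def warp_point(H, x, y):
        """Map pixel (x, y) through homography H in homogeneous coordinates."""
        p = H @ np.array([x, y, 1.0])
        return p[0] / p[2], p[1] / p[2]

    H = np.array([[1.0, 0.0, 5.0],    # toy homography: pure translation (5, -3)
                  [0.0, 1.0, -3.0],
                  [0.0, 0.0, 1.0]])
    ```

    The proposed approach instead codes global camera intrinsics/extrinsics per frame plus three plane parameters per block, so each block's induced homography follows from the scene geometry rather than eight free parameters.
    
    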

  9. Three-dimensional shape measurement and calibration for fringe projection by considering unequal height of the projector and the camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu Feipeng; Shi Hongjian; Bai Pengxiang

    In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we will study the calibration of an unequal arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of object surface is described in detail. By formula derivation and experiment, the linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method with linear least-squares fitting, which is very simple in principle and calibration. Experiments are implemented to validate the availability and reliability of the calibration method.
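    The reported linear relationship between the out-of-plane calibration coefficient and the y coordinate means the calibration reduces to fitting a slope and intercept by linear least squares, as sketched below with synthetic, hypothetical values.

    ```python
    import numpy as np

    y = np.linspace(0, 480, 25)      # image y coordinate (px)
    k = 0.002 * y + 1.5              # synthetic out-of-plane calibration coefficient

    # Linear least-squares fit k(y) = slope * y + intercept
    slope, intercept = np.polyfit(y, k, 1)
    ```

    Once fitted, the per-row coefficient k(y) converts measured phase (or correlation) values into out-of-plane height, accounting for the unequal camera/projector heights.
    
    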

  10. The Role of Motion Concepts in Understanding Non-Motion Concepts

    PubMed Central

    Khatin-Zadeh, Omid; Banaruee, Hassan; Khoshsima, Hooshang; Marmolejo-Ramos, Fernando

    2017-01-01

    This article discusses a specific type of metaphor in which an abstract non-motion domain is described in terms of a motion event. Abstract non-motion domains are inherently different from concrete motion domains. However, motion domains are used to describe abstract non-motion domains in many metaphors. Three main reasons are suggested for the suitability of motion events in such metaphorical descriptions. Firstly, motion events usually have high degrees of concreteness. Secondly, motion events are highly imageable. Thirdly, components of any motion event can be imagined almost simultaneously within a three-dimensional space. These three characteristics make motion events suitable domains for describing abstract non-motion domains, and facilitate the process of online comprehension throughout language processing. Extending the main point into the field of mathematics, this article discusses the process of transforming abstract mathematical problems into imageable geometric representations within the three-dimensional space. This strategy is widely used by mathematicians to solve highly abstract and complex problems. PMID:29240715

  11. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. Part I. Numerical model-based optimization

    PubMed Central

    Choi, Jang-Hwan; Fahrig, Rebecca; Keil, Andreas; Besier, Thor F.; Pal, Saikat; McWalter, Emily J.; Beaupré, Gary S.; Maier, Andreas

    2013-01-01

    Purpose: Human subjects in standing positions are apt to show much more involuntary motion than in supine positions. The authors aimed to simulate a complicated realistic lower body movement using the four-dimensional (4D) digital extended cardiac-torso (XCAT) phantom. The authors also investigated fiducial marker-based motion compensation methods in two-dimensional (2D) and three-dimensional (3D) space. The level of involuntary movement-induced artifacts and image quality improvement were investigated after applying each method. Methods: An optical tracking system with eight cameras and seven retroreflective markers enabled us to track involuntary motion of the lower body of nine healthy subjects holding a squat position at 60° of flexion. The XCAT-based knee model was developed using the 4D XCAT phantom and the optical tracking data acquired at 120 Hz. The authors divided the lower body in the XCAT into six parts and applied unique affine transforms to each so that the motion (6 degrees of freedom) could be synchronized with the optical markers’ location at each time frame. The control points of the XCAT were tessellated into triangles and 248 projection images were created based on intersections of each ray and monochromatic absorption. The tracking data sets with the largest motion (Subject 2) and the smallest motion (Subject 5) among the nine data sets were used to animate the XCAT knee model. The authors defined eight skin control points well distributed around the knees as pseudo-fiducial markers which functioned as a reference in motion correction. Motion compensation was done in the following ways: (1) simple projection shifting in 2D, (2) deformable projection warping in 2D, and (3) rigid body warping in 3D. Graphics hardware accelerated filtered backprojection was implemented and combined with the three correction methods in order to speed up the simulation process. 
Correction fidelity was evaluated as a function of number of markers used (4–12) and marker distribution in three scenarios. Results: Average optical-based translational motion for the nine subjects was 2.14 mm (±0.69 mm) and 2.29 mm (±0.63 mm) for the right and left knee, respectively. In the representative central slices of Subject 2, the authors observed 20.30%, 18.30%, and 22.02% improvements in the structural similarity (SSIM) index with 2D shifting, 2D warping, and 3D warping, respectively. The performance of 2D warping improved as the number of markers increased up to 12 while 2D shifting and 3D warping were insensitive to the number of markers used. The minimum required number of markers for 2D shifting, 2D warping, and 3D warping was 4–6, 12, and 8, respectively. An even distribution of markers over the entire field of view provided robust performance for all three correction methods. Conclusions: The authors were able to simulate subject-specific realistic knee movement in weight-bearing positions. This study indicates that involuntary motion can seriously degrade the image quality. The proposed three methods were evaluated with the numerical knee model; 3D warping was shown to outperform the 2D methods. The methods are shown to significantly reduce motion artifacts if an appropriate marker setup is chosen. PMID:24007156
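    The simplest of the three corrections, 2D projection shifting, can be sketched as translating each projection image by the mean marker displacement relative to a reference frame. The integer-pixel shift via np.roll below is an illustrative simplification of that idea.

    ```python
    import numpy as np

    def shift_projection(proj, marker_ref, marker_now):
        """Shift a projection image to undo the mean (row, col) motion of the
        fiducial markers relative to their reference positions."""
        d = np.mean(marker_now - marker_ref, axis=0)   # mean marker displacement
        dr, dc = int(round(d[0])), int(round(d[1]))
        return np.roll(proj, shift=(-dr, -dc), axis=(0, 1))
    ```

    2D warping replaces this single translation with a spatially varying deformation fitted to all markers, and 3D warping applies a rigid-body correction in reconstruction space.
    
    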

  12. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. Part I. Numerical model-based optimization.

    PubMed

    Choi, Jang-Hwan; Fahrig, Rebecca; Keil, Andreas; Besier, Thor F; Pal, Saikat; McWalter, Emily J; Beaupré, Gary S; Maier, Andreas

    2013-09-01

    Human subjects in standing positions are apt to show much more involuntary motion than in supine positions. The authors aimed to simulate a complicated realistic lower body movement using the four-dimensional (4D) digital extended cardiac-torso (XCAT) phantom. The authors also investigated fiducial marker-based motion compensation methods in two-dimensional (2D) and three-dimensional (3D) space. The level of involuntary movement-induced artifacts and image quality improvement were investigated after applying each method. An optical tracking system with eight cameras and seven retroreflective markers enabled us to track involuntary motion of the lower body of nine healthy subjects holding a squat position at 60° of flexion. The XCAT-based knee model was developed using the 4D XCAT phantom and the optical tracking data acquired at 120 Hz. The authors divided the lower body in the XCAT into six parts and applied unique affine transforms to each so that the motion (6 degrees of freedom) could be synchronized with the optical markers' location at each time frame. The control points of the XCAT were tessellated into triangles and 248 projection images were created based on intersections of each ray and monochromatic absorption. The tracking data sets with the largest motion (Subject 2) and the smallest motion (Subject 5) among the nine data sets were used to animate the XCAT knee model. The authors defined eight skin control points well distributed around the knees as pseudo-fiducial markers which functioned as a reference in motion correction. Motion compensation was done in the following ways: (1) simple projection shifting in 2D, (2) deformable projection warping in 2D, and (3) rigid body warping in 3D. Graphics hardware accelerated filtered backprojection was implemented and combined with the three correction methods in order to speed up the simulation process. 
Correction fidelity was evaluated as a function of number of markers used (4-12) and marker distribution in three scenarios. Average optical-based translational motion for the nine subjects was 2.14 mm (± 0.69 mm) and 2.29 mm (± 0.63 mm) for the right and left knee, respectively. In the representative central slices of Subject 2, the authors observed 20.30%, 18.30%, and 22.02% improvements in the structural similarity (SSIM) index with 2D shifting, 2D warping, and 3D warping, respectively. The performance of 2D warping improved as the number of markers increased up to 12 while 2D shifting and 3D warping were insensitive to the number of markers used. The minimum required number of markers for 2D shifting, 2D warping, and 3D warping was 4-6, 12, and 8, respectively. An even distribution of markers over the entire field of view provided robust performance for all three correction methods. The authors were able to simulate subject-specific realistic knee movement in weight-bearing positions. This study indicates that involuntary motion can seriously degrade the image quality. The proposed three methods were evaluated with the numerical knee model; 3D warping was shown to outperform the 2D methods. The methods are shown to significantly reduce motion artifacts if an appropriate marker setup is chosen.

  13. Comparison of three different techniques for camera and motion control of a teleoperated robot.

    PubMed

    Doisy, Guillaume; Ronen, Adi; Edan, Yael

    2017-01-01

    This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head-tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance, workload metrics and their evolution as the participants gained experience with the system were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  15. Method and apparatus for coherent imaging of infrared energy

    DOEpatents

    Hutchinson, D.P.

    1998-05-12

    A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting. 8 figs.

  16. Strengths and Weaknesses of a Planar Whole-Body Method of 153Sm Dosimetry for Patients with Metastatic Osteosarcoma and Comparison with Three-Dimensional Dosimetry

    PubMed Central

    Plyku, Donika; Loeb, David M.; Prideaux, Andrew R.; Baechler, Sébastien; Wahl, Richard L.; Sgouros, George

    2015-01-01

    Abstract Purpose: Dosimetric accuracy depends directly upon the accuracy of the activity measurements in tumors and organs. The authors present the methods and results of a retrospective tumor dosimetry analysis in 14 patients with a total of 28 tumors treated with high activities of 153Sm-ethylenediaminetetramethylenephosphonate (153Sm-EDTMP) for therapy of metastatic osteosarcoma using planar images, and compare the results with three-dimensional dosimetry. Materials and Methods: Analysis of phantom data provided a complete set of parameters for dosimetric calculations, including buildup factor, attenuation coefficient, and camera dead-time compensation. The latter was obtained using a previously developed methodology that accounts for the relative motion of the camera and patient during whole-body (WB) imaging. Tumor activity values calculated from the anterior and posterior views of WB planar images of patients treated with 153Sm-EDTMP for pediatric osteosarcoma were compared with the geometric mean value. The mean activities were integrated over time and tumor-absorbed doses were calculated using the software package OLINDA/EXM. Results: The authors found it necessary to employ the dead-time correction algorithm to prevent measured tumor activity half-lives from exceeding the physical decay half-life of 153Sm; measured half-lives longer than the physical value are unquestionably in error. Tumor-absorbed doses varied between 0.0022 and 0.27 cGy/MBq with an average of 0.065 cGy/MBq; however, a comparison with absorbed dose values derived from a three-dimensional analysis for the same tumors showed no correlation; moreover, the ratio of the three-dimensional absorbed dose value to the planar absorbed dose value was 2.19. From the anterior and posterior activity comparisons, the order of clinical uncertainty for activity and dose calculations from WB planar images, with the present methodology, is hypothesized to be about 70%.
Conclusion: The dosimetric results from clinical patient data indicate that absolute planar dosimetry is unreliable and dosimetry using three-dimensional imaging is preferable, particularly for tumors, except perhaps for the most sophisticated planar methods. The relative activity and patient kinetics derived from planar imaging show a greater level of reliability than the dosimetry. PMID:26560193
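    The anterior/posterior geometric-mean approach described in this record is conventionally called the conjugate-view method. A minimal sketch in Python, using a simple broad-beam attenuation model; the function name, parameters, and the illustrative dead-time factor are our assumptions, not the authors' exact implementation:

    ```python
    import math

    def conjugate_view_activity(counts_ant, counts_post, mu_eff_cm, thickness_cm,
                                sensitivity_cps_per_mbq, dead_time_factor=1.0):
        """Estimate activity (MBq) from anterior/posterior planar count rates (cps)."""
        # Geometric mean of the conjugate (anterior/posterior) count rates
        geometric_mean = math.sqrt(counts_ant * counts_post)
        # Broad-beam transmission through the patient thickness (illustrative model)
        transmission = math.exp(-mu_eff_cm * thickness_cm)
        # Attenuation and dead-time corrected count rate
        corrected = geometric_mean * dead_time_factor / math.sqrt(transmission)
        # Convert to activity via the camera sensitivity (calibration factor)
        return corrected / sensitivity_cps_per_mbq
    ```

    With zero thickness and unit sensitivity this reduces to the bare geometric mean of the two views, which is the quantity the authors compare their single-view estimates against.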

  17. Concordance of Motion Sensor and Clinician-Rated Fall Risk Scores in Older Adults.

    PubMed

    Elledge, Julie

    2017-12-01

    As the older adult population in the United States continues to grow, developing reliable, valid, and practical methods for identifying fall risk is a high priority. Falls are prevalent in older adults and contribute significantly to morbidity and mortality rates and rising health costs. Identifying at-risk older adults and intervening in a timely manner can reduce falls. Conventional fall risk assessment tools require a health professional trained in the use of each tool for administration and interpretation. Motion sensor technology, which uses three-dimensional cameras to measure patient movements, is promising for assessing older adults' fall risk because it could eliminate or reduce the need for provider oversight. The purpose of this study was to assess the concordance of fall risk scores as measured by a motion sensor device, the OmniVR Virtual Rehabilitation System, with clinician-rated fall risk scores in older adult outpatients undergoing physical rehabilitation. Three standardized fall risk assessments were administered by the OmniVR and by a clinician. Validity of the OmniVR was assessed by measuring the concordance between the two assessment methods. Stability of the OmniVR fall risk ratings was assessed by measuring test-retest reliability. The OmniVR scores showed high concordance with the clinician-rated scores and high stability over time, demonstrating comparability with provider measurements.
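    The record does not say which concordance statistic was used; one standard index for agreement between two raters (here, the OmniVR and the clinician) is Lin's concordance correlation coefficient. A minimal sketch, offered as an illustration rather than the study's actual analysis:

    ```python
    def lins_ccc(x, y):
        """Lin's concordance correlation coefficient between two score lists.

        Returns 1.0 for perfect agreement; penalises both poor correlation
        and systematic shifts between the two raters.
        """
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        vx = sum((a - mx) ** 2 for a in x) / n
        vy = sum((b - my) ** 2 for b in y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)
    ```

    Unlike Pearson's r, this coefficient drops below 1 when one rater's scores are systematically shifted relative to the other's, which is exactly the failure mode a device-versus-clinician comparison needs to detect.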

  18. Symmetry breaking motion of a vortex pair in a driven cavity

    NASA Astrophysics Data System (ADS)

    McHugh, John; Osman, Kahar; Farias, Jason

    2002-11-01

    The two-dimensional driven cavity problem with an anti-symmetric sinusoidal forcing has been found to exhibit a subcritical symmetry breaking bifurcation (Farias and McHugh, Phys. Fluids, 2002). Equilibrium solutions are either a symmetric vortex pair or an asymmetric motion. The asymmetric motion is an asymmetric vortex pair at low Reynolds numbers, but merges into a three-vortex motion at higher Reynolds numbers. The asymmetric solution is obtained by initiating the flow with a single vortex centered in the domain. Symmetric motion is obtained with no initial vortex, or a weak initial vortex. The steady three-vortex motion occurs at a Reynolds number of approximately 3000, where the symmetric vortex pair has already gone through a Hopf bifurcation. Further two-dimensional results show that forcing with two full oscillations across the top of the cavity results in two steady vortex motions, depending on initial conditions. Three-dimensional results have even more steady solutions. The results are computational and theoretical.

  19. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
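    The record does not name the three algorithms it compares; the classical baseline for this model-to-image problem is a direct linear transform (DLT) fit of the 3×4 projection matrix from six or more 2-D/3-D correspondences, from which a coarse camera position can be read off. A minimal numpy sketch under that assumption:

    ```python
    import numpy as np

    def estimate_projection_matrix(pts3d, pts2d):
        """DLT: 3x4 projection matrix from >= 6 point correspondences."""
        rows = []
        for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
            # Two homogeneous linear equations per correspondence
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        # The null vector of the stacked system is the projection matrix (up to scale)
        _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
        return vt[-1].reshape(3, 4)

    def camera_center(P):
        """Camera centre C from P @ [C; 1] = 0 (the null space of P)."""
        _, _, vt = np.linalg.svd(P)
        c = vt[-1]
        return c[:3] / c[3]
    ```

    The recovered centre gives the coarse relative position; a full initial pose would additionally decompose P into calibration and rotation, which real flight algorithms do under the noise and contrast constraints the record describes.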

  20. Joint power generation differentiates young and adult sprinters during the transition from block start into acceleration: a cross-sectional study.

    PubMed

    Debaere, Sofie; Vanwanseele, Benedicte; Delecluse, Christophe; Aerenhouts, Dirk; Hagman, Friso; Jonkers, Ilse

    2017-11-01

    The aim of this study was to investigate differences in joint power generation between well-trained adult athletes and young sprinters from block clearance to initial contact of the second stance. Eleven promising under-16 (U16) and 18 under-18 (U18) sprinters executed an explosive start action. Fourteen well-trained adult sprinters completed the exact same protocol. All athletes were equipped with 74 spherical reflective markers, while an opto-electronic motion analysis system consisting of 12 infrared cameras (250 Hz, MX3, Vicon, Oxford Metrics, UK) and 2 Kistler force plates (1,000 Hz) was used to collect the three-dimensional marker trajectories and ground reaction forces (Nexus, Vicon). Three-dimensional kinematics, kinetics, and power were calculated (OpenSim) and time-normalised from the first action after the gunshot until initial contact of the second stance after block clearance. This study showed that adult athletes rely on higher knee power generation during the first stance to induce a longer step length and therefore higher velocity. In younger athletes, power generation at the hip was more dominant.
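    Two quantities named in this record, joint power and time-normalised curves, have standard definitions in gait analysis: power is the product of joint moment and joint angular velocity, and time normalisation resamples each movement phase to 0-100%. A minimal sketch (function names and the 101-point convention are illustrative assumptions):

    ```python
    import numpy as np

    def time_normalise(signal, n_points=101):
        """Resample a movement-phase signal to 0-100% (n_points samples)."""
        old = np.linspace(0.0, 1.0, len(signal))
        new = np.linspace(0.0, 1.0, n_points)
        return np.interp(new, old, signal)

    def joint_power(moment_nm, angular_velocity_rad_s):
        """Instantaneous joint power (W): positive = generation, negative = absorption."""
        return np.asarray(moment_nm) * np.asarray(angular_velocity_rad_s)
    ```

    Time normalisation is what allows the U16, U18, and adult groups, whose phases differ in absolute duration, to be compared point by point across the block-start phase.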

  1. Three-dimensional kinematic correlates of ball velocity during maximal instep soccer kicking in males.

    PubMed

    Sinclair, Jonathan; Fewtrell, David; Taylor, Paul John; Bottoms, Lindsay; Atkins, Stephen; Hobbs, Sarah Jane

    2014-01-01

    Achieving a high ball velocity is important during soccer shooting, as it gives the goalkeeper less time to react, thus improving a player's chance of scoring. This study aimed to identify important technical aspects of kicking linked to the generation of ball velocity using regression analyses. Maximal instep kicks were obtained from 22 academy-level soccer players using a 10-camera motion capture system sampling at 500 Hz. Three-dimensional kinematics of the lower extremity segments were obtained. Regression analysis was used to identify the kinematic parameters associated with the development of ball velocity. A single biomechanical parameter, knee extension velocity of the kicking limb at ball contact (adjusted R² = 0.39, p ≤ 0.01), was obtained as a significant predictor of ball velocity. This study suggests that sagittal plane knee extension velocity is the strongest contributor to ball velocity and potentially overall kicking performance. It is conceivable, therefore, that players may benefit from exposure to coaching and strength techniques geared towards the improvement of knee extension angular velocity, as highlighted in this study.
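    The adjusted R² reported here is the ordinary coefficient of determination penalised for the number of predictors; a one-line sketch of the standard formula (the study's exact software pipeline is not specified in the record):

    ```python
    def adjusted_r_squared(r2, n, p):
        """Adjusted R^2 for a regression with n observations and p predictors."""
        return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    ```

    The penalty matters in a study like this one, where many candidate kinematic parameters are screened against a modest sample of 22 players: adding predictors always raises raw R² but can lower the adjusted value.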

  2. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with inbuilt cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness, and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare accuracy in tracking, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.

  3. Three-dimensional finite element modelling of muscle forces during mastication.

    PubMed

    Röhrle, Oliver; Pullan, Andrew J

    2007-01-01

    This paper presents a three-dimensional finite element model of human mastication. Specifically, an anatomically realistic model of the masseter muscles and associated bones is used to investigate the dynamics of chewing. A motion capture system is used to track the jaw motion of a subject chewing standard foods. The three-dimensional nonlinear deformation of the masseter muscles is calculated via the finite element method, using the jaw motion data as boundary conditions. Motion-driven muscle activation patterns and a transversely isotropic material law, defined in a muscle-fibre coordinate system, are used in the calculations. Time-force relationships are presented and analysed with respect to different tasks during mastication, e.g. opening, closing, and biting, and are also compared to a more traditional one-dimensional model. The results strongly suggest that, due to the complex arrangement of muscle force directions, modelling skeletal muscles as conventional one-dimensional lines of action might introduce a significant source of error.

  4. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    PubMed

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  5. Pose-free structure from motion using depth from motion constraints.

    PubMed

    Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G

    2011-10-01

    Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimate the geometry of a scene is to track scene features on several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving far fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point position coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE
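    The elimination chain described in this abstract starts from the standard SFM equations in projective coordinates. In common notation (our symbols, not necessarily the authors'):

    ```latex
    \lambda_{ij}\,\mathbf{x}_{ij} = K \left[\, R_j \mid \mathbf{t}_j \,\right] \mathbf{X}_i,
    \qquad i = 1,\dots,N,\quad j = 1,\dots,M,
    ```

    where $\mathbf{x}_{ij}$ are the homogeneous image coordinates of point $i$ in view $j$, $K$ is the fixed internal calibration, $(R_j, \mathbf{t}_j)$ is the pose of camera $j$, $\mathbf{X}_i$ is the homogeneous 3-D point, and $\lambda_{ij}$ is its projective depth. Eliminating the orientations $R_j$, then the camera centres, and finally all point coordinates except the depths leaves the degree-two and degree-three polynomial constraints in the $\lambda_{ij}$ that the abstract calls "depth-only equations".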

  6. Analytical and numerical construction of vertical periodic orbits about triangular libration points based on polynomial expansion relations among directions

    NASA Astrophysics Data System (ADS)

    Qian, Ying-Jing; Yang, Xiao-Dong; Zhai, Guan-Qiao; Zhang, Wei

    2017-08-01

    Inspired by the nonlinear modes concept in vibrational dynamics, the vertical periodic orbits around the triangular libration points are revisited for the Circular Restricted Three-Body Problem. The ζ-component motion is treated as the dominant motion, and the ξ- and η-component motions are treated as the slave motions. The slave motions are related to the dominant motion through approximate nonlinear polynomial expansions with respect to the ζ-position and ζ-velocity during one of the periodic orbital motions. By employing the relations among the three directions, the three-dimensional system can be reduced to a one-dimensional problem. The approximate three-dimensional vertical periodic solution can then be analytically obtained by solving the dominant motion only in the ζ-direction. To demonstrate the effectiveness of the proposed method, an accuracy study was carried out to validate the polynomial expansion (PE) method. As one of the applications, the invariant nonlinear relations in polynomial expansion form are used as constraints to obtain numerical solutions by differential correction. The nonlinear relations among the directions provide an alternative point of view to explore the overall dynamics of periodic orbits around libration points with general rules.

  7. [Effect of calcaneocuboid arthrodesis on three-dimensional kinematics of talonavicular joint].

    PubMed

    Chen, Yanxi; Yu, Guangrong; Ding, Zhuquan

    2007-03-01

    To discuss the effect of calcaneocuboid arthrodesis on the three-dimensional kinematics of the talonavicular joint and its clinical significance. In ten fresh-frozen foot specimens, the three-dimensional kinematics of the talonavicular joint were determined in neutral position, dorsiflexion, plantarflexion, adduction, abduction, inversion, and eversion by means of a three-dimensional coordinate instrument (Immersion MicroScribe G2X), before and after calcaneocuboid arthrodesis, under non-weight-bearing conditions with couple moment, bending moment, and equilibrium dynamic loading. Calcaneocuboid arthrodesis was performed on these feet in neutral position with the lateral column at normal length. A significant decrease in the three-dimensional kinematics of the talonavicular joint was observed (P < 0.01) in the cadaver model following calcaneocuboid arthrodesis. Talonavicular joint motion was diminished by 31.21% +/- 6.08% in the sagittal plane, by 51.46% +/- 7.91% in the coronal plane, and by 36.98% +/- 4.12% in the transverse plane, for an average of 41.25% +/- 6.02%. Calcaneocuboid arthrodesis limits motion of the talonavicular joint, and this disadvantage of the procedure should not be neglected.

  8. Four-dimensional (4D) tracking of high-temperature microparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui, E-mail: zwang@lanl.gov; Liu, Q.; Waganaar, W.

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.
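    The pinhole-camera reconstruction step this record describes reduces, for two calibrated views, to linear triangulation, with velocities then obtained by finite differences under the local constant-velocity approximation. A minimal numpy sketch (the 3×4 projection matrices are assumed to come from the scene calibration; names are illustrative):

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3-D point from two calibrated views.

        P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates.
        """
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # Null vector of A is the homogeneous 3-D point
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]

    def velocity(X_prev, X_next, dt):
        """Finite-difference velocity: the local constant-velocity approximation."""
        return (np.asarray(X_next) - np.asarray(X_prev)) / dt
    ```

    Applied frame by frame to each tracked microparticle, this yields the time-resolved 3-D positions and the velocity estimates quoted in the record.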

  9. Four-dimensional (4D) tracking of high-temperature microparticles

    NASA Astrophysics Data System (ADS)

    Wang, Zhehui; Liu, Q.; Waganaar, W.; Fontanese, J.; James, D.; Munsat, T.

    2016-11-01

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.

  10. Four-dimensional (4D) tracking of high-temperature microparticles

    DOE PAGES

    Wang, Zhehui; Liu, Qiuguang; Waganaar, Bill; ...

    2016-07-08

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. As a result, velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.

  11. Four-dimensional (4D) tracking of high-temperature microparticles.

    PubMed

    Wang, Zhehui; Liu, Q; Waganaar, W; Fontanese, J; James, D; Munsat, T

    2016-11-01

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.

  12. 3-D endoscopic imaging using plenoptic camera.

    PubMed

    Le, Hanh N D; Decker, Ryan; Opferman, Justin; Kim, Peter; Krieger, Axel; Kang, Jin U

    2016-06-01

    Three-dimensional endoscopic imaging using a plenoptic technique combined with an F-matching algorithm has been pursued in this study. Custom relay optics were designed to integrate a commercial surgical straight endoscope with a plenoptic camera.

  13. Thunderstorm observations from Space Shuttle

    NASA Technical Reports Server (NTRS)

    Vonnegut, B.; Vaughan, O. H., Jr.; Brook, M.

    1983-01-01

    Results of the Nighttime/Daytime Optical Survey of Lightning (NOSL) experiments done on the STS-2 and STS-4 flights are covered. During these two flights of the Space Shuttle Columbia, the astronaut teams of J. Engle and R. Truly, and K. Mattingly II and H. Hartsfield took motion pictures of thunderstorms with a 16 mm cine camera. Film taken during daylight showed interesting thunderstorm cloud formations, where individual frames taken tens of seconds apart, when viewed as stereo pairs, provided information on the three-dimensional structure of the cloud systems. Film taken at night showed clouds illuminated by lightning with discharges that propagated horizontally at speeds of up to 10^5 m/s and extended for distances on the order of 60 km or more.

  14. Dynamics and interactions of particles in a thermophoretic trap

    NASA Astrophysics Data System (ADS)

    Foster, Benjamin; Fung, Frankie; Fieweger, Connor; Usatyuk, Mykhaylo; Gaj, Anita; DeSalvo, B. J.; Chin, Cheng

    2017-08-01

    We investigate dynamics and interactions of particles levitated and trapped by the thermophoretic force in a vacuum cell. Our analysis is based on footage taken by orthogonal cameras that are able to capture the three dimensional trajectories of the particles. In contrast to spherical particles, which remain stationary at the center of the cell, here we report new qualitative features of the motion of particles with non-spherical geometry. Singly levitated particles exhibit steady spinning around their body axis and rotation around the symmetry axis of the cell. When two levitated particles approach each other, repulsive or attractive interactions between the particles are observed. Our levitation system offers a wonderful platform to study interaction between particles in a microgravity environment.

  15. PIV/HPIV Film Analysis Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV/HPIV film analysis software system was developed that calculates the 2-dimensional spatial autocorrelations of subregions of Particle Image Velocimetry (PIV) or Holographic Particle Image Velocimetry (HPIV) film recordings. The software controls three hardware subsystems including (1) a Kodak Megaplus 1.4 camera and EPIX 4MEG framegrabber subsystem, (2) an IEEE/Unidex 11 precision motion control subsystem, and (3) an Alacron I860 array processor subsystem. The software runs on an IBM PC/AT host computer running either the Microsoft Windows 3.1 or Windows 95 operating system. It is capable of processing five PIV or HPIV displacement vectors per second, and is completely automated with the exception of user input to a configuration file prior to analysis execution for update of various system parameters.
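    The core computation this record names, the 2-D spatial autocorrelation of a film subregion, is conventionally done via the FFT (Wiener-Khinchin theorem): in a double-exposed PIV recording, the autocorrelation shows a central self-correlation peak plus two displacement peaks at plus and minus the particle displacement. A minimal numpy sketch of that step (not the Alacron array-processor implementation):

    ```python
    import numpy as np

    def spatial_autocorrelation(subregion):
        """2-D spatial autocorrelation of an image subregion via the FFT."""
        f = np.fft.fft2(subregion - subregion.mean())
        acf = np.fft.ifft2(f * np.conj(f)).real
        return np.fft.fftshift(acf)  # zero lag moved to the array centre
    ```

    Locating the strongest off-centre peak of the returned array then gives the displacement vector for that subregion, which is what the software emits at its quoted rate of five vectors per second.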

  16. Experimental study of transport of a dimer on a vertically oscillating plate

    PubMed Central

    Wang, Jiao; Liu, Caishan; Ma, Daolin

    2014-01-01

    It has recently been shown that a dimer, composed of two identical spheres rigidly connected by a rod, under harmonic vertical vibration can exhibit a self-ordered transport behaviour. In this case, the mass centre of the dimer will perform a circular orbit in the horizontal plane, or a straight line if confined between parallel walls. In order to validate the numerical discoveries, we experimentally investigate the temporal evolution of the dimer's motion in both two- and three-dimensional situations. A stereoscopic vision method with a pair of high-speed cameras is adopted to perform omnidirectional measurements. All the cases studied in our experiments are also simulated using an existing numerical model. The combined investigations detail the dimer's dynamics and clearly show that its transport behaviours originate from a series of combinations of different contact states. This series is critical to our understanding of the transport properties in the dimer's motion and related self-ordered phenomena in granular systems. PMID:25383029

  17. Choosing a Motion Detector.

    ERIC Educational Resources Information Center

    Ballard, David M.

    1990-01-01

    Examines the characteristics of three types of motion detectors: Doppler radar, infrared, and ultrasonic wave, and how they are used on school buses to prevent students from being killed by their own school bus. Other safety devices cited are bus crossing arms and a camera monitor system. (MLF)

  18. Three dimensional identification card and applications

    NASA Astrophysics Data System (ADS)

    Zhou, Changhe; Wang, Shaoqing; Li, Chao; Li, Hao; Liu, Zhao

    2016-10-01

    A three-dimensional identification card, with a three-dimensional personal image displayed and stored for personal identification, is expected to be the advanced version of the present two-dimensional identification card in the future [1]. A three-dimensional identification card means that three-dimensional optical techniques are used: the personal image on the ID card is displayed in three dimensions, so that a three-dimensional personal face can be seen. The ID card also stores the three-dimensional face information in its internal electronic chip, which might be recorded using two-channel cameras, and this information can be displayed on a computer as three-dimensional images for personal identification. The three-dimensional ID card might be one interesting direction for updating the present two-dimensional card in the future, and might be widely used at airport customs and at the entrances of hotels, schools, and universities, as well as a passport for online banking, registration of online games, etc.

  19. A panning DLT procedure for three-dimensional videography.

    PubMed

    Yu, B; Koh, T J; Hay, J G

    1993-06-01

    The direct linear transformation (DLT) method [Abdel-Aziz and Karara, APS Symposium on Photogrammetry. American Society of Photogrammetry, Falls Church, VA (1971)] is widely used in biomechanics to obtain three-dimensional space coordinates from film and video records. This method has some major shortcomings when used to analyze events which take place over large areas. To overcome these shortcomings, a three-dimensional data collection method based on the DLT method, and making use of panning cameras, was developed. Several small single control volumes were combined to construct a large total control volume. For each single control volume, a regression equation (calibration equation) is developed to express each of the 11 DLT parameters as a function of camera orientation, so that the DLT parameters can then be estimated from arbitrary camera orientations. Once the DLT parameters are known for at least two cameras, and the associated two-dimensional film or video coordinates of the event are obtained, the desired three-dimensional space coordinates can be computed. In a laboratory test, five single control volumes (in a total control volume of 24.40 × 2.44 × 2.44 m³) were used to test the effect of the position of the single control volume on the accuracy of the computed three-dimensional space coordinates. Linear and quadratic calibration equations were used to test the effect of the order of the equation on the accuracy of the computed three-dimensional space coordinates. For four of the five single control volumes tested, the mean resultant errors associated with the use of the linear calibration equation were significantly larger than those associated with the use of the quadratic calibration equation. The position of the single control volume had no significant effect on the mean resultant errors in computed three-dimensional coordinates when the quadratic calibration equation was used. Under the same data collection conditions, the mean resultant errors in the computed three-dimensional coordinates associated with the panning and stationary DLT methods were 17 and 22 mm, respectively. The major advantages of the panning DLT method lie in the large image sizes obtained and in the ease with which the data can be collected. The method also has potential for use in a wide variety of contexts. The major shortcoming of the method is the large amount of digitizing necessary to calibrate the total control volume. Adaptations of the method to reduce the amount of digitizing required are being explored.
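    The reconstruction step this record builds on, solving for a 3-D point from each camera's 11 DLT parameters and the 2-D image coordinates, is a small linear least-squares problem. A minimal numpy sketch of the stationary case; in the panning variant each parameter vector L would first be evaluated from the calibration (regression) equation at the current camera orientation:

    ```python
    import numpy as np

    def dlt_reconstruct(dlt_params, image_points):
        """Least-squares 3-D point from >= 2 cameras' 11 DLT parameters.

        dlt_params: list of 11-element sequences (L1..L11) per camera.
        image_points: list of (u, v) digitized coordinates per camera.
        """
        A, b = [], []
        for L, (u, v) in zip(dlt_params, image_points):
            # u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1), rearranged
            A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
            b.append(u - L[3])
            A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
            b.append(v - L[7])
        xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return xyz
    ```

    With two cameras this gives four equations in three unknowns, so the least-squares residual also provides a rough per-point quality check on the calibration.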

  20. Software Graphical User Interface For Analysis Of Images

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn

    1992-01-01

    CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: work-station display screen, magnetic-tape drive, or hard disk. Written in C.

  1. Astronomy Demonstrations and Models.

    ERIC Educational Resources Information Center

    Eckroth, Charles A.

    Demonstrations in astronomy classes seem to be more necessary than in physics classes for three reasons. First, many of the events are very large scale and impossibly remote from human senses. Secondly, while physics courses use discussions of one- and two-dimensional motion, three-dimensional motion is the normal situation in astronomy; thus,…

  2. 3-D endoscopic imaging using plenoptic camera

    PubMed Central

    Le, Hanh N. D.; Decker, Ryan; Opferman, Justin; Kim, Peter; Krieger, Axel

    2017-01-01

    Three-dimensional endoscopic imaging using plenoptic technique combined with F-matching algorithm has been pursued in this study. A custom relay optics was designed to integrate a commercial surgical straight endoscope with a plenoptic camera. PMID:29276806

  3. The Effects of Applying Game-Based Learning to Webcam Motion Sensor Games for Autistic Students' Sensory Integration Training

    ERIC Educational Resources Information Center

    Li, Kun-Hsien; Lou, Shi-Jer; Tsai, Huei-Yin; Shih, Ru-Chu

    2012-01-01

    This study aims to explore the effects of applying game-based learning to webcam motion sensor games for autistic students' sensory integration training. The research participants were three autistic students aged from six to ten. A webcam camera as the research tool was connected to internet games to engage in motion sensor…

  4. Apparent motion determined by surface layout not by disparity or three-dimensional distance.

    PubMed

    He, Z J; Nakayama, K

    1994-01-13

    The most meaningful events ecologically, including the motion of objects, occur in relation to or on surfaces. We run along the ground, cars travel on roads, balls roll across lawns, and so on. Even though there are other motions, such as the flight of birds, it is likely that motion along surfaces is more frequent and more significant biologically. To examine whether events occurring in relation to surfaces have a preferred status in terms of visual representation, we asked whether the phenomenon of apparent motion would show a preference for motion attached to surfaces. We used a competitive three-dimensional motion paradigm and found that there is a preference to see motion between tokens placed within the same disparity plane as opposed to different planes. Supporting our surface-layout hypothesis, the effect of disparity was eliminated either by slanting the tokens so that they were all seen within the same surface plane or by inserting a single slanted background surface upon which the tokens could rest. Additionally, a highly curved stereoscopic surface led to the perception of a more circuitous motion path defined by that surface, instead of the shortest path in three-dimensional space.

  5. A Tool for the Automated Collection of Space Utilization Data: Three Dimensional Space Utilization Monitor

    NASA Technical Reports Server (NTRS)

    Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.

    2017-01-01

    Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP) and the Behavioral Health and Performance (BHP) Element are conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within the volume. NASA needs methods to unobtrusively collect NHV data without impacting crew time. Required data include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments such methods exist, yet many are obtrusive and require significant post-processing. Examples used in terrestrial settings include infrared (IR) retro-reflective marker-based motion capture, GPS sensor tracking, inertial tracking, and multi-camera methods. Due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. However, multiple technologies have not yet been applied to space operations for these purposes. Two of these are 3D Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems which allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect, or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).

  6. Spatiotemporal motion boundary detection and motion boundary velocity estimation for tracking moving objects with a moving camera: a level sets PDEs approach with concurrent camera motion compensation.

    PubMed

    Feghali, Rosario; Mitiche, Amar

    2004-11-01

    The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.
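
    The paper's Bayesian level-set functional is beyond a short sketch, but the core idea of concurrent camera-motion compensation can be illustrated with a much simpler stand-in (an assumption for illustration, not the paper's method): fit a single global affine model to the measured optical flow, and treat points with large residuals as belonging to independently moving objects.

```python
import numpy as np

def estimate_camera_flow(points, flows):
    """Least-squares fit of a global affine flow model u = A p + b.

    points: (N, 2) pixel coordinates; flows: (N, 2) measured optical flow.
    Returns the 2x2 matrix A and offset b describing camera-induced flow.
    """
    N = len(points)
    # Design matrix for the parameter vector [a11 a12 b1 a21 a22 b2]
    X = np.zeros((2 * N, 6))
    X[0::2, 0:2] = points
    X[0::2, 2] = 1.0
    X[1::2, 3:5] = points
    X[1::2, 5] = 1.0
    y = flows.reshape(-1)
    params, *_ = np.linalg.lstsq(X, y, rcond=None)
    A = np.array([[params[0], params[1]], [params[3], params[4]]])
    b = np.array([params[2], params[5]])
    return A, b

def moving_object_mask(points, flows, A, b, thresh=1.0):
    """Flag points whose flow deviates from the camera model by > thresh px."""
    residual = flows - (points @ A.T + b)
    return np.linalg.norm(residual, axis=1) > thresh
```

    In the paper the partition is instead evolved as a level-set surface in the spatiotemporal volume, which handles topology changes that a fixed threshold cannot.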

  7. MO-FG-CAMPUS-JeP3-04: Feasibility Study of Real-Time Ultrasound Monitoring for Abdominal Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Lin; Kien Ng, Sook; Zhang, Ying

    Purpose: Ultrasound is ideal for real-time monitoring in radiotherapy, with high soft-tissue contrast, non-ionizing imaging, portability, and cost effectiveness. Few studies have investigated clinical application of real-time ultrasound monitoring for abdominal stereotactic body radiation therapy (SBRT). This study aims to demonstrate the feasibility of real-time monitoring of 3D target motion using 4D ultrasound. Methods: An ultrasound probe holding system was designed to allow the clinician to freely move and lock the ultrasound probe. For the phantom study, an abdominal ultrasound phantom was secured on a 2D programmable respiratory motion stage. One side of the stage was elevated relative to the other to generate 3D motion. The motion stage made periodic breath-hold movements. Phantom movement tracked by an infrared camera was considered the ground truth. For the volunteer study, three healthy subjects underwent the same setup as for abdominal SBRT with active breath control (ABC). 4D ultrasound B-mode images were acquired for both phantom and volunteers for real-time monitoring. Ten breath-hold cycles were monitored for each experiment. For the phantom, the target motion tracked by ultrasound was compared with motion tracked by the infrared camera. For the healthy volunteers, the reproducibility of ABC breath-hold was evaluated. Results: The volunteer study showed the ultrasound system fitted well into the clinical SBRT setup. The reproducibility for 10 breath-holds was less than 2 mm in three directions for all three volunteers. For the phantom study, the motion between inspiration and expiration captured by the camera (ground truth) was 2.35±0.02 mm, 1.28±0.04 mm, and 8.85±0.03 mm in the LR, AP, and SI directions, respectively. The motion monitored by ultrasound was 2.21±0.07 mm, 1.32±0.12 mm, and 9.10±0.08 mm, respectively. The motion monitoring error in any direction was less than 0.5 mm. Conclusion: The volunteer study proved the clinical feasibility of real-time ultrasound monitoring for abdominal SBRT. The phantom and volunteer ABC studies demonstrated sub-millimeter accuracy of 3D motion monitoring.

  8. Calibration of a Hall effect displacement measurement system for complex motion analysis using a neural network.

    PubMed

    Northey, G W; Oliver, M L; Rittenhouse, D M

    2006-01-01

    Biomechanics studies often require the analysis of position and orientation. Although a variety of transducer and camera systems can be utilized, a common inexpensive alternative is the Hall effect sensor. Hall effect sensors have been used extensively for one-dimensional position analysis but their non-linear behavior and cross-talk effects make them difficult to calibrate for effective and accurate two- and three-dimensional position and orientation analysis. The aim of this study was to develop and calibrate a displacement measurement system for a hydraulic-actuation joystick used for repetitive motion analysis of heavy equipment operators. The system utilizes an array of four Hall effect sensors that are all active during any joystick movement. This built-in redundancy allows the calibration to utilize fully connected feed forward neural networks in conjunction with a Microscribe 3D digitizer. A fully connected feed forward neural network with one hidden layer containing five neurons was developed. Results indicate that the ability of the neural network to accurately predict the x, y and z coordinates of the joystick handle was good with r(2) values of 0.98 and higher. The calibration technique was found to be equally as accurate when used on data collected 5 days after the initial calibration, indicating the system is robust and stable enough to not require calibration every time the joystick is used. This calibration system allowed an infinite number of joystick orientations and positions to be found within the range of joystick motion.
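
    The abstract specifies the network architecture (four Hall-effect sensor inputs, one hidden layer of five neurons, three coordinate outputs) but not the training details; the following NumPy sketch assumes plain full-batch gradient descent on a mean-squared-error loss, which is an assumption, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_calibration_net(sensors, coords, hidden=5, lr=0.05, epochs=2000):
    """Fully connected feed-forward net with one hidden layer of `hidden`
    tanh neurons (as in the paper), mapping Hall-sensor readings to
    x, y, z coordinates. Returns a predictor function."""
    n_in, n_out = sensors.shape[1], coords.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        h = np.tanh(sensors @ W1 + b1)            # forward pass
        pred = h @ W2 + b2
        err = pred - coords                        # dLoss/dpred for MSE/2
        gW2 = h.T @ err / len(sensors); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
        gW1 = sensors.T @ dh / len(sensors); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda s: np.tanh(s @ W1 + b1) @ W2 + b2
```

    The built-in redundancy the authors describe (four sensors, all active) is what lets a net like this absorb the sensors' non-linearity and cross-talk instead of requiring an explicit physical model.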

  9. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to evaluate the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.

  10. The Proof of the ``Vortex Theory of Matter''

    NASA Astrophysics Data System (ADS)

    Moon, Russell

    2009-11-01

    According to the Vortex Theory, protons and electrons are three-dimensional holes connected by fourth-dimensional vortices. It was further theorized that when photons are absorbed then readmitted by atoms, the photon is absorbed into the proton, moves through the fourth-dimensional vortex, then reemerges back into three-dimensional space through the electron. To prove this hypothesis, an experiment was conducted using a hollow aluminum sphere containing a powerful permanent magnet suspended directly above a zinc plate. Ultraviolet light was then shined upon the zinc. The zinc emits electrons via the photoelectric effect that are attracted to the surface of the aluminum sphere. The sphere was removed from above the zinc plate and repositioned above a sensitive infrared digital camera in another room. The ball and camera were placed within a darkened box inside a Faraday cage. Light was shined upon the zinc plate and the picture taken by the camera was observed. When the light was turned on above the zinc plate in one room, the camera recorded increased light coming from the surface of the sphere within the other room; when the light was turned off, the intensity of the infrared light coming from the surface of the sphere was suddenly diminished. Five other tests were then performed to eliminate other possible explanations such as quantum-entangled electrons.

  11. The Proof of the ``Vortex Theory of Matter''

    NASA Astrophysics Data System (ADS)

    Gridnev, Konstantin; Moon, Russell; Vasiliev, Victor

    2009-11-01

    According to the Vortex Theory, protons and electrons are three-dimensional holes connected by fourth-dimensional vortices. It was further theorized that when photons are absorbed then readmitted by atoms, the photon is absorbed into the proton, moves through the fourth-dimensional vortex, then reemerges back into three-dimensional space through the electron. To prove this hypothesis, an experiment was conducted using a hollow aluminum sphere containing a powerful permanent magnet suspended directly above a zinc plate. Ultraviolet light was then shined upon the zinc. The zinc emits electrons via the photoelectric effect that are attracted to the surface of the aluminum sphere. The sphere was removed from above the zinc plate and repositioned above a sensitive infrared digital camera in another room. The ball and camera were placed within a darkened box inside a Faraday cage. Light was shined upon the zinc plate and the picture taken by the camera was observed. When the light was turned on above the zinc plate in one room, the camera recorded increased light coming from the surface of the sphere within the other room; when the light was turned off, the intensity of the infrared light coming from the surface of the sphere was suddenly diminished. Five other tests were then performed to eliminate other possible explanations such as quantum-entangled electrons.

  12. The Proof of the ``Vortex Theory of Matter''

    NASA Astrophysics Data System (ADS)

    Gridnev, Konstantin; Moon, Russell; Vasiliev, Victor

    2009-10-01

    According to the Vortex Theory, protons and electrons are three-dimensional holes connected by fourth-dimensional vortices. It was further theorized that when photons are absorbed then readmitted by atoms, the photon is absorbed into the proton, moves through the fourth-dimensional vortex, then reemerges back into three-dimensional space through the electron. To prove this hypothesis, an experiment was conducted using a hollow aluminum sphere containing a powerful permanent magnet suspended directly above a zinc plate. Ultraviolet light was then shined upon the zinc. The zinc emits electrons via the photoelectric effect that are attracted to the surface of the aluminum sphere. The sphere was removed from above the zinc plate and repositioned above a sensitive infrared digital camera in another room. The ball and camera were placed within a darkened box inside a Faraday cage. Light was shined upon the zinc plate and the picture taken by the camera was observed. When the light was turned on above the zinc plate in one room, the camera recorded increased light coming from the surface of the sphere within the other room; when the light was turned off, the intensity of the infrared light coming from the surface of the sphere was suddenly diminished. Five other tests were then performed to eliminate other possible explanations such as quantum-entangled electrons.

  13. The Proof of the ``Vortex Theory of Matter''

    NASA Astrophysics Data System (ADS)

    Moon, Russell; Gridnev, Konstantin; Vasiliev, Victor

    2010-02-01

    According to the Vortex Theory, protons and electrons are three-dimensional holes connected by fourth-dimensional vortices. It was further theorized that when photons are absorbed then readmitted by atoms, the photon is absorbed into the proton, moves through the fourth-dimensional vortex, then reemerges back into three-dimensional space through the electron. To prove this hypothesis, an experiment was conducted using a hollow aluminum sphere containing a powerful permanent magnet suspended directly above a zinc plate. Ultraviolet light was then shined upon the zinc. The zinc emits electrons via the photoelectric effect that are attracted to the surface of the aluminum sphere. The sphere was removed from above the zinc plate and repositioned above a sensitive infrared digital camera in another room. The ball and camera were placed within a darkened box inside a Faraday cage. Light was shined upon the zinc plate and the picture taken by the camera was observed. When the light was turned on above the zinc plate in one room, the camera recorded increased light coming from the surface of the sphere within the other room; when the light was turned off, the intensity of the infrared light coming from the surface of the sphere was suddenly diminished. Five other tests were then performed to eliminate other possible explanations such as quantum-entangled electrons.

  14. Incompressible Deformation Estimation Algorithm (IDEA) from Tagged MR Images

    PubMed Central

    Liu, Xiaofeng; Abd-Elmoniem, Khaled Z.; Stone, Maureen; Murano, Emi Z.; Zhuo, Jiachen; Gullapalli, Rao P.; Prince, Jerry L.

    2013-01-01

    Measuring the three-dimensional motion of muscular tissues, e.g., the heart or the tongue, using magnetic resonance (MR) tagging is typically carried out by interpolating the two-dimensional motion information measured on orthogonal stacks of images. The incompressibility of muscle tissue is an important constraint on the reconstructed motion field and can significantly help to counter the sparsity and incompleteness of the available motion information. Previous methods utilizing this fact produced incompressible motions with limited accuracy. In this paper, we present an incompressible deformation estimation algorithm (IDEA) that reconstructs a dense representation of the three-dimensional displacement field from tagged MR images and the estimated motion field is incompressible to high precision. At each imaged time frame, the tagged images are first processed to determine components of the displacement vector at each pixel relative to the reference time. IDEA then applies a smoothing, divergence-free, vector spline to interpolate velocity fields at intermediate discrete times such that the collection of velocity fields integrate over time to match the observed displacement components. Through this process, IDEA yields a dense estimate of a three-dimensional displacement field that matches our observations and also corresponds to an incompressible motion. The method was validated with both numerical simulation and in vivo human experiments on the heart and the tongue. PMID:21937342
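
    A dense displacement field reconstructed by an incompressible method such as IDEA should be, to first order, divergence-free. A simple numerical check on a field sampled on a regular grid (a generic sketch using central differences, not IDEA's divergence-free vector splines) might look like:

```python
import numpy as np

def divergence(u, v, w, spacing=1.0):
    """Numerical divergence of a 3D vector field on a regular grid.

    u, v, w: field components, each an (nx, ny, nz) array.
    An incompressible (volume-preserving, to first order) field has
    divergence ~ 0 everywhere."""
    du = np.gradient(u, spacing, axis=0)
    dv = np.gradient(v, spacing, axis=1)
    dw = np.gradient(w, spacing, axis=2)
    return du + dv + dw
```

    A rigid rotation passes this check (divergence zero), while a uniform expansion fails it (divergence 3), which is exactly the distinction the incompressibility constraint enforces.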

  15. Pre-clinical and clinical walking kinematics in female breeding pigs with lameness: A nested case-control cohort study.

    PubMed

    Stavrakakis, S; Guy, J H; Syranidis, I; Johnson, G R; Edwards, S A

    2015-07-01

    Gait profiles were investigated in a cohort of female pigs experiencing a lameness period prevalence of 29% over 17 months. Gait alterations before and during visually diagnosed lameness were evaluated to identify the best quantitative clinical lameness indicators and early predictors for lameness. Pre-breeding gilts (n = 84) were recruited to the study over a period of 6 months, underwent motion capture every 5 weeks and, depending on their age at entry to the study, were followed for up to three successive gestations. Animals were subject to motion capture in each parity at 8 weeks of gestation and on the day of weaning (28 days postpartum). During kinematic motion capture, the pigs walked on the same concrete walkway and an array of infra-red cameras was used to collect three-dimensional coordinate data of reflective skin markers attached to the head, trunk and limb anatomical landmarks. Of 24 pigs diagnosed with lameness, 19 had preclinical gait records, whilst 18 had a motion capture while lame. Depending on availability, data from one or two preclinical motion captures 1-11 months prior to lameness and on the day of lameness were analysed. Lameness was best detected and evaluated using relative spatiotemporal gait parameters, especially vertical head displacement and asymmetric stride phase timing. Irregularity in the step-to-stride length ratio was elevated (deviation ≥ 0.03) in young pigs which presented lameness in later life (odds ratio 7.2-10.8). Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Three-dimensional control of Tetrahymena pyriformis using artificial magnetotaxis

    NASA Astrophysics Data System (ADS)

    Hyung Kim, Dal; Seung Soo Kim, Paul; Agung Julius, Anak; Jun Kim, Min

    2012-01-01

    We demonstrate three-dimensional control with the eukaryotic cell Tetrahymena pyriformis (T. pyriformis) using two sets of Helmholtz coils for xy-plane motion and a single electromagnet for z-direction motion. T. pyriformis is modified to have artificial magnetotaxis with internalized magnetite. To track the cell's z-axis position, intensity profiles of non-motile cells at varying distances from the focal plane are used. During vertical motion along the z-axis, the intensity difference is used to determine the position of the cell. The three-dimensional control of the live microorganism T. pyriformis as a cellular robot shows great potential for practical applications in microscale tasks, such as target transport and cell therapy.
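
    The intensity-to-depth scheme described above can be sketched as a one-dimensional interpolation against a calibration table built from non-motile cells. The table values below are hypothetical, and the intensity-depth relation is assumed monotonic (e.g. restricted to one side of the focal plane, since defocus blurring is symmetric about focus):

```python
import numpy as np

# Hypothetical calibration: mean image intensity of a non-motile cell
# at known distances above the focal plane (micrometres).
calib_z = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
calib_intensity = np.array([200.0, 160.0, 120.0, 90.0, 70.0])

def z_from_intensity(intensity):
    """Interpolate axial position from a cell's observed intensity.
    np.interp requires increasing x, so interpolate on reversed arrays."""
    return np.interp(intensity, calib_intensity[::-1], calib_z[::-1])
```

    During vertical motion, each frame's measured cell intensity is passed through this lookup to recover the z-coordinate used by the controller.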

  17. The X-Factor: an evaluation of common methods used to analyse major inter-segment kinematics during the golf swing.

    PubMed

    Brown, Susan J; Selbie, W Scott; Wallace, Eric S

    2013-01-01

    A common biomechanical feature of a golf swing, described in various ways in the literature, is the interaction between the thorax and pelvis, often termed the X-Factor. There is no consistent method used within golf biomechanics literature however to calculate these segment interactions. The purpose of this study was to examine X-factor data calculated using three reported methods in order to determine the similarity or otherwise of the data calculated using each method. A twelve-camera three-dimensional motion capture system was used to capture the driver swings of 19 participants and a subject specific three-dimensional biomechanical model was created with the position and orientation of each model estimated using a global optimisation algorithm. Comparison of the X-Factor methods showed significant differences for events during the swing (P < 0.05). Data for each kinematic measure were derived as a times series for all three methods and regression analysis of these data showed that whilst one method could be successfully mapped to another, the mappings between methods are subject dependent (P <0.05). Findings suggest that a consistent methodology considering the X-Factor from a joint angle approach is most insightful in describing a golf swing.

  18. Three-Dimensional Localization of Single Molecules for Super-Resolution Imaging and Single-Particle Tracking

    PubMed Central

    von Diezmann, Alex; Shechtman, Yoav; Moerner, W. E.

    2017-01-01

    Single-molecule super-resolution fluorescence microscopy and single-particle tracking are two imaging modalities that illuminate the properties of cells and materials on spatial scales down to tens of nanometers, or with dynamical information about nanoscale particle motion in the millisecond range, respectively. These methods generally use wide-field microscopes and two-dimensional camera detectors to localize molecules to much higher precision than the diffraction limit. Given the limited total photons available from each single-molecule label, both modalities require careful mathematical analysis and image processing. Much more information can be obtained about the system under study by extending to three-dimensional (3D) single-molecule localization: without this capability, visualization of structures or motions extending in the axial direction can easily be missed or confused, compromising scientific understanding. A variety of methods for obtaining both 3D super-resolution images and 3D tracking information have been devised, each with their own strengths and weaknesses. These include imaging of multiple focal planes, point-spread-function engineering, and interferometric detection. These methods may be compared based on their ability to provide accurate and precise position information of single-molecule emitters with limited photons. To successfully apply and further develop these methods, it is essential to consider many practical concerns, including the effects of optical aberrations, field-dependence in the imaging system, fluorophore labeling density, and registration between different color channels. Selected examples of 3D super-resolution imaging and tracking are described for illustration from a variety of biological contexts and with a variety of methods, demonstrating the power of 3D localization for understanding complex systems. PMID:28151646
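
    The photon-budget argument above has a standard back-of-envelope form: the centroid of N detected photons drawn from a point-spread function of width s has a standard error of roughly s divided by the square root of N. The sketch below gives only this photon-limited term; background and pixelation contributions are omitted.

```python
import math

def localization_precision(psf_sigma_nm, n_photons):
    """Photon-limited localization precision (nm): the standard error of
    the mean of n_photons samples from a PSF of standard deviation
    psf_sigma_nm is psf_sigma_nm / sqrt(n_photons). Background noise and
    finite pixel size add further terms, omitted in this sketch."""
    return psf_sigma_nm / math.sqrt(n_photons)
```

    With a diffraction-limited PSF width of ~250 nm and a few thousand photons per localization, this already lands in the tens-of-nanometres regime quoted in the review.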

  19. Three-Dimensional Pathology Specimen Modeling Using "Structure-From-Motion" Photogrammetry: A Powerful New Tool for Surgical Pathology.

    PubMed

    Turchini, John; Buckland, Michael E; Gill, Anthony J; Battye, Shane

    2018-05-30

    Three-dimensional (3D) photogrammetry is a method of image-based modeling in which data points in digital images, taken from offset viewpoints, are analyzed to generate a 3D model. This modeling technique has been widely used in the context of geomorphology and artificial imagery, but has yet to be used within the realm of anatomic pathology. We describe the application of a 3D photogrammetry system capable of producing high-quality 3D digital models and its uses in routine surgical pathology practice as well as medical education. We modeled specimens received in the 2 participating laboratories. The capture and photogrammetry process was automated using user control software, a digital single-lens reflex camera, and a digital turntable, to generate a 3D model with the output in a PDF file. The entity demonstrated in each specimen was well demarcated and easily identified. Adjacent normal tissue could also be easily distinguished. Colors were preserved. The concave shapes of any cystic structures or normal convex rounded structures were discernible. Surgically important regions were identifiable. Macroscopic 3D modeling of specimens can be achieved through Structure-From-Motion photogrammetry technology and can be applied quickly and easily in routine laboratory practice. There are numerous advantages to the use of 3D photogrammetry in pathology, including improved clinicopathologic correlation for the surgeon and enhanced medical education, revolutionizing the digital pathology museum with virtual reality environments and 3D-printed specimen models.

  20. International Congress on High Speed Photography and Photonics, 17th, Pretoria, Republic of South Africa, Sept. 1-5, 1986, Proceedings. Volumes 1 & 2

    NASA Astrophysics Data System (ADS)

    McDowell, M. W.; Hollingworth, D.

    1986-01-01

    The present conference discusses topics in mining applications of high speed photography, ballistic, shock wave and detonation studies employing high speed photography, laser and X-ray diagnostics, biomechanical photography, millisec-microsec-nanosec-picosec-femtosec photographic methods, holographic, schlieren, and interferometric techniques, and videography. Attention is given to such issues as the pulse-shaping of ultrashort optical pulses, the performance of soft X-ray streak cameras, multiple-frame image tube operation, moire-enlargement motion-raster photography, two-dimensional imaging with tomographic techniques, photochron TV streak cameras, and streak techniques in detonics.

  1. Analysis of the Pendular and Pitch Motions of a Driven Three-Dimensional Pendulum

    ERIC Educational Resources Information Center

    Findley, T.; Yoshida, S.; Norwood, D. P.

    2007-01-01

    A three-dimensional pendulum, modelled after the Laser Interferometer Gravitational-Wave Observatory's suspended optics, was constructed to investigate the pendulum's dynamics due to suspension point motion. In particular, we were interested in studying the pendular-pitch energy coupling. Determination of the pendulum's Q value (the quality factor…

  2. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    PubMed Central

    Ci, Wenyan; Huang, Yingping

    2016-01-01

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle and robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508
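
    The full 6-DoF Levenberg-Marquardt estimation is too long to sketch here, but the linearity that makes flow-based ego-motion tractable is easy to show for a translation-only case. This is a deliberate simplification of the paper's model (unit focal length, normalized image coordinates, depths assumed known from the stereo rig):

```python
import numpy as np

def estimate_translation(points, flows, depths):
    """Least-squares translational ego-motion from optical flow.

    points: (N, 2) normalized image coordinates; flows: (N, 2) optical
    flow; depths: (N,) scene depths. Under a pure-translation pinhole
    model with focal length 1:
        u = (x*tz - tx)/Z,   v = (y*tz - ty)/Z,
    which is linear in t = (tx, ty, tz), so t follows by least squares.
    """
    N = len(points)
    A = np.zeros((2 * N, 3))
    A[0::2, 0] = -1.0 / depths
    A[0::2, 2] = points[:, 0] / depths
    A[1::2, 1] = -1.0 / depths
    A[1::2, 2] = points[:, 1] / depths
    b = flows.reshape(-1)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```

    Adding rotation keeps the flow model linear in the six motion parameters for small inter-frame motion, which is why an iterative scheme such as Levenberg-Marquardt converges quickly in the paper's formulation.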

  3. Analyzing octopus movements using three-dimensional reconstruction.

    PubMed

    Yekutieli, Yoram; Mitelman, Rea; Hochner, Binyamin; Flash, Tamar

    2007-09-01

    Octopus arms, as well as other muscular hydrostats, are characterized by a very large number of degrees of freedom and a rich motion repertoire. Over the years, several attempts have been made to elucidate the interplay between the biomechanics of these organs and their control systems. Recent developments in electrophysiological recordings from both the arms and brains of behaving octopuses mark significant progress in this direction. The next stage is relating these recordings to the octopus arm movements, which requires an accurate and reliable method of movement description and analysis. Here we describe a semiautomatic computerized system for 3D reconstruction of an octopus arm during motion. It consists of two digital video cameras and a PC computer running custom-made software. The system overcomes the difficulty of extracting the motion of smooth, nonrigid objects in poor viewing conditions. Some of the trouble is explained by the problem of light refraction in recording underwater motion. Here we use both experiments and simulations to analyze the refraction problem and show that accurate reconstruction is possible. We have used this system successfully to reconstruct different types of octopus arm movements, such as reaching and bend initiation movements. Our system is noninvasive and does not require attaching any artificial markers to the octopus arm. It may therefore be of more general use in reconstructing other nonrigid, elongated objects in motion.
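
    The refraction problem the authors analyze has a familiar paraxial limit for a flat air-water interface, which shows why uncorrected two-camera reconstruction misplaces underwater points. This is a small-angle sketch of the effect, not the paper's full analysis:

```python
def apparent_depth(true_depth, n_water=1.33, n_air=1.0):
    """Paraxial (small-angle) approximation for viewing through a flat
    air-water interface: an underwater point at depth d appears at
    d * n_air / n_water, so naive triangulation underestimates depth."""
    return true_depth * n_air / n_water

def corrected_depth(measured_depth, n_water=1.33, n_air=1.0):
    """Invert the paraxial model to recover true depth from the
    uncorrected (apparent) reconstruction."""
    return measured_depth * n_water / n_air
```

    At larger viewing angles the apparent position also shifts laterally, which is why the authors verify by simulation that accurate reconstruction remains possible for their camera geometry.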

  4. Motion capture for human motion measuring by using single camera with triangle markers

    NASA Astrophysics Data System (ADS)

    Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi

    2005-12-01

    This study aims to realize motion capture for measuring 3D human motion using a single camera. Although motion capture using multiple cameras is widely used in the sports, medical, and engineering fields, an optical motion capture method with one camera has not been established. In this paper, the authors achieve 3D motion capture using one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration methods produce 3D coordinate transformation parameters and a lens distortion parameter with the Modified DLT method. The triangle markers enable calculation of the coordinate value in the depth direction of the camera coordinate system. Experiments of 3D position measurement using the MMC in a measurement space 2 m on each side showed an average error in the measured center of gravity of a triangle marker of less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by placing a triangle marker on each human joint, the MMC was able to capture walking, standing-up, and bending and stretching motions. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate the position of a marker by measuring its velocity was proposed in order to improve the accuracy of the MMC.
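
    The role of the known side lengths is to fix the depth scale that a single camera cannot otherwise recover. For a marker held fronto-parallel to the image plane this reduces to similar triangles; the sketch below illustrates that idea only, not the paper's Modified-DLT pipeline, which also handles marker tilt and lens distortion:

```python
def marker_depth(f_px, side_m, side_px):
    """Pinhole similar-triangles depth estimate for a fronto-parallel
    marker: a side of true length L (metres) projecting to l pixels
    under focal length f (pixels) lies at depth Z = f * L / l."""
    return f_px * side_m / side_px
```

    Because a triangle provides three such sides (and their relative foreshortening), the marker's orientation, not just its depth, becomes recoverable from a single view.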

  5. An anti-disturbing real time pose estimation method and system

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Zhang, Xiao-hu

    2011-08-01

    Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object requires known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer when features are lost. This paper investigates pose estimation when some, or even all, of the known features are invisible. First, the known features are tracked to calculate the pose in the current and the next image. Second, unknown but good features to track are automatically detected in both images. Third, those unknown features that lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of the rigid object, the 3D positions of these unknown features can be solved from the object's pose at the two moments and the features' 2D locations in the two images, except in two cases: first, when the camera and object have no relative motion and camera parameters such as focal length and principal point do not change between the two moments; second, when the two images share no scene or contain no matched features. Finally, because the formerly unknown features are now known, pose estimation can continue in subsequent images despite the loss of the original known features, by repeating the process above. The robustness of pose estimation with different feature detectors, namely Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT), and Speeded Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is used to extract and match hundreds of features for real-time pose estimation, which is impractical on a Central Processing Unit (CPU).
Compared with other pose estimation methods, this method can estimate the pose between camera and object when some or even all known features are lost, and achieves a fast response time thanks to GPU parallel computing. The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generality, and can also play an important role in autonomous navigation, positioning, and robotics in unknown environments. Simulation and experimental results demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and meets real-time requirements. Theoretical analysis and experiments show the method is reasonable and efficient.
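The recovery of 3D information for unknown matched features, given the object's pose at two moments, can be sketched as a standard linear (DLT) triangulation from two views. This is a generic textbook step, not the authors' implementation; the 3x4 projection matrices `P1` and `P2` are assumed known from the tracked pose.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 projection matrices at the two moments (from the known pose);
    x1, x2: (u, v) pixel coordinates of the matched feature in each image.
    Returns the Euclidean 3D point.
    """
    # Each view contributes two homogeneous linear constraints on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean
```

Once such points are triangulated, they can serve as the "known" features for pose estimation in subsequent frames, as the abstract describes.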

  6. Three-dimensional hysteresis compensation enhances accuracy of robotic artificial muscles

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Simeonov, Anthony; Yip, Michael C.

    2018-03-01

    Robotic artificial muscles are compliant and can generate straight contractions. They are increasingly popular as driving mechanisms for robotic systems. However, their strain and tension force often vary simultaneously under varying loads and inputs, resulting in three-dimensional hysteretic relationships. This three-dimensional hysteresis makes it difficult to estimate how robotic artificial muscles behave and how to make them perform designed motions. This study proposes an approach to driving robotic artificial muscles to generate designed motions and forces by modeling and compensating for their three-dimensional hysteresis. The proposed scheme captures the nonlinearity by embedding two hysteresis models. The effectiveness of the model is confirmed by testing three popular robotic artificial muscles. Inverting the proposed model allows us to compensate for the hysteresis among the temperature surrogate, contraction length, and tension force of a shape memory alloy (SMA) actuator. Feedforward control of an SMA-actuated robotic bicep is demonstrated. This approach can be generalized to other robotic artificial muscles, thus enabling muscle-powered machines to generate desired motions.
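The kind of hysteresis model embedded in such compensation schemes can be illustrated with the classical play (backlash) operator, the standard building block of Prandtl-Ishlinskii hysteresis models. The abstract does not specify which models the authors embed, so this is only a generic sketch with an assumed band half-width `width`:

```python
def play_operator(inputs, width, y0=0.0):
    """Discrete play (backlash) operator, a basic hysteresis building block.

    inputs: sequence of input samples; width: half-width of the hysteresis
    band; y0: initial output. The output only moves once the input has
    travelled more than `width` away from it, producing a hysteresis loop.
    """
    y, out = y0, []
    for u in inputs:
        # Clamp the previous output into the band [u - width, u + width].
        y = min(max(y, u - width), u + width)
        out.append(y)
    return out
```

Superposing several play operators with different widths and weights yields a Prandtl-Ishlinskii model, which is invertible and therefore usable for the feedforward compensation the abstract describes.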

  7. The study of integration about measurable image and 4D production

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun

    2008-12-01

    In this paper, we create the geospatial data of three-dimensional (3D) modeling by the combination of digital photogrammetry and digital close-range photogrammetry. For large-scale geographical background, we make the establishment of DEM and DOM combination of three-dimensional landscape model based on the digital photogrammetry which uses aerial image data to make "4D" (DOM: Digital Orthophoto Map, DEM: Digital Elevation Model, DLG: Digital Line Graphic and DRG: Digital Raster Graphic) production. For the range of building and other artificial features which the users are interested in, we realize that the real features of the three-dimensional reconstruction adopting the method of the digital close-range photogrammetry can come true on the basis of following steps : non-metric cameras for data collection, the camera calibration, feature extraction, image matching, and other steps. At last, we combine three-dimensional background and local measurements real images of these large geographic data and realize the integration of measurable real image and the 4D production.The article discussed the way of the whole flow and technology, achieved the three-dimensional reconstruction and the integration of the large-scale threedimensional landscape and the metric building.

  8. A new position measurement system using a motion-capture camera for wind tunnel tests.

    PubMed

    Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok

    2013-09-13

    Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements.
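The frequency domain decomposition (FDD) step used above for system identification works as follows: at each frequency line, the cross power spectral density (PSD) matrix of the measured responses is decomposed by SVD; peaks in the first singular value locate natural frequencies, and the corresponding singular vectors approximate the mode shapes. A minimal sketch of that core step, not the authors' code:

```python
import numpy as np

def fdd_first_singular(psd_matrices):
    """Core of frequency domain decomposition (FDD).

    psd_matrices: sequence of cross-PSD matrices, one per frequency line
    (each n_channels x n_channels). Returns the first singular value at each
    line (whose peaks indicate natural frequencies) and the corresponding
    first singular vectors (approximate mode shapes).
    """
    s1, modes = [], []
    for G in psd_matrices:
        U, S, _ = np.linalg.svd(G)
        s1.append(S[0])
        modes.append(U[:, 0])
    return np.array(s1), np.array(modes)
```

In practice the PSD matrices are estimated from the measured displacement time histories (e.g., by Welch's method) before this decomposition is applied.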

  9. A New Position Measurement System Using a Motion-Capture Camera for Wind Tunnel Tests

    PubMed Central

    Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok

    2013-01-01

    Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements. PMID:24064600

  10. Volume three-dimensional flow measurements using wavelength multiplexing.

    PubMed

    Moore, Andrew J; Smith, Jason; Lawson, Nicholas J

    2005-10-01

    Optically distinguishable seeding particles that emit light in a narrow bandwidth, and a combination of bandwidths, were prepared by encapsulating quantum dots. The three-dimensional components of the particles' displacement were measured within a volume of fluid with particle tracking velocimetry (PTV). Particles are multiplexed to different hue bands in the camera images, enabling an increased seeding density and (or) fewer cameras to be used, thereby increasing the measurement spatial resolution and (or) reducing optical access requirements. The technique is also applicable to two-phase flow measurements with PTV or particle image velocimetry, where each phase is uniquely seeded.

  11. Unmanned aerial vehicle-based structure from motion biomass inventory estimates

    NASA Astrophysics Data System (ADS)

    Bedell, Emily; Leslie, Monique; Fankhauser, Katie; Burnett, Jonathan; Wing, Michael G.; Thomas, Evan A.

    2017-04-01

    Riparian vegetation restoration efforts require cost-effective, accurate, and replicable impact assessments. We present a method that uses an unmanned aerial vehicle (UAV) equipped with a GoPro digital camera to collect photogrammetric data of a 0.8-ha riparian restoration. A three-dimensional point cloud was created from the photos using "structure from motion" techniques. The point cloud was analyzed and compared to traditional, ground-based monitoring techniques. Ground-truth data were collected on 6.3% of the study site and averaged across the entire site to report stem densities (stems/ha) in three height classes. The project site was divided into four analysis sections: one for deriving the parameters used in the UAV data analysis, with the remaining three sections reserved for method validation. Comparing the ground-truth data to the UAV-generated data produced an overall error of 21.6% and an R2 value of 0.98. A Bland-Altman analysis indicated a 95% probability that the UAV stems/section result will be within 61 stems/section of the ground-truth data. The ground-truth data are reported with an 80% confidence interval of ±1032 stems/ha; thus, the UAV estimates fall well within this confidence interval.
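The Bland-Altman analysis mentioned above compares two measurement methods via the mean and spread of their paired differences. A minimal sketch of the bias and 95% limits-of-agreement calculation (the generic formula, not the authors' script):

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two methods,
    e.g. UAV-derived vs. ground-truth stems per section.

    a, b: paired measurements from the two methods.
    Returns (bias, (lower_limit, upper_limit)).
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)          # systematic offset between methods
    sd = statistics.stdev(diffs)           # spread of the disagreement
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

About 95% of paired differences are expected to fall between the two limits, which is the sense in which the abstract bounds the UAV result to within 61 stems/section.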

  12. Underwater image mosaicking and visual odometry

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott

    2017-05-01

    This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU), an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e., featureless) nature of video data obtained underwater, which can make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration, and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
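Steps (i)-(iii) above amount to scaling a frame-to-frame pixel shift by the ground sample distance and the frame rate. A back-of-the-envelope sketch under a downward-looking pinhole-camera and flat-seabed assumption (the helper name and parameters are ours, not the paper's):

```python
def ground_velocity(dx_px, dy_px, altitude_m, f_px, fps):
    """Convert a frame-to-frame pixel shift into vehicle velocity over the
    seabed, for a downward-looking pinhole camera at known altitude.

    dx_px, dy_px: translation components of the frame-to-frame transform (px);
    altitude_m: height above the seabed (m); f_px: focal length (px);
    fps: frame rate. Returns (vx, vy) in m/s.
    """
    metres_per_pixel = altitude_m / f_px   # ground sample distance
    vx = dx_px * metres_per_pixel * fps
    vy = dy_px * metres_per_pixel * fps
    return vx, vy
```

For example, a 10 px shift per frame at 5 m altitude with a 1000 px focal length and 30 fps corresponds to 1.5 m/s over the bottom.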

  13. Evaluation of a stereoscopic camera-based three-dimensional viewing workstation for ophthalmic surgery.

    PubMed

    Bhadri, Prashant R; Rowley, Adrian P; Khurana, Rahul N; Deboer, Charles M; Kerns, Ralph M; Chong, Lawrence P; Humayun, Mark S

    2007-05-01

    To evaluate the effectiveness of a prototype stereoscopic camera-based viewing system (Digital Microsurgical Workstation, three-dimensional (3D) Vision Systems, Irvine, California, USA) for anterior and posterior segment ophthalmic surgery. Institution-based prospective study. Anterior and posterior segment surgeons performed designated standardized tasks on porcine eyes after training on prosthetic plastic eyes. Both anterior and posterior segment surgeons were able to complete tasks requiring minimal or moderate stereoscopic viewing. The results indicate that the system provides improved ergonomics. Improvements in key viewing-performance areas would further enhance its value over a conventional operating microscope. The performance of the prototype system is not on par with the planned commercial system. With continued development of this technology, the three-dimensional system may become a novel viewing system for ophthalmic surgery with improved ergonomics relative to traditional microscopic viewing.

  14. Modeling Cometary Coma with a Three Dimensional, Anisotropic Multiple Scattering Distributed Processing Code

    NASA Technical Reports Server (NTRS)

    Luchini, Chris B.

    1997-01-01

    Development of camera and instrument simulations for space exploration requires the development of scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.

  15. Parallel phase-sensitive three-dimensional imaging camera

    DOEpatents

    Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.

    2007-09-25

    An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g., a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera, utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes an electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having as inputs the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
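The parallel phase-delayed modulators described above implement the standard multi-phase demodulation idea: with reference signals delayed by 0°, 90°, 180°, and 270°, the phase of the returned light, and hence the range, follows from an arctangent of sample differences. This is a textbook sketch, not the patented circuit, and the sign convention of the samples is an assumption:

```python
import math

def four_bucket_range(i0, i90, i180, i270, f_mod, c=299_792_458.0):
    """Recover phase and range from four correlation samples taken with
    reference signals delayed by 0, 90, 180 and 270 degrees.

    f_mod: modulation frequency (Hz); c: speed of light (m/s).
    Assumes samples of the form i_k = cos(phase - theta_k).
    """
    phase = math.atan2(i90 - i270, i0 - i180) % (2 * math.pi)
    # Two-way travel: range = c * phase / (4 * pi * f_mod).
    rng = c * phase / (4 * math.pi * f_mod)
    return phase, rng
```

The unambiguous range of such a scheme is c / (2 f_mod), after which the recovered phase wraps around.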

  16. Stereoscopic camera and viewing systems with undistorted depth presentation and reduced or eliminated erroneous acceleration and deceleration perceptions, or with perceptions produced or enhanced for special effects

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1991-01-01

    Methods for providing stereoscopic image presentation and stereoscopic configurations using stereoscopic viewing systems having converged or parallel cameras may be set up to reduce or eliminate erroneously perceived accelerations and decelerations by proper selection of parameters, such as an image magnification factor, q, and intercamera distance, 2w. For converged cameras, q is selected so that Ve - qwl = 0 (that is, q = Ve/wl), where V is the camera distance, e is half the interocular distance of an observer, w is half the intercamera distance, and l is the actual distance from the first nodal point of each camera to the convergence point; for parallel cameras, q is selected to be equal to e/w. While converged cameras cannot be set up to provide fully undistorted three-dimensional views, they can be set up to provide a linear relationship between real and apparent depth and thus minimize erroneously perceived accelerations and decelerations for three sagittal planes, x = -w, x = 0, and x = +w, which are indicated to the observer. Parallel cameras can be set up to provide fully undistorted three-dimensional views by controlling the location of the observer and by magnification and shifting of left and right images. In addition, the teachings of this disclosure can be used to provide methods of stereoscopic image presentation and stereoscopic camera configurations that produce a nonlinear relation between perceived and real depth, and that erroneously produce or enhance perceived accelerations and decelerations in order to provide special effects for entertainment, training, or educational purposes.
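The two selection rules for the magnification factor q quoted above can be collected into one small helper (a hypothetical function following those conditions, not part of the patent):

```python
def stereo_magnification(e, w, V=None, l=None):
    """Image magnification factor q for a stereoscopic rig.

    Parallel cameras:   q = e / w
    Converged cameras:  Ve - q*w*l = 0  =>  q = V*e / (w*l)

    e: half the observer's interocular distance; w: half the intercamera
    distance; V: camera distance; l: distance from each camera's first nodal
    point to the convergence point (V and l only for the converged case).
    """
    if V is None or l is None:
        return e / w                # parallel configuration
    return V * e / (w * l)          # converged configuration
```

For instance, with e = 32.5 mm and w = 50 mm, parallel cameras call for q = 0.65.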

  17. Two-Dimensional Motions of Rockets

    ERIC Educational Resources Information Center

    Kang, Yoonhwan; Bae, Saebyok

    2007-01-01

    We analyse the two-dimensional motions of rockets for various types of rocket thrust, air friction, and gravitation by using a suitable representation of the rocket equation and numerical calculation. The slope shapes of the rocket trajectories are discussed for the three types of rocket engines. Unlike the projectile motions, the…

  18. Three-dimensional dynamics of scientific balloon systems in response to sudden gust loadings. [including a computer program user manual

    NASA Technical Reports Server (NTRS)

    Dorsey, D. R., Jr.

    1975-01-01

    A mathematical model was developed of the three-dimensional dynamics of a high-altitude scientific research balloon system perturbed from its equilibrium configuration by an arbitrary gust loading. The platform is modelled as a system of four coupled pendula, and the equations of motion were developed in the Lagrangian formalism assuming a small-angle approximation. Three-dimensional pendulation, torsion, and precessional motion due to Coriolis forces are considered. Aerodynamic and viscous damping effects on the pendulatory and torsional motions are included. A general model of the gust field incident upon the balloon system was developed. The digital computer simulation program is described, and a guide to its use is given.

  19. Spatial Disorientation in Gondola Centrifuges Predicted by the Form of Motion as a Whole in 3-D

    PubMed Central

    Holly, Jan E.; Harmon, Katharine J.

    2009-01-01

    INTRODUCTION During a coordinated turn, subjects can misperceive tilts. Subjects accelerating in tilting-gondola centrifuges without external visual reference underestimate the roll angle, and underestimate more when backward-facing than when forward-facing. In addition, during centrifuge deceleration, the perception of pitch can include tumble while paradoxically maintaining a fixed perceived pitch angle. The goal of the present research was to test two competing hypotheses: (1) that components of motion are perceived relatively independently and then combined to form a three-dimensional perception, and (2) that perception is governed by familiarity of motions as a whole in three dimensions, with components depending more strongly on the overall shape of the motion. METHODS Published experimental data were used from existing tilting-gondola centrifuge studies. The two hypotheses were implemented formally in computer models, and centrifuge acceleration and deceleration were simulated. RESULTS The second, whole-motion oriented, hypothesis better predicted subjects' perceptions, including the forward-backward asymmetry and the paradoxical tumble upon deceleration. Important was the predominant stimulus at the beginning of the motion as well as the familiarity of centripetal acceleration. CONCLUSION Three-dimensional perception is better predicted by taking into account familiarity with the form of three-dimensional motion. PMID:19198199

  20. Three-dimensional analysis of cervical spine segmental motion in rotation.

    PubMed

    Zhao, Xiong; Wu, Zi-Xiang; Han, Bao-Jun; Yan, Ya-Bo; Zhang, Yang; Lei, Wei

    2013-06-20

    The movements of the cervical spine during head rotation are too complicated to measure using conventional radiography or computed tomography (CT) techniques. In this study, we measure three-dimensional segmental motion of cervical spine rotation in vivo using a non-invasive measurement technique. Sixteen healthy volunteers underwent three-dimensional CT of the cervical spine during head rotation. Occiput (Oc) - T1 reconstructions were created for volunteers in each of three positions: supine, and maximum left and right rotation of the head with respect to the torso. Segmental motions were calculated using Euler angles and volume-merge methods in the three major planes. Mean maximum axial rotation of the cervical spine to one side ranged from 1.6° to 38.5° across levels. Coupled lateral bending opposite to the axial rotation was observed in the upper cervical levels, while in the subaxial cervical levels it was observed in the same direction as the axial rotation. Coupled extension was observed in the cervical levels of C5-T1, while coupled flexion was observed in the cervical levels of Oc-C5. The three-dimensional cervical segmental motions in rotation were accurately measured with this non-invasive technique. These findings will be helpful as a basis for understanding cervical spine movement in rotation and under abnormal conditions. The presented data also provide baseline segmental motions for the design of prostheses for the cervical spine.
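Segmental rotations of this kind are typically obtained by decomposing the relative rotation matrix between two vertebrae into Euler angles. A generic Z-Y-X decomposition is sketched below; the study's exact rotation sequence may differ:

```python
import math

def euler_zyx(R):
    """Extract (yaw, pitch, roll) Euler angles in radians from a 3x3
    rotation matrix, using the Z-Y-X convention.

    R: rotation matrix as a nested list or array, R[row][col].
    Assumes pitch away from +/-90 degrees (no gimbal lock).
    """
    yaw = math.atan2(R[1][0], R[0][0])     # rotation about z (axial rotation)
    pitch = math.asin(-R[2][0])            # rotation about y
    roll = math.atan2(R[2][1], R[2][2])    # rotation about x
    return yaw, pitch, roll
```

Applied to the rotation relating a vertebra's orientation in the neutral and rotated CT positions, the three angles correspond to motion in the three major planes.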

  1. Pixel-wise deblurring imaging system based on active vision for structural health monitoring at a speed of 100 km/h

    NASA Astrophysics Data System (ADS)

    Hayakawa, Tomohiko; Moko, Yushi; Morishita, Kenta; Ishikawa, Masatoshi

    2018-04-01

    In this paper, we propose a pixel-wise deblurring imaging (PDI) system based on active vision for compensation of the blur caused by high-speed one-dimensional motion between a camera and a target. The optical axis is controlled by back-and-forth motion of a galvanometer mirror to compensate for this motion. The high-spatial-resolution images captured by our system during high-speed motion are useful for efficient and precise visual inspection, such as visually judging abnormal parts of a tunnel surface to prevent accidents; hence, we applied the PDI system to structural health monitoring. By mounting the system onto a vehicle in a tunnel, we confirmed significant improvement in image quality for submillimeter black-and-white stripes and real tunnel-surface cracks at a speed of 100 km/h.
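To keep the optical axis locked on the passing surface, the galvanometer mirror must rotate at half the target's angular rate, since the reflected beam turns at twice the mirror angle. A back-of-the-envelope estimate under a small-angle assumption; the helper and numbers are illustrative, not the authors' controller design:

```python
def mirror_rate(speed_mps, distance_m):
    """Galvanometer-mirror angular rate (rad/s) needed to track a target
    passing at speed_mps at range distance_m.

    Small-angle assumption; the reflected beam rotates at twice the mirror
    rate, hence the factor of 1/2.
    """
    target_angular_rate = speed_mps / distance_m
    return target_angular_rate / 2.0
```

At 100 km/h (about 27.8 m/s) and 5 m from the tunnel wall, the mirror must sweep at roughly 2.8 rad/s during each compensation stroke.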

  2. Improving accuracy of Plenoptic PIV using two light field cameras

    NASA Astrophysics Data System (ADS)

    Thurow, Brian; Fahringer, Timothy

    2017-11-01

    Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set-up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a 2nd camera improves the accuracy in all three directions and nearly eliminates any differences between them. This improvement is illustrated using both synthetic and real experiments conducted on a vortex ring using both one and two plenoptic cameras.

  3. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    PubMed

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  4. Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.

    PubMed

    Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi

    2014-10-20

    We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected into four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.

  5. Machine Learning of Three-dimensional Right Ventricular Motion Enables Outcome Prediction in Pulmonary Hypertension: A Cardiac MR Imaging Study.

    PubMed

    Dawes, Timothy J W; de Marvao, Antonio; Shi, Wenzhe; Fletcher, Tristan; Watson, Geoffrey M J; Wharton, John; Rhodes, Christopher J; Howard, Luke S G E; Gibbs, J Simon R; Rueckert, Daniel; Cook, Stuart A; Wilkins, Martin R; O'Regan, Declan P

    2017-05-01

    Purpose To determine if patient survival and mechanisms of right ventricular failure in pulmonary hypertension could be predicted by using supervised machine learning of three-dimensional patterns of systolic cardiac motion. Materials and Methods The study was approved by a research ethics committee, and participants gave written informed consent. Two hundred fifty-six patients (143 women; mean age ± standard deviation, 63 years ± 17) with newly diagnosed pulmonary hypertension underwent cardiac magnetic resonance (MR) imaging, right-sided heart catheterization, and 6-minute walk testing with a median follow-up of 4.0 years. Semiautomated segmentation of short-axis cine images was used to create a three-dimensional model of right ventricular motion. Supervised principal components analysis was used to identify patterns of systolic motion that were most strongly predictive of survival. Survival prediction was assessed by using difference in median survival time and area under the curve with time-dependent receiver operating characteristic analysis for 1-year survival. Results At the end of follow-up, 36% of patients (93 of 256) died, and one underwent lung transplantation. Poor outcome was predicted by a loss of effective contraction in the septum and free wall, coupled with reduced basal longitudinal motion. When added to conventional imaging and hemodynamic, functional, and clinical markers, three-dimensional cardiac motion improved survival prediction (area under the receiver operating characteristic curve, 0.73 vs 0.60, respectively; P < .001) and provided greater differentiation according to difference in median survival time between high- and low-risk groups (13.8 vs 10.7 years, respectively; P < .001). Conclusion A machine-learning survival model that uses three-dimensional cardiac motion predicts outcome independent of conventional risk factors in patients with newly diagnosed pulmonary hypertension. 
Online supplemental material is available for this article.

  6. Validation of the Microsoft Kinect® camera system for measurement of lower extremity jump landing and squatting kinematics.

    PubMed

    Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher

    2016-01-01

    Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, overhead squat, and single leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, but these values improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury risk populations.
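The intraclass correlation coefficients used for agreement testing can be computed from a two-way ANOVA decomposition of the subjects-by-systems table. Below is the Shrout-Fleiss ICC(2,1) absolute-agreement form as a generic sketch; the paper does not state which ICC form it used:

```python
import statistics

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement ICC(2,1).

    data: n_subjects rows x k_raters columns (e.g., Kinect vs. marker-based
    peak angles per participant). Standard Shrout-Fleiss formula.
    """
    n, k = len(data), len(data[0])
    grand = statistics.mean(v for row in data for v in row)
    row_means = [statistics.mean(r) for r in data]
    col_means = [statistics.mean(c) for c in zip(*data)]
    # Two-way ANOVA sums of squares.
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_tot = sum((v - grand) ** 2 for row in data for v in row)
    ss_err = ss_tot - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A value of 1 indicates perfect absolute agreement between the two systems; values above roughly 0.75 are commonly read as good agreement.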

  7. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

    Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.

  8. Linear momentum, angular momentum and energy in the linear collision between two balls

    NASA Astrophysics Data System (ADS)

    Hanisch, C.; Hofmann, F.; Ziese, M.

    2018-01-01

    In an experiment in the basic physics laboratory, kinematic motion processes were analysed. The motion was recorded with a standard video camera at frame rates from 30 to 240 fps, and the videos were processed using video-analysis software. Video detection was used to analyse the symmetric one-dimensional collision between two balls. Conservation of linear and angular momentum leads to a crossover from rolling to sliding directly after the collision. By varying the rolling radius, the system could be tuned from a regime in which the balls move away from each other after the collision to one in which they re-collide.
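For the one-dimensional collision itself, conservation of linear momentum (together with kinetic energy, in the elastic limit) fixes the post-collision translational velocities. A minimal sketch of that textbook step; the rolling-to-sliding crossover analysed in the paper additionally involves angular momentum and friction:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a one-dimensional elastic collision,
    from conservation of linear momentum and kinetic energy.

    m1, m2: masses; v1, v2: pre-collision velocities.
    Returns (u1, u2), the post-collision velocities.
    """
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2
```

For equal masses the balls simply exchange velocities, which is the symmetric case studied in the experiment.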

  9. Development of a Remote Accessibility Assessment System through three-dimensional reconstruction technology.

    PubMed

    Kim, Jong Bae; Brienza, David M

    2006-01-01

    A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. Characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we performed an accuracy assessment to verify that commercial hardware and software can construct accurate 3-D models, analyzing the accuracy of dimensional measurements in a virtual environment and comparing dimensional measurements from 3-D models created with four cameras/settings. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.

  10. The Effect of Soft and Rigid Cervical Collars on Head and Neck Immobilization in Healthy Subjects.

    PubMed

    Barati, Kourosh; Arazpour, Mokhtar; Vameghi, Roshanak; Abdoli, Ali; Farmani, Farzad

    2017-06-01

    Whiplash injury is a prevalent and often destructive injury of the cervical column, which can lead to serious neck pain. Many approaches have been suggested for the treatment of whiplash injury, including anti-inflammatory drugs, manipulation, supervised exercise, and cervical collars. Cervical collars are generally divided into two groups: soft and rigid collars. The present study aimed to compare the effect of soft and rigid cervical collars on immobilizing head and neck motion. Many studies have investigated the effect of collars on neck motion. Rigid collars have been shown to provide more immobilization in the sagittal and transverse planes compared with soft collars. However, according to some studies, soft and rigid collars provide the same range of motion in the frontal plane. Twenty-nine healthy subjects aged 18-26 participated in this study. Data were collected using a three-dimensional motion analysis system and six infrared cameras. Eight markers, each weighing 4.4 g and measuring 2 cm², were used to record kinematic data. As the data were normally distributed, a paired t-test was used for statistical analyses. The level of significance was set at α=0.01. All motion significantly decreased when subjects used soft collars (p < 0.01). According to the obtained data, of the six motions tested, flexion showed the maximum (39%) and lateral rotation the minimum (11%) immobilization with soft collars. Rigid collars caused maximum immobilization in flexion (59%) and minimum immobilization in lateral rotation (18%) and limited all motion much more than the soft collars. This study showed that different cervical collars have different effects on neck motion. Rigid and soft cervical collars used in the present study limited the neck motion in both directions. Rigid collars contributed to significantly more immobilization in all directions.

  11. Multi-Angle Snowflake Camera Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuefer, Martin; Bailey, J.

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10º angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  12. Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting

    NASA Astrophysics Data System (ADS)

    Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang

    In this paper we studied the geometry of a three-dimensional tableau from a single realist painting - Scott Fraser's Three way vanitas (2006). The tableau contains a carefully chosen complex arrangement of objects including a moth, an egg, a cup, a strand of string, a glass of water, a bone, and a hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror and symmetric to the artist's viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and the images in three plane mirrors depicted within the painting.

  13. Estimation of skeletal movement of human locomotion from body surface shapes using dynamic spatial video camera (DSVC) and 4D human model.

    PubMed

    Saito, Toshikuni; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Hayashibe, Mitsuhiro; Otake, Yoshito

    2006-01-01

    We have been developing a DSVC (Dynamic Spatial Video Camera) system to measure and observe human locomotion quantitatively and freely. A 4D (four-dimensional) human model with detailed skeletal structure, joints, muscles, and motor functionality has been built. The purpose of our research was to estimate skeletal movements from body surface shapes using DSVC and the 4D human model. For this purpose, we constructed a body surface model of a subject and resized the standard 4D human model to match the geometrical features of the subject's body surface model. Software that integrates the DSVC system and the 4D human model, and allows dynamic skeletal state analysis from body surface movement data, was also developed. We applied the developed system in dynamic skeletal state analysis of a lower limb in motion and were able to visualize the motion using the geometrically resized standard 4D human model.

  14. Rapid Measurement of Tectonic Deformation Using Structure-from-Motion

    NASA Astrophysics Data System (ADS)

    Pickering, A.; DeLong, S.; Lienkaemper, J. J.; Hecker, S.; Prentice, C. S.; Schwartz, D. P.; Sickler, R. R.

    2016-12-01

    Rapid collection and distribution of accurate surface slip data after earthquakes can support emergency response, help coordinate scientific response, and constrain coseismic slip that can be rapidly overprinted by postseismic slip, or eliminated as evidence of surface deformation is repaired or obscured. Analysis of earthquake deformation can be achieved quickly, repeatedly and inexpensively with the use of Structure-from-Motion (SfM) photogrammetry. Traditional methods of measuring surface slip (e.g. manual measurement with tape measures) have proven inconsistent and irreproducible, and sophisticated methods such as laser scanning require specialized equipment and longer field time. Here we present a simple, cost-effective workflow for rapid, three-dimensional imaging and measurement of features affected by earthquake rupture. As part of a response drill performed by the USGS and collaborators on May 11, 2016, geologists documented offset cultural features along the creeping Hayward Fault in northern California, in simulation of a surface-rupturing earthquake. We present several photo collections from smart phones, tablets, and DSLR cameras from a number of locations along the fault collected by users with a range of experience. Using professionally calibrated photogrammetric scale bars we automatically and accurately scale our 3D models to 1 mm accuracy for precise measurement in three dimensions. We then generate scaled 3D point clouds and extract offsets from manual measurement and multiple linear regression for comparison with collected terrestrial scanner data. These results further establish dense photo collection and SfM processing as an important, low-cost, rapid means of quantifying surface deformation in the critical hours after a surface-rupturing earthquake and emphasize that researchers with minimal training can rapidly collect three-dimensional data that can be used to analyze and archive the surface effects of damaging earthquakes.
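    Scaling an SfM model with calibrated scale bars reduces, at its simplest, to one ratio between a known physical length and the same length measured in the arbitrary-unit model. The sketch below illustrates that single-bar case only; it is not the USGS workflow (which uses multiple professionally calibrated bars and a least-squares fit), and the endpoint labeling is assumed.

```python
import numpy as np

def scale_model(points, bar_ends_model, bar_length_true):
    """Scale an arbitrary-unit SfM point cloud to metric units using one
    scale bar whose two endpoints are identified in the model."""
    p0, p1 = (np.asarray(e, dtype=float) for e in bar_ends_model)
    s = bar_length_true / np.linalg.norm(p1 - p0)  # metres per model unit
    return np.asarray(points, dtype=float) * s
```

    With several bars, one would average (or least-squares fit) the per-bar factors, and the residual spread across bars gives a direct check on the quoted millimetre-level accuracy.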

  15. On a novel low cost high accuracy experimental setup for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Discetti, Stefano; Ianiro, Andrea; Astarita, Tommaso; Cardone, Gennaro

    2013-07-01

    This work deals with the critical aspects related to cost reduction of a Tomo PIV setup and to the bias errors introduced in the velocity measurements by the coherent motion of the ghost particles. The proposed solution consists of using two independent imaging systems composed of three (or more) low speed single frame cameras, which can be up to ten times cheaper than double shutter cameras with the same image quality. Each imaging system is used to reconstruct a particle distribution in the same measurement region, relative to the first and the second exposure, respectively. The reconstructed volumes are then interrogated by cross-correlation in order to obtain the measured velocity field, as in the standard tomographic PIV implementation. Moreover, differently from tomographic PIV, the ghost particle distributions of the two exposures are uncorrelated, since their spatial distribution is camera orientation dependent. For this reason, the proposed solution promises more accurate results, without the bias effect of the coherent ghost particles motion. Guidelines for the implementation and the application of the present method are proposed. The performance is assessed with a parametric study on synthetic experiments. The proposed low cost system produces a much lower modulation than an equivalent three-camera system. Furthermore, the potential accuracy improvement using the Motion Tracking Enhanced MART (Novara et al 2010 Meas. Sci. Technol. 21 035401) is much higher than in the case of the standard implementation of tomographic PIV.
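
    The volume interrogation step ("interrogated by cross-correlation") is, at its core, a search for the correlation peak between two reconstructed intensity volumes. A minimal FFT-based, integer-voxel version is sketched below for illustration; production Tomo-PIV codes add interrogation windows, sub-voxel peak fitting, and iterative volume deformation.

```python
import numpy as np

def displacement_3d(vol_a, vol_b):
    """Integer-voxel shift of vol_b relative to vol_a, found as the peak
    of the circular FFT cross-correlation (no sub-voxel fit)."""
    corr = np.fft.ifftn(np.conj(np.fft.fftn(vol_a)) * np.fft.fftn(vol_b)).real
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    shape = np.array(vol_a.shape)
    wrap = shift > shape // 2   # map large circular shifts to negative lags
    shift[wrap] -= shape[wrap]
    return shift
```

    Because ghost particles in the two independently reconstructed volumes sit at different (camera-orientation-dependent) positions, they contribute only to the correlation noise floor rather than to a coherent displaced peak, which is the accuracy argument made in the abstract.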

  16. System and method for generating motion corrected tomographic images

    DOEpatents

    Gleason, Shaun S [Knoxville, TN; Goddard, Jr., James S.

    2012-05-01

    A method and related system for generating motion-corrected tomographic images includes the steps of illuminating a region of interest (ROI) to be imaged, the ROI being part of an unrestrained live subject and having at least three spaced-apart optical markers thereon. Simultaneous images of the markers are acquired from a first and a second camera at different angles. Motion data comprising the 3D position and orientation of the markers relative to an initial reference position are then calculated. Motion-corrected tomographic data are then obtained from the ROI using the motion data, and motion-corrected tomographic images are generated therefrom.
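
    Recovering 3D pose from three or more tracked markers is typically posed as a least-squares rigid-body fit, for which the Kabsch/Procrustes SVD solution is standard. The sketch below shows that generic step only; it is not the patented system's actual implementation.

```python
import numpy as np

def rigid_pose(ref, cur):
    """Least-squares rotation R and translation t with cur_i ≈ R @ ref_i + t,
    given matched marker coordinates as (n, 3) arrays (Kabsch algorithm)."""
    ref = np.asarray(ref, dtype=float)
    cur = np.asarray(cur, dtype=float)
    rc, cc = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - rc).T @ (cur - cc)          # covariance of centred sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ rc
    return R, t
```

    Applying the inverse of each frame's (R, t) to the projection geometry is what allows tomographic data from an unrestrained subject to be rebinned as if the head had stayed at the reference position.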

  17. Monitoring lava-dome growth during the 2004-2008 Mount St. Helens, Washington, eruption using oblique terrestrial photography

    USGS Publications Warehouse

    Major, J.J.; Dzurisin, D.; Schilling, S.P.; Poland, Michael P.

    2009-01-01

    We present an analysis of lava dome growth during the 2004–2008 eruption of Mount St. Helens using oblique terrestrial images from a network of remotely placed cameras. This underutilized monitoring tool augmented more traditional monitoring techniques, and was used to provide a robust assessment of the nature, pace, and state of the eruption and to quantify the kinematics of dome growth. Eruption monitoring using terrestrial photography began with a single camera deployed at the mouth of the volcano's crater during the first year of activity. Analysis of those images indicates that the average lineal extrusion rate decayed approximately logarithmically from about 8 m/d to about 2 m/d (± 2 m/d) from November 2004 through December 2005, and suggests that the extrusion rate fluctuated on time scales of days to weeks. From May 2006 through September 2007, imagery from multiple cameras deployed around the volcano allowed determination of 3-dimensional motion across the dome complex. Analysis of the multi-camera imagery shows spatially differential, but remarkably steady to gradually slowing, motion, from about 1–2 m/d from May through October 2006, to about 0.2–1.0 m/d from May through September 2007. In contrast to the fluctuations in lineal extrusion rate documented during the first year of eruption, dome motion from May 2006 through September 2007 was monotonic (± 0.10 m/d) to gradually slowing on time scales of weeks to months. The ability to measure spatial and temporal rates of motion of the effusing lava dome from oblique terrestrial photographs provided a significant, and sometimes the sole, means of identifying and quantifying dome growth during the eruption, and it demonstrates the utility of using frequent, long-term terrestrial photography to monitor and study volcanic eruptions.

  18. Accuracy of three-dimensional seismic ground response analysis in time domain using nonlinear numerical simulations

    NASA Astrophysics Data System (ADS)

    Liang, Fayun; Chen, Haibing; Huang, Maosong

    2017-07-01

    To support appropriate use of nonlinear ground response analysis in engineering practice, a three-dimensional soil column with a distributed mass system and a time-domain numerical analysis were implemented on the OpenSees simulation platform. The mesh of the three-dimensional soil column was designed to satisfy the specified maximum frequency. The layered soil column was divided into multiple sub-soils with different viscous damping matrices according to shear velocity where the soil properties differed significantly. A combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs was needed to confirm the applicability of the nonlinear seismic ground motion response analysis procedures in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil column finite element method was verified by dynamic centrifuge model testing under different peak earthquake accelerations. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis are suitable for the requirements of engineering practice.

  19. Analysis of the Three-Dimensional Vector FAÇADE Model Created from Photogrammetric Data

    NASA Astrophysics Data System (ADS)

    Kamnev, I. S.; Seredovich, V. A.

    2017-12-01

    The results of an accuracy assessment for the creation of a three-dimensional vector model of a building façade are described. In the analysis, three-dimensional vector façade models created from photogrammetric and terrestrial laser scanning data were compared analytically. The three-dimensional model built from TLS point clouds was taken as the reference. In the course of the experiment, the model to be analyzed was superimposed on the reference one, coordinates were measured, and deviations between corresponding model points were determined. The accuracy of the three-dimensional model obtained from non-metric digital camera images was estimated, and the façade surface areas with the maximum deviations were identified.

  20. Robot-assisted general surgery.

    PubMed

    Hazey, Jeffrey W; Melvin, W Scott

    2004-06-01

    With the initiation of laparoscopic techniques in general surgery, we have seen a significant expansion of minimally invasive techniques in the last 16 years. More recently, robotic-assisted laparoscopy has moved into the general surgeon's armamentarium to address some of the shortcomings of laparoscopic surgery. AESOP (Computer Motion, Goleta, CA) addressed the issue of visualization as a robotic camera holder. With the introduction of the ZEUS robotic surgical system (Computer Motion), the ability to remotely operate laparoscopic instruments became a reality. US Food and Drug Administration approval in July 2000 of the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA) further defined the ability of a robotic-assist device to address limitations in laparoscopy. This includes a significant improvement in instrument dexterity, dampening of natural hand tremors, three-dimensional visualization, ergonomics, and camera stability. As experience with robotic technology increased and its applications to advanced laparoscopic procedures have become more understood, more procedures have been performed with robotic assistance. Numerous studies have shown equivalent or improved patient outcomes when robotic-assist devices are used. Initially, robotic-assisted laparoscopic cholecystectomy was deemed safe, and now robotics has been shown to be safe in foregut procedures, including Nissen fundoplication, Heller myotomy, gastric banding procedures, and Roux-en-Y gastric bypass. These techniques have been extrapolated to solid-organ procedures (splenectomy, adrenalectomy, and pancreatic surgery) as well as robotic-assisted laparoscopic colectomy. In this chapter, we review the evolution of robotic technology and its applications in general surgical procedures.

  1. Three-dimensional particle tracking velocimetry using dynamic vision sensors

    NASA Astrophysics Data System (ADS)

    Borer, D.; Delbruck, T.; Rösgen, T.

    2017-12-01

    A fast-flow visualization method is presented based on tracking neutrally buoyant soap bubbles with a set of neuromorphic cameras. The "dynamic vision sensors" register only the changes in brightness with very low latency, capturing fast processes at a low data rate. The data consist of a stream of asynchronous events, each encoding the corresponding pixel position, the time instant of the event and the sign of the change in logarithmic intensity. The work uses three such synchronized cameras to perform 3D particle tracking in a medium sized wind tunnel. The data analysis relies on Kalman filters to associate the asynchronous events with individual tracers and to reconstruct the three-dimensional path and velocity based on calibrated sensor information.
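
    Event-based tracking hinges on a predictor that can absorb measurements arriving at irregular time stamps. The constant-velocity Kalman filter below is a generic illustration of that idea for a 2D image-plane track with state [x, y, vx, vy] and variable dt; the noise parameters are hypothetical, and the real system additionally handles event-to-tracer association and fuses three calibrated cameras into 3D paths.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for asynchronous
    position events; state is [x, y, vx, vy]."""

    def __init__(self, x0, y0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0        # large initial uncertainty
        self.q = q                       # process-noise scale (assumed)
        self.R = np.eye(2) * r           # measurement noise (assumed)

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt           # x += vx*dt, y += vy*dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * dt * np.eye(4)

    def update(self, z):
        H = np.zeros((2, 4))
        H[0, 0] = H[1, 1] = 1.0          # events measure position only
        y = np.asarray(z, dtype=float) - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P
```

    The innovation y also serves as the association test: an event whose innovation is improbably large under S is assigned to a different tracer (or rejected as noise) rather than used for the update.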

  2. Real-time Awake Animal Motion Tracking System for SPECT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon

    Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur for each set of images. The system using the three cameras automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to the scanner operation. The system has undergone testing using both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.

  3. Using Three-Dimensional Interactive Graphics To Teach Equipment Procedures.

    ERIC Educational Resources Information Center

    Hamel, Cheryl J.; Ryan-Jones, David L.

    1997-01-01

    Focuses on how three-dimensional graphical and interactive features of computer-based instruction can enhance learning and support human cognition during technical training of equipment procedures. Presents guidelines for using three-dimensional interactive graphics to teach equipment procedures based on studies of the effects of graphics, motion,…

  4. Zebrafish response to a robotic replica in three dimensions

    PubMed Central

    Ruberto, Tommaso; Mwaffo, Violet; Singh, Sukhgewanpreet; Neri, Daniele

    2016-01-01

    As zebrafish emerge as a species of choice for the investigation of biological processes, a number of experimental protocols are being developed to study their social behaviour. While live stimuli may elicit varying response in focal subjects owing to idiosyncrasies, tiredness and circadian rhythms, video stimuli suffer from the absence of physical input and rely only on two-dimensional projections. Robotics has been recently proposed as an alternative approach to generate physical, customizable, effective and consistent stimuli for behavioural phenotyping. Here, we contribute to this field of investigation through a novel four-degree-of-freedom robotics-based platform to manoeuvre a biologically inspired three-dimensionally printed replica. The platform enables three-dimensional motions as well as body oscillations to mimic zebrafish locomotion. In a series of experiments, we demonstrate the differential role of the visual stimuli associated with the biologically inspired replica and its three-dimensional motion. Three-dimensional tracking and information-theoretic tools are complemented to quantify the interaction between zebrafish and the robotic stimulus. Live subjects displayed a robust attraction towards the moving replica, and such attraction was lost when controlling for its visual appearance or motion. This effort is expected to aid zebrafish behavioural phenotyping, by offering a novel approach to generate physical stimuli moving in three dimensions. PMID:27853566

  5. Application Of Three-Dimensional Videography To Human Motion Studies: Constraints, Assumptions, And Mathematics

    NASA Astrophysics Data System (ADS)

    Rab, George T.

    1988-02-01

    Three-dimensional human motion analysis has been used for complex kinematic description of abnormal gait in children with neuromuscular disease. Multiple skin markers estimate skeletal segment position, and a sorting and smoothing routine provides marker trajectories. The position and orientation of the moving skeleton in space are derived mathematically from the marker positions, and joint motions are calculated from the Eulerian transformation matrix between linked proximal and distal skeletal segments. Reproducibility has been excellent, and the technique has proven to be a useful adjunct to surgical planning.
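
    The transformation step described above — expressing the distal segment's orientation in the proximal segment's frame and decomposing the result into three joint angles — can be sketched generically. The x-y-z Cardan sequence below is one common biomechanics convention chosen for illustration, not necessarily the sequence used in this work.

```python
import numpy as np

def cardan_xyz(R):
    """Cardan angles (radians) assuming R = Rx(a) @ Ry(b) @ Rz(g)."""
    b = np.arcsin(np.clip(R[0, 2], -1.0, 1.0))
    a = np.arctan2(-R[1, 2], R[2, 2])
    g = np.arctan2(-R[0, 1], R[0, 0])
    return a, b, g

def joint_angles(R_proximal, R_distal):
    """Rotation of the distal segment expressed in the proximal frame,
    decomposed into three clinical joint angles."""
    return cardan_xyz(R_proximal.T @ R_distal)
```

    The sequence matters: the same relative rotation decomposed in a different Cardan order yields different angle values, which is why gait labs report their rotation convention alongside the joint angles.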

  6. Automation of the targeting and reflective alignment concept

    NASA Technical Reports Server (NTRS)

    Redfield, Robin C.

    1992-01-01

    The automated alignment system, described herein, employs a reflective, passive (requiring no power) target and includes a PC-based imaging system and one camera mounted on a six degree of freedom robot manipulator. The system detects and corrects for manipulator misalignment in three translational and three rotational directions by employing the Targeting and Reflective Alignment Concept (TRAC), which simplifies alignment by decoupling translational and rotational alignment control. The concept uses information on the camera and the target's relative position based on video feedback from the camera. These relative positions are converted into alignment errors and minimized by motions of the robot. The system is robust to exogenous lighting by virtue of a subtraction algorithm which enables the camera to only see the target. These capabilities are realized with relatively minimal complexity and expense.

  7. A method of measuring three-dimensional scapular attitudes using the optotrak probing system.

    PubMed

    Hébert, L J; Moffet, H; McFadyen, B J; St-Vincent, G

    2000-01-01

    To develop a method to obtain accurate three-dimensional scapular attitudes and to assess their concurrent validity and reliability. In this methodological study, the three-dimensional scapular attitudes were calculated in degrees, using a rotation matrix (cyclic Cardanic sequence), from spatial coordinates obtained by probing three non-colinear landmarks, first on an anatomical model and second on a healthy subject. Although abnormal movement of the scapula is related to shoulder impingement syndrome, it is not clearly understood whether or not scapular motion impairment is a predisposing factor. Three-dimensional scapular attitudes have not been characterized in the planes and at the joint angles for which sub-acromial impingement is most likely to occur. The Optotrak probing system was used. An anatomical model of the scapula was built, which allowed us to impose scapular attitudes of known direction and magnitude. A local coordinate reference system was defined with three non-colinear anatomical landmarks to assess the accuracy and concurrent validity of the probing method against fixed markers. Axial rotation angles were calculated from a rotation matrix using a cyclic Cardanic sequence of rotations. The same three non-colinear body landmarks were digitized on one healthy subject, and the three-dimensional scapular attitudes obtained were compared between sessions in order to assess reliability. The measurement of three-dimensional scapular attitudes calculated from Optotrak probing data was accurate, with means of the differences between imposed and calculated rotation angles ranging from 1.5 degrees to 4.2 degrees. The greatest variations were observed around the third axis of the Cardanic sequence, associated with posterior-anterior transverse rotations. The mean difference between the Optotrak probing method and fixed markers was 1.73 degrees, showing good concurrent validity. Differences between the two methods were generally very low for one- and two-direction displacements, and the largest discrepancies were observed for imposed displacements combining movement about all three axes. The between-sessions variation of three-dimensional scapular attitudes was less than 10% for most of the arm positions adopted by a healthy subject, suggesting good reliability. The Optotrak probing system used with a standardized protocol led to accurate, valid and reliable measures of scapular attitudes. Although abnormal range of motion of the scapula is often related to shoulder pathologies, reliable outcome measures to quantify three-dimensional scapular motion on subjects are not available. It is important to establish a standardized protocol to characterize three-dimensional scapular motion on subjects using a method whose accuracy and validity are known. The method used in the present study has provided such a protocol and will now make it possible to verify to what extent scapular motion impairment is linked to the development of specific shoulder pathologies.
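
    The probing step — turning three non-colinear digitized landmarks into a segment-fixed coordinate system whose rotation matrix can then be decomposed into Cardanic angles — can be sketched as follows. The axis conventions (x along the first two landmarks, z normal to the landmark plane) are illustrative assumptions, not the paper's published definitions.

```python
import numpy as np

def local_frame(p1, p2, p3):
    """Orthonormal segment frame from three non-colinear landmarks:
    x along p1->p2, z normal to the landmark plane, y = z × x.
    Returns a 3x3 rotation matrix whose columns are the frame axes."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    x = p2 - p1
    x /= np.linalg.norm(x)
    z = np.cross(x, p3 - p1)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])
```

    The scapular attitude is then the rotation of this frame relative to a trunk (or global) frame, and its Cardanic decomposition yields the three attitude angles compared between sessions in the study.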

  8. A marker-free system for the analysis of movement disabilities.

    PubMed

    Legrand, L; Marzani, F; Dusserre, L

    1998-01-01

    A major step toward improving the treatment of disabled persons may be achieved by using motion analysis equipment. We are developing such a system. It allows the analysis of planar human motion (e.g. gait) without tracking markers. The system is composed of one fixed camera which acquires an image sequence of a human in motion. The processing is then divided into two steps: first, a large number of pixels belonging to the boundaries of the human body are extracted at each acquisition time; second, a two-dimensional model of the human body, based on tapered superquadrics, is successively matched with the sets of pixels previously extracted, using a specific fuzzy clustering process. Moreover, an optical flow procedure predicts the model location at each acquisition time from its location at the previous time. Finally, we present some results of this process applied to a leg in motion.

  9. Advanced one-dimensional optical strain measurement system, phase 4

    NASA Technical Reports Server (NTRS)

    Lant, Christian T.

    1992-01-01

    An improved version of the speckle-shift strain measurement system was developed. The system uses a two-dimensional sensor array to maintain speckle correlation in the presence of large off-axis rigid body motions. A digital signal processor (DSP) is used to calculate strains at a rate near the RS-170 camera frame rate. Strain measurements were demonstrated on small diameter wires and fibers used in composite materials research. Accurate values of Young's modulus were measured on tungsten wires, and silicon carbide and sapphire fibers. This optical technique has measured surface strains at specimen temperatures above 750 C and has shown the potential for measurements at much higher temperatures.

  10. Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap

    NASA Astrophysics Data System (ADS)

    Spiwok, Vojtěch; Králová, Blanka

    2011-12-01

    Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can serve as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map their 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and the corresponding transition structures inaccessible to an unbiased simulation. This scheme allows essentially any parameter of the system to be used as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to the 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling.
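
    Isomap itself is compact enough to sketch: build a k-nearest-neighbour graph, replace Euclidean with geodesic (shortest-path) distances, and apply classical MDS to the geodesic distance matrix. The pure-NumPy toy below is illustrative only — the Floyd-Warshall step is O(n³), so it suits small data sets — and it is not the implementation used in the paper.

```python
import numpy as np

def isomap(X, n_neighbors=6, n_components=3):
    """Minimal Isomap: kNN graph -> geodesic distances (Floyd-Warshall)
    -> classical MDS. X is an (n_samples, n_features) array."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    # Neighbourhood graph: keep each point's k nearest neighbours
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        for j in idx[i]:
            G[i, j] = G[j, i] = D[i, j]
    for k in range(n):  # Floyd-Warshall all-pairs shortest paths
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS on the squared geodesic distances
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

    Using the resulting low-dimensional embeddings as metadynamics collective variables additionally requires an out-of-sample mapping, since the bias must be evaluated for conformations not in the original training set — the extension the authors describe for the 72D-to-3D case.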

  11. Direct Numerical Simulation of a Temporally Evolving Incompressible Plane Wake: Effect of Initial Conditions on Evolution and Topology

    NASA Technical Reports Server (NTRS)

    Sondergaard, R.; Cantwell, B.; Mansour, N.

    1997-01-01

    Direct numerical simulations have been used to examine the effect of the initial disturbance field on the development of three-dimensionality and the transition to turbulence in the incompressible plane wake. The simulations were performed using a new numerical method for solving the time-dependent, three-dimensional, incompressible Navier-Stokes equations in flows with one infinite and two periodic directions. The method uses standard Fast Fourier Transforms and is applicable to cases where the vorticity field is compact in the infinite direction. The initial disturbance fields examined were combinations of two-dimensional waves and symmetric pairs of 60 deg oblique waves at the fundamental, subharmonic, and sub-subharmonic wavelengths. The results of these simulations indicate that the presence of 60 deg disturbances at the subharmonic streamwise wavelength results in the development of strong coherent three-dimensional structures. The resulting strong three-dimensional rate-of-strain triggers the growth of intense fine scale motions. Wakes initiated with 60 deg disturbances at the fundamental streamwise wavelength develop weak coherent streamwise structures, and do not develop significant fine scale motions, even at high Reynolds numbers. The wakes which develop strong three-dimensional structures exhibit growth rates on par with experimentally observed turbulent plane wakes. Wakes which develop only weak three-dimensional structures exhibit significantly lower late time growth rates. Preliminary studies of wakes initiated with an oblique fundamental and a two-dimensional subharmonic, which develop asymmetric coherent oblique structures at the subharmonic wavelength, indicate that significant fine scale motions only develop if the resulting oblique structures are above an angle of approximately 45 deg.

  12. Modelling knee flexion effects on joint power absorption and adduction moment.

    PubMed

    Nagano, Hanatsu; Tatsumi, Ichiroh; Sarashina, Eri; Sparrow, W A; Begg, Rezaul K

    2015-12-01

    Knee osteoarthritis is commonly associated with ageing and long-term walking. In this study, the effects of flexing motions on knee kinetics during stance were simulated. Extended knees do not facilitate efficient loading. It was therefore hypothesised that knee flexion would promote power absorption and negative work, while possibly reducing knee adduction moment. Three-dimensional (3D) position and ground reaction forces were collected from the right lower limb stance phase of one healthy young male subject. 3D position was sampled at 100 Hz using three Optotrak Certus (Northern Digital Inc.) motion analysis camera units, set up around an eight metre walkway. Force plates (AMTI) recorded ground reaction forces for inverse dynamics calculations. The Visual 3D (C-motion) 'Landmark' function was used to change knee joint positions to simulate three knee flexion angles during static standing. Effects of the flexion angles on joint kinetics during the stance phase were then modelled. The static modelling showed that each 2.7° increment in knee flexion angle produced 2.74°-2.76° increments in knee flexion during stance. Peak extension moment increased by 6.61 Nm per 2.7° of added knee flexion. Knee flexion enhanced peak power absorption and negative work, while decreasing adduction moment. Excessive knee extension impairs the quadriceps' power absorption and reduces eccentric muscle activity, potentially leading to knee osteoarthritis. A more flexed knee is accompanied by reduced adduction moment. Research is required to determine the optimum knee flexion to prevent further damage to knee-joint structures affected by osteoarthritis.
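
As a much-simplified illustration of the kinetics involved: in a static, frontal-plane view, the knee adduction moment is simply the vertical ground reaction force times its lever arm to the knee joint centre. The numbers below are invented, and this sketch deliberately ignores the full 3D inverse-dynamics calculation used in the study:

```python
def frontal_plane_moment(grf_n, lever_arm_m):
    """Static 2-D simplification: adduction moment (Nm) = vertical ground
    reaction force (N) x frontal-plane lever arm to the knee centre (m)."""
    return grf_n * lever_arm_m

# A ~750 N ground reaction force with a 4 cm lever arm, versus a posture
# that shortens the lever arm to 3 cm (values are purely illustrative)
print(round(frontal_plane_moment(750.0, 0.04), 2))   # 30.0 Nm
print(round(frontal_plane_moment(750.0, 0.03), 2))   # 22.5 Nm
```

The study's point that a more flexed knee reduces adduction moment corresponds, in this toy view, to a shorter frontal-plane lever arm.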

  13. Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging.

    PubMed

    Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam

    2017-10-01

    A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.

  14. Three-dimensional shape measurement system applied to superficial inspection of non-metallic pipes for the hydrocarbons transport

    NASA Astrophysics Data System (ADS)

    Arciniegas, Javier R.; González, Andrés. L.; Quintero, L. A.; Contreras, Carlos R.; Meneses, Jaime E.

    2014-05-01

    Three-dimensional shape measurement is a subject that consistently attracts strong scientific interest and provides information for medical, industrial and investigative applications, among others. In this paper, a three-dimensional (3D) reconstruction system is implemented for the superficial inspection of non-metallic pipes for hydrocarbon transport. The system comprises a CCD camera, a video projector and a laptop, and is based on the fringe projection technique. System functionality is demonstrated by evaluating the quality of the three-dimensional reconstructions obtained, which allow observation of failures and defects on the surface of the object under study.
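
The fringe projection technique referenced above typically recovers a wrapped phase map from several phase-shifted fringe images and then unwraps it; the record does not give the algorithm used, so the following is a minimal sketch of four-step phase shifting on synthetic 1-D fringes (the 90° step and all names are illustrative assumptions):

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Four-step phase shifting: fringe images captured with phase shifts
    of 0, 90, 180 and 270 degrees give the wrapped phase directly."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic 1-D fringes over a toy phase profile (stand-in for a surface)
x = np.linspace(0, 4 * np.pi, 256)
phi_true = 0.5 * x
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I1, I2, I3, I4 = (128 + 100 * np.cos(phi_true + s) for s in shifts)

phi = wrapped_phase(I1, I2, I3, I4)   # wrapped into (-pi, pi]
phi_unwrapped = np.unwrap(phi)        # continuous phase along the profile
```

A camera-projector phase-to-height calibration then converts the unwrapped phase into surface height.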

  15. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
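
The predict-then-correlate loop described above can be sketched with a constant-velocity prediction of edge locations and a simple velocity update from the prediction error; this is an illustrative toy, not the system's actual estimator:

```python
import numpy as np

def predict_edges(edge_positions, velocity, dt):
    """Predict image-plane edge locations one frame ahead, assuming the
    object's apparent velocity is constant over the frame interval."""
    return edge_positions + velocity * dt

def update_velocity(v_prev, measured, predicted, dt, gain=0.5):
    """Feed the mean prediction error back into the velocity estimate so
    tracking adapts when the object accelerates."""
    innovation = (measured - predicted).mean(axis=0)
    return v_prev + gain * innovation / dt

dt = 0.1                                             # ~10 frames/s, as quoted
edges = np.array([[120.0, 80.0], [200.0, 95.0]])     # edge midpoints (pixels)
v = np.array([10.0, -5.0])                           # pixels per second

pred = predict_edges(edges, v, dt)
measured = pred + np.array([0.2, 0.1])               # detections slightly off
v = update_velocity(v, measured, pred, dt)
```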

  16. A three-dimensional strain measurement method in elastic transparent materials using tomographic particle image velocimetry

    PubMed Central

    Suzuki, Sara; Aoyama, Yusuke; Umezu, Mitsuo

    2017-01-01

    Background: The mechanical interaction between blood vessels and medical devices can induce strains in these vessels. Measuring and understanding these strains is necessary to identify the causes of vascular complications. This study develops a method to measure the three-dimensional (3D) distribution of strain using tomographic particle image velocimetry (Tomo-PIV) and compares the measurement accuracy with the gauge strain in tensile tests. Methods and findings: The test system for measuring 3D strain distribution consists of two cameras, a laser, a universal testing machine, an acrylic chamber with a glycerol water solution for adjusting the refractive index with the silicone, and dumbbell-shaped specimens mixed with fluorescent tracer particles. 3D images of the particles were reconstructed from 2D images using a multiplicative algebraic reconstruction technique (MART) and motion tracking enhancement. Distributions of the 3D displacements were calculated using a digital volume correlation. To evaluate the accuracy of the measurement method in terms of particle density and interrogation voxel size, the gauge strain and one of the two cameras for Tomo-PIV were used as a video-extensometer in the tensile test. The results show that the optimal particle density and interrogation voxel size are 0.014 particles per pixel and 40 × 40 × 40 voxels with a 75% overlap. The maximum measurement error was maintained at less than 2.5% in the 4-mm-wide region of the specimen. Conclusions: We successfully developed a method to experimentally measure 3D strain distribution in an elastic silicone material using Tomo-PIV and fluorescent particles. To the best of our knowledge, this is the first report that applies Tomo-PIV to investigate 3D strain measurements in elastic materials with large deformation and validates the measurement accuracy. PMID:28910397
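
The digital volume correlation step can be illustrated in miniature: the displacement of an interrogation volume is taken at the peak of the cross-correlation between its reference and deformed states. A sketch at the 40-voxel interrogation size quoted above, with integer-voxel accuracy only and invented data:

```python
import numpy as np

def dvc_displacement(ref, deformed):
    """Integer-voxel displacement of an interrogation volume, taken at the
    peak of the FFT-based cross-correlation between the two states."""
    spec = np.fft.fftn(deformed) * np.conj(np.fft.fftn(ref))
    corr = np.real(np.fft.ifftn(spec))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed voxel shifts
    return [int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, ref.shape)]

rng = np.random.default_rng(0)
ref = rng.random((40, 40, 40))       # one 40 x 40 x 40-voxel interrogation volume
deformed = np.roll(ref, shift=(3, -2, 1), axis=(0, 1, 2))   # known shift
print(dvc_displacement(ref, deformed))   # [3, -2, 1]
```

Practical DVC codes refine this integer estimate to sub-voxel accuracy; that refinement is omitted here.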

  17. A Quasi-Static Method for Determining the Characteristics of a Motion Capture Camera System in a "Split-Volume" Configuration

    NASA Technical Reports Server (NTRS)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2001-01-01

    To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics with motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used four cameras, but all were aligned along major axes (two in x, one in y and z). Richards compared the characteristics of different commercially available systems set up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasi-static methodology.

  18. Graphics simulation and training aids for advanced teleoperation

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1993-01-01

    Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.

  19. Assessment of dynamic balance via measurement of lower extremities tortuosity.

    PubMed

    Eltoukhy, Moataz; Kuenze, Christopher; Jun, Hyung-Pil; Asfour, Shihab; Travascio, Francesco

    2015-03-01

    Tortuosity describes how twisted a path is, or how much curvature is present in an observed movement. The purpose of this study was to investigate differences in segmental tortuosity between Star Excursion Balance Test (SEBT) reach directions. Fifteen healthy participants completed this study. Participants completed the modified three-direction (anterior, posteromedial, posterolateral) SEBT with three-dimensional motion analysis using an 8-camera BTS Smart 7000DX motion analysis system. The tortuosity of stance-limb retro-reflective markers was then calculated and compared between reach directions using a 1 × 3 repeated-measures ANOVA, while the relationship between SEBT performance and tortuosity was established using Pearson product-moment correlations. Anterior superior iliac spine tortuosity was significantly greater (p < 0.001) and lateral knee tortuosity lesser (p = 0.018) in the anterior direction compared to the posteromedial and posterolateral directions. In addition, second metatarsal tortuosity was greater in the anterior reach direction than in the posteromedial direction (p = 0.024). Tortuosity is a novel biomechanical measurement technique that provides an assessment of segmental movement during common dynamic tasks such as the SEBT. This enhanced level of detail, compared to more global measures of joint kinematics, may provide insight into compensatory movement strategies adopted following lower extremity joint injury.
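
The abstract does not give its exact tortuosity formula; a common definition, used here as an illustrative assumption, is the arc length of the marker trajectory divided by the chord between its endpoints:

```python
import numpy as np

def tortuosity(path):
    """Arc length of a 3-D marker trajectory divided by the straight-line
    (chord) distance between its first and last points."""
    steps = np.diff(path, axis=0)
    arc_length = np.linalg.norm(steps, axis=1).sum()
    chord = np.linalg.norm(path[-1] - path[0])
    return arc_length / chord

# A straight path has tortuosity 1; any curvature increases the ratio
straight = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
curved = np.array([[0, 0, 0], [1, 1, 0], [2, 0, 0]], dtype=float)
print(tortuosity(straight))              # 1.0
print(round(tortuosity(curved), 3))      # 1.414 (i.e. sqrt(2))
```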

  20. Three-Dimensional Digital Image Correlation of a Composite Overwrapped Pressure Vessel During Hydrostatic Pressure Tests

    NASA Technical Reports Server (NTRS)

    Revilock, Duane M., Jr.; Thesken, John C.; Schmidt, Timothy E.

    2007-01-01

    Ambient-temperature hydrostatic pressurization tests were conducted on a composite overwrapped pressure vessel (COPV) to understand the fiber stresses in COPV components. Two three-dimensional digital image correlation systems with high-speed cameras were used in the evaluation to provide full-field displacement and strain data for each pressurization test. Key findings are discussed, including how the principal strains provided better insight into system behavior than traditional gauges; a high localized strain measured where gauges were not present; and the challenges of measuring curved surfaces through the 1.25 in. thick layered polycarbonate panel that protected the cameras.

  1. Mom's shadow: structure-from-motion in newly hatched chicks as revealed by an imprinting procedure.

    PubMed

    Mascalzoni, Elena; Regolin, Lucia; Vallortigara, Giorgio

    2009-03-01

    The ability to recognize three-dimensional objects from two-dimensional (2-D) displays was investigated in domestic chicks, focusing on the role of the object's motion. In Experiment 1, newly hatched chicks, imprinted on a three-dimensional (3-D) object, were allowed to choose between the shadows of the familiar object and of an object never seen before. In Experiments 2 and 3, random-dot displays were used to produce the perception of a solid shape only when set in motion. Overall, the results showed that domestic chicks were able to recognize familiar shapes from 2-D motion stimuli. It is likely that similar general mechanisms underlying the perception of structure-from-motion and the extraction of 3-D information are shared by humans and animals. The present data show that these processes occur in birds much as they are known to in mammals, two separate vertebrate classes; this possibly indicates a common phylogenetic origin of these processes.

  2. Robotic Surgery in Gynecology

    PubMed Central

    Bouquet de Joliniere, Jean; Librino, Armando; Dubuisson, Jean-Bernard; Khomsi, Fathi; Ben Ali, Nordine; Fadhlaoui, Anis; Ayoubi, J. M.; Feki, Anis

    2016-01-01

    Minimally invasive surgery (MIS) can be considered the greatest surgical innovation of the past 30 years. It revolutionized surgical practice with well-proven advantages over traditional open surgery: reduced surgical trauma and incision-related complications, such as surgical-site infections, postoperative pain and hernia, reduced hospital stay, and improved cosmetic outcome. Nonetheless, proficiency in MIS can be technically challenging, as conventional laparoscopy is associated with several limitations, such as the two-dimensional (2D) monitor's reduction of depth perception, camera instability, limited range of motion, and steep learning curves. The surgeon has little force feedback, which allows only simple gestures, respect for tissues, and more effective treatment of complications. Since the 1980s, several computer science and robotics projects have been set up to overcome the difficulties encountered with conventional laparoscopy, to augment the surgeon’s skills, achieve accuracy and high precision during complex surgery, and facilitate the widespread adoption of MIS. Surgical instruments are guided by haptic interfaces that replicate and filter hand movements. Robotically assisted technology offers advantages that include improved three-dimensional stereoscopic vision, wristed instruments that improve dexterity, and tremor-canceling software that improves surgical precision. PMID:27200358

  3. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. In alternative version, system controls lateral motion of camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.
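
The frame-buffer correction idea can be sketched as shifting each frame opposite to the measured jitter; a toy, integer-pixel version (real hardware would crop rather than wrap at the borders):

```python
import numpy as np

def stabilize(frame, jitter_xy):
    """Shift the frame opposite to the measured (dx, dy) image displacement.
    np.roll wraps at the borders; real hardware would crop or pad instead."""
    dx, dy = jitter_xy
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))

frame = np.zeros((8, 8), dtype=int)
frame[3, 4] = 255                                      # a bright feature
jittered = np.roll(frame, shift=(1, 2), axis=(0, 1))   # jitter: dx=2, dy=1
restored = stabilize(jittered, (2, 1))
print(restored[3, 4])                                  # 255: feature back in place
```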

  4. Perceiving environmental properties from motion information: Minimal conditions

    NASA Technical Reports Server (NTRS)

    Proffitt, Dennis R.; Kaiser, Mary K.

    1989-01-01

    The status of motion as a minimal information source for perceiving the environmental properties of surface segregation, three-dimensional (3-D) form, displacement, and dynamics is discussed. The selection of these particular properties was motivated by a desire to present research on perceiving properties that span the range of dimensional complexity.

  5. SAGITTARIUS STREAM THREE-DIMENSIONAL KINEMATICS FROM SLOAN DIGITAL SKY SURVEY STRIPE 82

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koposov, Sergey E.; Belokurov, Vasily; Evans, N. Wyn

    2013-04-01

    Using multi-epoch observations of the Stripe 82 region from the Sloan Digital Sky Survey (SDSS), we measure precise statistical proper motions of the stars in the Sagittarius (Sgr) stellar stream. The multi-band photometry and SDSS radial velocities allow us to efficiently select Sgr members and thus enhance the proper-motion precision to ~0.1 mas yr^-1. We measure separately the proper motion of a photometrically selected sample of the main-sequence turn-off stars, as well as spectroscopically selected Sgr giants. The data allow us to determine the proper motion separately for the two Sgr streams in the south found in Koposov et al. Together with the precise velocities from SDSS, our proper motions provide exquisite constraints on the three-dimensional motions of the stars in the Sgr streams.
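
Measuring a proper motion of this kind reduces, per star, to fitting a straight line of angular offset against epoch; a toy example in the quoted units (the epochs and offsets below are invented for illustration):

```python
import numpy as np

# Invented epochs (years) and RA offsets (milliarcseconds) for one star
epochs = np.array([1998.7, 2001.9, 2004.8, 2005.8, 2007.9])
ra_offset_mas = np.array([0.0, 0.35, 0.62, 0.73, 0.94])

# The proper motion is the least-squares slope of offset versus epoch
mu, intercept = np.polyfit(epochs, ra_offset_mas, 1)
print(round(float(mu), 2))   # ~0.1 mas/yr, the precision scale quoted above
```

The statistical precision comes from averaging such fits over many stream members.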

  6. Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera

    NASA Technical Reports Server (NTRS)

    Stanojev, B. J.; Houts, M.

    2004-01-01

    Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary in predicting the system's nuclear-equivalent behavior. This paper discusses one key technique being evaluated for measuring such changes. The proposed technique is to use a Charge-Coupled Device (CCD) sensor to obtain deformation readings of an electrically heated, prototypic reactor core geometry. This paper introduces a technique by which a single high-spatial-resolution CCD camera is used to measure core deformation in Real Time (RT). Initial system checkout results are presented along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during testing.

  7. Method and apparatus for coherent imaging of infrared energy

    DOEpatents

    Hutchinson, Donald P.

    1998-01-01

    A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting.

  8. The Heliosphere in Space

    NASA Astrophysics Data System (ADS)

    Frisch, P. C.; Hanson, A. J.; Fu, P. C.

    2008-12-01

    A scientifically accurate visualization of the Journey of the Sun through deep space has been created in order to share the excitement of heliospheric physics and scientific discovery with the non-expert. The MHD heliosphere model of Linde (1998) displays the interaction of the solar wind with the interstellar medium for a supersonic heliosphere traveling through a low density magnetized interstellar medium. The camera viewpoint follows the solar motion through a virtual space of the Milky Way Galaxy. This space is constructed from real data placed in the three-dimensional solar neighborhood, and populated with Hipparcos stars in front of a precisely aligned image of the Milky Way itself. The celestial audio track of this three minute movie includes the music of the heliosphere, heard by the two Voyager satellites as 3 kHz emissions from the edge of the heliosphere. This short heliosphere visualization can be downloaded from http://www.cs.indiana.edu/~soljourn/pub/AstroBioScene7Sound.mov, and the full scientific data visualization of the Solar Journey is available commercially.

  9. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
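
The quoted numbers are mutually consistent: at 24 frames/s, the 336 ft/min transport rate implies 2.8 in. of film per frame, which matches a 15-perforation pull-across (the ~0.187 in. perforation pitch of 65 mm film is an assumption used for this check):

```python
# Film advance per frame implied by the quoted transport rate
rate_ft_per_min = 336
frames_per_s = 24
in_per_frame = rate_ft_per_min * 12 / 60 / frames_per_s
print(round(in_per_frame, 3))       # 2.8 inches of film per frame

# 15 perforations per frame at the assumed ~0.187 in. perforation pitch
print(round(15 * 0.187, 3))         # 2.805 in., matching the transport rate
```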

  10. Differences in ball speed and three-dimensional kinematics between male and female handball players during a standing throw with run-up.

    PubMed

    Serrien, Ben; Clijsen, Ron; Blondeel, Jonathan; Goossens, Maggy; Baeyens, Jean-Pierre

    2015-01-01

    The purpose of this paper was to examine differences in ball release speed and throwing kinematics between male and female team-handball players in a standing throw with run-up. Other research has shown that this throwing type produces the highest ball release speeds, and comparing groups that differ in ball release speed can suggest where the difference comes from. If throwing technique differs, perhaps gender-specific coordination- and strength-training guidelines are in order. Measurements of three-dimensional kinematics were performed with a seven-camera VICON motion capture system and subsequent joint angles and angular velocities calculations were executed in Mathcad. Data analysis with Statistical Parametric Mapping allowed us to examine the entire time-series of every variable without having to reduce the data to certain scalar values such as minima/maxima extracted from the time-series. Statistical Parametric Mapping enabled us to detect several differences in the throwing kinematics (12 out of 20 variables had one or more differences somewhere during the motion). The results indicated two distinct strategies in generating and transferring momentum through the kinematic chain. Male team-handball players showed more activity in the transverse plane (pelvis and trunk rotation and shoulder horizontal abduction) whereas female team-handball players showed more activity in the sagittal plane (trunk flexion). Also the arm cocking maneuver was quite different. The observed differences between male and female team handball players in the motions of pelvis, trunk and throwing arm can be important information for coaches to give feedback to athletes. Whether these differences contribute to the observed difference in ball release speed is at present unclear and more research on the relation with anthropometric profile needs to be done. Kinematic differences might suggest gender-specific training guidelines in team-handball.

  11. Motion camera based on a custom vision sensor and an FPGA architecture

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated to a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC which is used for high level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing such as spatial edge detection or image segmentation. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.
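
The time-of-travel computation performed by the FPGA can be sketched in software: velocity is pixel pitch divided by the interval between motion events at adjacent pixels (the pixel pitch and timestamps below are illustrative):

```python
def velocity_from_events(events, pixel_pitch_um):
    """events: (pixel_index, timestamp_us) pairs emitted as a moving edge
    crosses adjacent pixels; returns velocities between neighbours in m/s
    (micrometres per microsecond equal metres per second)."""
    velocities = []
    for (p0, t0), (p1, t1) in zip(events, events[1:]):
        time_of_travel_us = t1 - t0
        distance_um = (p1 - p0) * pixel_pitch_um
        velocities.append(distance_um / time_of_travel_us)
    return velocities

# An edge crossing pixels 10, 11, 12 of a sensor with a 30 um pixel pitch
events = [(10, 0), (11, 100), (12, 150)]
print(velocity_from_events(events, 30.0))   # [0.3, 0.6]
```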

  12. Measurement of three-dimensional posture and trajectory of lower body during standing long jumping utilizing body-mounted sensors.

    PubMed

    Ibata, Yuki; Kitamura, Seiji; Motoi, Kosuke; Sagawa, Koichi

    2013-01-01

    A method for measuring the three-dimensional posture and flying trajectory of the lower body during jumping motion using body-mounted wireless inertial measurement units (WIMUs) is introduced. Each WIMU is composed of a three-dimensional (3D) accelerometer, two kinds of 3D gyroscopes with different dynamic ranges, and one 3D geomagnetic sensor, so as to accommodate quick movement. Three WIMUs are mounted under the chest, right thigh and right shank. Thin-film pressure sensors are connected to the shank WIMU and installed under the right heel and tiptoe to distinguish whether the body is grounded or airborne. Initial and final postures of the trunk, thigh and shank at standing-still are obtained using gravitational acceleration and geomagnetism. The posture of the body is determined using the 3D direction of each segment, updated by numerical integration of the angular velocity. Flying motion is detected from the pressure sensors, and the 3D flying trajectory is derived by double integration of the trunk acceleration, applying the 3D velocity of the trunk at takeoff. Standing long jump experiments were performed, and the results show that the joint angles and flying trajectory agree with the actual motion measured by an optical motion capture system.
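
The flight-phase part of the method, double integration of acceleration seeded with the takeoff velocity, can be sketched with a simple Euler integrator; the gravity-only acceleration, sampling rate and takeoff values are illustrative assumptions:

```python
import numpy as np

def flight_trajectory(acc, v0, p0, dt):
    """Euler double integration of 3-D acceleration samples, seeded with
    the takeoff velocity v0 and takeoff position p0."""
    positions = [np.asarray(p0, dtype=float)]
    v = np.asarray(v0, dtype=float)
    for a in acc:
        v = v + np.asarray(a) * dt
        positions.append(positions[-1] + v * dt)
    return np.array(positions)

dt = 0.01                                   # 100 Hz sampling (assumed)
acc = [np.array([0.0, 0.0, -9.81])] * 50    # 0.5 s of free flight: gravity only
v0 = [2.0, 0.0, 2.5]                        # takeoff velocity, m/s
traj = flight_trajectory(acc, v0, [0.0, 0.0, 1.0], dt)
print(round(float(traj[-1][0]), 2))         # forward travel after 0.5 s: 1.0 m
```

In the actual method the integrated acceleration comes from the trunk WIMU rather than an assumed gravity vector.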

  13. Three dimensional dynamics of a flexible Motorised Momentum Exchange Tether

    NASA Astrophysics Data System (ADS)

    Ismail, N. A.; Cartmell, M. P.

    2016-03-01

    This paper presents a new flexural model for the three dimensional dynamics of the Motorised Momentum Exchange Tether (MMET) concept. This study has uncovered the relationships between planar and nonplanar motions, and the effect of the coupling between these two parameters on pragmatic circular and elliptical orbits. The tether sub-spans are modelled as stiffened strings governed by partial differential equations of motion, with specific boundary conditions. The tether sub-spans are flexible and elastic, thereby allowing three dimensional displacements. The boundary conditions lead to a specific frequency equation and the eigenvalues from this provide the natural frequencies of the orbiting flexible motorised tether when static, accelerating in monotonic spin, and at terminal angular velocity. A rotation transformation matrix has been utilised to get the position vectors of the system's components in an assumed inertial frame. Spatio-temporal coordinates are transformed to modal coordinates before applying Lagrange's equations, and pre-selected linear modes are included to generate the equations of motion. The equations of motion contain inertial nonlinearities which are essentially of cubic order, and these show the potential for intricate intermodal coupling effects. A simulation of planar and non-planar motions has been undertaken and the differences in the modal responses, for both motions, and between the rigid body and flexible models are highlighted and discussed.

  14. Collaborated measurement of three-dimensional position and orientation errors of assembled miniature devices with two vision systems

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Wei; Luo, Yi; Yang, Weimin; Chen, Liang

    2013-01-01

    In the assembly of miniature devices, the position and orientation of the parts to be assembled must be guaranteed during or after assembly. In some cases, the relative position or orientation errors among the parts cannot be measured from only one direction using a visual method, because of visual occlusion or because the features of the parts are distributed three-dimensionally. An automatic assembly system for precise miniature devices is introduced. In this modular assembly system, two machine vision systems were employed to measure the three-dimensionally distributed assembly errors. High-resolution CCD cameras and high-repeatability precision stages were integrated to realize high-precision measurement over a large workspace. The two cameras worked in collaboration during the measurement procedure to eliminate the influence of movement errors of the rotational and translational stages. A set of templates was designed for calibrating the vision systems and evaluating the system's measurement accuracy.

  15. Effect of Facetectomy on the Three-Dimensional Biomechanical Properties of the Fourth Canine Cervical Functional Spinal Unit: A Cadaveric Study.

    PubMed

    Bösch, Nadja; Hofstetter, Martin; Bürki, Alexander; Vidondo, Beatriz; Davies, Fenella; Forterre, Franck

    2017-11-01

    Objective: To study the biomechanical effect of facetectomy on the fourth cervical functional spinal unit in 10 large-breed dogs (>24 kg body weight). Methods: Canine cervical spines were freed from all muscles. Spines were mounted on a six-degrees-of-freedom spine testing machine for three-dimensional motion analysis. Data were recorded with an optoelectronic motion analysis system. The range of motion was determined in all three primary motions, as well as the range of motion of coupled motions, on the intact specimen, after unilateral facetectomy and after bilateral facetectomy. Repeated-measures analysis of variance models were used to assess the changes in the biomechanical properties across the three treatment groups. Results: Facetectomy increased the range of motion of primary motions in all directions. Axial rotation was significantly influenced by facetectomy. Coupled motion was not influenced by facetectomy except for lateral bending with coupled axial rotation. The coupling factor (coupled motion/primary motion) decreased after facetectomy. Symmetry of motion was influenced by facetectomy in flexion-extension and axial rotation, but not in lateral bending. Clinical Significance: Facet joints play a significant role in the stability of the cervical spine and act to maintain spatial integrity. Therefore, cervical spinal treatments requiring a facetectomy should be carefully planned, and if an excessive increase in range of motion is expected, complications should be anticipated and reduced via spinal stabilization.

  16. Photogrammetry System and Method for Determining Relative Motion Between Two Bodies

    NASA Technical Reports Server (NTRS)

    Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)

    2014-01-01

    A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.

  17. Multifunctional, three-dimensional tomography for analysis of electrohydrodynamic jetting

    NASA Astrophysics Data System (ADS)

    Nguyen, Xuan Hung; Gim, Yeonghyeon; Ko, Han Seo

    2015-05-01

    A three-dimensional optical tomography technique was developed to reconstruct three-dimensional objects from a set of two-dimensional shadowgraphic images and normal gray images. Using three high-speed cameras positioned at an offset angle of 45° from each other, the number, size, and location of electrohydrodynamic jets with respect to the nozzle position were analyzed with shadowgraphic tomography employing the multiplicative algebraic reconstruction technique (MART). Additionally, the flow field inside a cone-shaped liquid (Taylor cone) induced under an electric field was observed using the simultaneous multiplicative algebraic reconstruction technique (SMART), a tomographic method for reconstructing the light intensities of particles, combined with three-dimensional cross-correlation. Various velocity fields of circulating flows inside the cone-shaped liquid, caused by various physico-chemical properties of the liquid, were also investigated.
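
    The MART algorithm named above updates each voxel multiplicatively until reprojections match the measured data. A minimal sketch of the update rule (the weight matrix `W`, relaxation `mu`, and the toy two-voxel system are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def mart(W, g, n_iter=50, mu=1.0):
    """Multiplicative algebraic reconstruction technique (MART).

    W : (n_rays, n_voxels) projection weight matrix
    g : (n_rays,) measured line integrals
    Returns a non-negative estimate of the voxel intensities.
    """
    f = np.ones(W.shape[1])              # multiplicative updates need f > 0
    for _ in range(n_iter):
        for i in range(W.shape[0]):      # one multiplicative correction per ray
            proj = W[i] @ f
            if proj > 0 and g[i] > 0:
                f *= (g[i] / proj) ** (mu * W[i])
    return f

# Tiny sanity check: two voxels, two direct rays plus one summed ray.
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f_true = np.array([2.0, 3.0])
f_rec = mart(W, W @ f_true)
print(np.round(f_rec, 3))  # [2. 3.]
```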

  18. Reliability and concurrent validity of a Smartphone, bubble inclinometer and motion analysis system for measurement of hip joint range of motion.

    PubMed

    Charlton, Paula C; Mentiplay, Benjamin F; Pua, Yong-Hao; Clark, Ross A

    2015-05-01

    Traditional methods of assessing joint range of motion (ROM) involve specialized tools that may not be widely available to clinicians. This study assesses the reliability and validity of a custom Smartphone application for assessing hip joint range of motion. Intra-tester reliability with concurrent validity. Passive hip joint range of motion was recorded for seven different movements in 20 males on two separate occasions. Data from a Smartphone, bubble inclinometer and a three dimensional motion analysis (3DMA) system were collected simultaneously. Intraclass correlation coefficients (ICCs), coefficients of variation (CV) and standard error of measurement (SEM) were used to assess reliability. To assess validity of the Smartphone application and the bubble inclinometer against the three dimensional motion analysis system, intraclass correlation coefficients and fixed and proportional biases were used. The Smartphone demonstrated good to excellent reliability (ICCs>0.75) for four out of the seven movements, and moderate to good reliability for the remaining three movements (ICC=0.63-0.68). Additionally, the Smartphone application displayed comparable reliability to the bubble inclinometer. The Smartphone application displayed excellent validity when compared to the three dimensional motion analysis system for all movements (ICCs>0.88) except one, which displayed moderate to good validity (ICC=0.71). Smartphones are portable and widely available tools that are mostly reliable and valid for assessing passive hip range of motion, with potential for large-scale use when a bubble inclinometer is not available. However, caution must be taken in its implementation as some movement axes demonstrated only moderate reliability. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
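
    The intraclass correlation coefficients used above can be computed from a subjects-by-sessions score matrix; a sketch of ICC(3,1) (two-way mixed model, consistency), assuming this form, which the abstract does not specify:

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed model, consistency, single measurement.

    scores : (n_subjects, k_sessions) array of repeated measurements.
    """
    n, k = scores.shape
    grand = scores.mean()
    # between-subjects mean square
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    # residual mean square of the two-way decomposition
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Perfectly consistent sessions (constant offset) give ICC = 1.
scores = np.array([[10.0, 12.0], [20.0, 22.0], [30.0, 32.0]])
print(f"ICC(3,1) = {icc_3_1(scores):.3f}")  # 1.000
```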

  19. Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap.

    PubMed

    Spiwok, Vojtěch; Králová, Blanka

    2011-12-14

    Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows one to use essentially any parameter of the system as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling. © 2011 American Institute of Physics.
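
    The Isomap pipeline used above (neighborhood graph, geodesic distances, classical MDS) can be sketched in a few lines. This is a generic toy implementation run on a curved one-dimensional arc, not the 72-dimensional conformational data:

```python
import numpy as np

def isomap(X, n_neighbors=6, n_components=3):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS."""
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    # keep only the k nearest neighbours (symmetrised); other edges -> inf
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(d[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = d[i, j]
    # Floyd-Warshall shortest paths approximate geodesic distances
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # classical MDS on the squared geodesic distance matrix
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Points on a curved 1-D arc in 3-D unroll to (nearly) a straight line whose
# first embedding coordinate spans the arc length (~pi here).
t = np.linspace(0, np.pi, 60)
X = np.column_stack([np.cos(t), np.sin(t), np.zeros(60)])
Y = isomap(X, n_neighbors=4, n_components=2)
print(Y.shape)  # (60, 2)
```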

  20. Toward an automated low-cost three-dimensional crop surface monitoring system using oblique stereo imagery from consumer-grade smart cameras

    NASA Astrophysics Data System (ADS)

    Brocks, Sebastian; Bendig, Juliane; Bareth, Georg

    2016-10-01

    Crop surface models (CSMs) representing plant height above ground level are a useful tool for monitoring in-field crop growth variability and enabling precision agriculture applications. A semiautomated system for generating CSMs was implemented. It combines an Android application running on a set of smart cameras for image acquisition and transmission and a set of Python scripts automating the structure-from-motion (SfM) software package Agisoft Photoscan and ArcGIS. Only ground-control-point (GCP) marking was performed manually. This system was set up on a barley field experiment with nine different barley cultivars in the growing period of 2014. Images were acquired three times a day for a period of two months. CSMs were successfully generated for 95 out of 98 acquisitions between May 2 and June 30. The best linear regressions of the CSM-derived plot-wise averaged plant heights compared to manual plant height measurements taken at four dates resulted in a coefficient of determination R2 of 0.87 and a root-mean-square error (RMSE) of 0.08 m, with Willmott's refined index of model performance dr equaling 0.78. In total, 103 mean plot heights were used in the regression based on the noon acquisition time. The presented system succeeded in semiautomated monitoring of crop height from plot scale to field scale.
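
    The validation statistics quoted above (R², RMSE) follow from an ordinary least-squares fit of CSM-derived heights against manual measurements; a sketch with made-up plot heights (not the study's data):

```python
import numpy as np

# Hypothetical plot-wise heights (m): CSM-derived vs. manual reference.
csm = np.array([0.32, 0.45, 0.51, 0.60, 0.72, 0.80, 0.95, 1.02])
manual = np.array([0.35, 0.44, 0.55, 0.58, 0.78, 0.83, 0.90, 1.05])

slope, intercept = np.polyfit(csm, manual, 1)   # least-squares line
pred = slope * csm + intercept
ss_res = ((manual - pred) ** 2).sum()
ss_tot = ((manual - manual.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
rmse = np.sqrt(((manual - pred) ** 2).mean())
print(f"R2 = {r2:.2f}, RMSE = {rmse:.3f} m")
```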

  1. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capabilities, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are suited to the exterior rather than the internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection. PMID:28286351
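
    A standard building block for feature-based point-cloud registration of the kind described above is the least-squares rigid alignment of matched 3-D points (the Kabsch/Procrustes solution). This is a generic sketch, not the authors' full panoramic registration algorithm:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)           # centre both clouds
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)             # SVD of the covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

# Sanity check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(1)
P = rng.random((20, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
Q = P @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch(P, Q)
print(np.allclose(R, Rz), np.allclose(P @ R.T + t, Q))  # True True
```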

  2. Introducing a Virtual Reality Experience in Anatomic Pathology Education.

    PubMed

    Madrigal, Emilio; Prajapati, Shyam; Hernandez-Prera, Juan C

    2016-10-01

    A proper examination of surgical specimens is fundamental in anatomic pathology (AP) education. However, the resources available to residents may not always be suitable for efficient skill acquisition. We propose a method to enhance AP education by introducing high-definition videos featuring methods for appropriate specimen handling, viewable on two-dimensional (2D) and stereoscopic three-dimensional (3D) platforms. A stereo camera system recorded the gross processing of commonly encountered specimens. Three edited videos, with instructional audio voiceovers, were experienced by nine junior residents in a crossover study to assess the effects of the exposure (2D vs 3D movie views) on self-reported physiologic symptoms. A questionnaire was used to analyze viewer acceptance. All surveyed residents found the videos beneficial in preparation to examine a new specimen type. Viewer data suggest an improvement in specimen handling confidence and knowledge and enthusiasm toward 3D technology. None of the participants encountered significant motion sickness. Our novel method provides the foundation to create a robust teaching library. AP is inherently a visual discipline, and by building on the strengths of traditional teaching methods, our dynamic approach allows viewers to appreciate the procedural actions involved in specimen processing. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capabilities, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are suited to the exterior rather than the internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection.

  4. Bias to experience approaching motion in a three-dimensional virtual environment.

    PubMed

    Lewis, Clifford F; McBeath, Michael K

    2004-01-01

    We used two-frame apparent motion in a three-dimensional virtual environment to test whether observers had biases to experience approaching or receding motion in depth. Observers viewed a tunnel of tiles receding in depth that moved ambiguously either toward or away from them. We found that observers exhibited biases to experience approaching motion. The strengths of the biases were decreased when stimuli pointed away, but the size of the display screen had no effect. Tests with diamond-shaped tiles that varied in the degree of pointing asymmetry resulted in a linear trend in which the bias was strongest for stimuli pointing toward the viewer and weakest for stimuli pointing away. We show that the overall bias to experience approaching motion is consistent with a computational strategy of matching corresponding features between adjacent foreshortened stimuli in consecutive visual frames. We conclude that there are both adaptational and geometric reasons to favor the experience of approaching motion.

  5. Compton tomography system

    DOEpatents

    Grubsky, Victor; Romanoov, Volodymyr; Shoemaker, Keith; Patton, Edward Matthew; Jannson, Tomasz

    2016-02-02

    A Compton tomography system comprises an x-ray source configured to produce a planar x-ray beam. The beam irradiates a slice of an object to be imaged, producing Compton-scattered x-rays. The Compton-scattered x-rays are imaged by an x-ray camera. Translation of the object with respect to the source and camera or vice versa allows three-dimensional object imaging.

  6. Robotic Vision-Based Localization in an Urban Environment

    NASA Technical Reports Server (NTRS)

    Mchenry, Michael; Cheng, Yang; Matthies

    2007-01-01

    A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. The system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: this component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. It incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1. Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.

  7. Automatic respiration tracking for radiotherapy using optical 3D camera

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact and no radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within any treatment session due to voluntary or involuntary physiologic processes (e.g., respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices. Presently, no viable solution exists for in-vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas on the 3D surface to track surface motion. The configuration of the marks or areas may change with time, which makes quantifying and interpreting the respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on principal component analysis (PCA), a linear dimensionality reduction technique. The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigen-space spanned by eigen-vectors). New images can be accurately represented as a weighted summation of those eigen-vectors, which can be easily discriminated with a trained classifier. We developed algorithms and software, and integrated them with an O3D imaging system to perform respiration tracking automatically. The resulting respiration tracking system requires no human intervention during its tracking operation. Experimental results show that our approach to respiration tracking is more accurate and robust than methods using manually selected markers, even in the presence of incomplete imaging data.
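
    The PCA decomposition described above can be sketched with synthetic surface frames; the breathing-like signal, noise level, and array sizes are illustrative assumptions:

```python
import numpy as np

# Hypothetical stand-in for an optical 3-D surface sequence: each row is one
# flattened surface frame; motion is dominated by a breathing-like mode.
rng = np.random.default_rng(0)
n_frames, n_points = 200, 500
phase = np.linspace(0, 8 * np.pi, n_frames)
pattern = rng.random(n_points)                     # spatial breathing mode
frames = (np.outer(np.sin(phase), pattern)
          + 0.01 * rng.standard_normal((n_frames, n_points)))

# PCA via SVD of the mean-centred data: the leading eigen-vectors span a
# low-dimensional motion space, and the projection weights give the trace.
mean = frames.mean(axis=0)
U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
explained = S ** 2 / (S ** 2).sum()
trace = (frames - mean) @ Vt[0]                    # weight of the first mode
print(f"first mode explains {explained[0]:.1%} of the variance")
```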

  8. Determination of constant-volume balloon capabilities for aeronautical research. [specifically measurement of atmospheric phenomena

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; King, R. L.

    1977-01-01

    The proper application of constant-volume balloons (CVB) for measurement of atmospheric phenomena was determined. And with the proper interpretation of the resulting data. A literature survey covering 176 references is included. the governing equations describing the three-dimensional motion of a CVB immersed in a flow field are developed. The flowfield model is periodic, three-dimensional, and nonhomogeneous, with mean translational motion. The balloon motion and flow field equations are cast into dimensionless form for greater generality, and certain significant dimensionless groups are identified. An alternate treatment of the balloon motion, based on first-order perturbation analysis, is also presented. A description of the digital computer program, BALLOON, used for numerically integrating the governing equations is provided.

  9. Independent motion detection with a rival penalized adaptive particle filter

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Hübner, Wolfgang; Arens, Michael

    2014-10-01

    Aggregation of pixel-based motion detection into regions of interest, which include views of single moving objects in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high-level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When capturing a scene with a moving camera, these two motion types are blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image which are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution which is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
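
    A minimal bootstrap particle filter (predict, weight, resample) illustrates the estimation machinery referenced above; this one-dimensional toy tracker is a generic sketch, not the paper's rival-penalized filter:

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter(obs, n_particles=1000, motion_std=0.5, obs_std=1.0):
    """Minimal bootstrap particle filter: predict, weight, resample."""
    particles = rng.normal(obs[0], obs_std, n_particles)
    estimates = []
    for z in obs:
        particles += rng.normal(0.0, motion_std, n_particles)    # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)      # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)          # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

truth = np.cumsum(rng.normal(0.0, 0.3, 100))     # random-walk target
obs = truth + rng.normal(0.0, 1.0, 100)          # noisy measurements
est = particle_filter(obs)
print(f"RMSE filtered: {np.sqrt(((est - truth) ** 2).mean()):.2f}")
print(f"RMSE raw obs:  {np.sqrt(((obs - truth) ** 2).mean()):.2f}")
```

With observation noise much larger than the motion noise, the filtered estimate should track the target noticeably better than the raw measurements.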

  10. 4D laser camera for accurate patient positioning, collision avoidance, image fusion and adaptive approaches during diagnostic and therapeutic procedures.

    PubMed

    Brahme, Anders; Nyman, Peter; Skatt, Björn

    2008-05-01

    A four-dimensional (4D) laser camera (LC) has been developed for accurate patient imaging in diagnostic and therapeutic radiology. A complementary metal-oxide semiconductor camera images the intersection of a scanned fan-shaped laser beam with the surface of the patient and allows real-time recording of movements in a three-dimensional (3D) or four-dimensional (4D) format (3D + time). The LC system was first designed as an accurate patient setup tool during diagnostic and therapeutic applications but was found to be of much wider applicability as a general 4D photon "tag" for the surface of the patient in different clinical procedures. It is presently used as a 3D or 4D optical benchmark or tag for accurate delineation of the patient surface, as demonstrated for patient auto setup and breathing and heart motion detection. Furthermore, its future potential applications in gating, adaptive therapy, 3D or 4D image fusion between most imaging modalities, and image processing are discussed. It is shown that the LC system has a geometrical resolution of about 0.1 mm and that the rigid-body repositioning accuracy is about 0.5 mm below 20 mm displacements, 1 mm below 40 mm, and better than 2 mm at 70 mm. This indicates a slight need for repeated repositioning when the initial error is larger than about 50 mm. The positioning accuracy with standard patient setup procedures for prostate cancer at Karolinska was found to be about 5-6 mm when independently measured using the LC system. The system was found valuable for positron emission tomography-computed tomography (PET-CT) in vivo tumor and dose delivery imaging, where it potentially may allow effective correction for breathing artifacts in 4D PET-CT and image fusion with lymph node atlases for accurate target volume definition in oncology. With an LC system in all imaging and radiation therapy rooms, auto setup during repeated diagnostic and therapeutic procedures may save around 5 min per session, increase accuracy, and allow efficient image fusion between all imaging modalities employed.

  11. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    PubMed Central

    Evangelista, Dennis J.; Ray, Dylan D.; Hedrick, Tyson L.

    2016-01-01

    Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts. PMID:27444791

  12. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking, and tracking of key points. The image acquisition system is currently composed of three synchronized progressive-scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self-calibration methods are applied to gain the exterior orientation of the cameras, the parameters of internal orientation, and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet, and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; the 3-D trajectory is then determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points, without using markers, and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity, and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside the region. The tracked key points lead to a final result comparable to conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
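
    The forward ray intersection step described above amounts to a least-squares intersection of rays from calibrated cameras; a generic sketch (camera positions and the target point are made up):

```python
import numpy as np

def forward_ray_intersection(centers, directions):
    """Least-squares 3-D point closest to a bundle of camera rays.

    centers : (m, 3) camera projection centres
    directions : (m, 3) unit ray directions
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        M = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += M
        b += M @ c
    return np.linalg.solve(A, b)

# Three cameras with rays through a known point recover that point exactly.
point = np.array([0.2, -0.4, 3.0])
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dirs = point - centers
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(forward_ray_intersection(centers, dirs))  # recovers [0.2, -0.4, 3.0]
```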

  13. Two-dimensional Imaging Velocity Interferometry: Technique and Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erskine, D J; Smith, R F; Bolme, C

    2011-03-23

    We describe the data analysis procedures for an emerging interferometric technique for measuring motion across a two-dimensional image at a moment in time, i.e., a snapshot 2d-VISAR. Velocity interferometers (VISAR) measuring target motion to high precision have been an important diagnostic in shockwave physics for many years. Until recently, this diagnostic has been limited to measuring motion at points or lines across a target. If a sufficiently fast movie-camera technology existed, it could be placed behind a traditional VISAR optical system and record a 2d image vs. time. But since that technology is not yet available, we use a CCD detector to record a single 2d image, with the pulsed nature of the illumination providing the time resolution. Consequently, since we are using pulsed illumination having a coherence length shorter than the VISAR interferometer delay (~0.1 ns), we must use the white-light velocimetry configuration to produce fringes with significant visibility. In this scheme, two interferometers (illuminating, detecting) having nearly identical delays are used in series, with one before the target and one after. This produces fringes with at most 50% visibility, but otherwise has the same fringe shift per target motion as a traditional VISAR. The 2d-VISAR observes a new world of information about shock behavior not readily accessible by traditional point or 1d-VISARs, simultaneously providing both a velocity map and an 'ordinary' snapshot photograph of the target. The 2d-VISAR has been used to observe nonuniformities in NIF-related targets (polycrystalline diamond, Be), and in Si and Al.
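
    The fringe shift per target motion mentioned above is quantified by the standard VISAR velocity-per-fringe relation; a sketch with illustrative numbers (the wavelength, delay, and fringe count are assumptions, and the dispersion correction is neglected):

```python
# Standard VISAR velocity-per-fringe relation (Barker & Hollenbach):
#     VPF = wavelength / (2 * tau * (1 + delta))
# where tau is the interferometer delay and delta a small dispersion correction.
wavelength = 532e-9   # m, illustrative probe-laser wavelength (assumption)
tau = 0.1e-9          # s, delay comparable to the ~0.1 ns quoted above
delta = 0.0           # dispersion correction neglected in this sketch

vpf = wavelength / (2 * tau * (1 + delta))   # m/s of target motion per fringe
fringe_shift = 1.5                           # fringes, hypothetical measurement
velocity = fringe_shift * vpf
print(f"VPF = {vpf:.0f} m/s per fringe -> v = {velocity:.0f} m/s")
```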

  14. Three-dimensional characterization of tethered microspheres by total internal reflection fluorescence microscopy

    NASA Technical Reports Server (NTRS)

    Blumberg, Seth; Gajraj, Arivalagan; Pennington, Matthew W.; Meiners, Jens-Christian

    2005-01-01

    Tethered particle microscopy is a powerful tool to study the dynamics of DNA molecules and DNA-protein complexes in single-molecule experiments. We demonstrate that stroboscopic total internal reflection microscopy can be used to characterize the three-dimensional spatiotemporal motion of DNA-tethered particles. By calculating characteristic measures such as symmetry and time constants of the motion, well-formed tethers can be distinguished from defective ones for which the motion is dominated by aberrant surface effects. This improves the reliability of measurements on tether dynamics. For instance, in observations of protein-mediated DNA looping, loop formation is distinguished from adsorption and other nonspecific events.

  15. Intraoperative implant rod three-dimensional geometry measured by dual camera system during scoliosis surgery.

    PubMed

    Salmingo, Remel Alingalan; Tadano, Shigeru; Abe, Yuichiro; Ito, Manabu

    2016-05-12

    Treatment for severe scoliosis is usually attained when the scoliotic spine is deformed and fixed by implant rods. Investigation of the intraoperative changes of implant rod shape in three dimensions is necessary to understand the biomechanics of scoliosis correction, establish consensus on the treatment, and achieve the optimal outcome. The objective of this study was to measure the intraoperative three-dimensional geometry and deformation of implant rods during scoliosis corrective surgery. A pair of images was obtained intraoperatively by the dual camera system before and after rotation of the rods during scoliosis surgery. The three-dimensional implant rod geometry was measured directly by the surgeon before implantation and after surgery using a CT scanner. The images of the rods were reconstructed in three dimensions using quintic polynomial functions. The implant rod deformation was evaluated using the angle between the two three-dimensional tangent vectors measured at the ends of the implant rod. The implant rods at the concave side were significantly deformed during surgery. The highest rod deformation was found after the rotation of the rods. The implant rod curvature was regained after the surgical treatment. Careful intraoperative rod maneuvering is important to achieve a safe clinical outcome because the intraoperative forces could be higher than the postoperative forces. Continuous scoliosis correction was observed, as indicated by the regaining of the implant rod curvature after surgery.
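    The deformation measure described (angle between the 3-D tangent vectors at the rod ends, after a quintic polynomial fit) can be sketched as follows; the rod data here are a synthetic circular arc, not the study's measurements.

```python
import numpy as np

def end_tangent_angle(points):
    """Fit a quintic polynomial to each coordinate of the rod centreline
    and return the angle (degrees) between the unit tangents at its ends."""
    t = np.linspace(0.0, 1.0, len(points))
    tangents = []
    for s in (0.0, 1.0):
        vec = np.array([np.polyval(np.polyder(np.polyfit(t, points[:, k], 5)), s)
                        for k in range(3)])
        tangents.append(vec / np.linalg.norm(vec))
    return float(np.degrees(np.arccos(np.clip(np.dot(*tangents), -1.0, 1.0))))

# A planar arc spanning 60 degrees: its end tangents differ by 60 degrees.
theta = np.linspace(0.0, np.pi / 3, 50)
arc = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)]
print(round(end_tangent_angle(arc), 1))  # 60.0
```

    A decrease in this angle between the pre-implantation and post-surgery fits indicates flattening of the rod's curvature.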

  16. Three-Dimensional Lissajous Figures.

    ERIC Educational Resources Information Center

    D'Mura, John M.

    1989-01-01

    Described is a mechanically driven device for generating three-dimensional harmonic space figures with different frequencies and phase angles on the X, Y, and Z axes. Discussed are the apparatus, viewing stereo pairs, equations of motion, and using space figures in the classroom. (YP)
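    The figures the device draws mechanically are easy to generate numerically: each axis is an independent sinusoid with its own frequency and phase. The frequency ratios and phases below are arbitrary examples.

```python
import numpy as np

def lissajous3d(freqs=(2, 3, 5), phases=(0.0, np.pi / 2, 0.0), n=2000):
    """Points on a 3-D Lissajous figure: sin(f*t + p) on each axis."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    return np.column_stack([np.sin(f * t + p) for f, p in zip(freqs, phases)])

pts = lissajous3d()
print(pts.shape)  # (2000, 3); the curve closes for integer frequency ratios
```

    Rendering two such point sets from slightly shifted viewpoints gives the stereo pairs mentioned in the record.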

  17. On the Transition from Two-Dimensional to Three-Dimensional MHD Turbulence

    NASA Technical Reports Server (NTRS)

    Thess, A.; Zikanov, Oleg

    2004-01-01

    We report a theoretical investigation of the robustness of two-dimensional inviscid MHD flows at low magnetic Reynolds numbers with respect to three-dimensional perturbations. We analyze three model problems, namely flow in the interior of a triaxial ellipsoid, an unbounded vortex with elliptical streamlines, and a vortex sheet parallel to the magnetic field. We demonstrate that motion perpendicular to the magnetic field with elliptical streamlines becomes unstable with respect to the elliptical instability once the velocity has reached a critical magnitude whose value tends to zero as the eccentricity of the streamlines becomes large. Furthermore, vortex sheets parallel to the magnetic field, which are unstable for any velocity and any magnetic field, are found to emit eddies with vorticity perpendicular to the magnetic field and with an aspect ratio proportional to N^(1/2). The results suggest that purely two-dimensional motion without Joule energy dissipation is a singular type of flow which does not represent the asymptotic behaviour of three-dimensional MHD turbulence in the limit of infinitely strong magnetic fields.

  18. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogunmolu, O; Gans, N; Jiang, S

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling the flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressurized air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduced to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
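    The overshoot and settling-time figures quoted above are standard step-response metrics and can be extracted from a recorded response as sketched below; the response here is a synthetic damped second-order step, not the robot's data, and the 2% settling band is an assumed convention.

```python
import numpy as np

def step_metrics(t, y, setpoint, band=0.02):
    """Fractional overshoot and settling time (last exit of the +/-band)."""
    overshoot = max(0.0, (float(y.max()) - setpoint) / setpoint)
    outside = np.abs(y - setpoint) > band * setpoint
    settling = float(t[np.nonzero(outside)[0][-1] + 1]) if outside.any() else float(t[0])
    return overshoot, settling

# Synthetic underdamped second-order step response (illustrative only).
t = np.linspace(0.0, 15.0, 3000)
wn, zeta = 1.0, 0.7
wd = wn * np.sqrt(1.0 - zeta**2)
y = 1.0 - np.exp(-zeta * wn * t) * (np.cos(wd * t) + zeta * wn / wd * np.sin(wd * t))
ov, ts = step_metrics(t, y, 1.0)
print(round(100 * ov, 1), round(ts, 1))  # a few % overshoot, settles in seconds
```

    Applying the same two functions to logged Kinect position data would reproduce the 6% overshoot and ~8 s settling numbers reported.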

  19. Three-dimensional displays and stereo vision

    PubMed Central

    Westheimer, Gerald

    2011-01-01

    Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023
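    The geometry behind depth rendition can be made concrete with the basic stereo relation Z = f·b/d (focal length f, camera baseline b, disparity d, in consistent units): changing the camera baseline relative to the observer's interocular distance rescales reconstructed depth. The numbers below are illustrative.

```python
# Minimal sketch of the disparity-to-depth relation underlying
# stereo displays. All parameter values are invented for illustration.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth Z = f * b / d for a rectified stereo pair."""
    return f_px * baseline_m / disparity_px

# Human-like geometry: 6.5 cm baseline, 800 px focal length
z = depth_from_disparity(f_px=800.0, baseline_m=0.065, disparity_px=26.0)
print(z)  # 2.0 m
```

    Doubling the capture baseline halves the disparity needed for a given depth, which is one way the rendition of depth departs from natural viewing.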

  20. a Novel Technique for Precision Geometric Correction of Jitter Distortion for the Europa Imaging System and Other Rolling-Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Shepherd, M.; Sides, S. C.

    2018-04-01

    We use simulated images to demonstrate a novel technique for mitigating geometric distortions caused by platform motion ("jitter") as two-dimensional image sensors are exposed and read out line by line ("rolling shutter"). The results indicate that the Europa Imaging System (EIS) on NASA's Europa Clipper can likely meet its scientific goals requiring 0.1-pixel precision. We are therefore adapting the software used to demonstrate and test rolling shutter jitter correction to become part of the standard processing pipeline for EIS. The correction method will also apply to other rolling-shutter cameras, provided they have the operational flexibility to read out selected "check lines" at chosen times during the systematic readout of the frame area.
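    The core idea of rolling-shutter jitter correction is that each image line is exposed at a slightly different time, so a known pointing history can be sampled per line and the resulting column shift undone. The sinusoidal jitter model and readout rate below are invented for illustration; they are not EIS parameters.

```python
import numpy as np

def per_line_shift(n_lines, line_time_s, jitter_fn):
    """Column shift (pixels) accumulated by each line, given a pointing
    jitter history sampled at that line's readout time."""
    t = np.arange(n_lines) * line_time_s
    return jitter_fn(t)

# Assumed 30 Hz, 0.5-pixel-amplitude jitter and a 20 microsecond line time.
jitter = lambda t: 0.5 * np.sin(2.0 * np.pi * 30.0 * t)
shifts = per_line_shift(n_lines=1000, line_time_s=20e-6, jitter_fn=jitter)
print(shifts.shape, round(float(np.abs(shifts).max()), 2))
```

    Correction then amounts to resampling each line by the negative of its shift; the "check lines" mentioned above would serve to estimate the jitter function rather than assuming it known.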

  1. Moving Object Detection on a Vehicle Mounted Back-Up Camera

    PubMed Central

    Kim, Dong-Sun; Kwon, Jinsan

    2015-01-01

    In the detection of moving objects from vision sources one usually assumes that the scene has been captured by stationary cameras. In the case of backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle's movement, resulting in ego-motion in the background. This produces mixed motion in the scene and makes it difficult to distinguish between the target objects and background motions. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will produce many false-positive detections. In this paper, we suggest a procedure to be used with traditional moving object detection methods that relaxes the stationary-camera restriction by introducing additional steps before and after the detection. We also describe the implementation on an FPGA platform along with the algorithm. The target application of the proposed method is road vehicles' rear-view camera systems. PMID:26712761
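    One simple pre-processing step of the kind described is to estimate the global (ego-)motion between frames and cancel it before differencing. The sketch below estimates a pure integer translation by phase correlation; this is a stand-in for the paper's actual method, which must handle more general camera motion.

```python
import numpy as np

def global_shift(a, b):
    """Estimate the integer (dy, dx) translation taking frame a to frame b
    via phase correlation on the cross-power spectrum."""
    f = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    wrap = lambda d, n: d - n if d > n // 2 else d  # map to signed shift
    return int(wrap(dy, a.shape[0])), int(wrap(dx, a.shape[1]))

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
moved = np.roll(frame, (3, -5), axis=(0, 1))   # simulated ego-motion
print(global_shift(frame, moved))              # (3, -5)
```

    After warping the previous frame by the estimated shift, ordinary frame differencing again isolates independently moving objects.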

  2. Three-dimensional intrafractional internal target motions in accelerated partial breast irradiation using three-dimensional conformal external beam radiotherapy.

    PubMed

    Hirata, Kimiko; Yoshimura, Michio; Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Inoue, Minoru; Sasaki, Makoto; Fujimoto, Takahiro; Yano, Shinsuke; Nakata, Manabu; Mizowaki, Takashi; Hiraoka, Masahiro

    2017-07-01

    We evaluated three-dimensional intrafractional target motion, divided into respiratory-induced motion and baseline drift, in accelerated partial breast irradiation (APBI). Paired fluoroscopic images were acquired simultaneously using orthogonal kV X-ray imaging systems at pre- and post-treatment for 23 patients who underwent APBI with external beam radiotherapy. The internal target motion was calculated from the surgical clips placed around the tumour cavity. The peak-to-peak respiratory-induced motions ranged from 0.6 to 1.5 mm in all directions. A systematic baseline drift of 1.5 mm towards the posterior direction and a random baseline drift of 0.3 mm in the lateral-medial and cranial-caudal directions were observed. The baseline for an outer tumour cavity drifted towards the lateral and posterior directions, and that for an upper tumour cavity drifted towards the cranial direction. Moderate correlations were observed between the posterior baseline drift and the patients' physical characteristics. The posterior margin for intrafractional uncertainties was larger than 5 mm in patients with greater fat thickness due to the baseline drift. The magnitude of the intrafractional motion was not uniform with respect to direction, patients' physical characteristics, or tumour cavity location due to the baseline drift. Therefore, the intrafractional systematic movement should be properly managed. Copyright © 2017 Elsevier B.V. All rights reserved.
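    The decomposition of clip motion into baseline drift plus respiratory motion can be sketched by smoothing over whole breathing cycles: a moving average spanning one cycle removes the periodic component and leaves the drift. The window choice and the synthetic trajectory below are illustrative, not the study's processing.

```python
import numpy as np

def decompose(signal, win):
    """Split a 1-D trajectory into baseline drift (moving average over
    `win` samples) and the residual respiratory component."""
    pad = np.pad(signal, win // 2, mode="edge")
    drift = np.convolve(pad, np.ones(win) / win, mode="valid")[:len(signal)]
    return drift, signal - drift

t = np.linspace(0.0, 60.0, 1200)                  # 60 s sampled at 20 Hz
resp = 0.75 * np.sin(2.0 * np.pi * 0.25 * t)      # ~1.5 mm peak-to-peak breathing
drift_true = 1.5 * t / 60.0                       # 1.5 mm slow posterior drift
drift_est, resp_est = decompose(resp + drift_true, win=80)  # 4 s = one cycle
print(round(float(resp_est.max() - resp_est.min()), 1))     # ~1.5 mm recovered
```

    Averaging over an integer number of breathing cycles is what makes the sinusoidal component cancel exactly in the drift estimate.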

  3. Discriminating Rigid from Nonrigid Motion

    DTIC Science & Technology

    1989-07-31

    motion can be given a three-dimensional interpretation using a constraint of rigidity. Kruppa's result and others (Faugeras & Maybank, 1989; Huang...Experimental Psychology: Human Perception and Performance, 10, 1-11. Faugeras, O., & Maybank, S. (1989). Motion from point matches: multiplicity of

  4. Kinematics and Flow Evolution of a Flexible Wing in Stall Flutter

    NASA Astrophysics Data System (ADS)

    Farnsworth, John; Akkala, James; Buchholz, James; McLaughlin, Thomas

    2014-11-01

    Large amplitude stall flutter limit cycle oscillations were observed on an aspect ratio six finite span NACA0018 flexible wing model at a free stream velocity of 23 m/s and an initial angle of attack of six degrees. The wing motion was characterized by periodic oscillations of predominately a torsional mode at a reduced frequency of k = 0.1. The kinematics were quantified via stereoscopic tracking of the wing surface with high speed camera imaging and direct linear transformation. Simultaneously acquired accelerometer measurements were used to track the wing motion and trigger the collection of two-dimensional particle image velocimetry field measurements to the phase angle of the periodic motion. Aerodynamically, the flutter motion is driven by the development and shedding of a dynamic stall vortex system, the evolution of which is characterized and discussed. This work was supported by the AFOSR Flow Interactions and Control Portfolio monitored by Dr. Douglas Smith and the AFOSR/ASEE Summer Faculty Fellowship Program (JA and JB).

  5. Pattern formation and three-dimensional instability in rotating flows

    NASA Astrophysics Data System (ADS)

    Christensen, Erik A.; Aubry, Nadine; Sorensen, Jens N.

    1997-03-01

    A fluid flow enclosed in a cylindrical container where fluid motion is created by the rotation of one end wall as a centrifugal fan is studied. Direct numerical simulations and spatio-temporal analysis have been performed in the early transition scenario, which includes a steady-unsteady transition and a breakdown of axisymmetric to three-dimensional flow behavior. In the early unsteady regime of the flow, the central vortex undergoes a vertical beating motion, accompanied by axisymmetric spikes formation on the edge of the breakdown bubble. As traveling waves, the spikes move along the central vortex core toward the rotating end-wall. As the Reynolds number is increased further, the flow undergoes a three-dimensional instability. The influence of the latter on the previous patterns is studied.

  6. Three-dimensional organization of vestibular related eye movements to rotational motion in pigeons

    NASA Technical Reports Server (NTRS)

    Dickman, J. D.; Beyer, M.; Hess, B. J.

    2000-01-01

    During rotational motions, compensatory eye movement adjustments must continually occur in order to maintain objects of visual interest as stable images on the retina. In the present study, the three-dimensional organization of the vestibulo-ocular reflex in pigeons was quantitatively examined. Rotations about different head axes produced horizontal, vertical, and torsional eye movements, whose component magnitude was dependent upon the cosine of the stimulus axis relative to the animal's visual axis. Thus, the three-dimensional organization of the VOR in pigeons appears to be compensatory for any direction of head rotation. Frequency responses of the horizontal, vertical, and torsional slow phase components exhibited high pass filter properties with dominant time constants of approximately 3 s.
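    The cosine dependence described above amounts to each eye-movement component scaling with the cosine of the angle between the stimulus rotation axis and the relevant reference axis. The peak gain value below is illustrative.

```python
import numpy as np

def component_gain(peak_gain, angle_deg):
    """VOR component magnitude as a cosine of the stimulus-axis angle."""
    return float(peak_gain * np.cos(np.radians(angle_deg)))

for a in (0, 45, 90):
    print(a, round(component_gain(1.0, a), 3))  # 1.0, 0.707, 0.0
```

    A high-pass response with a ~3 s dominant time constant, as reported, would further attenuate these gains at low stimulus frequencies.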

  7. A three-dimensional autonomous nonlinear dynamical system modelling equatorial ocean flows

    NASA Astrophysics Data System (ADS)

    Ionescu-Kruse, Delia

    2018-04-01

    We investigate a nonlinear three-dimensional model for equatorial flows, finding exact solutions that capture the most relevant geophysical features: depth-dependent currents, poleward or equatorial surface drift and a vertical mixture of upward and downward motions.

  8. Study of journal bearing dynamics using 3-dimensional motion picture graphics

    NASA Technical Reports Server (NTRS)

    Brewe, D. E.; Sosoka, D. J.

    1985-01-01

    Computer generated motion pictures of three dimensional graphics are being used to analyze journal bearings under dynamically loaded conditions. The motion pictures simultaneously present the motion of the journal and the pressures predicted within the fluid film of the bearing as they evolve in time. The correct prediction of these fluid film pressures can be complicated by the development of cavitation within the fluid. The numerical model that is used predicts the formation of the cavitation bubble and its growth, downstream movement, and subsequent collapse. A complete physical picture is created in the motion picture as the journal traverses through the entire dynamic cycle.

  9. Multiple-camera/motion stereoscopy for range estimation in helicopter flight

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.

    1993-01-01

    Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success in solving this problem by using an image sequence from a single moving camera. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm will be illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
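    In the same spirit as the recursive range estimation described, the sketch below runs a scalar extended Kalman filter on a feature whose inter-frame image displacement under a known lateral camera translation b obeys du = f·b/Z. All numbers (focal length, baseline, noise levels) are invented, and this is a simplified stand-in, not the paper's algorithm.

```python
import numpy as np

def refine_range(measurements, f_px, b_m, z0, p0, r):
    """Scalar EKF on a static range Z; measurement model du = f*b/Z."""
    z, p = z0, p0
    for du in measurements:
        pred = f_px * b_m / z
        h = -f_px * b_m / z**2        # linearized measurement Jacobian
        s = h * p * h + r             # innovation variance
        k = p * h / s                 # Kalman gain
        z = z + k * (du - pred)
        p = (1.0 - k * h) * p
    return float(z)

rng = np.random.default_rng(2)
true_z = 40.0                          # metres
du_true = 800.0 * 0.5 / true_z         # 10 px displacement per frame
meas = du_true + rng.normal(0.0, 0.5, size=200)
z_hat = refine_range(meas, f_px=800.0, b_m=0.5, z0=60.0, p0=100.0, r=0.25)
print(round(z_hat, 1))  # converges close to the true 40 m
```

    The full algorithm also propagates the state through known camera motion and predicts the feature's location in future images, which this scalar sketch omits.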

  10. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  11. Radiation camera motion correction system

    DOEpatents

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)
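    The patent's idea can be sketched as follows: the intensity ratio between two portions of the object is held at a target value, and any deviation drives a correction combined with the camera signal. The gain and signal values are invented for illustration.

```python
# Minimal sketch of ratio-based motion correction. Illustrative only;
# the actual patent circuit develops this correction in analog hardware.

def correction(i_a, i_b, target_ratio, gain=1.0):
    """Correction signal proportional to the ratio's deviation from target."""
    return gain * (i_a / i_b - target_ratio)

print(correction(120.0, 100.0, 1.2))            # 0.0: ratio on target, no motion
print(round(correction(132.0, 100.0, 1.2), 2))  # positive: object has moved
```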

  12. A vision-based system for measuring the displacements of large structures: Simultaneous adaptive calibration and full motion estimation

    NASA Astrophysics Data System (ADS)

    Santos, C. Almeida; Costa, C. Oliveira; Batista, J.

    2016-05-01

    The paper describes a kinematic model-based solution to estimate simultaneously the calibration parameters of the vision system and the full-motion (6-DOF) of large civil engineering structures, namely of long deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming a smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure full-motion (displacement and rotation) over time, helping to meet the structure health monitoring fulfilment. Results related to the performance evaluation, obtained by numerical simulation and with real experiments, are reported. The real experiments were carried out in indoor and outdoor environment using a reduced structure model to impose controlled motions. In both cases, the results obtained with a minimum setup comprising only two cameras and four non-coplanar tracking points, showed a high accuracy results for on-line camera calibration and structure full motion estimation.

  13. A clinical study of the biomechanics of step descent using different treatment modalities for patellofemoral pain.

    PubMed

    Selfe, James; Thewlis, Dominic; Hill, Stephen; Whitaker, Jonathan; Sutton, Chris; Richards, Jim

    2011-05-01

    In the previous study we have demonstrated that in healthy subjects significant changes in coronal and transverse plane mechanics can be produced by the application of a neutral patella taping technique and a patellar brace. Recently it has also been identified that patients with patellofemoral pain syndrome (PFPS) display alterations in gait in the coronal and transverse planes. This study investigated the effect of patellar bracing and taping on the three-dimensional mechanics of the knee of patellofemoral pain patients during a step descent task. Thirteen patients diagnosed with patellofemoral pain syndrome performed a slow step descent. This was conducted under three randomized conditions: (a) no intervention, (b) neutral patella taping, (c) patellofemoral bracing. A 20cm step was constructed to accommodate an AMTI force platform. Kinematic data were collected using a ten camera infra-red Oqus motion analysis system. Reflective markers were placed on the foot, shank and thigh using the Calibrated Anatomical System Technique (CAST). The coronal plane knee range of motion was significantly reduced with taping (P=0.031) and bracing (P=0.005). The transverse plane showed a significant reduction in the knee range of motion with the brace compared to taping (P=0.032) and no treatment (P=0.046). Patients suffering from patellofemoral pain syndrome demonstrated improved coronal plane and torsional control of the knee during slow step descent following the application of bracing and taping. This study further reinforces the view that coronal and transverse plane mechanics should not be overlooked when studying patellofemoral pain. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Vestibular coriolis effect differences modeled with three-dimensional linear-angular interactions.

    PubMed

    Holly, Jan E

    2004-01-01

    The vestibular coriolis (or "cross-coupling") effect is traditionally explained by cross-coupled angular vectors, which, however, do not explain the differences in perceptual disturbance under different acceleration conditions. For example, during head roll tilt in a rotating chair, the magnitude of perceptual disturbance is affected by a number of factors, including acceleration or deceleration of the chair rotation or a zero-g environment. Therefore, it has been suggested that linear-angular interactions play a role. The present research investigated whether these perceptual differences and others involving linear coriolis accelerations could be explained under one common framework: the laws of motion in three dimensions, which include all linear-angular interactions among all six components of motion (three angular and three linear). The results show that the three-dimensional laws of motion predict the differences in perceptual disturbance. No special properties of the vestibular system or nervous system are required. In addition, simulations were performed with angular, linear, and tilt time constants inserted into the model, giving the same predictions. Three-dimensional graphics were used to highlight the manner in which linear-angular interaction causes perceptual disturbance, and a crucial component is the Stretch Factor, which measures the "unexpected" linear component.
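    The classical cross-coupled term mentioned at the start can be written down directly: rolling the head while the chair spins produces a transient angular-velocity component along the mutually perpendicular axis, given by the cross product of the two angular velocities. The magnitudes below are illustrative.

```python
import numpy as np

# Head roll (about x) during chair yaw rotation (about z); the
# cross-coupled component appears about the pitch (y) axis.
chair = np.array([0.0, 0.0, 3.0])   # rad/s, chair yaw
head = np.array([1.0, 0.0, 0.0])    # rad/s, head roll
cross_coupled = np.cross(head, chair)
print(cross_coupled)                # 3 rad/s magnitude along the pitch axis
```

    The paper's point is that this angular term alone does not explain the perceptual differences; the full three-dimensional laws of motion, including linear-angular interactions, are needed.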

  15. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing desired buildings and houses, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military areas and so on. However, most technologies provide 3D display in front of screens that are parallel with the walls, and the sense of immersion is decreased. To get the correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the public focus plane, and the cameras' optical axes should be offset toward the center of the public focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display 3D models in a computer system. We can use virtual cameras to simulate the shooting method of multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the observer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near clip plane parameter setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. In order to validate the results, we use Direct3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for viewing horizontally, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
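    The offset perspective projection mentioned above corresponds to an off-axis (asymmetric) frustum: when the camera's optical axis is offset toward the centre of the shared focus plane, the near-plane bounds become asymmetric and a skew term appears in the projection matrix. The sketch below builds the standard OpenGL-style frustum matrix with illustrative bounds.

```python
import numpy as np

def off_axis_frustum(l, r, b, t, n, f):
    """OpenGL-convention perspective matrix for near-plane bounds
    [l, r] x [b, t] at distance n, far plane at f. Asymmetric bounds
    (l != -r or b != -t) give an off-axis, i.e. offset, projection."""
    return np.array([
        [2 * n / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * n / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(f + n) / (f - n), -2 * f * n / (f - n)],
        [0.0, 0.0, -1.0, 0.0],
    ])

m = off_axis_frustum(-0.5, 1.5, -1.0, 1.0, n=1.0, f=100.0)
print(m[0, 2])  # nonzero skew term encodes the horizontal offset
```

    This is why the near clip plane parameters are the key setting in the first shooting method: they carry the viewer-dependent offset.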

  16. Gesture-Controlled Interfaces for Self-Service Machines

    NASA Technical Reports Server (NTRS)

    Cohen, Charles J.; Beach, Glenn

    2006-01-01

    Gesture-controlled interfaces are software-driven systems that facilitate device control by translating visual hand and body signals into commands. Such interfaces could be especially attractive for controlling self-service machines (SSMs), for example, public information kiosks, ticket dispensers, gasoline pumps, and automated teller machines (see figure). A gesture-controlled interface would include a vision subsystem comprising one or more charge-coupled-device video cameras (at least two would be needed to acquire three-dimensional images of gestures). The output of the vision system would be processed by a pure software gesture-recognition subsystem. Then a translator subsystem would convert a sequence of recognized gestures into commands for the SSM to be controlled; these could include, for example, a command to display requested information, change control settings, or actuate a ticket- or cash-dispensing mechanism. Depending on the design and operational requirements of the SSM to be controlled, the gesture-controlled interface could be designed to respond to specific static gestures, dynamic gestures, or both. Static and dynamic gestures can include stationary or moving hand signals, arm poses or motions, and/or whole-body postures or motions. Static gestures would be recognized on the basis of their shapes; dynamic gestures would be recognized on the basis of both their shapes and their motions. Because dynamic gestures include temporal as well as spatial content, this gesture-controlled interface can extract more information from dynamic than it can from static gestures.

  17. EVA: laparoscopic instrument tracking based on Endoscopic Video Analysis for psychomotor skills assessment.

    PubMed

    Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J

    2013-03-01

    The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information of the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validation of the EVA has been obtained for seven motion-based metrics. Concurrent validation revealed that there is a strong correlation between the results obtained by EVA and the TrEndo for metrics, such as path length (ρ = 0.97), average speed (ρ = 0.94), or economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
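    Two of the motion-based metrics named above can be computed from a sampled 3-D instrument-tip trajectory as sketched below. The definitions follow common conventions in the minimally-invasive-surgery skills literature and may differ in detail from EVA's; the trajectory is synthetic.

```python
import numpy as np

def path_length(traj):
    """Total path length of an (n, 3) sampled trajectory."""
    return float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)))

def average_speed(traj, dt):
    """Mean speed given a uniform sampling interval dt."""
    return path_length(traj) / (dt * (len(traj) - 1))

# Synthetic tip path: one helix turn with unit rise.
t = np.linspace(0.0, 2.0 * np.pi, 1001)
helix = np.c_[np.cos(t), np.sin(t), t / (2.0 * np.pi)]
print(round(path_length(helix), 3))  # ~ sqrt((2*pi)^2 + 1)
```

    Correlating such metrics between two tracking systems, as the validation study does, is what the reported Spearman coefficients (e.g. ρ = 0.97 for path length) quantify.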

  18. Integration of fringe projection and two-dimensional digital image correlation for three-dimensional displacements measurements

    NASA Astrophysics Data System (ADS)

    Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.

    2016-12-01

    A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied to the analysis of large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques have been employed simultaneously using an RGB camera and a color encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speed rates. The potential of the proposed methodology has been demonstrated in the analysis of large displacements during contact experiments on a soft material block. Displacement results have been successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the application of the methodology to dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large-deformation problems.

  19. Three-dimensional turbulent boundary layers; Proceedings of the Symposium, Berlin, West Germany, March 29-April 1, 1982

    NASA Astrophysics Data System (ADS)

    Fernholz, H. H.; Krause, E.

    Papers are presented on recent research concerning three-dimensional turbulent boundary layers. Topics examined include experimental techniques in three-dimensional turbulent boundary layers, turbulence measurements in ship-model flow, measurements of Reynolds-stress profiles in the stern region of a ship model, the effects of crossflow on the vortex-layer-type three-dimensional flow separation, and wind tunnel investigations of some three-dimensional separated turbulent boundary layers. Also examined are three-dimensional boundary layers in turbomachines, the boundary layers on bodies of revolution spinning in axial flows, the effect on a developed turbulent boundary layer of a sudden local wall motion, the three-dimensional turbulent boundary layer along a concave wall, the numerical computation of three-dimensional boundary layers, a numerical study of corner flows, three-dimensional boundary-layer calculations in design aerodynamics, and turbulent boundary-layer calculations in design aerodynamics. For individual items see A83-47012 to A83-47036.

  20. A geometrically exact formulation for three-dimensional numerical simulation of the umbilical cable in a deep-sea ROV system

    NASA Astrophysics Data System (ADS)

    Quan, Wei-cai; Zhang, Zhu-ying; Zhang, Ai-qun; Zhang, Qi-feng; Tian, Yu

    2015-04-01

    This paper proposes a geometrically exact formulation for three-dimensional static and dynamic analyses of the umbilical cable in a deep-sea remotely operated vehicle (ROV) system. The presented formulation takes account of the geometric nonlinearities of large displacement and the effects of axial load and bending stiffness for the modeling of slack cables. The resulting nonlinear second-order governing equations are discretized spatially by the finite element method and solved temporally by the generalized-α implicit time integration algorithm, which is adapted to the case of varying coefficient matrices. The ability to consider the combined three-dimensional action of ocean current and ship heave motion upon the umbilical cable is the key feature of this analysis. The presented formulation is first validated, and then three numerical examples for the umbilical cable in a deep-sea ROV system are demonstrated and discussed, including the steady configurations under the action of depth-dependent ocean current alone, and the dynamic responses in the case of ship heave motion alone and in the case of the combined action of ship heave motion and ocean current.

  1. Tomographic PIV behind a prosthetic heart valve

    NASA Astrophysics Data System (ADS)

    Hasler, D.; Landolt, A.; Obrist, D.

    2016-05-01

    The instantaneous three-dimensional velocity field past a bioprosthetic heart valve was measured using tomographic particle image velocimetry. Two digital cameras were used together with a mirror setup to record PIV images from four different angles. Measurements were conducted in a transparent silicone phantom with a simplified geometry of the aortic root. The refraction indices of the silicone phantom and the working fluid were matched to minimize optical distortion from the flow field to the cameras. The silicone phantom of the aorta was integrated in a flow loop driven by a piston pump. Measurements were conducted for steady and pulsatile flow conditions. Results of the instantaneous, ensemble and phase-averaged flow field are presented. The three-dimensional velocity field reveals a flow topology, which can be related to features of the aortic valve prosthesis.

  2. Hybrid three-dimensional and support vector machine approach for automatic vehicle tracking and classification using a single camera

    NASA Astrophysics Data System (ADS)

    Kachach, Redouane; Cañas, José María

    2016-05-01

    Using video in traffic monitoring is one of the most active research domains in the computer vision community. TrafficMonitor, a system that employs a hybrid approach for automatic vehicle tracking and classification on highways using a simple stationary calibrated camera, is presented. The proposed system consists of three modules: vehicle detection, vehicle tracking, and vehicle classification. Moving vehicles are detected by an enhanced Gaussian mixture model background estimation algorithm. The design includes a technique to resolve the occlusion problem by using a combination of a two-dimensional proximity tracking algorithm and the Kanade-Lucas-Tomasi feature tracking algorithm. The last module classifies the shapes identified into five vehicle categories: motorcycle, car, van, bus, and truck by using three-dimensional templates and an algorithm based on the histogram of oriented gradients and the support vector machine classifier. Several experiments have been performed using both real and simulated traffic in order to validate the system. The experiments were conducted on the GRAM-RTM dataset and on a real video dataset that is made publicly available as part of this work.
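    The two-dimensional proximity step of the tracking module can be sketched as a greedy nearest-neighbour association of detected vehicle centroids between consecutive frames. This is a minimal illustration under assumed conventions, not the TrafficMonitor implementation; the function name, the distance gate, and the greedy tie-breaking are all assumptions.

```python
import numpy as np

def associate(prev_centroids, new_centroids, max_dist=50.0):
    """Greedily match each previous-frame centroid to its nearest
    new-frame centroid, within a maximum pixel distance, using each
    new centroid at most once."""
    matches = {}
    used = set()
    for i, p in enumerate(prev_centroids):
        d = np.linalg.norm(new_centroids - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist and j not in used:
            matches[i] = j          # track i continues as detection j
            used.add(j)
    return matches
```

    A real tracker would add occlusion handling (the role of the KLT features in the paper) and track creation/deletion on top of this association step.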

  3. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general-purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application, or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file-size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats.
The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.
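    The photogrammetric step that recovers three-dimensional landmark trajectories from two or more matched views rests on triangulation. Below is a minimal linear (DLT) triangulation sketch, assuming two calibrated cameras with known 3x4 projection matrices; it illustrates the standard technique, not 4DCapture's internal code.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark seen by two calibrated
    cameras. P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) image
    coordinates of the same landmark. Returns the 3-D point."""
    # Each view contributes two homogeneous linear constraints on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null vector of A (least-squares sense)
    return X[:3] / X[3]           # dehomogenize
```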

  4. Mechanical Design of the LSST Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordby, Martin; Bowden, Gordon; Foss, Mike

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It comprises three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  5. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features which make them appear quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides the discussion of technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the impossibility of synchronizing multiple devices, which limits their suitability for 3-D motion data capture. Moreover, the standard video format contains interlacing, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer-interface functionality that is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  6. Towards building a team of intelligent robots

    NASA Technical Reports Server (NTRS)

    Varanasi, Murali R.; Mehrotra, R.

    1987-01-01

    Topics addressed include: collision-free motion planning of multiple robot arms; two-dimensional object recognition; and pictorial databases (storage and sharing of the representations of three-dimensional objects).

  7. Earth orbital teleoperator visual system evaluation program

    NASA Technical Reports Server (NTRS)

    Frederick, P. N.; Shields, N. L., Jr.; Kirkpatrick, M., III

    1977-01-01

    Visual system parameters and stereoptic television component geometries were evaluated for optimum viewing. The accuracy of operator range estimation using a Fresnel stereo television system with a three-dimensional cursor was examined. An operator's ability to align three-dimensional targets using vidicon tube and solid state television cameras as part of a Fresnel stereoptic system was evaluated. An operator's ability to discriminate between varied color samples viewed with a color television system was determined.

  8. More About The Farley Three-Dimensional Braider

    NASA Technical Reports Server (NTRS)

    Farley, Gary L.

    1993-01-01

    Farley three-dimensional braider, undergoing development, is machine for automatic fabrication of three-dimensional braided structures. Incorporates yarns into structure at arbitrary braid angles to produce complicated shape. Braiding surface includes movable braiding segments containing pivot points, along which yarn carriers travel during braiding process. Yarn carrier travels along sequence of pivot points as braiding segments move. Combined motions position yarns for braiding onto preform. Intended for use in making fiber preforms for fiber/matrix composite parts, such as multiblade propellers. Machine also described in "Farley Three-Dimensional Braiding Machine" (LAR-13911).

  9. The association between left ventricular twisting motion and mechanical dyssynchrony: a three-dimensional speckle tracking study.

    PubMed

    Fujiwara, Shohei; Komamura, Kazuo; Nakabo, Ayumi; Masaki, Mitsuru; Fukui, Miho; Sugahara, Masataka; Itohara, Kanako; Soyama, Yuko; Goda, Akiko; Hirotani, Shinichi; Mano, Toshiaki; Masuyama, Tohru

    2016-02-01

    Left ventricular (LV) dyssynchrony is a causal factor in LV dysfunction and thought to be associated with LV twisting motion. We tested whether three-dimensional speckle tracking (3DT) can be used to evaluate the relationship between LV twisting motion and dyssynchrony. We examined 25 patients with sick sinus syndrome who had received dual chamber pacemakers. The acute effects of ventricular pacing on LV wall motion after the switch from atrial to ventricular pacing were assessed. LV twisting motion and dyssynchrony during each pacing mode were measured using 3DT. LV dyssynchrony was calculated from the time to the minimum peak systolic area strain of 16 LV imaging segments. Ventricular pacing increased LV dyssynchrony and decreased twist and torsion. A significant correlation was observed between changes in LV dyssynchrony and changes in torsion (r = -0.65, p < 0.01). Evaluation of LV twisting motion can potentially be used for diagnosing LV dyssynchrony.
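    A common way to reduce the 16 segmental time-to-peak measurements described above to a single dyssynchrony number is their standard deviation (often called a systolic dyssynchrony index). The sketch below assumes that definition; the study may normalize differently (for example, by cycle length), so this is illustrative only.

```python
import statistics

def dyssynchrony_index(times_to_peak_ms):
    """Dyssynchrony index as the (population) standard deviation of the
    time to minimum peak systolic area strain across 16 LV segments."""
    assert len(times_to_peak_ms) == 16, "expects one value per LV segment"
    return statistics.pstdev(times_to_peak_ms)
```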

  10. A computer code for three-dimensional incompressible flows using nonorthogonal body-fitted coordinate systems

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.

    1986-01-01

    In this report, a numerical method for solving the equations of motion of three-dimensional incompressible flows in nonorthogonal body-fitted coordinate (BFC) systems has been developed. The equations of motion are transformed to a generalized curvilinear coordinate system from which the transformed equations are discretized using finite difference approximations in the transformed domain. The hybrid scheme is used to approximate the convection terms in the governing equations. Solutions of the finite difference equations are obtained iteratively by using a pressure-velocity correction algorithm (SIMPLE-C). Numerical examples of two- and three-dimensional, laminar and turbulent flow problems are employed to evaluate the accuracy and efficiency of the present computer code. The user's guide and computer program listing of the present code are also included.
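    The hybrid scheme referred to above blends central and upwind differencing according to the cell Peclet number. As a one-dimensional illustration (a model convection-diffusion equation with unit diffusivity, not the report's three-dimensional BFC code), Patankar's hybrid coefficients and the resulting linear solve can be sketched as follows; the function name and grid setup are assumptions.

```python
import numpy as np

def hybrid_cd(n, pe):
    """Steady 1-D convection-diffusion pe*dphi/dx = d2phi/dx2 on [0,1]
    with phi(0)=0, phi(1)=1, discretized with the hybrid scheme:
    central differencing where the cell Peclet number is below 2,
    first-order upwind otherwise. n interior nodes."""
    dx = 1.0 / (n + 1)
    F, D = pe, 1.0 / dx                 # convective / diffusive conductances
    aE = max(-F, D - 0.5 * F, 0.0)      # hybrid east coefficient
    aW = max(F, D + 0.5 * F, 0.0)       # hybrid west coefficient
    aP = aE + aW                        # continuity: Fe - Fw = 0
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = aP
        if i > 0:
            A[i, i - 1] = -aW
        # west boundary phi(0)=0 contributes nothing to b
        if i < n - 1:
            A[i, i + 1] = -aE
        else:
            b[i] += aE * 1.0            # east boundary phi(1)=1
    return np.linalg.solve(A, b)
```

    At low cell Peclet number this reduces to pure central differencing and closely tracks the exact exponential profile.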

  11. Recommended survey designs for occupancy modelling using motion-activated cameras: insights from empirical wildlife data

    PubMed Central

    Lewis, Jesse S.; Gerber, Brian D.

    2014-01-01

    Motion-activated cameras are a versatile tool that wildlife biologists can use for sampling wild animal populations to estimate species occurrence. Occupancy modelling provides a flexible framework for the analysis of these data, explicitly recognizing that, given a species occupies an area, the probability of detecting it is often less than one. Despite the number of studies using camera data in an occupancy framework, there is only limited guidance from the scientific literature about survey design trade-offs when using motion-activated cameras. A fuller understanding of these trade-offs will allow researchers to maximise available resources and determine whether the objectives of a monitoring program or research study are achievable. We use an empirical dataset collected from 40 cameras deployed across 160 km² of the Western Slope of Colorado, USA, to explore how survey effort (number of cameras deployed and the length of sampling period) affects the accuracy and precision (i.e., error) of the occupancy estimate for ten mammal and three virtual species. We do this using a simulation approach where species occupancy and detection parameters were informed by empirical data from motion-activated cameras. A total of 54 survey designs were considered by varying combinations of sites (10–120 cameras) and occasions (20–120 survey days). Our findings demonstrate that increasing total sampling effort generally decreases error associated with the occupancy estimate, but changing the number of sites or sampling duration can have very different results, depending on whether a species is spatially common or rare (occupancy = ψ) and easy or hard to detect when available (detection probability = p). For rare species with a low probability of detection (i.e., raccoon and spotted skunk) the required survey effort includes maximizing the number of sites and the number of survey days, often to a level that may be logistically unrealistic for many studies.
For common species with low detection (i.e., bobcat and coyote) the most efficient sampling approach was to increase the number of occasions (survey days). However, for common species that are moderately detectable (i.e., cottontail rabbit and mule deer), occupancy could reliably be estimated with comparatively low numbers of cameras over a short sampling period. We provide general guidelines for reliably estimating occupancy across a range of terrestrial species (rare to common: ψ = 0.175–0.970, and low to moderate detectability: p = 0.003–0.200) using motion-activated cameras. Wildlife researchers/managers with limited knowledge of the relative abundance and likelihood of detection of a particular species can apply these guidelines regardless of location. We emphasize the importance of prior biological knowledge, defined objectives and detailed planning (e.g., simulating different study-design scenarios) for designing effective monitoring programs and research studies. PMID:25210658
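    The detection trade-off underlying these guidelines can be illustrated with a toy simulation: with per-day detection probability p and K survey days, the chance an occupied site is ever detected is p* = 1 − (1 − p)^K, so the naive fraction of sites with at least one detection underestimates ψ whenever p* < 1. The sketch below is illustrative only and is not the authors' simulation code; all names are hypothetical.

```python
import random

def naive_occupancy_estimate(psi, p, n_sites, n_days, seed=0):
    """Simulate a camera-trap survey: each site is occupied with
    probability psi, and an occupied site yields a detection on each
    day independently with probability p. Returns the naive estimate
    (fraction of sites with >= 1 detection)."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sites):
        occupied = rng.random() < psi
        if occupied and any(rng.random() < p for _ in range(n_days)):
            detected += 1
    return detected / n_sites
```

    With enough occasions that p* is near 1, the naive estimate converges on ψ; with few occasions or low p it is biased low, which is the gap occupancy models correct.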

  12. Enhanced mixing and spatial instability in concentrated bacterial suspensions

    NASA Astrophysics Data System (ADS)

    Sokolov, Andrey; Goldstein, Raymond E.; Feldchtein, Felix I.; Aranson, Igor S.

    2009-09-01

    High-resolution optical coherence tomography is used to study the onset of a large-scale convective motion in free-standing thin films of adjustable thickness containing suspensions of swimming aerobic bacteria. Clear evidence is found that beyond a threshold film thickness there exists a transition from quasi-two-dimensional collective swimming to three-dimensional turbulent behavior. The latter state, qualitatively different from bioconvection in dilute bacterial suspensions, is characterized by enhanced diffusivities of oxygen and bacteria. These results emphasize the impact of self-organized bacterial locomotion on the onset of three-dimensional dynamics, and suggest key ingredients necessary to extend standard models of bioconvection to incorporate effects of large-scale collective motion.

  13. Fusion of range camera and photogrammetry: a systematic procedure for improving 3-D models metric accuracy.

    PubMed

    Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C

    2003-01-01

    The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, as, for example, in 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of a few key range maps, including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are normally aligned around these locked images with the usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment method, are finally reported.
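    The "proper rigid motion" that locks a range map onto the photogrammetric targets is a least-squares rotation and translation between corresponding 3-D points. A standard SVD-based (Kabsch) sketch is shown below, assuming noise-free one-to-one correspondences; the paper's pipeline applies this kind of transform before the usual iterative refinement.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid motion (R, t) mapping point set src onto dst
    (rows are corresponding 3-D points), via the Kabsch/SVD method."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation (det = +1)
    t = cd - R @ cs
    return R, t
```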

  14. A functional video-based anthropometric measuring system

    NASA Technical Reports Server (NTRS)

    Nixon, J. H.; Cater, J. P.

    1982-01-01

    A high-speed anthropometric three-dimensional measurement system using the Selcom Selspot motion tracking instrument for visual data acquisition is discussed. A three-dimensional scanning system was created which collects video, audio, and performance data on a single standard video cassette recorder. Recording rates of 1 megabit per second for periods of up to two hours are possible with the system design. A high-speed off-the-shelf motion analysis system was used for collecting optical information. The video recording adapter (VRA) is interfaced to the Selspot data acquisition system.

  15. The recovery and utilization of space suit range-of-motion data

    NASA Technical Reports Server (NTRS)

    Reinhardt, AL; Walton, James S.

    1988-01-01

    A technique for recovering data for the range of motion of a subject wearing a space suit is described along with the validation of this technique on an EVA space suit. Digitized data are automatically acquired from video images of the subject; three-dimensional trajectories are recovered from these data, and can be displayed using three-dimensional computer graphics. Target locations are recovered using a unique video processor and close-range photogrammetry. It is concluded that such data can be used in such applications as the animation of anthropometric computer models.

  16. Mathematical model for the simulation of Dynamic Docking Test System (DDST) active table motion

    NASA Technical Reports Server (NTRS)

    Gates, R. M.; Graves, D. L.

    1974-01-01

    The mathematical model developed to describe the three-dimensional motion of the dynamic docking test system active table is described. The active table is modeled as a rigid body supported by six flexible hydraulic actuators which produce the commanded table motions.

  17. Neural Integration of Information Specifying Human Structure from Form, Motion, and Depth

    PubMed Central

    Jackson, Stuart; Blake, Randolph

    2010-01-01

    Recent computational models of biological motion perception operate on ambiguous two-dimensional representations of the body (e.g., snapshots, posture templates) and contain no explicit means for disambiguating the three-dimensional orientation of a perceived human figure. Are there neural mechanisms in the visual system that represent a moving human figure’s orientation in three dimensions? To isolate and characterize the neural mechanisms mediating perception of biological motion, we used an adaptation paradigm together with bistable point-light (PL) animations whose perceived direction of heading fluctuates over time. After exposure to a PL walker with a particular stereoscopically defined heading direction, observers experienced a consistent aftereffect: a bistable PL walker, which could be perceived in the adapted orientation or reversed in depth, was perceived predominantly reversed in depth. A phase-scrambled adaptor produced no aftereffect, yet when adapting and test walkers differed in size or appeared on opposite sides of fixation, aftereffects did occur. Thus, this heading direction aftereffect cannot be explained by local, disparity-specific motion adaptation, and the properties of scale and position invariance imply higher-level origins of neural adaptation. Nor is disparity essential for producing adaptation: when suspended on top of a stereoscopically defined, rotating globe, a context-disambiguated “globetrotter” was sufficient to bias the bistable walker’s direction, as were full-body adaptors. In sum, these results imply that the neural signals supporting biomotion perception integrate information on the form, motion, and three-dimensional depth orientation of the moving human figure. Models of biomotion perception should incorporate mechanisms to disambiguate depth ambiguities in two-dimensional body representations. PMID:20089892

  18. SPH-DEM approach to numerically simulate the deformation of three-dimensional RBCs in non-uniform capillaries.

    PubMed

    Polwaththe-Gallage, Hasitha-Nayanajith; Saha, Suvash C; Sauret, Emilie; Flower, Robert; Senadeera, Wijitha; Gu, YuanTong

    2016-12-28

    Blood continuously flows through the blood vessels in the human body. When blood flows through the smallest blood vessels, red blood cells (RBCs) in the blood exhibit various types of motion and deformed shapes. Computational modelling techniques can be used to successfully predict the behaviour of the RBCs in capillaries. In this study, we report the application of a meshfree particle approach to model and predict the motion and deformation of three-dimensional RBCs in capillaries. An elastic spring network based on the discrete element method (DEM) is employed to model the three-dimensional RBC membrane. The haemoglobin in the RBC and the plasma in the blood are modelled as smoothed particle hydrodynamics (SPH) particles. For validation purposes, the behaviour of a single RBC in a simple shear flow is examined and compared against experimental results. Then simulations are carried out to predict the behaviour of RBCs in a capillary: (i) the motion of five identical RBCs in a uniform capillary, (ii) the motion of five identical RBCs with different bending stiffness (Kb) values in a stenosed capillary, (iii) the motion of three RBCs in a narrow capillary. Finally, five identical RBCs are employed to determine the critical diameter of a stenosed capillary. Validation results showed a good agreement with less than 10% difference. From the above simulations, the following results are obtained: (i) RBCs exhibit different deformation behaviours due to the hydrodynamic interaction between them. (ii) Asymmetrical deformation behaviours of the RBCs are clearly observed when the bending stiffness (Kb) of the RBCs is changed. (iii) The model predicts the ability of the RBCs to squeeze through smaller blood vessels. Finally, from the simulations, the critical diameter of the stenosed section to stop the motion of blood flow is predicted. 
A three-dimensional spring network model based on DEM in combination with the SPH method is successfully used to model the motion and deformation of RBCs in capillaries. Simulation results reveal that the condition of blood flow stopping depends on the pressure gradient of the capillary and the severity of stenosis of the capillary. In addition, this model is capable of predicting the critical diameter which prevents motion of RBCs for different blood pressures.
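    The DEM part of the membrane model can be illustrated by the elastic force a spring network exerts on its nodes: each edge pulls or pushes its endpoints toward its rest length. The sketch below covers linear stretching springs only (the actual model also includes bending stiffness Kb and coupling to the SPH particles), and its names are hypothetical.

```python
import numpy as np

def spring_forces(nodes, edges, rest_lengths, k):
    """Elastic forces on the nodes of a spring network. nodes is (N, 3);
    edges is a list of (i, j) index pairs with matching rest_lengths;
    each edge applies equal and opposite forces on its endpoints."""
    F = np.zeros_like(nodes)
    for (i, j), L0 in zip(edges, rest_lengths):
        d = nodes[j] - nodes[i]
        L = np.linalg.norm(d)
        f = k * (L - L0) * d / L    # pulls if stretched, pushes if compressed
        F[i] += f
        F[j] -= f
    return F
```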

  19. Calculation for simulation of archery goal value using a web camera and ultrasonic sensor

    NASA Astrophysics Data System (ADS)

    Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti

    2017-08-01

    An embedded-systems digital simulator for indoor archery is being developed as a solution to the limited availability of adequate fields or open space, especially in big cities. Development of the device requires a simulation that calculates the score achieved on the target, based on a parabolic-motion model parameterized by the initial velocity and the direction of motion of the arrow as it travels to the target. The simulator is complemented by an initial-velocity measuring device using ultrasonic sensors and by a digital camera that measures the direction to the target. The methodology follows a research-and-development approach to application software based on modeling and simulation. The research objective is to create a simulation application that calculates the score achieved by the arrows, as a preliminary stage in the development of the archery simulator. Implementing the score calculation in an application program yields an archery simulation game that can serve as a reference for developing an indoor digital archery simulator built on embedded systems with ultrasonic sensors and web cameras. The application was developed with the simulation comparing the outer radii of the scoring circles as seen by a camera from a distance of three meters.
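    In the vertical plane, the parabolic-motion model underlying the score calculation reduces to computing where the arrow crosses the target plane given its initial speed and launch angle. A minimal sketch (air resistance neglected, as the parabolic model assumes; the function name and parameters are hypothetical):

```python
import math

def arrow_drop(v0, angle_deg, distance, g=9.81):
    """Height of the arrow relative to its launch height when it reaches
    a vertical target plane `distance` metres away, under the parabolic
    (drag-free) projectile model. v0 in m/s, angle above horizontal."""
    theta = math.radians(angle_deg)
    vx = v0 * math.cos(theta)
    t = distance / vx                       # time to reach the target plane
    return v0 * math.sin(theta) * t - 0.5 * g * t * t
```

    Combining this vertical offset with the horizontal aim direction gives the impact point, whose distance from the target centre determines the scoring ring.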

  20. OBSERVER RATING VERSUS THREE-DIMENSIONAL MOTION ANALYSIS OF LOWER EXTREMITY KINEMATICS DURING FUNCTIONAL SCREENING TESTS: A SYSTEMATIC REVIEW.

    PubMed

    Maclachlan, Liam; White, Steven G; Reid, Duncan

    2015-08-01

    Functional assessments are conducted in both clinical and athletic settings in an attempt to identify those individuals who exhibit movement patterns that may increase their risk of non-contact injury. In place of highly sophisticated three-dimensional motion analysis, functional testing can be completed through observation. To evaluate the validity of movement observation assessments by summarizing the results of articles comparing human observation in real-time or video play-back and three-dimensional motion analysis of lower extremity kinematics during functional screening tests. Systematic review. A computerized systematic search was conducted through Medline, SPORTSdiscus, Scopus, CINAHL, and Cochrane health databases between February and April of 2014. Validity studies comparing human observation (real-time or video play-back) to three-dimensional motion analysis of functional tasks were selected. Only studies comprising uninjured, healthy subjects conducting lower extremity functional assessments were appropriate for review. Eligible observers were certified health practitioners or qualified members of sports and athletic training teams that conduct athlete screening. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) was used to appraise the literature. Results are presented in terms of functional tasks. Six studies met the inclusion criteria. Across these studies, two-legged squats, single-leg squats, drop-jumps, and running and cutting manoeuvres were the functional tasks analysed. When compared to three-dimensional motion analysis, observer ratings of lower extremity kinematics, such as knee position in relation to the foot, demonstrated mixed results. Single-leg squats achieved target sensitivity values (≥ 80%) but not specificity values (≥ 50%). Drop-jump task agreement ranged from poor (< 50%) to excellent (> 80%). Two-legged squats achieved 88% sensitivity and 85% specificity. 
Mean underestimations as large as 19.8° (peak knee flexion) were found in the results of those assessing running and side-step cutting manoeuvres. Variables such as the speed of movement, the methods of rating, the profiles of participants and the experience levels of observers may have influenced the outcomes of functional testing. The small number of studies used limits generalizability. Furthermore, this review used two-dimensional video playback for the majority of observations. If the movements had been rated in real time, or from three-dimensional video, the results may have been different. Slower, speed-controlled movements using dichotomous ratings reach target sensitivity and demonstrate higher overall levels of agreement. As a result, their utilization in functional screening is advocated. Level of evidence: 1A.

  1. Correlation between hip function and knee kinematics evaluated by three-dimensional motion analysis during lateral and medial side-hopping.

    PubMed

    Itoh, Hiromitsu; Takiguchi, Kohei; Shibata, Yohei; Okubo, Satoshi; Yoshiya, Shinichi; Kuroda, Ryosuke

    2016-09-01

    [Purpose] Kinematic and kinetic characteristics of the limb during side-hopping and hip/knee interaction during this motion have not been clarified. The purposes of this study were to examine the biomechanical parameters of the knee during side hop and analyze its relationship with clinical measurements of hip function. [Subjects and Methods] Eleven male college rugby players were included. A three-dimensional motion analysis system was used to assess motion characteristics of the knee during side hop. In addition, hip range of motion and muscle strength were evaluated. Subsequently, the relationship between knee motion and the clinical parameters of the hip was analyzed. [Results] In the lateral touchdown phase, the knee was positioned in an abducted and externally rotated position, and increasing abduction moment was applied to the knee. An analysis of the interaction between knee motion and hip function showed that range of motion for hip internal rotation was significantly correlated with external rotation angle and external rotation/abduction moments of the knee during the lateral touchdown phase. [Conclusion] Range of motion for hip internal rotation should be taken into consideration for identifying the biomechanical characteristics in the side hop test results.

  2. Correlation between hip function and knee kinematics evaluated by three-dimensional motion analysis during lateral and medial side-hopping

    PubMed Central

    Itoh, Hiromitsu; Takiguchi, Kohei; Shibata, Yohei; Okubo, Satoshi; Yoshiya, Shinichi; Kuroda, Ryosuke

    2016-01-01

    [Purpose] Kinematic and kinetic characteristics of the limb during side-hopping and hip/knee interaction during this motion have not been clarified. The purposes of this study were to examine the biomechanical parameters of the knee during side hop and analyze its relationship with clinical measurements of hip function. [Subjects and Methods] Eleven male college rugby players were included. A three-dimensional motion analysis system was used to assess motion characteristics of the knee during side hop. In addition, hip range of motion and muscle strength were evaluated. Subsequently, the relationship between knee motion and the clinical parameters of the hip was analyzed. [Results] In the lateral touchdown phase, the knee was positioned in an abducted and externally rotated position, and increasing abduction moment was applied to the knee. An analysis of the interaction between knee motion and hip function showed that range of motion for hip internal rotation was significantly correlated with external rotation angle and external rotation/abduction moments of the knee during the lateral touchdown phase. [Conclusion] Range of motion for hip internal rotation should be taken into consideration for identifying the biomechanical characteristics in the side hop test results. PMID:27799670

  3. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

  4. Remote camera observations of lava dome growth at Mount St. Helens, Washington, October 2004 to February 2006: Chapter 11 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.

  5. Automatic three-dimensional quantitative analysis for evaluation of facial movement.

    PubMed

    Hontanilla, B; Aubá, C

    2008-01-01

    The aim of this study is to present a new 3D capture system of facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflecting dots on the subject's face and video recording, with three infrared-light cameras, the subject performing several face movements such as smiling, mouth puckering, eye closure and forehead elevation. Images from the cameras are automatically processed with a software program that generates customised information such as 3D data on velocities and areas. The study has been performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities have been evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that this system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for evaluation of facial movements is demonstrated, as is its high intrarater and interrater reliability. It has advantages over other systems developed for evaluation of facial movements, such as short calibration time, short measuring time and ease of use, and it provides not only distances but also velocities and areas. The FACIAL CLIMA system can therefore be considered an adequate tool to assess the outcome of facial paralysis reanimation surgery, allowing patients with facial paralysis to be compared between surgical centres so that the effectiveness of facial reanimation operations can be evaluated.

  6. Instantaneous three-dimensional visualization of concentration distributions in turbulent flows with crossed-plane laser-induced fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Hoffmann, A.; Zimmermann, F.; Scharr, H.; Krömker, S.; Schulz, C.

    2005-01-01

    A laser-based technique for measuring instantaneous three-dimensional species concentration distributions in turbulent flows is presented. The laser beam from a single laser is formed into two crossed light sheets that illuminate the area of interest. The laser-induced fluorescence (LIF) signal emitted from excited species within both planes is detected with a single camera via a mirror arrangement. Image processing enables the reconstruction of the three-dimensional data set in close proximity to the cutting line of the two light sheets. Three-dimensional intensity gradients are computed and compared to the two-dimensional projections obtained from the two directly observed planes. Volume visualization by digital image processing gives unique insight into the three-dimensional structures within the turbulent processes. We apply this technique to measurements of toluene-LIF in a turbulent, non-reactive mixing process of toluene and air and to hydroxyl (OH) LIF in a turbulent methane-air flame upon excitation at 248 nm with a tunable KrF excimer laser.

  7. Measuring the circular motion of small objects using laser stroboscopic images.

    PubMed

    Wang, Hairong; Fu, Y; Du, R

    2008-01-01

    Measuring the circular motion of a small object, including its displacement, speed, and acceleration, is a challenging task. This paper presents a new method for measuring repetitive and/or nonrepetitive, constant-speed and/or variable-speed circular motion using laser stroboscopic images. Under stroboscopic illumination, each image taken by an ordinary camera records multiple outlines of an object in motion; hence, processing the stroboscopic image makes it possible to extract the motion information. We built an experiment apparatus consisting of a laser as the light source, a stereomicroscope to magnify the image, and a normal complementary metal oxide semiconductor camera to record the image. As the object is in motion, the stroboscopic illumination generates a speckle pattern on the object that can be recorded by the camera and analyzed by a computer. Experimental results indicate that the stroboscopic imaging is stable under various conditions. Moreover, the characteristics of the motion, including the displacement, the velocity, and the acceleration, can be calculated based on the width of the speckle marks, the illumination intensity, the duty cycle, and the sampling frequency. Compared with the popular high-speed camera method, the presented method may achieve the same measuring accuracy, but with much reduced cost and complexity.
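    As a hedged illustration of the width/duty-cycle/frequency relationship described above (not the authors' code; the calibration from pixels to metres is assumed to have been done separately):

```python
def strobe_velocity(streak_width_m, strobe_freq_hz, duty_cycle):
    """Estimate speed from one stroboscopic streak.

    While the laser is on (duty_cycle / strobe_freq_hz seconds per pulse),
    the moving object sweeps out a streak; speed = streak width / on-time.
    streak_width_m is assumed already calibrated from pixels to metres.
    """
    on_time_s = duty_cycle / strobe_freq_hz
    return streak_width_m / on_time_s

# a 0.2 mm streak under 1 kHz strobing at 10% duty cycle -> 2.0 m/s
v = strobe_velocity(0.2e-3, 1000.0, 0.1)
```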

  8. The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D

    1946-01-01

    The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed; details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- photoflash bulb, firecrackers, camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.

  9. Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot

    PubMed Central

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2014-01-01

    Abstract. Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. To image, we developed a scanning fiber endoscope (SFE) which acquires concurrent reflectance and fluorescence wide-field images at a high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so the algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using known trajectories of a robot arm, and the error from the reconstructed phantom is within 0.67 mm on average compared to the model design. PMID:26158071

  10. Differences in wrist mechanics during the golf swing based on golf handicap.

    PubMed

    Fedorcik, Gregory G; Queen, Robin M; Abbey, Alicia N; Moorman, Claude T; Ruch, David S

    2012-05-01

    Variation in swing mechanics between golfers of different skill levels has been previously reported. The purpose was to investigate whether three-dimensional wrist kinematics and the angle of golf club descent differ between low and high handicap golfers. A descriptive laboratory study was performed with twenty-eight male golfers divided into two groups, low handicap golfers (handicap = 0-5, n = 15) and high handicap golfers (handicap ≥ 10, n = 13). Bilateral peak three-dimensional wrist mechanics, bilateral wrist mechanics at ball contact (BC), peak angle of descent from the end of the backswing to ball contact, and the angle of descent when the forearm was parallel to the ground (DEC-PAR) were determined using an 8-camera motion capture system. Independent t-tests were completed for each study variable (α = 0.05). Pearson correlation coefficients were determined between golf handicap and each of the study variables. The peak lead arm radial deviation (5.7 degrees, p = 0.008), lead arm radial deviation at ball contact (7.1 degrees, p = 0.001), and DEC-PAR (15.8 degrees, p = 0.002) were significantly greater in the high handicap group. In comparison with golfers with a low handicap, golfers with a high handicap have increased radial deviation during the golf swing and at ball contact. Copyright © 2011 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
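    The statistical tools named here (independent t-tests, Pearson correlations) are standard; the following numpy-only sketch with invented, hypothetical values shows the computations, not the study's data:

```python
import numpy as np

def independent_t(a, b):
    """Student's t statistic for two independent samples (pooled variance)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return float((a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb)))

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# hypothetical radial-deviation values (degrees), NOT the study's data
low_handicap = [10.1, 11.3, 9.8, 10.6]
high_handicap = [15.9, 17.2, 16.4, 18.0]
t = independent_t(low_handicap, high_handicap)   # negative: low < high
```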

  11. Ankle joint function during walking in tophaceous gout: A biomechanical gait analysis study.

    PubMed

    Carroll, Matthew; Boocock, Mark; Dalbeth, Nicola; Stewart, Sarah; Frampton, Christopher; Rome, Keith

    2018-04-17

    The foot and ankle are frequently affected in tophaceous gout, yet kinematic and kinetic changes in this region during gait are unknown. The aim of the study was to evaluate ankle biomechanical characteristics in people with tophaceous gout using three-dimensional gait analysis. Twenty-four participants with tophaceous gout were compared with 24 age- and sex-matched control participants. A 9-camera motion analysis system and two floor-mounted force plates were used to calculate kinematic and kinetic parameters. Peak ankle joint angular velocity was significantly decreased in participants with gout (P < 0.01). No differences were found for ankle ROM in either the sagittal (P = 0.43) or frontal planes (P = 0.08). No differences were observed between groups for peak ankle joint power (P = 0.41), peak ankle joint force (P = 0.25), peak ankle joint moment (P = 0.16), timing for peak ankle joint force (P = 0.81), or timing for peak ankle joint moment (P = 0.16). Three-dimensional gait analysis demonstrated that ankle joint function does not change in people with gout. People with gout demonstrated a reduced peak ankle joint angular velocity which may reflect gait-limiting factors and adaptations from the high levels of foot pain, impairment and disability experienced by this population. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Optical Mapping of Membrane Potential and Epicardial Deformation in Beating Hearts.

    PubMed

    Zhang, Hanyu; Iijima, Kenichi; Huang, Jian; Walcott, Gregory P; Rogers, Jack M

    2016-07-26

    Cardiac optical mapping uses potentiometric fluorescent dyes to image membrane potential (Vm). An important limitation of conventional optical mapping is that contraction is usually arrested pharmacologically to prevent motion artifacts from obscuring Vm signals. However, these agents may alter electrophysiology, and by abolishing contraction, also prevent optical mapping from being used to study coupling between electrical and mechanical function. Here, we present a method to simultaneously map Vm and epicardial contraction in the beating heart. Isolated perfused swine hearts were stained with di-4-ANEPPS and fiducial markers were glued to the epicardium for motion tracking. The heart was imaged at 750 Hz with a video camera. Fluorescence was excited with cyan or blue LEDs on alternating camera frames, thus providing a 375-Hz effective sampling rate. Marker tracking enabled the pixel(s) imaging any epicardial site within the marked region to be identified in each camera frame. Cyan- and blue-elicited fluorescence have different sensitivities to Vm, but other signal features, primarily motion artifacts, are common. Thus, taking the ratio of fluorescence emitted by a motion-tracked epicardial site in adjacent frames removes artifacts, leaving Vm (excitation ratiometry). Reconstructed Vm signals were validated by comparison to monophasic action potentials and to conventional optical mapping signals. Binocular imaging with additional video cameras enabled marker motion to be tracked in three dimensions. From these data, epicardial deformation during the cardiac cycle was quantified by computing finite strain fields. We show that the method can simultaneously map Vm and strain in a left-sided working heart preparation and can image changes in both electrical and mechanical function 5 min after the induction of regional ischemia. By allowing high-resolution optical mapping in the absence of electromechanical uncoupling agents, the method relieves a long-standing limitation of optical mapping and has potential to enhance new studies in coupled cardiac electromechanics. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
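    A toy simulation can illustrate the excitation-ratiometry principle: a multiplicative artifact common to both emission channels cancels in their ratio. All values below are invented; in the real method the two excitations fall on adjacent frames of a motion-tracked site rather than being co-sampled:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100)
vm = np.sin(2 * np.pi * t / 50)            # stand-in for membrane potential
artifact = 1.0 + 0.3 * rng.random(100)     # shared motion/illumination artifact

f_cyan = artifact * (1.0 + 0.10 * vm)      # strongly Vm-sensitive emission
f_blue = artifact * (1.0 + 0.02 * vm)      # weakly Vm-sensitive emission

ratio = f_cyan / f_blue                    # common artifact cancels, leaving Vm
```

In the single-channel signal the artifact buries the Vm modulation; the ratio recovers it almost perfectly.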

  13. Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography

    PubMed Central

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.

    2015-01-01

    Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834

  14. Stability analysis for a multi-camera photogrammetric system.

    PubMed

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-08-18

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways of coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  15. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways of coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012

  16. Repurposing video recordings for structure motion estimations

    NASA Astrophysics Data System (ADS)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
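    The reprojection into perspective-corrected synthetic images can be done with a planar homography estimated from as-built correspondences. A minimal direct-linear-transform (DLT) sketch, illustrative rather than the authors' implementation:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (>= 4 point pairs, DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)       # right singular vector of smallest value
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply H to Nx2 pixel coordinates (homogeneous divide included)."""
    pts_h = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# e.g. four as-built corners of a facade: image coords -> rectified coords
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(3, 5), (5, 5), (5, 7), (3, 7)]
H = homography_dlt(src, dst)
```

In practice every pixel of each video frame would be resampled through H (or its inverse) to build the synthetic, distortion-free image before running optical flow.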

  17. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.

  18. Systems and methods for estimating the structure and motion of an object

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dani, Ashwin P; Dixon, Warren

    2015-11-03

    In one embodiment, the structure and motion of a stationary object are determined using two images and a linear velocity and linear acceleration of a camera. In another embodiment, the structure and motion of a stationary or moving object are determined using an image and linear and angular velocities of a camera.
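    The patent abstract gives no algorithmic detail; as generic background, structure recovery from two views reduces to triangulation once the camera poses are known. An illustrative linear triangulation (not the patented method):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image observations.
    Each observation contributes two rows of a homogeneous system A X = 0.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                     # null vector = homogeneous 3-D point
    return X[:3] / X[3]

# two normalised cameras one baseline unit apart along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P1, P2, (0.125, 0.05), (-0.125, 0.05))
```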

  19. Multispectral image alignment using a three channel endoscope in vivo during minimally invasive surgery

    PubMed Central

    Clancy, Neil T.; Stoyanov, Danail; James, David R. C.; Di Marco, Aimee; Sauvage, Vincent; Clark, James; Yang, Guang-Zhong; Elson, Daniel S.

    2012-01-01

    Sequential multispectral imaging is an acquisition technique that involves collecting images of a target at different wavelengths, to compile a spectrum for each pixel. In surgical applications it suffers from low illumination levels and motion artefacts. A three-channel rigid endoscope system has been developed that allows simultaneous recording of stereoscopic and multispectral images. Salient features on the tissue surface may be tracked during the acquisition in the stereo cameras and, using multiple camera triangulation techniques, this information is used to align the multispectral images automatically even though the tissue or camera is moving. This paper describes a detailed validation of the set-up in a controlled experiment before presenting the first in vivo use of the device in a porcine minimally invasive surgical procedure. Multispectral images of the large bowel were acquired and used to extract the relative concentration of haemoglobin in the tissue despite motion due to breathing during the acquisition. Using the stereoscopic information it was also possible to overlay the multispectral information on the reconstructed 3D surface. This experiment demonstrates the ability of this system for measuring blood perfusion changes in the tissue during surgery and its potential use as a platform for other sequential imaging modalities. PMID:23082296

  20. Effect of robotic-assisted three-dimensional repetitive motion to improve hand motor function and control in children with handwriting deficits: a nonrandomized phase 2 device trial.

    PubMed

    Palsbo, Susan E; Hood-Szivek, Pamela

    2012-01-01

    We explored the efficacy of robotic technology in improving handwriting in children with impaired motor skills. Eighteen participants had impairments arising from cerebral palsy (CP), autism spectrum disorder (ASD), attention deficit disorder (ADD), attention deficit hyperactivity disorder (ADHD), or other disorders. The intervention was robotic-guided three-dimensional repetitive motion in 15-20 daily sessions of 25-30 min each over 4-8 wk. Fine motor control improved for the children with learning disabilities and those ages 9 or older but not for those with CP or under age 9. All children with ASD or ADHD referred for slow writing speed were able to increase speed while maintaining legibility. Three-dimensional, robot-assisted, repetitive motion training improved handwriting fluidity in children with mild to moderate fine motor deficits associated with ASD or ADHD within 10 hr of training. This dosage may not be sufficient for children with CP. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  1. Deep circulations under simple classes of stratification

    NASA Technical Reports Server (NTRS)

    Salby, Murry L.

    1989-01-01

    Deep circulations where the motion field is vertically aligned over one or more scale heights are studied under barotropic and equivalent barotropic stratifications. The study uses two-dimensional equations reduced from the three-dimensional primitive equations in spherical geometry. A mapping is established between the full primitive equations and general shallow water behavior and the correspondence between variables describing deep atmospheric motion and those of shallow water behavior is established.

  2. Three-dimensional compact explicit-finite difference time domain scheme with density variation

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takao; Maruta, Naoki

    2018-07-01

    In this paper, the density variation is implemented in the three-dimensional compact-explicit finite-difference time-domain (CE-FDTD) method. The formulation is first developed based on the continuity equation and the equation of motion, which include the density. Some numerical demonstrations are performed for the three-dimensional sound wave propagation in a two density layered medium. The numerical results are compared with the theoretical results to verify the proposed formulation.
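    As an illustration of the two governing equations (continuity and motion) with a density jump, here is a standard 1-D staggered-grid leapfrog FDTD update for a two-density layered medium; this is the textbook scheme, not the paper's compact-explicit formulation:

```python
import numpy as np

nx, nt = 400, 300
dx, c = 1.0, 1.0
dt = 0.5 * dx / c                         # CFL-stable time step
rho = np.ones(nx)
rho[nx // 2:] = 4.0                       # two-density layered medium
kappa = rho * c ** 2                      # bulk modulus, rho * c^2

p = np.zeros(nx)                          # pressure at cell centres
u = np.zeros(nx + 1)                      # particle velocity at cell faces
p[50] = 1.0                               # initial pressure impulse

rho_face = 0.5 * (rho[1:] + rho[:-1])     # density averaged onto faces
for _ in range(nt):
    # equation of motion: rho * du/dt = -dp/dx
    u[1:-1] -= dt / (rho_face * dx) * (p[1:] - p[:-1])
    # continuity equation: dp/dt = -kappa * du/dx
    p -= kappa * dt / dx * (u[1:] - u[:-1])
```

The impulse splits into two travelling waves and partially reflects at the impedance jump; the field remains bounded under the CFL condition.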

  3. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze directly MPEG-1 motion vectors, without need for video decompression. Experimental results are reported for a database of news video clips.
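    The pan/tilt/zoom least-squares fit can be sketched with a simplified linear motion model; the parameterisation below is a common approximation and not necessarily the paper's exact model:

```python
import numpy as np

def fit_pan_tilt_zoom(xy, uv, center):
    """Least-squares fit of pan/tilt/zoom to block motion vectors.

    Simplified model: u = pan + zoom * (x - cx), v = tilt + zoom * (y - cy).
    xy: Nx2 block positions, uv: Nx2 motion vectors, center: (cx, cy).
    """
    x = xy[:, 0] - center[0]
    y = xy[:, 1] - center[1]
    # stack the u- and v-equations into one linear system in (pan, tilt, zoom)
    A = np.zeros((2 * len(xy), 3))
    b = np.empty(2 * len(xy))
    A[0::2, 0] = 1.0; A[0::2, 2] = x; b[0::2] = uv[:, 0]
    A[1::2, 1] = 1.0; A[1::2, 2] = y; b[1::2] = uv[:, 1]
    pan, tilt, zoom = np.linalg.lstsq(A, b, rcond=None)[0]
    return pan, tilt, zoom

# synthetic 4x4 grid of macroblock centres with known pan/tilt/zoom
xs, ys = np.meshgrid(np.arange(0.0, 64, 16), np.arange(0.0, 64, 16))
xy = np.column_stack([xs.ravel(), ys.ravel()])
uv = np.column_stack([2.0 + 0.1 * (xy[:, 0] - 32), -1.0 + 0.1 * (xy[:, 1] - 32)])
pan, tilt, zoom = fit_pan_tilt_zoom(xy, uv, (32, 32))
```

With real MPEG-1 vectors the residual of this fit would also flag shots whose motion is not explained by camera movement alone.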

  4. Keyboard before Head Tracking Depresses User Success in Remote Camera Control

    NASA Astrophysics Data System (ADS)

    Zhu, Dingyun; Gedeon, Tom; Taylor, Ken

    In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving the camera control either to automatic control or requiring the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue: a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and the use of a Pan-Tilt-Zoom (PTZ) camera. The camera control was via either a keyboard or head tracking, using two different sets of head gestures called “head motion” and “head flicking” for turning camera motion on/off. Our results show that the head motion control provided performance comparable to the keyboard, while head flicking was significantly worse. In addition, the sequence of use of the three control methods is highly significant. It appears that use of the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected confirms that the worst-performing method was disliked by participants. Surprisingly, use of that worst method as the first control method significantly enhanced performance using the other two control methods.

  5. Projection of controlled repeatable real-time moving targets to test and evaluate motion imagery quality

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen D.; Mendez, Michael; Trent, Randall

    2015-05-01

    The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation will discuss several implementations of target projectors in which real or apparently moving targets create motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide a corresponding test to MTF (resolution), SNR and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development as well as in comparing various systems by presenting exact scenes to the cameras in a repeatable way.

  6. Hurricane Debby

    Atmospheric Science Data Center

    2013-04-19

    ... cloud-tracked winds at the different cloud levels. The wind vectors, shown in the right panel, reveal cyclonic motion associated with ... of cloud height and motions globally will help us monitor the effects of climate change on the three-dimensional distribution of ...

  7. Equations of motion for train derailment dynamics

    DOT National Transportation Integrated Search

    2007-09-11

    This paper describes a planar or two-dimensional model to examine the gross motions of rail cars in a generalized train derailment. Three coupled, second-order differential equations are derived from Newton's Laws to calculate rigid-body car ...
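The three coupled planar equations this record refers to reduce to m·ẍ = Fx, m·ÿ = Fy, I·θ̈ = Mz for each car. The sketch below is a generic planar rigid-body integrator with hypothetical mass, inertia, and force inputs, not the paper's derailment model:

```python
import numpy as np

def simulate_planar_body(m, I, force_fn, state0, dt=1e-3, steps=1000):
    """Integrate the three coupled second-order planar equations of motion
    m*ax = Fx, m*ay = Fy, I*alpha = Mz using semi-implicit Euler.
    state = [x, y, theta, vx, vy, omega]."""
    state = np.asarray(state0, dtype=float)
    for k in range(steps):
        Fx, Fy, Mz = force_fn(state, k * dt)
        # update velocities from the accelerations
        state[3] += (Fx / m) * dt
        state[4] += (Fy / m) * dt
        state[5] += (Mz / I) * dt
        # update positions from the new velocities
        state[0] += state[3] * dt
        state[1] += state[4] * dt
        state[2] += state[5] * dt
    return state

# Hypothetical example: a 90-tonne car coasting with no net force or moment
final = simulate_planar_body(
    m=9e4, I=2e6,
    force_fn=lambda s, t: (0.0, 0.0, 0.0),
    state0=[0.0, 0.0, 0.0, 5.0, 0.0, 0.0],  # moving at 5 m/s in x
    dt=1e-2, steps=100)
```

In a derailment study, `force_fn` would encode coupler forces and wheel-rail friction; here it is a placeholder showing where those terms enter.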

  8. The V-Scope: An "Oscilloscope" for Motion.

    ERIC Educational Resources Information Center

    Ronen, Miky; Lipman, Aharon

    1991-01-01

    Proposes the V-Scope as a teaching aid to measure, analyze, and display three-dimensional multibody motion. Describes experiment setup considerations, how measurements are calculated, graphic representation capabilities, and modes of operation of this microcomputer-based system. (MDH)

  9. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only offers a huge time advantage; the method also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each 15-frame interval. The sharpness was estimated using a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions and finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre and post models, a visualization of the changes (erosion and accumulation areas) and a quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models and so must be converted to real values via references. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low contrast of the surface, too much motion blur), the sharpness algorithm leads to many more matching features. Hence the point densities of the 3D models are increased, which in turn improves the calculations.
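The frame-selection step described above can be sketched as follows; the record does not specify the exact derivative-based metric, so a gradient-energy score is assumed here:

```python
import numpy as np

def sharpness(img):
    """Derivative-based sharpness metric: mean gradient energy of the
    frame (an assumed variant; the study's exact metric is unspecified)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def select_sharpest(frames, interval=15):
    """Keep the sharpest frame from each run of `interval` frames,
    mirroring the paper's one-image-per-15-frame-window selection."""
    keep = []
    for i in range(0, len(frames), interval):
        keep.append(max(frames[i:i + interval], key=sharpness))
    return keep

# Synthetic check: a blurred copy scores lower than the sharp original
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3.0
```

Averaging a frame with shifted copies acts as a low-pass filter, so the blurred frame has lower gradient energy and the selector keeps the sharp one.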

  10. Ripples in The Soil

    NASA Image and Video Library

    2004-02-10

    This is a three-dimensional stereo anaglyph of an image taken by the front navigation camera onboard NASA's Mars Exploration Rover Spirit, showing an interesting patch of rippled soil. 3D glasses are necessary to view this image.

  11. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

    Action sport cameras (ASC) have gained broad popularity for recreational purposes due to ongoing cost decreases, increases in image resolution and frame rate, and plug-and-play usability. Consequently, they have recently been considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes. Characterizing and optimizing such a configuration makes the assessment of the instrumental errors of both volumes mandatory. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments. 
In all assessed resolutions, and in both environments, the reconstruction error (deviation of the measured distance between the two testing markers from the true distance) was less than 3 mm, and the error relative to the working-volume diagonal was in the range of 1:2000 (3×1.3×1.5 m³) to 1:7000 (4.5×2.2×1.5 m³), in agreement with the literature. Statistically, the 3D accuracy obtained in the in-air environment was poorer (p < 10⁻⁵) than that obtained underwater, across all the tested camera configurations. Regarding the repeatability of the camera parameters, we found very low variability in both environments (1.7% and 2.9%, in-air and underwater, respectively). These results encourage the use of ASC technology for quantitative reconstruction in both in-air and underwater environments. Copyright © 2016 Elsevier Ltd. All rights reserved.
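The accuracy figures above (sub-3 mm wand error, 1:2000 to 1:7000 of the volume diagonal) follow from a short calculation; the marker coordinates and true distance below are hypothetical:

```python
import numpy as np

def wand_error(p1, p2, true_dist_mm):
    """Absolute error of the reconstructed distance between two
    testing markers (the study's accuracy criterion)."""
    return abs(np.linalg.norm(np.asarray(p1) - np.asarray(p2)) - true_dist_mm)

def relative_accuracy(error_mm, volume_dims_m):
    """Express the error against the working-volume diagonal as 1:N,
    as in the reported 1:2000 to 1:7000 range."""
    diag_mm = 1000.0 * np.linalg.norm(volume_dims_m)
    return round(diag_mm / error_mm)

# Hypothetical numbers: ~1.9 mm error in the 4.5 x 2.2 x 1.5 m volume
err = wand_error([100.0, 0.0, 0.0], [351.9, 0.0, 0.0], 250.0)
ratio = relative_accuracy(err, [4.5, 2.2, 1.5])
```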

  12. Real time three dimensional sensing system

    DOEpatents

    Gordon, S.J.

    1996-12-31

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane. 7 figs.

  13. Real time three dimensional sensing system

    DOEpatents

    Gordon, Steven J.

    1996-01-01

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane.
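A minimal sketch of the geometry involved: linear (DLT) triangulation of a candidate correspondence from two projection matrices, followed by the patent's disambiguation step of keeping the candidate closest to a known light plane. The camera matrices, point, and plane below are synthetic assumptions:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence from two
    3x4 projection matrices; returns the 3D point."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # null vector of A
    return X[:3] / X[3]

def pick_by_light_plane(candidates, plane):
    """Among candidate 3D points (e.g. from intersections of the
    epi-polar line with stripes in the second image), keep the one
    closest to a known light plane n.X = d, as the patent describes."""
    n, d = np.asarray(plane[:3], float), plane[3]
    return min(candidates, key=lambda X: abs(n @ X - d) / np.linalg.norm(n))

# Synthetic rig: two cameras displaced along x, point on the plane z = 5
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 5.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```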

  14. 7. MOTION PICTURE CAMERA STAND AT BUILDING 8768. Edwards ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. MOTION PICTURE CAMERA STAND AT BUILDING 8768. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Observation Bunkers for Test Stand 1-A, Test Area 1-120, north end of Jupiter Boulevard, Boron, Kern County, CA

  15. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  16. Optical fringe-reflection deflectometry with bundle adjustment

    NASA Astrophysics Data System (ADS)

    Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng

    2018-06-01

    Liquid crystal display (LCD) screens are located outside of a camera's field of view in fringe-reflection deflectometry. Therefore, fringes that are displayed on LCD screens are obtained through specular reflection by a fixed camera. Thus, the pose calibration between the camera and LCD screen is one of the main challenges in fringe-reflection deflectometry. A markerless planar mirror is used to reflect the LCD screen more than three times, and the fringes are mapped into the fixed camera. The geometrical calibration can be accomplished by estimating the pose between the camera and the virtual image of the fringes. Considering the relation between these poses, the incidence and reflection rays can be unified in the camera frame, and a forward triangulation intersection can be operated in the camera frame to measure three-dimensional (3D) coordinates of the specular surface. In the final optimization, constraint-bundle adjustment is operated to simultaneously refine the camera intrinsic parameters, including distortion coefficients, the estimated geometrical pose between the LCD screen and camera, and the 3D coordinates of the specular surface, with the help of the absolute phase collinear constraint. Simulation and experimental results demonstrate that the pose calibration with planar mirror reflection is simple and feasible, and the constraint-bundle adjustment can enhance the 3D coordinate measurement accuracy in fringe-reflection deflectometry.

  17. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  18. Assessment of Photogrammetry Structure-from-Motion Compared to Terrestrial LiDAR Scanning for Generating Digital Elevation Models. Application to the Austre Lovéenbreen Polar Glacier Basin, Spitsbergen 79°N

    NASA Astrophysics Data System (ADS)

    Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.

    2014-12-01

    Digital Elevation Models (DEMs) are a key tool for analyzing spatially dependent processes including snow accumulation on slopes or glacier mass balance. Acquiring DEMs at short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure from Motion (SfM) algorithms. Sets of pictures of the same structure taken from various points of view are acquired. Selected features are identified on the images and allow for the reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding constraints from Ground Control Points (GCPs) or by including the (GPS) position of the cameras in the processing chain. We selected the open-source digital signal processing library provided by the French Geographic Institute (IGN), called MicMac, for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. While the harsh climatic environment of the Arctic region considered (Ny Alesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows are a limitation because of the lack of color dynamics in the digital cameras we used. 
A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty on the point cloud's absolute position in its coordinate system. 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.
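The conversion from the arbitrary SfM coordinate system to an absolute one via GCPs is typically a seven-parameter similarity transform. Below is a least-squares (Umeyama-style) sketch with synthetic GCPs, assuming this standard method rather than MicMac's specific implementation:

```python
import numpy as np

def similarity_from_gcps(src, dst):
    """Estimate scale s, rotation R, translation t mapping the
    arbitrary SfM frame onto absolute GCP coordinates (Umeyama's
    least-squares method). src, dst: matched (N, 3) point arrays."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d            # centered point sets
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                       # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A**2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Check on synthetic GCPs: recover a known scale/rotation/translation
rng = np.random.default_rng(1)
pts = rng.random((6, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
gcps = 2.5 * pts @ R_true.T + np.array([10.0, -3.0, 0.5])
s, R, t = similarity_from_gcps(pts, gcps)
```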

  19. Trochanteric fracture-implant motion during healing - A radiostereometry (RSA) study.

    PubMed

    Bojan, Alicja J; Jönsson, Anders; Granhed, Hans; Ekholm, Carl; Kärrholm, Johan

    2018-03-01

    Cut-out complication remains a major unsolved problem in the treatment of trochanteric hip fractures. A better understanding of three-dimensional fracture-implant motions is needed to enable further development of clinical strategies and countermeasures. The aim of this clinical study was to characterise and quantify three-dimensional motions between the implant and the bone and between the lag screw and nail of the Gamma nail. Radiostereometric analysis (RSA) was applied in 20 patients with trochanteric hip fractures treated with an intramedullary nail. The following three-dimensional motions were measured postoperatively and at 1 week, 3, 6 and 12 months: translations of the tip of the lag screw in the femoral head, motions of the lag screw in the nail, femoral head motions relative to the nail, and nail movements in the femoral shaft. Cranial migration of the tip of the lag screw dominated over the other two translation components in the femoral head. In all fractures the lag screw slid laterally in the nail, and the femoral head moved both laterally and inferiorly towards the nail. All femoral heads translated posteriorly relative to the nail, and rotations occurred in both directions with median values close to zero. The nail tended to retrovert in the femoral shaft. Adverse fracture-implant motions were detected with high resolution in stable trochanteric hip fractures treated with intramedullary nails. The RSA method can therefore be used to evaluate new implant designs and clinical strategies that aim to reduce cut-out complications. Future RSA studies should aim at more unstable fractures, as these are more likely to fail with cut-out. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Does the Powers™ strap influence the lower limb biomechanics during running?

    PubMed

    Greuel, Henrike; Herrington, Lee; Liu, Anmin; Jones, Richard K

    2017-09-01

    Previous research has reported a prevalence of running-related injuries in 25.9% to 72% of all runners. A greater hip internal rotation and adduction during the stance phase of running has been associated with many running-related injuries, such as patellofemoral pain. Researchers in the USA designed a treatment device, the Powers™ strap, to facilitate an external rotation of the femur and thereby control abnormal hip and knee motion during leisure and sport activities. However, to date no literature exists to demonstrate whether the Powers™ strap is able to reduce hip internal rotation during running. 22 healthy participants, 11 males and 11 females (age: 27.45±4.43 years, height: 1.73±0.06 m, mass: 66.77±9.24 kg), were asked to run on a 22 m track under two conditions: without and with the Powers™ strap. Three-dimensional motion analysis was conducted using ten Qualisys OQUS 7 cameras (Qualisys AB, Sweden), and force data was captured with three AMTI force plates (BP600900, Advanced Mechanical Technology, Inc., USA). Paired-sample t-tests were performed at the 95% confidence interval on all lower limb kinematic and kinetic data. The Powers™ strap significantly reduced hip and knee internal rotation throughout the stance phase of running. These results showed that the Powers™ strap has the potential to influence hip motion during running-related activities, which might be beneficial for patients with lower limb injuries. Future research should investigate the influence of the Powers™ strap in subjects who suffer from running-related injuries, such as patellofemoral pain. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
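A minimal stand-in for the Kalman-filtering component described above: a constant-velocity filter on a single image coordinate, with assumed noise levels (the record does not give the actual state model or tuning):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter on one image coordinate:
    state [position, velocity]. Noise levels q, r are assumptions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    for z in measurements[1:]:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.array([z]) - H @ x) # update with measurement
        P = (np.eye(2) - K @ H) @ P
    return x

# Track a target drifting at 2 px/frame under mild measurement noise,
# a toy analogue of wave-induced jitter on the buoy-mounted camera
rng = np.random.default_rng(2)
zs = 2.0 * np.arange(50) + rng.normal(0, 0.5, 50)
state = kalman_track(zs)
```

The recovered velocity component is what allows prediction of the target's position even when the observer jitters between frames.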

  2. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  3. Automatic acquisition of motion trajectories: tracking hockey players

    NASA Astrophysics Data System (ADS)

    Okuma, Kenji; Little, James J.; Lowe, David

    2003-12-01

    Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as trajectories, query the data by analyzing its descriptive information, and predict the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on the visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More precisely, we automatically analyze the hockey scenes by estimating the parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying the trajectories of those players. Many technical problems in vision, such as fast and unpredictable player motions and rapid camera motions, make our challenge worth tackling. To the best of our knowledge, no automatic video annotation systems for hockey have been developed in the past. 
Although there are many obstacles to overcome, our efforts and accomplishments would hopefully establish the infrastructure of the automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.

  4. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; de Vries, Sjoerd C.

    2010-10-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, observers judged the orientation of the triangles in order to determine VA and CS thresholds at the 75% correct level. Three camera velocities (0, 1.0, and 2.0 deg/s, or 0, 4.1, and 8.1 pixels/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s, and 1 Mb/s) were used. VA is shown to be rather robust to any combination of motion and compression. CS, however, dramatically decreases when motion is combined with high compression ratios. The measured thresholds were fed into the TOD target acquisition model to predict the effect of motion and compression on acquisition ranges for tactical military vehicles. The effect of compression on static performance is limited but strong with motion video. The data suggest that with the MPEG2 algorithm, the emphasis is on the preservation of image detail at the cost of contrast loss.

  5. Two-dimensional fruit ripeness estimation using thermal imaging

    NASA Astrophysics Data System (ADS)

    Sumriddetchkajorn, Sarun; Intaravanne, Yuttana

    2013-06-01

    Some green fruits do not change their color from green to yellow when ripe. As a result, ripeness estimation via color and fluorescence analytical approaches cannot be applied. In this article, we propose and show for the first time how a thermal imaging camera can be used to two-dimensionally classify fruits into different ripeness levels. Our key idea relies on the fact that mature fruits have a higher heat capacity than immature ones, and therefore the change in surface temperature over time is slower. Our experimental proof of concept using a thermal imaging camera shows a promising result in non-destructively identifying three different ripeness levels of mangoes (Mangifera indica L.).
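The key idea above reduces to thresholding the per-pixel rate of surface-temperature change between thermal frames; a toy sketch with illustrative (not calibrated) rates and thresholds:

```python
import numpy as np

def ripeness_level(temp_t0, temp_t1, dt_s, thresholds=(0.02, 0.01)):
    """Per-pixel ripeness from two thermal frames: mature fruit has
    higher heat capacity, so its surface temperature changes more
    slowly. The rates (deg C/s) and thresholds are illustrative,
    not the paper's calibrated values. Returns 0=unripe, 1=mid, 2=ripe."""
    rate = np.abs(temp_t1 - temp_t0) / dt_s
    level = np.zeros_like(rate, dtype=int)
    level[rate < thresholds[0]] = 1   # slower change -> riper
    level[rate < thresholds[1]] = 2   # slowest change -> ripest
    return level

# Three synthetic pixels cooling at different rates over 60 s
t0 = np.array([30.0, 30.0, 30.0])
t1 = np.array([28.0, 29.2, 29.8])   # fast, medium, slow cooling
levels = ripeness_level(t0, t1, 60.0)
```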

  6. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
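The final stereometric step above follows the classic parallel-camera relation Z = f·B/d, where d is the horizontal disparity of the laser spot between the two processed images; a minimal sketch with hypothetical baseline, focal length, and disparity values:

```python
def stereo_range(disparity_px, baseline_m, focal_px):
    """Classic stereometric range: Z = f * B / d, using the horizontal
    disparity d of the laser spot between the left and right images
    (parallel-camera assumption; all numbers here are hypothetical)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 0.12 m baseline, 800 px focal length, 16 px disparity
z = stereo_range(16.0, 0.12, 800.0)  # -> 6.0 m
```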

  7. The Impact of Stereoscopic Imagery and Motion on Anatomical Structure Recognition and Visual Attention Performance

    ERIC Educational Resources Information Center

    Remmele, Martin; Schmidt, Elena; Lingenfelder, Melissa; Martens, Andreas

    2018-01-01

    Gross anatomy is located in a three-dimensional space. Visualizing aspects of structures in gross anatomy education should aim to provide information that best resembles their original spatial proportions. Stereoscopic three-dimensional imagery might offer possibilities to implement this aim, though some research has revealed potential impairments…

  8. Mighty Math[TM] Zoo Zillions[TM]. [CD-ROM].

    ERIC Educational Resources Information Center

    1996

    Zoo Zillions contains five activities for grades K-2: Annie's Jungle Trail, 3D Gallery, Number Line Express, Gnu Ewe Boutique, and Fish Stories. These activities enable children to review and practice basic mathematics skills; identify three-dimensional shapes, watch them in motion, and create their own three-dimensional designs; locate numbers…

  9. Three-dimensional modeling of tea-shoots using images and models.

    PubMed

    Wang, Jian; Zeng, Xianyin; Liu, Jianbing

    2011-01-01

    In this paper, a method for three-dimensional modeling of tea shoots using images and calculation models is introduced. The process is as follows: the tea shoots are photographed with a camera; color space conversion is conducted; an improved algorithm based on color and regional growth is used to segment the tea shoots in the images, and the edges of the tea shoots are extracted with the help of edge detection; after that, using the segmented tea-shoot images, the three-dimensional coordinates of the tea shoots are computed and the feature parameters extracted; matching and calculation are conducted against the model database; and finally the three-dimensional modeling of the tea shoots is completed. According to the experimental results, this method avoids a large amount of calculation, has better visual effects and, moreover, performs better in recovering the three-dimensional information of the tea shoots, thereby providing a new method for monitoring the growth of, and non-destructive testing of, tea shoots.

  10. Analysis of electrohydrodynamic jetting using multifunctional and three-dimensional tomography

    NASA Astrophysics Data System (ADS)

    Ko, Han Seo; Nguyen, Xuan Hung; Lee, Soo-Hong; Kim, Young Hyun

    2013-11-01

    A three-dimensional optical tomography technique was developed to reconstruct three-dimensional flow fields using a set of two-dimensional shadowgraphic images and normal gray images. From three high-speed cameras positioned at an offset angle of 45° relative to one another, the number, size and location of electrohydrodynamic jets with respect to the nozzle position were analyzed using shadowgraphic tomography employing a multiplicative algebraic reconstruction technique (MART). Additionally, a flow field inside the cone-shaped liquid (Taylor cone) induced under an electric field was also observed using a simultaneous multiplicative algebraic reconstruction technique (SMART) for reconstructing the intensities of particle light, combined with a three-dimensional cross correlation. Various velocity fields of a circulating flow inside the cone-shaped liquid due to different physico-chemical properties of the liquid and applied voltages were also investigated. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. S-2011-0023457).
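The MART iteration mentioned above multiplicatively corrects each voxel by the ratio of measured to computed projections, raised to the ray-weight power; a minimal sketch on a toy two-voxel system (the weight matrix is synthetic, not the paper's camera geometry):

```python
import numpy as np

def mart(A, y, n_iters=50, lam=1.0):
    """Multiplicative algebraic reconstruction technique (MART):
    for each ray i, scale every voxel j by (y_i / (A x)_i)^(lam * a_ij).
    A: (n_rays, n_voxels) weight matrix, y: measured projections."""
    x = np.ones(A.shape[1])          # positive initial guess
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            proj = A[i] @ x          # computed projection along ray i
            if proj > 0 and y[i] > 0:
                x *= (y[i] / proj) ** (lam * A[i])
    return x

# Tiny two-voxel phantom observed by three rays
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_hat = mart(A, A @ x_true)
```

Real tomographic setups have far more rays and voxels per camera view, but the per-ray multiplicative update is the same.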

  11. Six-degrees-of-freedom sensing based on pictures taken by single camera.

    PubMed

    Zhongke, Li; Yong, Wang; Yongyuan, Qin; Peijun, Lu

    2005-02-01

    Two six-degrees-of-freedom sensing methods are presented. In the first method, three laser beams are employed to set up a Cartesian frame on a rigid body, and a screen is adopted to form diffuse spots. In the second method, two superimposed grid screens and two laser beams are used. A CCD camera is used to take photographs in both methods. Both approaches provide a simple and error-free way to continuously record the positions and attitudes of a rigid body in motion.

  12. Trained neurons-based motion detection in optical camera communications

    NASA Astrophysics Data System (ADS)

    Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho

    2018-04-01

A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons in a neural network that perform repetitive analysis in order to provide efficient and reliable motion detection in OCC. This efficient motion detection can be considered another functionality of OCC, in addition to the two traditional functionalities of illumination and communication. To verify the proposed TNMD, experiments were conducted in an indoor static downlink OCC, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. The motion is detected by observing the user's finger movement in the form of a centroid through the OCC link via the camera. Unlike conventional trained-neurons approaches, the proposed TNMD is trained not with the motion itself but with centroid data samples, thus providing more accurate detection and a far less complex detection algorithm. The experiment results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performance at a transmission distance of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. The OCC combined with the proposed TNMD can be considered an efficient indoor OCC system that provides illumination, communication, and motion detection in a convenient smart home environment.
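
    The centroid cue the TNMD trains on reduces, per frame, to the mean position of pixels above a brightness threshold; the frames, threshold, and direction labels below are hypothetical illustrations, not the paper's algorithm:

    ```python
    def centroid(frame, threshold=128):
        # frame: 2-D list of grayscale values; returns the mean bright-pixel
        # position (x, y), or None if nothing exceeds the threshold
        pts = [(x, y) for y, row in enumerate(frame)
                      for x, v in enumerate(row) if v >= threshold]
        if not pts:
            return None
        n = len(pts)
        return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

    def motion_direction(c0, c1, min_shift=1.0):
        # classify the centroid displacement between two frames
        dx, dy = c1[0] - c0[0], c1[1] - c0[1]
        if abs(dx) < min_shift and abs(dy) < min_shift:
            return "static"
        return "horizontal" if abs(dx) >= abs(dy) else "vertical"

    # a bright "finger" moving three pixels to the right
    frame0 = [[255, 255, 0, 0, 0], [0, 0, 0, 0, 0]]
    frame1 = [[0, 0, 0, 255, 255], [0, 0, 0, 0, 0]]
    direction = motion_direction(centroid(frame0), centroid(frame1))
    ```

    Sequences of such centroids (rather than raw motion frames) are the compact training samples that keep the detector simple.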

  13. Three-dimensional structural dynamics of DNA origami Bennett linkages using individual-particle electron tomography

    DOE PAGES

    Lei, Dongsheng; Marras, Alexander E.; Liu, Jianfang; ...

    2018-02-09

Scaffolded DNA origami has proven to be a powerful and efficient technique to fabricate functional nanomachines by programming the folding of a single-stranded DNA template strand into three-dimensional (3D) nanostructures, designed to be precisely motion-controlled. Although two-dimensional (2D) imaging of DNA nanomachines using transmission electron microscopy and atomic force microscopy suggested these nanomachines are dynamic in 3D, geometric analysis based on 2D imaging was insufficient to uncover the exact motion in 3D. In this paper, we use the individual-particle electron tomography method and reconstruct 129 density maps from 129 individual DNA origami Bennett linkage mechanisms at ~6-14 nm resolution. The statistical analyses of these conformations lead to understanding the 3D structural dynamics of Bennett linkage mechanisms. Moreover, our effort provides experimental verification of a theoretical kinematics model of DNA origami, which can be used as feedback to improve the design and control of motion via optimized DNA sequences and routing.

  14. Three-dimensional structural dynamics of DNA origami Bennett linkages using individual-particle electron tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Dongsheng; Marras, Alexander E.; Liu, Jianfang

Scaffolded DNA origami has proven to be a powerful and efficient technique to fabricate functional nanomachines by programming the folding of a single-stranded DNA template strand into three-dimensional (3D) nanostructures, designed to be precisely motion-controlled. Although two-dimensional (2D) imaging of DNA nanomachines using transmission electron microscopy and atomic force microscopy suggested these nanomachines are dynamic in 3D, geometric analysis based on 2D imaging was insufficient to uncover the exact motion in 3D. In this paper, we use the individual-particle electron tomography method and reconstruct 129 density maps from 129 individual DNA origami Bennett linkage mechanisms at ~6-14 nm resolution. The statistical analyses of these conformations lead to understanding the 3D structural dynamics of Bennett linkage mechanisms. Moreover, our effort provides experimental verification of a theoretical kinematics model of DNA origami, which can be used as feedback to improve the design and control of motion via optimized DNA sequences and routing.

  15. Motion Imagery and Robotics Application Project (MIRA)

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney P.

    2010-01-01

    This viewgraph presentation describes the Motion Imagery and Robotics Application (MIRA) Project. A detailed description of the MIRA camera service software architecture, encoder features, and on-board communications are presented. A description of a candidate camera under development is also shown.

  16. Real-Time Three-Dimensional Echocardiography: Characterization of Cardiac Anatomy and Function-Current Clinical Applications and Literature Review Update.

    PubMed

    Velasco, Omar; Beckett, Morgan Q; James, Aaron W; Loehr, Megan N; Lewis, Taylor G; Hassan, Tahmin; Janardhanan, Rajesh

    2017-01-01

Our review of real-time three-dimensional echocardiography (RT3DE) discusses the diagnostic utility of RT3DE and provides a comparison with two-dimensional echocardiography (2DE) in clinical cardiology. A PubMed literature search on RT3DE was performed using the following key words: transthoracic, two-dimensional, three-dimensional, real-time, and left ventricular (LV) function. Articles included prospective clinical studies and meta-analyses in the English language and focused on the role of RT3DE in human subjects. Applications of RT3DE include analysis of the pericardium, right ventricular (RV) and LV cavities, wall motion, valvular disease, great vessels, congenital anomalies, and traumatic injury, such as myocardial contusion. RT3DE, through transthoracic echocardiography (TTE), allows for increasingly accurate volume and valve motion assessment, estimation of LV ejection fraction, and volume measurements. Chamber motion and LV mass approximation have been more accurately evaluated by RT3DE through improved inclusion of the third dimension and quantification of volumetric movement. Moreover, ejection fractions measured by RT3DE showed no statistically significant difference from those measured by cardiac magnetic resonance (CMR). Analysis of RT3DE data sets of the LV endocardial surface allows the volume to be directly quantified for specific phases of the cardiac cycle, from end systole to end diastole, eliminating error from wall motion abnormalities and asymmetrical left ventricles. RT3DE through TTE measures cardiac function with superior diagnostic accuracy in predicting LV mass and systolic function, along with LV and RV volume, when compared with 2DE, with results comparable to CMR.
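
    The phase-specific volume quantification described above is what makes ejection fraction a simple ratio of two measured volumes; a minimal sketch with illustrative (not study) volumes:

    ```python
    def ejection_fraction(edv_ml, esv_ml):
        # LV ejection fraction (%) from end-diastolic and end-systolic
        # volumes, e.g. as quantified from an RT3DE endocardial data set
        return 100.0 * (edv_ml - esv_ml) / edv_ml

    ef = ejection_fraction(120.0, 50.0)  # illustrative volumes in mL
    ```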

  17. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During calibration, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
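
    The digital-image-correlation step amounts to finding, for each camera speckle patch, the projector-image offset with the highest normalized cross-correlation. A minimal 1-D sketch with made-up signals (real DIC works on 2-D subsets with subpixel refinement):

    ```python
    def ncc(a, b):
        # normalized cross-correlation of two equal-length patches
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db) if da and db else 0.0

    def best_match(template, signal):
        # slide the template over the signal; return the offset of the
        # highest correlation, i.e. the established correspondence
        best, best_off = -2.0, 0
        for off in range(len(signal) - len(template) + 1):
            c = ncc(template, signal[off:off + len(template)])
            if c > best:
                best, best_off = c, off
        return best_off

    speckle = [5, 9, 4]                       # projected speckle patch
    camera_row = [0, 0, 5, 9, 4, 0, 0, 1]     # observed camera scanline
    offset = best_match(speckle, camera_row)  # -> 2
    ```

    Collecting many such correspondences across board orientations yields the point pairs that a standard camera calibration algorithm needs to treat the projector as an inverse camera.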

  18. A biomechanical comparison in the lower limb and lumbar spine between a hit and drag flick in field hockey.

    PubMed

    Ng, Leo; Rosalie, Simon M; Sherry, Dorianne; Loh, Wei Bing; Sjurseth, Andreas M; Iyengar, Shrikant; Wild, Catherine Y

    2018-03-01

Research has revealed that field hockey drag flickers (DFs) have greater odds of hip and lumbar injuries compared to non-drag flickers. This study aimed to compare the biomechanics of a field hockey hit and a specialised field hockey drag flick. Eighteen male and seven female specialised hockey DFs performed a hit and a drag flick in a motion analysis laboratory with an 18-camera three-dimensional motion analysis system and a calibrated multichannel force platform to examine differences in lower limb and lumbar kinematics and kinetics. Results revealed that drag flicks were performed with more of a forward lunge on the left lower limb, resulting in significantly greater left ankle dorsiflexion and knee, hip and lumbar flexion (Ps < 0.001) compared to a hit. Drag flicks were also performed with significantly greater lateral flexion (P < 0.002) and rotation of the lumbar spine (P < 0.006) compared to a hit. Differences in kinematics led to greater shear, compression and tensile forces in multiple left lower limb and lumbar joints in the drag flick compared to the hit (P < 0.05). The biomechanical differences between drag flicks and hits may have ramifications with respect to injury in field hockey drag flickers.

  19. Opportunity Stretches Out 3-D

    NASA Image and Video Library

    2004-02-02

    This is a three-dimensional stereo anaglyph of an image taken by the front hazard-identification camera onboard NASA Mars Exploration Rover Opportunity, showing the rover arm in its extended position. 3D glasses are necessary to view this image.

  20. 4-mm-diameter three-dimensional imaging endoscope with steerable camera for minimally invasive surgery (3-D-MARVEL).

    PubMed

    Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr

    2017-01-01

High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities. Such capability currently exists only at overall diameters larger than 4 mm. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called the Multiangle, Rear-Viewing Endoscopic Tool (MARVEL), that uses a single-lens system with complementary multibandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side to side over an angle of [Formula: see text], which is another unique aspect of MARVEL for this class of endoscopes. The design and construction of a single-lens, CMBF aperture camera with integrated illumination to generate 3-D images, and the actuation mechanism built into it, are summarized.

  1. Three-dimensional computer graphic animations for studying social approach behaviour in medaka fish: Effects of systematic manipulation of morphological and motion cues.

    PubMed

    Nakayasu, Tomohiro; Yasugi, Masaki; Shiraishi, Soma; Uchida, Seiichi; Watanabe, Eiji

    2017-01-01

We studied social approach behaviour in medaka fish using three-dimensional computer graphic (3DCG) animations based on the morphological features and motion characteristics obtained from real fish. This is the first study to use 3DCG animations to examine the relative effects of morphological and motion cues on social approach behaviour in medaka. Various visual stimuli, e.g., lack of motion, lack of colour, alteration in shape, lack of locomotion, lack of body motion, and normal virtual fish in which all four features (colour, shape, locomotion, and body motion) were reconstructed, were created and presented to fish using a computer display. Medaka presented with normal virtual fish spent a long time in proximity to the display, whereas time spent near the display decreased in the other groups compared with the normal virtual medaka group. The results suggest that the naturalness of visual cues contributes to the induction of social approach behaviour. Differential effects between body motion and locomotion were also detected. 3DCG animations can be a useful tool to study the mechanisms of visual processing and social behaviour in medaka.

  2. Three-dimensional computer graphic animations for studying social approach behaviour in medaka fish: Effects of systematic manipulation of morphological and motion cues

    PubMed Central

    Nakayasu, Tomohiro; Yasugi, Masaki; Shiraishi, Soma; Uchida, Seiichi; Watanabe, Eiji

    2017-01-01

We studied social approach behaviour in medaka fish using three-dimensional computer graphic (3DCG) animations based on the morphological features and motion characteristics obtained from real fish. This is the first study to use 3DCG animations to examine the relative effects of morphological and motion cues on social approach behaviour in medaka. Various visual stimuli, e.g., lack of motion, lack of colour, alteration in shape, lack of locomotion, lack of body motion, and normal virtual fish in which all four features (colour, shape, locomotion, and body motion) were reconstructed, were created and presented to fish using a computer display. Medaka presented with normal virtual fish spent a long time in proximity to the display, whereas time spent near the display decreased in the other groups compared with the normal virtual medaka group. The results suggest that the naturalness of visual cues contributes to the induction of social approach behaviour. Differential effects between body motion and locomotion were also detected. 3DCG animations can be a useful tool to study the mechanisms of visual processing and social behaviour in medaka. PMID:28399163

  3. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT, based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point were reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
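
    The shift-invariance argument can be illustrated as simple bookkeeping over object points: a point whose new position is explained by the frame's motion vector can reuse its stored fringe contribution, and only the remaining points need fresh computation. The point sets and motion vector below are toy values, not the paper's data:

    ```python
    def points_to_recompute(prev_points, curr_points, motion):
        # shift-invariance: translating an object point by the frame's
        # motion vector translates its fringe pattern by the same amount,
        # so such points reuse the previous frame's computation
        dx, dy = motion
        predicted = {(x + dx, y + dy) for x, y in prev_points}
        return curr_points - predicted

    prev_frame = {(0, 0), (1, 0)}
    curr_frame = {(1, 0), (2, 0), (5, 5)}  # old points shifted, one new
    fresh = points_to_recompute(prev_frame, curr_frame, (1, 0))  # {(5, 5)}
    ```

    The fewer points that survive this set difference, the larger the speed-up over recomputing every frame from scratch.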

  4. Monitoring the Wall Mechanics During Stent Deployment in a Vessel

    PubMed Central

    Steinert, Brian D.; Zhao, Shijia; Gu, Linxia

    2012-01-01

Clinical trials have reported different restenosis rates for various stent designs1. It is speculated that stent-induced strain concentrations on the arterial wall lead to tissue injury, which initiates restenosis2-7. This hypothesis needs further investigation, including better quantification of the non-uniform strain distribution on the artery following stent implantation. A non-contact surface strain measurement method for the stented artery is presented in this work. The ARAMIS stereo optical surface strain measurement system uses two high-speed optical cameras to capture the motion of each reference point and resolve three-dimensional strains over the deforming surface8,9. As a mesh stent is deployed into a latex vessel with a random contrasting pattern sprayed or drawn on its outer surface, the surface strain is recorded at every instant of the deformation. The calculated strain distributions can then be used to understand the local lesion response, validate computational models, and formulate hypotheses for further in vivo study. PMID:22588353
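
    The strain such a system resolves between tracked surface points is, at its simplest, the relative change in distance between point pairs from the reference to the deformed state. A minimal sketch with made-up 3-D marker coordinates (not measured data):

    ```python
    import math

    def engineering_strain(p_ref, q_ref, p_def, q_def):
        # strain of segment p-q: change in length over reference length
        l_ref = math.dist(p_ref, q_ref)
        l_def = math.dist(p_def, q_def)
        return (l_def - l_ref) / l_ref

    # two tracked pattern points before and after stent deployment
    strain = engineering_strain((0.0, 0.0, 0.0), (10.0, 0.0, 0.0),
                                (0.0, 0.0, 0.0), (10.4, 0.0, 0.0))  # 0.04
    ```

    Evaluating this over a dense grid of tracked pattern points gives the non-uniform surface strain map referred to above.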

  5. Robotic Colorectal Surgery

    PubMed Central

    2008-01-01

    Robotic colorectal surgery has gradually been performed more with the help of the technological advantages of the da Vinci® system. Advanced technological advantages of the da Vinci® system compared with standard laparoscopic colorectal surgery have been reported. These are a stable camera platform, three-dimensional imaging, excellent ergonomics, tremor elimination, ambidextrous capability, motion scaling, and instruments with multiple degrees of freedom. However, despite these technological advantages, most studies did not report the clinical advantages of robotic colorectal surgery compared to standard laparoscopic colorectal surgery. Only one study recently implies the real benefits of robotic rectal cancer surgery. The purpose of this review article is to outline the early concerns of robotic colorectal surgery using the da Vinci® system, to present early clinical outcomes from the most current series, and to discuss not only the safety and the feasibility but also the real benefits of robotic colorectal surgery. Moreover, this article will comment on the possible future clinical advantages and limitations of the da Vinci® system in robotic colorectal surgery. PMID:19108010

  6. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

Human Motion Analysis (HMA) has been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely, bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration, and camera configuration. The review focuses on evaluating camera system considerations for HMA systems specifically in biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system for an HMA system for biomedical applications.

  7. Registration of Large Motion Blurred Images

    DTIC Science & Technology

    2016-05-09

… in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS) …

  8. Teacher-in-Space Trainees - Arriflex Motion Picture Camera

    NASA Image and Video Library

    1985-09-20

    S85-40668 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe (left) and Barbara R. Morgan have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Photo credit: NASA

  9. Photogrammetry of Apollo 15 photography, part C

    NASA Technical Reports Server (NTRS)

    Wu, S. S. C.; Schafer, F. J.; Jordan, R.; Nakata, G. M.; Derick, J. L.

    1972-01-01

In the Apollo 15 mission, a mapping camera system, a 61-cm optical-bar high-resolution panoramic camera, and a laser altimeter were used. The panoramic camera is described as having several distortion sources, such as the cylindrical shape of the negative film surface, the scanning action of the lens, the image motion compensator, and the spacecraft motion. Film products were processed on a specifically designed analytical plotter.

  10. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).
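
    The elliptical geometry works because every point on an ellipse has the same summed distance, 2a, to the two foci, so a beam reflected off the ellipse between components placed at the foci travels a constant optical path length regardless of where it strikes. A quick numerical check with arbitrary semi-axes (not the patent's dimensions):

    ```python
    import math

    def path_via_ellipse(a, b, theta):
        # distance focus-1 -> ellipse point -> focus-2 for the point
        # (a*cos(theta), b*sin(theta)); the foci sit at (+/-c, 0)
        c = math.sqrt(a * a - b * b)
        x, y = a * math.cos(theta), b * math.sin(theta)
        return math.hypot(x - c, y) + math.hypot(x + c, y)

    # the total path equals 2a everywhere on the ellipse
    lengths = [path_via_ellipse(5.0, 3.0, t / 10.0) for t in range(63)]
    ```

    A constant path length means the reference and signal beams stay matched in optical path, which is what lets the interference pattern survive a fast-moving scene.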

  11. A three-dimensional model to assess the effect of ankle joint axis misalignments in ankle-foot orthoses.

    PubMed

    Fatone, Stefania; Johnson, William Brett; Tucker, Kerice

    2016-04-01

Misalignment of an articulated ankle-foot orthosis joint axis with the anatomic joint axis may lead to discomfort, alterations in gait, and tissue damage. Theoretical, two-dimensional models describe the consequences of misalignments but cannot capture the three-dimensional behavior of ankle-foot orthosis use. The purpose of this project was to develop a model to describe the effects of ankle-foot orthosis ankle joint misalignment in three dimensions. Computational simulation. Three-dimensional scans of a leg and ankle-foot orthosis were incorporated into a link segment model in which the ankle-foot orthosis joint axis could be misaligned with the anatomic ankle joint axis. The leg/ankle-foot orthosis interface was modeled as a network of nodes connected by springs to estimate interface pressure. Motion between the leg and ankle-foot orthosis was calculated as the ankle joint moved through a gait cycle. While the three-dimensional model corroborated predictions of the previously published two-dimensional model that misalignments in the anterior-posterior direction would result in greater relative motion compared to misalignments in the proximal-distal direction, it provided greater insight by showing that misalignments have asymmetrical effects. The three-dimensional model has been incorporated into a freely available computer program to assist others in understanding the consequences of joint misalignments. Models and simulations can be used to gain insight into the functioning of systems of interest. We have developed a three-dimensional model to assess the effect of ankle joint axis misalignments in ankle-foot orthoses. The model has been incorporated into a freely available computer program to assist the understanding of trainees and others interested in orthotics. © The International Society for Prosthetics and Orthotics 2014.

  12. Development of a 300,000-pixel ultrahigh-speed high-sensitivity CCD

    NASA Astrophysics Data System (ADS)

    Ohtake, H.; Hayashida, T.; Kitamura, K.; Arai, T.; Yonai, J.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Poggemann, D.; Ruckelshausen, A.; van Kuijk, H.; Bosiers, Jan T.

    2006-02-01

We are developing an ultrahigh-speed, high-sensitivity broadcast camera that is capable of capturing clear, smooth slow-motion videos even where lighting is limited, such as at professional baseball games played at night. In earlier work, we developed an ultrahigh-speed broadcast color camera1) using three 80,000-pixel ultrahigh-speed, high-sensitivity CCDs2). This camera had about ten times the sensitivity of standard high-speed cameras and enabled an entirely new style of presentation for sports broadcasts and science programs. Most notably, increasing the pixel count is crucially important for applying ultrahigh-speed, high-sensitivity CCDs to HDTV broadcasting. This paper provides a summary of our experimental development aimed at improving the resolution of the CCD even further: a new ultrahigh-speed, high-sensitivity CCD that increases the pixel count four-fold to 300,000 pixels.

  13. Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung

    2013-01-01

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing these noises. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as optimal ones. The dimension reduction of the features and feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to support vector machine (SVM), which improves the accuracy in detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713

  14. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principle of binocular vision we deduce the relationship between binocular vision and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision and obtain the positional relation of prism, camera, and object that yields the best stereo display effect. Finally, using the active shutter stereo glasses of the NVIDIA Company, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. A stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.
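
    Once the prism system is reduced to an equivalent dual-camera geometry, depth follows the standard rectified-stereo triangulation relation Z = f·B/d. A minimal sketch; the focal length, baseline, and disparity are assumed values, not the paper's calibration:

    ```python
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        # standard rectified-stereo triangulation: Z = f * B / d
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # assumed 800 px focal length, 6 cm equivalent baseline, 4 px disparity
    z = depth_from_disparity(800, 0.06, 4)  # -> 12.0 (metres)
    ```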

  15. Trajectory of Charged Particle in Combined Electric and Magnetic Fields Using Interactive Spreadsheets

    ERIC Educational Resources Information Center

    Tambade, Popat S.

    2011-01-01

The objective of this article is to graphically illustrate to students the physical phenomenon of the motion of a charged particle under the action of simultaneous electric and magnetic fields by simulating the particle motion on a computer. Differential equations of motion are solved analytically and the paths of the particle in three-dimensional space are…
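
    The kind of simulation described, integrating the Lorentz force a = (q/m)(E + v × B), can also be sketched outside a spreadsheet; the field values, charge-to-mass ratio, and step size below are illustrative:

    ```python
    def lorentz_accel(v, E, B, qm):
        # a = (q/m) * (E + v x B)
        vx, vy, vz = v
        bx, by, bz = B
        cx = vy * bz - vz * by
        cy = vz * bx - vx * bz
        cz = vx * by - vy * bx
        return (qm * (E[0] + cx), qm * (E[1] + cy), qm * (E[2] + cz))

    def rk4_velocity(v, E, B, qm, dt):
        # one fourth-order Runge-Kutta step for the velocity alone
        add = lambda a, b, s: tuple(ai + s * bi for ai, bi in zip(a, b))
        k1 = lorentz_accel(v, E, B, qm)
        k2 = lorentz_accel(add(v, k1, dt / 2), E, B, qm)
        k3 = lorentz_accel(add(v, k2, dt / 2), E, B, qm)
        k4 = lorentz_accel(add(v, k3, dt), E, B, qm)
        return tuple(vi + dt / 6 * (a + 2 * b + 2 * c + d)
                     for vi, a, b, c, d in zip(v, k1, k2, k3, k4))

    # pure magnetic field: the particle gyrates, so |v| stays constant
    v = (1.0, 0.0, 0.0)
    for _ in range(1000):
        v = rk4_velocity(v, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0, 0.01)
    speed = sum(c * c for c in v) ** 0.5
    ```

    Adding a nonzero E field to the same integrator reproduces the drift trajectories the article visualizes.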

  16. Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model.

    PubMed

    Li, Jing; Zhang, Fangbing; Wei, Lisong; Yang, Tao; Lu, Zhaoyang

    2017-10-16

    Pedestrian detection is among the most frequently-used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is adopted to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, the qualitative and quantitative comparison results show that our work outperforms classical background subtraction approaches and a recent RGB-D method, as well as achieving comparable performance with the state-of-the-art deep learning pedestrian detection method even with a much lower hardware cost.

  17. Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model

    PubMed Central

    Li, Jing; Zhang, Fangbing; Wei, Lisong; Lu, Zhaoyang

    2017-01-01

    Pedestrian detection is among the most frequently-used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is adopted to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, the qualitative and quantitative comparison results show that our work outperforms classical background subtraction approaches and a recent RGB-D method, as well as achieving comparable performance with the state-of-the-art deep learning pedestrian detection method even with a much lower hardware cost. PMID:29035295

  18. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, S; Rao, A; Wendt, R

Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.

  19. New Evidence for the Dynamical Decay of a Multiple System in the Orion Kleinmann–Low Nebula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luhman, K. L.; Robberto, M.; Gabellini, M. Giulia Ubeira

We have measured astrometry for members of the Orion Nebula Cluster with images obtained in 2015 with the Wide Field Camera 3 on board the Hubble Space Telescope. By comparing those data to previous measurements with the Near-Infrared Camera and Multi-Object Spectrometer on Hubble in 1998, we have discovered that a star in the Kleinmann–Low Nebula, source x from Lonsdale et al., is moving with an unusually high proper motion of 29 mas yr⁻¹, which corresponds to 55 km s⁻¹ at the distance of Orion. Previous radio observations have found that three other stars in the Kleinmann–Low Nebula (the Becklin–Neugebauer object and sources I and n) have high proper motions (5–14 mas yr⁻¹) and were near a single location ∼540 years ago, and thus may have been members of a multiple system that dynamically decayed. The proper motion of source x is consistent with ejection from that same location 540 years ago, which provides strong evidence that the dynamical decay did occur and that the runaway star BN originated in the Kleinmann–Low Nebula rather than the nearby Trapezium cluster. However, our constraint on the motion of source n is significantly smaller than the most recent radio measurement, which indicates that it did not participate in the event that ejected the other three stars.

  20. New Evidence for the Dynamical Decay of a Multiple System in the Orion Kleinmann-Low Nebula

    NASA Astrophysics Data System (ADS)

    Luhman, K. L.; Robberto, M.; Tan, J. C.; Andersen, M.; Giulia Ubeira Gabellini, M.; Manara, C. F.; Platais, I.; Ubeda, L.

    2017-03-01

We have measured astrometry for members of the Orion Nebula Cluster with images obtained in 2015 with the Wide Field Camera 3 on board the Hubble Space Telescope. By comparing those data to previous measurements with the Near-Infrared Camera and Multi-Object Spectrometer on Hubble in 1998, we have discovered that a star in the Kleinmann-Low Nebula, source x from Lonsdale et al., is moving with an unusually high proper motion of 29 mas yr⁻¹, which corresponds to 55 km s⁻¹ at the distance of Orion. Previous radio observations have found that three other stars in the Kleinmann-Low Nebula (the Becklin-Neugebauer object and sources I and n) have high proper motions (5-14 mas yr⁻¹) and were near a single location ~540 years ago, and thus may have been members of a multiple system that dynamically decayed. The proper motion of source x is consistent with ejection from that same location 540 years ago, which provides strong evidence that the dynamical decay did occur and that the runaway star BN originated in the Kleinmann-Low Nebula rather than the nearby Trapezium cluster. However, our constraint on the motion of source n is significantly smaller than the most recent radio measurement, which indicates that it did not participate in the event that ejected the other three stars. Based on observations made with the NASA/ESA Hubble Space Telescope and the NASA Infrared Telescope Facility.
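
As a sanity check on the quoted figures, the standard relation v_t = 4.74 μ d converts a proper motion μ (in arcsec yr⁻¹) at distance d (in parsecs) into a tangential velocity in km s⁻¹. A minimal sketch (the ~400 pc Orion distance used here is an illustrative assumption; the paper's adopted distance may differ slightly):

```python
def tangential_velocity_km_s(mu_mas_per_yr, distance_pc):
    """Tangential velocity: v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]."""
    return 4.74 * (mu_mas_per_yr / 1000.0) * distance_pc

# Source x: 29 mas/yr at an assumed ~400 pc gives roughly 55 km/s.
v_source_x = tangential_velocity_km_s(29.0, 400.0)
```

With these inputs the result is about 55 km s⁻¹, consistent with the value quoted in the abstract.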

  1. Three-dimensional mechanisms of macro-to-micro-scale transport and absorption enhancement by gut villi motions

    NASA Astrophysics Data System (ADS)

    Wang, Yanxing; Brasseur, James G.

    2017-06-01

We evaluate the potential for physiological control of intestinal absorption by the generation of "micromixing layers" (MMLs) induced by coordinated motions of mucosal villi coupled with lumen-scale "macro" eddying motions generated by gut motility. To this end, we apply a three-dimensional (3D) multigrid lattice-Boltzmann model of a lid-driven macroscale cavity flow with microscale fingerlike protuberances at the lower surface. Building on a previous 2D study of leaflike villi, we generalize to 3D the mechanisms found there to enhance nutrient absorption by controlled villi motility. In three dimensions, increased lateral spacing between villi within groups that move axially with the macroeddy reduces MML strength and absorptive enhancement relative to two dimensions. However, lateral villi motions create helical 3D particle trajectories that enhance absorption rate to the level of axially moving 2D leaflike villi. The 3D enhancements are associated with interesting fundamental adjustments to 2D micro-macro-motility coordination mechanisms and imply a refined potential for physiological or pharmaceutical control of intestinal absorption.

  2. Three-dimensional mapping of microcircuit correlation structure

    PubMed Central

    Cotton, R. James; Froudarakis, Emmanouil; Storer, Patrick; Saggau, Peter; Tolias, Andreas S.

    2013-01-01

    Great progress has been made toward understanding the properties of single neurons, yet the principles underlying interactions between neurons remain poorly understood. Given that connectivity in the neocortex is locally dense through both horizontal and vertical connections, it is of particular importance to characterize the activity structure of local populations of neurons arranged in three dimensions. However, techniques for simultaneously measuring microcircuit activity are lacking. We developed an in vivo 3D high-speed, random-access two-photon microscope that is capable of simultaneous 3D motion tracking. This allows imaging from hundreds of neurons at several hundred Hz, while monitoring tissue movement. Given that motion will induce common artifacts across the population, accurate motion tracking is absolutely necessary for studying population activity with random-access based imaging methods. We demonstrate the potential of this imaging technique by measuring the correlation structure of large populations of nearby neurons in the mouse visual cortex, and find that the microcircuit correlation structure is stimulus-dependent. Three-dimensional random access multiphoton imaging with concurrent motion tracking provides a novel, powerful method to characterize the microcircuit activity in vivo. PMID:24133414

  3. The suitability of lightfield camera depth maps for coordinate measurement applications

    NASA Astrophysics Data System (ADS)

    Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael

    2015-12-01

Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two first-generation Lytro cameras were evaluated. In addition, a calibration process has been created for the Lytro cameras to deliver three-dimensional output depth maps represented in SI units (metres). The novel results show depth accuracy of +10.0 mm to -20.0 mm and depth repeatability of 0.5 mm. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.
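
The entry does not detail the calibration procedure, so purely as an illustration, here is a minimal least-squares sketch of mapping raw grey-level depth values to metric depth from reference measurements (the function name and the linear model are assumptions; a real Lytro calibration would likely require a nonlinear model plus distortion terms):

```python
def fit_linear_calibration(grey_values, depths_m):
    """Least-squares fit of depth = a * grey + b from reference pairs
    (grey level from the depth map, ground-truth depth in metres)."""
    n = len(grey_values)
    sx = sum(grey_values)
    sy = sum(depths_m)
    sxx = sum(g * g for g in grey_values)
    sxy = sum(g * d for g, d in zip(grey_values, depths_m))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical reference pairs: grey levels vs. measured depths in metres.
a, b = fit_linear_calibration([0, 50, 100], [0.20, 0.45, 0.70])
```

Once fitted, any grey-level depth map can be converted pixel-by-pixel via `a * grey + b`.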

  4. Richardson-Lucy deblurring for the star scene under a thinning motion path

    NASA Astrophysics Data System (ADS)

    Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining

    2015-05-01

This paper puts emphasis on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the importance of accurate estimation of the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene with a thinning motion blur model that describes the camera's path. This thinning-motion-path blur kernel model is more effective at modeling the spatially varying motion blur introduced by the camera's ego motion than conventional blind estimation of kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star point trajectory, and from it the blur kernel of the motion-blurred star image. We then detail how the motion blur model can be incorporated into the Richardson-Lucy (RL) deblurring algorithm, which reveals its overall effectiveness. In addition, compared with the conventional estimated blur kernel, experimental results show that the proposed thinning-based blur kernel is of lower complexity, higher efficiency, and better accuracy, which contributes to better restoration of the motion-blurred star images.
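
The RL algorithm the paper builds on is standard. As a hedged illustration of its multiplicative update rule (a 1D sketch with an assumed symmetric kernel, not the authors' implementation, which uses the 2D kernel extracted from the thinned star path):

```python
def conv_same(x, k):
    """1D 'same'-size convolution with zero padding at the borders."""
    n, m = len(x), len(k)
    half = m // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(m):
            idx = i + j - half
            if 0 <= idx < n:
                s += x[idx] * k[j]
        out.append(s)
    return out

def richardson_lucy(blurred, kernel, iterations=100):
    """RL deconvolution: estimate <- estimate * (K^T applied to data/reblurred)."""
    estimate = [1.0] * len(blurred)
    k_mirror = kernel[::-1]  # adjoint of convolution uses the flipped kernel
    for _ in range(iterations):
        reblurred = conv_same(estimate, kernel)
        ratio = [d / max(r, 1e-12) for d, r in zip(blurred, reblurred)]
        correction = conv_same(ratio, k_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# A point source blurred by a small symmetric kernel should sharpen back
# toward a spike at its original location.
kernel = [0.25, 0.5, 0.25]
blurred = conv_same([0.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0], kernel)
estimate = richardson_lucy(blurred, kernel, 100)
```

The estimate concentrates back toward index 3, illustrating why an accurate kernel (the paper's thinned motion path) matters: RL deblurs only as well as the PSF it is given.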

  5. Documenting Western Burrowing Owl Reproduction and Activity Patterns Using Motion-Activated Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Derek B.; Greger, Paul D.

We used motion-activated cameras to monitor the reproduction and patterns of activity of the Burrowing Owl (Athene cunicularia) above ground at 45 burrows in south-central Nevada during the breeding seasons of 1999, 2000, 2001, and 2005. The 37 broods, encompassing 180 young, raised over the four years represented an average of 4.9 young per successful breeding pair. Young and adult owls were detected at the burrow entrance at all times of the day and night, but adults were detected more frequently during afternoon/early evening than were young. Motion-activated cameras require less effort to implement than other techniques. Limitations include photographing only a small percentage of owl activity at the burrow; not detecting the actual number of eggs, young, or number fledged; and not being able to track individual owls over time. Further work is also necessary to compare the accuracy of productivity estimates generated from motion-activated cameras with other techniques.

  6. Automatic 3D relief acquisition and georeferencing of road sides by low-cost on-motion SfM

    NASA Astrophysics Data System (ADS)

    Voumard, Jérémie; Bornemann, Perrick; Malet, Jean-Philippe; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-04-01

3D terrain relief acquisition is important for a large part of the geosciences. Several methods have been developed to digitize terrain, such as total stations, LiDAR, GNSS or photogrammetry. To digitize road (or rail track) sides over long sections, mobile spatial imaging systems or UAVs are commonly used. In this project, we compare a still fairly new method - the on-motion SfM technique - with traditional terrain-digitizing techniques (terrestrial laser scanning, traditional SfM, UAS imaging solutions, GNSS surveying systems and total stations). The on-motion SfM technique generates 3D spatial data by photogrammetric processing of images taken from a moving vehicle. Our mobile system consists of six action cameras placed on a vehicle. Four fisheye cameras mounted on a mast on the vehicle roof are placed 3.2 meters above the ground. Three of them have a GNSS chip providing geotagged images. Two pictures were acquired every second by each camera. 4K-resolution fisheye videos were also used to extract 8.3 MP non-geotagged pictures. All these pictures are then processed with the Agisoft PhotoScan Professional software. Results from the on-motion SfM technique are compared with results from classical SfM photogrammetry on a 500-meter-long alpine track, and with mobile laser scanning data on the same road section. First results indicate that slope structures are well observable down to decimetric accuracy. For the georeferencing, the planimetric (XY) accuracy of a few meters is much better than the altimetric (Z) accuracy: there is a Z-coordinate shift of a few tens of meters between the GoPro cameras and the Garmin camera, which makes it necessary to give greater freedom to the altimetric coordinates in the processing software. Benefits of this low-cost on-motion SfM method are: 1) a simple setup to use in the field (easy to switch between vehicle types such as car, train, bike, etc.), 2) a low cost and 3) automatic georeferencing of 3D point clouds. Main disadvantages are: 1) results are less accurate than those from LiDAR systems, 2) heavy image processing and 3) a short acquisition distance.

  7. Superimposed Code Theorectic Analysis of DNA Codes and DNA Computing

    DTIC Science & Technology

    2010-03-01

because only certain collections (partitioned by font type) of sequences are allowed to be in each position (e.g., Arial = position 0, Comic ...rigidity of short oligos and the shape of the polar charge. Oligo movement was modeled by a Brownian-motion three-dimensional random walk. The one...temperature, kB is Boltzmann's constant, and η is the viscosity of the medium. The random walk motion is modeled by assuming the oligo is on a three-dimensional lattice and may

  8. Three-Dimensional Motion Estimation Using Shading Information in Multiple Frames

    DTIC Science & Technology

    1989-09-01

Three-Dimensional Motion Estimation Using Shading Information in Multiple Frames. Jean-Pierre Schott, MIT Artificial Intelligence...vision, 3-D structure, 3-D vision, shape from shading, multiple frames. ...motion and shading have been treated as two disjoint problems. On the one hand, researchers studying motion or structure from motion often assume

  9. Needle path planning and steering in a three-dimensional non-static environment using two-dimensional ultrasound images

    PubMed Central

    Vrooijink, Gustaaf J.; Abayazid, Momen; Patil, Sachin; Alterovitz, Ron; Misra, Sarthak

    2015-01-01

Needle insertion is commonly performed in minimally invasive medical procedures such as biopsy and radiation cancer treatment. During such procedures, accurate needle tip placement is critical for correct diagnosis or successful treatment. Accurate placement of the needle tip inside tissue is challenging, especially when the target moves and anatomical obstacles must be avoided. We develop a needle steering system capable of autonomously and accurately guiding a steerable needle using two-dimensional (2D) ultrasound images. The needle is steered to a moving target while avoiding moving obstacles in a three-dimensional (3D) non-static environment. Using a 2D ultrasound imaging device, our system accurately tracks the needle tip motion in 3D space in order to estimate the tip pose. The needle tip pose is used by a rapidly exploring random tree-based motion planner to compute a feasible needle path to the target. The motion planner is sufficiently fast that replanning can be performed repeatedly in a closed-loop manner. This enables the system to correct for perturbations in needle motion and for movement of obstacles and targets. Our needle steering experiments in a soft-tissue phantom achieve maximum targeting errors of 0.86 ± 0.35 mm (without obstacles) and 2.16 ± 0.88 mm (with a moving obstacle). PMID:26279600
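
The planner above is an RRT variant. Purely as a sketch of the underlying idea - not the paper's planner - here is a minimal 2D RRT with a circular obstacle (function name, workspace bounds, and obstacle shapes are illustrative assumptions, and segment-level collision checking is omitted for brevity):

```python
import math
import random

def rrt_plan(start, goal, obstacles, step=0.5, iters=4000, goal_tol=0.5, seed=1):
    """Grow a rapidly exploring random tree from start toward goal in a
    10 x 10 workspace; obstacles are (cx, cy, radius) circles."""
    random.seed(seed)
    nodes = [start]
    parent = {0: None}

    def collides(p):
        return any(math.dist(p, (cx, cy)) <= r for cx, cy, r in obstacles)

    for _ in range(iters):
        # Goal-biased sampling: head for the goal 20% of the time.
        sample = goal if random.random() < 0.2 else (
            random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        # Steer a fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        if math.dist(new, goal) < goal_tol:
            # Walk parents back to the root to recover the path.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

path = rrt_plan((1.0, 1.0), (9.0, 9.0), [(5.0, 5.0, 1.5)])
```

In the paper's closed-loop setting, a planner of this family is rerun repeatedly as the tip pose, target, and obstacles move.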

  10. Split ring resonator based THz-driven electron streak camera featuring femtosecond resolution

    PubMed Central

    Fabiańska, Justyna; Kassier, Günther; Feurer, Thomas

    2014-01-01

    Through combined three-dimensional electromagnetic and particle tracking simulations we demonstrate a THz driven electron streak camera featuring a temporal resolution on the order of a femtosecond. The ultrafast streaking field is generated in a resonant THz sub-wavelength antenna which is illuminated by an intense single-cycle THz pulse. Since electron bunches and THz pulses are generated with parts of the same laser system, synchronization between the two is inherently guaranteed. PMID:25010060

  11. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
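
The entry does not specify the correlation algorithm. As an illustrative toy - not the program's actual method - here is a single-scanline sum-of-squared-differences (SSD) block matcher, the kind of stereo correlation each CPU would run on its assigned subimage:

```python
def disparity_ssd(left_row, right_row, window=3, max_disp=10):
    """Per-pixel disparity along one rectified scanline via SSD block
    matching: for each left-image pixel, find the horizontal shift d
    that best matches a small window in the right image."""
    n = len(left_row)
    half = window // 2
    disp = [0] * n
    for x in range(half, n - half):
        patch = left_row[x - half:x + half + 1]
        best, best_d = float('inf'), 0
        for d in range(0, min(max_disp, x - half) + 1):
            cand = right_row[x - d - half:x - d + half + 1]
            ssd = sum((a - b) ** 2 for a, b in zip(patch, cand))
            if ssd < best:
                best, best_d = ssd, d
        disp[x] = best_d
    return disp

# A feature shifted right by 2 pixels in the left image yields disparity 2.
left_row  = [0, 0, 0, 0, 0, 9, 5, 1, 0, 0, 0, 0]
right_row = [0, 0, 0, 9, 5, 1, 0, 0, 0, 0, 0, 0]
disp = disparity_ssd(left_row, right_row, window=3, max_disp=4)
```

Disparity maps from such matching are then triangulated into the 3D terrain model; partitioning the image into subimages makes this embarrassingly parallel.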

  12. Making 3D movies of Northern Lights

    NASA Astrophysics Data System (ADS)

    Hivon, Eric; Mouette, Jean; Legault, Thierry

    2017-10-01

We describe the steps necessary to create three-dimensional (3D) movies of the Northern Lights, or Aurorae Borealis, from real-time images taken with two distant high-resolution fish-eye cameras. Astrometric reconstruction of the visible stars is used to model the optical mapping of each camera and correct for it in order to properly align the two sets of images. Examples of the resulting movies can be seen at http://www.iap.fr/aurora3d

  13. Rectification of curved document images based on single view three-dimensional reconstruction.

    PubMed

    Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang

    2016-10-01

    Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.

  14. Comparison of three-dimensional optical coherence tomography and combining a rotating Scheimpflug camera with a Placido topography system for forme fruste keratoconus diagnosis.

    PubMed

    Fukuda, Shinichi; Beheregaray, Simone; Hoshi, Sujin; Yamanari, Masahiro; Lim, Yiheng; Hiraoka, Takahiro; Yasuno, Yoshiaki; Oshika, Tetsuro

    2013-12-01

To evaluate the ability of parameters measured by three-dimensional (3D) corneal and anterior segment optical coherence tomography (CAS-OCT) and a rotating Scheimpflug camera combined with a Placido topography system (Scheimpflug camera with topography) to discriminate between normal eyes and forme fruste keratoconus. Forty-eight eyes of 48 patients with keratoconus, 25 eyes of 25 patients with forme fruste keratoconus and 128 eyes of 128 normal subjects were evaluated. Anterior and posterior keratometric parameters (steep K, flat K, average K), elevation, topographic parameters, regular and irregular astigmatism (spherical, asymmetry, regular and higher-order astigmatism) and five pachymetric parameters (minimum, minimum-median, inferior-superior, inferotemporal-superonasal, vertical thinnest location of the cornea) were measured using 3D CAS-OCT and a Scheimpflug camera with topography. The area under the receiver operating characteristic curve (AUROC) was calculated to assess discrimination ability. Compatibility and repeatability of both devices were evaluated. Posterior surface elevation showed higher AUROC values in the discrimination analysis of forme fruste keratoconus with both devices. Both instruments showed significant linear correlations (p<0.05, Pearson's correlation coefficient) and good repeatability (ICCs: 0.885-0.999) for normal eyes and forme fruste keratoconus. Posterior elevation was the best discriminating parameter for forme fruste keratoconus. Both instruments presented good correlation and repeatability for this condition.
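
The AUROC used above can be computed directly from the two groups' parameter values via the Mann-Whitney statistic: it equals the probability that a randomly chosen diseased eye scores higher than a randomly chosen normal eye. A minimal sketch (illustrative only, not the paper's statistical software):

```python
def auroc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs where the positive
    case scores higher, counting ties as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation of the two groups gives AUROC = 1.0.
area = auroc([0.9, 0.8], [0.1, 0.2])
```

An AUROC of 0.5 means a parameter discriminates no better than chance, which is why the paper ranks parameters by this value.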

  15. Three Dimensional Modeling of the Attenuation Structure in the Part of the Kumaon Himalaya, India Using Strong Motion Data

    NASA Astrophysics Data System (ADS)

    Joshi, A.; LAL, S.

    2017-12-01

The attenuation properties of the medium determine the amplitude of seismic waves at different locations during an earthquake. Attenuation can be characterized by the inverse of the parameter known as the quality factor 'Q' (Knopoff, 1964). It has been observed that the peak ground acceleration in a strong motion accelerogram is associated with the arrival of S-waves, which is controlled mainly by the shear wave attenuation characteristics of the medium. In the present work, the attenuation structure is obtained using the modified inversion algorithm of Joshi et al. (2010), which is designed to provide the three-dimensional attenuation structure of the region at different frequencies. A strong motion network was installed in the Kumaon Himalaya by the Department of Earth Sciences, Indian Institute of Technology Roorkee under a major research project sponsored by the Ministry of Earth Sciences, Government of India. In this work the detailed three-dimensional shear wave quality factor has been determined for the Kumaon Himalaya using strong motion data from this network: 164 records from 26 events recorded at 15 stations located in an area of 129 km × 62 km. The shear wave attenuation structure has been calculated by dividing the study region into 108 three-dimensional rectangular blocks of size 22 km × 11 km × 5 km. The input to the inversion algorithm is the acceleration spectrum of the S wave identified in each record: a total of 164 spectra from an equal number of accelerograms, with a frequency sampling of 0.024 Hz, yielding the three-dimensional attenuation structure at 2048 frequency values up to 50 Hz. The structure obtained at various frequencies is compared with existing geological models, and the obtained model correlates well with the geological model of the region. References: Joshi, A., Mohanty, M., Bansal, A. R., Dimri, V. P. and Chadha, R. K., 2010, Use of spectral acceleration data for determination of three dimensional attenuation structure in the Pithoragarh region of Kumaon Himalaya, J. Seismol., 14, 247-272. Knopoff, L., 1964, Q, Reviews of Geophysics, 2, 625-660.
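
For intuition on the inversion target: along a single path, spectral amplitude decays with frequency f and travel time t as A = A0 exp(-π f t / Q), so a single-path Q estimate follows from a spectral ratio. A minimal sketch (illustrative only; the paper inverts many paths jointly over the 3D block model rather than one path at a time):

```python
import math

def quality_factor(a_source, a_observed, freq_hz, travel_time_s):
    """Invert the anelastic decay law A = A0 * exp(-pi * f * t / Q)
    for the quality factor Q along a single source-station path."""
    return -math.pi * freq_hz * travel_time_s / math.log(a_observed / a_source)

# Forward-model a decay with Q = 200, then recover Q from the amplitudes.
a0, f, t = 1.0, 10.0, 20.0
a_obs = a0 * math.exp(-math.pi * f * t / 200.0)
q_est = quality_factor(a0, a_obs, f, t)
```

In the block formulation, t/Q is replaced by a sum of per-block travel times divided by per-block Q values, which is what makes the problem a (frequency-dependent) tomographic inversion.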

  16. Extracting cardiac shapes and motion of the chick embryo heart outflow tract from four-dimensional optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Yin, Xin; Liu, Aiping; Thornburg, Kent L.; Wang, Ruikang K.; Rugonyi, Sandra

    2012-09-01

Recent advances in optical coherence tomography (OCT), and the development of image reconstruction algorithms, enabled four-dimensional (4-D; three-dimensional imaging over time) imaging of the embryonic heart. To further analyze and quantify the dynamics of cardiac beating, segmentation procedures that can extract the shape of the heart and its motion are needed. Most previous studies analyzed cardiac image sequences using manually extracted shapes and measurements. However, this is time consuming and subject to inter-operator variability. Automated or semi-automated analyses of 4-D cardiac OCT images, although very desirable, are also extremely challenging. This work proposes a robust algorithm to semi-automatically detect and track cardiac tissue layers from 4-D OCT images of early (tubular) embryonic hearts. Our algorithm uses a two-dimensional (2-D) deformable double-line model (DLM) to detect target cardiac tissues. The detection algorithm uses a maximum-likelihood estimator and was successfully applied to 4-D in vivo OCT images of the heart outflow tract of day-3 chicken embryos. The extracted shapes captured the dynamics of the chick embryonic heart outflow tract wall, enabling further analysis of cardiac motion.

  17. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine the first and second positions of the corresponding features using the first and second camera images. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
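
To illustrate the kind of frame-to-frame pose estimate visual odometry produces, here is a closed-form 2D least-squares rigid motion fit between matched feature positions (a sketch only; the patented system works in 3D and fuses range data, and the function name is an assumption):

```python
import math

def rigid_motion_2d(pts_a, pts_b):
    """Closed-form least-squares rotation + translation mapping matched
    points pts_a (earlier frame) onto pts_b (later frame)."""
    n = len(pts_a)
    cax = sum(p[0] for p in pts_a) / n
    cay = sum(p[1] for p in pts_a) / n
    cbx = sum(p[0] for p in pts_b) / n
    cby = sum(p[1] for p in pts_b) / n
    s_cross = s_dot = 0.0
    for (ax, ay), (bx, by) in zip(pts_a, pts_b):
        ax -= cax; ay -= cay; bx -= cbx; by -= cby
        s_cross += ax * by - ay * bx   # sin component
        s_dot   += ax * bx + ay * by   # cos component
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated first centroid onto the second centroid.
    tx = cbx - (c * cax - s * cay)
    ty = cby - (s * cax + c * cay)
    return theta, (tx, ty)

# Features rotated 90 degrees and shifted by (1, 2) between frames.
theta, t = rigid_motion_2d([(0, 0), (1, 0), (0, 1)], [(1, 2), (1, 3), (0, 2)])
```

Dividing the recovered translation by the inter-frame time gives the VO-based velocity described in the claims.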

  18. Image intensifier-based volume tomographic angiography imaging system: system evaluation

    NASA Astrophysics Data System (ADS)

    Ning, Ruola; Wang, Xiaohui; Shen, Jianjun; Conover, David L.

    1995-05-01

An image intensifier-based rotational volume tomographic angiography imaging system has been constructed. The system consists of an x-ray tube and an image intensifier that are separately mounted on a gantry. This system uses an image intensifier coupled to a TV camera as a two-dimensional detector so that a set of two-dimensional projections can be acquired for direct three-dimensional (3D) reconstruction. This system has been evaluated with two phantoms: a vascular phantom and a monkey head cadaver. One hundred eighty projections of each phantom were acquired with the system. A set of three-dimensional images was directly reconstructed from the projection data. The experimental results indicate that good image quality can be obtained with this system.

  19. Integrated calibration of multiview phase-measuring profilometry

    NASA Astrophysics Data System (ADS)

    Lee, Yeong Beum; Kim, Min H.

    2017-11-01

Phase-measuring profilometry (PMP) measures per-pixel height information of a surface with high accuracy. Height information captured by a camera in PMP relies on its screen coordinates; therefore, a PMP measurement from one view cannot be integrated directly with measurements from different views due to the intrinsic difference in screen coordinates. In order to integrate multiple PMP scans, an auxiliary calibration of each camera's intrinsic and extrinsic properties is required, in addition to the principal PMP calibration. This is cumbersome and often requires physical constraints in the system setup, and multiview PMP is consequently rarely practiced. In this work, we present a novel multiview PMP method that yields three-dimensional global coordinates directly, so that three-dimensional measurements can be integrated easily. Our PMP calibration parameterizes the intrinsic and extrinsic properties of the configuration of both a camera and a projector simultaneously. It also does not require any geometric constraints on the setup. In addition, we propose a novel calibration target that can remain static, without requiring any mechanical operation while conducting multiview calibrations, whereas existing calibration methods require manually changing the target's position and orientation. Our results validate the accuracy of the measurements and demonstrate the advantages of our multiview PMP.
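
The per-pixel phase at the heart of PMP is typically recovered from phase-shifted fringe images. A minimal sketch of the standard four-step formula (illustrative background only; the paper's contribution is the joint camera-projector calibration, not this step):

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 0, 90, 180, 270
    degrees. With I_k = A + B*cos(phi + k*pi/2):
        I4 - I2 = 2B sin(phi),  I1 - I3 = 2B cos(phi)
    so phi = atan2(I4 - I2, I1 - I3), independent of A and B."""
    return math.atan2(i4 - i2, i1 - i3)

# Synthesize the four intensities for a known phase and recover it.
A, B, phi = 5.0, 2.0, 0.7
phase = four_step_phase(A + B * math.cos(phi),
                        A - B * math.sin(phi),
                        A - B * math.cos(phi),
                        A + B * math.sin(phi))
```

The recovered phase is wrapped to (-π, π]; unwrapping and the calibrated camera-projector geometry then convert it to global 3D coordinates.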

  20. Single-camera three-dimensional tracking of natural particulate and zooplankton

    NASA Astrophysics Data System (ADS)

    Troutman, Valerie A.; Dabiri, John O.

    2018-07-01

    We develop and characterize an image processing algorithm to adapt single-camera defocusing digital particle image velocimetry (DDPIV) for three-dimensional (3D) particle tracking velocimetry (PTV) of natural particulates, such as those present in the ocean. The conventional DDPIV technique is extended to facilitate tracking of non-uniform, non-spherical particles within a volume depth an order of magnitude larger than current single-camera applications (i.e. 10 cm  ×  10 cm  ×  24 cm depth) by a dynamic template matching method. This 2D cross-correlation method does not rely on precise determination of the centroid of the tracked objects. To accommodate the broad range of particle number densities found in natural marine environments, the performance of the measurement technique at higher particle densities has been improved by utilizing the time-history of tracked objects to inform 3D reconstruction. The developed processing algorithms were analyzed using synthetically generated images of flow induced by Hill’s spherical vortex, and the capabilities of the measurement technique were demonstrated empirically through volumetric reconstructions of the 3D trajectories of particles and highly non-spherical, 5 mm zooplankton.

  1. Freehand three-dimensional ultrasound imaging of carotid artery using motion tracking technology.

    PubMed

    Chung, Shao-Wen; Shih, Cho-Chiang; Huang, Chih-Chung

    2017-02-01

    Ultrasound imaging has been extensively used for determining the severity of carotid atherosclerotic stenosis. In particular, the morphological characterization of carotid plaques can be performed for risk stratification of patients. However, using 2D ultrasound imaging for detecting morphological changes in plaques has several limitations. Because the scan is performed on a single longitudinal cross-section, the selected 2D image cannot fully represent the morphology and volume of the plaque and vessel lumen. In addition, the precise positions of 2D ultrasound images depend strongly on the radiologist's experience, which makes it difficult to relocate the same corresponding planes with 2D B-mode images in serial long-term exams of anti-atherosclerotic therapies. This has led to the recent development of three-dimensional (3D) ultrasound imaging, which offers improved visualization and quantification of complex morphologies of carotid plaques. In the present study, a freehand 3D ultrasound imaging technique based on optical motion tracking technology is proposed. Unlike other optical tracking systems, the marker is a small rigid body that is attached to the ultrasound probe and is tracked by eight high-performance digital cameras. The probe positions in 3D space coordinates are then calibrated at spatial and temporal resolutions of 10 μm and 0.01 s, respectively. The image segmentation procedure involves Otsu's and the active contour model algorithms and accurately detects the contours of the carotid arteries. The proposed imaging technique was verified using normal artery and atherosclerotic stenosis phantoms. Human experiments involving freehand scanning of the carotid artery of a volunteer were also performed. The results indicated that compared with manual segmentation, the lowest percentage errors of the proposed segmentation procedure were 7.8% and 9.1% for the external and internal carotid arteries, respectively. Finally, the effect of hand shaking was compensated for using the optical tracking system when reconstructing the 3D image. Copyright © 2016 Elsevier B.V. All rights reserved.
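    Otsu's method, the first stage of the segmentation procedure in this record, can be sketched as follows (a minimal NumPy illustration; the active contour stage is omitted):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance.

    gray: uint8 grayscale image array. Returns the integer threshold t such
    that pixels <= t form the background class.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))     # cumulative intensity mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0     # empty classes carry no variance
    return int(np.argmax(sigma_b))
```

    The threshold is chosen purely from the histogram, so it adapts automatically to the contrast of each ultrasound frame.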

  2. Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments

    PubMed Central

    Dittrich, Sandra; Noesselt, Tömme

    2018-01-01

    Predicting motion is essential for many everyday life activities, e.g., in road traffic. Previous studies on motion prediction failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white noise (congruent or non-congruent to visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when the ball would reach a specified position at two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during detection task. In contrast, for a more complex extrapolation task group mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited from either near or far extrapolation distances but were impaired for the other one. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. 
Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate the ability to predict audiovisual motion and need to be considered in future research. PMID:29618999

  3. Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments.

    PubMed

    Dittrich, Sandra; Noesselt, Tömme

    2018-01-01

    Predicting motion is essential for many everyday life activities, e.g., in road traffic. Previous studies on motion prediction failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white noise (congruent or non-congruent to visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when the ball would reach a specified position at two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during detection task. In contrast, for a more complex extrapolation task group mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited from either near or far extrapolation distances but were impaired for the other one. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. 
Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate the ability to predict audiovisual motion and need to be considered in future research.

  4. Hydrodynamic Characteristics and Strength Analysis of a Novel Dot-matrix Oscillating Wave Energy Converter

    NASA Astrophysics Data System (ADS)

    Shao, Meng; Xiao, Chengsi; Sun, Jinwei; Shao, Zhuxiao; Zheng, Qiuhong

    2017-12-01

    The paper analyzes the hydrodynamic characteristics and strength of a novel dot-matrix oscillating wave energy converter, in line with current research trends: high power, high efficiency, high reliability, and low cost. Based on three-dimensional potential flow theory, the paper establishes the motion control equations of the wave energy converter unit and calculates wave loads and motions. On this basis, a three-dimensional finite element model of the device is built to check its strength. The analysis confirms that the WEC is feasible, and the results can serve as a reference for the exploration and utilization of wave energy.

  5. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.

  6. Validating two-dimensional leadership models on three-dimensionally structured fish schools

    PubMed Central

    Nagy, Máté; Holbrook, Robert I.; Biro, Dora; Burt de Perera, Theresa

    2017-01-01

    Identifying leader–follower interactions is crucial for understanding how a group decides where or when to move, and how this information is transferred between members. Although many animal groups have a three-dimensional structure, previous studies investigating leader–follower interactions have often ignored vertical information. This raises the question of whether commonly used two-dimensional leader–follower analyses can be used justifiably on groups that interact in three dimensions. To address this, we quantified the individual movements of banded tetra fish (Astyanax mexicanus) within shoals by computing the three-dimensional trajectories of all individuals using a stereo-camera technique. We used these data firstly to identify and compare leader–follower interactions in two and three dimensions, and secondly to analyse leadership with respect to an individual's spatial position in three dimensions. We show that for 95% of all pairwise interactions leadership identified through two-dimensional analysis matches that identified through three-dimensional analysis, and we reveal that fish attend to the same shoalmates for vertical information as they do for horizontal information. Our results therefore highlight that three-dimensional analyses are not always required to identify leader–follower relationships in species that move freely in three dimensions. We discuss our results in terms of the importance of taking species' sensory capacities into account when studying interaction networks within groups. PMID:28280582

  7. Evaluation of Event-Based Algorithms for Optical Flow with Ground-Truth from Inertial Measurement Sensor

    PubMed Central

    Rueckauer, Bodo; Delbruck, Tobi

    2016-01-01

    In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera. PMID:27199639
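    The Savitzky-Golay filtering credited in this record with improving derivative accuracy can be illustrated with a plain-NumPy sliding polynomial fit (a generic sketch on synthetic data, not the authors' event-based pipeline):

```python
import numpy as np

def savgol_derivative(y, t, half_window=7, polyorder=3):
    """Savitzky-Golay style derivative: fit a local polynomial in a sliding
    window and evaluate its first derivative at the window centre.

    Windows are truncated near the boundaries. Much less noisy than raw
    finite differences, because the fit averages over the whole window.
    """
    n = len(y)
    dy = np.empty(n)
    for i in range(n):
        lo = max(0, i - half_window)
        hi = min(n, i + half_window + 1)
        # Centre the abscissa at t[i] so the derivative at 0 is well posed.
        coeffs = np.polyfit(t[lo:hi] - t[i], y[lo:hi], polyorder)
        dy[i] = np.polyval(np.polyder(coeffs), 0.0)
    return dy
```

    For uniformly sampled data, `scipy.signal.savgol_filter` computes the same result with fixed convolution coefficients and is much faster.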

  8. Direct Measurement of Lung Motion Using Hyperpolarized Helium-3 MR Tagging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai Jing; Miller, G. Wilson; Altes, Talissa A.

    2007-07-01

    Purpose: To measure lung motion between end-inhalation and end-exhalation using a hyperpolarized helium-3 (HP {sup 3}He) magnetic resonance (MR) tagging technique. Methods and Materials: Three healthy volunteers underwent MR tagging studies after inhalation of 1 L HP {sup 3}He gas diluted with nitrogen. Multiple-slice two-dimensional and volumetric three-dimensional MR tagged images of the lungs were obtained at end-inhalation and end-exhalation, and displacement vector maps were computed. Results: The grids of tag lines in the HP {sup 3}He MR images were well defined at end-inhalation and remained evident at end-exhalation. Displacement vector maps clearly demonstrated the regional lung motion and deformation that occurred during exhalation. Discontinuity and differences in motion pattern between two adjacent lung lobes were readily resolved. Conclusions: Hyperpolarized helium-3 MR tagging technique can be used for direct in vivo measurement of respiratory lung motion on a regional basis. This technique may lend new insights into regional pulmonary biomechanics and thus provide valuable information for the deformable registration of the lung.

  9. Three-Dimensional ISAR Imaging Method for High-Speed Targets in Short-Range Using Impulse Radar Based on SIMO Array.

    PubMed

    Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei

    2016-03-11

    This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets in short-range using an impulse radar. According to the requirements for high-speed target measurement in short-range, this paper establishes the single-input multiple-output (SIMO) antenna array, and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry relationship of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile's rotation angle and the rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm.

  10. A low cost real-time motion tracking approach using webcam technology.

    PubMed

    Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh

    2015-02-05

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. A low cost real-time motion tracking approach using webcam technology

    PubMed Central

    Krishnan, Chandramouli; Washabaugh, Edward P.; Seetharaman, Yogesh

    2014-01-01

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject’s limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. PMID:25555306

  12. Plantar-flexion of the ankle joint complex in terminal stance is initiated by subtalar plantar-flexion: A bi-planar fluoroscopy study.

    PubMed

    Koo, Seungbum; Lee, Kyoung Min; Cha, Young Joo

    2015-10-01

    Gross motion of the ankle joint complex (AJC) is a summation of the ankle and subtalar joints. Although AJC kinematics have been widely used to evaluate the function of the AJC, the coordinated movements of the ankle and subtalar joints are not well understood. The purpose of this study was to accurately quantify the individual kinematics of the ankle and subtalar joints in the intact foot during ground walking by using a bi-planar fluoroscopic system. Bi-planar fluoroscopic images of the foot and ankle during walking and standing were acquired from 10 healthy subjects. The three-dimensional movements of the tibia, talus, and calcaneus were calculated with a three-dimensional/two-dimensional registration method. The skeletal kinematics were quantified from 9% to 86% of the full stance phase because of the limited camera speed of the X-ray system. At the beginning of terminal stance, plantar-flexion of the AJC was initiated in the subtalar joint on average at 75% ranging from 62% to 76% of the stance phase, and plantar-flexion of the ankle joint did not start until 86% of the stance phase. The earlier change to plantar-flexion in the AJC than the ankle joint due to the early plantar-flexion in the subtalar joint was observed in 8 of the 10 subjects. This phenomenon could be explained by the absence of direct muscle insertion on the talus. Preceding subtalar plantar-flexion could contribute to efficient and stable ankle plantar-flexion by locking the midtarsal joint, but this explanation needs further investigation. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is a necessary flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply the full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in the complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured with large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between images captured from cameras and images projected by the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value that causes the correlation coefficient to reach its maximum. The numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of a camera array of 16 cameras was used to reconstruct the four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct the 3D particle fields.
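    The cubic-fit threshold selection described in this record can be sketched as follows (a simplified illustration of the selection step only; computing the camera-image/projection correlations themselves is omitted):

```python
import numpy as np

def optimal_threshold(thresholds, correlations):
    """Pick the reconstruction threshold maximizing the correlation between
    captured camera images and images projected from the reconstructed field.

    Fits a cubic to the sampled (threshold, correlation) pairs and returns
    the threshold where the fitted curve peaks within the sampled range.
    """
    coeffs = np.polyfit(thresholds, correlations, 3)
    fine = np.linspace(thresholds.min(), thresholds.max(), 1001)
    fitted = np.polyval(coeffs, fine)
    return fine[np.argmax(fitted)]
```

    Restricting the search to the sampled range guards against the cubic diverging outside the measured interval.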

  14. 3D kinematic measurement of human movement using low cost fish-eye cameras

    NASA Astrophysics Data System (ADS)

    Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.

    2017-02-01

    3D motion capture is difficult when the capturing is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach that uses two ordinary cameras arranged in a special stereoscopic configuration, together with passive markers on a subject's body, to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers between the two views in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they are poorly approximated by a pinhole camera model, which makes it difficult to estimate depth information. In this work, we therefore use a calibration method specific to fisheye lenses to recover the 3D coordinates. The accuracy of the 3D coordinate reconstruction is evaluated by comparison with results from a commercially available Vicon motion capture system.
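    Once both cameras are calibrated and corresponding markers matched, 3D coordinates follow from two-view triangulation. A linear (DLT) sketch assuming pinhole projection matrices, i.e. after lens distortion has been corrected (the paper's fisheye calibration itself is not reproduced):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the same marker in each view.
    Each view contributes two linear constraints; the 3D point is the
    null vector of the stacked 4x4 system, found via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # dehomogenize
```

    With noisy detections the SVD gives the algebraic least-squares point, which is usually refined by minimizing reprojection error.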

  15. On a modified form of navier-stokes equations for three-dimensional flows.

    PubMed

    Venetis, J

    2015-01-01

    A rephrased form of the Navier-Stokes equations is derived for incompressible, three-dimensional, unsteady flows according to the Eulerian formalism for fluid motion. In particular, we propose a geometrical method for the elimination of the nonlinear terms of these fundamental equations, which are expressed in true vector form, and finally arrive at an equivalent system of three semilinear first-order PDEs that hold for a three-dimensional rectangular Cartesian coordinate system. Next, we present the related variational formulation of these modified equations, as well as a general type of weak solutions, which mainly concern Sobolev spaces.
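    For reference, the standard incompressible Navier-Stokes system in Eulerian form, whose nonlinear convective term $(\mathbf{u}\cdot\nabla)\mathbf{u}$ is the target of the elimination described in this record (this is the textbook form, not the authors' rephrased system):

```latex
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  &= -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \\
\nabla\cdot\mathbf{u} &= 0,
\end{aligned}
```

    where $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the constant density, $\nu$ the kinematic viscosity, and $\mathbf{f}$ a body force.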

  16. On a Modified Form of Navier-Stokes Equations for Three-Dimensional Flows

    PubMed Central

    Venetis, J.

    2015-01-01

    A rephrased form of the Navier-Stokes equations is derived for incompressible, three-dimensional, unsteady flows according to the Eulerian formalism for fluid motion. In particular, we propose a geometrical method for the elimination of the nonlinear terms of these fundamental equations, which are expressed in true vector form, and finally arrive at an equivalent system of three semilinear first-order PDEs that hold for a three-dimensional rectangular Cartesian coordinate system. Next, we present the related variational formulation of these modified equations, as well as a general type of weak solutions, which mainly concern Sobolev spaces. PMID:25918743

  17. Investigation of optimal method for inducing harmonic motion in tissue using a linear ultrasound phased array--a simulation study.

    PubMed

    Heikkilä, Janne; Hynynen, Kullervo

    2006-04-01

    Many noninvasive ultrasound techniques have been developed to explore mechanical properties of soft tissues. One of these methods, Localized Harmonic Motion Imaging (LHMI), has been proposed for ultrasound surgery monitoring. In LHMI, dynamic ultrasound radiation-force stimulation induces displacements in a target that can be measured using pulse-echo imaging and used to estimate the elastic properties of the target. In this initial simulation study, the use of a one-dimensional phased array is explored for inducing the tissue motion. The study compares three different dual-frequency and amplitude-modulated single-frequency methods for inducing tissue motion. Simulations were computed in a homogeneous soft-tissue volume. The Rayleigh integral was used in the simulations of the ultrasound fields, and the tissue displacements were computed using a finite-element method (FEM). The simulations showed that amplitude-modulated sonication using a single frequency produced the largest vibration amplitude of the target tissue. These simulations demonstrate that the properties of the tissue motion are highly dependent on the sonication method and that it is important to consider the full three-dimensional distribution of the ultrasound field for controlling the induction of tissue motion.

  18. Improved head-controlled TV system produces high-quality remote image

    NASA Technical Reports Server (NTRS)

    Goertz, R.; Lindberg, J.; Mingesz, D.; Potts, C.

    1967-01-01

    Manipulator operator uses an improved-resolution tv camera/monitor positioning system to view the remote handling and processing of reactive, flammable, explosive, or contaminated materials. The pan and tilt motions of the camera and monitor are slaved to follow the corresponding motions of the operator's head.

  19. Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs

    PubMed Central

    Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.

    2009-01-01

    Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962

  20. TH-AB-202-11: Spatial and Rotational Quality Assurance of 6DOF Patient Tracking Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belcher, AH; Liu, X; Grelewicz, Z

    2016-06-15

    Purpose: External tracking systems used for patient positioning and motion monitoring during radiotherapy are now capable of detecting both translations and rotations (6DOF). In this work, we develop a novel technique to evaluate the 6DOF performance of external motion tracking systems. We apply this methodology to an infrared (IR) marker tracking system and two 3D optical surface mapping systems in a common tumor 6DOF workspace. Methods: An in-house designed and built 6DOF parallel kinematics robotic motion phantom was used to follow input trajectories with sub-millimeter and sub-degree accuracy. The 6DOF positions of the robotic system were then tracked and recorded independently by three optical camera systems. A calibration methodology which associates the motion phantom and camera coordinate frames was first employed, followed by a comprehensive 6DOF trajectory evaluation, which spanned a full range of positions and orientations in a 20×20×16 mm and 5×5×5 degree workspace. The intended input motions were compared to the calibrated 6DOF measured points. Results: The technique found the accuracy of the IR marker tracking system to have maximal root mean square error (RMSE) values of 0.25 mm translationally and 0.09 degrees rotationally, in any one axis, comparing intended 6DOF positions to positions measured by the IR camera. The 6DOF RMSE discrepancy for the first 3D optical surface tracking unit yielded maximal values of 0.60 mm and 0.11 degrees over the same 6DOF volume. An earlier generation 3D optical surface tracker was observed to have worse tracking capabilities than both the IR camera unit and the newer 3D surface tracking system, with maximal RMSE of 0.74 mm and 0.28 degrees within the same 6DOF evaluation space. Conclusion: The proposed technique was effective at evaluating the performance of 6DOF patient tracking systems. All systems examined exhibited tracking capabilities at the sub-millimeter and sub-degree level within a 6DOF workspace.
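    The per-axis RMSE comparison used in this evaluation can be sketched as follows (a minimal illustration of the metric only, not the calibration methodology):

```python
import numpy as np

def per_axis_rmse(intended, measured):
    """Root mean square error per axis between intended and measured poses.

    intended, measured: arrays of shape (n_samples, 6) holding the 6DOF pose
    (x, y, z, roll, pitch, yaw) for each trajectory point.
    Returns a length-6 vector: translational RMSE in the units of x/y/z and
    rotational RMSE in the units of roll/pitch/yaw.
    """
    err = np.asarray(measured) - np.asarray(intended)
    return np.sqrt((err ** 2).mean(axis=0))
```

    Reporting the maximum over the three translational and three rotational entries gives the single "maximal RMSE in any one axis" figures quoted above.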

  1. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field and generating three dimensional model of the objects is one the most discussed issues in Photogrammetry and Computer Vision that can be accomplished using range-based or image-based methods. Very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners has introduced them as reliable tools in the industry. Image-based 3D digitization methodologies offer the option of reconstructing an object by a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach, consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can be easily used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate three dimensional model. These algorithms are provided in the form of commercial software, open source and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Due to availability of mobile sensors to the public, popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensors plays an effective role in evaluating and finding the optimized method to generate three-dimensional models. Lots of research has been accomplished to identify a suitable software and algorithm to achieve an accurate and complete model, however little attention is paid to the type of sensors used and its effects on the quality of the final model. The purpose of this paper is deliberation and the introduction of an appropriate combination of a sensor and software to provide a complete model with the highest accuracy. 
To do this, the software packages used in previous studies were compared and the most popular ones in each category were selected (Arc 3D, Visual SfM, Sure, Agisoft). Four small objects with distinct geometric properties and particular complexities were chosen, and accurate models of them, serving as reliable ground truth, were created using an ATOS Compact Scan 2M 3D scanner. Images were taken with a Fujifilm Real 3D stereo camera, an Apple iPhone 5, and a Nikon D3200 professional camera, and three-dimensional models of the objects were obtained with each software package. Finally, a comprehensive comparison of the detailed results on the data set showed that the best combination of software and sensor for generating three-dimensional models is directly related to the object's shape as well as to the expected accuracy of the final model. Generally, better quantitative and qualitative results were obtained with the Nikon D3200 professional camera, while the Fujifilm Real 3D stereo camera and the Apple iPhone 5 were second and third, respectively, in this comparison. On the other hand, the three packages Visual SfM, Sure, and Agisoft competed closely for the most accurate and complete model of the objects, and the best software differed according to the geometric properties of the object.
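The accuracy evaluation described above, comparing a photogrammetric point cloud against a scanner-derived reference, can be sketched as follows. The nearest-neighbour RMSE metric and the toy clouds below are illustrative assumptions, not the study's actual evaluation pipeline:

```python
import numpy as np

def cloud_to_reference_rmse(cloud, reference):
    """RMSE of nearest-neighbour distances from a reconstructed
    point cloud to a reference (e.g. structured-light) scan."""
    # brute-force nearest neighbour; fine for small illustrative clouds
    d2 = ((cloud[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    return float(np.sqrt(d2.min(axis=1).mean()))

# toy example: a noisy copy of a flat reference grid
rng = np.random.default_rng(0)
ref = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2)
ref = np.hstack([ref, np.zeros((len(ref), 1))])          # flat 5x5 grid, z = 0
rec = ref + rng.normal(0.0, 0.01, ref.shape)             # "reconstructed" cloud
err = cloud_to_reference_rmse(rec, ref)
```

In practice such a comparison would first align the two clouds (e.g. by ICP) and use an accelerated nearest-neighbour search rather than the brute-force distance matrix used here for clarity.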

  2. Superimposed Code Theoretic Analysis of Deoxyribonucleic Acid (DNA) Codes and DNA Computing

    DTIC Science & Technology

    2010-01-01

    partitioned by font type) of sequences are allowed to be in each position (e.g., Arial = position 0, Comic = position 1, etc. ) and within each collection...movement was modeled by a Brownian motion 3 dimensional random walk. The one dimensional diffusion coefficient D for the ellipsoid shape with 3...temperature, kB is Boltzmann’s constant, and η is the viscosity of the medium. The random walk motion is modeled by assuming the oligo is on a three

  3. The three-dimensional wake of a cylinder undergoing a combination of translational and rotational oscillation in a quiescent fluid

    NASA Astrophysics Data System (ADS)

    Nazarinia, M.; Lo Jacono, D.; Thompson, M. C.; Sheridan, J.

    2009-06-01

Previous two-dimensional numerical studies have shown that a circular cylinder undergoing both oscillatory rotational and translational motions can generate thrust so that it will actually self-propel through a stationary fluid. Although a cylinder undergoing a single oscillation has been thoroughly studied, the combination of the two oscillations has not received much attention until now. The research reported here extends the numerical study of Blackburn et al. [Phys. Fluids 11, L4 (1999)] both experimentally and numerically, recording detailed vorticity fields in the wake and using these to elucidate the underlying physics, examining the three-dimensional wake development experimentally, and determining the three-dimensional stability of the wake through Floquet stability analysis. Experiments conducted in the laboratory are presented for a given parameter range, confirming the early results from Blackburn et al. [Phys. Fluids 11, L4 (1999)]. In particular, we confirm the thrust generation ability of a circular cylinder undergoing combined oscillatory motions. Importantly, we also find that the wake undergoes three-dimensional transition at low Reynolds numbers (Re≃100) to an instability mode with a wavelength of about two cylinder diameters. The stability analysis indicates that the base flow is also unstable to another mode at slightly higher Reynolds numbers, broadly analogous to the three-dimensional wake transition mode for a circular cylinder, despite the distinct differences in wake/mode topology. The stability of these flows was confirmed by experimental measurements.

  4. Accuracy of an optical active-marker system to track the relative motion of rigid bodies.

    PubMed

    Maletsky, Lorin P; Sun, Junyi; Morton, Nicholas A

    2007-01-01

The measurement of relative motion between two moving bones is commonly accomplished for in vitro studies by attaching to each bone a series of either passive or active markers in a fixed orientation to create a rigid body (RB). This work determined the accuracy of motion between two RBs using an Optotrak optical motion-capture system with active infrared LEDs. The stationary noise in the system was quantified by recording the apparent change in position with the RBs stationary, and was found to be 0.04 degrees and 0.03 mm. Incremental 10-degree rotations and 10-mm translations were made using a tool more precise than the Optotrak. Increasing camera distance decreased the precision (increased the range of values observed for a set motion) and increased the bias in rotation (the difference between the measured and actual rotation). The positions of the RBs relative to the camera-viewing plane had a minimal effect on the kinematics; therefore, for a given distance within the volume less than or close to the precalibrated camera distance, any motion was similarly reliable. For a typical operating set-up, a 10-degree rotation showed a bias of 0.05 degrees and a 95% repeatability limit of 0.67 degrees. A 10-mm translation showed a bias of 0.03 mm and a 95% repeatability limit of 0.29 mm. To achieve a high level of accuracy it is important to keep the distance between the cameras and the markers near the distance at which the cameras were focused during calibration.
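The bias and 95% repeatability figures quoted above can be computed from repeated measurements of a known motion roughly as follows; the input values are hypothetical, not the study's data:

```python
import numpy as np

def bias_and_repeatability(measured, nominal):
    """Bias (mean error) and 95% repeatability limit (taken here as
    1.96 * SD of the error) for repeated measurements of a known
    nominal motion."""
    err = np.asarray(measured, dtype=float) - nominal
    return float(err.mean()), float(1.96 * err.std(ddof=1))

# hypothetical repeated 10-degree rotations measured by a tracker
rotations = [10.03, 9.97, 10.06, 9.95, 10.02, 10.01, 9.98, 10.04]
bias, rep95 = bias_and_repeatability(rotations, 10.0)
```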

  5. Measurement of impinging butane flame using combined optical system with digital speckle tomography

    NASA Astrophysics Data System (ADS)

    Ko, Han Seo; Ahn, Seong Soo; Kim, Hyun Jung

    2011-11-01

Three-dimensional density distributions of an impinging and eccentric flame were measured experimentally using a combined optical system with digital speckle tomography. In addition, a three-dimensional temperature distribution of the flame was reconstructed from the ideal gas equation based on the reconstructed density data. The flame was formed by the ignition of premixed butane/air from air holes and impinged upward against a plate located 24 mm from the burner nozzle. In order to verify the reconstruction process for the experimental measurements, numerically synthesized phantoms of impinging and eccentric flames were derived and reconstructed using a developed three-dimensional multiplicative algebraic reconstruction technique (MART). A new scanning technique was developed for the accurate analysis of the speckle displacements necessary for investigating the wall-jet regions of the impinging flame, in which a sharp variation of the flow direction and pressure gradient occurs. The temperatures reconstructed by digital speckle tomography were applied as the boundary condition for numerical analysis of the flame-impinged plate. The numerically calculated temperature distribution of the upper side of the flame-impinged plate was then compared to temperature data taken by an infrared camera. The absolute average uncertainty between the numerical and infrared camera data was 3.7%.
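The density-to-temperature step relies on the ideal gas law at (assumed) constant pressure, T = pM/(ρR). A minimal sketch with illustrative values rather than the paper's data:

```python
import numpy as np

# Temperature field from a reconstructed density field via the ideal gas
# law, T = p M / (rho R); the pressure and gas values below are
# illustrative assumptions, not the paper's data.
R = 8.314          # J/(mol K), universal gas constant
M = 0.0289         # kg/mol, approximate molar mass of air
p = 101325.0       # Pa, ambient pressure

def temperature_from_density(rho):
    return p * M / (np.asarray(rho) * R)

rho = np.array([1.20, 0.60, 0.30])          # kg/m^3, denser -> cooler
T = temperature_from_density(rho)
```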

  6. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

Over the years, extensive studies have been carried out to apply coherent-optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. Recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, virtual lighting, and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods for processing digital holograms for Internet transmission, together with results.
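Phase-shift interferometry of the kind mentioned above is commonly implemented as a four-step algorithm, recovering the wrapped phase from four interferograms taken with quarter-wave phase shifts. A minimal sketch under that assumption (the abstract does not specify which variant is used):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase from four interferograms taken with
    phase shifts of 0, pi/2, pi and 3*pi/2 (standard four-step formula:
    phi = atan2(I4 - I2, I1 - I3))."""
    return np.arctan2(i4 - i2, i1 - i3)

# synthetic fringes with a known linear phase ramp
x = np.linspace(0.0, 2.0, 64)
phi = 1.3 * x                                # "true" object phase
A, B = 2.0, 1.0                              # background and modulation
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_rec = four_step_phase(*frames)
```

Because the true phase here stays inside (-pi, pi], no unwrapping step is needed; real holographic data would require one.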

  7. Digital X-ray camera for quality evaluation three-dimensional topographic reconstruction of single crystals of biological macromolecules

    NASA Technical Reports Server (NTRS)

    Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)

    2008-01-01

    The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.

  8. Modelling, Visibility Testing and Projection of an Orthogonal Three Dimensional World in Support of a Single Camera Vision System

    DTIC Science & Technology

    1992-03-01

construction were completed and data, from blueprints and physical measurements, was entered concurrent with the coding of routines for data retrieval. While...desirable for that view to accurately reflect what a person (or camera) would see if they were to stand at the same point in the physical world. To... physical dimensions. A parallel projection does not perform this scaling and is therefore not suitable to our application. B. GENERAL PERSPECTIVE

  9. Feasibility and accuracy assessment of light field (plenoptic) PIV flow-measurement technique

    NASA Astrophysics Data System (ADS)

    Shekhar, Chandra; Ogawa, Syo; Kawaguchi, Tatsuya

A light field camera enables measurement of all three velocity components of a flow field inside a three-dimensional volume when implemented in a PIV measurement. Because only one camera is used, the measurement procedure is greatly simplified, and measurement of flows with limited visual access also becomes possible. Owing to these advantages, light field cameras and their use in PIV measurements are actively studied. The overall procedure for obtaining an instantaneous flow field consists of imaging a seeded flow at two closely separated time instants, reconstructing the two volumetric particle distributions using algorithms such as MART, and then obtaining the flow velocity through cross-correlation. In this study, we examined the effects of various configuration parameters of a light field camera on the in-plane and depth resolutions, obtained near-optimal parameters for a given case, and then used them to simulate a PIV measurement scenario in order to assess the reconstruction accuracy.
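The cross-correlation step at the end of the pipeline can be sketched for a single interrogation-window pair. This toy FFT-based estimator recovers only integer-pixel shifts, whereas practical PIV codes add sub-pixel peak fitting:

```python
import numpy as np

def displacement_by_correlation(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows,
    taken as the peak of their circular cross-correlation (FFT-based)."""
    corr = np.fft.ifft2(np.fft.fft2(win_a).conj() * np.fft.fft2(win_b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = corr.shape
    # map the peak location to a signed shift
    return (dy if dy <= n // 2 else dy - n,
            dx if dx <= m // 2 else dx - m)

# toy "particle image" shifted by a known (2, 3) pixels
rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = np.roll(a, (2, 3), axis=(0, 1))
shift = displacement_by_correlation(a, b)
```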

  10. Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.

    PubMed

    Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond

    2018-04-01

We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences, and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with two other calibrated cameras, and a new inlier-selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.
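The idea of extending RANSAC to sample across multiple datasets can be illustrated with a deliberately simple model (a shared scalar offset rather than the paper's 11-point camera solution); what carries over is the loop structure of drawing a minimal sample and then scoring inliers over all sets:

```python
import numpy as np

def ransac_multi(datasets, n_iter=200, tol=0.05, seed=0):
    """Toy RANSAC that, echoing the idea of sampling across several
    pairwise-correspondence sets, draws its minimal sample from all
    datasets pooled together and scores inliers over all of them.
    The model is deliberately simple: a shared offset y = x + b."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack(datasets)
    best_b, best_inliers = 0.0, -1
    for _ in range(n_iter):
        x, y = pooled[rng.integers(len(pooled))]
        b = y - x                                  # minimal sample: 1 point
        inliers = int((np.abs(pooled[:, 1] - pooled[:, 0] - b) < tol).sum())
        if inliers > best_inliers:
            best_b, best_inliers = b, inliers
    return best_b, best_inliers

# two clean correspondence sets plus one set of gross outliers
rng = np.random.default_rng(2)
x1, x2 = rng.random(30), rng.random(30)
set1 = np.column_stack([x1, x1 + 0.7 + rng.normal(0, 0.01, 30)])
set2 = np.column_stack([x2, x2 + 0.7 + rng.normal(0, 0.01, 30)])
outliers = rng.random((10, 2)) * 5.0
b_hat, n_in = ransac_multi([set1, set2, outliers])
```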

  11. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion-correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to wear markers in a known pattern on a rigid tool attached to the head, which are then tracked by expensive and bulky motion-tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion-tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion-tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating unnecessary subdivision of frames.
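Tracking the 6-degree-of-freedom rigid transformation of the head amounts to estimating a rotation and translation between point sets in each frame. A standard way to do this is the Kabsch (SVD) method, sketched below on synthetic data; the abstract does not state that this exact estimator is used:

```python
import numpy as np

def rigid_transform(p, q):
    """Least-squares rigid transform (R, t) with q ~= R @ p + t,
    via the Kabsch/SVD method, the kind of 6-DOF pose estimate a
    head-tracking system must produce each frame."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)                    # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = qc - r @ pc
    return r, t

# synthetic head "surface" points and a known small pose change
rng = np.random.default_rng(3)
p = rng.random((50, 3))
angle = np.deg2rad(10.0)
r_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.01, -0.02, 0.03])
q = p @ r_true.T + t_true
r_est, t_est = rigid_transform(p, q)
```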

  12. Three-dimensional microbubble streaming flows

    NASA Astrophysics Data System (ADS)

    Rallabandi, Bhargav; Marin, Alvaro; Rossi, Massimiliano; Kaehler, Christian; Hilgenfeldt, Sascha

    2014-11-01

    Streaming due to acoustically excited bubbles has been used successfully for applications such as size-sorting, trapping and focusing of particles, as well as fluid mixing. Many of these applications involve the precise control of particle trajectories, typically achieved using cylindrical bubbles, which establish planar flows. Using astigmatic particle tracking velocimetry (APTV), we show that, while this two-dimensional picture is a useful description of the flow over short times, a systematic three-dimensional flow structure is evident over long time scales. We demonstrate that this long-time three-dimensional fluid motion can be understood through asymptotic theory, superimposing secondary axial flows (induced by boundary conditions at the device walls) onto the two-dimensional description. This leads to a general framework that describes three-dimensional flows in confined microstreaming systems, guiding the design of applications that profit from minimizing or maximizing these effects.

  13. Differences in kinematic control of ankle joint motions in people with chronic ankle instability.

    PubMed

    Kipp, Kristof; Palmieri-Smith, Riann M

    2013-06-01

    People with chronic ankle instability display different ankle joint motions compared to healthy people. The purpose of this study was to investigate the strategies used to control ankle joint motions between a group of people with chronic ankle instability and a group of healthy, matched controls. Kinematic data were collected from 11 people with chronic ankle instability and 11 matched control subjects as they performed a single-leg land-and-cut maneuver. Three-dimensional ankle joint angles were calculated from 100 ms before, to 200 ms after landing. Kinematic control of the three rotational ankle joint degrees of freedom was investigated by simultaneously examining the three-dimensional co-variation of plantarflexion/dorsiflexion, toe-in/toe-out rotation, and inversion/eversion motions with principal component analysis. Group differences in the variance proportions of the first two principal components indicated that the angular co-variation between ankle joint motions was more linear in the control group, but more planar in the chronic ankle instability group. Frontal and transverse plane motions, in particular, contributed to the group differences in the linearity and planarity of angular co-variation. People with chronic ankle instability use a different kinematic control strategy to coordinate ankle joint motions during a single-leg landing task. Compared to the healthy group, the chronic ankle instability group's control strategy appeared to be more complex and involved joint-specific contributions that would tend to predispose this group to recurring episodes of instability. Copyright © 2013 Elsevier Ltd. All rights reserved.
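The principal-component measure of co-variation used above can be sketched as follows; the synthetic angles are illustrative, with a first variance proportion near 1 indicating near-linear co-variation of the three ankle motions:

```python
import numpy as np

def variance_proportions(angles):
    """Proportion of variance along each principal component of a
    (samples x 3) set of ankle angles. A first proportion near 1 means
    the three motions co-vary almost linearly; a sizeable second
    component indicates a more planar co-variation."""
    x = angles - angles.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending eigenvalues
    return eig / eig.sum()

# synthetic joint angles driven by one shared signal (near-linear case)
rng = np.random.default_rng(4)
s = rng.normal(0, 5.0, 300)
angles = np.column_stack([s, 0.5 * s, -0.8 * s]) + rng.normal(0, 0.2, (300, 3))
props = variance_proportions(angles)
```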

  14. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

In conventional tool-positioning techniques, sensors embedded in the motion stages provide accurate tool-position information. In this paper, a machine-vision-based system and an image-processing technique are described for measuring the motion of a lathe tool from two-dimensional sequential images captured using a charge-coupled-device camera with a resolution of 250 microns. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of the errors in lathe-tool movement due to the machine-vision system, calibration, environmental factors, etc. was carried out using two soft-computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show the better capability of AIS over PSO.
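A minimal particle swarm optimizer of the kind compared in the paper can be sketched as below; the error model (a scale-and-offset calibration of vision-measured tool travel) and all parameter values are illustrative assumptions:

```python
import numpy as np

def pso(f, dim, n=20, iters=100, seed=5):
    """Minimal particle swarm optimizer: inertia plus cognitive and
    social pulls toward personal and global bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, f(g)

# hypothetical error model: observed travel differs from commanded
# travel by a scale and offset; PSO searches for the best-fitting pair
true_scale, true_offset = 0.98, 0.12
travels = np.array([5.0, 10.0, 15.0, 20.0])
observed = true_scale * travels + true_offset

def calib_error(params):
    scale, offset = params
    return float(((scale * travels + offset - observed) ** 2).sum())

best, err = pso(calib_error, 2)
```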

  15. Biomechanics of the Treadmill Locomotion on the International Space Station

    NASA Technical Reports Server (NTRS)

    DeWitt, John; Cromwell, R. L.; Ploutz-Snyder, L. L.

    2014-01-01

    Exercise prescriptions completed by International Space Station (ISS) crewmembers are typically based upon evidence obtained during ground-based investigations, with the assumption that the results of long-term training in weightlessness will be similar to that attained in normal gravity. Coupled with this supposition are the assumptions that exercise motions and external loading are also similar between gravitational environments. Normal control of locomotion is dependent upon learning patterns of muscular activation and requires continual monitoring of internal and external sensory input [1]. Internal sensory input includes signals that may be dependent on or independent of gravity. Bernstein hypothesized that movement strategy planning and execution must include the consideration of segmental weights and inertia [2]. Studies of arm movements in microgravity showed that individuals tend to make errors but that compensation strategies result in adaptations, suggesting that control mechanisms must include peripheral information [3-5]. To date, however, there have been no studies examining a gross motor activity such as running in weightlessness other than using microgravity analogs [6-8]. The objective of this evaluation was to collect biomechanical data from crewmembers during treadmill exercise before and during flight. The goal was to determine locomotive biomechanics similarities and differences between normal and weightless environments. The data will be used to optimize future exercise prescriptions. This project addresses the Critical Path Roadmap risks 1 (Accelerated Bone Loss and Fracture Risk) and 11 (Reduced Muscle Mass, Strength, and Endurance). Data were collected from 7 crewmembers before flight and during their ISS missions. Before launch, crewmembers performed a single data collection session at the NASA Johnson Space Center. 
Three-dimensional motion capture data were collected for 30 s at speeds ranging from 1.5 to 9.5 mph in 0.5 mph increments with a 12-camera system. During flight, each crewmember completed up to 6 data collection sessions spread across their missions, performing their normal exercise prescription for the test day, resulting in varying data collection protocols between sessions. Motion data were collected by a single HD video camera positioned to view the crewmembers' left side, and tape markers were placed on their feet, legs, and neck on specific landmarks. Before data collection, the crewmembers calibrated the video camera. Video data were collected during the entire exercise session at 30 Hz. Kinematic data were used to determine left leg hip, knee, and ankle range of motion and contact time, flight time, and stride time for each stride. 129 trials in weightlessness were analyzed. Mean time-normalized strides were found for each trial, and cross-correlation procedures were used to examine the strength and direction of relationships between segment movement pattern timing in each gravitational condition. Cross-correlation analyses between gravitational conditions revealed highly consistent movement patterns at each joint. Peak correlation coefficients occurred at 0% phase, indicating there were no lags in movement timing. Joint ranges of motion were similar between gravitational conditions, with some slight differences between subjects. Motion patterns in weightlessness were highly consistent at a given speed with those occurring in 1G, indicating that despite differing sensory input, subjects maintain running kinematics. The data suggest that individuals are capable of compensating for loss of limb weight when creating movement strategies. These results have important implications for creating training programs for use in weightlessness as practitioners can have greater confidence in running motions transferring across gravitational environments. 
Furthermore, these results have implications for use by researchers investigating motor control mechanisms and investigating hypotheses related to movement strategies when using sensory input that is dependent upon gravity.
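The cross-correlation of mean time-normalized strides can be sketched as follows; a peak coefficient near 1 at zero lag corresponds to the reported finding of matching movement timing. The synthetic joint-angle curves are illustrative:

```python
import numpy as np

def normalized_xcorr(a, b):
    """Normalized cross-correlation of two time-normalized stride
    curves; returns the peak coefficient and its lag (a zero lag means
    the two movement patterns are in phase)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    lags = np.arange(-n + 1, n)
    c = np.correlate(a, b, mode="full") / n
    k = int(np.argmax(c))
    return float(c[k]), int(lags[k])

# two synthetic knee-angle curves over a 0-100% stride cycle:
# same timing, slightly different offset and amplitude
t = np.linspace(0.0, 1.0, 101)
ground = 30.0 + 25.0 * np.sin(2 * np.pi * t)
flight = 31.0 + 24.0 * np.sin(2 * np.pi * t)
r_peak, lag = normalized_xcorr(ground, flight)
```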

  16. W-76 PBX 9501 cylinder tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, L.G.; Catanach, R.A.

    1998-07-01

Five 1-inch-diameter cylinder tests were fired in support of the W-76 high explosive surveillance program. Three of the tests used baseline material, and two used stockpile-return material. The diagnostics were electrical pins to measure detonation velocity and a streak camera to measure wall motion. The data were analyzed for cylinder energy, Gurney energy, and detonation velocity. The results of all three measures were consistent across all five tests, to within the experimental accuracy.

  17. A high-resolution three-dimensional far-infrared thermal and true-color imaging system for medical applications.

    PubMed

    Cheng, Victor S; Bai, Jinfen; Chen, Yazhu

    2009-11-01

As the needs for various kinds of body-surface information are wide-ranging, we developed an integrated imaging-sensor system that can synchronously acquire high-resolution three-dimensional (3D) far-infrared (FIR) thermal and true-color images of the body surface. The proposed system integrates one FIR camera and one color camera with a 3D structured-light binocular profilometer. To eliminate the disturbance to the subject caused by intense light projected directly into the eye from the LCD projector, we developed a gray-encoding strategy based on an optimum fringe-projection layout. A self-heated checkerboard was employed to calibrate the different types of cameras. We then calibrated the structured light emitted by the LCD projector, based on the stereo-vision principle and a least-squares quadric surface-fitting algorithm. Afterwards, the precise 3D surface can be fused with the undistorted thermal and color images. To enhance medical applications, a region of interest (ROI) in the temperature or color image representing the surface area of clinical interest can be located at the corresponding position in the other images through coordinate-system transformation. System evaluation demonstrated a mapping error between FIR and visual images of three pixels or less. Experiments show that this work is significantly useful in certain disease diagnoses.

  18. Three dimensional modelling for the target asteroid of HAYABUSA

    NASA Astrophysics Data System (ADS)

    Demura, H.; Kobayashi, S.; Asada, N.; Hashimoto, T.; Saito, J.

Hayabusa is the first sample-return mission of Japan. It was launched on May 9, 2003, and will arrive at the target asteroid 25143 Itokawa in June 2005. The spacecraft has three optical navigation cameras: two wide-angle cameras and a telescopic one. The telescope with a filter wheel was named AMICA (Asteroid Multiband Imaging CAmera). We are going to model the shape of the target asteroid with this telescope (expected resolution: 1 m/pixel at 10 km distance; field of view: 5.7 square degrees; MPP-type CCD with 1024 x 1000 pixels). Because the size of Hayabusa is about 1 x 1 x 1 m, our goal is shape modeling to about 1 m precision, on the basis of a camera system that scans by the rotation of the asteroid. This image-based modeling requires sequential images via AMICA and a history of the distance between the asteroid and Hayabusa provided by a laser range finder. We established a system of hierarchically recursive search with sub-pixel matching of ground control points, which are picked up with the SUSAN operator. The matched dataset is reconstructed under the constraint of epipolar geometry, and the obtained group of three-dimensional points is converted to a polygon model with Delaunay triangulation. The current status of our development of the shape modeling is displayed.
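The final meshing step, converting matched three-dimensional points into a polygon model by Delaunay triangulation, can be sketched as below. This simplified 2.5D version (triangulating in a parameter plane and lifting to 3D) assumes SciPy and synthetic points; a full asteroid model would require triangulating the whole closed surface:

```python
import numpy as np
from scipy.spatial import Delaunay

# Synthetic "ground control points": a hemisphere standing in for the
# camera-facing side of the body; all values here are illustrative.
rng = np.random.default_rng(6)
uv = rng.random((200, 2)) * 2.0 - 1.0              # points in a unit square
keep = (uv ** 2).sum(axis=1) < 0.95                # keep points inside a disc
uv = uv[keep]
z = np.sqrt(1.0 - (uv ** 2).sum(axis=1))           # hemisphere height
points3d = np.column_stack([uv, z])

tri = Delaunay(uv)                                 # triangulate in the plane
n_faces = len(tri.simplices)                       # each simplex lifts to a 3D face
```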

  19. Morphological analysis of hummocks in debris avalanche deposits using UAS-derived high-definition topographic data

    NASA Astrophysics Data System (ADS)

    Hayakawa, Yuichi S.; Obanawa, Hiroyuki; Yoshida, Hidetsugu; Naruhashi, Ryutaro; Okumura, Koji; Zaiki, Masumi

    2016-04-01

A debris avalanche caused by the sector collapse of a volcanic mountain often forms depositional landforms with a characteristic surface morphology comprising hummocks. Geomorphological and sedimentological analyses of the debris avalanche deposits (DAD) on the northeastern face of Mt. Erciyes in central Turkey have been performed to investigate the mechanisms and processes of the debris avalanche. The morphometry of the hummocks provides an opportunity to examine the volumetric and kinematic characteristics of the DAD. Although the exact age is unknown, the sector collapse producing this DAD is thought to have occurred in the late Pleistocene (sometime during 90-20 ka), and subsequent sediment supply from the DAD could have affected ancient human activities in the downstream basin areas. In order to measure the detailed surface morphology and depositional structures of the DAD, we apply structure-from-motion multi-view stereo (SfM-MVS) photogrammetry using an unmanned aerial system (UAS) and a handheld camera. The UAS, comprising a small unmanned aerial vehicle (sUAV) and a digital camera, provides low-altitude aerial photographs that capture the surface morphology over an area of several square kilometers. A high-resolution topographic data set, as well as an orthorectified image, of the hummocks was then obtained from the digital elevation model (DEM), and the geometric features of the hummocks were examined. The handheld camera was used to photograph an outcrop face of the DAD along a road to support the sedimentological investigation. Three-dimensional topographic models of the outcrop, with a panoramic orthorectified image projected on a vertical plane, were obtained. These data enable an effective description of the sedimentological structure of the hummocks in the DAD. The detailed map of the DAD is further examined together with a regional geomorphological map for comparison with other geomorphological features, including fluvial valleys, terraces, lakes, and active faults.

  20. "Teacher in Space" Trainees - Arriflex Motion Picture Camera

    NASA Image and Video Library

    1985-09-20

    S85-40670 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe and Barbara R. Morgan (out of frame) have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. McAuliffe zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA

  1. Teacher-in-Space Trainees - Arriflex Motion Picture Camera

    NASA Image and Video Library

    1985-09-20

S85-40669 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe (left) and Barbara R. Morgan have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Morgan adjusts a lens as a studious McAuliffe looks on. Photo credit: NASA

  2. "Teacher in Space" Trainees - Arriflex Motion Picture Camera

    NASA Image and Video Library

    1985-09-20

    S85-40671 (18 Sept. 1985) --- The two teachers, Barbara R. Morgan and Sharon Christa McAuliffe (out of frame) have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Morgan zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA

  3. Toward an affordable and user-friendly visual motion capture system.

    PubMed

    Bonnet, V; Sylla, N; Cherubini, A; Gonzáles, A; Azevedo Coste, C; Fraisse, P; Venture, G

    2014-01-01

The present study aims at designing and evaluating a low-cost, simple and portable system for arm joint angle estimation during grasping-like motions. The system is based on a single RGB-D camera and three customized markers. The automatically detected and tracked marker positions were used as inputs to an offline inverse kinematic process based on biomechanical constraints to reduce noise effects and handle marker occlusion. The method was validated on 4 subjects performing different motions. The joint angles were estimated both with the proposed low-cost system and with a stereophotogrammetric system. Comparative analysis shows good accuracy, with a high correlation coefficient (r = 0.92) and a low average RMS error (3.8 deg).

  4. Asymmetries and three-dimensional features of vestibular cross-coupled stimuli illuminated through modeling

    PubMed Central

    Holly, Jan E.; Masood, M. Arjumand; Bhandari, Chiran S.

    2017-01-01

    Head movements during sustained rotation can cause angular cross-coupling which leads to tumbling illusions. Even though angular vectors predict equal magnitude illusions for head movements in opposite directions, the magnitudes of the illusions are often surprisingly asymmetric, such as during leftward versus rightward yaw while horizontal in a centrifuge. This paper presents a comprehensive investigation of the angular-linear stimulus combinations from eight different published papers in which asymmetries were found. Interactions between all angular and linear vectors, including gravity, are taken into account to model the three-dimensional consequences of the stimuli. Three main results followed. First, for every pair of head yaw movements, an asymmetry was found in the stimulus itself when considered in a fully three-dimensional manner, and the direction of the asymmetry matched the subjectively reported magnitude asymmetry. Second, for pitch and roll head movements for which motion sickness was measured, the stimulus was found symmetric in every case except one, and motion sickness generally aligned with other factors such as the existence of a head rest. Third, three-dimensional modeling predicted subjective inconsistency in the direction of perceived rotation when linear and angular components were oppositely-directed, and predicted surplus illusory rotation in the direction of head movement. PMID:27814310

  5. Astronaut Walz on flight deck with IMAX camera

    NASA Image and Video Library

    1996-11-04

    STS079-362-023 (16-26 Sept. 1996) --- Astronaut Carl E. Walz, mission specialist, positions the IMAX camera for a shoot on the flight deck of the Space Shuttle Atlantis. The IMAX project is a collaboration among NASA, the Smithsonian Institution's National Air and Space Museum, IMAX Systems Corporation and the Lockheed Corporation to document in motion picture format significant space activities and promote NASA's educational goals using the IMAX film medium. This system, developed by IMAX of Toronto, uses specially designed 65mm cameras and projectors to record and display very high definition color motion pictures which, accompanied by six-channel high fidelity sound, are displayed on screens in IMAX and OMNIMAX theaters that are up to ten times larger than a conventional screen, producing a feeling of "being there." The 65mm photography is transferred to 70mm motion picture films for showing in IMAX theaters. IMAX cameras have been flown on 14 previous missions.

  6. Hand motion segmentation against skin colour background in breast awareness applications.

    PubMed

    Hu, Yuqin; Naguib, Raouf N G; Todman, Alison G; Amin, Saad A; Al-Omishy, Hassanein; Oikonomou, Andreas; Tucker, Nick

    2004-01-01

    Skin colour modelling and classification play significant roles in face and hand detection, recognition and tracking. The hand is the essential tool used in breast self-examination, and it needs to be detected and analysed during breast palpation. However, the background of a woman's moving hand is her breast, which has the same or similar colour as the hand. Additionally, colour images recorded by a web camera are strongly affected by lighting and brightness conditions. Hence, it is a challenging task to segment and track the hand against the breast without utilising any artificial markers, such as coloured nail polish. In this paper, a two-dimensional Gaussian skin colour model is employed in a particular way to identify the breast rather than the hand. First, an input image is transformed to the YCbCr colour space, which is less sensitive to lighting conditions and more tolerant of skin tones. The breast, thus detected by the Gaussian skin model, is used as the baseline or framework for the hand motion. Second, motion cues are used to segment the hand motion against the detected baseline. The desired segmentation results have been achieved and the robustness of the algorithm is demonstrated in this paper.
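A minimal sketch of a two-dimensional Gaussian skin-colour model in the (Cb, Cr) plane, with illustrative parameters (a real model would be fitted to training pixels; the values below are not the paper's):

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """ITU-R BT.601 chrominance components of an RGB array (0-255 range)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([cb, cr], axis=-1)

def skin_likelihood(rgb, mean, cov):
    """2-D Gaussian skin-colour likelihood in the (Cb, Cr) plane."""
    x = rgb_to_cbcr(np.asarray(rgb, float)) - mean
    inv = np.linalg.inv(cov)
    m = np.einsum('...i,ij,...j->...', x, inv, x)  # squared Mahalanobis distance
    return np.exp(-0.5 * m)

# Illustrative model parameters and two test pixels (skin-like vs. green)
mean = np.array([110.0, 155.0])
cov = np.array([[160.0, 20.0], [20.0, 120.0]])
pixels = np.array([[[200, 140, 120]], [[40, 200, 60]]])
print(skin_likelihood(pixels, mean, cov).round(3))
```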

  7. Variation in detection among passive infrared triggered-cameras used in wildlife research

    USGS Publications Warehouse

    Damm, Philip E.; Grand, James B.; Barnett, Steven W.

    2010-01-01

    Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.

  8. Three Dimensional Plenoptic PIV Measurements of a Turbulent Boundary Layer Overlying a Hemispherical Roughness Element

    NASA Astrophysics Data System (ADS)

    Johnson, Kyle; Thurow, Brian; Kim, Taehoon; Blois, Gianluca; Christensen, Kenneth

    2016-11-01

    Three-dimensional, three-component (3D-3C) measurements were made using a plenoptic camera of the flow around a roughness element immersed in a turbulent boundary layer. A refractive-index-matched approach allowed whole-field optical access from a single camera to a measurement volume that includes transparent solid geometries. In particular, this experiment measures the flow over a single hemispherical roughness element made of acrylic and immersed in a working fluid of sodium iodide solution. Our results demonstrate that plenoptic particle image velocimetry (PIV) is a viable technique for obtaining statistically significant volumetric velocity measurements even in a complex separated flow. The ratio of boundary layer thickness to roughness height was 4.97 and the Reynolds number (based on roughness height) was 4.57 × 10³. Our measurements reveal key flow features such as the spiraling legs of the shear layer, a recirculation region, and shed arch vortices. Proper orthogonal decomposition (POD) analysis was applied to the instantaneous velocity and vorticity data to extract these features. Supported by the National Science Foundation Grant No. 1235726.
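The abstract does not spell out the POD procedure; a common snapshot-POD formulation via the singular value decomposition, applied here to synthetic data standing in for the PIV snapshots, looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "velocity snapshots": n_points spatial samples, n_t time steps,
# built from two known spatial modes plus noise (stand-in for PIV data).
n_points, n_t = 500, 40
x = np.linspace(0, 2 * np.pi, n_points)
modes_true = np.stack([np.sin(x), np.cos(2 * x)])            # (2, n_points)
coeffs = rng.normal(size=(n_t, 2)) * np.array([5.0, 2.0])
snapshots = coeffs @ modes_true + 0.01 * rng.normal(size=(n_t, n_points))

# Snapshot POD: subtract the temporal mean, then SVD of the fluctuations.
# Rows of Vt are the spatial POD modes; s**2 gives the modal energies.
fluct = snapshots - snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(energy[:3].round(3))  # the two planted modes dominate the energy
```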

  9. Eliminating Bias In Acousto-Optical Spectrum Analysis

    NASA Technical Reports Server (NTRS)

    Ansari, Homayoon; Lesh, James R.

    1992-01-01

    Scheme for digital processing of video signals in an acousto-optical spectrum analyzer provides real-time correction for signal-dependent spectral bias. The spectrum analyzer is described in "Two-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18092); related apparatus is described in "Three-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18122). The essence of the correction is to average the digitized outputs of the pixels in each CCD row and to subtract this average from the digitized output of each pixel in the row. The signal is processed electro-optically with reference-function signals to form a two-dimensional spectral image in the CCD camera.
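The correction described, averaging each CCD row and subtracting that average from every pixel in the row, can be sketched as:

```python
import numpy as np

def remove_row_bias(frame):
    """Subtract each CCD row's mean from every pixel in that row."""
    frame = np.asarray(frame, float)
    return frame - frame.mean(axis=1, keepdims=True)

# A flat bias plus a single spectral peak: after correction each row is
# zero-mean and the peak stands out above the (removed) bias level.
frame = np.full((4, 8), 10.0)
frame[2, 5] += 40.0
corrected = remove_row_bias(frame)
print(corrected[2].round(1))
```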

  10. Human silhouette matching based on moment invariants

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi

    2005-07-01

    This paper applies silhouette matching based on moment invariants to infer human motion parameters from video sequences captured by a single monocular uncalibrated camera. Currently there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the input video contents. A standard 3D motion database is built up in advance using the marker-based technique. Given a video sequence, human silhouettes are extracted together with the camera viewpoint information, which is used to project the standard 3D motion database onto a 2D one. The video recovery problem is thereby formulated as a matching issue: finding the body pose in the standard 2D library most similar to the one in the video image. The framework is applied to trampoline sport, where complicated human motion parameters are obtained from single-camera video sequences, and experiments demonstrate that this approach is feasible for monocular video-based 3D motion reconstruction.
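Silhouette matching with moment invariants commonly uses the Hu invariants; the paper does not specify which invariants or implementation were used, so the following is a pure-NumPy sketch of the first two invariants and a check of their translation invariance:

```python
import numpy as np

def hu_moments(img):
    """First two Hu invariants of a binary silhouette (pure-NumPy sketch)."""
    img = np.asarray(img, float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):   # central moments
        return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
    def eta(p, q):  # normalised central moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

# The same silhouette at two different positions gives identical invariants
a = np.zeros((32, 32)); a[5:15, 5:12] = 1
b = np.zeros((32, 32)); b[14:24, 18:25] = 1
print(np.allclose(hu_moments(a), hu_moments(b)))  # True
```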

  11. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    PubMed Central

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-01-01

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems are limited by various factors such as camera motion, optical blurring, facial expressions and gender. Motion blur typically appears in face images because of movement of the camera sensor and/or movement of the face during image acquisition. The facial features in captured images can therefore be transformed according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to its effects. Experimental results show that our method is more effective at enhancing age estimation performance than systems that do not employ it. PMID:26334282

  12. Multiple object, three-dimensional motion tracking using the Xbox Kinect sensor

    NASA Astrophysics Data System (ADS)

    Rosi, T.; Onorato, P.; Oss, S.

    2017-11-01

    In this article we discuss the capability of the Xbox Kinect sensor to acquire three-dimensional motion data of multiple objects. Two experiments regarding fundamental features of Newtonian mechanics are performed to test the tracking abilities of our setup. Particular attention is paid to checking and visualising the conservation of linear momentum, angular momentum and energy. In both experiments, two objects are tracked while falling in the gravitational field. The obtained data are visualised in a 3D virtual environment to help students understand the physics behind the performed experiments. The proposed experiments were analysed with a group of university students who are aspiring physics and mathematics teachers. Their comments are presented in this paper.
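A sketch of the kind of conservation check described, using hypothetical tracked trajectories in place of real Kinect data: for two free-falling objects, the horizontal components of the total linear momentum should stay constant.

```python
import numpy as np

# Hypothetical tracked 3-D positions (metres) of two free-falling objects,
# sampled at the Kinect's approximate 30 Hz frame rate.
dt, g = 1.0 / 30.0, np.array([0.0, 0.0, -9.81])
t = np.arange(0, 0.5, dt)[:, None]
p1 = np.array([0.0, 0.0, 2.0]) + np.array([ 0.5, 0.0, 0.0]) * t + 0.5 * g * t**2
p2 = np.array([1.0, 0.0, 2.0]) + np.array([-0.5, 0.0, 0.0]) * t + 0.5 * g * t**2
m1, m2 = 0.2, 0.4  # assumed masses, kg

# Velocities by finite differences, then total momentum per frame
v1 = np.gradient(p1, dt, axis=0)
v2 = np.gradient(p2, dt, axis=0)
p_total = m1 * v1 + m2 * v2
print(p_total[1].round(3))  # x and y components stay constant; z changes with gravity
```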

  13. Interface of Augmented Reality Game Using Face Tracking and Its Application to Advertising

    NASA Astrophysics Data System (ADS)

    Lee, Young Jae; Lee, Yong Jae

    This paper proposes a face-interface method that recognizes the gamer's movements in the real world and applies them in cyberspace, enabling a motion-based game built on three-dimensional spatial recognition. The proposed algorithm is a new face recognition technique that combines the strengths of two existing algorithms, CBCH and CAMSHIFT, and its validity has been proved through a series of experiments. Moreover, for the purpose of interdisciplinary study, advertising concepts were introduced into the three-dimensional motion-based game to explore possible new revenue models for the game industry. This attempt is significant in that it examines whether an advertising brand placed in the game can play the role of a game item or quest. The proposed method can provide basic references for motion-based game development.
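The paper's exact CBCH/CAMSHIFT combination is not given; the core step CAMSHIFT builds on is mean shift, i.e. repeatedly moving a search window to the centroid of a (skin-colour) probability map. A plain-NumPy sketch, with an illustrative synthetic probability map rather than a real back-projection:

```python
import numpy as np

def mean_shift(prob, win, n_iter=20):
    """Shift a tracking window (cy, cx, half_h, half_w) to the centroid of a
    probability map: the core iteration underlying CAMSHIFT."""
    y, x = np.mgrid[:prob.shape[0], :prob.shape[1]]
    cy, cx, hh, hw = win
    for _ in range(n_iter):
        m = (np.abs(y - cy) <= hh) & (np.abs(x - cx) <= hw)
        mass = prob[m].sum()
        if mass == 0:
            break
        cy = int(round((y[m] * prob[m]).sum() / mass))
        cx = int(round((x[m] * prob[m]).sum() / mass))
    return cy, cx

# A blurred "face" blob centred at (40, 60); start the window elsewhere
yy, xx = np.mgrid[:80, :100]
prob = np.exp(-(((yy - 40) ** 2) + ((xx - 60) ** 2)) / 200.0)
print(mean_shift(prob, win=(20, 30, 12, 12)))  # converges near (40, 60)
```

CAMSHIFT additionally adapts the window size and orientation each iteration; that adaptation is omitted here.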

  14. The three dimensional motion and stability of a rotating space station: Cable-counterweight configuration

    NASA Technical Reports Server (NTRS)

    Bainum, P. M.; Evans, K. S.

    1974-01-01

    The three-dimensional equations of motion for a cable-connected space station/counterweight system are developed using a Lagrangian formulation. The system model employed allows for cable and end-body damping and restoring effects. The equations are then linearized about the equilibrium motion and nondimensionalized. To first order, the out-of-plane equations uncouple from the in-plane equations. Therefore, the characteristic polynomials for the in-plane and out-of-plane equations are developed and treated separately. From the general in-plane characteristic equation, necessary conditions for stability are obtained. The Routh-Hurwitz necessary and sufficient conditions for stability are derived for the general out-of-plane characteristic equation. Special cases of the in-plane and out-of-plane equations (such as identical end masses, and the cable attached to the centers of mass of the two end bodies) are then examined for stability criteria.
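The paper's characteristic polynomials are not reproduced in the abstract, but the condition that the Routh-Hurwitz criterion tests algebraically (all roots of the characteristic polynomial in the open left half-plane) can be checked numerically for any given polynomial. The polynomials below are illustrative only, not the paper's:

```python
import numpy as np

def is_asymptotically_stable(coeffs):
    """True if every root of the characteristic polynomial has negative real
    part, i.e. the condition the Routh-Hurwitz criterion verifies."""
    return bool(np.all(np.roots(coeffs).real < 0))

# Illustrative examples: s^3 + 2s^2 + 3s + 1 is stable; s^3 - s + 1 is not
# (a vanishing or negative coefficient already violates a necessary condition).
print(is_asymptotically_stable([1, 2, 3, 1]))   # True
print(is_asymptotically_stable([1, 0, -1, 1]))  # False
```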

  15. Three-Dimensional ISAR Imaging Method for High-Speed Targets in Short-Range Using Impulse Radar Based on SIMO Array

    PubMed Central

    Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei

    2016-01-01

    This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets in short-range using an impulse radar. According to the requirements for high-speed target measurement in short-range, this paper establishes the single-input multiple-output (SIMO) antenna array, and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry relationship of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile’s rotation angle and the rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm. PMID:26978372

  16. Camera calibration for multidirectional flame chemiluminescence tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun

    2017-04-01

    Flame chemiluminescence tomography (FCT), which combines computerized tomography theory with multidirectional chemiluminescence emission measurements, can provide instantaneous three-dimensional (3-D) flame diagnostics with high spatial and temporal resolution. One critical step of FCT is recording the projections with multiple cameras from different view angles. High-accuracy reconstruction requires that the extrinsic parameters (positions and orientations) and intrinsic parameters (especially the image distances) of the cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method is presented for FCT, and a 3-D calibration pattern was designed to solve for the parameters. The precision of the method was evaluated by reprojecting feature points to the cameras using the calibration results. The maximum root-mean-square error is 1.42 pixels for the feature points' positions and 0.0064 mm for the image distance. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results show that the FCT system provides reasonable reconstruction accuracy using the cameras' calibration results.
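The calibration pipeline itself is not detailed in the abstract; the evaluation step it describes, reprojecting known 3-D feature points with the calibrated parameters and measuring the pixel error, can be sketched for an ideal pinhole model (all intrinsics, extrinsics and points below are hypothetical):

```python
import numpy as np

def reprojection_rms(obj_pts, img_pts, K, R, t):
    """RMS pixel error between observed image points and pinhole
    reprojections of known 3-D calibration-pattern points."""
    cam = R @ np.asarray(obj_pts, float).T + t[:, None]  # world -> camera frame
    proj = K @ cam                                       # camera -> homogeneous pixels
    proj = (proj[:2] / proj[2]).T
    return float(np.sqrt(np.mean(np.sum((proj - img_pts) ** 2, axis=1))))

# Hypothetical intrinsics/extrinsics and a perfect synthetic observation
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
obj = np.array([[0.0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0.1, 0.1, 0.05]])
cam = R @ obj.T + t[:, None]
img = ((K @ cam)[:2] / (K @ cam)[2]).T
print(round(reprojection_rms(obj, img, K, R, t), 6))  # 0.0 for exact data
```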

  17. Stereoscopic 3D entertainment and its effect on viewing comfort: comparison of children and adults.

    PubMed

    Pölönen, Monika; Järvenpää, Toni; Bilcu, Beatrice

    2013-01-01

    Children's and adults' viewing comfort during stereoscopic three-dimensional film viewing and computer game playing was studied. Certain mild changes in visual function (heterophoria and near-point-of-accommodation values), as well as in eyestrain and visually induced motion sickness levels, were found when individual setups were compared. The viewing system influenced viewing comfort, in particular eyestrain levels, but no clear difference between two- and three-dimensional systems was found. Additionally, certain mild differences in visual functions and visually induced motion sickness levels between adults and children were found. In general, all of the system-task combinations caused mild eyestrain and possible changes in visual functions, but these changes were small in magnitude. According to subjective opinions that further support these measurements, using a stereoscopic three-dimensional system for up to 2 h was acceptable for most users regardless of their age. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  18. Three-dimensional multispectral hand-held optoacoustic imaging with microsecond-level delayed laser pulses

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. L.; Bay, Erwin; Razansky, Daniel

    2015-03-01

    Three-dimensional hand-held optoacoustic imaging comes with important advantages that prompt the clinical translation of this modality, with applications envisioned in cardiovascular and peripheral vascular disease, disorders of the lymphatic system, breast cancer, arthritis and inflammation. Of particular importance is the multispectral acquisition of data by exciting the tissue at several wavelengths, which enables functional imaging applications. However, multispectral imaging of entire three-dimensional regions is significantly challenged by motion artefacts in concurrent acquisitions at different wavelengths. A method based on the acquisition of volumetric datasets with a microsecond-level delay between pulses at different wavelengths is described in this work. The method avoids the image artefacts that a scanning velocity greater than 2 m/s would otherwise impose; it thus not only facilitates imaging affected by respiratory, cardiac or other fast intrinsic movements of living tissues, but can also achieve artefact-free imaging in the presence of more significant motion, e.g., abrupt displacements during handheld operation in a clinical environment.

  19. The "Collisions Cube" Molecular Dynamics Simulator.

    ERIC Educational Resources Information Center

    Nash, John J.; Smith, Paul E.

    1995-01-01

    Describes a molecular dynamics simulator that employs ping-pong balls as the atoms or molecules and is suitable for either large lecture halls or small classrooms. Discusses its use in illustrating many of the fundamental concepts related to molecular motion and dynamics and providing a three-dimensional perspective of molecular motion. (JRH)

  20. An electrically tunable plenoptic camera using a liquid crystal microlens array.

    PubMed

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-05-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.

  1. An electrically tunable plenoptic camera using a liquid crystal microlens array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Yu; School of Automation, Huazhong University of Science and Technology, Wuhan 430074; Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074

    2015-05-15

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.

  2. An electrically tunable plenoptic camera using a liquid crystal microlens array

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-05-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.

  3. OpenCV and TYZX : video surveillance for tracking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jim; Spencer, Andrew; Chu, Eric

    2008-08-01

    As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software, which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real time. Future work on the camera system may include three-dimensional volumetric tracking by using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.

  4. PubMed Central

    Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.

    2017-01-01

    Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was no significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles the actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888

  5. The Example of Using the Xiaomi Cameras in Inventory of Monumental Objects - First Results

    NASA Astrophysics Data System (ADS)

    Markiewicz, J. S.; Łapiński, S.; Bienkowski, R.; Kaliszewska, A.

    2017-11-01

    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained using three image sources: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II) and a medium-format camera (Hasselblad-Hd4). In order to check how the results obtained from these sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the distortion determined in the self-calibration process, the flatness of the walls, and the discrepancies between point clouds from the low-cost cameras and reference data. The results presented below are an outcome of co-operation of researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology and the National Museum in Warsaw.

  6. Casting Light and Shadows on a Saharan Dust Storm

    NASA Technical Reports Server (NTRS)

    2003-01-01

    On March 2, 2003, near-surface winds carried a large amount of Saharan dust aloft and transported the material westward over the Atlantic Ocean. These observations from the Multi-angle Imaging SpectroRadiometer (MISR) aboard NASA's Terra satellite depict an area near the Cape Verde Islands (situated about 700 kilometers off of Africa's western coast) and provide images of the dust plume along with measurements of its height and motion. Tracking the three-dimensional extent and motion of air masses containing dust or other types of aerosols provides data that can be used to verify and improve computer simulations of particulate transport over large distances, with application to enhancing our understanding of the effects of such particles on meteorology, ocean biological productivity, and human health.

    MISR images the Earth by measuring the spatial patterns of reflected sunlight. In the upper panel of the still image pair, the observations are displayed as a natural-color snapshot from MISR's vertical-viewing (nadir) camera. High-altitude cirrus clouds cast shadows on the underlying ocean and dust layer, which are visible in shades of blue and tan, respectively. In the lower panel, heights derived from automated stereoscopic processing of MISR's multi-angle imagery show the cirrus clouds (yellow areas) to be situated about 12 kilometers above sea level. The distinctive spatial patterns of these clouds provide the necessary contrast to enable automated feature matching between images acquired at different view angles. For most of the dust layer, which is spatially much more homogeneous, the stereoscopic approach was unable to retrieve elevation data. However, the edges of shadows cast by the cirrus clouds onto the dust (indicated by blue and cyan pixels) provide sufficient spatial contrast for a retrieval of the dust layer's height, and indicate that the top of layer is only about 2.5 kilometers above sea level.

    Motion of the dust and clouds is directly observable with the assistance of the multi-angle 'fly-over' animation (below). The frames of the animation consist of data acquired by the 70-degree, 60-degree, 46-degree and 26-degree forward-viewing cameras in sequence, followed by the images from the nadir camera and each of the four backward-viewing cameras, ending with the 70-degree backward image. Much of the south-to-north shift in the position of the clouds is due to geometric parallax between the nine view angles (rather than true motion), whereas the west-to-east motion is due to actual motion of the clouds over the seven minutes during which all nine cameras observed the scene. MISR's automated data processing retrieved a primarily westerly (eastward) motion of these clouds with speeds of 30-40 meters per second. Note that there is much less geometric parallax for the cloud shadows owing to the relatively low altitude of the dust layer upon which the shadows are cast (the amount of parallax is proportional to elevation, and a feature at the surface would have no geometric parallax at all); however, the westerly motion of the shadows matches the actual motion of the clouds. The automated processing was not able to resolve a velocity for the dust plume, but by manually tracking dust features within the plume images that comprise the animation sequence we can derive an easterly (westward) speed of about 16 meters per second. These analyses and visualizations of the MISR data demonstrate that not only are the cirrus clouds and dust separated significantly in elevation, but they exist in completely different wind regimes, with the clouds moving toward the east and the dust moving toward the west.


    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17040. The panels cover an area of about 312 kilometers x 242 kilometers, and use data from blocks 74 to 77 within World Reference System-2 path 207.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  7. The role of three-dimensional high-definition laparoscopic surgery for gynaecology.

    PubMed

    Usta, Taner A; Gundogdu, Elif C

    2015-08-01

    This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and the learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. The headache, eye fatigue and nausea reported with first-generation systems occur no more often than with two-dimensional (2D) LVSs. The system's higher cost, the obligation to wear glasses, and the big, heavy camera probe in some of the devices are among the negative aspects that need to be improved. The depth loss in tissues seen with 2D LVSs, and the adverse events associated with it, can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, a reduced error margin and the absence of the side effects reported with first-generation systems, 3D LVSs appear to be strong competition for classical laparoscopic imaging systems. Thanks to technological advancements, lighter and smaller cameras and glasses-free monitors are in the near future.

  8. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to produce a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
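
The per-pixel backward trace described in this record can be sketched for the main lens alone (lenslet array omitted) using a paraxial thin-lens model and a single object plane; all function names and parameter values below are illustrative assumptions, not the authors' raytracing package:

```python
import numpy as np

def render_pixel(x_s, y_s, texture, z_s=50.0, z_o=1000.0, f=48.0,
                 pupil_r=5.0, n_rays=16, scale=0.1, rng=None):
    """Backward-trace n_rays from one sensor point through a thin lens and
    average the texture samples where the rays hit a single object plane."""
    rng = rng or np.random.default_rng(0)
    # random sample points on the circular entrance pupil
    theta = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    r = pupil_r * np.sqrt(rng.uniform(0.0, 1.0, n_rays))
    xp, yp = r * np.cos(theta), r * np.sin(theta)
    # paraxial ray direction from sensor point to pupil point,
    # then thin-lens refraction at the lens plane
    dx_in, dy_in = (xp - x_s) / z_s, (yp - y_s) / z_s
    dx_out, dy_out = dx_in - xp / f, dy_in - yp / f
    # intersect the object plane a distance z_o in front of the lens
    xo, yo = xp + dx_out * z_o, yp + dy_out * z_o
    # nearest-neighbour texture lookup, clamped to the texture bounds
    h, w = texture.shape
    i = np.clip((yo * scale + h / 2).astype(int), 0, h - 1)
    j = np.clip((xo * scale + w / 2).astype(int), 0, w - 1)
    # integrating over the cone = averaging the samples for this pixel
    return texture[i, j].mean()
```

Looping this over every sensor pixel, and over one such plane per object depth, reproduces the plane-stack approximation the abstract describes.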

  9. Method and apparatus for three dimensional braiding

    NASA Technical Reports Server (NTRS)

    Farley, Gary L. (Inventor)

    1997-01-01

    A machine for three-dimensional braiding of fibers is provided in which carrier members travel on a curved, segmented and movable braiding surface. The carrier members are capable of independent, self-propelled motion along the braiding surface. Carrier member position on the braiding surface is controlled and monitored by computer. Also disclosed is a yarn take-up device capable of maintaining tension in the braiding fiber.

  10. Method and apparatus for three dimensional braiding

    NASA Technical Reports Server (NTRS)

    Farley, Gary L. (Inventor)

    1995-01-01

    A machine for three-dimensional braiding of fibers is provided in which carrier members travel on a curved, segmented and movable braiding surface. The carrier members are capable of independent, self-propelled motion along the braiding surface. Carrier member position on the braiding surface is controlled and monitored by computer. Also disclosed is a yarn take-up device capable of maintaining tension in the braiding fiber.

  11. Numerical study on wave loads and motions of two ships advancing in waves by using three-dimensional translating-pulsating source

    NASA Astrophysics Data System (ADS)

    Xu, Yong; Dong, Wen-Cai

    2013-08-01

    A frequency domain analysis method based on the three-dimensional translating-pulsating (3DTP) source Green function is developed to investigate wave loads and free motions of two ships advancing on parallel courses in waves. Two experiments are carried out to measure, respectively, the wave loads and the free motions of a pair of side-by-side ship models advancing at an identical speed in head regular waves. For comparison, each model is also tested alone. Predictions obtained by the present solution are found to be in favorable agreement with the model tests and are more accurate than those of the traditional method based on the three-dimensional pulsating (3DP) source Green function. Numerical resonances and peak shifts can be found in the 3DP predictions, which result from the wave energy trapped in the gap between the two ships and the extremely inhomogeneous wave load distribution on each hull. They are eliminated by 3DTP, in which forward speed affects the free surface so that most of the wave energy can escape from the gap. Both the experiment and the present prediction show that hydrodynamic interaction effects on wave loads and free motions are significant. The present solver may serve as a validated tool to predict wave loads and motions of two vessels during replenishment at sea, and may help to evaluate the effect of hydrodynamic interaction on ship safety in replenishment operations.

  12. Active elastic dimers: cells moving on rigid tracks.

    PubMed

    Lopez, J H; Das, Moumita; Schwarz, J M

    2014-09-01

    Experiments suggest that the migration of some cells in the three-dimensional extracellular matrix bears strong resemblance to one-dimensional cell migration. Motivated by this observation, we construct and study a minimal one-dimensional model cell made of two beads and an active spring moving along a rigid track. The active spring models the stress fibers with their myosin-driven contractility and α-actinin-driven extendability, while the friction coefficients of the two beads describe the catch and slip-bond behaviors of the integrins in focal adhesions. In the absence of active noise, net motion arises from an interplay between active contractility (and passive extendability) of the stress fibers and an asymmetry between the front and back of the cell due to catch-bond behavior of integrins at the front of the cell and slip-bond behavior of integrins at the back. We obtain reasonable cell speeds with independently estimated parameters. We also study the effects of hysteresis in the active spring, due to catch-bond behavior and the dynamics of cross linking, and the addition of active noise on the motion of the cell. Our model highlights the role of α-actinin in three-dimensional cell motility and does not require Arp2/3 actin filament nucleation for net motion.
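
A minimal numerical sketch of such a two-bead, one-dimensional model: the active spring is given an oscillating rest length, and the catch/slip-bond frictions are approximated by exponential force dependences. The parameter values and functional forms are illustrative toy assumptions, not the paper's model:

```python
import numpy as np

def simulate_dimer(T=50.0, dt=0.01, k=1.0, g0=1.0, F0=1.0,
                   L0=1.0, A=0.3, period=10.0):
    """Overdamped two-bead 'cell' on a rigid track: an active spring with
    oscillating rest length L(t) connects a back bead (slip-like friction)
    to a front bead (catch-like friction). Returns bead trajectories."""
    x1, x2 = 0.0, L0                      # back bead, front bead
    traj = [(x1, x2)]
    omega = 2.0 * np.pi / period
    for i in range(int(T / dt)):
        L = L0 + A * np.sin(omega * i * dt)   # active contraction/extension
        F = k * ((x2 - x1) - L)               # tension (>0) pulls beads together
        g_back = g0 * np.exp(-F / F0)         # slip-like: weakens under tension
        g_front = g0 * np.exp(F / F0)         # catch-like: strengthens under tension
        x1 += dt * F / g_back                 # forward-biased steps for the back bead
        x2 += dt * (-F) / g_front             # front bead resists being pulled back
        traj.append((x1, x2))
    return np.array(traj)
```

Because the centre-of-mass velocity works out to (F/g0)·sinh(F/F0) ≥ 0 in this toy version, the cell ratchets forward whenever the spring is loaded, mirroring the front/back asymmetry mechanism the abstract describes.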

  13. Quantized vortices and superflow in arbitrary dimensions: structure, energetics and dynamics

    NASA Astrophysics Data System (ADS)

    Goldbart, Paul M.; Bora, Florin

    2009-05-01

    The structure and energetics of superflow around quantized vortices, and the motion inherited by these vortices from this superflow, are explored in the general setting of a superfluid in arbitrary dimensions. The vortices may be idealized as objects of codimension 2, such as one-dimensional loops and two-dimensional closed surfaces, respectively, in the cases of three- and four-dimensional superfluidity. By using the analogy between the vortical superflow and Ampère-Maxwell magnetostatics, the equilibrium superflow containing any specified collection of vortices is constructed. The energy of the superflow is found to take on a simple form for vortices that are smooth and asymptotically large, compared with the vortex core size. The motion of vortices is analyzed in general, as well as for the special cases of hyper-spherical and weakly distorted hyper-planar vortices. In all dimensions, vortex motion reflects vortex geometry. In dimension 4 and higher, this includes not only extrinsic but also intrinsic aspects of the vortex shape, which enter via the first and second fundamental forms of classical geometry. For hyper-spherical vortices, which generalize the vortex rings of three-dimensional superfluidity, the energy-momentum relation is determined. Simple scaling arguments recover the essential features of these results, up to numerical and logarithmic factors.

  14. Automated flight path planning for virtual endoscopy.

    PubMed

    Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S

    1998-05-01

    In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time-consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation, with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images.
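
The idea of iteratively correcting a path toward the medial axis can be approximated by pushing path points uphill on a distance-to-boundary map and re-smoothing. A hedged 2D sketch using SciPy's Euclidean distance transform; the function, step sizes and smoothing scheme are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def center_path(path, lumen, iters=50, step=0.5):
    """Iteratively push (row, col) path points uphill on the distance-to-
    boundary map of a binary lumen mask, keeping the path smooth, so the
    path drifts toward a centred, medial-axis-like position."""
    dist = distance_transform_edt(lumen)          # distance to nearest wall
    gy, gx = np.gradient(dist.astype(float))      # uphill direction field
    p = path.astype(float).copy()
    for _ in range(iters):
        iy = np.clip(p[:, 0].round().astype(int), 0, lumen.shape[0] - 1)
        ix = np.clip(p[:, 1].round().astype(int), 0, lumen.shape[1] - 1)
        p[:, 0] += step * gy[iy, ix]              # move toward deeper points
        p[:, 1] += step * gx[iy, ix]
        # light smoothing of interior points avoids sudden twists and turns
        p[1:-1] = 0.25 * p[:-2] + 0.5 * p[1:-1] + 0.25 * p[2:]
    return p
```

On a straight synthetic tube this drives an off-centre path onto the tube's centreline; the full method extends the same correction to 3D volumes.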

  15. Dust as a versatile matter for high-temperature plasma diagnostic.

    PubMed

    Wang, Zhehui; Ticos, Catalin M

    2008-10-01

    Dust varies from a few nanometers to a fraction of a millimeter in size. Dust also offers essentially unlimited choices in material composition and structure. The potential of dust for high-temperature plasma diagnostics remains largely unfulfilled. The principles of dust spectroscopy to measure internal magnetic field, microparticle tracer velocimetry to measure plasma flow, and dust photometry to measure heat flux are described. The two main components of the different dust diagnostics are a dust injector and a dust imaging system. The dust injector delivers a certain number of dust grains into a plasma. The imaging system collects and selectively detects certain photons resulting from the dust-plasma interaction. One piece of dust gives the local plasma quantity, while a collection of dust grains together reveals either the two-dimensional (using one or two imaging cameras) or three-dimensional (using two or more imaging cameras) structure of the measured quantity. A generic conceptual design suitable for all three types of dust diagnostics is presented.
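
The three-dimensional reconstruction from two imaging cameras mentioned above reduces, for an idealized parallel-axis camera pair, to textbook stereo triangulation. A minimal sketch with hypothetical baseline, focal length and pixel pitch (not values from the diagnostic described here):

```python
def triangulate(xl, yl, xr, b=0.1, f=0.008, pix=1e-5):
    """Parallel-axis binocular stereo: recover the (X, Y, Z) position of a
    grain from its pixel coordinates in the left (xl, yl) and right (xr)
    images, given baseline b, focal length f and pixel pitch pix (metres)."""
    d = (xl - xr) * pix          # disparity on the sensor, in metres
    Z = f * b / d                # depth from similar triangles
    X = xl * pix * Z / f         # back-project through the left camera
    Y = yl * pix * Z / f
    return X, Y, Z
```

Tracking the same grain in synchronized frames from both cameras and triangulating each frame yields the three-dimensional trajectory from which velocity, and hence the local plasma quantity, is inferred.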

  16. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins.

    PubMed

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N Samba; Gopalaswamy, Arjun M; Karanth, K Ullas

    2009-06-23

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.
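
One simple stand-in for the "combination of algorithms" that score similarity between scanned pattern samples is normalised cross-correlation over equal-size samples. A minimal sketch for illustration only (not the published software):

```python
import numpy as np

def similarity(a, b):
    """Normalised cross-correlation between two equal-size pattern samples;
    1.0 for identical patterns, near 0 for unrelated ones."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def best_match(query, catalogue):
    """Score the query sample against every catalogue sample and return the
    index of the best match plus the full score list for manual review."""
    scores = [similarity(query, s) for s in catalogue]
    return int(np.argmax(scores)), scores
```

In practice the 3D surface model is what makes this workable: scanning each image through the model normalises camera angle and body posture before samples like these are ever compared.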

  17. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins

    PubMed Central

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N. Samba; Gopalaswamy, Arjun M.; Karanth, K. Ullas

    2009-01-01

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin. PMID:19324633

  18. A four-dimensional motion field atlas of the tongue from tagged and cine magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Xing, Fangxu; Prince, Jerry L.; Stone, Maureen; Wedeen, Van J.; El Fakhri, Georges; Woo, Jonghye

    2017-02-01

    Representation of human tongue motion using three-dimensional vector fields over time can be used to better understand tongue function during speech, swallowing, and other lingual behaviors. To characterize the inter-subject variability of the tongue's shape and motion of a population carrying out one of these functions, it is desirable to build a statistical model of the four-dimensional (4D) tongue. In this paper, we propose a method to construct a spatio-temporal atlas of tongue motion using magnetic resonance (MR) images acquired from fourteen healthy human subjects. First, cine MR images revealing the anatomical features of the tongue are used to construct a 4D intensity image atlas. Second, tagged MR images acquired to capture internal motion are used to compute a dense motion field at each time frame using a phase-based motion tracking method. Third, motion fields from each subject are pulled back to the cine atlas space using the deformation fields computed during the cine atlas construction. Finally, a spatio-temporal motion field atlas is created to show a sequence of mean motion fields and their inter-subject variation. The quality of the atlas was evaluated by deforming cine images in the atlas space. Comparison between deformed and original cine images showed high correspondence. The proposed method provides a quantitative representation to observe the commonality and variability of the tongue motion field for the first time, and shows potential in evaluation of common properties such as strains and other tensors based on motion fields.
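
The final atlas step, once every subject's motion field has been pulled back into the common cine atlas space, amounts to voxel-wise statistics over subjects. A minimal sketch under that assumption (array layout is hypothetical, not the authors' data format):

```python
import numpy as np

def motion_field_atlas(fields):
    """Given per-subject motion fields already resampled into a common atlas
    space, stacked as (subjects, time, X, Y, Z, 3), return the voxel-wise
    mean motion field and the inter-subject standard deviation field."""
    fields = np.asarray(fields)
    return fields.mean(axis=0), fields.std(axis=0)
```

The mean field gives the atlas's "typical" motion at each voxel and time frame, while the standard deviation field maps where subjects differ most, which is exactly the commonality/variability split the abstract describes.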

  19. A Four-dimensional Motion Field Atlas of the Tongue from Tagged and Cine Magnetic Resonance Imaging.

    PubMed

    Xing, Fangxu; Prince, Jerry L; Stone, Maureen; Wedeen, Van J; Fakhri, Georges El; Woo, Jonghye

    2017-01-01

    Representation of human tongue motion using three-dimensional vector fields over time can be used to better understand tongue function during speech, swallowing, and other lingual behaviors. To characterize the inter-subject variability of the tongue's shape and motion of a population carrying out one of these functions, it is desirable to build a statistical model of the four-dimensional (4D) tongue. In this paper, we propose a method to construct a spatio-temporal atlas of tongue motion using magnetic resonance (MR) images acquired from fourteen healthy human subjects. First, cine MR images revealing the anatomical features of the tongue are used to construct a 4D intensity image atlas. Second, tagged MR images acquired to capture internal motion are used to compute a dense motion field at each time frame using a phase-based motion tracking method. Third, motion fields from each subject are pulled back to the cine atlas space using the deformation fields computed during the cine atlas construction. Finally, a spatio-temporal motion field atlas is created to show a sequence of mean motion fields and their inter-subject variation. The quality of the atlas was evaluated by deforming cine images in the atlas space. Comparison between deformed and original cine images showed high correspondence. The proposed method provides a quantitative representation to observe the commonality and variability of the tongue motion field for the first time, and shows potential in evaluation of common properties such as strains and other tensors based on motion fields.

  20. Three-dimensional separation and reattachment

    NASA Technical Reports Server (NTRS)

    Peake, D. J.; Tobak, M.

    1982-01-01

    The separation of three-dimensional turbulent boundary layers from the lee of flight vehicles at high angles of attack is investigated. The separation results in dominant, large-scale, coiled vortex motions that pass along the body in the general direction of the free stream. In all cases of three-dimensional flow separation and reattachment, the assumption of continuous vector fields of skin friction lines and external flow streamlines, coupled with simple laws of topology, provides a flow grammar whose elemental constituents are the singular points: the nodes, spiral nodes (foci), and saddles. The phenomenon of three-dimensional separation may be construed as either a local or a global event, depending on whether the skin friction line that becomes a line of separation originates at a node or a saddle point.

Top