Sample records for camera motion parameters

  1. Motion capture for human motion measuring by using single camera with triangle markers

    NASA Astrophysics Data System (ADS)

    Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi

    2005-12-01

    This study aims to realize motion capture that measures 3D human motion with a single camera. Although motion capture using multiple cameras is widely used in sports, medicine, engineering and other fields, no optical motion capture method using one camera has been established. In this paper, the authors achieve 3D motion capture with one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration produces the 3D coordinate transformation parameters and a lens distortion parameter with the modified DLT method. The triangle markers make it possible to calculate the depth coordinate in the camera coordinate system. Experiments measuring 3D position with the MMC in a cubic measurement space of 2 m on each side show that the average error in the measured center of gravity of a triangle marker was less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. By attaching a triangle marker to each human joint, the MMC was able to capture walking, standing-up, and bending and stretching motions. In addition, a method using triangle markers together with conventional spherical markers was proposed. Finally, a method to estimate the position of a marker from its measured velocity was proposed in order to improve the accuracy of the MMC.
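
    A minimal sketch of the depth-from-marker idea described above, assuming a pinhole camera and a fronto-parallel marker edge; the function name and the numbers are illustrative, not from the paper:

    ```python
    def marker_depth(side_px, side_m, focal_px):
        """Depth of a fronto-parallel marker edge of known length.

        Under the pinhole model, an edge of true length L metres that
        images to l pixels lies at depth Z = f * L / l (f in pixels).
        """
        return focal_px * side_m / side_px

    # Example: a 0.10 m triangle side spanning 80 px under a 1000 px focal length
    print(marker_depth(80.0, 0.10, 1000.0))  # -> 1.25 (metres)
    ```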

  2. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detect scene cuts by analyzing motion vectors. For the second task, a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. To guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without any need for video decompression. Experimental results are reported for a database of news video clips.
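
    The pan/tilt/zoom fit lends itself to a compact linear least-squares sketch. The 3-parameter model below (pan and tilt as a uniform translation, zoom as a radial term) is one common simplification and is not necessarily the authors' exact parameterization:

    ```python
    import numpy as np

    def fit_pan_tilt_zoom(xy, uv):
        """Fit u = pan + zoom*x, v = tilt + zoom*y to MPEG-1 motion vectors.

        xy: Nx2 block centres relative to the image centre.
        uv: Nx2 motion vectors for those blocks.
        """
        x, y = xy[:, 0], xy[:, 1]
        n = len(x)
        A = np.zeros((2 * n, 3))
        A[:n, 0] = 1.0; A[:n, 2] = x      # u-equations
        A[n:, 1] = 1.0; A[n:, 2] = y      # v-equations
        b = np.concatenate([uv[:, 0], uv[:, 1]])
        (pan, tilt, zoom), *_ = np.linalg.lstsq(A, b, rcond=None)
        return pan, tilt, zoom
    ```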

  3. Effects of Different Camera Motions on the Error in Estimates of Epipolar Geometry between Two Dimensional Images in Order to Provide a Framework for Solutions to Vision Based Simultaneous Localization and Mapping (SLAM)

    DTIC Science & Technology

    2007-09-01

    the projective camera matrix (P), which is a 3x4 matrix that represents both the intrinsic and extrinsic parameters of a camera. It is used to... K contains the intrinsic parameters of the camera and [R | t] represents the extrinsic parameters of the camera. By definition, the extrinsic ... extrinsic parameters are known then the camera is said to be calibrated. If only the intrinsic parameters are known, then the projective camera can
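
    The decomposition P = K[R | t] that the excerpt references can be made concrete in a few lines; this is the textbook construction, with illustrative intrinsic values:

    ```python
    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],      # intrinsics: focal lengths and
                  [0.0, 800.0, 240.0],      # principal point, in pixels
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                           # extrinsic rotation
    t = np.array([[0.0], [0.0], [2.0]])     # extrinsic translation

    P = K @ np.hstack([R, t])               # 3x4 projective camera matrix

    X = np.array([0.1, -0.2, 3.0, 1.0])     # homogeneous world point
    x = P @ X
    print(x[:2] / x[2])                     # pixel coordinates
    ```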

  4. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
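
    The block-level plane model can be illustrated with the standard plane-induced homography; sign conventions for the plane equation vary, and this is not claimed to be the paper's exact coding parameterization:

    ```python
    import numpy as np

    def plane_homography(K, R, t, n, d):
        """Homography induced by the scene plane n.X = d between two views
        with relative pose (R, t): H = K (R - t n^T / d) K^{-1}.
        K: 3x3 intrinsics; R: 3x3 rotation; t, n: 3-vectors; d: scalar."""
        return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    ```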

  5. Human silhouette matching based on moment invariants

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi

    2005-07-01

    This paper applies silhouette matching based on moment invariants to infer human motion parameters from video sequences captured by a single uncalibrated monocular camera. Currently, there are two approaches to tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the contents of the input video. A standard 3D motion database is built in advance using marker-based capture. Given a video sequence, human silhouettes are extracted along with the viewpoint information of the camera, which is used to project the standard 3D motion database onto a 2D one. The video recovery problem is thereby formulated as a matching problem: finding the body pose in the standard 2D library most similar to the one in the video image. The framework is applied to trampoline sport, where we obtain complicated human motion parameters from single-camera video sequences, and extensive experiments demonstrate that this approach is feasible in the field of monocular video-based 3D motion reconstruction.
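
    A minimal sketch of moment-invariant silhouette matching with OpenCV's Hu moments, assuming binary uint8 silhouette images; the log scaling is conventional, and the exact similarity measure used by the authors is not specified here:

    ```python
    import cv2
    import numpy as np

    def silhouette_distance(sil_a, sil_b):
        """Distance between two binary silhouette images based on their
        seven Hu moment invariants (log-scaled to tame the dynamic range)."""
        ha = cv2.HuMoments(cv2.moments(sil_a, binaryImage=True)).ravel()
        hb = cv2.HuMoments(cv2.moments(sil_b, binaryImage=True)).ravel()
        la = -np.sign(ha) * np.log10(np.abs(ha) + 1e-30)
        lb = -np.sign(hb) * np.log10(np.abs(hb) + 1e-30)
        return np.linalg.norm(la - lb)
    ```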

  6. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its hand motion tracking capability. The evaluation was assessed in terms of the position accuracy of the tracking trajectory in the x, y and z directions in the camera space and the time difference between image acquisition and image display. Three parameters were analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
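
    A sketch of the core tracking step, assuming the hand is the only object inside a depth gate; the gate limits and window handling are illustrative, not the study's parameters:

    ```python
    import cv2
    import numpy as np

    def track_hand(depth, window, z_min=0.3, z_max=1.0):
        """One mean-shift update on a probability map derived from a range
        image: pixels inside the depth gate [z_min, z_max] get full weight."""
        prob = ((depth > z_min) & (depth < z_max)).astype(np.uint8) * 255
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
        _, window = cv2.meanShift(prob, window, criteria)
        return window  # updated (x, y, w, h) search window
    ```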

  7. A vision-based system for measuring the displacements of large structures: Simultaneous adaptive calibration and full motion estimation

    NASA Astrophysics Data System (ADS)

    Santos, C. Almeida; Costa, C. Oliveira; Batista, J.

    2016-05-01

    The paper describes a kinematic model-based solution to estimate simultaneously the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely long-deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, helping to fulfil structural health monitoring requirements. Results related to the performance evaluation, obtained by numerical simulation and in real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced structure model to impose controlled motions. In both cases, the results obtained with a minimum setup comprising only two cameras and four non-coplanar tracking points showed high accuracy for on-line camera calibration and structure full-motion estimation.

  8. Experimental investigation of strain errors in stereo-digital image correlation due to camera calibration

    NASA Astrophysics Data System (ADS)

    Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan

    2018-03-01

    The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.

  9. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatching in the KLT algorithm. A space position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
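
    A sketch of the minimization step using one standard instantaneous motion-field model (normalized image coordinates, focal length 1); the paper's exact objective may differ:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def flow_residuals(m, x, y, Z, u_obs, v_obs):
        """Residuals between observed optical flow (u_obs, v_obs) and the
        flow predicted for ego-motion m = (tx, ty, tz, wx, wy, wz) at
        normalized coordinates (x, y) with per-point depth Z."""
        tx, ty, tz, wx, wy, wz = m
        u = (-tx + x * tz) / Z + x * y * wx - (1 + x**2) * wy + y * wz
        v = (-ty + y * tz) / Z + (1 + y**2) * wx - x * y * wy - x * wz
        return np.concatenate([u - u_obs, v - v_obs])

    # Six motion parameters via Levenberg-Marquardt:
    # fit = least_squares(flow_residuals, np.zeros(6), method='lm',
    #                     args=(x, y, Z, u_obs, v_obs))
    ```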

  10. Automatic techniques for 3D reconstruction of critical workplace body postures from range imaging data

    NASA Astrophysics Data System (ADS)

    Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin

    2013-11-01

    The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.

  11. Pose-free structure from motion using depth from motion constraints.

    PubMed

    Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G

    2011-10-01

    Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimating the geometry of a scene is to track scene features over several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned, in addition to involving far fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point position coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are only a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE

  12. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  13. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  14. Optimising rigid motion compensation for small animal brain PET imaging

    NASA Astrophysics Data System (ADS)

    Spangler-Bickell, Matthew G.; Zhou, Lin; Kyme, Andre Z.; De Laat, Bart; Fulton, Roger R.; Nuyts, Johan

    2016-10-01

    Motion compensation (MC) in PET brain imaging of awake small animals is attracting increased attention in preclinical studies since it avoids the confounding effects of anaesthesia and enables behavioural tests during the scan. A popular MC technique is to use multiple external cameras to track the motion of the animal’s head, which is assumed to be represented by the motion of a marker attached to its forehead. In this study we have explored several methods to improve the experimental setup and the reconstruction procedures of this method: optimising the camera-marker separation; improving the temporal synchronisation between the motion tracker measurements and the list-mode stream; post-acquisition smoothing and interpolation of the motion data; and list-mode reconstruction with appropriately selected subsets. These techniques have been tested and verified on measurements of a moving resolution phantom and brain scans of an awake rat. The proposed techniques improved the reconstructed spatial resolution of the phantom by 27% and of the rat brain by 14%. We suggest a set of optimal parameter values to use for awake animal PET studies and discuss the relative significance of each parameter choice.

  15. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    PubMed

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion from videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded-up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and taking camera scaling as well as conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
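
    A condensed sketch of the per-frame global motion estimate. ORB stands in here for SURF (which sits in opencv-contrib and is patent-encumbered); Kalman smoothing of the accumulated (dx, dy, da, ds) trajectory would follow as a separate step:

    ```python
    import cv2
    import numpy as np

    def interframe_motion(prev_gray, cur_gray):
        """Estimate global translation, rotation and scale between frames."""
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(prev_gray, None)
        k2, d2 = orb.detectAndCompute(cur_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])
        # RANSAC rejects false matches while fitting the similarity model
        M, _ = cv2.estimateAffinePartial2D(p1, p2, method=cv2.RANSAC)
        dx, dy = M[0, 2], M[1, 2]
        da = np.arctan2(M[1, 0], M[0, 0])    # rotation
        ds = np.hypot(M[0, 0], M[1, 0])      # scale
        return dx, dy, da, ds
    ```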

  16. Video pulse rate variability analysis in stationary and motion conditions.

    PubMed

    Melchor Rodríguez, Angel; Ramos-Castro, J

    2018-01-29

    In the last few years, some studies have measured heart rate (HR) or heart rate variability (HRV) parameters using a video camera. This technique focuses on measuring the small changes in skin colour caused by blood perfusion. To date, most of these works have obtained HRV parameters in stationary conditions, and there are practically no studies that obtain these parameters in motion scenarios with an in-depth statistical analysis. In this study, a video pulse rate variability (PRV) analysis is conducted by measuring the pulse-to-pulse (PP) intervals in stationary and motion conditions. Firstly, given the importance of the sampling rate in a PRV analysis and the low frame rate of commercial cameras, we carried out an analysis of two camera models to evaluate their performance in the measurements. We propose a selective tracking method using the Viola-Jones and KLT algorithms, with the aim of carrying out a robust video PRV analysis in stationary and motion conditions. Data and results of the proposed method are contrasted with those reported in the state of the art. The webcam achieved better results in the performance analysis of the video cameras. In stationary conditions, high correlation values were obtained in PRV parameters, with results above 0.9. The PP time series achieved an RMSE (mean ± standard deviation) of 19.45 ± 5.52 ms (1.70 ± 0.75 bpm). In the motion analysis, most of the PRV parameters also achieved good correlation results, but with lower values than in stationary conditions. The PP time series presented an RMSE of 21.56 ± 6.41 ms (1.79 ± 0.63 bpm). The statistical analysis showed good agreement between the reference system and the proposed method. In stationary conditions, the results of PRV parameters were improved by our method in comparison with data reported in related works. An overall comparative analysis of PRV parameters in motion conditions was more limited due to the lack of studies or studies containing insufficient data analysis. Based on the results, the proposed method could provide a low-cost, contactless and reliable alternative for measuring HR or PRV parameters in non-clinical environments.
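
    A bare-bones sketch of the signal path (face ROI, mean green channel, pulse-to-pulse intervals from peaks); the study's selective KLT tracking and camera-model comparison are omitted, and the minimum peak spacing is an illustrative assumption:

    ```python
    import cv2
    import numpy as np
    from scipy.signal import find_peaks

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def pp_intervals(frames, fps):
        """Pulse-to-pulse intervals (seconds) from a list of BGR frames."""
        x, y, w, h = cascade.detectMultiScale(
            cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY))[0]
        green = np.array([f[y:y+h, x:x+w, 1].mean() for f in frames])
        green -= green.mean()
        peaks, _ = find_peaks(green, distance=int(fps * 0.4))  # >= 0.4 s apart
        return np.diff(peaks) / fps
    ```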

  17. Development of a new calibration procedure and its experimental validation applied to a human motion capture system.

    PubMed

    Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge

    2014-12-01

    Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of that new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimators of intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed in a wand. The second error is the error of position and orientation of the retroreflective markers of a static calibration object. The real co-ordinates of the two objects are calibrated in a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. Results are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.

  18. Utilizing Commercial Hardware and Open Source Computer Vision Software to Perform Motion Capture for Reduced Gravity Flight

    NASA Technical Reports Server (NTRS)

    Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth

    2016-01-01

    Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and researchers funded by the National Space Biomedical Research Institute (NSBRI) by developing computational models of exercising with these new advanced exercise device concepts. To validate these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to use traditional motion capture systems because of the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed using open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than laboratory setups, it provides a means to produce quantitative, comparable motion capture kinematic data. Additionally, data such as the required exercise volume for small spaces such as the Orion capsule can be determined. METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera 3-dimensional reconstruction. Using OpenCV, via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces with commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove the geometric distortion of the lens and sensor (specific to each individual camera). A set of high-contrast markers was placed on the exercising subject (safety also necessitated that they be soft in case they became detached during parabolic flight); small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration, in which the camera scenes are swept simultaneously, was performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in side-by-side comparison with a traditional motion capture system and also on a parabolic flight.
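
    Since the text names OpenCV and Python explicitly, the two central calls can be sketched directly; array shapes and names are illustrative:

    ```python
    import cv2
    import numpy as np

    # Intrinsic calibration from checkerboard correspondences gathered
    # per view (obj_pts: list of Nx3 arrays, img_pts: list of Nx2 arrays):
    # rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    #     obj_pts, img_pts, image_size, None, None)

    def triangulate(P1, P2, pt1, pt2):
        """3D marker position from two views, given 3x4 projection matrices
        (from extrinsic calibration) and undistorted pixel coordinates."""
        X = cv2.triangulatePoints(P1, P2,
                                  np.float32(pt1).reshape(2, 1),
                                  np.float32(pt2).reshape(2, 1))
        return (X[:3] / X[3]).ravel()
    ```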

  19. Integrating motion, illumination, and structure in video sequences with applications in illumination-invariant tracking.

    PubMed

    Xu, Yilei; Roy-Chowdhury, Amit K

    2007-05-01

    In this paper, we present a theory for combining the effects of motion, illumination, 3D structure, albedo, and camera parameters in a sequence of images obtained by a perspective camera. We show that the set of all Lambertian reflectance functions of a moving object, at any position, illuminated by arbitrarily distant light sources, lies "close" to a bilinear subspace consisting of nine illumination variables and six motion variables. This result implies that, given an arbitrary video sequence, it is possible to recover the 3D structure, motion, and illumination conditions simultaneously using the bilinear subspace formulation. The derivation builds upon existing work on linear subspace representations of reflectance by generalizing it to moving objects. Lighting can change slowly or suddenly, locally or globally, and can originate from a combination of point and extended sources. We experimentally compare the results of our theory with ground truth data and also provide results on real data by using video sequences of a 3D face and the entire human body with various combinations of motion and illumination directions. We also show results of our theory in estimating 3D motion and illumination model parameters from a video sequence.

  20. Bio-inspired motion detection in an FPGA-based smart camera module.

    PubMed

    Köhler, T; Röchter, F; Lindemann, J P; Möller, R

    2009-03-01

    Flying insects, despite their relatively coarse vision and tiny nervous system, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10,000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows a flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, such that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation can be performed by the same compact device.
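
    The correlation-type EMD that the camera module implements in hardware can be sketched in a few lines of NumPy; the time constant and array layout are illustrative:

    ```python
    import numpy as np

    def reichardt_emd(I, tau=0.9):
        """Hassenstein-Reichardt EMD array over a photoreceptor row.
        I: (T, N) array of receptor signals over time. Each EMD correlates
        a receptor with a low-pass (delayed) copy of its neighbour and
        subtracts the mirror term, giving a direction-selective response."""
        lp = np.zeros_like(I, dtype=float)
        for t in range(1, len(I)):          # first-order low-pass = delay
            lp[t] = tau * lp[t - 1] + (1 - tau) * I[t]
        return lp[:, :-1] * I[:, 1:] - I[:, :-1] * lp[:, 1:]
    ```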

  2. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.

  3. Heliostat calibration using attached cameras and artificial targets

    NASA Astrophysics Data System (ADS)

    Burisch, Michael; Sanchez, Marcelino; Olarra, Aitor; Villasante, Cristobal

    2016-05-01

    The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. To control the heliostats with such a precision requires the accurate knowledge of the motion of each of them. The motion of each heliostat can be described by a set of parameters, most notably the position and axis configuration. These parameters have to be determined individually for each heliostat during a calibration process. With the ongoing development of small sized heliostats, the ability to automatically perform such a calibration becomes more and more crucial as possibly hundreds of thousands of heliostats are involved. Furthermore, efficiency becomes an important factor as small sized heliostats potentially have to be recalibrated far more often, due to the limited stability of the components. In the following we present an automatic calibration procedure using cameras attached to each heliostat which are observing different targets spread throughout the solar field. Based on a number of observations of these targets under different heliostat orientations, the parameters describing the heliostat motion can be estimated with high precision.

  4. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment, but their continual technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis for sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since both land and underwater cameras are required. In particular, underwater camera calibration can be an issue affecting reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  6. Method for separating video camera motion from scene motion for constrained 3D displacement measurements

    NASA Astrophysics Data System (ADS)

    Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.

    2014-09-01

    Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases, it is practically impossible to prevent camera motion, as for instance, when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to remove the camera movement from the scene measurements.
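
    The ex post facto correction reduces to a subtraction once the calibration phase has mapped reference-camera pixels to scene-camera pixels; a single linear gain is assumed here for brevity:

    ```python
    import numpy as np

    def fit_gain(scene_cal, ref_cal):
        """Calibration phase: with the scene object known to be stationary,
        the apparent scene motion is pure camera motion, so fit the
        pixel-space gain by least squares."""
        return np.dot(scene_cal, ref_cal) / np.dot(ref_cal, ref_cal)

    def corrected_displacement(scene_px, ref_px, gain):
        """Measurement phase: subtract the camera-induced apparent motion."""
        return scene_px - gain * ref_px
    ```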

  7. Restoration of motion blurred images

    NASA Astrophysics Data System (ADS)

    Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.

    2017-08-01

    Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, and relative motion between the camera and objects, among others. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of a captured image in order to first estimate the degradation parameters and then restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of the accuracy of image restoration given by an objective criterion.
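
    Once blur length and angle have been estimated from the frequency spectrum, restoration with a linear filter can be sketched as Wiener deconvolution; the PSF builder and noise-to-signal ratio below are illustrative:

    ```python
    import numpy as np

    def motion_psf(shape, length, angle_deg):
        """Linear motion-blur PSF of given length (pixels) and angle."""
        psf = np.zeros(shape)
        cy, cx = shape[0] // 2, shape[1] // 2
        a = np.deg2rad(angle_deg)
        for s in np.linspace(-length / 2, length / 2, 4 * int(length)):
            psf[int(round(cy + s * np.sin(a))),
                int(round(cx + s * np.cos(a)))] = 1.0
        return psf / psf.sum()

    def wiener_restore(blurred, psf, nsr=0.01):
        """Restore with a Wiener filter given the estimated PSF."""
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(blurred)
        return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H)**2 + nsr) * G))
    ```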

  8. Clinical Gait Evaluation of Patients with Lumbar Spine Stenosis.

    PubMed

    Sun, Jun; Liu, Yan-Cheng; Yan, Song-Hua; Wang, Sha-Sha; Lester, D Kevin; Zeng, Ji-Zhou; Miao, Jun; Zhang, Kuan

    2018-02-01

    The third generation Intelligent Device for Energy Expenditure and Activity (IDEEA3, MiniSun, CA) has been developed for clinical gait evaluation, and this study was designed to evaluate the accuracy and reliability of IDEEA3 for the gait measurement of lumbar spinal stenosis (LSS) patients. Twelve healthy volunteers were recruited to compare gait cycle, cadence, step length, velocity, and number of steps between a motion analysis system and a high-speed video camera. Twenty hospitalized LSS patients were recruited for the comparison of the five parameters between the IDEEA3 and GoPro camera. Paired t-test, intraclass correlation coefficient, concordance correlation coefficient, and Bland-Altman plots were used for the data analysis. The ratios of GoPro camera results to motion analysis system results, and the ratios of IDEEA3 results to GoPro camera results were all around 1.00. All P-values of paired t-tests for gait cycle, cadence, step length, and velocity were greater than 0.05, while all the ICC and CCC results were above 0.950 with P < 0.001. The measurements for gait cycle, cadence, step length, velocity, and number of steps with the GoPro camera are highly consistent with the measurements with the motion analysis system. The measurements for IDEEA3 are consistent with those for the GoPro camera. IDEEA3 can be effectively used in the gait measurement of LSS patients. © 2018 Chinese Orthopaedic Association and John Wiley & Sons Australia, Ltd.
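
    Two of the reported statistics are easy to reproduce in a minimal sketch with SciPy (ICC and CCC, which need more machinery, are omitted); the function name is illustrative:

    ```python
    import numpy as np
    from scipy import stats

    def agreement_summary(a, b):
        """Paired t-test plus Bland-Altman bias and limits of agreement
        between two methods (e.g., IDEEA3 vs. GoPro step lengths)."""
        t, p = stats.ttest_rel(a, b)
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias, half_width = diff.mean(), 1.96 * diff.std(ddof=1)
        return {"t": t, "p": p, "bias": bias,
                "loa": (bias - half_width, bias + half_width)}
    ```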

  9. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision with broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard calibration method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between matching points and 3D object points, i.e. the 3D information, can be established using the calibrated camera parameters.
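
    The SGBM step maps to a single OpenCV call; the parameter values below are illustrative defaults for rectified grayscale input, not the paper's settings:

    ```python
    import cv2

    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5)
    # disparity = sgbm.compute(left_gray, right_gray).astype("float32") / 16.0
    # Depth then follows from Z = f * B / disparity, with focal length f and
    # baseline B taken from the calibration step.
    ```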

  10. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.

  11. An overview of the stereo correlation and triangulation formulations used in DICe.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Daniel Z.

    This document provides a detailed overview of the stereo correlation algorithm and triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three dimensional motion in space given the image coordinates and camera calibration parameters.
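
    The generic linear (DLT) triangulation that such a formulation builds on can be written compactly; this is the textbook version, not necessarily DICe's exact formulation:

    ```python
    import numpy as np

    def triangulate_dlt(P0, P1, x0, x1):
        """Triangulate one point from two views. Each view contributes two
        rows of the homogeneous system A X = 0 derived from x cross (P X) = 0;
        the solution is the right singular vector with the smallest
        singular value."""
        A = np.vstack([x0[0] * P0[2] - P0[0],
                       x0[1] * P0[2] - P0[1],
                       x1[0] * P1[2] - P1[0],
                       x1[1] * P1[2] - P1[1]])
        X = np.linalg.svd(A)[2][-1]
        return X[:3] / X[3]
    ```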

  12. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

    A novel algorithm for motion-blurred image restoration based on half-blind PSF estimation with the Hough transform is introduced, building on a full analysis of the principle of the TDICCD camera and addressing the problem that using a vertical uniform linear motion estimate as the initial PSF value in the IBD algorithm leads to distortion in the restored image. Firstly, a mathematical model of image degradation is established using the prior information of multi-frame images, and the two parameters that have a crucial influence on PSF estimation (motion blur length and angle) are set accordingly. Finally, the restored image is obtained through multiple iterations in the Fourier domain starting from the initial PSF estimate gained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detailed characteristics of the original image.

  13. Spatiotemporal motion boundary detection and motion boundary velocity estimation for tracking moving objects with a moving camera: a level sets PDEs approach with concurrent camera motion compensation.

    PubMed

    Feghali, Rosario; Mitiche, Amar

    2004-11-01

    The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method estimates simultaneously the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.

  14. Variation in detection among passive infrared triggered-cameras used in wildlife research

    USGS Publications Warehouse

    Damm, Philip E.; Grand, James B.; Barnett, Steven W.

    2010-01-01

    Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.

  15. Three-dimensional cinematography with control object of unknown shape.

    PubMed

    Dapena, J; Harman, E A; Miller, J A

    1982-01-01

    A technique for the reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which encloses the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5 m x 5 m x 1.5 m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.

  16. 4D Animation Reconstruction from Multi-Camera Coordinates Transformation

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Chou, C. M.

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-efficient way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to hold back the water, construct the intake of the tunnel inside it and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) underwater to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e., launching, dragging, water injection and sinking. To increase construction safety, photogrammetric techniques are adopted to record images during the simulation, compute the transformation parameters for dynamic analysis and reconstruct 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations are computed by space resection, in which the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e., performing a 3D conformal transformation from the camera coordinates, and computing the relative orientation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, and the relative orientation computation offers flexibility for dynamic motion analysis, which is easier and more efficient.
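
    The 3D conformal (similarity) transformation between two epochs of coded-target coordinates has a closed-form SVD solution (Umeyama/Kabsch), sketched below under the assumption of clean point correspondences:

    ```python
    import numpy as np

    def conformal_transform(src, dst):
        """Estimate s, R, t with dst ~ s * R @ src + t from Nx3 arrays of
        corresponding points."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A / len(src))
        D = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            D[2, 2] = -1.0                  # guard against reflections
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) * len(src) / (A ** 2).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t
    ```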

  17. HDR video synthesis for vision systems in dynamic scenes

    NASA Astrophysics Data System (ADS)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
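
    The weighted averaging of aligned radiance maps reduces to a few array operations; the hat-shaped weight is a common choice and stands in for whatever weighting the authors use:

    ```python
    import numpy as np

    def merge_hdr(frames, exposures):
        """Merge aligned 8-bit LDR frames (list of arrays) taken with the
        given exposure times into one radiance map."""
        acc = np.zeros(frames[0].shape, np.float64)
        wacc = np.zeros_like(acc)
        for img, t in zip(frames, exposures):
            z = img.astype(np.float64) / 255.0
            w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight: 0 at both extremes
            acc += w * z / t                  # divide by exposure -> radiance
            wacc += w
        return acc / np.maximum(wacc, 1e-6)
    ```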

  18. Dimensional coordinate measurements: application in characterizing cervical spine motion

    NASA Astrophysics Data System (ADS)

    Zheng, Weilong; Li, Linan; Wang, Shibin; Wang, Zhiyong; Shi, Nianke; Xue, Yuan

    2014-06-01

    The cervical spine is a complicated part of the human body, and its movements take diverse forms. The movements of the vertebral segments are three-dimensional, reflected in changes of the angle between two joints and in displacements in different directions. Under normal conditions, the cervical spine can flex, extend, laterally flex and rotate. Because there is no relative motion between measuring marks fixed on one segment of a cervical vertebra, a vertebra carrying three marked points can be treated as a rigid body. A body's motion in space can be decomposed into translational movement and rotational movement around a base point. This study concerns the calculation of the three-dimensional coordinates of marked points attached to the human cervical spine by an optical method. These measurements then allow the calculation of motion parameters for every spine segment. For this study, we chose a three-dimensional measurement method based on binocular stereo vision. The object with marked points is placed in front of the CCD cameras. Each shot yields two parallax images taken from the different cameras, and according to the principle of binocular vision, three-dimensional measurement can be realized. The cameras are mounted in parallel. This paper describes the layout of the experimental system and a mathematical model for obtaining the coordinates.
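
    For the parallel (rectified) two-camera arrangement described above, the depth of a marked point follows directly from its disparity. A minimal sketch, assuming a calibrated focal length f (in pixels), baseline B, and principal point (cx, cy):

```python
# Minimal parallel-stereo triangulation: a marker seen at column x_l in the
# left image and x_r in the right image has disparity d = x_l - x_r, giving
# depth Z = f * B / d. Coordinates are in the left camera's frame.
import numpy as np

def triangulate_parallel(x_l, y_l, x_r, f, B, cx, cy):
    d = x_l - x_r                       # disparity in pixels
    Z = f * B / d                       # depth along the optical axis
    X = (x_l - cx) * Z / f              # lateral offset
    Y = (y_l - cy) * Z / f              # vertical offset
    return np.array([X, Y, Z])
```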

  19. Methods and new approaches to the calculation of physiological parameters by videodensitometry

    NASA Technical Reports Server (NTRS)

    Kedem, D.; Londstrom, D. P.; Rhea, T. C., Jr.; Nelson, J. H.; Price, R. R.; Smith, C. W.; Graham, T. P., Jr.; Brill, A. B.; Kedem, D.

    1976-01-01

    A complex system featuring a video camera connected to a video disk, a cine (medical motion picture) camera, and a PDP-9 computer with various input/output facilities has been developed. This system enables quantitative analysis of various functions recorded in clinical studies. Several studies are described, such as heart chamber volume calculations, left ventricle ejection fraction, blood flow through the lungs, and also the possibility of obtaining information about blood flow and constrictions in small cross-section vessels.

  20. Influence of camera parameters on the quality of mobile 3D capture

    NASA Astrophysics Data System (ADS)

    Georgiev, Mihail; Boev, Atanas; Gotchev, Atanas; Hannuksela, Miska

    2010-01-01

    We investigate the effect of camera de-calibration on the quality of depth estimation. A dense depth map is a format particularly suitable for mobile 3D capture (scalable and screen independent). However, in real-world scenarios cameras might move (vibration, thermal bending) from their designated positions. For the experiments, we create a test framework, described in the paper. We investigate how such mechanical changes affect four different stereo-matching algorithms. We also assess how different geometric corrections (none, motion compensation-like, full rectification) affect the estimation quality (how much offset can still be compensated with a "crop" over a larger CCD). Finally, we show how the estimated camera pose change (E) relates to stereo-matching quality, which can be used as a "rectification quality" measure.

  1. Holographic motion picture camera with Doppler shift compensation

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1976-01-01

    A holographic motion picture camera is reported for producing three dimensional images by employing an elliptical optical system. There is provided in one of the beam paths (the object or reference beam path) a motion compensator which enables the camera to photograph faster moving objects.

  2. Detection of obstacles on runway using Ego-Motion compensation and tracking of significant features

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar (Principal Investigator); Camps, Octavia (Principal Investigator); Gandhi, Tarak; Devadiga, Sadashiva

    1996-01-01

    This report describes a method for obstacle detection on a runway for autonomous navigation and landing of an aircraft. Detection is done in the presence of extraneous features such as tiremarks. Suitable features are extracted from the image and warping using approximately known camera and plane parameters is performed in order to compensate ego-motion as far as possible. Residual disparity after warping is estimated using an optical flow algorithm. Features are tracked from frame to frame so as to obtain more reliable estimates of their motion. Corrections are made to motion parameters with the residual disparities using a robust method, and features having large residual disparities are signaled as obstacles. Sensitivity analysis of the procedure is also studied. Nelson's optical flow constraint is proposed to separate moving obstacles from stationary ones. A Bayesian framework is used at every stage so that the confidence in the estimates can be determined.
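
    The warp-then-difference idea at the core of this report can be sketched compactly. A hedged illustration, not the report's implementation: here the ego-motion homography is estimated from matched image features (the report instead warps using the approximately known camera and plane parameters), and large residual disparities are flagged as obstacle candidates.

```python
# Hedged sketch: compensate camera ego-motion by warping the previous frame
# onto the current one, then threshold the residual difference. OpenCV calls
# follow their standard signatures; the threshold value is illustrative.
import cv2
import numpy as np

def residual_after_warp(prev_gray, curr_gray, pts_prev, pts_curr, thresh=25):
    H, _ = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(prev_gray, H,
                                 (curr_gray.shape[1], curr_gray.shape[0]))
    residual = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(residual, thresh, 255, cv2.THRESH_BINARY)
    return residual, mask               # high residual = obstacle candidate
```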

  3. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Peyer, Kathrin E.; Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778

  4. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.

    PubMed

    Peyer, Kathrin E; Morris, Mark; Sellers, William I

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.
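
    The convex-hulling step that converts each segmented point cloud into geometric (and, with a density assumption, inertial) parameters can be sketched briefly. A hedged illustration using scipy; the uniform-density value and the vertex-average centroid are simplifications of this sketch, not the paper's model:

```python
# Hedged sketch: volume and a rough mass/centre estimate for one body segment
# from its point cloud, via scipy's convex hull. A uniform density is an
# assumption of this sketch only.
import numpy as np
from scipy.spatial import ConvexHull

def segment_mass(points_xyz, density_kg_m3=1000.0):
    """points_xyz: Nx3 array of one segment's point cloud, in metres."""
    hull = ConvexHull(points_xyz)
    volume = hull.volume                               # m^3 inside the hull
    centroid = points_xyz[hull.vertices].mean(axis=0)  # rough centre estimate
    return density_kg_m3 * volume, centroid
```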

  5. Ubiquitous human upper-limb motion estimation using wearable sensors.

    PubMed

    Zhang, Zhi-Qiang; Wong, Wai-Choong; Wu, Jian-Kang

    2011-07-01

    Human motion capture technologies have been widely used in a wide spectrum of applications, including interactive game and learning, animation, film special effects, health care, navigation, and so on. The existing human motion capture techniques, which use structured multiple high-resolution cameras in a dedicated studio, are complicated and expensive. With the rapid development of microsensors-on-chip, human motion capture using wearable microsensors has become an active research topic. Because of the agility in movement, upper-limb motion estimation has been regarded as the most difficult problem in human motion capture. In this paper, we take the upper limb as our research subject and propose a novel ubiquitous upper-limb motion estimation algorithm, which concentrates on modeling the relationship between upper-arm movement and forearm movement. A link structure with 5 degrees of freedom (DOF) is proposed to model the human upper-limb skeleton structure. Parameters are defined according to Denavit-Hartenberg convention, forward kinematics equations are derived, and an unscented Kalman filter is deployed to estimate the defined parameters. The experimental results have shown that the proposed upper-limb motion capture and analysis algorithm outperforms other fusion methods and provides accurate results in comparison to the BTS optical motion tracker.
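
    The Denavit-Hartenberg convention referenced above chains one homogeneous transform per joint. A minimal sketch of that forward-kinematics step; the 5-DOF upper-limb parameter table is the paper's own, so the `dh_table` passed in here is a placeholder, not the published model:

```python
# Minimal forward kinematics under the standard Denavit-Hartenberg convention.
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transform for one DH link (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_table):
    """Chain the link transforms; returns the end-effector pose (4x4)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_matrix(theta, d, a, alpha)
    return T
```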

  6. Dynamic Estimation of Rigid Motion from Perspective Views via Recursive Identification of Exterior Differential Systems with Parameters on a Topological Manifold

    DTIC Science & Technology

    1994-02-15

    O. Faugeras. Three dimensional vision, a geometric viewpoint. MIT Press, 1993. [19] O. D. Faugeras and S. Maybank. Motion from point matches: multiplicity of solutions. Int. J. of Computer Vision, 1990. [20] O. D. Faugeras, Q. T. Luong, and S. J. Maybank. Camera self-calibration: theory and... Kalman filter-based algorithms for estimating depth from image sequences. Int. J. of Computer Vision, 1989. [41] S. Maybank. Theory of

  7. Uniscale multi-view registration using double dog-leg method

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Sargent, Dusty; Tsai, Chang-Ming; Wang, Yuan-Fang; Koppel, Dan

    2009-02-01

    3D computer models of body anatomy can have many uses in medical research and clinical practices. This paper describes a robust method that uses videos of body anatomy to construct multiple, partial 3D structures and then fuse them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing local structures, and the global scale is essential for multi-view registration after all these partial structures are built. In order to provide a good initial guess of the camera movement parameters and outlier-free 2D point correspondences for DDL, we also propose a two-stage scheme where multi-RANSAC with a normalized eight-point algorithm is first performed and then a few iterations of an over-determined five-point algorithm are used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods like the iterative closest points (ICP) based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.

  8. A Quasi-Static Method for Determining the Characteristics of a Motion Capture Camera System in a "Split-Volume" Configuration

    NASA Technical Reports Server (NTRS)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2001-01-01

    To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics of motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one in y and z). Richards compared the characteristics of different commercially available systems set up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasi-static methodology.

  9. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version, system controls lateral motion on camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.

  10. Biomechanics Analysis of Combat Sport (Silat) By Using Motion Capture System

    NASA Astrophysics Data System (ADS)

    Zulhilmi Kaharuddin, Muhammad; Badriah Khairu Razak, Siti; Ikram Kushairi, Muhammad; Syawal Abd. Rahman, Mohamed; An, Wee Chang; Ngali, Z.; Siswanto, W. A.; Salleh, S. M.; Yusup, E. M.

    2017-01-01

    ‘Silat’ is a Malay traditional martial art that is practiced at both amateur and professional levels. The intensity of the motion spurs scientific research in biomechanics. The main purpose of this abstract is to present the biomechanics method used in the study of ‘silat’. Using the 3D Depth Camera motion capture system, two subjects performed ‘Jurus Satu’ in three repetitions each. One subject was set as the benchmark for the research. The videos are captured and the data is processed using the 3D Depth Camera server system in the form of 16 3D body joint coordinates, which are then transformed into displacement, velocity and acceleration components by using Microsoft Excel for data calculation and Matlab software for simulation of the body. The translated data obtained serves as an input to differentiate both subjects’ execution of the ‘Jurus Satu’. Nine primary movements with the addition of five secondary movements are observed visually frame by frame from the simulation obtained to get the exact frame in which each movement takes place. Further analysis involves the differentiation of both subjects’ execution by referring to the mean and standard deviation of the joints for each parameter stated. The findings provide useful data on joint kinematic parameters, serve to improve the execution of ‘Jurus Satu’, and exhibit the process of learning a relatively unknown movement by the use of a motion capture system.

  11. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.

  12. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  13. Analysis of sediment particle velocity in wave motion based on wave flume experiments

    NASA Astrophysics Data System (ADS)

    Krupiński, Adam

    2012-10-01

    The experiment described was one of the elements of research into sediment transport conducted by the Division of Geotechnics of the West-Pomeranian University of Technology. The experimental analyses were performed within the framework of the project "Building a knowledge transfer network on the directions and perspectives of developing wave laboratory and in situ research using innovative research equipment" launched by the Institute of Hydroengineering of the Polish Academy of Sciences in Gdańsk. The objective of the experiment was to determine relations between sediment transport and wave motion parameters and then use the obtained results to modify formulas defining sediment transport in rivers, like the Ackers-White formula, by introducing basic parameters of wave motion as the force generating bed material transport. The article presents selected results of the experiment concerning sediment velocity field analysis conducted for different parameters of wave motion. The velocity vectors of particles suspended in water were measured with a Particle Image Velocimetry (PIV) apparatus, which registers suspended particles in a measurement flume by producing a series of laser pulses and analysing their displacement with a high-sensitivity camera connected to a computer. The article presents velocity fields of suspended bed material particles measured in the longitudinal section of the wave flume and their comparison with water velocity profiles calculated for the given wave parameters. The results presented will be used in further research for relating parameters essential for the description of monochromatic wave motion to basic sediment transport parameters and for "transforming" mean velocity and dynamic velocity in steady motion to mean wave front velocity and dynamic velocity in wave motion for a single wave.
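
    The PIV displacement estimate itself reduces to locating the cross-correlation peak between two interrogation windows taken from consecutive laser pulses. A minimal sketch of that core step (window pairing, sub-pixel peak refinement, and outlier validation are omitted):

```python
# Hedged sketch: mean particle shift between two interrogation windows as the
# peak of their FFT-based cross-correlation. Window size and sign convention
# are illustrative.
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the shift (dx, dy), in pixels, from window A to window B."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)                       # put zero shift at centre
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    dy, dx = np.array(peak) - centre
    return dx, dy
```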

  14. Motion camera based on a custom vision sensor and an FPGA architecture

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC computer which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing like spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.

  15. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    NASA Astrophysics Data System (ADS)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulate above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  16. LabVIEW application for motion tracking using USB camera

    NASA Astrophysics Data System (ADS)

    Rob, R.; Tirian, G. O.; Panoiu, M.

    2017-05-01

    The technical state of the contact line and of the additional equipment in electric rail transport is very important for the repair and maintenance of the contact line. During operation, the pantograph motion must stay within standard limits. This paper proposes a LabVIEW application which is able to track in real time the motion of a laboratory pantograph and also to acquire the tracking images. A USB webcam connected to a computer acquires the desired images. The laboratory pantograph contains an automatic system which simulates the real motion. The tracked parameters are the horizontal motion (zigzag) and the vertical motion, which can be studied in separate diagrams. The LabVIEW application requires appropriate tool-kits for vision development. Therefore the paper describes the subroutines that are specially programmed for real-time image acquisition and also for data processing.

  17. Recommended survey designs for occupancy modelling using motion-activated cameras: insights from empirical wildlife data

    PubMed Central

    Lewis, Jesse S.; Gerber, Brian D.

    2014-01-01

    Motion-activated cameras are a versatile tool that wildlife biologists can use for sampling wild animal populations to estimate species occurrence. Occupancy modelling provides a flexible framework for the analysis of these data; explicitly recognizing that given a species occupies an area the probability of detecting it is often less than one. Despite the number of studies using camera data in an occupancy framework, there is only limited guidance from the scientific literature about survey design trade-offs when using motion-activated cameras. A fuller understanding of these trade-offs will allow researchers to maximise available resources and determine whether the objectives of a monitoring program or research study are achievable. We use an empirical dataset collected from 40 cameras deployed across 160 km2 of the Western Slope of Colorado, USA to explore how survey effort (number of cameras deployed and the length of sampling period) affects the accuracy and precision (i.e., error) of the occupancy estimate for ten mammal and three virtual species. We do this using a simulation approach where species occupancy and detection parameters were informed by empirical data from motion-activated cameras. A total of 54 survey designs were considered by varying combinations of sites (10–120 cameras) and occasions (20–120 survey days). Our findings demonstrate that increasing total sampling effort generally decreases error associated with the occupancy estimate, but changing the number of sites or sampling duration can have very different results, depending on whether a species is spatially common or rare (occupancy = ψ) and easy or hard to detect when available (detection probability = p). For rare species with a low probability of detection (i.e., raccoon and spotted skunk) the required survey effort includes maximizing the number of sites and the number of survey days, often to a level that may be logistically unrealistic for many studies. For common species with low detection (i.e., bobcat and coyote) the most efficient sampling approach was to increase the number of occasions (survey days). However, for common species that are moderately detectable (i.e., cottontail rabbit and mule deer), occupancy could reliably be estimated with comparatively low numbers of cameras over a short sampling period. We provide general guidelines for reliably estimating occupancy across a range of terrestrial species (rare to common: ψ = 0.175–0.970, and low to moderate detectability: p = 0.003–0.200) using motion-activated cameras. Wildlife researchers/managers with limited knowledge of the relative abundance and likelihood of detection of a particular species can apply these guidelines regardless of location. We emphasize the importance of prior biological knowledge, defined objectives and detailed planning (e.g., simulating different study-design scenarios) for designing effective monitoring programs and research studies. PMID:25210658
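
    The occupancy model behind these simulations has a compact likelihood: a site is occupied with probability ψ and, if occupied, detected on each of K occasions with probability p. The sketch below simulates detection histories and recovers (ψ, p) by maximum likelihood; the site/occasion counts and true parameter values are illustrative, not the study's design:

```python
# Hedged sketch of the single-season occupancy likelihood and its MLE.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, detections, K):
    """Binomial coefficient omitted: it is constant in the parameters
    and so does not affect the maximum-likelihood estimate."""
    psi, p = 1.0 / (1.0 + np.exp(-np.asarray(params)))   # logit -> (0, 1)
    d = detections
    lik = np.where(d > 0,
                   psi * p**d * (1 - p)**(K - d),        # detected at least once
                   psi * (1 - p)**K + (1 - psi))         # never detected
    return -np.log(lik).sum()

rng = np.random.default_rng(1)
sites, K, psi_true, p_true = 60, 40, 0.4, 0.1            # illustrative design
z = rng.random(sites) < psi_true                          # latent occupancy
detections = rng.binomial(K, p_true * z)                  # detections per site
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(detections, K))
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
```

    Rerunning such a simulation across grids of site counts and occasion lengths is, in essence, how the error of the occupancy estimate can be compared across survey designs.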

  18. Visualizing Interstellar's Wormhole

    NASA Astrophysics Data System (ADS)

    James, Oliver; von Tunzelmann, Eugénie; Franklin, Paul; Thorne, Kip S.

    2015-06-01

    Christopher Nolan's science fiction movie Interstellar offers a variety of opportunities for students in elementary courses on general relativity theory. This paper describes such opportunities, including: (i) At the motivational level, the manner in which elementary relativity concepts underlie the wormhole visualizations seen in the movie; (ii) At the briefest computational level, instructive calculations with simple but intriguing wormhole metrics, including, e.g., constructing embedding diagrams for the three-parameter wormhole that was used by our visual effects team and Christopher Nolan in scoping out possible wormhole geometries for the movie; (iii) Combining the proper reference frame of a camera with solutions of the geodesic equation, to construct a light-ray-tracing map backward in time from a camera's local sky to a wormhole's two celestial spheres; (iv) Implementing this map, for example, in Mathematica, Maple or Matlab, and using that implementation to construct images of what a camera sees when near or inside a wormhole; (v) With the student's implementation, exploring how the wormhole's three parameters influence what the camera sees—which is precisely how Christopher Nolan, using our implementation, chose the parameters for Interstellar's wormhole; (vi) Using the student's implementation, exploring the wormhole's Einstein ring and particularly the peculiar motions of star images near the ring, and exploring what it looks like to travel through a wormhole.
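
    As a worked taste of item (ii), the embedding-diagram calculation is short for the simplest wormhole students might start from: the one-parameter Ellis metric, which is a special case and not the three-parameter wormhole used in the movie.

```latex
% Ellis wormhole metric and the embedding surface z(\ell) of its
% equatorial (t = const, \theta = \pi/2) slice in Euclidean 3-space:
ds^2 = -dt^2 + d\ell^2 + (b^2 + \ell^2)\,(d\theta^2 + \sin^2\theta\, d\phi^2),
\qquad r(\ell) = \sqrt{b^2 + \ell^2},
\\[4pt]
\frac{dz}{d\ell} = \sqrt{1 - \left(\frac{dr}{d\ell}\right)^{2}}
 = \frac{b}{\sqrt{b^2 + \ell^2}}
\;\;\Longrightarrow\;\;
z(\ell) = b\,\operatorname{arcsinh}\!\left(\frac{\ell}{b}\right).
```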

  19. Conceptual Design and Dynamics Testing and Modeling of a Mars Tumbleweed Rover

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Harris, Steven B.; Raiszadeh, Behzad; Zaleski, Kristina D.

    2005-01-01

    The NASA Langley Research Center has been developing a novel concept for a Mars planetary rover called the Mars Tumbleweed. This concept utilizes the wind to propel the rover along the Mars surface, giving it the potential to cover vast distances not possible with current Mars rover technology. This vehicle, in its deployed configuration, must be large and lightweight to provide the ratio of drag force to rolling resistance necessary to initiate motion from rest on the Mars surface. One Tumbleweed design concept that satisfies these considerations is called the Eggbeater-Dandelion. This paper describes the basic design considerations and a proposed dynamics model of the concept for use in simulation studies. It includes a summary of rolling/bouncing dynamics tests that used videogrammetry to better understand, characterize, and validate the dynamics model assumptions, especially the effective rolling resistance in bouncing/rolling dynamic conditions. The dynamics test used cameras to capture the motion of 32 targets affixed to a test article's outer structure. Proper placement of the cameras and alignment of their respective fields of view provided adequate image resolution of multiple targets along the trajectory as the test article proceeded down the ramp. Image processing of the frames from multiple cameras was used to determine the target positions. Position data from a set of these test runs were compared with results of a three-dimensional, flexible dynamics model. Model input parameters were adjusted to match the test data for the runs conducted. The process presented herein provided the means to characterize the dynamics and validate the simulation of the Eggbeater-Dandelion concept. The simulation model was used to demonstrate full-scale Tumbleweed motion from a stationary condition on a flat-sloped terrain using representative Mars environment parameters.

  20. Development and application of 3-D foot-shape measurement system under different loads

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-03-01

    The 3-D foot-shape measurement system under different loads, based on the laser-line-scanning principle, was designed, and the model of the measurement system was developed. 3-D foot-shape measurements without blind areas under different loads and the automatic extraction of foot parameters are achieved with the system. A global calibration method for the CCD cameras, using a one-axis motion unit in the measurement system and specialized calibration kits, is presented. Errors caused by the nonlinearity of the CCD cameras and other devices, and by the installation of the one-axis motion platform, the laser plane and the toughened glass plane, can be eliminated by using a nonlinear coordinate mapping function and the Powell optimization method in calibration. Foot measurements under different loads for 170 participants were conducted, and the statistical foot parameter results for male and female participants under the non-weight condition, together with the changes of foot parameters under the half-body-weight, full-body-weight and over-body-weight conditions compared with the non-weight condition, are presented. 3-D foot-shape measurement under different loads makes it possible to realize custom shoe-making and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a feet database for consumers and athletes.

  1. The determination of some requirements for a helicopter flight research simulation facility

    NASA Technical Reports Server (NTRS)

    Sinacori, J. B.

    1977-01-01

    Important requirements were defined for a flight simulation facility to support Army helicopter development. In particular, requirements associated with the visual and motion subsystems of the planned simulator were studied. The method used in the motion requirements study is presented together with the underlying assumptions and a description of the supporting data. Results are given in a form suitable for use in a preliminary design. Visual requirements associated with a television camera/model concept are also reported. The important parameters are described together with substantiating data and assumptions. Research recommendations are given.

  2. Linear Acceleration Measurement Utilizing Inter-Instrument Synchronization: A Comparison between Accelerometers and Motion-Based Tracking Approaches

    ERIC Educational Resources Information Center

    Callaway, Andrew J.; Cobb, Jon E.

    2012-01-01

    Whereas video cameras are a reliable and established technology for the measurement of kinematic parameters, accelerometers are increasingly being employed for this type of measurement due to their ease of use, performance, and comparatively low cost. However, the majority of accelerometer-based studies involve a single channel due to the…

  3. A rotorcraft flight database for validation of vision-based ranging algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1992-01-01

    A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.

  4. Automatic acquisition of motion trajectories: tracking hockey players

    NASA Astrophysics Data System (ADS)

    Okuma, Kenji; Little, James J.; Lowe, David

    2003-12-01

    Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include sports games, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as trajectories, query them by analyzing the descriptive information of the data, and predict the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More precisely, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision, such as fast and unpredictable player motions and rapid camera motions, make our challenge worth tackling. To the best of our knowledge, no automatic video annotation system for hockey has been developed in the past. Although there are many obstacles to overcome, our efforts and accomplishments will hopefully establish the infrastructure of an automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.

  5. Independent motion detection with a rival penalized adaptive particle filter

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Hübner, Wolfgang; Arens, Michael

    2014-10-01

    Aggregation of pixel based motion detection into regions of interest, which include views of single moving objects in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high-level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera is the result of two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are adversely blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image which are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure an improved multi-modality. Further, the filter design helps to generate a particle distribution which is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
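
    The densely applied probabilistic filter can be illustrated with a plain bootstrap particle filter over 2D image positions, using the normalized difference image as the measurement likelihood. A hedged sketch only: the paper's rival-penalization scheme for multi-modality is deliberately not reproduced here.

```python
# Hedged sketch: one predict/update/resample cycle of a bootstrap particle
# filter; particles are (N, 2) pixel positions (x, y), and the ego-motion
# compensated difference image supplies the likelihood.
import numpy as np

def particle_filter_step(particles, weights, diff_img, motion_std=3.0, rng=None):
    rng = rng or np.random.default_rng()
    h, w = diff_img.shape
    # predict: random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)   # x
    particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)   # y
    # update: difference-image intensity acts as the likelihood
    x = particles[:, 0].astype(int)
    y = particles[:, 1].astype(int)
    weights = weights * (diff_img[y, x] + 1e-9)
    weights = weights / weights.sum()
    # systematic resampling keeps the particle set focused yet diverse
    u = (np.arange(len(weights)) + rng.random()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), u)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```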

  6. Estimation of spatial-temporal gait parameters using a low-cost ultrasonic motion analysis system.

    PubMed

    Qi, Yongbin; Soh, Cheong Boon; Gunawan, Erry; Low, Kay-Soon; Thomas, Rijil

    2014-08-20

    In this paper, a low-cost motion analysis system using a wireless ultrasonic sensor network is proposed and investigated. A methodology has been developed to extract spatial-temporal gait parameters including stride length, stride duration, stride velocity, stride cadence, and stride symmetry from 3D foot displacements estimated by the combination of a spherical positioning technique and an unscented Kalman filter. The performance of this system is validated against a camera-based system in the laboratory with 10 healthy volunteers. Numerical results show the feasibility of the proposed system, with an average error of 2.7% for all the estimated gait parameters. The influence of walking speed on the measurement accuracy of the proposed system is also evaluated. Statistical analysis demonstrates its capability of being used as a gait assessment tool for some medical applications.

  7. Moving target feature phenomenology data collection at China Lake

    NASA Astrophysics Data System (ADS)

    Gross, David C.; Hill, Jeff; Schmitz, James L.

    2002-08-01

    This paper describes the DARPA Moving Target Feature Phenomenology (MTFP) data collection conducted at the China Lake Naval Weapons Center's Junction Ranch in July 2001. The collection featured both X-band and Ku-band radars positioned on top of Junction Ranch's Parrot Peak. The test included seven targets used in eleven configurations, with vehicle motion consisting of circular, straight-line, and 90-degree turning motion. Data was collected at 10-degree and 17-degree depression angles. Key parameters in the collection were polarization, vehicle speed, and road roughness. The collection also included a canonical target positioned at Junction Ranch's tilt-deck turntable. The canonical target included rotating wheels (a military truck tire and a civilian pick-up truck tire) and a flat plate with variably positioned corner reflectors. The canonical target was also used to simulate a rotating antenna and a vibrating plate. The target vehicles were instrumented with ARDS pods for differential GPS and roll, pitch, and yaw measurements. Target motion was also documented using a video camera slaved to the X-band radar antenna and by a video camera operated near the target site.

  8. Moving Object Detection on a Vehicle Mounted Back-Up Camera

    PubMed Central

    Kim, Dong-Sun; Kwon, Jinsan

    2015-01-01

    In the detection of moving objects from vision sources one usually assumes that the scene has been captured by stationary cameras. In case of backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle’s movement, resulting in ego-motion in the background. This results in mixed motion in the scene, and makes it difficult to distinguish between the target objects and background motions. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will lead to many false-positive detection results. In this paper, we suggest a procedure to be used with the traditional moving object detection methods relaxing the stationary-camera restriction, by introducing additional steps before and after the detection. We also describe an FPGA implementation along with the algorithm. The target application of this suggestion is use with a road vehicle’s rear-view camera system. PMID:26712761

  9. Maximum likelihood estimation in calibrating a stereo camera setup.

    PubMed

    Muijtjens, A M; Roos, J M; Arts, T; Hasman, A

    1999-02-01

    Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Regularly, calibration of the position measurement system is obtained by registration of the images of a calibration object, containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration. Thus, a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum Likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter that was most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.

  10. Multiple-camera/motion stereoscopy for range estimation in helicopter flight

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.

    1993-01-01

    Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitation of single camera approaches is that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
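
    The recursive range refinement can be illustrated with a toy extended Kalman filter: a static feature parameterized by lateral offset X and depth Z, observed by a camera translating laterally by a known amount per frame. A hedged sketch under those simplifying assumptions, not the paper's filter, which uses its own feature and motion models:

```python
# Hedged sketch: one EKF update for state = [X, Z] with scalar image
# measurement u = f * (X - x_cam) / Z, camera position x_cam known per frame.
import numpy as np

def ekf_range_update(state, P, u_meas, x_cam, f, R_meas=1.0, Q=1e-6):
    X, Z = state
    P = P + Q * np.eye(2)                           # process noise (static state)
    u_pred = f * (X - x_cam) / Z
    H = np.array([f / Z, -f * (X - x_cam) / Z**2])  # Jacobian du/d[X, Z]
    S = H @ P @ H + R_meas                          # innovation variance (scalar)
    K = P @ H / S                                   # Kalman gain, shape (2,)
    state = state + K * (u_meas - u_pred)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return state, P
```

    Iterating this update as the camera translates narrows the depth uncertainty frame by frame, which is the recursive refinement the hybrid algorithm combines with stereo measurements.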

  11. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  12. Radiation camera motion correction system

    DOEpatents

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  13. A math model for high velocity sensoring with a focal plane shuttered camera.

    NASA Technical Reports Server (NTRS)

    Morgan, P.

    1971-01-01

    A new mathematical model is presented which describes the image produced by a focal plane shutter-equipped camera. The model is based upon the well-known collinearity condition equations and incorporates both the translational and rotational motion of the camera during the exposure interval. The first differentials of the model with respect to exposure interval, delta t, yield the general matrix expressions for image velocities which may be simplified to known cases. The exposure interval, delta t, may be replaced under certain circumstances with a function incorporating blind velocity and image position if desired. The model is tested using simulated Lunar Orbiter data and found to be computationally stable as well as providing excellent results, provided that some external information is available on the velocity parameters.

  14. Robust Motion Vision For A Vehicle Moving On A Plane

    NASA Astrophysics Data System (ADS)

    Moni, Shankar; Weldon, E. J.

    1987-05-01

    A vehicle equipped with a computer vision system moves on a plane. We show that, subject to certain constraints, the system can determine the motion of the vehicle (one rotational and two translational degrees of freedom) and the depth of the scene in front of the vehicle. The constraints include limits on the speed of the vehicle, presence of texture on the plane, and absence of pitch and roll in the vehicular motion. It is possible to decouple the problems of finding the vehicle's motion and the depth of the scene in front of the vehicle by using two rigidly connected cameras. One views a field with known depth (i.e. the ground plane) and estimates the motion parameters, and the other determines the depth map knowing the motion parameters. The motion is constrained to be planar to increase robustness. We use a least squares method of fitting the vehicle motion to observed brightness gradients. With this method, no correspondence between image points needs to be established, and information from the entire image is used in calculating motion. The algorithm performs very reliably on real image sequences, and these results have been included. The results compare favourably to the performance of the algorithm of Negahdaripour and Horn [2], where six degrees of freedom are assumed.
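
    The correspondence-free least-squares idea can be shown in miniature. Assuming a single global image translation (u, v), the brightness-constancy constraint Ix·u + Iy·v + It = 0 at every pixel yields an overdetermined linear system solved by the normal equations; the paper solves for three planar-motion parameters, so this two-parameter version only illustrates the principle:

```python
# Hedged sketch: global (u, v) translation from spatiotemporal brightness
# gradients, no point correspondences required.
import numpy as np

def global_flow(I0, I1):
    Iy, Ix = np.gradient(I0.astype(float))   # gradients along rows, columns
    It = I1.astype(float) - I0.astype(float)
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    return np.linalg.solve(A, b)              # (u, v) in pixels/frame
```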

  15. Accurate motion parameter estimation for colonoscopy tracking using a regression method

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2010-03-01

    Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional least sum of squares (LS) procedure, which can be unstable in the context of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and 410 to 1316 in the transverse colon.
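
    The LMS principle is simple to sketch: fit the model to random minimal subsets of the flow vectors and keep the fit whose median squared residual is smallest, so up to roughly half the vectors can be outliers without corrupting the estimate. A hedged illustration for a global-translation flow model (the paper's motion model is richer); the inlier cutoff below is a simplified stand-in for the usual robust scale estimate:

```python
# Hedged sketch of Least Median of Squares for a 1-point (translation) model.
import numpy as np

def lms_translation(flow_vectors, n_trials=200, rng=None):
    """flow_vectors: Nx2 array; model: one global (dx, dy) translation."""
    rng = rng or np.random.default_rng()
    best, best_med = None, np.inf
    for _ in range(n_trials):
        sample = flow_vectors[rng.integers(0, len(flow_vectors))]  # minimal set
        resid = ((flow_vectors - sample) ** 2).sum(axis=1)
        med = np.median(resid)
        if med < best_med:                     # keep the smallest median residual
            best, best_med = sample, med
    inliers = ((flow_vectors - best) ** 2).sum(axis=1) <= 2.5 * best_med
    return flow_vectors[inliers].mean(axis=0)  # refit on the inliers only
```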

  16. Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Jones, Brandon M.

    2005-01-01

    Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.

  17. On-sky performance of the tip-tilt correction system for GLAS using an EMCCD camera

    NASA Astrophysics Data System (ADS)

    Skvarč, Jure; Tulloch, Simon

    2008-07-01

    Adaptive optics systems based on laser guide stars still need a natural guide star (NGS) to correct for the image motion caused by the atmosphere and by imperfect telescope tracking. The ability to properly compensate for this motion using a faint NGS is critical to achieve large sky coverage. For the laser guide system (GLAS) on the 4.2 m William Herschel Telescope we designed and tested in the laboratory and on-sky a tip-tilt correction system based on a PC running Linux and an EMCCD technology camera. The control software allows selection of different centroiding algorithms and loop control methods as well as the control parameters. Parameter analysis has been performed using tip-tilt only correction before the laser commissioning and the selected sets of parameters were then used during commissioning of the laser guide star system. We have established the SNR of the guide star as a function of magnitude, depending on the image sampling frequency and on the dichroic used in the optical system; achieving a measurable improvement using full AO correction with NGSes down to magnitude range R=16.5 to R=18. A minimum SNR of about 10 was established to be necessary for a useful correction. The system was used to produce 0.16 arcsecond images in H band using bright NGS and laser correction during GLAS commissioning runs.

  18. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

  19. Measuring the circular motion of small objects using laser stroboscopic images.

    PubMed

    Wang, Hairong; Fu, Y; Du, R

    2008-01-01

    Measuring the circular motion of a small object, including its displacement, speed, and acceleration, is a challenging task. This paper presents a new method for measuring repetitive and/or nonrepetitive, constant-speed and/or variable-speed circular motion using laser stroboscopic images. Under stroboscopic illumination, each image taken by an ordinary camera records multiple outlines of an object in motion; hence, processing the stroboscopic image can extract the motion information. We built an experimental apparatus consisting of a laser as the light source, a stereomicroscope to magnify the image, and a normal complementary metal oxide semiconductor camera to record the image. As the object is in motion, the stroboscopic illumination generates a speckle pattern on the object that can be recorded by the camera and analyzed by a computer. Experimental results indicate that the stroboscopic imaging is stable under various conditions. Moreover, the characteristics of the motion, including the displacement, the velocity, and the acceleration, can be calculated based on the width of the speckle marks, the illumination intensity, the duty cycle, and the sampling frequency. Compared with the popular high-speed camera method, the presented method may achieve the same measuring accuracy, but with much reduced cost and complexity.

  20. The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D

    1946-01-01

    The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed, details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- photoflash bulb, firecrackers, camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.

  1. A novel super-resolution camera model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction for single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored in real time, capturing the randomness of the displacements. The low-resolution image sequences carry different redundant information and particular prior information, which makes it possible to restore a super-resolution image faithfully and effectively. A sampling analysis is used to derive the reconstruction principle of super resolution and the theoretically achievable improvement in resolution. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements, modeling the unknown high-resolution image, motion parameters, and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. Reconstruction results from 16 images show that this camera model can double the image resolution, obtaining higher-resolution images at currently available hardware levels.
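
    As a rough illustration of why controlled sub-pixel displacements raise resolution, a naive shift-and-add baseline (far simpler than the paper's variational Bayesian reconstruction, and assuming the shifts are already known from registration) is sketched below:

    ```python
    import numpy as np

    def shift_and_add_sr(lr_frames, shifts, scale):
        """Naive shift-and-add super-resolution.

        lr_frames: list of HxW low-resolution frames.
        shifts: per-frame (dy, dx) sub-pixel displacements in LR pixels,
                assumed known from sub-pixel registration.
        scale: integer upsampling factor (e.g. 2).
        """
        h, w = lr_frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        for frame, (dy, dx) in zip(lr_frames, shifts):
            # Map each LR sample onto the nearest HR grid position
            # (wrapping at borders for simplicity).
            ys = np.rint((np.arange(h) + dy) * scale).astype(int) % (h * scale)
            xs = np.rint((np.arange(w) + dx) * scale).astype(int) % (w * scale)
            acc[np.ix_(ys, xs)] += frame
            cnt[np.ix_(ys, xs)] += 1.0
        cnt[cnt == 0] = 1.0
        return acc / cnt
    ```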

  2. Repurposing video recordings for structure motion estimations

    NASA Astrophysics Data System (ADS)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
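
    A hedged sketch of the optical-flow stage, assuming the perspective-correcting reprojection has already been applied and using OpenCV's Farneback estimator (one plausible choice; the paper discusses evaluating variants and alternatives). The region of interest is a hypothetical window on the structure:

    ```python
    import cv2
    import numpy as np

    def structure_motion_trace(video_path, roi):
        """Track mean vertical pixel motion inside a region of interest
        across an already perspective-corrected video. Illustrative only;
        roi = (x, y, w, h) is a hypothetical window on the structure."""
        x, y, w, h = roi
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        trace = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            trace.append(float(np.mean(flow[y:y+h, x:x+w, 1])))  # vertical
            prev = gray
        cap.release()
        return np.cumsum(trace)   # per-frame displacement time-history
    ```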

  3. Vision-guided gripping of a cylinder

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1991-01-01

    The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition stage verifies the presence of a cylinder in the camera field of view. Then an initial pose estimate is computed and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge, from the recognition phase, that there is a cylinder in the image and knowing the radius of the cylinder, 4 of the 6 pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current pose and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.

  4. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.

  5. Systems and methods for estimating the structure and motion of an object

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dani, Ashwin P; Dixon, Warren

    2015-11-03

    In one embodiment, the structure and motion of a stationary object are determined using two images and a linear velocity and linear acceleration of a camera. In another embodiment, the structure and motion of a stationary or moving object are determined using an image and linear and angular velocities of a camera.

  6. Projection of controlled repeatable real-time moving targets to test and evaluate motion imagery quality

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen D.; Mendez, Michael; Trent, Randall

    2015-05-01

    The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets, or apparently moving targets, creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable, and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide counterparts to MTF (resolution), SNR, and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development, as well as to compare various systems by presenting exactly the same scenes to the cameras in a repeatable way.

  7. 7. MOTION PICTURE CAMERA STAND AT BUILDING 8768. Edwards ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. MOTION PICTURE CAMERA STAND AT BUILDING 8768. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Observation Bunkers for Test Stand 1-A, Test Area 1-120, north end of Jupiter Boulevard, Boron, Kern County, CA

  8. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
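
    A minimal sketch of the filtering idea, assuming a constant-velocity model for one axis of observer motion, with illustrative noise levels rather than the paper's tuning:

    ```python
    import numpy as np

    class ConstantVelocityKF:
        """Minimal 1-axis constant-velocity Kalman filter for smoothing
        observer motion measured by image registration. Noise levels are
        illustrative assumptions, not values from the paper."""
        def __init__(self, dt=1.0 / 30.0):
            self.F = np.array([[1.0, dt], [0.0, 1.0]])  # (position, velocity)
            self.H = np.array([[1.0, 0.0]])             # position is measured
            self.Q = 1e-3 * np.eye(2)                   # process noise
            self.R = np.array([[0.25]])                 # measurement noise
            self.x = np.zeros((2, 1))
            self.P = np.eye(2)

        def step(self, z):
            # Predict, then correct with the registration offset z (pixels).
            x = self.F @ self.x
            P = self.F @ self.P @ self.F.T + self.Q
            S = self.H @ P @ self.H.T + self.R
            K = P @ self.H.T @ np.linalg.inv(S)
            self.x = x + K @ (np.array([[z]]) - self.H @ x)
            self.P = (np.eye(2) - K @ self.H) @ P
            return float(self.x[0, 0])   # smoothed observer position
    ```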

  9. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  10. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera, and the other the working camera. The tracking camera is used for tracking the positions of the MBVS, and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the movability of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using one camera for reconstruction avoids a drawback of multi-camera networks, where variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  11. Trained neurons-based motion detection in optical camera communications

    NASA Astrophysics Data System (ADS)

    Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho

    2018-04-01

    A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons present in a neural network that perform repetitive analysis in order to provide efficient and reliable motion detection in OCC. This efficient motion detection can be considered another functionality of OCC in addition to two traditional functionalities of illumination and communication. To verify the proposed TNMD, the experiments were conducted in an indoor static downlink OCC, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. The motion is detected by observing the user's finger movement in the form of centroid through the OCC link via a camera. Unlike conventional trained neurons approaches, the proposed TNMD is trained not with motion itself but with centroid data samples, thus providing more accurate detection and far less complex detection algorithm. The experiment results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performances at a transmission distance of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. The OCC with the proposed TNMD combined can be considered an efficient indoor OCC system that provides illumination, communication, and motion detection in a convenient smart home environment.

  12. Motion Imagery and Robotics Application Project (MIRA)

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney P.

    2010-01-01

    This viewgraph presentation describes the Motion Imagery and Robotics Application (MIRA) Project. A detailed description of the MIRA camera service software architecture, encoder features, and on-board communications are presented. A description of a candidate camera under development is also shown.

  13. A comparison between soft x-ray and magnetic phase data on the Madison symmetric torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VanMeter, P. D., E-mail: pvanmeter@wisc.edu; Reusch, L. M.; Sarff, J. S.

    The Soft X-Ray (SXR) tomography system on the Madison Symmetric Torus uses four cameras to determine the emissivity structure of the plasma. This structure should directly correspond to the structure of the magnetic field; however, there is an apparent phase difference between the emissivity reconstructions and magnetic field reconstructions when using a cylindrical approximation. The difference between the phase of the dominant rotating helical mode of the magnetic field and the motion of the brightest line of sight for each SXR camera is dependent on both the camera viewing angle and the plasma conditions. Holding these parameters fixed, this phase difference is shown to be consistent over multiple measurements when only toroidal or poloidal magnetic field components are considered. These differences emerge from physical effects of the toroidal geometry which are not captured in the cylindrical approximation.

  14. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) system has been one of the major interests among researchers in the field of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely, bio-instrumentation for human computer interfacing and surveillance system for monitoring human behaviour as well as analysis of biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera system of HMA, its taxonomy, including camera types, camera calibration and camera configuration. The review focused on evaluating the camera system consideration of the HMA system specifically for biomedical applications. This review is important as it provides guidelines and recommendation for researchers and practitioners in selecting a camera system of the HMA system for biomedical applications.

  15. Registration of Large Motion Blurred Images

    DTIC Science & Technology

    2016-05-09

    in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS…

  16. Teacher-in-Space Trainees - Arriflex Motion Picture Camera

    NASA Image and Video Library

    1985-09-20

    S85-40668 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe (left) and Barbara R. Morgan have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Photo credit: NASA

  17. Photogrammetry of Apollo 15 photography, part C

    NASA Technical Reports Server (NTRS)

    Wu, S. S. C.; Schafer, F. J.; Jordan, R.; Nakata, G. M.; Derick, J. L.

    1972-01-01

    In the Apollo 15 mission, a mapping camera system, a 61 cm optical-bar high-resolution panoramic camera, and a laser altimeter were used. The panoramic camera is described; its distortion sources include the cylindrical shape of the negative film surface, the scanning action of the lens, the image motion compensator, and the spacecraft motion. Film products were processed on a specially designed analytical plotter.

  18. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  19. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, S; Rao, A; Wendt, R

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
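
    A sketch of the frame-to-frame epipolar step and the triangulation step using OpenCV, assuming calibrated intrinsics K and arrays of tracked points; this is a generic two-view pipeline, not the authors' algorithm:

    ```python
    import cv2
    import numpy as np

    def frame_to_frame_pose(pts1, pts2, K):
        """Relative camera rotation/translation between two frames from
        tracked surface points (Nx2 float arrays) and intrinsics K.
        Note: the translation is recovered only up to scale."""
        E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                          method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t, inliers

    def triangulate(pts1, pts2, K, R, t):
        """Recover 3-D structure from two views via linear triangulation."""
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        return (X[:3] / X[3]).T   # Nx3 Euclidean points
    ```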

  20. Robust and Accurate Image-Based Georeferencing Exploiting Relative Orientation Constraints

    NASA Astrophysics Data System (ADS)

    Cavegn, S.; Blaser, S.; Nebiker, S.; Haala, N.

    2018-05-01

    Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment where the measurement distances are a multiple compared to indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2-3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.

  1. Richardson-Lucy deblurring for the star scene under a thinning motion path

    NASA Astrophysics Data System (ADS)

    Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining

    2015-05-01

    This paper focuses on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the importance of accurate estimation of the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, it is shown how the blurred star image can be corrected to reconstruct the clear scene using a thinning motion-blur model that describes the camera's path. Building the blur kernel from the thinned motion path is more effective at modeling the spatially varying motion blur introduced by the camera's ego motion than conventional blind, kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star-point trajectory, and hence the blur kernel, from the motion-blurred star image. The paper then details how this motion-blur model is incorporated into the Richardson-Lucy (RL) deblurring algorithm and demonstrates its overall effectiveness. Compared with a conventionally estimated blur kernel, experimental results show that the proposed thinning-based motion-blur kernel offers lower complexity, higher efficiency, and better accuracy, contributing to better restoration of motion-blurred star images.
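
    For reference, the RL update into which such a kernel would be plugged looks as follows; this is a generic implementation assuming any normalized 2-D PSF, not the paper's code:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, n_iter=30):
        """Plain Richardson-Lucy deconvolution. Here 'psf' would be the
        kernel extracted from the thinned star trail, but any normalized
        2-D kernel works."""
        psf = psf / psf.sum()
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full_like(blurred, 0.5, dtype=float)
        for _ in range(n_iter):
            conv = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / np.maximum(conv, 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate
    ```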

  2. Documenting Western Burrowing Owl Reproduction and Activity Patterns Using Motion-Activated Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Derek B.; Greger, Paul D.

    We used motion-activated cameras to monitor the reproduction and patterns of activity of the Burrowing Owl (Athene cunicularia) above ground at 45 burrows in south-central Nevada during the breeding seasons of 1999, 2000, 2001, and 2005. The 37 broods, encompassing 180 young, raised over the four years represented an average of 4.9 young per successful breeding pair. Young and adult owls were detected at the burrow entrance at all times of the day and night, but adults were detected more frequently during afternoon/early evening than were young. Motion-activated cameras require less effort to implement than other techniques. Limitations include photographing only a small percentage of owl activity at the burrow; not detecting the actual number of eggs, young, or number fledged; and not being able to track individual owls over time. Further work is also necessary to compare the accuracy of productivity estimates generated from motion-activated cameras with other techniques.

  3. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision systems' extrinsic and intrinsic parameters. It means that the position of the cameras relative to each other (i.e. separation distance, cameras angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between the cameras. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large-sized structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability in determining the cameras position in space for performing accurate 3D-DIC calibration and measurements.

  4. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

    A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.

  5. 3D kinematic measurement of human movement using low cost fish-eye cameras

    NASA Astrophysics Data System (ADS)

    Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.

    2017-02-01

    3D motion capture is difficult when the capturing is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach of using two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. Firstly, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers across frames is applied. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they cannot be modeled well by a pinhole camera model, which makes it difficult to estimate depth information. In this work, to restore the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparison with results from a commercially available Vicon motion capture system.
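
    The fisheye calibration step could look like the following OpenCV-based sketch; this is an assumed stand-in for illustration, since the paper uses its own calibration method for fisheye lenses:

    ```python
    import cv2
    import numpy as np

    def calibrate_fisheye(object_points, image_points, image_size):
        """Zhang-style planar calibration with OpenCV's fisheye model.

        object_points / image_points: per-view corner arrays, shaped
        (1, N, 3) and (1, N, 2) float64 as the fisheye module expects.
        Returns the RMS reprojection error, intrinsics K and distortion D.
        """
        K = np.zeros((3, 3))
        D = np.zeros((4, 1))
        flags = (cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC |
                 cv2.fisheye.CALIB_FIX_SKEW)
        rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
            object_points, image_points, image_size, K, D, flags=flags)
        return rms, K, D
    ```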

  6. Chaotic behavior in the locomotion of Amoeba proteus.

    PubMed

    Miyoshi, H; Kagawa, Y; Tsuchiya, Y

    2001-01-01

    The locomotion of Amoeba proteus has been investigated using algorithms for evaluating the correlation dimension and the Lyapunov spectrum, developed in the field of nonlinear science. These parameters indicate whether the random behavior of a system is stochastic or deterministic. For the nonlinear analysis, n-dimensional time-delayed vectors were reconstructed from a time series of the periphery and area of A. proteus images captured with a charge-coupled-device camera, which characterize its random motion. The correlation dimension analysis showed that the random motion of A. proteus is governed by only 3-4 macrovariables, even though the system is a complex one composed of many degrees of freedom. Furthermore, the analysis of the Lyapunov spectrum showed that its largest exponent takes positive values. These results indicate that the random behavior of A. proteus is chaotic, deterministic motion on a low-dimensional attractor. Accounting for nonlinear interactions among a small number of dynamics, such as the sol-gel transformation, cytoplasmic streaming, and the related chemical reactions occurring in the cell, may be important for elucidating cell locomotion.
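
    The time-delay reconstruction mentioned above is the standard embedding step that precedes correlation-dimension and Lyapunov estimates; a minimal sketch, with the embedding dimension and delay left as free parameters to be chosen per data set, is:

    ```python
    import numpy as np

    def delay_embed(series, dim, tau):
        """Reconstruct dim-dimensional time-delayed vectors from a scalar
        time series (e.g. the cell-periphery signal). Illustrative sketch;
        dim and tau must be chosen for the data at hand."""
        series = np.asarray(series, dtype=float)
        n = len(series) - (dim - 1) * tau
        if n <= 0:
            raise ValueError("series too short for this (dim, tau)")
        return np.stack([series[i * tau : i * tau + n] for i in range(dim)],
                        axis=1)   # shape (n, dim)
    ```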

  7. Improved head-controlled TV system produces high-quality remote image

    NASA Technical Reports Server (NTRS)

    Goertz, R.; Lindberg, J.; Mingesz, D.; Potts, C.

    1967-01-01

    The manipulator operator uses an improved-resolution TV camera/monitor positioning system to view the remote handling and processing of reactive, flammable, explosive, or contaminated materials. The pan and tilt motions of the camera and monitor are slaved to follow the corresponding motions of the operator's head.

  8. Accuracy of an optical active-marker system to track the relative motion of rigid bodies.

    PubMed

    Maletsky, Lorin P; Sun, Junyi; Morton, Nicholas A

    2007-01-01

    The measurement of relative motion between two moving bones is commonly accomplished for in vitro studies by attaching to each bone a series of either passive or active markers in a fixed orientation to create a rigid body (RB). This work determined the accuracy of motion between two RBs using an Optotrak optical motion capture system with active infrared LEDs. The stationary noise in the system was quantified by recording the apparent change in position with the RBs stationary and found to be 0.04 degrees and 0.03 mm. Incremental 10 degrees rotations and 10-mm translations were made using a more precise tool than the Optotrak. Increasing camera distance decreased the precision or increased the range of values observed for a set motion and increased the error in rotation or bias between the measured and actual rotation. The relative positions of the RBs with respect to the camera-viewing plane had a minimal effect on the kinematics and, therefore, for a given distance in the volume less than or close to the precalibrated camera distance, any motion was similarly reliable. For a typical operating set-up, a 10 degrees rotation showed a bias of 0.05 degrees and a 95% repeatability limit of 0.67 degrees. A 10-mm translation showed a bias of 0.03 mm and a 95% repeatability limit of 0.29 mm. To achieve a high level of accuracy it is important to keep the distance between the cameras and the markers near the distance the cameras are focused to during calibration.

  9. SU-E-T-562: Motion Tracking Optimization for Conformal Arc Radiotherapy Plans: A QUASAR Phantom Based Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Z; Wang, I; Yao, R

    Purpose: This study is to use plan parameter optimization (dose rate, collimator angle, couch angle, initial starting phase) to improve the performance of conformal arc radiotherapy plans with motion tracking by increasing the plan performance score (PPS). Methods: Two types of 3D conformal arc plans were created based on a QUASAR respiratory motion phantom with spherical and cylindrical targets. A sinusoidal model was applied to the MLC leaves to generate motion tracking plans. A MATLAB program was developed to calculate the PPS of each plan (ranging from 0-1) and optimize plan parameters. We first selected the dose rate for motion tracking plans and then used a simulated annealing algorithm to search for the combination of the other parameters that resulted in the plan with the maximal PPS. The optimized motion tracking plan was delivered by a Varian Truebeam linac. In-room cameras and a stopwatch were used for starting-phase selection and synchronization between phantom motion and plan delivery. Gaf-EBT2 dosimetry films were used to measure the dose delivered to the target in the QUASAR phantom. Dose profiles and Truebeam trajectory log files were used for plan delivery performance evaluation. Results: For the spherical target, the maximal PPS (PPSsph) of the optimized plan was 0.79 (dose rate: 500 MU/min, collimator: 90°, couch: +10°, starting phase: 0.83π). For the cylindrical target, the maximal PPScyl was 0.75 (dose rate: 300 MU/min, collimator: 87°, starting phase: 0.97π) with the couch at 0°. Differences of dose profiles between motion tracking plans (with the maximal and the minimal PPS) and 3D conformal plans were as follows: PPSsph=0.79: %ΔFWHM: 8.9%, %Dmax: 3.1%; PPSsph=0.52: %ΔFWHM: 10.4%, %Dmax: 6.1%. PPScyl=0.75: %ΔFWHM: 4.7%, %Dmax: 3.6%; PPScyl=0.42: %ΔFWHM: 12.5%, %Dmax: 9.6%. Conclusion: By achieving a high plan performance score through parameter optimization, we can improve the target dose conformity of motion tracking plans by decreasing total MLC leaf travel distance and leaf speed.

  10. Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.

    PubMed

    Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond

    2018-04-01

    We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with other two calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.

  11. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked and then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to wear markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with multi-frame acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating unnecessary subdivision of frames.
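
    The 6-DOF rigid transformation such a tracker reports can be recovered from corresponding 3-D points of consecutive depth frames with the Kabsch algorithm; a generic sketch of that step (not the authors' code) follows:

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rigid (6-DOF) transform mapping src -> dst, two
        Nx3 corresponding point sets from consecutive depth frames
        (Kabsch algorithm)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cd - R @ cs
        return R, t   # rotation matrix and translation vector
    ```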

  12. "Teacher in Space" Trainees - Arriflex Motion Picture Camera

    NASA Image and Video Library

    1985-09-20

    S85-40670 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe and Barbara R. Morgan (out of frame) have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. McAuliffe zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA

  13. Teacher-in-Space Trainees - Arriflex Motion Picture Camera

    NASA Image and Video Library

    1985-09-20

    S85-40669 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe (left) and Barbara R. Morgan have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedure for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Morgan adjusts a lens as a studious McAuliffe looks on. Photo credit: NASA

  14. "Teacher in Space" Trainees - Arriflex Motion Picture Camera

    NASA Image and Video Library

    1985-09-20

    S85-40671 (18 Sept. 1985) --- The two teachers, Barbara R. Morgan and Sharon Christa McAuliffe (out of frame) have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Morgan zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA

  15. Astronaut Walz on flight deck with IMAX camera

    NASA Image and Video Library

    1996-11-04

    STS079-362-023 (16-26 Sept. 1996) --- Astronaut Carl E. Walz, mission specialist, positions the IMAX camera for a shoot on the flight deck of the Space Shuttle Atlantis. The IMAX project is a collaboration among NASA, the Smithsonian Institution's National Air and Space Museum, IMAX Systems Corporation and the Lockheed Corporation to document in motion picture format significant space activities and promote NASA's educational goals using the IMAX film medium. This system, developed by IMAX of Toronto, uses specially designed 65mm cameras and projectors to record and display very high definition color motion pictures which, accompanied by six-channel high fidelity sound, are displayed on screens in IMAX and OMNIMAX theaters that are up to ten times larger than a conventional screen, producing a feeling of "being there." The 65mm photography is transferred to 70mm motion picture films for showing in IMAX theaters. IMAX cameras have been flown on 14 previous missions.

  16. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort in stereoscopic video include the conflict between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and the comfortable viewing zone are analyzed. According to the human visual system (HVS), viewers only need to converge their eyes on specific objects when the camera and background are static; relative motion must be considered for other camera conditions, which determine different factor coefficients and weights. In contrast to traditional visual fatigue prediction models, a novel predicting model is presented. The visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and a total visual fatigue score can be computed with the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, this approach exhibits reliable performance in terms of correlation with subjective test results.
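
    The regression step can be illustrated with a least-squares fit of a fatigue score to per-shot factors; the factor names and numbers below are hypothetical placeholders, not data from the paper:

    ```python
    import numpy as np

    # Hypothetical per-shot factors: [spatial structure, motion scale,
    # comfort-zone term], each paired with a subjective fatigue score.
    X = np.array([
        [0.2, 0.1, 0.9],
        [0.5, 0.4, 0.6],
        [0.8, 0.7, 0.3],
        [0.9, 0.9, 0.1],
    ])
    y = np.array([1.2, 2.4, 3.7, 4.5])        # subjective fatigue scores

    A = np.hstack([X, np.ones((len(X), 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(factors):
        """Predicted fatigue score for one shot's factor vector."""
        return float(np.append(factors, 1.0) @ coef)

    print(predict([0.6, 0.5, 0.5]))
    ```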

  17. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    PubMed Central

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-01-01

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems face limitations from various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blur is usually introduced into face images by movement of the camera sensor and/or movement of the face during image acquisition. The facial features in captured images can therefore be transformed according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient at enhancing age estimation performance compared with systems that do not employ it. PMID:26334282

  18. Optical Flow Estimation for Flame Detection in Videos

    PubMed Central

    Mueller, Martin; Karasev, Peter; Kolesov, Ivan; Tannenbaum, Allen

    2014-01-01

    Computational vision-based flame detection has drawn significant attention in the past decade with camera surveillance systems becoming ubiquitous. Whereas many discriminating features, such as color, shape, texture, etc., have been employed in the literature, this paper proposes a set of motion features based on motion estimators. The key idea consists of exploiting the difference between the turbulent, fast, fire motion, and the structured, rigid motion of other objects. Since classical optical flow methods do not model the characteristics of fire motion (e.g., non-smoothness of motion, non-constancy of intensity), two optical flow methods are specifically designed for the fire detection task: optimal mass transport models fire with dynamic texture, while a data-driven optical flow scheme models saturated flames. Then, characteristic features related to the flow magnitudes and directions are computed from the flow fields to discriminate between fire and non-fire motion. The proposed features are tested on a large video database to demonstrate their practical usefulness. Moreover, a novel evaluation method is proposed by fire simulations that allow for a controlled environment to analyze parameter influences, such as flame saturation, spatial resolution, frame rate, and random noise. PMID:23613042
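
    As an illustration of turning a dense flow field into discriminative features, the sketch below computes magnitude statistics and the entropy of the direction histogram; this is a simplified stand-in for, not a reproduction of, the paper's feature definitions:

    ```python
    import numpy as np

    def flow_features(flow):
        """Summarize a dense flow field (HxWx2) into features separating
        turbulent fire motion from rigid motion: magnitude statistics and
        the entropy of the magnitude-weighted direction histogram."""
        mag = np.hypot(flow[..., 0], flow[..., 1])
        ang = np.arctan2(flow[..., 1], flow[..., 0])
        hist, _ = np.histogram(ang, bins=16, range=(-np.pi, np.pi),
                               weights=mag)
        p = hist / max(hist.sum(), 1e-12)
        entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))  # high for fire
        return np.array([mag.mean(), mag.std(), entropy])
    ```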

  19. Combined use of a priori data for fast system self-calibration of a non-rigid multi-camera fringe projection system

    NASA Astrophysics Data System (ADS)

    Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard

    2017-06-01

    In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to use methods developed originally for photogrammetry to calibrate the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time-consuming and involve measuring calibrated patterns on planes before measurement of the actual object can resume after a camera or projector has been moved; hence they do not facilitate fast 3D measurement of objects when frequent changes to the experimental setup are necessary. By employing and combining a priori information via inverse rendering, on-board sensors, deep learning, and a graphics processing unit (GPU), we assess a fine camera pose estimation method which is based on optimizing the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.

  20. Camera Trajectory from Wide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and ease of their calibration. However, classical perspective cameras offer only a limited field of view, so occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we use in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate the omnidirectional camera off-line using a state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris Affine, and Hessian Affine, in the acquired images. These features are an alternative to popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAFs) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used because the viewpoint can change a lot between consecutive frames.
    Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes the feature detection, description, and matching much more time-consuming than for short baseline images and limits the usage to low frame rate sequences when operating in real time. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches which is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling, as suggested in prior work, to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models which are supported by a large number of matches, so the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Earlier work suggested generating models by randomized sampling as in RANSAC but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that work, we do not cast votes directly from each sampled epipolar geometry but from the best epipolar geometries recovered by ordered sampling in RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the uses of camera trajectory estimates are quite wide. In earlier work we introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric constraint, e.g. the ground plane, such as in the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.
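
    A rough sketch of the kernel-voting idea described above, with the accumulator parametrized by azimuth and elevation of the candidate translation directions; this is an assumed simplification for illustration, not the authors' implementation:

    ```python
    import numpy as np

    def vote_motion_direction(directions, bins=64, sigma=2.0):
        """Soft (kernel) voting for the camera translation direction.

        directions: Nx3 unit vectors from candidate epipolar geometries
        returned by RANSAC. Votes are blurred in a 2-D (azimuth,
        elevation) accumulator and the peak cell is returned."""
        acc = np.zeros((bins, bins))
        az = np.arctan2(directions[:, 1], directions[:, 0])    # -pi..pi
        el = np.arcsin(np.clip(directions[:, 2], -1.0, 1.0))   # -pi/2..pi/2
        i = ((az + np.pi) / (2 * np.pi) * (bins - 1)).astype(int)
        j = ((el + np.pi / 2) / np.pi * (bins - 1)).astype(int)
        np.add.at(acc, (i, j), 1.0)
        # Gaussian blur = kernel voting; two separable 1-D convolutions.
        k = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
        k /= k.sum()
        acc = np.apply_along_axis(lambda m: np.convolve(m, k, "same"), 0, acc)
        acc = np.apply_along_axis(lambda m: np.convolve(m, k, "same"), 1, acc)
        return np.unravel_index(np.argmax(acc), acc.shape)
    ```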

  1. Foot and Ankle Kinematics and Dynamic Electromyography: Quantitative Analysis of Recovery From Peroneal Neuropathy in a Professional Football Player.

    PubMed

    Prasad, Nikhil K; Coleman Wood, Krista A; Spinner, Robert J; Kaufman, Kenton R

    The assessment of neuromuscular recovery after peripheral nerve surgery has typically been a subjective physical examination. The purpose of this report was to assess the value of gait analysis in documenting recovery quantitatively. A professional football player underwent gait analysis before and after surgery for a peroneal intraneural ganglion cyst causing a left-sided foot drop. Surface electromyography (SEMG) recording from surface electrodes and motion parameter acquisition from a computerized motion capture system consisting of 10 infrared cameras were performed simultaneously. A comparison between SEMG recordings before and after surgery showed a progression from disorganized activation in the left tibialis anterior and peroneus longus muscles to temporally appropriate activation for the phase of the gait cycle. Kinematic analysis of ankle motion planes showed resolution from a complete foot drop preoperatively to phase-appropriate dorsiflexion postoperatively. Gait analysis with dynamic SEMG and motion capture complements physical examination when assessing postoperative recovery in athletes.

  2. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  3. Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.

    PubMed

    Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo

    2011-01-01

    In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, and thus the motion of a targeted area may cause side effects to normal organs. This paper describes our motion detection system with three USB cameras. To reduce the effect of change in illuminance in a tracking area we used an infrared light and USB cameras that were sensitive to the infrared light. The motion detection of a patient was performed by tracking his/her ears and nose with three USB cameras, where pattern matching between a predefined template image for each view and acquired images was done by an exhaustive search method with a general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement accuracy of our system was less than 0.7 mm, amounting to less than half of that of our previous system.
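
    The exhaustive template-matching step can be sketched with OpenCV's normalized cross-correlation; the function name track_marker and its threshold are illustrative, and the GPGPU acceleration and infrared acquisition described above are not reproduced.

```python
import cv2

def track_marker(frame_gray, template, threshold=0.7):
    """Exhaustively search one frame for the marker template.

    A minimal CPU sketch of the exhaustive matching step; the paper offloads
    this search to a GPGPU and uses three infrared-sensitive USB cameras.
    """
    res = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    if max_val < threshold:
        return None   # marker (ear or nose) not found in this frame
    return max_loc    # top-left corner of the best match
```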

  4. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
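
    The idea of matching on optical flow fields rather than raw intensities can be illustrated as follows; flow_alignment_cost and the integer-offset comparison it implies are hypothetical simplifications, not the paper's alignment procedure.

```python
import cv2
import numpy as np

def flow_field(prev_gray, next_gray):
    # Dense optical flow (Farneback); each pixel gets a 2D motion vector.
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def flow_alignment_cost(flow_a, flow_b, dx, dy):
    """Cost of aligning two flow fields at an integer offset (dx, dy).

    A toy stand-in for aligning image sequences via their optical-flow
    fields instead of their intensities.
    """
    h, w = flow_a.shape[:2]
    xa = flow_a[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    xb = flow_b[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
    return float(np.mean(np.linalg.norm(xa - xb, axis=2)))
```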

  5. The Effect of Selected Cinemagraphic Elements on Audience Perception of Mediated Concepts.

    ERIC Educational Resources Information Center

    Orr, Quinn

This study explores cinemagraphic and visual elements and their interrelations through the reinterpretation of previous research and literature. The cinemagraphic elements of visual images (camera angle, camera motion, subject motion, color, and lighting) work as a language requiring a proper grammar for the messages to be conveyed in their…

  6. Time-Lapse Motion Picture Technique Applied to the Study of Geological Processes.

    PubMed

    Miller, R D; Crandell, D R

    1959-09-25

    Light-weight, battery-operated timers were built and coupled to 16-mm motion-picture cameras having apertures controlled by photoelectric cells. The cameras were placed adjacent to Emmons Glacier on Mount Rainier. The film obtained confirms the view that exterior time-lapse photography can be applied to the study of slow-acting geologic processes.

  7. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    ERIC Educational Resources Information Center

    Lee, Victor R.

    2015-01-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video,…

  8. Noise level and MPEG-2 encoder statistics

    NASA Astrophysics Data System (ADS)

    Lee, Jungwoo

    1997-01-01

Most source material in the movie and broadcasting industries is still in analog film or tape format, which typically contains random noise originating from film, CCD cameras, and tape recording. The performance of an MPEG-2 encoder may be significantly degraded by this noise. It is also affected by the scene type, which includes spatial and temporal activity. The statistical properties of noise originating from cameras and tape players are analyzed, and models for the two types of noise are developed. The relationship between the noise, the scene type, and encoder statistics for a number of MPEG-2 parameters such as motion vector magnitude, prediction error, and quant scale is discussed. This analysis is intended to be a tool for designing robust MPEG encoding algorithms such as preprocessing and rate control.

  9. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

The Earth's digital elevation data, which affect space camera imaging, were prepared, and the imaging was analysed. Based on the image motion velocity matching error that the TDI CCD integration stages require, the Monte Carlo statistical method is used to calculate the distribution histogram of the Earth's elevation in an image motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude, and orbital inclination changes. Elevation information for the Earth's surface is then read from SRTM data. The Earth elevation map produced for aerospace electronic cameras is compressed and spliced so that elevation data can be retrieved from flash memory according to the latitude and longitude of the shooting point. If the required elevation lies between two stored values, it is obtained by linear interpolation, which better satisfies the requirements of rugged mountains and changing hills. Finally, a deviation framework and the camera controller are used to test the character of deviation angle errors, and a TDI CCD camera simulation system with a model that maps material points to imaging points is used to analyse the imaging MTF and a mutual-correlation similarity measure; the simulation system adds the accumulated horizontal and vertical pixel offsets by which TDI CCD imaging exceeded the corresponding pixel in order to simulate camera imaging when the stability of the satellite attitude changes. This process is practical: it effectively controls the camera memory space and satisfies, with very good precision, the TDI CCD camera's requirement of matching the image motion velocity.

  10. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small: the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of the 3D reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and the number of fine-scale 3D surface patches reconstructed for leaf and stem was the greatest. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system captures high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  11. Heliostat kinematic system calibration using uncalibrated cameras

    NASA Astrophysics Data System (ADS)

    Burisch, Michael; Gomez, Luis; Olasolo, David; Villasante, Cristobal

    2017-06-01

The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. To control the heliostats with such precision, accurate knowledge of the motion of each heliostat, modeled as a kinematic system, is required. Determining the parameters of this system for each heliostat by a calibration system is crucial for the efficient operation of the solar field. For small-sized heliostats, being able to perform such a calibration in a fast and automatic manner is imperative, as the solar field potentially contains tens or even hundreds of thousands of them. A calibration system which can rapidly recalibrate a whole solar field would also allow reducing costs: heliostats are generally designed to provide stability over a long period of time, and if this requirement can be relaxed, with any occurring error compensated by adapting parameters in a model, the cost of the heliostat can be reduced. The presented method describes such an automatic calibration system using uncalibrated cameras rigidly attached to each heliostat. The cameras are used to observe targets spread throughout the solar field; based on this, the kinematic system of the heliostat can be estimated with high precision. A comparison of this approach to similar solutions shows the viability of the proposed solution.

  12. A complete system for 3D reconstruction of roots for phenotypic analysis.

    PubMed

    Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J

    2015-01-01

Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated, as it is self-calibrating. The system starts with the detection of root tips in root images from an image sequence generated by a turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points of the root boundary, together with the Bayes classification rule. The detected root tips are tracked through the image sequence using a multi-target tracking algorithm. Conics are fitted to the root tip trajectories using a novel ellipse fitting algorithm that weights the data points by their eccentricity, as sketched below. The conics projected from the circular trajectories intersect in a pair of complex conjugate points, which are the images of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the image of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are then used to reconstruct the 3D voxel model of the roots. We show results of real 3D reconstructions of roots that are detailed and realistic enough for phenotypic analysis.
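
    The conic-fitting step can be illustrated with a plain algebraic least-squares fit; the sketch below omits the paper's eccentricity weighting, and fit_conic is a hypothetical name.

```python
import numpy as np

def fit_conic(points):
    """Algebraic least-squares conic fit a x^2 + b xy + c y^2 + d x + e y + f = 0.

    A plain (unweighted) sketch; the paper's novel algorithm additionally
    weights the data points, which is not reproduced here.
    """
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The conic is the null vector of D: the smallest right singular vector.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]
```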

  13. Movement and Motion of Soybean Cyst Nematode Heterodera glycines Populations and Individuals in Response to Abamectin.

    PubMed

    Jensen, Jared P; Beeman, Augustine Q; Njus, Zach L; Kalwa, Upender; Pandey, Santosh; Tylka, Gregory L

    2018-05-09

    Two new in vitro methods were developed to analyze plant-parasitic nematode behavior, at the population and the individual organism levels, through time-lapse image analysis. The first method employed a high-resolution flatbed scanner to monitor the movement of a population of nematodes over a 24-h period at 25°C. The second method tracked multiple motion parameters of individual nematodes on a microscopic scale, using a high-speed camera. Changes in movement and motion of second-stage juveniles (J2) of the soybean cyst nematode Heterodera glycines Ichinohe were measured after exposure to a serial dilution of abamectin (0.1 to 100 μg/ml). Movement and motion of H. glycines were significantly reduced as the concentration of abamectin increased. The effective range of abamectin to inhibit movement and motion of H. glycines J2 was between 1.0 and 10 μg/ml. Proof-of-concept experiments for both methods produced one of the first in vitro sensitivity studies of H. glycines to abamectin. The two methods developed allow for higher-throughput analysis of nematode movement and motion and provide objective and data-rich measurements that are difficult to achieve from conventional microscopic laboratory methods.

  14. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  15. Fluorescent image tracking velocimeter

    DOEpatents

    Shaffer, Franklin D.

    1994-01-01

    A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light for producing a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity and the exposures may be coded to derive directional information.

  16. Multispectral image dissector camera flight test

    NASA Technical Reports Server (NTRS)

    Johnson, B. L.

    1973-01-01

    It was demonstrated that the multispectral image dissector camera is able to provide composite pictures of the earth surface from high altitude overflights. An electronic deflection feature was used to inject the gyro error signal into the camera for correction of aircraft motion.

  17. Value of automatic patient motion detection and correction in myocardial perfusion imaging using a CZT-based SPECT camera.

    PubMed

    van Dijk, Joris D; van Dalen, Jorn A; Mouden, Mohamed; Ottervanger, Jan Paul; Knollema, Siert; Slump, Cornelis H; Jager, Pieter L

    2018-04-01

    Correction of motion has become feasible on cadmium-zinc-telluride (CZT)-based SPECT cameras during myocardial perfusion imaging (MPI). Our aim was to quantify the motion and to determine the value of automatic correction using commercially available software. We retrospectively included 83 consecutive patients who underwent stress-rest MPI CZT-SPECT and invasive fractional flow reserve (FFR) measurement. Eight-minute stress acquisitions were reformatted into 1.0- and 20-second bins to detect respiratory motion (RM) and patient motion (PM), respectively. RM and PM were quantified and scans were automatically corrected. Total perfusion deficit (TPD) and SPECT interpretation-normal, equivocal, or abnormal-were compared between the noncorrected and corrected scans. Scans with a changed SPECT interpretation were compared with FFR, the reference standard. Average RM was 2.5 ± 0.4 mm and maximal PM was 4.5 ± 1.3 mm. RM correction influenced the diagnostic outcomes in two patients based on TPD changes ≥7% and in nine patients based on changed visual interpretation. In only four of these patients, the changed SPECT interpretation corresponded with FFR measurements. Correction for PM did not influence the diagnostic outcomes. Respiratory motion and patient motion were small. Motion correction did not appear to improve the diagnostic outcome and, hence, the added value seems limited in MPI using CZT-based SPECT cameras.

  18. New Lower-Limb Gait Asymmetry Indices Based on a Depth Camera

    PubMed Central

    Auvinet, Edouard; Multon, Franck; Meunier, Jean

    2015-01-01

    Background: Various asymmetry indices have been proposed to compare the spatiotemporal, kinematic and kinetic parameters of lower limbs during the gait cycle. However, these indices rely on gait measurement systems that are costly and generally require manual examination, calibration procedures and the precise placement of sensors/markers on the body of the patient. Methods: To overcome these issues, this paper proposes a new asymmetry index, which uses an inexpensive, easy-to-use and markerless depth camera (Microsoft Kinect™) output. This asymmetry index directly uses depth images provided by the Kinect™ without requiring joint localization. It is based on the longitudinal spatial difference between lower-limb movements during the gait cycle. To evaluate the relevance of this index, fifteen healthy subjects were tested on a treadmill walking normally and then via an artificially-induced gait asymmetry with a thick sole placed under one shoe. The gait movement was simultaneously recorded using a Kinect™ placed in front of the subject and a motion capture system. Results: The proposed longitudinal index distinguished asymmetrical gait (p < 0.001), while other symmetry indices based on spatiotemporal gait parameters failed using such Kinect™ skeleton measurements. Moreover, the correlation coefficient between this index measured by Kinect™ and the ground truth of this index measured by motion capture is 0.968. Conclusion: This gait asymmetry index measured with a Kinect™ is low cost, easy to use and is a promising development for clinical gait analysis. PMID:25719863
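
    As a rough illustration of an index built directly from depth images without joint localization, the toy function below compares the left and right halves of the lower-limb region across half a gait cycle; it is a loose reading of the longitudinal difference described above, not the published index, and all names are hypothetical.

```python
import numpy as np

def longitudinal_asymmetry(depth_frames, leg_rows):
    """Toy asymmetry index from a sequence of depth maps.

    depth_frames: (T, H, W) depth images of a treadmill walker, frontal view.
    leg_rows: slice selecting the lower-limb rows of each image.
    Splits that region into left/right halves and compares the mean depth of
    one half with the other half shifted by half a gait cycle; this assumes
    the recording spans exactly one gait cycle.
    """
    region = depth_frames[:, leg_rows, :]
    w = region.shape[2] // 2
    left = region[:, :, :w].mean(axis=(1, 2))    # mean longitudinal position, left leg
    right = region[:, :, w:].mean(axis=(1, 2))   # mean longitudinal position, right leg
    half = len(left) // 2
    return float(np.mean(np.abs(left - np.roll(right, half))))
```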

  19. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
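
    The block-match step can be illustrated with a plain full-search sum-of-absolute-differences matcher; the sketch below is a generic implementation, not the paper's CS-specific recovery pipeline, and the block and search-window sizes are arbitrary.

```python
import numpy as np

def block_match(ref, cur, block=16, search=8):
    """Full-search block matching between two grayscale frames.

    Returns one motion vector per block of `cur`, found by minimising the
    sum of absolute differences (SAD) inside a +/- `search` pixel window.
    """
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = int(np.abs(patch - cand).sum())
                    if best is None or sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors
```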

  20. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were established, along with procedures for analyzing image quality and predicting and comparing performance capabilities. © 1973.

  1. Reproducibility of UAV-based earth topography reconstructions based on Structure-from-Motion algorithms

    NASA Astrophysics Data System (ADS)

    Clapuyt, Francois; Vanacker, Veerle; Van Oost, Kristof

    2016-05-01

Combination of UAV-based aerial pictures and the Structure-from-Motion (SfM) algorithm provides an efficient, low-cost and rapid framework for remote sensing and monitoring of dynamic natural environments. This methodology is particularly suitable for repeated topographic surveys in remote or poorly accessible areas. However, temporal analysis of landform topography requires high accuracy of measurements and reproducibility of the methodology, as differencing of digital surface models leads to error propagation. In order to assess the repeatability of the SfM technique, we surveyed a study area characterized by gentle topography with a UAV platform equipped with a standard reflex camera, and varied the focal length of the camera and the location of georeferencing targets between flights. Comparison of different SfM-derived topography datasets shows that the precision of measurements is in the order of centimetres for identical replications, which highlights the excellent performance of the SfM workflow, all parameters being equal. The measurement error is one order of magnitude larger for 3D topographic reconstructions involving independent sets of ground control points, which results from the fact that the accuracy of the localisation of ground control points strongly propagates into the final results.

  2. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with inbuilt cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare accuracy in tracking, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.

  3. Motion behavior of water droplets driven by triboelectric nanogenerator

    NASA Astrophysics Data System (ADS)

    Nie, Jinhui; Jiang, Tao; Shao, Jiajia; Ren, Zewei; Bai, Yu; Iwamoto, Mitsumasa; Chen, Xiangyu; Wang, Zhong Lin

    2018-04-01

    By integrating a triboelectric nanogenerator (TENG) and a simple circuit board, the motion of water droplets can be controlled by the output of the TENG, which demonstrates a self-powered microfluidic system toward various practical applications in the fields of microfluidic system and soft robotics. This paper describes a method to construct a physical model for this self-powered system on the basis of electrostatic induction theory. The model can precisely simulate the detailed motion behavior of the droplet under driving of TENG, and it can also reveal the influences of surface hydrophobicity on the motion of the droplet, which can help us to better understand the key parameters that decide the performance of the system. The experimental observation of the dynamic performance of the droplet has also been done with a high speed camera system. A comparison between simulation results and real measurements confirms that the proposed model can predict the velocity and position of the water droplet driven by high voltage source as well as TENG. Hence, the proposed model in this work could serve as a guidance for optimizing the self-powered systems in future studies.

  4. The application of holography as a real-time three-dimensional motion picture camera

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L.

    1973-01-01

    A historical introduction to holography is presented, as well as a basic description of sideband holography for stationary objects. A brief theoretical development of both time-dependent and time-independent holography is also provided, along with an analytical and intuitive discussion of a unique holographic arrangement which allows the resolution of front surface detail from an object moving at high speeds. As an application of such a system, a real-time three-dimensional motion picture camera system is discussed and the results of a recent demonstration of the world's first true three-dimensional motion picture are given.

  5. Image deblurring in smartphone devices using built-in inertial measurement sensors

    NASA Astrophysics Data System (ADS)

    Šindelář, Ondřej; Šroubek, Filip

    2013-01-01

Long-exposure handheld photography is degraded by blur, which is difficult to remove without prior information about the camera motion. In this work, we utilize the inertial sensors (accelerometers and gyroscopes) in modern smartphones to record the exact motion trajectory of the smartphone camera during exposure and remove blur from the resulting photograph based on the recorded motion data. The whole system is implemented on the Android platform and embedded in the smartphone device, resulting in a close-to-real-time deblurring algorithm. The performance of the proposed system is demonstrated in real-life scenarios.
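
    The core idea, turning recorded sensor data into a blur kernel, can be sketched as follows; the small-angle camera model, sample format, and function name blur_kernel_from_gyro are assumptions, and the deconvolution step that would follow is omitted.

```python
import numpy as np

def blur_kernel_from_gyro(omega, dt, focal_px, ksize=31):
    """Rasterise an approximate blur kernel from gyroscope samples.

    omega: (N, 2) angular velocities (rad/s) about two axes during exposure;
    dt: sample spacing (s); focal_px: focal length in pixels.
    Small-angle model: a rotation of theta radians shifts the image by
    roughly focal_px * theta pixels.
    """
    theta = np.cumsum(omega * dt, axis=0)        # integrated rotation angles
    shifts = focal_px * theta                    # image-space trajectory (px)
    kernel = np.zeros((ksize, ksize))
    c = ksize // 2
    for sx, sy in shifts - shifts.mean(axis=0):  # centre the trajectory
        x, y = int(round(c + sx)), int(round(c + sy))
        if 0 <= x < ksize and 0 <= y < ksize:
            kernel[y, x] += 1.0
    s = kernel.sum()
    return kernel / s if s > 0 else kernel
```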

  6. Image registration for multi-exposed HDRI and motion deblurring

    NASA Astrophysics Data System (ADS)

    Lee, Seok; Wey, Ho-Cheon; Lee, Seong-Deok

    2009-02-01

In multi-exposure image fusion tasks, alignment is an essential prerequisite to prevent ghost artifacts after blending. Compared to the usual matching problem, registration is more difficult when each image is captured under different photographing conditions. In HDR imaging, we use long- and short-exposure images, which differ in brightness and contain over/under-saturated regions. In the motion deblurring problem, we use a blurred and noisy image pair, and the amount of motion blur varies from one image to another due to the different exposure times. The main difficulty is that the luminance levels of the two images are not in a linear relationship; we cannot perfectly equalize or normalize the brightness of each image, and this leads to unstable and inaccurate alignment results. To solve this problem, we applied a probabilistic measure, mutual information, to represent the similarity between images after alignment. In this paper, we describe the characteristics of multi-exposed input images from the registration point of view and also analyze the magnitude of camera hand shake. By exploiting the luminance independence of mutual information, we propose a fast and practically useful image registration technique for multiple capturing. Our algorithm can be applied to extreme HDR scenes and motion-blurred scenes with over a 90% success rate, and its simplicity enables it to be embedded in digital cameras and mobile camera phones. The effectiveness of our registration algorithm is examined by various experiments on real HDR or motion deblurring cases using a hand-held camera.
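
    Mutual information between two images can be evaluated from their joint histogram, as in the sketch below; the bin count is arbitrary and the search over candidate alignments, which the registration itself performs, is omitted.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two (aligned) 8-bit images.

    MI is largely insensitive to nonlinear luminance changes, which is why
    it suits registering frames with different exposures; this sketch only
    evaluates the similarity measure.
    """
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```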

  7. Proposed patient motion monitoring system using feature point tracking with a web camera.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme where the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of the marker was used by the program to determine the marker positions in all the frames. The software generates a text file that contains the calculated motion for each frame and saves the video as a compressed audio video interleave (AVI) file. The proposed patient motion monitoring system using a web camera is simple and convenient to set up and increases the safety of treatment delivery.
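
    A minimal monitoring loop in this spirit can be built from OpenCV's pyramidal Lucas-Kanade tracker; the sketch below uses an illustrative feature point, tolerance, and camera index rather than the authors' software.

```python
import cv2
import numpy as np

# Track a user-defined feature point with pyramidal Lucas-Kanade flow and
# flag displacements above a tolerance (all values here are illustrative).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
points = np.array([[320.0, 240.0]], dtype=np.float32).reshape(-1, 1, 2)
initial = points.copy()
TOLERANCE = 5.0  # pixels

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    moved = np.linalg.norm(points - initial, axis=2).max() > TOLERANCE
    color = (0, 0, 255) if moved else (255, 0, 0)  # red on motion, blue when stable
    for p in points.reshape(-1, 2):
        cv2.circle(frame, (int(p[0]), int(p[1])), 4, color, -1)
    cv2.imshow("monitor", frame)
    prev_gray = gray
    if cv2.waitKey(1) == 27:  # Esc quits
        break
```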

  8. The Information Available to a Moving Observer on Shape with Unknown, Isotropic BRDFs.

    PubMed

    Chandraker, Manmohan

    2016-07-01

    Psychophysical studies show motion cues inform about shape even with unknown reflectance. Recent works in computer vision have considered shape recovery for an object of unknown BRDF using light source or object motions. This paper proposes a theory that addresses the remaining problem of determining shape from the (small or differential) motion of the camera, for unknown isotropic BRDFs. Our theory derives a differential stereo relation that relates camera motion to surface depth, which generalizes traditional Lambertian assumptions. Under orthographic projection, we show differential stereo may not determine shape for general BRDFs, but suffices to yield an invariant for several restricted (still unknown) BRDFs exhibited by common materials. For the perspective case, we show that differential stereo yields the surface depth for unknown isotropic BRDF and unknown directional lighting, while additional constraints are obtained with restrictions on the BRDF or lighting. The limits imposed by our theory are intrinsic to the shape recovery problem and independent of choice of reconstruction method. We also illustrate trends shared by theories on shape from differential motion of light source, object or camera, to relate the hardness of surface reconstruction to the complexity of imaging setup.

  9. Systematic Calibration for a Backpacked Spherical Photogrammetry Imaging System

    NASA Astrophysics Data System (ADS)

    Rau, J. Y.; Su, B. W.; Hsiao, K. W.; Jhan, J. P.

    2016-06-01

A spherical camera can observe the environment with an almost 720-degree field of view in one shot, which is useful for augmented reality, environment documentation, and mobile mapping applications. This paper aims to develop a spherical photogrammetry imaging system for the purpose of 3D measurement through a backpacked mobile mapping system (MMS). The equipment comprises a Ladybug-5 spherical camera, a tactical-grade positioning and orientation system (POS), i.e. SPAN-CPT, an odometer, etc. This research aims to directly apply the photogrammetric space intersection technique for 3D mapping from a spherical image stereo-pair. For this purpose, several systematic calibration procedures are required, including lens distortion calibration, relative orientation calibration, boresight calibration for direct georeferencing, and spherical image calibration. Lens distortion is severe in the Ladybug-5 camera's six original images. For spherical image mosaicking from these six original images, we propose using their relative orientation and correcting their lens distortion at the same time. However, the constructed spherical image still contains systematic error, which reduces the 3D measurement accuracy. For direct georeferencing, we establish a ground control field for boresight/lever-arm calibration. We can then apply the calibrated parameters to obtain the exterior orientation parameters (EOPs) of all spherical images. In the end, the 3D positioning accuracy after space intersection is evaluated, including EOPs obtained by the structure from motion method.

  10. Linearized motion estimation for articulated planes.

    PubMed

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
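
    The way linear equality constraints enter a least-squares motion estimate via a Karush-Kuhn-Tucker system can be shown in a few lines of NumPy; the sketch below solves a generic equality-constrained least-squares problem and does not reproduce the paper's articulated-plane parameterization.

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimise ||A x - b||^2 subject to C x = d via the KKT system.

    Stationarity of the Lagrangian L = ||Ax - b||^2 + 2 lam^T (Cx - d) gives
        [ A^T A   C^T ] [ x   ]   [ A^T b ]
        [ C       0   ] [ lam ] = [ d     ]
    which mirrors how articulation equality constraints enter a linear
    least-squares motion estimation problem.
    """
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]
```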

  11. Use of camera drive in stereoscopic display of learning contents of introductory physics

    NASA Astrophysics Data System (ADS)

    Matsuura, Shu

    2011-03-01

Simple 3D physics simulations with stereoscopic display were created as part of introductory physics e-Learning. First, the cameras viewing the 3D world were made controllable by the user, which allows the system and the motions of objects to be observed from any position in the 3D world. Second, cameras were made attachable to one of the moving objects in the simulation so as to observe the relative motion of the other objects. With this option, it was found that users perceive velocity and acceleration more sensibly on a stereoscopic display than on a non-stereoscopic 3D display. Simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were displayed in parallel on the same web page. For observation of the stereogram, the images of the two viewports were superimposed by using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to perceive the characteristics of motion better.

  12. Human detection and motion analysis at security points

    NASA Astrophysics Data System (ADS)

    Ozer, I. Burak; Lv, Tiehan; Wolf, Wayne H.

    2003-08-01

    This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real-time.

  13. Automated Reconstruction of Three-Dimensional Fish Motion, Forces, and Torques

    PubMed Central

    Voesenek, Cees J.; Pieters, Remco P. M.; van Leeuwen, Johan L.

    2016-01-01

    Fish can move freely through the water column and make complex three-dimensional motions to explore their environment, escape or feed. Nevertheless, the majority of swimming studies is currently limited to two-dimensional analyses. Accurate experimental quantification of changes in body shape, position and orientation (swimming kinematics) in three dimensions is therefore essential to advance biomechanical research of fish swimming. Here, we present a validated method that automatically tracks a swimming fish in three dimensions from multi-camera high-speed video. We use an optimisation procedure to fit a parameterised, morphology-based fish model to each set of video images. This results in a time sequence of position, orientation and body curvature. We post-process this data to derive additional kinematic parameters (e.g. velocities, accelerations) and propose an inverse-dynamics method to compute the resultant forces and torques during swimming. The presented method for quantifying 3D fish motion paves the way for future analyses of swimming biomechanics. PMID:26752597

  14. Visualizing the history of living spaces.

    PubMed

    Ivanov, Yuri; Wren, Christopher; Sorokin, Alexander; Kaur, Ishwinder

    2007-01-01

    The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.

  15. Non-contact measurement of helicopter device position in wind tunnels with the use of optical videogrammetry method

    NASA Astrophysics Data System (ADS)

    Kuruliuk, K. A.; Kulesh, V. P.

    2016-10-01

An optical videogrammetry method using one digital camera for non-contact measurement of geometric shape parameters, position, and motion of models and structural elements of aircraft in experimental aerodynamics was developed. Tests using this method to measure the six components (three linear and three angular) of the real position of a helicopter device in wind tunnel flow were conducted. The distance between the camera and the test object was 15 meters. It was shown in practice that, under the conditions of an aerodynamic experiment, the instrumental measurement error (standard deviation) for angular and linear displacements of the helicopter device does not exceed 0.02° and 0.3 mm, respectively. Analysis of the results shows that at the minimum rotor thrust the deviations are systematic and generally within ±0.2 degrees. Deviations of the angle values grow with increasing rotor thrust.

  16. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

Action sport cameras (ASC) have achieved a large consensus for recreational purposes due to ongoing cost decreases, image resolution and frame rate increases, along with plug-and-play usability. Consequently, they have recently been considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes. Characterizing and optimizing the instrumental errors of such a configuration makes the assessment of the instrumental errors of both volumes mandatory. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments. At all assessed resolutions, and in both environments, the reconstruction error (true distance between the two testing markers) was less than 3 mm, and the error related to the working volume diagonal was in the range of 1:2000 (3 × 1.3 × 1.5 m³) to 1:7000 (4.5 × 2.2 × 1.5 m³), in agreement with the literature. Statistically, the 3D accuracy obtained in the in-air environment was poorer (p < 10⁻⁵) than that in the underwater environment, across all the tested camera configurations. Regarding the repeatability of the camera parameters, we found very low variability in both environments (1.7% in-air and 2.9% underwater). This result encourages the use of ASC technology for quantitative reconstruction in both in-air and underwater environments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2015-10-01

The study equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system. The head-mounted camera setup has been established (a modified GoPro Hero 3 with external …)

  18. Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas

    PubMed Central

    2018-01-01

    This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions and better accuracy than the GNSS-only and the loosely-coupled GNSS/vision, 20 percent and 82 percent (in the worst case) respectively, in a deep urban canyon, even in conditions with fewer than four GNSS satellites. PMID:29673230

  19. Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.

    PubMed

    Gakne, Paul Verlaine; O'Keefe, Kyle

    2018-04-17

    This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions and better accuracy than the GNSS-only and the loosely-coupled GNSS/vision, 20 percent and 82 percent (in the worst case) respectively, in a deep urban canyon, even in conditions with fewer than four GNSS satellites.

  20. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (i.e., piezoelectric sensors or accelerometers) or non-contact sensors (i.e., laser vibrometers), which can be costly and time-consuming for inspecting an entire structure. With the popularity of the digital camera and the development of computer vision technology, video cameras offer a viable measurement capability, including high spatial resolution, remote sensing, and low cost. In this study, a damage detection method based on a high-speed camera was proposed. The setup comprises a high-speed camera and a line laser, which can capture the out-of-plane displacement of a cantilever beam. A cantilever beam with an artificial crack was excited, and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work will be discussed.

  1. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  2. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  3. The Accuracy of Conventional 2D Video for Quantifying Upper Limb Kinematics in Repetitive Motion Occupational Tasks

    PubMed Central

    Chen, Chia-Hsiung; Azari, David; Hu, Yu Hen; Lindstrom, Mary J.; Thelen, Darryl; Yen, Thomas Y.; Radwin, Robert G.

    2015-01-01

    Objective Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Background Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross correlation template-matching algorithm for tracking a region of interest on the upper extremities. Methods Ten participants performed a paced load transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N and 17.8 N). Speed and acceleration measured from 2D video were compared against ground truth measurements using 3D infrared motion capture. Results The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed, and 591 mm/s2 for acceleration, and less than 93 mm/s for speed and 656 mm/s2 for acceleration when camera pan and tilt were within ±30 degrees. Conclusion Single-camera 2D video had sufficient accuracy (< 100 mm/s) for evaluating HAL. Practitioner Summary This study demonstrated that 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Government Industrial Hygienists Threshold Limit Value® for repetitive motion when the camera is located within ±30 degrees off the plane of motion when compared against 3D motion capture for a simulated repetitive motion task. PMID:25978764

  4. Projective Structure from Two Uncalibrated Images: Structure from Motion and Recognition

    DTIC Science & Technology

    1992-09-01


  5. Efficient structure from motion on large scenes using UAV with position and pose information

    NASA Astrophysics Data System (ADS)

    Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang

    2018-04-01

In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up the process of large-scene reconstruction from images acquired by Unmanned Aerial Vehicles. We utilize weak pose information and the intrinsic parameters to obtain the projection matrix for each view. As topographic relief can usually be ignored compared to an unmanned aerial vehicle's flight altitude, we assume that the scene is flat and use a weak perspective camera model to get projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching view pairs among the projectively transformed views. A robust global structure from motion method is used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable and computationally efficient. Moreover, the projective transformations between views can also be used to eliminate false matches.

  6. Validation of the Microsoft Kinect® camera system for measurement of lower extremity jump landing and squatting kinematics.

    PubMed

    Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher

    2016-01-01

    Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, overhead squat, and single-leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, and these values improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury-risk populations.

  7. Phase-stepped fringe projection by rotation about the camera's perspective center.

    PubMed

    Huddart, Y R; Valera, J D; Weston, N J; Featherstone, T C; Moore, A J

    2011-09-12

    A technique to produce phase steps in a fringe projection system for shape measurement is presented. Phase steps are produced by introducing relative rotation between the object and the fringe projection probe (comprising a projector and camera) about the camera's perspective center. Relative motion of the object in the camera image can be compensated, because it is independent of the distance of the object from the camera, whilst the phase of the projected fringes is stepped due to the motion of the projector with respect to the object. The technique was validated with a static fringe projection system by moving an object on a coordinate measuring machine (CMM). The alternative approach, of rotating a lightweight and robust CMM-mounted fringe projection probe, is discussed. An experimental accuracy of approximately 1.5% of the projected fringe pitch was achieved, limited by the standard phase-stepping algorithms used rather than by the accuracy of the phase steps produced by the new technique.
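
    For context, the standard phase-stepping algorithms the authors cite as their accuracy limit recover fringe phase from intensity samples taken at fixed phase offsets. With four steps of π/2, where the samples are I_k = A + B cos[φ + (k−1)π/2], the wrapped phase at each pixel is

        \phi = \operatorname{atan2}\left(I_4 - I_2,\; I_1 - I_3\right)

    The rotation technique described above supplies the π/2 steps; the arctangent recovery itself is unchanged.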

  8. Suppressing the image smear of the vibration modulation transfer function for remote-sensing optical cameras.

    PubMed

    Li, Jin; Liu, Zilong; Liu, Si

    2017-02-20

    In on-board photographing processes of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of a camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid this degradation of the total MTF caused by vibrations, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM can transform platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiment shows the M2052 manganese-copper alloy is good enough to suppress image motion below 125 Hz, which is the vibration frequency range of satellite platforms. The camera optical system has a higher MTF after vibration suppression with the M2052 material than before.
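
    The paper's VMTF model itself is not reproduced in this record. For orientation, two standard vibration MTF forms describe the cases usually considered in the imaging literature: linear image motion of extent a during the exposure, and sinusoidal vibration of amplitude D sustained over many cycles, where J_0 is the zeroth-order Bessel function and f is spatial frequency:

        \mathrm{MTF}_{\text{linear}}(f) = \frac{\sin(\pi a f)}{\pi a f},
        \qquad
        \mathrm{MTF}_{\text{sinusoidal}}(f) = J_0(2\pi D f)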

  9. a Prompt Methodology to Georeference Complex Hypogea Environments

    NASA Astrophysics Data System (ADS)

    Troisi, S.; Baiocchi, V.; Del Pizzo, S.; Giannone, F.

    2017-02-01

    Complex underground structures and facilities occupy a wide space under our cities, and most of them are unsurveyed; cable ducts and drainage systems are no exception. Furthermore, several inspection operations are performed in critical air conditions that do not allow, or make more difficult, a conventional survey. In this scenario a prompt methodology to survey and georeference such facilities is often indispensable. A visual-based approach is proposed in this paper; the methodology provides a 3D model of the environment and the path followed by the camera using conventional photogrammetric/structure-from-motion software tools. The key role is played by the camera lens; indeed, a fisheye system was employed to obtain a very wide field of view (FOV) and therefore high overlap among the frames. The camera geometry corresponds to forward motion along the camera axis. Consequently, to avoid instability of the bundle adjustment algorithm, a preliminary calibration of the camera was carried out. A specific case study is reported together with the accuracy achieved.

  10. Real-time full-motion color Flash lidar for target detection and identification

    NASA Astrophysics Data System (ADS)

    Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt

    2015-05-01

    Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence vs. 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery the typical point cloud becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich, geolocated, fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera, effectively creating a high-definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LIDAR system along with typical results over urban and rural areas collected from both rotary- and fixed-wing aircraft. We conclude with a discussion of future work.

  11. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy.

    PubMed

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-03-11

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets is taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions at multiple frequencies simultaneously. The results of the present method showed good agreement with the measurements of two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications in various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility.

  12. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy

    PubMed Central

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets is taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions at multiple frequencies simultaneously. The results of the present method showed good agreement with the measurements of two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications in various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366

  13. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of directly geo-referenced image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphones suffer from poor GPS accuracy, accumulated sensor drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  14. Terrain shape estimation from optical flow, using Kalman filtering

    NASA Astrophysics Data System (ADS)

    Hoff, William A.; Sklair, Cheryl W.

    1990-01-01

    As one moves through a static environment, the visual world as projected on the retina seems to flow past. This apparent motion, called optical flow, can be an important source of depth perception for autonomous robots. An important application is in planetary exploration - the landing vehicle must find a safe landing site in rugged terrain, and an autonomous rover must be able to navigate safely through this terrain. In this paper, we describe a solution to this problem. Image edge points are tracked between frames of a motion sequence, and the range to the points is calculated from the displacement of the edge points and the known motion of the camera. Kalman filtering is used to incrementally improve the range estimates to those points, and provide an estimate of the uncertainty in each range. Errors in camera motion and image point measurement can also be modelled with Kalman filtering. A surface is then interpolated to these points, providing a complete map from which hazards such as steeply sloping areas can be detected. Using the method of extended Kalman filtering, our approach allows arbitrary camera motion. Preliminary results of an implementation are presented, and show that the resulting range accuracy is on the order of 1-2% of the range.
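
    The incremental refinement step described above reduces, in its simplest scalar form, to a one-dimensional Kalman update per tracked point. The sketch below illustrates only that idea; the paper's extended Kalman filter additionally models camera-motion and image-measurement errors, and all numbers here are invented.

        def kalman_range_update(z, r_meas, x, p):
            """Fuse one new range measurement z (variance r_meas) into the
            current estimate x (variance p); returns the refined (x, p)."""
            k = p / (p + r_meas)      # Kalman gain
            x = x + k * (z - x)       # corrected range estimate
            p = (1.0 - k) * p         # uncertainty shrinks with each frame
            return x, p

        # usage: seed with the first triangulated range, refine frame by frame
        x, p = 10.0, 4.0                      # metres, metres^2
        for z in (10.4, 9.8, 10.1):           # ranges from optical flow
            x, p = kalman_range_update(z, 0.5, x, p)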

  15. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    NASA Astrophysics Data System (ADS)

    Lee, Victor R.

    2015-04-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video, can be deployed in such a way as to support students' participation in practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and the use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role the teacher plays in supporting students' thoughtful reflection on and attention to slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.

  16. Samba: a real-time motion capture system using wireless camera sensor networks.

    PubMed

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-03-20

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.

  17. Samba: A Real-Time Motion Capture System Using Wireless Camera Sensor Networks

    PubMed Central

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-01-01

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments. PMID:24658618

  18. Examining wildlife responses to phenology and wildfire using a landscape-scale camera trap network

    Treesearch

    Miguel L. Villarreal; Leila Gass; Laura Norman; Joel B. Sankey; Cynthia S. A. Wallace; Dennis McMacken; Jack L. Childs; Roy Petrakis

    2013-01-01

    Between 2001 and 2009, the Borderlands Jaguar Detection Project deployed 174 camera traps in the mountains of southern Arizona to record jaguar activity. In addition to jaguars, the motion-activated cameras, placed along known wildlife travel routes, recorded occurrences of ~ 20 other animal species. We examined temporal relationships of white-tailed deer (Odocoileus...

  19. Computational cameras for moving iris recognition

    NASA Astrophysics Data System (ADS)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  20. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusion, where an object in partial or full view in one camera is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  1. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen

    2018-02-01

    Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration lead to failure of registration between a panoramic image sequence and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. Firstly, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS- and IMU-aided structure from motion (SfM). The initial EoPs of the panoramic images are obtained at the same time. Secondly, vehicles in panoramic images are extracted by the Faster-RCNN as candidate primitives to be matched with potential corresponding primitives in point clouds according to the initial EoPs. Finally, the translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs based on Particle Swarm Optimization (PSO), resulting in a finer registration between panoramic image sequences and point clouds. Experiments on two challenging urban scenes were conducted to assess the proposed method, and the final registration errors of both scenes were less than three pixels, which demonstrates a high level of automation, robustness and accuracy.

  2. Characterizing Microbial Mat Morphology with Structure from Motion Techniques in Ice-Covered Lake Joyce, McMurdo Dry Valleys, Antarctica

    NASA Astrophysics Data System (ADS)

    Mackey, T. J.; Leidman, S. Z.; Allen, B.; Hawes, I.; Lawrence, J.; Jungblut, A. D.; Krusor, M.; Coleman, L.; Sumner, D. Y.

    2015-12-01

    Structure from Motion (SfM) techniques can provide quantitative morphological documentation of otherwise inaccessible benthic ecosystems such as microbial mats in Lake Joyce, a perennially ice-covered lake of the Antarctic McMurdo Dry Valleys (MDV). Microbial mats are a key ecosystem of MDV lakes, and diverse mat morphologies like pinnacles emerge from interactions among microbial behavior, mineralization, and environmental conditions. Environmental gradients can be isolated to test mat growth models, but assessment of mat morphology along these gradients is complicated by their inaccessibility: the Lake Joyce ice cover is 4-5 m thick, water depths containing diverse pinnacle morphologies are 9-14 m, and relevant mat features are cm-scale. In order to map mat pinnacle morphology in different sedimentary settings, we deployed drop cameras (SeaViewer and GoPro) through 29 GPS-referenced drill holes clustered into six stations along a transect spanning 880 m. Once under the ice cover, a boom containing a second GoPro camera was unfurled and rotated to collect oblique images of the benthic mats within decimetres of the mat-water interface. This setup allowed imaging from all sides over a ~1.5 m diameter area of the lake bottom. Underwater lens parameters were determined for each camera in Agisoft Lens; images were reconstructed and oriented in space with the SfM software Agisoft Photoscan, using the drop camera axis of rotation as up. The reconstructions were compared to downward-facing images to assess accuracy, and similar images of an object with known geometry provided a test for expected error in the reconstructions. Downward-facing images identify decreasing pinnacle abundance in higher-sedimentation settings, and quantitative measurements of 3D reconstructions in KeckCAVES LidarViewer supplement these mat morphological facies with measurements of pinnacle height and orientation. Reconstructions also help isolate confounding variables for mat facies trends with measurements of lake bottom slope and underlying relief that could influence pinnacle growth. Comparison of 3D reconstructions to downward-facing drop camera images demonstrates that SfM is a powerful tool for documenting diverse mat morphologies across environmental gradients in ice-covered lakes.

  3. Identification of hand motion using background subtraction method and extraction of image binary with backpropagation neural network on skeleton model

    NASA Astrophysics Data System (ADS)

    Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati

    2018-03-01

    Capturing and recording human motion is mostly done for applications in sports, health, animation film, criminology, and robotics. This study combines background subtraction with a backpropagation neural network in order to identify hand motion and find movement similarity. The acquisition process used an 8 MP camera recording MP4 video of 48 seconds duration at 30 frames/s; extraction of the video produced 1444 frames for the hand motion identification process. The image processing phases performed are segmentation, feature extraction, and identification. Segmentation uses background subtraction, and the extracted features are used to distinguish one object from another. Feature extraction is performed using motion-based morphology analysis with the seven invariant moments, producing four different motion classes: no object, hand down, hand to the side, and hands up. The identification process recognizes the hand movement using seven inputs. Testing and training with a variety of parameters showed that an architecture with one hundred hidden neurons provides the highest accuracy. This architecture is used to propagate the input values through the implemented system into the user interface. After training, identification of the type of human movement achieved a highest accuracy of 98.5447%.
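
    The pipeline above (background subtraction, seven invariant moments, a backpropagation network with one hundred hidden neurons) can be sketched with standard libraries as below. The OpenCV MOG2 subtractor and the scikit-learn classifier are stand-ins chosen for illustration; the authors' exact preprocessing and training settings are not given in this record.

        import cv2
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        backsub = cv2.createBackgroundSubtractorMOG2()

        def hu_features(frame):
            """Foreground mask -> 7 log-scaled Hu moment invariants."""
            mask = backsub.apply(frame)                       # background subtraction
            _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
            hu = cv2.HuMoments(cv2.moments(binary)).flatten()
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # usual log scaling

        # X: rows of hu_features over labelled frames; y: one of the four
        # classes {no object, hand down, hand to side, hands up}
        clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=2000)
        # clf.fit(X, y); clf.predict(hu_features(frame).reshape(1, -1))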

  4. Intelligent viewing control for robotic and automation systems

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide the capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, single-screen video-graphic user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  5. Time-of-flight depth image enhancement using variable integration time

    NASA Astrophysics Data System (ADS)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

    Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate distance based on a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times, exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change in integration time, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured the moving bar of a metronome with different integration times. The experiment shows the proposed method can effectively remove the motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
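
    One step in the method, estimating a depth transfer function from the joint histogram, can be sketched as below: for each short-integration depth bin, take the most frequent co-occurring long-integration depth. The bin count and depth range are illustrative assumptions not specified in this record.

        import numpy as np

        def depth_transfer(d_short, d_long, bins=256, d_max=10.0):
            """Estimate a lookup table mapping short-integration depths to
            long-integration depths via the mode of the joint histogram."""
            hist, _, long_edges = np.histogram2d(
                d_short.ravel(), d_long.ravel(),
                bins=bins, range=[[0.0, d_max], [0.0, d_max]])
            centers = 0.5 * (long_edges[:-1] + long_edges[1:])
            return centers[np.argmax(hist, axis=1)]   # indexed by short-depth bin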

  6. Pose and motion recovery from feature correspondences and a digital terrain map.

    PubMed

    Lerner, Ronen; Rivlin, Ehud; Rotstein, Héctor P

    2006-09-01

    A novel algorithm for pose and motion estimation using corresponding features and a Digital Terrain Map is proposed. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables the elimination of the ambiguity present in vision-based algorithms for motion recovery. As a consequence, the absolute position and orientation of a camera can be recovered with respect to the external reference frame. In order to do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. Explicit reconstruction of the 3D world is not required. When considering a number of feature points, the resulting constraints can be solved using nonlinear optimization in terms of position, orientation, and motion. Such a procedure requires an initial guess of these parameters, which can be obtained from dead reckoning or any other source. The feasibility of the algorithm is established through extensive experimentation. Performance is compared with a state-of-the-art alternative algorithm, which intermediately reconstructs the 3D structure and then registers it to the DTM. A clear advantage for the novel algorithm is demonstrated in a variety of scenarios.

  7. Comparison of three different techniques for camera and motion control of a teleoperated robot.

    PubMed

    Doisy, Guillaume; Ronen, Adi; Edan, Yael

    2017-01-01

    This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movement and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance and workload metrics, and their evolution as the participants gained experience with the system, were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface.

  8. MonoSLAM: real-time single camera SLAM.

    PubMed

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
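
    The "general motion model for smooth camera movement" mentioned above is, in EKF visual SLAM systems of this kind, typically a constant-velocity model in which unknown accelerations enter as process noise. The sketch below shows that prediction step for a simplified position-plus-velocity state; MonoSLAM's full state also carries orientation and the landmark map, so this is an illustration rather than the published filter.

        import numpy as np

        def predict(x, P, dt, accel_sigma=1.0):
            """Constant-velocity prediction: x = [px, py, pz, vx, vy, vz]."""
            F = np.eye(6)
            F[:3, 3:] = dt * np.eye(3)               # position += velocity * dt
            G = np.vstack([0.5 * dt**2 * np.eye(3),  # how a random acceleration
                           dt * np.eye(3)])          # perturbs the state
            Q = (accel_sigma ** 2) * (G @ G.T)       # process noise covariance
            return F @ x, F @ P @ F.T + Q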

  9. Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles

    PubMed Central

    Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.

    2017-01-01

    Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (i.e., bridges, buildings, etc.) are often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985

  10. Demonstration of a High-Fidelity Predictive/Preview Display Technique for Telerobotic Servicing in Space

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Bejczy, Antal K.

    1993-01-01

    A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds communication time delay has been demonstrated on a large laboratory scale in May 1993, involving the Jet Propulsion Laboratory as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing set-up. The technique is based on a high-fidelity calibration procedure that enables a high-fidelity overlay of 3-D graphics robot arm and object models over given 2-D TV camera images of robot arm and objects. To generate robot arm motions, the operator can confidently interact in real time with the graphics models of the robot arm and objects overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved by this technique for a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin at the insertion of a 45 cm long tool in the servicing task.

  11. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

    The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses a single linear array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a 3D model of the same size. The speed and the AOA (angle of attack) can also be determined subsequently. Experiments were conducted to test the proposed method.

  12. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A binocular vision imaging system, which has a small field of view, cannot reconstruct the 3-D shape of a moving object. We developed a linear array CCD binocular stereo vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and produces accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages of the imaging system. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras are then used to capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects; this work is of significance for measuring the 3-D morphology of objects in motion.

  13. Quantitative underwater 3D motion analysis using submerged video cameras: accuracy analysis and trajectory reconstruction.

    PubMed

    Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L

    2013-01-01

    In this study we aim at investigating the applicability of underwater 3D motion capture based on submerged video cameras in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment, and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed hand motion trajectories in different swimming styles and qualitatively compared them with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle-fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm - 2D plate: 0.73 mm) was comparable to out-of-water results and far superior to the classical DLT results (9.74 mm). Among all the swimmers, the expert swimmer's hand trajectories were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in motion patterns, and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards quantitative 3D underwater motion analysis.
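
    Whatever the calibration route (DLT, wand, or Zhang's method), the reconstruction step reduces to triangulating each marker from two calibrated views. A minimal linear triangulation sketch is shown below; it is the textbook homogeneous least-squares solution, with names chosen for illustration.

        import numpy as np

        def triangulate(P1, P2, u1, u2):
            """P1, P2: 3x4 projection matrices; u1, u2: (x, y) image points."""
            A = np.array([u1[0] * P1[2] - P1[0],
                          u1[1] * P1[2] - P1[1],
                          u2[0] * P2[2] - P2[0],
                          u2[1] * P2[2] - P2[1]])
            _, _, Vt = np.linalg.svd(A)       # least-squares null vector of A
            X = Vt[-1]
            return X[:3] / X[3]               # homogeneous -> Euclidean point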

  14. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles.

    PubMed

    Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F

    2016-09-16

    Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder, so that the motion of the MAV can be estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated into the MAV's navigation system. First, however, the pose between the two sensors is obtained with a proposed improved calibration method. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.

  15. Exploding Balloons, Deformed Balls, Strange Reflections and Breaking Rods: Slow Motion Analysis of Selected Hands-On Experiments

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2011-01-01

    A selection of hands-on experiments from different fields of physics, which happen too fast for the eye or video cameras to properly observe and analyse the phenomena, is presented. They are recorded and analysed using modern high speed cameras. Two types of cameras were used: the first were rather inexpensive consumer products such as Casio…

  16. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method, are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  17. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-07-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
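
    The refraction both records describe is governed by Snell's law at the water-glass-air interface; for a ray leaving the water at angle θ_w to the interface normal and entering air at angle θ_a,

        n_w \sin\theta_w = n_a \sin\theta_a, \qquad n_w \approx 1.33,\; n_a \approx 1.0

    Because the bending grows with the incidence angle, the induced image deformation is nonlinear across the field of view, which is why a single linear model such as the DLT cannot absorb it.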

  18. Mapping Land and Water Surface Topography with instantaneous Structure from Motion

    NASA Astrophysics Data System (ADS)

    Dietrich, J.; Fonstad, M. A.

    2012-12-01

    Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slowly moving objects; objects in motion, however, are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks, and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel, and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include anything where researchers could benefit from temporal surface modeling, such as mass movements, lava flows, and dam removal monitoring. The camera system used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers. The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.

  19. Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.

    PubMed

    Quesada, Luis; León, Alejandro J

    2012-10-01

    Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the wide spread of commercial applications based on motion tracking. In this paper, we present a novel three-degrees-of-freedom motion tracking system that needs no knowledge of the target object and that only requires a single low-budget camera of the kind found installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion-blurred, given that it is opaque, evenly colored, contrasts sufficiently with the background in each frame, and does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal does not impose additional constraints, therefore it allows a market-wide implementation of applications that require the estimation of the three position degrees of freedom of an object.

  20. Brownian Movement and Avogadro's Number: A Laboratory Experiment.

    ERIC Educational Resources Information Center

    Kruglak, Haym

    1988-01-01

    Reports an experimental procedure for studying Einstein's theory of Brownian movement using commercially available latex microspheres and a video camera. Describes how students can monitor sphere motions and determine Avogadro's number. Uses a black and white video camera, microscope, and TV. (ML)
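
    The analysis in such an experiment rests on the Stokes-Einstein relation: the diffusion coefficient D of a sphere of radius r in a fluid of viscosity η at temperature T determines Avogadro's number,

        D = \frac{RT}{6\pi\eta r N_A}
        \quad\Longrightarrow\quad
        N_A = \frac{RT}{6\pi\eta r D},

    with D estimated from the recorded sphere positions via the mean squared displacement, ⟨x²⟩ = 2Dt per coordinate axis.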

  1. Dynamic radionuclide determination of regional left ventricular wall motion using a new digital imaging device

    NASA Technical Reports Server (NTRS)

    Steele, P.; Kirch, D.

    1975-01-01

    In 47 men with arteriographically defined coronary artery disease comparative studies of left ventricular ejection fraction and segmental wall motion were made with radionuclide data obtained from the image intensifier camera computer system and with contrast cineventriculography. The radionuclide data was digitized and the images corresponding to left ventricular end-diastole and end-systole were identified from the left ventricular time-activity curve. The left ventricular end-diastolic and end-systolic images were subtracted to form a silhouette difference image which described wall motion of the anterior and inferior left ventricular segments. The image intensifier camera allows manipulation of dynamically acquired radionuclide data because of the high count rate and consequently improved resolution of the left ventricular image.

  2. [Segmental wall movement of the left ventricle in healthy persons and myocardial infarct patients studied by a catheter-less nuclear medical method (camera-cinematography of the heart)].

    PubMed

    Geffers, H; Sigel, H; Bitter, F; Kampmann, H; Stauch, M; Adam, W E

    1976-08-01

    Camera-Kinematography is a nearly noninvasive method to investigate regional motion of the myocardium, and allows evaluation of the function of the heart. About 20 min after injection of 15-20 mCi of 99mTc human serum albumin, when the tracer is distributed homogeneously within the blood pool, data acquisition starts. Myocardial wall motion is represented in an appropriate quasi-three-dimensional form. In this representation, scars are revealed as "silent" (akinetic) regions, and aneurysms by asynchronous motion. Time-activity curves for arbitrarily chosen regions can be calculated and provide an equivalent of regional volume changes. Sixteen patients with an old infarction were investigated. In fourteen cases the location and extent of regions with abnormal motion could be evaluated. Only two cases of a small posterior wall infarction did not show deviations from the normal contraction pattern.

  3. Multisensory visual servoing by a neural network.

    PubMed

    Wei, G Q; Hirzinger, G

    1999-01-01

    Conventional computer vision methods for determining a robot's end-effector motion based on sensory data need sensor calibration (e.g., camera calibration) and sensor-to-hand calibration (e.g., hand-eye calibration). This involves many computations and even some difficulties, especially when different kinds of sensors are involved. In this correspondence, we present a neural network approach to the motion determination problem without any calibration. Two kinds of sensory data, namely camera images and laser range data, are used as the input to a multilayer feedforward network to learn the direct transformation from the sensory data to the required motions. This provides a practical sensor fusion method. Using a recursive motion strategy and a network correction, we relax the requirement for exactness of the learned transformation. Another important feature of our work is that the goal position can be changed without network retraining. Experimental results show the effectiveness of our method.

  4. CameraHRV: robust measurement of heart rate variability using a camera

    NASA Astrophysics Data System (ADS)

    Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2018-02-01

    The inter-beat interval (the period of the cardiac cycle) changes slightly with every heartbeat; this variation is measured as Heart Rate Variability (HRV). HRV is presumed to arise from interactions between the parasympathetic and sympathetic nervous systems. Therefore, it is sometimes used as an indicator of an individual's stress level. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging photoplethysmography (iPPG) has made vital-sign measurement possible using just a video recording of any exposed skin (such as a person's face). Current signal processing methods for extracting HRV using peak detection perform well for contact-based systems but poorly on iPPG signals. The main reason for this poor performance is that current methods are sensitive to the large noise sources often present in iPPG data. Further, current methods are not robust to the motion artifacts that are common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at the low SNR common in iPPG recordings. CameraHRV combines spatial combination and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal. CameraHRV outperforms other current methods of HRV estimation. Ground-truth data were obtained from an FDA-approved pulse oximeter for validation purposes. On iPPG data, CameraHRV showed an error of 6 milliseconds for low-motion and varying-skin-tone scenarios, an improvement in error of 14%. In high-motion scenarios such as reading, watching and talking, the error was 10 milliseconds.
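
    The frequency-demodulation idea can be sketched as below: band-pass the (spatially combined) iPPG trace around plausible pulse rates, then read the instantaneous frequency from the phase of the analytic signal. The filter order, band, and function names are assumptions for illustration, not the published CameraHRV parameters.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def instantaneous_ibi(ippg, fs):
            """iPPG trace (1D array, sampled at fs Hz) -> inter-beat intervals (s)."""
            b, a = butter(3, [0.7, 3.0], btype="bandpass", fs=fs)  # ~42-180 bpm
            x = filtfilt(b, a, ippg)                   # zero-phase band-pass
            phase = np.unwrap(np.angle(hilbert(x)))    # analytic-signal phase
            inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, per sample
            return 1.0 / np.clip(inst_freq, 0.5, 4.0)       # clamp outliers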

  5. The determination of high-resolution spatio-temporal glacier motion fields from time-lapse sequences

    NASA Astrophysics Data System (ADS)

    Schwalbe, Ellen; Maas, Hans-Gerd

    2017-12-01

    This paper presents a comprehensive method for the determination of glacier surface motion vector fields at high spatial and temporal resolution. These vector fields can be derived from monocular terrestrial camera image sequences and are a valuable data source for glaciological analysis of the motion behaviour of glaciers. The measurement concepts for the acquisition of image sequences are presented, and an automated monoscopic image sequence processing chain is developed. Motion vector fields can be derived with high precision by applying automatic subpixel-accuracy image matching techniques on grey value patterns in the image sequences. Well-established matching techniques have been adapted to the special characteristics of the glacier data in order to achieve high reliability in automatic image sequence processing, including the handling of moving shadows as well as motion effects induced by small instabilities in the camera set-up. Suitable geo-referencing techniques were developed to transform image measurements into a reference coordinate system. The result of monoscopic image sequence analysis is a dense raster of glacier surface point trajectories for each image sequence. Each translation vector component in these trajectories can be determined with an accuracy of a few centimetres for points at a distance of several kilometres from the camera. Extensive practical validation experiments have shown that motion vector and trajectory fields derived from monocular image sequences can be used for the determination of high-resolution velocity fields of glaciers, including the analysis of tidal effects on glacier movement, the investigation of a glacier's motion behaviour during calving events, the determination of the position and migration of the grounding line and the detection of subglacial channels during glacier lake outburst floods.
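
    Subpixel grey-value matching of the kind the processing chain relies on can be illustrated with Fourier-upsampled phase correlation, as in scikit-image below. The patch size and upsampling factor are illustrative; the paper's adapted matching techniques are not reproduced in this record.

        import numpy as np
        from skimage.registration import phase_cross_correlation

        def patch_displacement(img0, img1, y, x, half=32, upsample=100):
            """Subpixel (dy, dx) shift of the patch centred at (y, x)."""
            ref = img0[y - half:y + half, x - half:x + half]
            mov = img1[y - half:y + half, x - half:x + half]
            shift, error, _ = phase_cross_correlation(ref, mov,
                                                      upsample_factor=upsample)
            return shift   # pixels, subpixel-accurate via Fourier upsampling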

  6. Development of a Sunspot Tracking System

    NASA Technical Reports Server (NTRS)

    Taylor, Jaime R.

    1998-01-01

    Large solar flares produce a significant amount of energetic particles which pose a hazard for human activity in space. In the hope of understanding flare mechanisms and thus better predicting solar flares, NASA's Marshall Space Flight Center (MSFC) developed an experimental vector magnetograph (EXVM) polarimeter to measure the Sun's magnetic field. The EXVM will be used to perform ground-based solar observations and will provide a proof of concept for the design of a similar instrument for the Japanese Solar-B space mission. The EXVM typically operates for a period of several minutes, during which there is image motion due to atmospheric fluctuation and telescope wind loading. To optimize EXVM performance, an image-motion compensation device (sunspot tracker) is needed. The sunspot tracker consists of two parts: an image motion determination system and an image deflection system. For image motion determination, a CCD or CID camera is used to digitize an image, then an algorithm is applied to determine the motion. This motion, or error signal, is sent to the image deflection system, which moves the image back to its original location. Both of these systems are under development. Two algorithms are available for sunspot tracking which require the use of only one row and one column of image data. To implement these algorithms, two identical independent systems are being developed, one for each axis of motion. Two CID cameras have been purchased; the data from each camera will be used to determine image motion for each direction. The error signal generated by the tracking algorithm will be sent to an image deflection system consisting of an actuator and a mirror constrained to move about one axis. Magnetostrictive actuators were chosen over piezoelectric actuators to move the mirror because of their larger driving force and greater range of motion. The actuator and mirror mounts are currently under development.
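
    The abstract does not spell out the two tracking algorithms, but a single-axis error signal computed from one row (or column) of pixels can be illustrated with a generic intensity-weighted centroid. The Python sketch below is an assumed stand-in, not either of the actual algorithms.

        import numpy as np

        def axis_error(line, reference_centroid):
            """One-axis tracking error from a single row or column of pixels.
            The sunspot is darker than the surrounding disk, so weight by darkness."""
            darkness = float(line.max()) - line.astype(float)
            centroid = (np.arange(line.size) * darkness).sum() / (darkness.sum() + 1e-12)
            return centroid - reference_centroid  # pixels of image motion on this axis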

  7. Preplanning and Evaluating Video Documentaries and Features.

    ERIC Educational Resources Information Center

    Maynard, Riley

    1997-01-01

    This article presents a ten-part pre-production outline and post-production evaluation that helps communications students more effectively improve video skills. Examines camera movement and motion, camera angle and perspective, lighting, audio, graphics, backgrounds and color, special effects, editing, transitions, and music. Provides a glossary…

  8. Generation of animation sequences of three dimensional models

    NASA Technical Reports Server (NTRS)

    Poi, Sharon (Inventor); Bell, Brad N. (Inventor)

    1990-01-01

    The invention is directed toward a method and apparatus for generating an animated sequence through the movement of three-dimensional graphical models. A plurality of pre-defined graphical models are stored and manipulated in response to interactive commands or by means of a pre-defined command file. The models may be combined as part of a hierarchical structure to represent physical systems without need to create a separate model which represents the combined system. System motion is simulated through the introduction of translation, rotation and scaling parameters upon a model within the system. The motion is then transmitted down through the system hierarchy of models in accordance with hierarchical definitions and joint movement limitations. The present invention also calls for a method of editing hierarchical structure in response to interactive commands or a command file such that a model may be included, deleted, copied or moved within multiple system model hierarchies. The present invention also calls for the definition of multiple viewpoints or cameras which may exist as part of a system hierarchy or as an independent camera. The simulated movement of the models and systems is graphically displayed on a monitor and a frame is recorded by means of a video controller. Multiple movement and hierarchy manipulations are then recorded as a sequence of frames which may be played back as an animation sequence on a video cassette recorder.
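
    The hierarchical motion transmission the patent describes is, in modern terms, a scene graph: each model carries a local translation/rotation/scale transform, and world transforms are composed from parent to child. A minimal Python sketch of that composition (names hypothetical):

        import numpy as np

        class Node:
            """A model in the hierarchy; 'local' holds its translation,
            rotation and scaling as a single 4x4 homogeneous matrix."""
            def __init__(self, name, local=None):
                self.name = name
                self.local = np.eye(4) if local is None else local
                self.children = []

        def propagate(node, parent_world=np.eye(4), out=None):
            """Transmit motion down the hierarchy: world = parent_world @ local."""
            out = {} if out is None else out
            world = parent_world @ node.local
            out[node.name] = world
            for child in node.children:
                propagate(child, world, out)
            return out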

  9. Computing camera heading: A study

    NASA Astrophysics Data System (ADS)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operations, video special effects, multimedia, and lately even internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard, because rotations and translations can have similar effects on the images and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This makes it possible to identify all computational trouble spots beforehand and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.

  10. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after the cameras are calibrated, but the positional relationship between the cameras can change because of vibration, knocks and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technology providing both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimate the fundamental matrix from the feature point correspondences; (ii) compute the essential matrix from the fundamental matrix; (iii) obtain the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves the robustness and accuracy of fundamental matrix estimation. Finally, we perform an experiment computing the relationship of a pair of stereo cameras to demonstrate the accurate performance of the algorithm.
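
    The three-step recipe maps directly onto standard routines. The Python/OpenCV sketch below shows the pipeline under the simplifying assumptions that both intrinsic matrices are known and roughly equal, and with pts_l and pts_r as (N, 2) arrays of matched pixel coordinates; the paper's regional weighted normalization of the fundamental-matrix estimate is not reproduced.

        import cv2
        import numpy as np

        def recalibrate_extrinsics(pts_l, pts_r, K_l, K_r):
            """External parameters from matched points: F -> E -> (R, t)."""
            # (i) Fundamental matrix from correspondences (RANSAC rejects outliers).
            F, mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.999)
            # (ii) Essential matrix from F and the known intrinsics.
            E = K_r.T @ F @ K_l
            # (iii) Decompose E; the cheirality check selects the physical (R, t).
            inl = mask.ravel() == 1
            _, R, t, _ = cv2.recoverPose(E, pts_l[inl], pts_r[inl], K_l)
            return R, t  # note: t is recovered only up to scale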

  11. Three dimensional measurement with an electrically tunable focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng

    2017-03-01

    A liquid crystal microlens array (LCMLA) with an arrayed microhole-pattern electrode, based on nematic liquid crystal materials and fabricated by traditional UV photolithography and wet etching, is presented. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experimental results show that the focal length of the LCMLA can be tuned easily by changing only the root-mean-square value of the applied voltage signal. The developed LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters such as three dimensional (3D) depth, positioning, and motion expression are given, and the depth resolution is discussed in detail. Experiments are carried out to obtain the static and dynamic 3D information of the chosen objects.

  12. Three dimensional measurement with an electrically tunable focused plenoptic camera.

    PubMed

    Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng

    2017-03-01

    A liquid crystal microlens array (LCMLA) with an arrayed microhole-pattern electrode, based on nematic liquid crystal materials and fabricated by traditional UV photolithography and wet etching, is presented. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experimental results show that the focal length of the LCMLA can be tuned easily by changing only the root-mean-square value of the applied voltage signal. The developed LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters such as three dimensional (3D) depth, positioning, and motion expression are given, and the depth resolution is discussed in detail. Experiments are carried out to obtain the static and dynamic 3D information of the chosen objects.

  13. A scanning PIV method for fine-scale turbulence measurements

    NASA Astrophysics Data System (ADS)

    Lawson, John M.; Dawson, James R.

    2014-12-01

    A hybrid technique is presented that combines scanning PIV with tomographic reconstruction to make spatially and temporally resolved measurements of the fine-scale motions in turbulent flows. The technique uses one or two high-speed cameras to record particle images as a laser sheet is rapidly traversed across a measurement volume. This is combined with a fast method for tomographic reconstruction of the particle field for use in conjunction with PIV cross-correlation. The method was tested numerically using DNS data and with experiments in a large mixing tank that produces axisymmetric homogeneous turbulence. A parametric investigation identifies the important parameters for a scanning PIV set-up and provides guidance to the interested experimentalist in achieving the best accuracy. Optimal sheet spacings and thicknesses are reported, and it was found that accurate results could be obtained at quite low scanning speeds. The two-camera method is the most robust to noise, permitting accurate measurements of the velocity gradients and direct determination of the dissipation rate.

  14. Assessment of a visually guided autonomous exploration robot

    NASA Astrophysics Data System (ADS)

    Harris, C.; Evans, R.; Tidey, E.

    2008-10-01

    A system has been developed to enable a robot vehicle to autonomously explore and map an indoor environment using only visual sensors. The vehicle is equipped with a single camera, whose output is wirelessly transmitted to an off-board standard PC for processing. Visual features within the camera imagery are extracted and tracked, and their 3D positions are calculated using a Structure from Motion algorithm. As the vehicle travels, obstacles in its surroundings are identified and a map of the explored region is generated. This paper discusses suitable criteria for assessing the performance of the system by computer-based simulation and practical experiments with a real vehicle. Performance measures identified include the positional accuracy of the 3D map and the vehicle's location, the efficiency and completeness of the exploration and the system reliability. Selected results are presented and the effect of key system parameters and algorithms on performance is assessed. This work was funded by the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence.

  15. Joint Video Stitching and Stabilization from Moving Cameras.

    PubMed

    Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef

    2016-09-08

    In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under these circumstances often appear shaky, and directly applying image stitching methods to shaky videos often produces strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a spatio-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, producing features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted to handle scene parallax. Experimental results demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" has been developed for Adobe After Effects CC2015 to show the processed videos.
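
    At the heart of such systems is solving for a smooth virtual camera path that stays close to the original shaky path. The fragment below is a deliberately reduced 1-D least-squares version of that idea (a quadratic data term plus an acceleration penalty); the paper's joint inter/intra-motion, mesh-based optimization is far richer.

        import numpy as np

        def smooth_camera_path(C, lam=50.0):
            """Virtual path P minimizing ||P - C||^2 + lam * ||second diff of P||^2."""
            n = len(C)
            D2 = np.zeros((n - 2, n))  # second-difference (acceleration) operator
            for i in range(n - 2):
                D2[i, i:i + 3] = [1.0, -2.0, 1.0]
            A = np.eye(n) + lam * D2.T @ D2  # normal equations of the quadratic
            return np.linalg.solve(A, np.asarray(C, dtype=float))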

  16. Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation

    DTIC Science & Technology

    2004-12-01

    …area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. … Camera Calibration: Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) … can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the…

  17. Visual Control for Multirobot Organized Rendezvous.

    PubMed

    Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C

    2012-08-01

    This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework relying on the homography induced by the multirobot system that gives a desired homography to be used to define the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.

  18. A deep proper motion catalog within the Sloan digital sky survey footprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munn, Jeffrey A.; Harris, Hugh C.; Tilleman, Trudy M.

    2014-12-01

    A new proper motion catalog is presented, combining the Sloan Digital Sky Survey (SDSS) with second epoch observations in the r band within a portion of the SDSS imaging footprint. The new observations were obtained with the 90prime camera on the Steward Observatory Bok 90 inch telescope, and the Array Camera on the U.S. Naval Observatory, Flagstaff Station, 1.3 m telescope. The catalog covers 1098 square degrees to r = 22.0, an additional 1521 square degrees to r = 20.9, plus a further 488 square degrees of lesser quality data. Statistical errors in the proper motions range from 5 mas yr⁻¹ at the bright end to 15 mas yr⁻¹ at the faint end, for a typical epoch difference of six years. Systematic errors are estimated to be roughly 1 mas yr⁻¹ for the Array Camera data, and as much as 2-4 mas yr⁻¹ for the 90prime data (though typically less). The catalog also includes a second epoch of r band photometry.

  19. Head motion evaluation and correction for PET scans with 18F-FDG in the Japanese Alzheimer's disease neuroimaging initiative (J-ADNI) multi-center study.

    PubMed

    Ikari, Yasuhiko; Nishio, Tomoyuki; Makishi, Yoko; Miya, Yukari; Ito, Kengo; Koeppe, Robert A; Senda, Michio

    2012-08-01

    Head motion during 30-min (six 5-min frames) brain PET scans starting 30 min post-injection of FDG was evaluated, together with the effect of post hoc motion correction between frames, in the J-ADNI multicenter study carried out in 24 PET centers on a total of 172 subjects: 81 normal subjects, 55 with mild cognitive impairment (MCI) and 36 with mild Alzheimer's disease (AD). Based on the magnitude of the between-frame co-registration parameters, the scans were classified into six levels (A-F) of motion. The effect of motion and its correction was evaluated using the between-frame variation of regional FDG uptake values on ROIs placed over cerebral cortical areas. Although AD patients tended to present larger motion (motion level E or F in 22% of subjects) than MCI (3%) and normal (4%) subjects, non-negligible motion was observed in a small number of subjects in the latter groups as well. The between-frame coefficient of variation (SD/mean) was 0.5% in the frontal, 0.6% in the parietal and 1.8% in the posterior cingulate ROI for scans of motion level A. The respective values were 1.5, 1.4, and 3.6% for scans of motion level F, but were reduced by motion correction to 0.5, 0.4 and 0.8%, respectively. Motion correction changed the ROI value for the posterior cingulate cortex by 11.6% in the case of the severest motion. Substantial head motion occurs in a fraction of subjects in a multicenter setup which includes PET centers lacking sufficient experience in imaging demented patients. A simple frame-by-frame co-registration technique that can be applied to any PET camera model is effective in correcting for motion and improving quantitative capability.
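
    The motion metric used here, the between-frame coefficient of variation of ROI uptake, is simple to state in code; the sketch below assumes one uptake value per 5-min frame.

        import numpy as np

        def between_frame_cov(roi_uptake_per_frame):
            """Coefficient of variation (SD/mean) of an ROI's uptake across
            the six 5-min frames; larger values indicate residual motion."""
            u = np.asarray(roi_uptake_per_frame, dtype=float)
            return u.std(ddof=1) / u.mean()

    For instance, the reported 1.8% for the posterior cingulate ROI at the lowest motion level corresponds to this function returning 0.018.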

  20. Real-time film recording from stroke-written CRT's

    NASA Technical Reports Server (NTRS)

    Hunt, R.; Grunwald, A. J.

    1980-01-01

    Real-time simulation studies often require motion-picture recording of events directly from stroke written cathode-ray tubes (CRT's). Difficulty presented is prevention of "flicker," which results from lack of synchronization between display sequence on CRT and shutter motion of camera. Programmable method has been devised for phasing display sequence to shutter motion, ensuring flicker-free recordings.

  1. Determination of the Static Friction Coefficient from Circular Motion

    ERIC Educational Resources Information Center

    Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.

    2014-01-01

    This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera recording at 240 frames per second, and the…

  2. Using Motion Pictures to Teach Management: Refocusing the Camera Lens through the Infusion Approach to Diversity

    ERIC Educational Resources Information Center

    Bumpus, Minnette A.

    2005-01-01

    Motion pictures and television shows can provide mediums to facilitate the learning of management and organizational behavior theories and concepts. Although the motion pictures and television shows cited in the literature cover a broad range of cinematic categories, racial inclusion is limited. The objectives of this article are to document the…

  3. The Motion Picture and the Teaching of English.

    ERIC Educational Resources Information Center

    Sheridan, Marion C.; And Others

    Written to help a viewer watch a motion picture perceptively, this book explains the characteristics of the film as an art form and examines the role of motion pictures in the English curriculum. Specific topics covered include (1) the technical aspects of the production of films (the order of "shots," camera angle, and point of view), (2) the…

  4. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

    High-speed photography, a rapid sequence of photographs that allows an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography, 16, 35, and 70 mm film and framing rates between 64 and 12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and with programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  5. STS-41 crew is briefed on camera equipment during training session at JSC

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-41 crewmembers are briefed on camera equipment during a training session at JSC. Trainer Judy M. Alexander explains the use of 16mm motion picture equipment to (left to right) Pilot Robert D. Cabana, Mission Specialist (MS) Bruce E. Melnick, and MS Thomas D. Akers.

  6. The frequency response of rat vibrissae to sound.

    PubMed

    Shatz, Lisa F; Christensen, Craig W

    2008-05-01

    The motion of isolated rat vibrissae due to low frequency sound has been modeled and measured with good agreement (within a factor of 2) between the data and the model's predictions. As had been done in previous studies on the response of rat vibrissae to tactile stimulation [Hartmann, M. J., Johnson, N. J., Towal, R. B., and Assad, C., J. Neurosci 23, 6510-6519 (2003) and Neimark, M. A., Andermann, A. L., Hopfield, J. J., and Moore, C. I., J. Neurosci 23, 6449-6509 (2003)] the vibrissae were modeled as thin conical beams. The force of the vibrating air on a vibrissa was modeled using the exact solution for a vibrating infinite cylinder in linear fluid. A finite element method was used to model the motion of a single vibrissa fixed at its base, using the aforementioned fluid force. Values for Young's modulus and vibrissa mass density were taken from a previous study [Neimark et al. (above)]. The model had no freely fitted parameters. Motion of isolated vibrissae was measured using a video camera with microscope. The sound stimulation was created using a stereo speaker connected to a signal generator. The tuning was found to be sharp, with quality factors that varied between 3 and 7, much sharper than the motion of cricket cercal hairs or in vitro inner ear hair bundles.

  7. Aerodynamics of a beetle in take-off flights

    NASA Astrophysics Data System (ADS)

    Lee, Boogeon; Park, Hyungmin; Kim, Sun-Tae

    2015-11-01

    In the present study, we investigate the aerodynamics of a beetle in its take-off flights based on the three-dimensional kinematics of the inner (hindwing) and outer (elytron) wings and of body postures, which are measured with three high-speed cameras at 2000 fps. To track the highly deformable wing motions, we distribute 21 morphological markers and use the modified direct linear transform algorithm for the reconstruction of the measured wing motions. To realize different take-off conditions, we consider two types of take-off flights: one from flat ground and the other from a vertical rod mimicking a tree branch. We first find that the elytron, which flaps passively due to the motion of the hindwing, also has non-negligible wing-kinematic parameters. With the ground, the flapping amplitude of the elytron is reduced and the hindwing changes its flapping angular velocity during the up- and downstrokes. On the other hand, the angle of attack on the elytron and hindwing increases and decreases, respectively, due to the ground. These changes in the wing motion are critically related to aerodynamic force generation, which will be discussed in detail. Supported by a grant to the Bio-Mimetic Robot Research Center funded by the Defense Acquisition Program Administration (UD130070ID).
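
    The reconstruction step rests on direct linear transform (DLT) triangulation: each calibrated camera contributes two linear constraints on a marker's 3-D position. The Python sketch below shows the classic DLT core; the paper's modified DLT additionally handles effects such as lens distortion, which are omitted here.

        import numpy as np

        def triangulate_dlt(P_list, uv_list):
            """Marker 3-D position from its 2-D images in several calibrated
            cameras. P_list: 3x4 projection matrices; uv_list: (u, v) pixels."""
            rows = []
            for P, (u, v) in zip(P_list, uv_list):
                rows.append(u * P[2] - P[0])  # each view adds two equations
                rows.append(v * P[2] - P[1])
            _, _, Vt = np.linalg.svd(np.asarray(rows))
            X = Vt[-1]  # null vector of the stacked linear system
            return X[:3] / X[3]  # homogeneous -> Euclidean coordinates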

  8. Study of human body: Kinematics and kinetics of a martial arts (Silat) performers using 3D-motion capture

    NASA Astrophysics Data System (ADS)

    Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi

    2015-04-01

    Interest in the study of human kinematics goes back very far in human history, driven by curiosity or by the need to understand the complexity of human body motion. Advances in computing technology have made it possible to obtain new and accurate information about human movement. Martial arts (silat) was chosen, and multiple types of movement were studied. This project used cutting-edge 3D motion capture technology to characterize and measure the motion of martial arts (silat) performers. The cameras detect the markers (by infrared reflection) placed around the performer's body (24 markers in total), which appear as dots in the computer software. The detected markers were analyzed using a kinematic-kinetic approach with time as the reference. Graphs of the velocity, acceleration and position of each marker at time t (seconds) were plotted. From the information obtained, further parameters were determined, such as work done, momentum, and the center of mass of the body, using a mathematical approach. These data can be used to develop more effective movements in martial arts, as a contribution to practitioners of the art. Further work can build on this project, such as the analysis of a martial arts competition.
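
    The kinematic quantities described, velocity and acceleration curves per marker and the body's center of mass, follow from finite differences over the tracked positions. A minimal Python sketch (the per-segment mass weights are illustrative assumptions):

        import numpy as np

        def marker_kinematics(pos, fps):
            """pos: (frames, 3) positions of one marker. Central differences
            give velocity (m/s) and acceleration (m/s^2) over time."""
            dt = 1.0 / fps
            vel = np.gradient(pos, dt, axis=0)
            acc = np.gradient(vel, dt, axis=0)
            return vel, acc

        def center_of_mass(markers, masses):
            """markers: (n, 3) marker positions; masses: per-segment weights."""
            m = np.asarray(masses, dtype=float)
            return (m[:, None] * markers).sum(axis=0) / m.sum()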

  9. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  10. Ultrafast Imaging of Electronic Motion in Atoms and Molecules

    DTIC Science & Technology

    2016-01-12

    The charge and duration of the electron pulses were measured with a home-made Faraday cup and a laser-triggered streak camera, respectively; both are retractable and can measure the beam in-situ. The gun was shown to generate pulses … 100 fs …

  11. MO-FG-CAMPUS-JeP3-04: Feasibility Study of Real-Time Ultrasound Monitoring for Abdominal Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Lin; Kien Ng, Sook; Zhang, Ying

    Purpose: Ultrasound is ideal for real-time monitoring in radiotherapy, offering high soft-tissue contrast, no ionizing radiation, portability, and cost effectiveness. Few studies have investigated the clinical application of real-time ultrasound monitoring for abdominal stereotactic body radiation therapy (SBRT). This study aims to demonstrate the feasibility of real-time monitoring of 3D target motion using 4D ultrasound. Methods: An ultrasound probe holding system was designed to allow the clinician to freely move and lock the ultrasound probe. For the phantom study, an abdominal ultrasound phantom was secured on a 2D programmable respiratory motion stage. One side of the stage was elevated relative to the other to generate 3D motion. The motion stage made periodic breath-hold movements. Phantom movement tracked by an infrared camera was considered ground truth. For the volunteer study, three healthy subjects underwent the same setup as for abdominal SBRT with active breath control (ABC). 4D ultrasound B-mode images were acquired for both phantom and volunteers for real-time monitoring, and 10 breath-hold cycles were monitored for each experiment. For the phantom, the target motion tracked by ultrasound was compared with the motion tracked by the infrared camera. For the healthy volunteers, the reproducibility of the ABC breath-hold was evaluated. Results: The volunteer study showed that the ultrasound system fitted well into the clinical SBRT setup. The reproducibility over 10 breath-holds was less than 2 mm in all three directions for all three volunteers. In the phantom study, the motion between inspiration and expiration captured by the camera (ground truth) was 2.35±0.02 mm, 1.28±0.04 mm and 8.85±0.03 mm in the LR, AP and SI directions, respectively; the motion monitored by ultrasound was 2.21±0.07 mm, 1.32±0.12 mm and 9.10±0.08 mm, respectively. The motion monitoring error in any direction was less than 0.5 mm. Conclusion: The volunteer study proved the clinical feasibility of real-time ultrasound monitoring for abdominal SBRT, and the phantom and volunteer ABC studies demonstrated sub-millimeter accuracy of 3D motion monitoring.

  12. Optical Mapping of Membrane Potential and Epicardial Deformation in Beating Hearts.

    PubMed

    Zhang, Hanyu; Iijima, Kenichi; Huang, Jian; Walcott, Gregory P; Rogers, Jack M

    2016-07-26

    Cardiac optical mapping uses potentiometric fluorescent dyes to image membrane potential (Vm). An important limitation of conventional optical mapping is that contraction is usually arrested pharmacologically to prevent motion artifacts from obscuring Vm signals. However, these agents may alter electrophysiology, and by abolishing contraction, also prevent optical mapping from being used to study coupling between electrical and mechanical function. Here, we present a method to simultaneously map Vm and epicardial contraction in the beating heart. Isolated perfused swine hearts were stained with di-4-ANEPPS and fiducial markers were glued to the epicardium for motion tracking. The heart was imaged at 750 Hz with a video camera. Fluorescence was excited with cyan or blue LEDs on alternating camera frames, thus providing a 375-Hz effective sampling rate. Marker tracking enabled the pixel(s) imaging any epicardial site within the marked region to be identified in each camera frame. Cyan- and blue-elicited fluorescence have different sensitivities to Vm, but other signal features, primarily motion artifacts, are common. Thus, taking the ratio of fluorescence emitted by a motion-tracked epicardial site in adjacent frames removes artifacts, leaving Vm (excitation ratiometry). Reconstructed Vm signals were validated by comparison to monophasic action potentials and to conventional optical mapping signals. Binocular imaging with additional video cameras enabled marker motion to be tracked in three dimensions. From these data, epicardial deformation during the cardiac cycle was quantified by computing finite strain fields. We show that the method can simultaneously map Vm and strain in a left-sided working heart preparation and can image changes in both electrical and mechanical function 5 min after the induction of regional ischemia. By allowing high-resolution optical mapping in the absence of electromechanical uncoupling agents, the method relieves a long-standing limitation of optical mapping and has potential to enhance new studies in coupled cardiac electromechanics.
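
    The ratiometric cancellation is worth seeing in miniature: fluorescence sampled at one motion-tracked site under the two excitation colors shares the motion artifact but not the voltage sensitivity, so the frame-to-frame ratio isolates Vm. A toy Python version, with the signal names assumed:

        import numpy as np

        def excitation_ratio(f_cyan, f_blue):
            """Ratio of fluorescence from the same tracked epicardial site in
            adjacent frames (cyan vs. blue excitation). Motion artifacts are
            common to both and cancel; the Vm-dependent component remains."""
            f_cyan = np.asarray(f_cyan, dtype=float)
            f_blue = np.asarray(f_blue, dtype=float)
            return f_cyan / (f_blue + 1e-12)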

  13. Exploring the Language of Films.

    ERIC Educational Resources Information Center

    Roller, George E.

    A film study course written for the Dade County, Fla. public schools is described which covers techniques of motion pictures and their historical development. Techniques include the "language of pictures" (distance shots, angle shots, color, lighting, arrangement), the "language of motion" (camera movement, subject movement),…

  14. The Effects of Applying Game-Based Learning to Webcam Motion Sensor Games for Autistic Students' Sensory Integration Training

    ERIC Educational Resources Information Center

    Li, Kun-Hsien; Lou, Shi-Jer; Tsai, Huei-Yin; Shih, Ru-Chu

    2012-01-01

    This study aims to explore the effects of applying game-based learning to webcam motion sensor games for autistic students' sensory integration training. The research participants were three autistic students aged from six to ten. A webcam, as the research tool, was connected to internet games to engage in motion sensor…

  15. Simultaneous two-view epipolar geometry estimation and motion segmentation by 4D tensor voting.

    PubMed

    Tong, Wai-Shun; Tang, Chi-Keung; Medioni, Gérard

    2004-09-01

    We address the problem of simultaneous two-view epipolar geometry estimation and motion segmentation from nonstatic scenes. Given a set of noisy image pairs containing matches of n objects, we propose an unconventional, efficient, and robust method, 4D tensor voting, for estimating the unknown n epipolar geometries, and segmenting the static and motion matching pairs into n independent motions. By considering the 4D isotropic and orthogonal joint image space, only two tensor voting passes are needed, and a very high noise to signal ratio (up to five) can be tolerated. Epipolar geometries corresponding to multiple, rigid motions are extracted in succession. Only two uncalibrated frames are needed, and no simplifying assumption (such as affine camera model or homographic model between images) other than the pin-hole camera model is made. Our novel approach consists of propagating a local geometric smoothness constraint in the 4D joint image space, followed by global consistency enforcement for extracting the fundamental matrices corresponding to independent motions. We have performed extensive experiments to compare our method with some representative algorithms to show that better performance on nonstatic scenes is achieved. Results on challenging data sets are presented.

  16. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles

    PubMed Central

    Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F.

    2016-01-01

    Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder so that the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated in the MAV’s navigation system. However, first, the knowledge about the pose between both sensors is obtained by proposing an improved calibration method. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results. PMID:27649203
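
    For the pose step, OpenCV's solver can stand in for the P3P machinery: given 3-D points from the laser rangefinder expressed in the sensor frame and their projections in the camera image, the pose follows from four correspondences. A hedged sketch (the data layout is assumed, and this is not the authors' code):

        import cv2
        import numpy as np

        def pose_from_p3p(pts_3d, pts_2d, K):
            """Camera pose from exactly four 3D-to-2D correspondences
            (OpenCV's P3P solver requires four points to disambiguate)."""
            ok, rvec, tvec = cv2.solvePnP(
                pts_3d.astype(np.float32), pts_2d.astype(np.float32),
                K, None, flags=cv2.SOLVEPNP_P3P)
            R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
            return ok, R, tvec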

  17. The phantom robot - Predictive displays for teleoperation with time delay

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Kim, Won S.; Venema, Steven C.

    1990-01-01

    An enhanced teleoperation technique for time-delayed bilateral teleoperator control is discussed. The control technique selected for time delay is based on the use of a high-fidelity graphics phantom robot that is being controlled in real time (without time delay) against the static task image. Thus, the motion of the phantom robot image on the monitor predicts the motion of the real robot. The real robot's motion will follow the phantom robot's motion on the monitor with the communication time delay implied in the task. Real-time high-fidelity graphics simulation of a PUMA arm is generated and overlaid on the actual camera view of the arm. A simple camera calibration technique is used for calibrated graphics overlay. A preliminary experiment is performed with the predictive display by using a very simple tapping task. The results with this simple task indicate that predictive display enhances the human operator's telemanipulation task performance significantly during free motion when there is a long time delay. It appears, however, that either two-view or stereoscopic predictive displays are necessary for general three-dimensional tasks.

  18. Study of the detail content of Apollo orbital photography

    NASA Technical Reports Server (NTRS)

    Kinzly, R. E.

    1972-01-01

    The results achieved during a study of the detail content of Apollo orbital photography are reported. The effect of residual motion smear and of image reproduction processes upon the detail content of lunar surface imagery obtained from the orbiting command module is assessed. Data and conclusions obtained from the Apollo 8, 12, 14 and 15 missions are included. For the Apollo 8, 12 and 14 missions, the bracket-mounted Hasselblad camera had no mechanism internal to the camera for motion compensation. If the motion of the command module were left totally uncompensated, these photographs would exhibit a ground smear varying from 12 to 27 meters, depending upon the focal length of the lens and the exposure time. During the photographic sequences, motion compensation was attempted by firing the attitude control system of the spacecraft at a rate to compensate for the motion relative to the lunar surface. The residual smear in selected frames of imagery was assessed using edge analysis methods to obtain an achieved modulation transfer function (MTF), which was compared to a baseline MTF.
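
    The quoted 12-27 m range is consistent with simple smear arithmetic, smear = ground speed x exposure time, under assumed (not reported) numbers: a command-module ground speed near 1.6 km/s in low lunar orbit and exposures of roughly 1/125 to 1/60 s.

        # Back-of-envelope check with assumed values (not from the report):
        v_ground = 1600.0  # m/s, approximate ground speed in low lunar orbit
        for t_exp in (1.0 / 125.0, 1.0 / 60.0):
            print(f"exposure {t_exp:.4f} s -> uncompensated smear {v_ground * t_exp:.1f} m")
        # prints ~12.8 m and ~26.7 m, bracketing the quoted 12-27 m range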

  19. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for binocular stereo vision system based on a multi-view template and alternative bundle adjustment is presented in this paper. The proposed method could be achieved by taking several photos on a specially designed calibration template that has diverse encoded points in different orientations. In this paper, the method utilized the existing algorithm used for monocular camera calibration to obtain the initialization, which involves a camera model, including radial lens distortion and tangential distortion. We created a reference coordinate system based on the left camera coordinate to optimize the intrinsic parameters of left camera through alternative bundle adjustment to obtain optimal values. Then, optimal intrinsic parameters of the right camera can be obtained through alternative bundle adjustment when we create a reference coordinate system based on the right camera coordinate. We also used all intrinsic parameters that were acquired to optimize extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters were obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels with the relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.

  20. Traffic monitoring with distributed smart cameras

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert

    2012-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has big potential. Today the automated analysis of traffic situations is still in its infancy--the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software, one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world co-ordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for all the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results which we have achieved so far.
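
    The sparse-optical-flow stage is the kind of computation OpenCV's pyramidal Lucas-Kanade tracker does directly; the Python sketch below estimates per-feature motion between two low-resolution grayscale frames. It illustrates the principle only, not the deployed system's code.

        import cv2
        import numpy as np

        def sparse_flow(prev_gray, gray, max_corners=200):
            """Track corner features between frames; returns their new
            positions and per-feature speed in pixels per frame."""
            pts = cv2.goodFeaturesToTrack(prev_gray, max_corners, 0.01, 7)
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good_old = pts[status == 1]
            good_new = nxt[status == 1]
            speed = np.linalg.norm(good_new - good_old, axis=1)
            return good_new, speed  # geometric calibration maps these to world speeds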

  1. Comparison of parameters of modern cooled and uncooled thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    During the design of a system employing thermal cameras one always faces the problem of choosing the camera types best suited for the task. In many cases the choice is far from optimal, and there are several reasons for that. System designers often favor tried and tested solutions they are used to; they do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive for the quality of the images generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements; instead, the real settings used in normal camera operation were applied to obtain realistic camera performance figures. For example, there were significant differences between the measured values of noise parameters and the catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing of thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

  2. Immersive viewing engine

    NASA Astrophysics Data System (ADS)

    Schonlau, William J.

    2006-05-01

    An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera equipped teleoperated vehicle. The conventional approach where imagery from a narrow field camera onboard the vehicle is presented to the user on a small rectangular screen is contrasted with an immersive viewing system where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation and presented via wide field eyewear display, approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion free viewing of the region appropriate to the user's current head pose is presented and consideration is given to providing the user with stereo viewing generated from depth map information derived using stereo from motion algorithms.
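
    The resampling model can be sketched compactly: cast a ray through each output pixel of the desired pinhole view, rotate it by the sensed head orientation, and look up the corresponding panorama pixel. The Python below assumes an equirectangular panorama and nearest-neighbour sampling, both simplifications of the system described.

        import numpy as np

        def view_from_pano(pano, R, fov_deg, out_w, out_h):
            """Resample an equirectangular panorama into a pinhole view for
            head rotation R (3x3, camera-to-panorama convention assumed)."""
            f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)
            xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                                 np.arange(out_h) - out_h / 2.0)
            rays = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
            rays = rays @ R.T  # rotate each pixel ray into the panorama frame
            lon = np.arctan2(rays[..., 0], rays[..., 2])
            lat = np.arcsin(rays[..., 1] / np.linalg.norm(rays, axis=-1))
            h, w = pano.shape[:2]
            u = ((lon / (2.0 * np.pi) + 0.5) * (w - 1)).astype(int)
            v = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
            return pano[v, u]  # nearest-neighbour lookup of the view image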

  3. Choosing a Motion Detector.

    ERIC Educational Resources Information Center

    Ballard, David M.

    1990-01-01

    Examines the characteristics of three types of motion detectors: Doppler radar, infrared, and ultrasonic wave, and how they are used on school buses to prevent students from being killed by their own school bus. Other safety devices cited are bus crossing arms and a camera monitor system. (MLF)

  4. Controlling Brownian motion of single protein molecules and single fluorophores in aqueous buffer.

    PubMed

    Cohen, Adam E; Moerner, W E

    2008-05-12

    We present an Anti-Brownian Electrokinetic trap (ABEL trap) capable of trapping individual fluorescently labeled protein molecules in aqueous buffer. The ABEL trap operates by tracking the Brownian motion of a single fluorescent particle in solution, and applying a time-dependent electric field designed to induce an electrokinetic drift that cancels the Brownian motion. The trapping strength of the ABEL trap is limited by the latency of the feedback loop. In previous versions of the trap, this latency was set by the finite frame rate of the camera used for video-tracking. In the present system, the motion of the particle is tracked entirely in hardware (without a camera or image-processing software) using a rapidly rotating laser focus and lock-in detection. The feedback latency is set by the finite rate of arrival of photons. We demonstrate trapping of individual molecules of the protein GroEL in buffer, and we show confinement of single fluorophores of the dye Cy3 in water.

  5. SOFIA tracking image simulation

    NASA Astrophysics Data System (ADS)

    Taylor, Charles R.; Gross, Michael A. K.

    2016-09-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) tracking camera simulator is a component of the Telescope Assembly Simulator (TASim). TASim is a software simulation of the telescope optics, mounting, and control software. Currently in its fifth major version, TASim is relied upon for telescope operator training, mission planning and rehearsal, and mission control and science instrument software development and testing. TASim has recently been extended for hardware-in-the-loop operation in support of telescope and camera hardware development and control and tracking software improvements. All three SOFIA optical tracking cameras are simulated, including the Focal Plane Imager (FPI), which has recently been upgraded to the status of a science instrument that can be used on its own or in parallel with one of the seven infrared science instruments. The simulation includes tracking camera image simulation of starfields based on the UCAC4 catalog at real-time rates of 4-20 frames per second. For its role in training and planning, it is important for the tracker image simulation to provide images with a realistic appearance and response to changes in operating parameters. For its role in tracker software improvements, it is vital to have realistic signal and noise levels and precise star positions. The design of the software simulation for precise subpixel starfield rendering (including radial distortion), realistic point-spread function as a function of focus, tilt, and collimation, and streaking due to telescope motion will be described. The calibration of the simulation for light sensitivity, dark and bias signal, and noise will also be presented.

  6. Implementation of a Real-Time Stacking Algorithm in a Photogrammetric Digital Camera for Uavs

    NASA Astrophysics Data System (ADS)

    Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C.

    2017-08-01

    In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (such as cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points on the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, and block diagrams of the described architecture. The stacked images obtained on real surveys show no visible impairment. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the time needed to write an image to the storage device. An interesting by-product of this algorithm is the 3D rotation between poses, estimated by a photogrammetric method, which can be used to recalibrate the gyrometers of the IMU in real time.
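
    A desktop approximation of the stacking pipeline can be written with stock OpenCV pieces; in the sketch below, ORB matching stands in for the camera's FAST-plus-IMU-aided template matching, and a homography stands in for the estimated geometric relation between the first and the Nth image. It is an illustration, not the on-board implementation.

        import cv2
        import numpy as np

        def stack_frames(frames):
            """Register each short-exposure grayscale frame to the first via a
            homography, then average to emulate one long exposure."""
            ref = frames[0]
            acc = ref.astype(np.float64)
            orb = cv2.ORB_create(1000)
            kp0, des0 = orb.detectAndCompute(ref, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            for frame in frames[1:]:
                kp, des = orb.detectAndCompute(frame, None)
                matches = matcher.match(des0, des)
                src = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
                dst = np.float32([kp0[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
                H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
                acc += cv2.warpPerspective(frame, H, ref.shape[::-1]).astype(np.float64)
            return (acc / len(frames)).astype(ref.dtype)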

  7. A reduced-dimensionality approach to uncovering dyadic modes of body motion in conversations.

    PubMed

    Gaziv, Guy; Noy, Lior; Liron, Yuvalal; Alon, Uri

    2017-01-01

    Face-to-face conversations are central to human communication and a fascinating example of joint action. Beyond verbal content, one of the primary ways in which information is conveyed in conversations is body language. Body motion in natural conversations has been difficult to study precisely due to the large number of coordinates at play. There is need for fresh approaches to analyze and understand the data, in order to ask whether dyads show basic building blocks of coupled motion. Here we present a method for analyzing body motion during joint action using depth-sensing cameras, and use it to analyze a sample of scientific conversations. Our method consists of three steps: defining modes of body motion of individual participants, defining dyadic modes made of combinations of these individual modes, and lastly defining motion motifs as dyadic modes that occur significantly more often than expected given the single-person motion statistics. As a proof-of-concept, we analyze the motion of 12 dyads of scientists measured using two Microsoft Kinect cameras. In our sample, we find that out of many possible modes, only two were motion motifs: synchronized parallel torso motion in which the participants swayed from side to side in sync, and still segments where neither person moved. We find evidence of dyad individuality in the use of motion modes. For a randomly selected subset of 5 dyads, this individuality was maintained for at least 6 months. The present approach to simplify complex motion data and to define motion motifs may be used to understand other joint tasks and interactions. The analysis tools developed here and the motion dataset are publicly available.

  8. A reduced-dimensionality approach to uncovering dyadic modes of body motion in conversations

    PubMed Central

    Noy, Lior; Liron, Yuvalal; Alon, Uri

    2017-01-01

    Face-to-face conversations are central to human communication and a fascinating example of joint action. Beyond verbal content, one of the primary ways in which information is conveyed in conversations is body language. Body motion in natural conversations has been difficult to study precisely due to the large number of coordinates at play. There is need for fresh approaches to analyze and understand the data, in order to ask whether dyads show basic building blocks of coupled motion. Here we present a method for analyzing body motion during joint action using depth-sensing cameras, and use it to analyze a sample of scientific conversations. Our method consists of three steps: defining modes of body motion of individual participants, defining dyadic modes made of combinations of these individual modes, and lastly defining motion motifs as dyadic modes that occur significantly more often than expected given the single-person motion statistics. As a proof-of-concept, we analyze the motion of 12 dyads of scientists measured using two Microsoft Kinect cameras. In our sample, we find that out of many possible modes, only two were motion motifs: synchronized parallel torso motion in which the participants swayed from side to side in sync, and still segments where neither person moved. We find evidence of dyad individuality in the use of motion modes. For a randomly selected subset of 5 dyads, this individuality was maintained for at least 6 months. The present approach to simplify complex motion data and to define motion motifs may be used to understand other joint tasks and interactions. The analysis tools developed here and the motion dataset are publicly available. PMID:28141861

  9. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

ISS041-E-013683 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  10. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

ISS041-E-013687 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  11. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

ISS041-E-013693 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  12. Imaging of optically diffusive media by use of opto-elastography

    NASA Astrophysics Data System (ADS)

    Bossy, Emmanuel; Funke, Arik R.; Daoudi, Khalid; Tanter, Mickael; Fink, Mathias; Boccara, Claude

    2007-02-01

We present a camera-based optical detection scheme designed to detect the transient motion created by the acoustic radiation force in elastic media. An optically diffusive tissue-mimicking phantom was illuminated with coherent laser light, and a high-speed camera (2 kHz frame rate) was used to acquire and cross-correlate consecutive speckle patterns. Time-resolved transient decorrelations of the optical speckle were measured as the result of localised motion induced in the medium by the radiation force and the subsequently propagating shear waves. As opposed to classical acousto-optic techniques, which are sensitive to vibrations induced by compressional waves at ultrasonic frequencies, the proposed technique is sensitive only to the low-frequency transient motion induced in the medium by the radiation force. It therefore provides a way to assess both optical and shear mechanical properties.
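
    The frame-to-frame speckle cross-correlation described here can be sketched in a few lines. The following is a minimal illustration assuming a (T, H, W) stack of camera frames, not the authors' implementation; a dip in the trace marks the transient event.

        # Sketch: normalized correlation of consecutive speckle frames.
        import numpy as np

        def frame_correlation(f1, f2):
            """Zero-mean normalized cross-correlation of two frames."""
            a = f1 - f1.mean()
            b = f2 - f2.mean()
            return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

        def decorrelation_trace(frames):
            return np.array([frame_correlation(frames[i], frames[i + 1])
                             for i in range(len(frames) - 1)])

        # Synthetic demo: identical frames correlate at 1.0; a transient
        # perturbation on frame 5 lowers the correlation around it.
        rng = np.random.default_rng(1)
        frames = np.repeat(rng.random((1, 64, 64)), 10, axis=0)
        frames[5] += 0.5 * rng.random((64, 64))
        print(decorrelation_trace(frames).round(3))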

  13. Ultrafast electron microscopy integrated with a direct electron detection camera.

    PubMed

    Lee, Young Min; Kim, Young Jae; Kim, Ye-Jin; Kwon, Oh-Hoon

    2017-07-01

In the past decade, we have witnessed the rapid growth of the field of ultrafast electron microscopy (UEM), which provides intuitive means to watch atomic and molecular motions of matter. Yet, because of the limited current of the pulsed electron beam resulting from space-charge effects, observations have mainly been made of periodic motions of crystalline structures of hundreds of nanometers or larger, by stroboscopic imaging at high repetition rates. Here, we develop an advanced UEM with robust capabilities for circumventing the present limitations by integrating, for the first time, a direct electron detection camera, which allows for imaging at low repetition rates. This approach is expected to promote UEM to a more powerful platform to visualize molecular and collective motions and dissect fundamental physical, chemical, and materials phenomena in space and time.

  14. Instrumentation for Infrared Airglow Clutter.

    DTIC Science & Technology

    1987-03-10

gain, and filter position to the Camera Head, and monitors these parameters as well as preamp video. GAZER is equipped with a Lenzar wide angle, low... Specifications/Parameters: VIDEO SENSOR: Camera: LENZAR Intensicon-8 LLLTV using 2nd gen micro-channel intensifier and proprietary camera tube

  15. Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras

    DTIC Science & Technology

    1990-04-01

    poor resolution and a very limited working volume [Wan90]. 4 OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. Each...Nor88] Northern Digital. Trade literature on Optotrak - Northern Digital’s Three Dimensional Optical Motion Tracking and Analysis System. Northern Digital

  16. Motion Sickness When Driving With a Head-Slaved Camera System

    DTIC Science & Technology

    2003-02-01

    YPR-765 under armour (Report TM-97-A026). Soesterberg, The Netherlands: TNO Human Factors Research Institute. Van Erp, J.B.F., Padmos, P. & Tenkink, E...Institute. Van Erp, J.B.F., Van den Dobbelsteen, J.J. & Padmos, P. (1998). Improved camera-monitor system for driving YPR-765 under armour (Report TM-98

  17. An HDR imaging method with DTDI technology for push-broom cameras

    NASA Astrophysics Data System (ADS)

    Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin

    2018-03-01

Conventionally, high dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, it is hard to apply this technique to push-broom remote sensing cameras. For the sake of HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method which can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area-array CMOS (complementary metal oxide semiconductor) sensor with digital-domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, thereby taking more than one picture with different exposures. A new HDR image is then achieved by fusing the two original images with a simple algorithm. In the experiment, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is relative motion between the cameras and scenes.
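
    A minimal sketch of two-exposure fusion in the spirit of the method follows; the abstract does not spell out the fusion rule, so the saturation-based fallback below is an assumption. It also checks that the reported 26.02 dB gain corresponds to roughly a twenty-fold linear increase in dynamic range.

        # Sketch: fuse a short and a long exposure into a linear HDR image.
        import numpy as np

        def fuse_hdr(short, long_, ratio, sat=0.95 * 4095):
            """short/long_: 12-bit images of the same scene, exposure ratio
            `ratio` (long/short); use the long exposure where unsaturated,
            otherwise the short exposure scaled by the ratio."""
            short = short.astype(np.float64)
            long_ = long_.astype(np.float64)
            return np.where(long_ < sat, long_ / ratio, short)

        short = np.array([[100, 4000]])
        long_ = np.array([[1600, 4095]])       # second pixel saturated
        print(fuse_hdr(short, long_, 16))      # -> [[100. 4000.]]

        # A 26.02 dB dynamic-range gain is a linear factor of about 20:
        print(round(10 ** (26.02 / 20)))       # -> 20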

  18. Mars Odyssey Observes Martian Moons

    NASA Image and Video Library

    2018-02-22

    Phobos and Deimos, the moons of Mars, are seen by the Mars Odyssey orbiter's Thermal Emission Imaging System, or THEMIS, camera. The images were taken in visible-wavelength light. THEMIS also recorded thermal-infrared imagery in the same scan. The apparent motion is due to progression of the camera's pointing during the 17-second span of the February 15, 2018, observation, not from motion of the two moons. This was the second observation of Phobos by Mars Odyssey; the first was on September 29, 2017. Researchers have been using THEMIS to examine Mars since early 2002, but the maneuver turning the orbiter around to point the camera at Phobos was developed only recently. The distance to Phobos from Odyssey during the observation was about 3,489 miles (5,615 kilometers). The distance to Deimos from Odyssey during the observation was about 12,222 miles (19,670 kilometers). An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA22248

  19. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock, and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to the position where the shuttlecock falls and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some of its members move behind a flying shuttlecock, which is a kind of background noise and makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by the method of stereo imaging with two high-speed cameras.
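
    The two computational ingredients named here, stereo triangulation and landing-point prediction, can be sketched as follows. P1 and P2 are assumed 3x4 projection matrices, and the ballistic fall ignores aerodynamic drag, which is in fact substantial for a shuttlecock, so this is only illustrative.

        # Sketch: DLT triangulation and a drag-free landing prediction.
        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear triangulation of one 3D point from two pixel views."""
            A = np.vstack([x1[0] * P1[2] - P1[0],
                           x1[1] * P1[2] - P1[1],
                           x2[0] * P2[2] - P2[0],
                           x2[1] * P2[2] - P2[1]])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]

        def landing_point(pos, vel, g=9.81):
            """Ballistic fall to z = 0 from position/velocity (z up)."""
            t = (vel[2] + np.sqrt(vel[2] ** 2 + 2 * g * pos[2])) / g
            return pos[:2] + vel[:2] * t

        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])      # camera at origin
        P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])  # camera at x = 1
        print(triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0)))  # -> [0, 0, 5]
        print(landing_point(np.array([0., 0., 3.]), np.array([2., 1., 1.])))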

  20. Optical Indoor Positioning System Based on TFT Technology.

    PubMed

    Gőzse, István

    2015-12-24

    A novel indoor positioning system is presented in the paper. Similarly to the camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: Apart from its low computational demands, it is insensitive to the disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low.

  1. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
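
    A minimal sketch of the control idea, keeping the gaze point near the image center; the interface, gain and deadband below are hypothetical, not the Aesop control API.

        # Sketch: proportional pan/tilt servo driven by a gaze point.
        def gaze_servo_step(gaze_xy, image_wh, gain=0.002, deadband=40):
            """Return (pan, tilt) rate commands from a gaze point in pixels."""
            ex = gaze_xy[0] - image_wh[0] / 2.0
            ey = gaze_xy[1] - image_wh[1] / 2.0
            if abs(ex) < deadband and abs(ey) < deadband:
                return 0.0, 0.0        # gaze already near center: hold still
            return -gain * ex, -gain * ey

        print(gaze_servo_step((900, 300), (1280, 720)))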

  2. STS-36 Mission Specialist Thuot operates 16mm camera on OV-104's middeck

    NASA Image and Video Library

    1990-03-03

STS-36 Mission Specialist (MS) Pierre J. Thuot operates a 16mm ARRIFLEX motion picture camera mounted on the open airlock hatch via a bracket. Thuot uses the camera to record activity of his fellow STS-36 crewmembers on the middeck of Atlantis, Orbiter Vehicle (OV) 104. Positioned between the airlock hatch and the starboard wall-mounted sleep restraints, Thuot, wearing a FAIRFAX t-shirt, squints into the camera's eyepiece. Thuot and four other astronauts spent four days, 10 hours and 19 minutes aboard OV-104 for the Department of Defense (DOD) devoted mission.

  3. STS-36 Mission Specialist Thuot operates 16mm camera on OV-104's middeck

    NASA Technical Reports Server (NTRS)

    1990-01-01

STS-36 Mission Specialist (MS) Pierre J. Thuot operates a 16mm ARRIFLEX motion picture camera mounted on the open airlock hatch via a bracket. Thuot uses the camera to record activity of his fellow STS-36 crewmembers on the middeck of Atlantis, Orbiter Vehicle (OV) 104. Positioned between the airlock hatch and the starboard wall-mounted sleep restraints, Thuot, wearing a FAIRFAX t-shirt, squints into the camera's eyepiece. Thuot and four other astronauts spent four days, 10 hours and 19 minutes aboard OV-104 for the Department of Defense (DOD) devoted mission.

  4. Foot and ankle kinematics in patients with posterior tibial tendon dysfunction.

    PubMed

    Ness, Mary Ellen; Long, Jason; Marks, Richard; Harris, Gerald

    2008-02-01

    The purpose of this study is to provide a quantitative characterization of gait in patients with posterior tibial tendon dysfunction (PTTD), including temporal-spatial and kinematic parameters, and to compare these results to those of a Normal population. Our hypothesis was that segmental foot kinematics were significantly different in multiple segments across multiple planes. A 15 camera motion analysis system and weight-bearing radiographs were employed to evaluate 3D foot and ankle motion in a population of 34 patients with PTTD (30 females, 4 males) and 25 normal subjects (12 females, 13 males). The four-segment Milwaukee Foot Model (MFM) with radiographic indexing was used to analyze foot and ankle motion and provided kinematic data in the sagittal, coronal and transverse planes as well as temporal-spatial information. The temporal-spatial parameters revealed statistically significant deviations in all four metrics for the PTTD population. Stride length, cadence and walking speed were all significantly diminished, while stance duration was significantly prolonged (p<0.0125). Significant kinematic differences were noted between the groups (p<0.002), including: (1) diminished dorsiflexion and increased eversion of the hindfoot; (2) decreased plantarflexion of the forefoot, as well as abduction shift and loss of the varus thrust in the forefoot; and (3) decreased range of motion (ROM) with diminished dorsiflexion of the hallux. The study provides an impetus for improved orthotic and bracing designs to aid in the care of distal foot segments during the treatment of PTTD. It also provides the basis for future evaluation of surgical efficacy. The course of this investigation may ultimately lead to improved treatment planning methods, including orthotic and operative interventions.

  5. System and method for generating motion corrected tomographic images

    DOEpatents

Gleason, Shaun S. [Knoxville, TN]; Goddard, James S., Jr.

    2012-05-01

A method and related system for generating motion corrected tomographic images includes the steps of illuminating a region of interest (ROI) to be imaged, the ROI being part of an unrestrained live subject and having at least three spaced-apart optical markers thereon. Simultaneous images of the markers are acquired from different angles by a first and a second camera. Motion data comprising the 3D position and orientation of the markers relative to an initial reference position are then calculated. Motion corrected tomographic data are then obtained from the ROI using the motion data, and motion corrected tomographic images are generated therefrom.
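
    The geometric core of such a system is recovering a rigid transform from at least three matched marker positions. A standard least-squares (Kabsch/Procrustes) sketch is shown below, assuming the markers yield matched (N, 3) coordinate sets; this is the textbook technique, not necessarily the patent's exact procedure.

        # Sketch: least-squares rigid transform from matched 3D markers.
        import numpy as np

        def rigid_transform(ref, cur):
            """ref, cur: (N, 3) marker coordinates, N >= 3.
            Returns R, t with cur ~= ref @ R.T + t."""
            ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
            H = (ref - ref_c).T @ (cur - cur_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflection
            R = Vt.T @ D @ U.T
            t = cur_c - R @ ref_c
            return R, t

        ref = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
        Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg
        cur = ref @ Rz.T + [0.1, 0.2, 0.0]
        R, t = rigid_transform(ref, cur)
        print(np.allclose(cur, ref @ R.T + t))   # -> True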

  6. Teasing Apart Complex Motions using VideoPoint

    NASA Astrophysics Data System (ADS)

    Fischer, Mark

    2002-10-01

    Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane will be discussed. Methods for extracting the desired object motion will be given as well as suggestions for shooting more easily analyzable video clips.

  7. Automatic 3D relief acquisition and georeferencing of road sides by low-cost on-motion SfM

    NASA Astrophysics Data System (ADS)

    Voumard, Jérémie; Bornemann, Perrick; Malet, Jean-Philippe; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-04-01

3D terrain relief acquisition is important for a large part of the geosciences. Several methods have been developed to digitize terrains, such as total stations, LiDAR, GNSS or photogrammetry. To digitize road (or rail track) sides along long sections, mobile spatial imaging systems or UAVs are commonly used. In this project, we compare a still fairly new method, the on-motion SfM technique, with traditional terrain digitizing techniques (terrestrial laser scanning, traditional SfM, UAS imaging solutions, GNSS surveying systems and total stations). The on-motion SfM technique generates 3D spatial data by photogrammetric processing of images taken from a moving vehicle. Our mobile system consists of six action cameras placed on a vehicle. Four fisheye cameras mounted on a mast on the vehicle roof are placed 3.2 meters above the ground. Three of them have a GNSS chip providing geotagged images. Two pictures were acquired every second by each camera. 4K-resolution fisheye videos were also used to extract 8.3M non-geotagged pictures. All these pictures are then processed with the Agisoft PhotoScan Professional software. Results from the on-motion SfM technique are compared with results from classical SfM photogrammetry on a 500-meter-long alpine track. They were also compared with mobile laser scanning data on the same road section. First results seem to indicate that slope structures are well observable up to decimetric accuracy. For the georeferencing, the planimetric (XY) accuracy of a few meters is much better than the altimetric (Z) accuracy; there is indeed a Z-coordinate shift of a few tens of meters between the GoPro cameras and the Garmin camera. This makes it necessary to give greater freedom to the altimetric coordinates in the processing software. Benefits of this low-cost on-motion SfM method are: 1) a simple setup to use in the field (easy to switch between vehicle types such as car, train, bike, etc.), 2) a low cost and 3) an automatic georeferencing of 3D point clouds. Main disadvantages are: 1) results are less accurate than those from a LiDAR system, 2) heavy image processing and 3) a short acquisition distance.

  8. Creative Film-Making.

    ERIC Educational Resources Information Center

    Smallman, Kirk

    The fundamentals of motion picture photography are introduced with a physiological explanation for the illusion of motion in a film. Film stock formats and emulsions, camera features, and lights are listed and described. Various techniques of exposure control are illustrated in terms of their effects. Photographing action with a stationary or a…

  9. A Vision-Based Motion Sensor for Undergraduate Laboratories.

    ERIC Educational Resources Information Center

    Salumbides, Edcel John; Maristela, Joyce; Uy, Alfredson; Karremans, Kees

    2002-01-01

    Introduces an alternative method to determine the mechanics of a moving object that uses computer vision algorithms with a charge-coupled device (CCD) camera as a recording device. Presents two experiments, pendulum motion and terminal velocity, to compare results of the alternative and conventional methods. (YDS)

  10. A software-based tool for video motion tracking in the surgical skills assessment landscape.

    PubMed

    Ganni, Sandeep; Botden, Sanne M B I; Chmarra, Magdalena; Goossens, Richard H M; Jakimowicz, Jack J

    2018-01-16

The use of motion tracking has been proven to provide an objective assessment in surgical skills training. Current systems, however, require the use of additional equipment or specialised laparoscopic instruments and cameras to extract the data. The aim of this study was to determine the possibility of using a software-based solution to extract the data. 6 expert and 23 novice participants performed a basic laparoscopic cholecystectomy procedure in the operating room. The recorded videos were analysed using Kinovea 0.8.15 and the following parameters were calculated: path length, average instrument movement and number of sudden or extreme movements. The analysed data showed that experts had significantly shorter path length (median 127 cm vs. 187 cm, p = 0.01), smaller average movements (median 0.40 cm vs. 0.32 cm, p = 0.002) and fewer sudden movements (median 14.00 vs. 21.61, p = 0.001) than their novice counterparts. The use of software-based video motion tracking of laparoscopic cholecystectomy is a simple and viable method enabling objective assessment of surgical performance. It provides clear discrimination between expert and novice performance.
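
    The three reported measures can be computed directly from a tracked instrument-tip trajectory. The sketch below assumes positions already scaled to centimeters, and the "sudden movement" threshold is illustrative rather than the study's definition.

        # Sketch: path length, mean step and sudden-movement count.
        import numpy as np

        def motion_metrics(xy, fps=25.0, sudden_thresh=5.0):
            """xy: (T, 2) tracked positions in cm. Returns total path
            length (cm), mean per-frame movement (cm) and the number of
            frames whose speed exceeds sudden_thresh cm/s."""
            steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
            speeds = steps * fps
            return steps.sum(), steps.mean(), int((speeds > sudden_thresh).sum())

        t = np.linspace(0, 2 * np.pi, 100)
        xy = np.column_stack([np.cos(t), np.sin(t)])   # 1 cm-radius circle
        print(motion_metrics(xy))                      # path length ~ 2*pi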

  11. Pedestrian Detection by Laser Scanning and Depth Imagery

    NASA Astrophysics Data System (ADS)

    Barsi, A.; Lovas, T.; Molnar, B.; Somogyi, A.; Igazvolgyi, Z.

    2016-06-01

Pedestrian flow is much less regulated and controlled than vehicle traffic. Estimating flow parameters would support many safety, security and commercial applications. The current paper discusses a method that enables acquiring information on pedestrian movements without disturbing or changing their motion. A profile laser scanner and a depth camera were applied to capture the geometry of the moving people as time series. Procedures have been developed to derive complex flow parameters, such as count, volume, walking direction and velocity, from laser-scanned point clouds. Since no images are captured of the pedestrians' faces, no privacy issues are raised. The paper includes an accuracy analysis of the estimated parameters based on video footage as reference. Due to the dense point clouds, detailed geometry analysis has been conducted to obtain the height and shoulder width of pedestrians and to detect whether luggage is being carried. The derived parameters support safety (e.g. detecting critical pedestrian density in mass events), security (e.g. detecting prohibited baggage in endangered areas) and commercial applications (e.g. counting pedestrians at all entrances/exits of a shopping mall).

  12. Real-Time External Respiratory Motion Measuring Technique Using an RGB-D Camera and Principal Component Analysis †

    PubMed Central

    Wijenayake, Udaya; Park, Soon-Yong

    2017-01-01

Accurate tracking and modeling of internal and external respiratory motion in the thoracic and abdominal regions of a human body is a highly discussed topic in external beam radiotherapy treatment. Errors in target/normal tissue delineation and dose calculation, and the increased exposure of healthy tissues to high radiation doses, are some of the problems caused by inaccurate tracking of the respiratory motion. Many related works have been introduced for respiratory motion modeling, but a majority of them highly depend on radiography/fluoroscopy imaging, wearable markers or surgical node implanting techniques. In this article, we propose a new respiratory motion tracking approach by exploiting the advantages of an RGB-D camera. First, we create a patient-specific respiratory motion model using principal component analysis (PCA), removing the spatial and temporal noise of the input depth data. Then, this model is utilized for real-time external respiratory motion measurement with high accuracy. Additionally, we introduce a marker-based depth frame registration technique to limit the measuring area to an anatomically consistent region, which helps to handle patient movements during the treatment. We achieved a 0.97 correlation compared to a spirometer and a 0.53 mm average error considering a laser line scanning result as the ground truth. As future work, we will use this accurate measurement of external respiratory motion to generate a correlated motion model that describes the movements of internal tumors. PMID:28792468
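
    A minimal sketch of the PCA step, assuming a (T, H, W) depth sequence over a chest/abdomen region of interest: the first principal component of the mean-removed, flattened frames commonly captures the dominant breathing motion. This illustrates the general technique, not the paper's full pipeline.

        # Sketch: extract a breathing signal as the first PCA component.
        import numpy as np

        def respiratory_signal(depth):
            T = depth.shape[0]
            X = depth.reshape(T, -1).astype(np.float64)
            X -= X.mean(axis=0)                 # remove the static anatomy
            U, S, Vt = np.linalg.svd(X, full_matrices=False)
            return U[:, 0] * S[0]               # per-frame score on PC 1

        # Synthetic demo: a 0.25 Hz breathing-like depth oscillation.
        t = np.linspace(0, 10, 200)
        depth = 1.0 + 0.01 * np.sin(2 * np.pi * 0.25 * t)[:, None, None] \
                * np.ones((1, 32, 32))
        sig = respiratory_signal(depth)
        r = np.corrcoef(sig, np.sin(2 * np.pi * 0.25 * t))[0, 1]
        print(round(float(r) ** 2, 3))          # -> 1.0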

  13. Motion Evaluation for Rehabilitation Training of the Disabled

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Young; Park, Jun; Lim, Cheol-Su

In this paper, a motion evaluation technique for rehabilitation training is introduced. Motion recognition technologies have been developed for determining matching motions in the training set. However, for training motion evaluation we need to measure how well and how much of the motion has been followed. We employed a Finite State Machine as a framework for motion evaluation. For similarity analysis, we used weighted angular value differences, although any template matching algorithm may be used. For robustness under illumination changes, IR LEDs and cameras with IR-pass filters were used. The developed technique was successfully used for rehabilitation training of the disabled. Therapists appraised the system as practically useful.

  14. Practical use of high-speed cameras for research and development within the automotive industry: yesterday and today

    NASA Astrophysics Data System (ADS)

    Steinmetz, Klaus

    1995-05-01

Within the automotive industry, especially in the development and improvement of safety systems, we find many highly accelerated motions that cannot be followed, and consequently cannot be analyzed, by the human eye. For the vehicle safety tests at AUDI, which are performed as 'Crash Tests', 'Sled Tests' and 'Static Component Tests', 'Stalex', 'Hycam' and 'Locam' cameras are in use. Nowadays, automobile production is inconceivable without the use of high-speed cameras.

  15. STS-31 crew activity on the middeck of the Earth-orbiting Discovery, OV-103

    NASA Image and Video Library

    1990-04-29

STS031-05-002 (24-29 April 1990) --- A 35mm camera with a "fish eye" lens captured this high-angle image on Discovery's middeck. Astronaut Kathryn D. Sullivan works with the IMAX camera in the foreground, while Astronaut Steven A. Hawley consults a checklist in the corner. An Arriflex motion picture camera records a student ion arc experiment in apparatus mounted on a stowage locker. The experiment was the project of Gregory S. Peterson, currently a student at Utah State University.

  16. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

Three methods are proposed to assist an operator viewing a telemanipulator on a video monitor in a control station when the video image is generated by a movable video camera in the remote workspace of the telemanipulator. Monitors are rotated or shifted and/or the images in them are transformed to adjust the coordinate systems of the scenes visible to the operator according to the motions of the cameras and/or the operator's preferences. This reduces the operator's workload and probability of error by obviating the need for mental transformations of coordinates during operation. The methods can be applied in outer space, undersea, in the nuclear industry, in surgery, in entertainment, and in manufacturing.

  17. On a novel low cost high accuracy experimental setup for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Discetti, Stefano; Ianiro, Andrea; Astarita, Tommaso; Cardone, Gennaro

    2013-07-01

    This work deals with the critical aspects related to cost reduction of a Tomo PIV setup and to the bias errors introduced in the velocity measurements by the coherent motion of the ghost particles. The proposed solution consists of using two independent imaging systems composed of three (or more) low speed single frame cameras, which can be up to ten times cheaper than double shutter cameras with the same image quality. Each imaging system is used to reconstruct a particle distribution in the same measurement region, relative to the first and the second exposure, respectively. The reconstructed volumes are then interrogated by cross-correlation in order to obtain the measured velocity field, as in the standard tomographic PIV implementation. Moreover, differently from tomographic PIV, the ghost particle distributions of the two exposures are uncorrelated, since their spatial distribution is camera orientation dependent. For this reason, the proposed solution promises more accurate results, without the bias effect of the coherent ghost particles motion. Guidelines for the implementation and the application of the present method are proposed. The performances are assessed with a parametric study on synthetic experiments. The proposed low cost system produces a much lower modulation with respect to an equivalent three-camera system. Furthermore, the potential accuracy improvement using the Motion Tracking Enhanced MART (Novara et al 2010 Meas. Sci. Technol. 21 035401) is much higher than in the case of the standard implementation of tomographic PIV.

  18. CCDs in the Mechanics Lab--A Competitive Alternative (Part II).

    ERIC Educational Resources Information Center

    Pinto, Fabrizio

    1995-01-01

    Describes a system of interactive astronomy whereby nonscience students are able to acquire their own images from a room remotely linked to a telescope. Briefly discusses some applications of Charge-Coupled Device cameras (CCDs) in teaching free fall, projectile motion, and the motion of the pendulum. (JRH)

  19. Video Analysis of Muscle Motion

    ERIC Educational Resources Information Center

    Foster, Boyd

    2004-01-01

    In this article, the author discusses how video cameras can help students in physical education and sport science classes successfully learn and present anatomy and kinesiology content at levels. Video analysis of physical activity is an excellent way to expand student knowledge of muscle location and function, planes and axes of motion, and…

  20. Integrating motion-detection cameras and hair snags for wolverine identification

    Treesearch

    Audrey J. Magoun; Clinton D. Long; Michael K. Schwartz; Kristine L. Pilgrim; Richard E. Lowell; Patrick Valkenburg

    2011-01-01

    We developed an integrated system for photographing a wolverine's (Gulo gulo) ventral pattern while concurrently collecting hair for microsatellite DNA genotyping. Our objectives were to 1) test the system on a wild population of wolverines using an array of camera and hair-snag (C&H) stations in forested habitat where wolverines were known to occur, 2)...

  1. Vapour Pressure and Adiabatic Cooling from Champagne: Slow-Motion Visualization of Gas Thermodynamics

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2012-01-01

    The recent introduction of inexpensive high-speed cameras offers a new experimental approach to many simple but fast-occurring events in physics. In this paper, the authors present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects…

  2. The World in Slow Motion: Using a High-Speed Camera in a Physics Workshop

    ERIC Educational Resources Information Center

    Dewanto, Andreas; Lim, Geok Quee; Kuang, Jianhong; Zhang, Jinfeng; Yeo, Ye

    2012-01-01

    We present a physics workshop for college students to investigate various physical phenomena using high-speed cameras. The technical specifications required, the step-by-step instructions, as well as the practical limitations of the workshop, are discussed. This workshop is also intended to be a novel way to promote physics to Generation-Y…

  3. Robust Parallel Motion Estimation and Mapping with Stereo Cameras in Underground Infrastructure

    NASA Astrophysics Data System (ADS)

    Liu, Chun; Li, Zhengning; Zhou, Yuan

    2016-06-01

We developed a novel robust motion estimation method for localization and mapping in underground infrastructure using a pre-calibrated rigid stereo camera rig. Localization and mapping in underground infrastructure is important for safety, yet it is also nontrivial, since most underground infrastructures have poor lighting conditions and featureless structure. Overcoming these difficulties, we found that a parallel system is more efficient than the EKF-based SLAM approach, since a parallel system divides the motion estimation and 3D mapping tasks into separate threads, eliminating the data-association problem, which is quite an issue in SLAM. Moreover, the motion estimation thread takes advantage of a state-of-the-art robust visual odometry algorithm which is highly functional under low illumination and provides accurate pose information. We designed and built an unmanned vehicle and used the vehicle to collect a dataset in an underground garage. The parallel system was evaluated on this dataset. Motion estimation results indicated a relative position error of 0.3%, and 3D mapping results showed a mean position error of 13 cm; off-line processing reduced the position error to 2 cm. The evaluation showed that our system is capable of robust motion estimation and accurate 3D mapping in poorly illuminated and featureless underground environments.

  4. Kinematic analysis of preterm newborns' spontaneous movements for postural activity assessment.

    PubMed

    Halek, Jan; Muckova, Anita; Svoboda, Zdenek; Janura, Miroslav; Marikova, Jana; Horakova, Katerina; Kantor, Lumir; Nemcova, Nina

    2015-12-01

    The objectives of this pilot study were to assess the potential use of 3D videography for analyzing the motion of the body center of mass (COM) in newborns and to determine differences in spontaneous movements between preterm and full-term infants. The group comprised 10 preterm newborns (gestational age at birth between 26 and 37 weeks; birth weight 800 to 2960 g; gestational age at the time of examination 34 to 39 weeks) and 10 full-term infants (gestational week 38 to 41; birth weight 2810 to 4360 g). To determine the range of motion of the COM, 3D videography was used (2 cameras, 25 Hz). When recording their movements, the infants were in the supine position, calm and awake. The recordings were processed using the APAS software. Selected points on the body were marked to obtain data for calculating the basic parameters of COM trajectories. The range of motion of the COM in both craniocaudal and anteroposterior directions was significantly greater in premature infants (P < 0.05 and P < 0.01, respectively) than in full-term babies. The variability of motion of the COM was significantly greater in the craniocaudal (P < 0.01) and anteroposterior (P < 0.05) directions in preterm babies. This was also valid for the velocity of motion of the COM in the craniocaudal direction (P < 0.05). 3D videography can be used for experimental assessment of motor behavior in preterm infants. Basic kinematic characteristics of the motion of the COM (range, variability, velocity) are greater in preterm infants.
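
    The three kinematic descriptors compared between the groups (range, variability, velocity) reduce to simple computations on the COM trajectory. The sketch below assumes a (T, 3) trajectory sampled at 25 Hz; the axis indexing is an assumption for illustration.

        # Sketch: range, variability and mean speed of a COM trajectory.
        import numpy as np

        def com_descriptors(com, fps=25.0, axis=0):
            """com: (T, 3) positions in meters; axis: 0 = craniocaudal,
            1 = anteroposterior (labeling assumed here)."""
            x = com[:, axis]
            v = np.gradient(x) * fps
            return {"range": x.max() - x.min(),      # range of motion
                    "variability": x.std(),          # spatial variability
                    "mean_speed": np.abs(v).mean()}  # velocity magnitude

        rng = np.random.default_rng(2)
        com = np.cumsum(rng.normal(0, 0.001, (250, 3)), axis=0)  # toy data
        print(com_descriptors(com))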

  5. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    NASA Astrophysics Data System (ADS)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

A High-Speed Motion-Picture Study of Normal Combustion, Knock and Preignition in a Spark-Ignition Engine

    NASA Technical Reports Server (NTRS)

    Rothrock, A M; Spencer, R C; Miller, Cearcy D

    1941-01-01

Combustion in a spark-ignition engine was investigated by means of the NACA high-speed motion-picture camera. This camera is operated at a speed of 40,000 photographs a second and therefore makes possible the study of changes that take place in intervals as short as 0.000025 second. When the motion pictures are projected at the normal speed of 16 frames a second, any rate of movement shown is slowed down 2500 times. Photographs are presented of normal combustion, of combustion from preignition, and of knock both with and without preignition. The photographs of combustion show that knock may be preceded by a period of exothermic reaction in the end zone that persists for a time interval of as much as 0.0006 second. The knock takes place in 0.00005 second or less.
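
    The timing figures quoted above check out directly:

        # Inter-frame interval and slow-motion factor of the NACA camera.
        print(1 / 40000)      # -> 2.5e-05 s, i.e. 0.000025 second
        print(40000 / 16)     # -> 2500.0, the slow-motion factor at 16 fps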

  7. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

A wide variety of applications (from industrial to entertainment) have a need for reliable and accurate 3D information about the motion of an object and its parts. Very often the movement is rather fast, as in cases of vehicle movement, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system was developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 machine vision cameras to acquire video sequences of object motion. All cameras work in synchronized mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurately calculating the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.

  8. Inner hair cell stereocilia movements captured in-situ by a high-speed camera with subpixel image processing

    NASA Astrophysics Data System (ADS)

    Wang, Yanli; Puria, Sunil; Steele, Charles R.; Ricci, Anthony J.

    2018-05-01

    Mechanical stimulation of the stereocilia hair bundles of the inner and outer hair cells (IHCs and OHCs, respectively) drives IHC synaptic release and OHC electromotility. The modes of hair-bundle motion can have a dramatic influence on the electrophysiological responses of the hair cells. The in vivo modes of motion are, however, unknown for both IHC and OHC bundles. In this work, we are developing technology to investigate the in situ hair-bundle motion in excised mouse cochleae, for which the hair bundles of the OHCs are embedded in the tectorial membrane but those of the IHCs are not. Motion is generated by pushing onto the stapes at 1 kHz with a glass probe coupled to a piezo stack, and recorded using a high-speed camera at 10,000 frames per second. The motions of individual IHC stereocilia and the cell boundary are analyzed using 2D and 1D Gaussian fitting algorithms, respectively. Preliminary results show that the IHC bundle moves mainly in the radial direction and exhibits a small degree of splay, and that the stereocilia in the second row move less than those in the first row, even in the same focal plane.

  9. Vision-based control for flight relative to dynamic environments

    NASA Astrophysics Data System (ADS)

    Causey, Ryan Scott

The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capability through more advanced systems that are useful for missions requiring autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing, such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform, for which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates the open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.

  10. Camera calibration for multidirectional flame chemiluminescence tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun

    2017-04-01

Flame chemiluminescence tomography (FCT), which combines computerized tomography theory and multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolutions. One critical step of FCT is to record the projections by multiple cameras from different view angles. High accuracy reconstructions require that the extrinsic parameters (positions and orientations) and intrinsic parameters (especially the image distances) of the cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method is presented for FCT, and a 3-D calibration pattern was designed to solve for the parameters. The precision of the method was evaluated by reprojecting feature points to the cameras with the calibration results. The maximum root-mean-square error is 1.42 pixels for the feature points' positions and 0.0064 mm for the image distance. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the cameras' calibration results.
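
    The evaluation step named here, reprojection of known feature points, reduces to the following computation; the toy projection matrix is an assumption and lens distortion is omitted.

        # Sketch: RMS reprojection error through a pinhole projection.
        import numpy as np

        def rms_reprojection_error(P, pts3d, pts2d):
            """P: 3x4 projection matrix; pts3d: (N, 3); pts2d: (N, 2)."""
            X = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # homogeneous
            proj = (P @ X.T).T
            proj = proj[:, :2] / proj[:, 2:3]                 # perspective divide
            return float(np.sqrt(((proj - pts2d) ** 2).sum(axis=1).mean()))

        P = np.hstack([np.eye(3), np.zeros((3, 1))])          # toy camera
        pts3d = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 3.0]])
        pts2d = np.array([[0.05, 0.10], [-0.10, 0.0333]])
        print(rms_reprojection_error(P, pts3d, pts2d))        # ~ 0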

  11. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    PubMed Central

    Evangelista, Dennis J.; Ray, Dylan D.; Hedrick, Tyson L.

    2016-01-01

Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts. PMID:27444791

  12. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects

    PubMed Central

    Lambers, Martin; Kolb, Andreas

    2017-01-01

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data. PMID:29271888

  13. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    PubMed

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.

  14. Walking variations in healthy women wearing high-heeled shoes: Shoe size and heel height effects.

    PubMed

    Di Sipio, Enrica; Piccinini, Giulia; Pecchioli, Cristiano; Germanotta, Marco; Iacovelli, Chiara; Simbolotti, Chiara; Cruciani, Arianna; Padua, Luca

    2018-05-03

The use of high heels is widespread in modern society in professional and social contexts. The literature shows that wearing high heels can produce injurious effects on several structures, from the toes to the pelvis. No studies have considered shoe length as a factor impacting walking with high heels. The aim of this study is to evaluate walking parameters in young healthy women wearing high heels, considering not only the heel height but also the foot/shoe size. We evaluated spatio-temporal, kinematic and kinetic data, collected using an 8-camera motion capture system, in a sample of 21 healthy women in three different walking conditions: 1) barefoot, 2) wearing 12 cm high-heeled shoes independently of shoe size, and 3) wearing shoes with heel height based on shoe size, keeping the ankles' plantar flexion angle constant. The main outcome measures were: spatio-temporal parameters, gait harmony measurement, range of motion, maximal flexion and extension values, and power and moment of the lower limb joints. Comparing the three walking conditions, the mixed ANOVA test showed significant differences between both high-heeled conditions (variable and constant height) and barefoot in spatio-temporal, kinematic and kinetic parameters. Regardless of the shoe size, both heeled conditions presented a similar gait pattern and were responsible for negative effects on walking parameters. Considering our results and the relevance of the heel height, further studies are needed to identify a threshold over which wearing high heels could cause harmful effects, independently of the foot/shoe size. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    NASA Astrophysics Data System (ADS)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.

  16. An Automatic Procedure for Combining Digital Images and Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Moussa, W.; Abdel-Wahab, M.; Fritsch, D.

    2012-07-01

Besides improving both the geometry and the visual quality of the model, the integration of close-range photogrammetry and terrestrial laser scanning techniques aims at filling gaps in laser scanner point clouds to avoid modeling errors, reconstructing more details in higher resolution and recovering simple structures with less geometric detail. Thus, within this paper a flexible approach for the automatic combination of digital images and laser scanner data is presented. Our approach comprises two methods for data fusion. The first method starts with a marker-free registration of digital images based on a point-based environment model (PEM) of a scene, which stores the 3D laser scanner point clouds associated with intensity and RGB values. The PEM allows the extraction of accurate control information for the direct computation of absolute camera orientations with redundant information by means of accurate space resection methods. In order to use the computed relations between the digital images and the laser scanner data, an extended Helmert (seven-parameter) transformation is introduced and its parameters are estimated. Prior to that, in the second method, the local relative orientation parameters of the camera images are calculated by means of an optimized Structure and Motion (SaM) reconstruction method. Applying the determined transformation parameters then yields absolutely oriented images in relation to the laser scanner data. With the resulting absolute orientations we have employed robust dense image reconstruction algorithms to create oriented dense image point clouds, which are automatically combined with the laser scanner data to form a complete detailed representation of a scene. Examples of different data sets are shown and experimental results demonstrate the effectiveness of the presented procedures.

  17. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

    In the last decade, the improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization that demand real-time video capturing at extremely high frame rates with high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point access.
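
    One plausible reading of modulo-PCM on a single pixel stream is sketched below, as an illustration of the general modulo-coding idea rather than the paper's exact coder: transmit only the k low bits of each sample, and let the decoder unwrap them against the previous reconstructed value, which is exact while successive samples differ by less than 2**(k-1).

        # Sketch: modulo-PCM encode/decode for a slowly varying signal.
        import numpy as np

        def mpcm_encode(x, k):
            return x % (1 << k)                    # keep only k low bits

        def mpcm_decode(residues, k, first):
            m = 1 << k
            out = [first]                          # first sample sent in full
            for r in residues[1:]:
                pred = out[-1]
                cand = pred - (pred % m) + r       # same k low bits as r
                # pick the wrap of cand nearest the prediction
                cand += m * int(round((pred - cand) / m))
                out.append(cand)
            return np.array(out)

        x = np.array([100, 103, 99, 110, 121, 118])  # 8-bit pixel over time
        k = 5                                        # transmit 5 of 8 bits
        rec = mpcm_decode(mpcm_encode(x, k), k, x[0])
        print(np.array_equal(rec, x))                # -> True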

  18. Photogrammetry System and Method for Determining Relative Motion Between Two Bodies

    NASA Technical Reports Server (NTRS)

    Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)

    2014-01-01

    A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.

  19. Keyboard before Head Tracking Depresses User Success in Remote Camera Control

    NASA Astrophysics Data System (ADS)

    Zhu, Dingyun; Gedeon, Tom; Taylor, Ken

    In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two handed joystick control to position and fire the jackhammer, leaving the camera control to either automatic control or require the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue, being a half size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and the use of a Pan-Tilt-Zoom (PTZ) camera. The camera control was via either a keyboard or via head tracking using two different sets of head gestures called “head motion” and “head flicking” for turning camera motion on/off. Our results show that the head motion control was able to provide a comparable performance to using a keyboard, while head flicking was significantly worse. In addition, the sequence of use of the three control methods is highly significant. It appears that use of the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected supports that the worst (by performance) method was disliked by participants. Surprisingly, use of that worst method as the first control method significantly enhanced performance using the other two control methods.

  20. Rapid-Response or Repeat-Mode Topography from Aerial Structure from Motion

    NASA Astrophysics Data System (ADS)

    Nissen, E.; Johnson, K. L.; Fitzgerald, F. S.; Morgan, M.; White, J.

    2014-12-01

    This decade has seen a surge of interest in Structure-from-Motion (SfM) as a means of generating high-resolution topography and coregistered texture maps from stereo digital photographs. Using an unstructured set of overlapping photographs captured from multiple viewpoints and minimal GPS ground control, SfM solves simultaneously for scene topography and camera positions, orientations and lens parameters. The use of cheap unmanned aerial vehicles or tethered helium balloons as camera platforms expedites data collection and overcomes many of the cost, time and logistical limitations of LiDAR surveying, making it a potentially valuable tool for rapid response mapping and repeat monitoring applications. We begin this presentation by assessing what data resolutions and precisions are achievable using a simple aerial camera platform and commercial SfM software (we use the popular Agisoft Photoscan package). SfM point clouds generated at two small (~0.1 km2), sparsely-vegetated field sites in California compare favorably with overlapping airborne and terrestrial LiDAR surveys, with closest point distances of a few centimeters between the independent datasets. Next, we go on to explore the method in more challenging conditions, in response to a major landslide in Mesa County, Colorado, on 25th May 2014. Photographs collected from a small UAV were used to generate a high-resolution model of the 4.5 x 1 km landslide several days before an airborne LiDAR survey could be organized and flown. An initial estimate of the mass balance of the landslide could quickly be made by differencing this model against pre-event topography generated using stereo photographs collected in 2009 as part of the National Agricultural Imagery Program (NAIP). This case study therefore demonstrates the rich potential offered by this technique, as well as some of the challenges, particularly with respect to the treatment of vegetation.
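
    As a rough illustration of the SfM core (simultaneous recovery of scene structure and camera pose), the sketch below reconstructs a sparse, scale-free point cloud from just two overlapping photographs with OpenCV; full pipelines such as Photoscan add multi-view bundle adjustment, dense matching, and ground-control georeferencing. The file names and intrinsic matrix are placeholders.

      import cv2
      import numpy as np

      K = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])  # assumed intrinsics

      img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

      # Match SIFT features between the two views (Lowe ratio test)
      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)
      good = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
              if m.distance < 0.7 * n.distance]
      pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
      pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

      # Relative camera pose from the essential matrix, then triangulation
      E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
      _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([R, t])
      X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
      points3d = (X[:3] / X[3]).T   # sparse topography, up to an unknown scale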

  1. Real-time image mosaicing for medical applications.

    PubMed

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.
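
    The key idea, sensor-seeded alignment refined by image matching, can be sketched briefly. This translation-only toy version (the paper measures 5 d.o.f. including pan and tilt) assumes the robot's position sensor supplies a coarse (x, y) placement in mosaic pixels, refined here by a small template-matching search; all names are illustrative.

      import cv2
      import numpy as np

      def place_frame(mosaic, frame, sensor_xy, search=20):
          # Search a small window around the sensor-predicted corner for the
          # best normalized-correlation placement of the new frame.
          # Both images are assumed single-channel uint8.
          x0, y0 = map(int, sensor_xy)
          h, w = frame.shape
          region = mosaic[y0 - search:y0 + h + search, x0 - search:x0 + w + search]
          score = cv2.matchTemplate(region, frame, cv2.TM_CCOEFF_NORMED)
          _, _, _, (bx, by) = cv2.minMaxLoc(score)     # best match location
          x, y = x0 - search + bx, y0 - search + by
          mosaic[y:y + h, x:x + w] = frame
          return mosaic

    Because the sensor prior bounds the search window, each placement is a small, fixed-cost refinement rather than a full-image registration, which is what makes real-time operation and immunity to cumulative drift plausible.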

  2. Power estimation of martial arts movement using 3D motion capture camera

    NASA Astrophysics Data System (ADS)

    Azraai, Nur Zaidi; Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir

    2017-06-01

    Motion capture (MOCAP) cameras have been widely used in many areas such as biomechanics, physiology, animation, and the arts. This project approaches classical mechanics through MOCAP and extends its application to sport. Most researchers use a force plate, but a force plate can only measure the force of impact, whereas we are keen to observe the kinematics of the movement. Martial arts use more than one part of the human body; for this project the martial art `Silat' was chosen because of its wide practice in Malaysia. Two performers were selected, one experienced in `Silat' practice and one with no experience at all, so that we could compare the energy and force generated by each. Every performer executed punches with the same posture; two types of punching move were selected. Before measurement started, a calibration was performed using a T-stick fitted with markers, so that the software knew the area covered by the cameras and analysis error was reduced. A punching bag of mass 60 kg was hung on an iron bar as a target and used to determine the impact force of a performer's punch; optical markers were also attached to the bag so that its movement after impact could be observed. Eight cameras were used, two placed on each side of the wall at different angles in a rectangular room of 270 ft², with the cameras covering approximately 50 ft². We covered only a small area so that less noise would be detected, making the measurement more accurate. Markers were attached along the entire arm that we wanted to observe and measure. The passive markers used in this project reflect the infrared generated by the cameras back to the camera sensors, so that marker positions can be detected and shown in the software. Many cameras were used to increase the precision and accuracy of the marker positions. Performer movement was recorded and analyzed using the Cortex motion analysis software, from which the velocity and acceleration of a performer's movement can be measured. With a classical mechanics approach we estimated the power and force of impact, and the results show that the experienced performer produces more power and a higher impact force than the inexperienced performer.
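
    To make the "classical mechanics approach" concrete: given marker trajectories sampled by the cameras, velocity and acceleration follow from finite differences, and force and power from F = ma and P = F·v. A minimal numpy sketch; the effective mass of the striking arm and the synthetic trajectory are illustrative assumptions.

      import numpy as np

      def force_and_power(positions, times, effective_mass):
          # positions: (N, 3) marker coordinates in metres; times: (N,) seconds
          v = np.gradient(positions, times, axis=0)      # velocity, m/s
          a = np.gradient(v, times, axis=0)              # acceleration, m/s^2
          force = effective_mass * a                     # Newton's second law, N
          power = np.einsum("ij,ij->i", force, v)        # instantaneous F . v, W
          return force, power

      # Synthetic straight punch: constant 9 m/s^2 acceleration along x for 0.25 s
      t = np.linspace(0.0, 0.25, 61)
      pos = np.column_stack([0.5 * 9.0 * t**2, np.zeros_like(t), np.zeros_like(t)])
      F, P = force_and_power(pos, t, effective_mass=3.5)   # assumed arm mass, kg
      print(F.max(), P.max())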

  3. TH-AB-202-11: Spatial and Rotational Quality Assurance of 6DOF Patient Tracking Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belcher, AH; Liu, X; Grelewicz, Z

    2016-06-15

    Purpose: External tracking systems used for patient positioning and motion monitoring during radiotherapy are now capable of detecting both translations and rotations (6DOF). In this work, we develop a novel technique to evaluate the 6DOF performance of external motion tracking systems. We apply this methodology to an infrared (IR) marker tracking system and two 3D optical surface mapping systems in a common tumor 6DOF workspace. Methods: An in-house designed and built 6DOF parallel kinematics robotic motion phantom was used to follow input trajectories with sub-millimeter and sub-degree accuracy. The 6DOF positions of the robotic system were then tracked and recorded independently by three optical camera systems. A calibration methodology which associates the motion phantom and camera coordinate frames was first employed, followed by a comprehensive 6DOF trajectory evaluation, which spanned a full range of positions and orientations in a 20×20×16 mm and 5×5×5 degree workspace. The intended input motions were compared to the calibrated 6DOF measured points. Results: The technique found the accuracy of the IR marker tracking system to have maximal root mean square error (RMSE) values of 0.25 mm translationally and 0.09 degrees rotationally, in any one axis, comparing intended 6DOF positions to positions measured by the IR camera. The 6DOF RMSE discrepancy for the first 3D optical surface tracking unit yielded maximal values of 0.60 mm and 0.11 degrees over the same 6DOF volume. An earlier generation 3D optical surface tracker was observed to have worse tracking capabilities than both the IR camera unit and the newer 3D surface tracking system, with maximal RMSE of 0.74 mm and 0.28 degrees within the same 6DOF evaluation space. Conclusion: The proposed technique was effective at evaluating the performance of 6DOF patient tracking systems. All systems examined exhibited tracking capabilities at the sub-millimeter and sub-degree level within a 6DOF workspace.

  4. Large Scale Structure From Motion for Autonomous Underwater Vehicle Surveys

    DTIC Science & Technology

    2004-09-01

    Govern the Formation of Multiple Images of a Scene and Some of Their Applications. MIT Press, 2001. [26] O. Faugeras and S. Maybank. Motion from point... Machine Vision Conference, volume 1, pages 384-393, September 2002. [69] S. Maybank and O. Faugeras. A theory of self-calibration of a moving camera

  5. Settling dynamics of asymmetric rigid fibers

    Treesearch

    E.J. Tozzi; C Tim Scott; David Vahey; D.J. Klingenberg

    2011-01-01

    The three-dimensional motion of asymmetric rigid fibers settling under gravity in a quiescent fluid was experimentally measured using a pair of cameras located on a movable platform. The particle motion typically consisted of an initial transient after which the particle approached a steady rate of rotation about an axis parallel to the acceleration of gravity, with...

  6. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness, and naturalness. This paper describes the design of a compact, wearable, low-power HCI device for gesture recognition. The system combines multi-modal sensing signals, a vision signal and a motion signal, and the device is equipped with a depth camera and a motion sensor. Its dimensions (40 mm × 30 mm) and structure are compact and portable after tight integration. The system is built on a layered module framework, which enables real-time collection (60 fps), processing, and transmission via synchronous fusion with asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize the device's energy consumption, the system uses low-power components, manages peripheral states dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes algorithms with the motion sensor. To test the device's function and performance, a gesture recognition algorithm was run on the system. The results show that overall energy consumption can be as low as 0.5 W.

  7. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, together with convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and the potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the acquired spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes two to four machine vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for highly accurate 3D measurements. A set of algorithms, both for detecting, identifying, and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  8. Camera pose estimation for augmented reality in a small indoor dynamic scene

    NASA Astrophysics Data System (ADS)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand, and improves the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process on the other. We propose to exploit the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.

  9. Calculation for simulation of archery goal value using a web camera and ultrasonic sensor

    NASA Astrophysics Data System (ADS)

    Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti

    2017-08-01

    Development of a digital indoor archery simulator based on embedded systems offers a solution to the limited availability of adequate fields or open space, especially in big cities. Developing the device requires a simulation that calculates the value scored on the target, based on a parabolic-motion model parameterized by the arrow's initial velocity and direction. The simulator device should therefore be complemented with an initial-velocity measuring device using ultrasonic sensors and a direction-measuring device using a digital camera. The methodology follows a research-and-development approach to application software, using modeling and simulation. The research objective is to create a simulation application that calculates the value scored by the arrows, as a preliminary stage for development of the archery simulator device. Implementing the score calculation in an application program produces an archery simulation game that can serve as a reference for developing a digital indoor archery simulator with embedded systems, ultrasonic sensors, and web cameras. The application was developed by comparing the simulated impact radius against the target circles recorded by a camera from a distance of three meters.
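
    The parabolic-motion scoring step described above reduces to a few lines of kinematics: propagate the arrow from its measured launch speed and direction to the target plane, then map the radial miss distance to a ring score. A sketch under assumed geometry (18 m target distance, 6 cm ring width); all names and numbers are illustrative.

      import math

      def impact_offsets(v0, elev_deg, azim_deg, target_dist, g=9.81):
          # Offsets (sideways, vertical) where the arrow crosses the target plane
          elev, azim = math.radians(elev_deg), math.radians(azim_deg)
          vx = v0 * math.cos(elev) * math.cos(azim)   # toward the target
          vy = v0 * math.cos(elev) * math.sin(azim)   # sideways
          vz = v0 * math.sin(elev)                    # upward
          t = target_dist / vx                        # flight time to the plane
          return vy * t, vz * t - 0.5 * g * t * t

      def ring_score(radius, ring_width=0.06):
          # Concentric-ring scoring: 10 in the centre, decreasing outward
          return max(0, 10 - int(radius // ring_width))

      y, z = impact_offsets(v0=50.0, elev_deg=1.2, azim_deg=0.3, target_dist=18.0)
      print(ring_score(math.hypot(y, z)))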

  10. Using Wide-Field Meteor Cameras to Actively Engage Students in Science

    NASA Astrophysics Data System (ADS)

    Kuehn, D. M.; Scales, J. N.

    2012-08-01

    Astronomy has always afforded teachers an excellent topic to develop students' interest in science. New technology allows the opportunity to inexpensively outfit local school districts with sensitive, wide-field video cameras that can detect and track brighter meteors and other objects. While the data-collection and analysis process can be mostly automated by software, there is substantial human involvement that is necessary in the rejection of spurious detections, in performing dynamics and orbital calculations, and the rare recovery and analysis of fallen meteorites. The continuous monitoring allowed by dedicated wide-field surveillance cameras can provide students with a better understanding of the behavior of the night sky including meteors and meteor showers, stellar motion, the motion of the Sun, Moon, and planets, phases of the Moon, meteorological phenomena, etc. Additionally, some students intrigued by the possibility of UFOs and "alien visitors" may find that actual monitoring data can help them develop methods for identifying "unknown" objects. We currently have two ultra-low light-level surveillance cameras coupled to fish-eye lenses that are actively obtaining data. We have developed curricula suitable for middle or high school students in astronomy and earth science courses and are in the process of testing and revising our materials.

  11. A novel optical investigation technique for railroad track inspection and assessment

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Beale, Christopher H.; Niezrecki, Christopher

    2017-04-01

    Track failures due to cross-tie degradation or loss of ballast support may result in problems ranging from simple service interruptions to derailments. Structural Health Monitoring (SHM) of railway track is important for safety reasons and to reduce downtime and maintenance costs. Current track inspection technologies are insufficient, however, so novel and cost-effective techniques for assessing track health are needed. Advances achieved in recent years in camera technology, optical sensors, and image-processing algorithms have made machine vision, Structure from Motion (SfM), and three-dimensional (3D) Digital Image Correlation (DIC) systems extremely appealing techniques for extracting structural deformations and geometry profiles. Optically based, non-contact measurement techniques may therefore be used for assessing surface defects, rail and tie deflection profiles, and ballast condition. In this study, the design of two camera-based measurement systems is proposed for cross-tie/ballast condition assessment and track examination. The first consists of four pairs of cameras installed on the underside of a rail car to detect the induced deformation and displacement along the whole length of the track's cross-ties using 3D DIC measurement techniques. The second consists of another set of cameras using SfM techniques to obtain a 3D rendering of the infrastructure from a series of two-dimensional (2D) images, in order to evaluate the state of the track qualitatively. The feasibility of the proposed optical systems is evaluated through extensive laboratory tests, demonstrating their ability to measure parameters of interest (e.g., a cross-tie's full-field displacement, vertical deflection, shape, etc.) for assessment and SHM of railroad track.

  12. Real-time Awake Animal Motion Tracking System for SPECT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon

    Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur in each set of images. Using the three cameras, the system automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in the methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to scanner operation. The system has undergone testing with both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.

  13. SarcOptiM for ImageJ: high-frequency online sarcomere length computing on stimulated cardiomyocytes.

    PubMed

    Pasqualin, Côme; Gannier, François; Yu, Angèle; Malécot, Claire O; Bredeloux, Pierre; Maupoil, Véronique

    2016-08-01

    Accurate measurement of cardiomyocyte contraction is a critical issue for scientists working on cardiac physiology and physiopathology of diseases implying contraction impairment. Cardiomyocytes contraction can be quantified by measuring sarcomere length, but few tools are available for this, and none is freely distributed. We developed a plug-in (SarcOptiM) for the ImageJ/Fiji image analysis platform developed by the National Institutes of Health. SarcOptiM computes sarcomere length via fast Fourier transform analysis of video frames captured or displayed in ImageJ and thus is not tied to a dedicated video camera. It can work in real time or offline, the latter overcoming rotating motion or displacement-related artifacts. SarcOptiM includes a simulator and video generator of cardiomyocyte contraction. Acquisition parameters, such as pixel size and camera frame rate, were tested with both experimental recordings of rat ventricular cardiomyocytes and synthetic videos. It is freely distributed, and its source code is available. It works under Windows, Mac, or Linux operating systems. The camera speed is the limiting factor, since the algorithm can compute online sarcomere shortening at frame rates >10 kHz. In conclusion, SarcOptiM is a free and validated user-friendly tool for studying cardiomyocyte contraction in all species, including human. Copyright © 2016 the American Physiological Society.
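
    The FFT step at the heart of such a tool is small: the sarcomere length is the reciprocal of the dominant spatial frequency of a striation intensity profile. A generic sketch of that idea (not SarcOptiM's actual code) using numpy:

      import numpy as np

      def sarcomere_length_um(profile, pixel_size_um):
          # Dominant spatial period of a 1-D intensity profile along the cell axis
          spectrum = np.abs(np.fft.rfft(profile - np.mean(profile)))
          freqs = np.fft.rfftfreq(len(profile), d=pixel_size_um)  # cycles/um
          peak = freqs[np.argmax(spectrum[1:]) + 1]               # skip the DC bin
          return 1.0 / peak

      # Synthetic striations: 1.9 um period sampled at 0.1 um/pixel
      x = np.arange(256) * 0.1
      profile = np.sin(2 * np.pi * x / 1.9)
      print(sarcomere_length_um(profile, 0.1))   # ~1.97 (limited by bin spacing)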

  14. Automation of workplace lifting hazard assessment for musculoskeletal injury prevention.

    PubMed

    Spector, June T; Lieblich, Max; Bao, Stephen; McQuade, Kevin; Hughes, Margaret

    2014-01-01

    Existing methods for practically evaluating musculoskeletal exposures such as posture and repetition in workplace settings have limitations. We aimed to automate the estimation of parameters in the revised United States National Institute for Occupational Safety and Health (NIOSH) lifting equation, a standard manual observational tool used to evaluate back injury risk related to lifting in workplace settings, using depth camera (Microsoft Kinect) and skeleton algorithm technology. A large dataset (approximately 22,000 frames, derived from six subjects) of simultaneous lifting and other motions recorded in a laboratory setting using the Kinect (Microsoft Corporation, Redmond, Washington, United States) and a standard optical motion capture system (Qualysis, Qualysis Motion Capture Systems, Qualysis AB, Sweden) was assembled. Error-correction regression models were developed to improve the accuracy of NIOSH lifting equation parameters estimated from the Kinect skeleton. Kinect-Qualysis errors were modelled using gradient boosted regression trees with a Huber loss function. Models were trained on data from all but one subject and tested on the excluded subject. Finally, models were tested on three lifting trials performed by subjects not involved in the generation of the model-building dataset. Error-correction appears to produce estimates for NIOSH lifting equation parameters that are more accurate than those derived from the Microsoft Kinect algorithm alone. Our error-correction models substantially decreased the variance of parameter errors. In general, the Kinect underestimated parameters, and modelling reduced this bias, particularly for more biased estimates. Use of the raw Kinect skeleton model tended to result in falsely high safe recommended weight limits of loads, whereas error-corrected models gave more conservative, protective estimates. Our results suggest that it may be possible to produce reasonable estimates of posture and temporal elements of tasks such as task frequency in an automated fashion, although these findings should be confirmed in a larger study. Further work is needed to incorporate force assessments and address workplace feasibility challenges. We anticipate that this approach could ultimately be used to perform large-scale musculoskeletal exposure assessment not only for research but also to provide real-time feedback to workers and employers during work method improvement activities and employee training.
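
    The error-correction idea generalizes readily: learn the depth-camera error as a function of its own features, then subtract the predicted error. A toy sketch with scikit-learn's gradient-boosted trees and a Huber loss, as named above; the arrays stand in for the study's Kinect/reference data and are purely synthetic.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 6))                  # per-frame skeleton features
      y_ref = 0.8 * X[:, 0] + 0.3                     # reference parameter (stand-in)
      y_kinect = y_ref + 0.2 + rng.normal(0, 0.1, 2000)   # biased, noisy estimate

      model = GradientBoostingRegressor(loss="huber")
      model.fit(X, y_kinect - y_ref)                  # model the error itself
      y_corrected = y_kinect - model.predict(X)
      print(np.abs(y_kinect - y_ref).mean(), np.abs(y_corrected - y_ref).mean())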

  15. Novel health monitoring method using an RGB camera.

    PubMed

    Hassan, M A; Malik, A S; Fofi, D; Saad, N; Meriaudeau, F

    2017-11-01

    In this paper we present a novel health monitoring method that estimates the heart rate and respiratory rate using an RGB camera. The heart rate and respiratory rate are estimated from the photoplethysmography (PPG) signal and the respiratory motion. The method operates mainly on the green channel of the RGB camera: it generates a multivariate PPG signal, performs multivariate de-noising on the video signal, and extracts the resultant PPG signal. A periodicity-based voting scheme (PVS) was used to measure the heart rate and respiratory rate from the estimated PPG signal. We evaluated our proposed method against a state-of-the-art heart rate measuring method in two scenarios using the MAHNOB-HCI database and a self-collected naturalistic-environment database. The methods were furthermore evaluated in various naturalistic-environment scenarios, such as a motion-variance session and a skin-tone-variance session. Our proposed method operated robustly during the experiments and outperformed the state-of-the-art heart rate measuring methods by compensating for the effects of the naturalistic environment.
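
    A stripped-down version of the pulse-extraction chain (green-channel trace in, beats per minute out) can be written in a few lines; this replaces the paper's multivariate de-noising and voting scheme with a single FFT peak inside a plausible cardiac band, so it is a baseline sketch only.

      import numpy as np

      def heart_rate_bpm(green_means, fps, lo_hz=0.7, hi_hz=3.0):
          # Dominant frequency of the mean green-channel trace,
          # restricted to the 0.7-3 Hz (42-180 bpm) cardiac band
          x = green_means - np.mean(green_means)
          spectrum = np.abs(np.fft.rfft(x))
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
          band = (freqs >= lo_hz) & (freqs <= hi_hz)
          return 60.0 * freqs[band][np.argmax(spectrum[band])]

      # Synthetic 30 s trace at 30 fps with a 72 bpm pulse buried in noise
      t = np.arange(900) / 30.0
      trace = 0.02 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.01, t.size)
      print(heart_rate_bpm(trace, fps=30))   # ~72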

  16. Optical Indoor Positioning System Based on TFT Technology

    PubMed Central

    Gőzse, István

    2015-01-01

    A novel indoor positioning system is presented in the paper. Similarly to the camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: Apart from its low computational demands, it is insensitive to the disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low. PMID:26712753

  17. Flexcam Image Capture Viewing and Spot Tracking

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2008-01-01

    Flexcam software was designed to allow continuous monitoring of the mechanical deformation of the telescope structure at Palomar Observatory. Flexcam allows the user to watch the motion of a star with a low-cost astronomical camera, to measure the motion of the star on the image plane, and to feed these data back into the telescope's control system. This automatic interaction between the camera and a user interface facilitates integration and testing. Flexcam is a CCD image capture and analysis tool for the ST-402 camera from Santa Barbara Instruments Group (SBIG). This program will automatically take a dark exposure and then continuously display corrected images. The image size, bit depth, magnification, exposure time, resolution, and filter are always displayed on the title bar. Flexcam locates the brightest pixel and then computes the centroid position of the pixels falling in a box around that pixel. This tool continuously writes the centroid position to a network file that can be used by other instruments.
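
    The two-step spot measurement described (brightest pixel, then a windowed centroid) is a standard recipe; a generic numpy sketch, not the Flexcam source:

      import numpy as np

      def spot_centroid(image, box=15):
          # Locate the brightest pixel, then take the intensity-weighted
          # centroid of a box of pixels around it.
          iy, ix = np.unravel_index(np.argmax(image), image.shape)
          h = box // 2
          y0, x0 = max(iy - h, 0), max(ix - h, 0)
          win = image[y0:iy + h + 1, x0:ix + h + 1].astype(float)
          ys, xs = np.mgrid[0:win.shape[0], 0:win.shape[1]]
          total = win.sum()
          return (ys * win).sum() / total + y0, (xs * win).sum() / total + x0

      frame = np.zeros((100, 100))
      frame[40:43, 60:63] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]   # synthetic star
      print(spot_centroid(frame))   # (41.0, 61.0)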

  18. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple imager array and align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficient length time windows.

  19. Monitoring lava-dome growth during the 2004-2008 Mount St. Helens, Washington, eruption using oblique terrestrial photography

    USGS Publications Warehouse

    Major, J.J.; Dzurisin, D.; Schilling, S.P.; Poland, Michael P.

    2009-01-01

    We present an analysis of lava dome growth during the 2004–2008 eruption of Mount St. Helens using oblique terrestrial images from a network of remotely placed cameras. This underutilized monitoring tool augmented more traditional monitoring techniques, and was used to provide a robust assessment of the nature, pace, and state of the eruption and to quantify the kinematics of dome growth. Eruption monitoring using terrestrial photography began with a single camera deployed at the mouth of the volcano's crater during the first year of activity. Analysis of those images indicates that the average lineal extrusion rate decayed approximately logarithmically from about 8 m/d to about 2 m/d (± 2 m/d) from November 2004 through December 2005, and suggests that the extrusion rate fluctuated on time scales of days to weeks. From May 2006 through September 2007, imagery from multiple cameras deployed around the volcano allowed determination of 3-dimensional motion across the dome complex. Analysis of the multi-camera imagery shows spatially differential, but remarkably steady to gradually slowing, motion, from about 1–2 m/d from May through October 2006, to about 0.2–1.0 m/d from May through September 2007. In contrast to the fluctuations in lineal extrusion rate documented during the first year of eruption, dome motion from May 2006 through September 2007 was monotonic (± 0.10 m/d) to gradually slowing on time scales of weeks to months. The ability to measure spatial and temporal rates of motion of the effusing lava dome from oblique terrestrial photographs provided a significant, and sometimes the sole, means of identifying and quantifying dome growth during the eruption, and it demonstrates the utility of using frequent, long-term terrestrial photography to monitor and study volcanic eruptions.

  20. Hybrid Systems Diagnosis

    NASA Technical Reports Server (NTRS)

    McIlraith, Sheila; Biswas, Gautam; Clancy, Dan; Gupta, Vineet

    2005-01-01

    This paper reports on an ongoing project to investigate techniques to diagnose complex dynamical systems that are modeled as hybrid systems. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial, or full failure of component devices. We cast the diagnosis problem as a model selection problem. To reduce the space of potential models under consideration, we exploit techniques from qualitative reasoning to conjecture an initial set of qualitative candidate diagnoses, which induce a smaller set of models. We refine these diagnoses using parameter estimation and model fitting techniques. As a motivating case study, we have examined the problem of diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.

  1. PIV/HPIV Film Analysis Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV/HPIV film analysis software system was developed that calculates the 2-dimensional spatial autocorrelations of subregions of Particle Image Velocimetry (PIV) or Holographic Particle Image Velocimetry (HPIV) film recordings. The software controls three hardware subsystems including (1) a Kodak Megaplus 1.4 camera and EPIX 4MEG framegrabber subsystem, (2) an IEEE/Unidex 11 precision motion control subsystem, and (3) an Alacron I860 array processor subsystem. The software runs on an IBM PC/AT host computer running either the Microsoft Windows 3.1 or Windows 95 operating system. It is capable of processing five PIV or HPIV displacement vectors per second, and is completely automated with the exception of user input to a configuration file prior to analysis execution for update of various system parameters.
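
    The core computation named above, a 2-D spatial autocorrelation per interrogation window, is conveniently done in the frequency domain (Wiener-Khinchin theorem). A generic numpy sketch, independent of the original PC/array-processor implementation:

      import numpy as np

      def autocorr2d(window):
          # Autocorrelation = inverse FFT of the power spectrum
          w = window - window.mean()
          ac = np.fft.ifft2(np.abs(np.fft.fft2(w)) ** 2).real
          return np.fft.fftshift(ac)   # zero displacement moved to the centre

      # For a double-exposure PIV window, the strongest off-centre peak of
      # autocorr2d(window) sits at the mean particle displacement; dividing by
      # the exposure interval gives the local velocity vector.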

  2. Principal axis-based correspondence between multiple cameras for people tracking.

    PubMed

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

    Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems which visual surveillance using multiple cameras brings. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.

  3. Cinematic camera emulation using two-dimensional color transforms

    NASA Astrophysics Data System (ADS)

    McElvain, Jon S.; Gish, Walter

    2015-02-01

    For cinematic and episodic productions, on-set look management is an important component of the creative process, and involves iterative adjustments of the set, actors, lighting, and camera configuration. Instead of using the professional motion-picture camera to establish a particular look, the use of a smaller-form-factor DSLR is considered for this purpose due to its increased agility. Because the spectral response characteristics differ between the two camera systems, a camera emulation transform is needed to approximate the behavior of the destination camera. Recently, two-dimensional transforms have been shown to provide high-accuracy conversion of raw camera signals to a defined colorimetric state. In this study, the same formalism is used for camera emulation, whereby a Canon 5D Mark III DSLR is used to approximate the behavior of a Red Epic cinematic camera. The spectral response characteristics of both cameras were measured and used to build 2D as well as 3x3 matrix emulation transforms. When tested on multispectral image databases, the 2D emulation transforms outperform their matrix counterparts, particularly for images containing highly chromatic content.
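
    For contrast with the 2D transforms studied above, the 3x3 matrix baseline is a one-line least-squares fit between the two cameras' responses to a set of training patches. A hedged sketch; the patch arrays are assumed to be (N, 3) linear raw RGB measurements of the same scene.

      import numpy as np

      def fit_matrix_transform(src_rgb, dst_rgb):
          # Solve src @ M ~ dst in the least-squares sense
          M, _, _, _ = np.linalg.lstsq(src_rgb, dst_rgb, rcond=None)
          return M

      # emulated = source_pixels @ fit_matrix_transform(src_patches, dst_patches)
      # A 2D transform instead varies the mapping with, e.g., chromaticity,
      # which is what yields the higher accuracy reported above.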

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaffney, Kelly

    Movies have transformed our perception of the world. With slow motion photography, we can see a hummingbird flap its wings, and a bullet pierce an apple. The remarkably small and extremely fast molecular world that determines how your body functions cannot be captured with even the most sophisticated movie camera today. To see chemistry in real time requires a camera capable of seeing molecules that are one ten-billionth of a foot across with a frame rate of 10 trillion frames per second! SLAC has embarked on the construction of just such a camera. Please join me as I discuss how this molecular movie camera will work and how it will change our perception of the molecular world.

  5. The MicronEye Motion Monitor: A New Tool for Class and Laboratory Demonstrations.

    ERIC Educational Resources Information Center

    Nissan, M.; And Others

    1988-01-01

    Describes a special camera that can be directly linked to a computer that has been adapted for studying movement. Discusses capture, processing, and analysis of two-dimensional data with either IBM PC or Apple II computers. Gives examples of a variety of mechanical tests including pendulum motion, air track, and air table. (CW)

  6. An anti-disturbing real time pose estimation method and system

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Zhang, Xiao-hu

    2011-08-01

    Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer from feature loss. This paper investigates pose estimation when some, or even all, of the known features are invisible. First, known features are tracked to calculate the pose in the current and the next image. Second, unknown but good features to track are automatically detected in the current and the next image. Third, those unknown features which lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of the rigid object, the 3D information of those unknown features on the object can be solved from the object's pose at the two moments and the features' 2D locations in the two images, except in only two cases: first, when camera and object have no relative motion and camera parameters such as focal length and principal point do not change between the two moments; second, when there is no shared scene or no matched feature between the two images. Finally, because the previously unknown features are now known, pose estimation can continue in the following images despite the loss of the original known features, by repeating the process described above. The robustness of pose estimation with different feature detection algorithms, such as Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT), and Speeded Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is also used to extract and match hundreds of features for real-time pose estimation, which is hard to achieve on a Central Processing Unit (CPU). Compared with other pose estimation methods, this new method can estimate the pose between camera and object when some or even all known features are lost, and has a quick response time thanks to GPU parallel computing. The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in autonomous navigation and positioning, and in robotics in unknown environments. Simulation and experimental results demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and achieves real-time performance. Theoretical analysis and experiments show the method is reasonable and efficient.

  7. Observation of Planetary Motion Using a Digital Camera

    ERIC Educational Resources Information Center

    Meyn, Jan-Peter

    2008-01-01

    A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to apparent magnitude 8. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…

  8. Automated Quantification of the Landing Error Scoring System With a Markerless Motion-Capture System.

    PubMed

    Mauntel, Timothy C; Padua, Darin A; Stanley, Laura E; Frank, Barnett S; DiStefano, Lindsay J; Peck, Karen Y; Cameron, Kenneth L; Marshall, Stephen W

    2017-11-01

      The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle.   To determine the reliability of an automated markerless motion-capture system for scoring the LESS.   Cross-sectional study.   United States Military Academy.   A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg).   Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score.   We observed reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability with consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons.   A markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use the markerless motion-capture system to reliably score the LESS without being limited by the time requirements of manual LESS scoring.
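
    The agreement statistics used above are easy to reproduce for a single binary LESS item. A small sketch (the ratings are illustrative, not the study's data):

      import numpy as np

      def kappa_pabak(rater_a, rater_b):
          # Cohen's kappa plus prevalence- and bias-adjusted kappa (PABAK)
          a, b = np.asarray(rater_a), np.asarray(rater_b)
          po = np.mean(a == b)                          # observed agreement
          pa, pb = a.mean(), b.mean()                   # marginal error rates
          pe = pa * pb + (1 - pa) * (1 - pb)            # chance agreement
          return (po - pe) / (1 - pe), 2 * po - 1, po   # kappa, PABAK, % agree

      expert = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]           # error present/absent
      system = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
      print(kappa_pabak(expert, system))                # (0.8, 0.8, 0.9)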

  9. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; de Vries, Sjoerd C.

    2010-10-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, observers judged the orientation of the triangles in order to determine VA and CS thresholds at the 75% correct level. Three camera velocities (0, 1.0, and 2.0 deg/s, or 0, 4.1, and 8.1 pixels/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s, and 1 Mb/s) were used. VA is shown to be rather robust to any combination of motion and compression. CS, however, dramatically decreases when motion is combined with high compression ratios. The measured thresholds were fed into the TOD target acquisition model to predict the effect of motion and compression on acquisition ranges for tactical military vehicles. The effect of compression on static performance is limited but strong with motion video. The data suggest that with the MPEG2 algorithm, the emphasis is on the preservation of image detail at the cost of contrast loss.

  10. An improved multi-paths optimization method for video stabilization

    NASA Astrophysics Data System (ADS)

    Qin, Tao; Zhong, Sheng

    2018-03-01

    For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one while keeping the cropping ratio and warping ratio of each frame in a proper range. In this paper we use an improved warping-based motion representation model, and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform between adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax: we calculate the space-time correlation of adjacent grid cells, and a Gaussian kernel is used to weight the motion of adjacent grid cells. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos, which exhibit casual jitter and parallax, and achieve good results.
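
    The Gaussian weighting at the centre of such methods can be illustrated on a single 1-D parameter track. This sketch smooths a jittery camera path with a normalized Gaussian kernel; the real optimizer additionally enforces the cropping- and warping-ratio constraints discussed above.

      import numpy as np

      def smooth_path(path, sigma=10, radius=30):
          # Gaussian-weighted average of each frame's neighbourhood
          k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
          k /= k.sum()
          padded = np.pad(path, radius, mode="edge")
          return np.convolve(padded, k, mode="valid")

      shaky = np.cumsum(np.random.normal(0, 1, 300))   # jittery accumulated motion
      steady = smooth_path(shaky)                      # keeps the moving tendency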

  11. Gait in adolescent idiopathic scoliosis: kinematics and electromyographic analysis.

    PubMed

    Mahaudens, P; Banse, X; Mousny, M; Detrembleur, C

    2009-04-01

    Adolescent idiopathic scoliosis (AIS) is a progressive growth disease that affects spinal anatomy, mobility, and left-right trunk symmetry. Consequently, AIS can modify human locomotion. Very few studies have investigated a simple activity like walking in a cohort of well-defined untreated patients with scoliosis. The first goal of this study is to evaluate the effects of scoliosis and scoliosis severity on kinematic and electromyographic (EMG) gait variables compared to an able-bodied population. The second goal is to look for any asymmetry in these parameters during walking. Thirteen healthy girls and 41 females with untreated AIS, with left thoracolumbar or lumbar primary structural curves were assessed. AIS patients were divided into three clinical subgroups (group 1 < 20 degrees, group 2 between 20 and 40 degrees, and group 3 > 40 degrees). Gait analysis included synchronous bilateral kinematic and EMG measurements. The subjects walked on a treadmill at 4 km/h (comfortable speed). The tridimensional (3D) shoulder, pelvis, and lower limb motions were measured using 22 reflective markers tracked by four infrared cameras. The EMG timing activity was measured using bipolar surface electrodes on quadratus lumborum, erector spinae, gluteus medius, rectus femoris, semitendinosus, tibialis anterior, and gastrocnemius muscles. Statistical comparisons (ANOVA) were performed across groups and sides for kinematic and EMG parameters. The step length was reduced in AIS compared to normal subjects (7% less). Frontal shoulder, pelvis, and hip motion and transversal hip motion were reduced in scoliosis patients (respectively, 21, 27, 28, and 22% less). The EMG recording during walking showed that the quadratus lumborum, erector spinae, gluteus medius, and semitendinosus muscles contracted during a longer part of the stride in scoliotic patients (46% of the stride) compared with normal subjects (35% of the stride). There was no significant difference between scoliosis groups 1, 2, and 3 for any of the kinematic and EMG parameters, meaning that severe scoliosis was not associated with increased differences in gait parameters compared to mild scoliosis. Scoliosis was not associated with any kinematic or EMG left-right asymmetry. In conclusion, scoliosis patients showed significant but slight modifications in gait, even in cases of mild scoliosis. With the naked eye, one could not see any difference from controls, but with powerful gait analysis technology, the pelvic frontal motion (right-left tilting) was reduced, as was the motion in the hips and shoulder. Surprisingly, no asymmetry was noted but the spine seemed dynamically stiffened by the longer contraction time of major spinal and pelvic muscles. Further studies are needed to evaluate the origin and consequences of these observations.

  12. Effects of 8 weeks of mat-based Pilates exercise on gait in chronic stroke patients.

    PubMed

    Roh, SuYeon; Gil, Ho Jong; Yoon, Sukhoon

    2016-09-01

    [Purpose] The purpose of this study was to investigate the effects of an 8-week program of Pilates exercise on gait in chronic hemiplegia patients and to determine whether or not it can be used for rehabilitation in poststroke patients. [Subjects and Methods] Twenty individuals with unilateral chronic hemiparetic stroke (age, 66.1 ± 4.4 yrs; height, 162.3 ± 8.3 cm; weight, 67.4 ± 12.3 kg) participated in this study and were randomly allocated equally to either a Pilates exercise group or a control group. To identify the effects of Pilates exercise, a 3-D motion analysis with 8 infrared cameras was performed. [Results] For the gait parameters, improvements were found in the Pilates exercise group for all variables, and statistical significance was observed for stride length, gait velocity, knee range of motion and hip range of motion. For the asymmetry indexes, insignificant improvements were found for all variables in the Pilates exercise group. [Conclusion] In conclusion, an 8-week program of Pilates exercise had a positive influence on improving the gait ability of poststroke patients, and the intervention could be applied to poststroke patients with various levels of physical disability by adjusting the intensity of training.

  13. Effects of 8 weeks of mat-based Pilates exercise on gait in chronic stroke patients

    PubMed Central

    Roh, SuYeon; Gil, Ho Jong; Yoon, Sukhoon

    2016-01-01

    [Purpose] The purpose of this study was to investigate the effects of an 8-week program of Pilates exercise on gait in chronic hemiplegia patients and to determine whether or not it can be used for rehabilitation in poststroke patients. [Subjects and Methods] Twenty individuals with unilateral chronic hemiparetic stroke (age, 66.1 ± 4.4 yrs; height, 162.3 ± 8.3 cm; weight, 67.4 ± 12.3 kg) participated in this study and were randomly allocated equally to either a Pilates exercise group or a control group. To identify the effects of Pilates exercise, a 3-D motion analysis with 8 infrared cameras was performed. [Results] For the gait parameters, improvements were found in the Pilates exercise group for all variables, and statistical significance was observed for stride length, gait velocity, knee range of motion and hip range of motion. For the asymmetry indexes, insignificant improvements were found for all variables in the Pilates exercise group. [Conclusion] In conclusion, an 8-week program of Pilates exercise had a positive influence on improving the gait ability of poststroke patients, and the intervention could be applied to poststroke patients with various levels of physical disability by adjusting the intensity of training. PMID:27799706

  14. Towards automated assistance for operating home medical devices.

    PubMed

    Gao, Zan; Detyniecki, Marcin; Chen, Ming-Yu; Wu, Wen; Hauptmann, Alexander G; Wactlar, Howard D

    2010-01-01

    To detect errors when subjects operate a home medical device, we observe them with multiple cameras and perform action recognition with a robust approach based on explicitly encoded motion information. The algorithm detects interest points and encodes not only their local appearance but also explicitly models local motion. Our goal is to recognize individual human actions during the operation of a home medical device, to determine whether the patient has correctly performed the required actions in the prescribed sequence. Using a specific infusion pump as a test case, requiring 22 operation steps from 6 action classes, our best classifier selects high-likelihood action estimates from the 4 available cameras, obtaining an average class recognition rate of 69%.

  15. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
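
    The multicamera fusion step reduces, in its simplest form, to sequential Kalman updates in which each camera provides a direct observation of the 6-D pose with its own noise covariance. A sketch of that update only (the full filter also needs a motion-model prediction step); the matrices and measurements are illustrative.

      import numpy as np

      def fuse_cameras(x, P, measurements, covariances):
          # One identity-observation Kalman update per camera
          for z, R in zip(measurements, covariances):
              K = P @ np.linalg.inv(P + R)      # Kalman gain (observation H = I)
              x = x + K @ (z - x)               # pull the state toward the camera
              P = (np.eye(len(x)) - K) @ P      # shrink the uncertainty
          return x, P

      x0, P0 = np.zeros(6), np.eye(6)           # [tx, ty, tz, roll, pitch, yaw]
      cams = [x0 + 0.10, x0 - 0.05]             # two noisy pose measurements
      x, P = fuse_cameras(x0, P0, cams, [0.2 * np.eye(6), 0.3 * np.eye(6)])

    Down-weighting a camera simply means giving it a larger covariance, which is how such schemes stay robust to occlusion or defects in any single sensor.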

  16. The new camera calibration system at the US Geological Survey

    USGS Publications Warehouse

    Light, D.L.

    1992-01-01

    Modern computerized photogrammetric instruments are capable of utilizing both radial and decentering camera calibration parameters which can increase plotting accuracy over that of older analog instrumentation technology from previous decades. Also, recent design improvements in aerial cameras have minimized distortions and increased the resolving power of camera systems, which should improve the performance of the overall photogrammetric process. In concert with these improvements, the Geological Survey has adopted the rigorous mathematical model for camera calibration developed by Duane Brown. An explanation of the Geological Survey's calibration facility and the additional calibration parameters now being provided in the USGS calibration certificate are reviewed. -Author
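
    Brown's model referred to above combines polynomial radial terms with decentering (tangential) terms. A sketch of the forward mapping on normalized image coordinates (principal point at the origin); coefficient values would come from a calibration certificate.

      import numpy as np

      def brown_distortion(x, y, k1, k2, k3, p1, p2):
          # Radial (k1..k3) plus decentering (p1, p2) distortion
          r2 = x * x + y * y
          radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
          xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
          yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
          return xd, yd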

  17. Multi-Angle Snowflake Camera Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuefer, Martin; Bailey, J.

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10º angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  18. Minimum Requirements for Taxicab Security Cameras.

    PubMed

    Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene

    2014-07-01

    The homicide rate in the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab security cameras supporting effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various light and cab-seat conditions. Thirteen volunteer photograph evaluators assessed these face photographs and voted on the minimum technical requirements for taxicab security cameras. Five worst-case-scenario photographic image quality thresholds were suggested: a resolution of XGA format, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, a lens distortion of 30%, and a shutter speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help taxicab security camera manufacturers improve camera facial identification capability.

  19. Markerless rat head motion tracking using structured light for brain PET imaging of unrestrained awake small animals

    NASA Astrophysics Data System (ADS)

    Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen

    2017-03-01

    Preclinical positron emission tomography (PET) imaging in small animals is generally performed under anesthesia to immobilize the animal during scanning. More recently, for rat brain PET studies, methods to scan unrestrained awake rats are being developed in order to avoid the unwanted effects of anesthesia on the brain response. Here, we investigate the use of a projected-structure stereo camera to track the motion of the rat head during the PET scan. The motion information is then used to correct the PET data. The stereo camera calculates a 3D point cloud representation of the scene, and the tracking is performed by point cloud matching using the iterative closest point algorithm. The main advantage of the proposed motion tracking is that no intervention, e.g. for marker attachment, is needed. A manually moved microDerenzo phantom experiment and 3 awake rat [18F]FDG experiments were performed to evaluate the proposed tracking method. The tracking accuracy was 0.33 mm rms. After motion-corrected image reconstruction, the microDerenzo phantom was recovered, albeit with some loss of resolution. The reconstructed FWHM of the 2.5 and 3 mm rods increased by 0.94 and 0.51 mm, respectively, in comparison with the motion-free case. In the rat experiments, the average tracking success rate was 64.7%. The correlation of relative brain regional [18F]FDG uptake between the anesthesia and awake scan reconstructions increased from on average 0.291 (not significant) before correction to 0.909 (p < 0.0001) after motion correction. Markerless motion tracking using structured light can be successfully used for tracking of the rat head for motion correction in awake rat PET scans.
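
    The tracking in this record rests on iterative closest point (ICP) matching of successive point clouds. Below is a minimal numpy/scipy sketch of ICP with an SVD-based rigid alignment step; the iteration count, tolerance, and synthetic test cloud are illustrative, not taken from the paper.

```python
# Minimal ICP sketch: align a source cloud to a destination cloud by
# alternating nearest-neighbor correspondence and least-squares rigid fit.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=50, tol=1e-6):
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err, cur = np.inf, src.copy()
    for _ in range(iters):
        dists, idx = tree.query(cur)     # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total

# Example: recover a known small rotation plus translation of a cloud.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))
a = np.deg2rad(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
R_est, t_est = icp(cloud, cloud @ R_true.T + 0.1)
print(np.round(R_est, 3))
```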

  20. Pre-clinical and clinical walking kinematics in female breeding pigs with lameness: A nested case-control cohort study.

    PubMed

    Stavrakakis, S; Guy, J H; Syranidis, I; Johnson, G R; Edwards, S A

    2015-07-01

    Gait profiles were investigated in a cohort of female pigs experiencing a lameness period prevalence of 29% over 17 months. Gait alterations before and during visually diagnosed lameness were evaluated to identify the best quantitative clinical lameness indicators and early predictors of lameness. Pre-breeding gilts (n = 84) were recruited to the study over a period of 6 months, underwent motion capture every 5 weeks and, depending on their age at entry to the study, were followed for up to three successive gestations. Animals were subjected to motion capture in each parity at 8 weeks of gestation and on the day of weaning (28 days postpartum). During kinematic motion capture, the pigs walked on the same concrete walkway, and an array of infra-red cameras was used to collect three-dimensional coordinate data from reflective skin markers attached to the head, trunk and limb anatomical landmarks. Of 24 pigs diagnosed with lameness, 19 had preclinical gait records, whilst 18 had a motion capture while lame. Depending on availability, data from one or two preclinical motion capture sessions 1-11 months prior to lameness and from the day of lameness were analysed. Lameness was best detected and evaluated using relative spatiotemporal gait parameters, especially vertical head displacement and asymmetric stride phase timing. Irregularity in the step-to-stride length ratio was elevated (deviation ≥ 0.03) in young pigs which presented lameness in later life (odds ratio 7.2-10.8).

  1. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks in machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of the 2D projection errors on the image plane, not the minimum of the 3D reconstruction errors. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method obtains a more accurate calibration result, which is also physically more meaningful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
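
    To make the forward/back-projection distinction concrete, here is a minimal sketch of refining single-view pinhole parameters by minimizing error on the board plane (the back-projection direction) rather than pixel reprojection error. The parameterization, the absence of distortion terms, and the synthetic demo values are all assumptions; the paper's full multi-view model is not reproduced.

```python
# Minimal BPP-style refinement sketch for one view of a planar board.
import numpy as np
from scipy.optimize import least_squares

def pose_to_H(params):
    """Build the board-to-normalized-image homography H = [r1 r2 t]."""
    rvec, t = np.asarray(params[4:7]), np.asarray(params[7:10])
    theta = np.linalg.norm(rvec)
    if theta > 0:
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K
    else:
        R = np.eye(3)
    return np.column_stack([R[:, 0], R[:, 1], t])

def back_project(params, img_pts):
    """Map pixels back onto the Z = 0 board plane (the BPP direction)."""
    fx, fy, cx, cy = params[:4]
    rays = np.column_stack([(img_pts[:, 0] - cx) / fx,
                            (img_pts[:, 1] - cy) / fy,
                            np.ones(len(img_pts))])
    b = rays @ np.linalg.inv(pose_to_H(params)).T
    return b[:, :2] / b[:, 2:3]

def residuals(params, img_pts, board_pts):
    # 3D (board-plane) error, not 2D reprojection error.
    return (back_project(params, img_pts) - board_pts).ravel()

# Synthetic demo: forward-project a 5x5 board with a known camera, then
# refine a perturbed initial guess by minimizing the back-projection error.
true = np.array([800., 800., 320., 240., 0.1, -0.2, 0.05, 0.05, -0.03, 0.5])
gx, gy = np.meshgrid(np.arange(5) * 0.03, np.arange(5) * 0.03)
board_pts = np.column_stack([gx.ravel(), gy.ravel()])
m = np.column_stack([board_pts, np.ones(len(board_pts))]) @ pose_to_H(true).T
img_pts = np.column_stack([800. * m[:, 0] / m[:, 2] + 320.,
                           800. * m[:, 1] / m[:, 2] + 240.])
init = true + 0.01 * np.random.default_rng(0).normal(size=10)
refined = least_squares(residuals, init, args=(img_pts, board_pts)).x
```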

  2. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
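
    As a sketch of the final classification stage, the following trains a decision tree on the three compressed-domain feature types the record describes. The feature values are hypothetical; extracting them from MPEG macroblock, motion, and bit-rate data is the paper's contribution and is not reproduced here.

```python
# Minimal decision-tree sketch over replay/text/motion features.
from sklearn.tree import DecisionTreeClassifier

# Each row: [replay_detected, text_fraction, mean_motion_magnitude]
X = [[1, 0.02, 7.5],   # sports-like clip (hypothetical values)
     [0, 0.15, 1.2],   # news-like clip
     [1, 0.01, 6.8],
     [0, 0.10, 0.9]]
y = ["sports", "news", "sports", "news"]

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[1, 0.03, 6.1]]))   # -> likely "sports"
```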

  3. Detection of unknown targets from aerial camera and extraction of simple object fingerprints for the purpose of target reacquisition

    NASA Astrophysics Data System (ADS)

    Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri

    2012-01-01

    An aerial multiple-camera tracking paradigm must not only spot and track unknown targets, but also handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system, designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion, allowing it to find targets in motion even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features, which can be combined into a variety of fingerprint types. To keep data transmission and storage requirements low for camera handoff of targets, we try several simple techniques: the histogram, the spatiogram and the single Gaussian model. These are tested by simulating a very large number of target losses in six videos, over an interval of 1000 frames each, from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints, that is, how long a fingerprint remains usable when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us whether a fingerprint method has better accuracy over longer periods. In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance, we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model, compared with the null hypothesis of <20%. Additionally, the performance of the fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to viewpoint and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
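
    The single Gaussian model fingerprint lends itself to a compact sketch: model a segmented target's pixel colors by their mean and covariance, and compare fingerprints with the Bhattacharyya distance. The distance measure, threshold, and random pixel data below are illustrative assumptions, not the paper's exact formulation, and the segmentation step (EDISON mean shift in the paper) is assumed to have already produced the target's pixel set.

```python
# Minimal single-Gaussian color fingerprint sketch for reacquisition.
import numpy as np

def gaussian_fingerprint(pixels):
    """pixels: (N, 3) array of RGB values for one segmented target."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels.T) + 1e-6 * np.eye(3)   # regularize
    return mu, cov

def bhattacharyya(fp_a, fp_b):
    mu_a, cov_a = fp_a
    mu_b, cov_b = fp_b
    cov = 0.5 * (cov_a + cov_b)
    d = mu_a - mu_b
    term1 = 0.125 * d @ np.linalg.solve(cov, d)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov_a) * np.linalg.det(cov_b)))
    return term1 + term2

# Reacquisition: accept the candidate whose fingerprint is closest to the
# stored one (threshold below is hypothetical).
rng = np.random.default_rng(0)
stored = gaussian_fingerprint(rng.random((200, 3)))
candidate = gaussian_fingerprint(rng.random((180, 3)))
match = bhattacharyya(stored, candidate) < 1.0
```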

  4. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew Edie; Matthies, Larry H.

    2000-01-01

    We have developed and tested a software algorithm that enables onboard autonomous motion estimation near small bodies using descent camera imagery and laser altimetry. Through simulation and testing, we have shown that visual feature tracking can decrease uncertainty in spacecraft motion to a level that makes landing on small, irregularly shaped, bodies feasible. Possible future work will include qualification of the algorithm as a flight experiment for the Deep Space 4/Champollion comet lander mission currently under study at the Jet Propulsion Laboratory.

  5. On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    DTIC Science & Technology

    2015-03-01

    VisualSFM (Visual Structure from Motion) is an application that performs incremental structure from motion on images of a scene fed into it [20]. When the scene changes too drastically between frames, VisualSFM begins creating a new model from the images that do not fit the old one. (Acronyms defined in this record excerpt: SWIR, Short Wave Infrared; WPAFB, Wright Patterson Air Force Base.)

  6. [Kinematics of the healthy and arthritic hip joint during walking. A study of 136 subjects].

    PubMed

    Dujardin, F; Aucouturier, T; Bocquet, G; Duparc, F; Weber, J; Thomine, J M

    1998-11-01

    The study aimed to analyze the spatiotemporal parameters and 3-dimensional pelvic and hip kinematic components during gait in two groups: patients with primary osteoarthritis of the hip and normal control subjects. The study included 51 patients, ranging in age from 42 to 81 years, and 86 normal subjects. Gait analysis was performed using the optoelectronic system VICON with 5 cameras under free-speed conditions. Functional grading of the patients was assessed by Lequesne's score. The thickness of the hip cartilage was measured on an AP pelvis radiograph. A preliminary study was performed on 11 patients to measure the reliability of the data. At the initial stage of osteoarthritis, speed, cadence, stride length and hip flexion-extension motion appeared very close to normal values. After this initial stage, there was a statistical relationship between these parameters and the arthritis functional grading. Pelvis rotation around the vertical axis did not change according to the severity of the functional grading. The mean value of this component of pelvis motion was 10 degrees in the pathological group, whereas it was 8 degrees in the female normal group and 7 degrees in the male group. There was no significant relationship between the radiographic thickness of the hip cartilage and the functional grading of patients or the gait parameters. This study demonstrates that spatiotemporal gait parameters and kinematic data provide quantitative indices which could be used in future studies. It also shows that pelvic rotation is greater in the pathological group than in normal subjects, even at the very beginning of hip osteoarthritis. This particularity may be explained as a very early consequence of the arthritis or, conversely, as a risk factor.

  7. Bubble driven quasioscillatory translational motion of catalytic micromotors.

    PubMed

    Manjare, Manoj; Yang, Bo; Zhao, Y-P

    2012-09-21

    A new quasioscillatory translational motion has been observed for big Janus catalytic micromotors with a fast CCD camera. Such motional behavior is found to coincide with both the bubble growth and burst processes resulting from the catalytic reaction, and the competition of the two processes generates a net forward motion. Detailed physical models have been proposed to describe the above processes. It is suggested that the bubble growth process imposes a growth force moving the micromotor forward, while the burst process induces an instantaneous local pressure depression pulling the micromotor backward. The theoretical predictions are consistent with the experimental data.

  8. Bubble Driven Quasioscillatory Translational Motion of Catalytic Micromotors

    NASA Astrophysics Data System (ADS)

    Manjare, Manoj; Yang, Bo; Zhao, Y.-P.

    2012-09-01

    A new quasioscillatory translational motion has been observed for big Janus catalytic micromotors with a fast CCD camera. Such motional behavior is found to coincide with both the bubble growth and burst processes resulting from the catalytic reaction, and the competition of the two processes generates a net forward motion. Detailed physical models have been proposed to describe the above processes. It is suggested that the bubble growth process imposes a growth force moving the micromotor forward, while the burst process induces an instantaneous local pressure depression pulling the micromotor backward. The theoretical predictions are consistent with the experimental data.

  9. Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2013-03-01

    Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state-of-the-art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from the optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine [7] features were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10] and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known. Dense optical flow results indicated that Brox's method was superior to our multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 s vs. 38 s per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
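
    The FOE constraint at the heart of the egomotion step can be illustrated simply: under approximately pure camera translation, flow vectors radiate from the focus of expansion, so each flow vector defines a line through its pixel, and the FOE is the least-squares intersection of those lines. The sketch below shows only this constraint on synthetic flow, not the authors' full egomotion estimator.

```python
# Minimal FOE estimation sketch from a sparse optical flow field.
import numpy as np

def estimate_foe(points, flows):
    """points, flows: (N, 2) arrays of pixel positions and flow vectors."""
    # Normal to each flow direction: n_i = (-vy, vx); the FOE e satisfies
    # n_i . e = n_i . p_i for every flow line, solved in least squares.
    n = np.column_stack([-flows[:, 1], flows[:, 0]])
    b = np.sum(n * points, axis=1)
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe

# Synthetic radial flow expanding from (320, 240) with noise.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 640, size=(100, 2))
true_foe = np.array([320.0, 240.0])
flow = 0.05 * (pts - true_foe) + rng.normal(scale=0.2, size=(100, 2))
print(estimate_foe(pts, flow))   # approximately [320, 240]
```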

  10. Versatile microsecond movie camera

    NASA Astrophysics Data System (ADS)

    Dreyfus, R. W.

    1980-03-01

    A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.

  11. A quasi-dense matching approach and its calibration application with Internet photos.

    PubMed

    Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei

    2015-03-01

    This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level across multiple views with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize the 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process for the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.

  12. Accuracy and Precision of a Custom Camera-Based System for 2-D and 3-D Motion Tracking during Speech and Nonspeech Motor Tasks

    ERIC Educational Resources Information Center

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable…

  13. STS-28 Columbia, OV-102, MS Brown uses ARRIFLEX camera on aft flight deck

    NASA Image and Video Library

    1989-08-13

    STS028-17-033 (August 1989) --- Astronaut Mark N. Brown, STS-28 mission specialist, pauses from a session of motion-picture photography conducted through one of the aft windows on the flight deck of the Earth-orbiting Space Shuttle Columbia. He is using an Arriflex camera. The horizon of the blue and white appearing Earth and its airglow are visible in the background.

  14. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method was proposed to extract sound-induced vibrations from phase variations in videos, providing insights into remote sound surveillance and material analysis. Here, an efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is first used to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region of the captured video are reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of this matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test validates the effectiveness and efficiency of the proposed method, and two experiments demonstrate potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
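
    The OIB construction is compact enough to sketch directly: reshape subimages into vectors, take the SVD, and project later frames onto a basis vector to recover a per-frame scalar signal. In the synthetic example below the first basis carries the modulation; in real data the motion-carrying basis must be selected, and the frame rate and patch size are placeholders.

```python
# Minimal SVD/OIB sketch for subtle-motion detection from pixel intensities.
import numpy as np

def orthonormal_image_bases(frames):
    """frames: (T, H, W) subimage stack from a training segment."""
    T = frames.shape[0]
    A = frames.reshape(T, -1).T                # each column is one subimage
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U                                    # columns are the OIBs

def project_signal(frames, oib):
    """Project each subimage onto one OIB to get a scalar per frame."""
    return frames.reshape(frames.shape[0], -1) @ oib

# Synthetic example: an 8x8 patch whose intensity is modulated at 440 Hz,
# sampled at an assumed 20 kHz high-speed frame rate.
t = np.arange(2000) / 20000.0
pattern = np.outer(np.hanning(8), np.hanning(8))
stack = np.array([pattern * (1 + 0.01 * np.sin(2 * np.pi * 440 * ti))
                  for ti in t])
U = orthonormal_image_bases(stack[:200])
signal = project_signal(stack, U[:, 0])   # recovers the 440 Hz modulation
```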

  15. Educational Aspects of the CONCAM Sky Monitoring Project

    NASA Astrophysics Data System (ADS)

    Nemiroff, R. J.; Rafert, J. B.; Ftaclas, C.; Pereira, W. E.; Perez-Ramirez, D.

    2000-12-01

    We have built a prototype CONtinuous CAMera (CONCAM) that mates a fisheye lens to a CCD camera run by a laptop computer. Presently, one CONCAM is deployed at Kitt Peak National Observatory and another is being set up on Mauna Kea in Hawaii. CONCAMs can detect stars of visual magnitude 6 near the image center in a two-minute exposure. CONCAMs are weather-proof, take continuous data from 2π steradians of the sky, are programmable over the internet, create data files downloadable over the internet, are small enough to fit inside a briefcase, and cost under $10K. Images archived at http://concam.net can be used to teach many introductory concepts. These include: the rotation of the Earth, the relative location and phase of the Moon, the location and relative motion of planets, the location of the Galactic plane, the motion of Earth satellites, the location and motion of comets, the motion of meteors, the radiant of a meteor shower, the relative locations of interesting stars, and the relative brightness changes of highly variable stars. Concam.net is not meant to replace first-hand student observations of the sky, but rather to complement them with classroom-accessible actual-sky-image examples.

  16. Graphics simulation and training aids for advanced teleoperation

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1993-01-01

    Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.

  17. Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung

    2013-01-01

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human-computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by the user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing this noise. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of the EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as the optimal features, with dimension reduction and feature selection performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy of head-movement detection. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713
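
    A minimal sketch of the fusion-and-classification pipeline named in this record (combined EEG and camera features, LDA for dimension reduction, SVM for detection) is shown below with synthetic placeholder features; the actual feature extraction from EEG bands and camera motion is not reproduced.

```python
# Minimal LDA + SVM sketch for head-movement detection from fused features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
eeg_feats = rng.normal(size=(n, 16))    # e.g., band powers per channel
cam_feats = rng.normal(size=(n, 4))     # e.g., optical-flow statistics
y = rng.integers(0, 2, size=n)          # 1 = head movement present
# Make the classes separable for the demo.
eeg_feats[y == 1] += 0.8
cam_feats[y == 1] += 1.2

X = np.hstack([eeg_feats, cam_feats])   # combined feature vector
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC())
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```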

  18. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of interior orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; the 3-D trajectory is thus determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points, without using markers, and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established from the mean displacement of all the trajectories inside the region. The tracked key points lead to a final result comparable to that of conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
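
    Forward ray intersection, used twice in this pipeline, reduces to linear triangulation; here is a minimal sketch for a point seen in three views. The projection matrices below are hypothetical, whereas in the record they come from the self-calibration step.

```python
# Minimal linear triangulation (forward ray intersection) sketch.
import numpy as np

def triangulate(proj_mats, img_pts):
    """proj_mats: list of 3x4 camera matrices; img_pts: list of (u, v)."""
    rows = []
    for P, (u, v) in zip(proj_mats, img_pts):
        # From x ~ P X: u * P[2] - P[0] = 0 and v * P[2] - P[1] = 0.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                 # dehomogenize

# Example: three cameras (hypothetical poses) observing a known point.
X_true = np.array([0.2, -0.1, 3.0, 1.0])
Ps = [np.hstack([np.eye(3), np.array([[dx], [0.0], [0.0]])])
      for dx in (0.0, -0.5, 0.5)]
pts = []
for P in Ps:
    x = P @ X_true
    pts.append((x[0] / x[2], x[1] / x[2]))
print(triangulate(Ps, pts))             # ~ [0.2, -0.1, 3.0]
```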

  19. Dynamic light scattering microscopy

    NASA Astrophysics Data System (ADS)

    Dzakpasu, Rhonda

    An optical microscope technique, dynamic light scattering microscopy (DLSM), that images dynamically scattered light fluctuation decay rates is introduced. Using physical optics, we show theoretically that, within the optical resolution of the microscope, relative motions between scattering centers are sufficient to produce significant phase variations resulting in interference intensity fluctuations in the image plane. The time scale for these intensity fluctuations is predicted. The spatial coherence distance, defining the average distance between constructive and destructive interference in the image plane, is calculated and compared with the pixel size. We experimentally tested DLSM on polystyrene latex nanospheres and living macrophage cells. In order to record these rapid fluctuations on a slow progressive-scan CCD camera, we used a thin laser line of illumination on the sample such that only a single column of pixels in the CCD camera is illuminated. This allowed the rate of the column-by-column readout transfer process to be used as the acquisition rate of the camera. This manipulation increased the data acquisition rate by at least an order of magnitude in comparison with conventional CCD camera rates defined in frames/s. Analysis of the observed fluctuations provides information on the rates of motion of the scattering centers. These rates, acquired from each position on the sample, are used to create a spatial map of the fluctuation decay rates. Our experiments show that with this technique we are able to achieve a good signal-to-noise ratio and can monitor fast intensity fluctuations on the order of milliseconds. DLSM appears to provide dynamic information about fast motions within cells at a sub-optical-resolution scale and provides a new kind of spatial contrast.

  20. A traffic situation analysis system

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin

    2011-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. For example, embedded vision systems built into vehicles can be used as early warning systems, or stationary camera systems can modify the switching frequency of signals at intersections. Today the automated analysis of traffic situations is still in its infancy - the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully understood by a vision system. We present steps towards such a traffic monitoring system, which is designed to detect potentially dangerous traffic situations, especially incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system is being field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in an outdoor-capable housing. Two cameras run vehicle detection software including license plate detection and recognition, and one camera runs a complex pedestrian detection and tracking module based on the HOG detection principle. As a supplement, all 3 cameras use additional optical flow computation on a low-resolution video stream in order to estimate the motion path and speed of objects. This work describes the foundation for all 3 object detection modalities (pedestrians, vehicles, license plates), and explains the system setup and its design.

  1. Partial camera automation in an unmanned air vehicle.

    PubMed

    Korteling, J E; van der Borg, W

    1997-03-01

    The present study focused on an intelligent, semiautonomous interface for the camera operator of a simulated unmanned air vehicle (UAV). This interface used system "knowledge" concerning UAV motion in order to assist the camera operator in tracking an object moving through the landscape below. The semiautomated system compensated for the translations of the UAV relative to the earth. This compensation was accompanied by the appropriate joystick movements, ensuring tactile (haptic) feedback of these system interventions. The operator had to superimpose self-initiated joystick manipulations on these system-initiated joystick motions in order to track the motion of a target (a driving truck) relative to the terrain. Tracking data showed that subjects performed substantially better with the active system. Apparently, the subjects had no difficulty maintaining control, i.e., "following" the active stick while superimposing self-initiated control movements on the system interventions. Furthermore, tracking performance with the active interface was clearly superior to that with the passive system. The magnitude of this effect was equal to the effect of the update frequency (2-5 Hz) of the monitor image. The benefits of update-frequency enhancement and semiautomated tracking were greatest under difficult steering conditions. Mental workload scores indicated that, for the difficult tracking-dynamics condition, both semiautomation and an update-frequency increase resulted in less experienced mental effort. For the easier dynamics, this effect was only seen for update frequency.

  2. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  3. A new position measurement system using a motion-capture camera for wind tunnel tests.

    PubMed

    Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok

    2013-09-13

    Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements.
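
    Frequency domain decomposition, the identification method used in this record, can be sketched compactly: build the cross-spectral density matrix of the multi-point displacement records, take its SVD at each frequency, and read natural frequencies from peaks of the first singular value (the corresponding singular vector approximates the mode shape). The sampling rate, channel count, and synthetic mode below are placeholders, not the study's data.

```python
# Minimal frequency domain decomposition (FDD) sketch.
import numpy as np
from scipy.signal import csd

def fdd(signals, fs, nperseg=1024):
    """signals: (n_channels, n_samples) displacement time histories."""
    n = signals.shape[0]
    f, _ = csd(signals[0], signals[0], fs=fs, nperseg=nperseg)
    G = np.empty((len(f), n, n), dtype=complex)   # CSD matrix G(f)
    for i in range(n):
        for j in range(n):
            _, G[:, i, j] = csd(signals[i], signals[j], fs=fs,
                                nperseg=nperseg)
    U, s, _ = np.linalg.svd(G)
    return f, s[:, 0], U[:, :, 0]   # first singular values / vectors

# Example: two channels measuring a 2 Hz oscillation in noise.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
mode = np.sin(2 * np.pi * 2.0 * t)
noise = 0.1 * np.random.default_rng(2).normal(size=(2, len(t)))
sig = np.vstack([1.0 * mode, 0.6 * mode]) + noise
f, sv1, shapes = fdd(sig, fs)
print("identified frequency:", f[np.argmax(sv1)])   # ~2 Hz
```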

  4. A New Position Measurement System Using a Motion-Capture Camera for Wind Tunnel Tests

    PubMed Central

    Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok

    2013-01-01

    Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements. PMID:24064600

  5. Blind image deblurring based on trained dictionary and curvelet using sparse representation

    NASA Astrophysics Data System (ADS)

    Feng, Liang; Huang, Qian; Xu, Tingfa; Li, Shao

    2015-04-01

    Motion blur is one of the most significant and common artifacts causing poor image quality in digital photography, and it can result from many factors. In the imaging process, if objects move quickly in the scene or the camera moves during the exposure interval, the image of the scene blurs along the direction of relative motion between the camera and the scene (e.g., camera shake, atmospheric turbulence). Recently, the sparse representation model has been widely used in signal and image processing as an effective way to describe natural images. In this article, a new deblurring approach based on sparse representation is proposed. An overcomplete dictionary learned from training image samples via the KSVD algorithm is designed to represent the latent image. The motion-blur kernel can be treated as a piecewise-smooth function in the image domain, whose support is approximately a thin smooth curve, so we employ curvelets to represent the blur kernel. Both the overcomplete dictionary and the curvelet system yield highly sparse representations, which improves robustness to noise and better satisfies the observer's visual requirements. With these two priors, we construct a restoration model for blurred images and solve the resulting optimization problem with the help of an alternating minimization technique. The experimental results show that the method preserves the texture of the original images and suppresses ringing artifacts effectively.

  6. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew E.; Matthies, Larry H.

    1998-01-01

    Space science and solar system exploration are driving NASA to develop an array of small body missions ranging in scope from near body flybys to complete sample return. This paper presents an algorithm for onboard motion estimation that will enable the precision guidance necessary for autonomous small body landing. Our techniques are based on automatic feature tracking between a pair of descent camera images followed by two frame motion estimation and scale recovery using laser altimetry data. The output of our algorithm is an estimate of rigid motion (attitude and position) and motion covariance between frames. This motion estimate can be passed directly to the spacecraft guidance and control system to enable rapid execution of safe and precise trajectories.

  7. Feasibility study for the application of the large format camera as a payload for the Orbiter program

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The large format camera (LFC) designed as a 30 cm focal length cartographic camera system that employs forward motion compensation in order to achieve the full image resolution provided by its 80 degree field angle lens is described. The feasibility of application of the current LFC design to deployment in the orbiter program as the Orbiter Camera Payload System was assessed and the changes that are necessary to meet such a requirement are discussed. Current design and any proposed design changes were evaluated relative to possible future deployment of the LFC on a free flyer vehicle or in a WB-57F. Preliminary mission interface requirements for the LFC are given.

  8. Mechanical Design of the LSST Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordby, Martin; Bowden, Gordon; Foss, Mike

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  9. Distributed Sensing and Processing for Multi-Camera Networks

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  10. Real-time intra-fraction-motion tracking using the treatment couch: a feasibility study

    NASA Astrophysics Data System (ADS)

    D'Souza, Warren D.; Naqvi, Shahid A.; Yu, Cedric X.

    2005-09-01

    Significant differences between planned and delivered treatments may occur due to respiration-induced tumour motion, leading to underdosing of parts of the tumour and overdosing of parts of the surrounding critical structures. Existing methods proposed to counter tumour motion include breath-holds, gating and MLC-based tracking. Breath-holds and gating techniques increase treatment time considerably, whereas MLC-based tracking is limited to two dimensions. We present an alternative solution in which a robotic couch moves in real time in response to organ motion. To demonstrate proof of principle, we constructed a miniature adaptive couch model consisting of two movable platforms that simulate tumour motion and couch motion, respectively. These platforms were connected via an electronic feedback loop so that the bottom platform responded to the motion of the top platform. We tested our model with a seven-field step-and-shoot delivery case in which we performed three film-based experiments: (1) static geometry, (2) phantom-only motion and (3) phantom motion with simulated couch motion. Our measurements demonstrate that the miniature couch was able to compensate for phantom motion to the extent that the dose distributions were practically indistinguishable from those in static geometry. Motivated by this initial success, we investigated a real-time couch compensation system consisting of a stereoscopic infra-red camera system interfaced to a robotic couch known as the Hexapod™, which responds in real time to any change in position detected by the cameras. Optical reflectors placed on a solid water phantom were used as surrogates for motion. We tested the effectiveness of couch-based motion compensation for fixed-field and dynamic arc delivery cases. Due to hardware limitations, we performed film-based experiments (1), (2) and (3) with the robotic couch at a phantom motion period of 16 s and a dose rate of 100 MU min-1. Analysis of film measurements showed near-equivalent dose distributions (≤2 mm agreement of corresponding isodose lines) for static geometry and motion-synchronized real-time robotic couch tracking-based radiation delivery.

  11. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In classic methods, the camera parameters are usually calculated and optimized from the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. The presented method uses a planar calibration plate. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method. Finally, the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction, and is thus more suitable for assessing the quality of dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration plate position during the calibration process, and the accuracy is improved significantly.

  12. Observation and analysis of high-speed human motion with frequent occlusion in a large area

    NASA Astrophysics Data System (ADS)

    Wang, Yuru; Liu, Jiafeng; Liu, Guojun; Tang, Xianglong; Liu, Peng

    2009-12-01

    The use of computer vision technology in collecting and analyzing statistics during sports matches or training sessions is expected to provide valuable information for tactics improvement. However, the measurements published in the literature so far are either too unreliably documented to be used in training planning or unsuitable for studying high-speed motion over a large area with frequent occlusions. A sports annotation system is introduced in this paper for tracking high-speed non-rigid human motion over a large playing area with the aid of a moving camera, taking short track speed skating competitions as an example. The proposed system is composed of two sub-systems: precise camera motion compensation and accurate motion acquisition. In the video registration step, a distinctive invariant point feature detector (probability density grads detector) and a global parallax based matching-point filter are used to provide reliable and robust matching across a large range of affine distortion and illumination change. In the motion acquisition step, a joint color model constrained by the relationship between two key body regions and a Markov chain Monte Carlo based joint particle filter are emphasized, dividing the human body into two relative key regions. Several field tests were performed to assess measurement errors, including comparison to popular algorithms. With the help of the system presented, position data were obtained on a 30 m × 60 m rink with root-mean-square error better than 0.3975 m, and velocity and acceleration data with absolute error better than 1.2579 m s-1 and 0.1494 m s-2, respectively.
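
    As an illustration of the joint-particle-filter stage, here is a minimal generic particle filter with a random-walk motion model and a Gaussian placeholder likelihood standing in for the paper's two-region joint color model; all parameters are illustrative.

```python
# Minimal particle filter sketch: predict, weight, resample.
import numpy as np

rng = np.random.default_rng(3)
n_particles = 500
particles = rng.uniform(0, 100, size=(n_particles, 2))   # (x, y) on rink
weights = np.full(n_particles, 1.0 / n_particles)

def likelihood(positions, observation, sigma=2.0):
    """Placeholder appearance model: particles closer to the observed
    target centroid receive higher weight."""
    d2 = np.sum((positions - observation) ** 2, axis=1)
    return np.exp(-0.5 * d2 / sigma**2)

def step(particles, weights, observation):
    # Motion model: random-walk diffusion (a constant-velocity model
    # would be the natural refinement for high-speed skaters).
    particles = particles + rng.normal(scale=1.5, size=particles.shape)
    weights = weights * likelihood(particles, observation)
    weights /= weights.sum()
    # Resample when the effective sample size drops too low.
    if 1.0 / np.sum(weights**2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return particles, weights

obs = np.array([50.0, 50.0])
for _ in range(20):
    obs = obs + np.array([0.8, 0.3])       # skater drifts across the rink
    particles, weights = step(particles, weights, obs)
print("estimate:", np.average(particles, weights=weights, axis=0))
```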

  13. 3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading

    PubMed Central

    2011-01-01

    Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. Current solutions are often not very promising for the patient; it would therefore be valuable to measure the dynamic 3D deformation of the whole pelvic bone in order to obtain a more realistic dataset for better implant design. We therefore hypothesized that a material testing machine could be combined with a 3D video motion capturing system of the kind used in clinical gait analysis to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capturing system, were fixed to the bony surface of the pelvis specimen. While a dynamic sinusoidal load was applied, the 3D movement of the markers was recorded by the cameras, and the 3D deformation of the pelvis specimen was computed afterwards. The accuracy of the 3D marker movement was verified against a step-function 3D displacement curve generated with a manually driven 3D micro-motion stage. Results The accuracy of the measurement system depended on the number of cameras tracking a marker. During the stationary phase of the calibration procedure, the noise level for a marker seen by two cameras was ±0.036 mm, and ±0.022 mm if tracked by 6 cameras. The detectable 3D movement performed by the 3D micro-motion stage was smaller than the noise level of the 3D video motion capturing system. The limiting factor of the setup was therefore the noise level, which resulted in a measurement accuracy of ±0.036 mm for the dynamic test setup. Conclusion This 3D test setup opens new possibilities in the dynamic testing of a wide range of materials, such as anatomical specimens, biomaterials, and their combinations. The resulting 3D deformation dataset can be used for a better estimation of the material characteristics of the underlying structures. This is an important factor in reliable biomechanical modelling and simulation, as well as in the successful design of complex implants. PMID:21762533

  14. Ambient-Light-Canceling Camera Using Subtraction of Frames

    NASA Technical Reports Server (NTRS)

    Morookian, John Michael

    2004-01-01

    The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period. Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and would be illuminated with only ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
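
    The subtraction principle itself is simple enough to sketch with synthetic frames: with the LED on during one frame and off during the next, the ambient component cancels in the difference. The intensities and patch below are placeholders; a real implementation would subtract high-rate ROI readouts as described above.

```python
# Minimal frame-subtraction sketch of the ALCC principle.
import numpy as np

def alcc_pair(ambient, led_signal, noise_scale=1.0, rng=None):
    """Simulate one LED-on frame and one LED-off frame."""
    rng = rng or np.random.default_rng()
    noise = lambda: rng.normal(scale=noise_scale, size=ambient.shape)
    frame_on = ambient + led_signal + noise()   # LED-illuminated frame
    frame_off = ambient + noise()               # background-only frame
    return frame_on, frame_off

H, W = 64, 64
ambient = 500.0 * np.ones((H, W))               # strong ambient background
led_signal = np.zeros((H, W))
led_signal[30:34, 30:34] = 80.0                 # synthetic corneal glint

on, off = alcc_pair(ambient, led_signal)
signal_only = on - off                          # ambient cancels out
print("glint at:", np.unravel_index(np.argmax(signal_only), (H, W)))
```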

  15. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illumination can only be visualized if the actual range of values is compressed, leading to the creation of saturated and/or dark noisy areas and a loss of information in these areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all the information is not present in the original data. Active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex (DSLR) cameras, is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night vision, and infrared data; and successfully applies to night-time and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
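
    The paper does not detail its fusion mechanism; as a generic stand-in, the sketch below fuses a bracketed exposure stack with per-pixel "well-exposedness" weights (a Gaussian around mid-gray). The sigma value and grayscale assumption are placeholders.

    ```python
    import numpy as np

    def fuse_exposures(frames, sigma=0.2):
        """Naive per-pixel fusion of a bracketed exposure stack.

        frames: list of grayscale float images in [0, 1], one per
        exposure. Pixels near mid-gray get high weight, so each region
        of the fused image is dominated by its best-exposed frame.
        """
        stack = np.stack(frames)                      # (k, h, w)
        w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
        w /= w.sum(axis=0, keepdims=True) + 1e-12     # normalize weights
        return (w * stack).sum(axis=0)
    ```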

  16. A Single Camera Motion Capture System for Human-Computer Interaction

    NASA Astrophysics Data System (ADS)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  17. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras

    PubMed Central

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-01-01

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in the fast measurement of large-scale or high-speed moving objects. Innovative line-scan technology opens up new possibilities owing to its ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731

  18. Detection of unmanned aerial vehicles using a visible camera system.

    PubMed

    Hu, Shuowen; Goldman, Geoffrey H; Borel-Donohue, Christoph C

    2017-01-20

    Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.

  19. Estimation of velocities via optical flow

    NASA Astrophysics Data System (ADS)

    Popov, A.; Miller, A.; Miller, B.; Stepanyan, K.

    2017-02-01

    This article presents an approach to using optical flow (OF) as a general navigation means, providing information about the vehicle's linear and angular velocities. The term "OF" comes from opto-electronic devices, where it corresponds to a video sequence of images related to the camera motion over static surfaces or a set of objects. Even if the positions of these objects are unknown in advance, one can estimate the camera motion given just the video sequence itself and some metric information, such as the distance between the objects or the range to the surface. This approach is applicable to any passive observation system which is able to produce a sequence of images, such as a radio locator or sonar. Here the UAV application of OF is considered since it is historically
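
    For the simplest case implied above, a camera translating parallel to a surface at known range, metric velocity follows directly from the flow; the sketch below assumes a precomputed flow field in pixels per second, a focal length in pixels, and pure translation over a planar scene.

    ```python
    import numpy as np

    def velocity_from_flow(flow_px_per_s, range_m, focal_px):
        """Recover metric velocity from optical flow and known range.

        flow_px_per_s: (h, w, 2) flow field in pixels/s. For pure
        translation at range Z, a flow of u px/s maps to v = u * Z / f.
        The median over the field suppresses outlier vectors.
        """
        u = np.median(flow_px_per_s.reshape(-1, 2), axis=0)
        return u * range_m / focal_px   # (vx, vy) in m/s
    ```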

  20. Psychophysical Calibration of Mobile Touch-Screens for Vision Testing in the Field

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    2015-01-01

    The now ubiquitous nature of touch-screen displays in cell phones and tablet computers makes them an attractive option for vision testing outside of the laboratory or clinic. Accurate measurement of parameters such as contrast sensitivity, however, requires precise control of absolute and relative screen luminances. The nonlinearity of the display response (gamma) can be measured or checked using a minimum motion technique similar to that developed by Anstis and Cavanagh (1983) for the determination of isoluminance. While the relative luminances of the color primaries vary between subjects (due to factors such as individual differences in pre-retinal pigment densities), the gamma nonlinearity can be checked in the lab using a photometer. Here we compare results obtained using the psychophysical method with physical measurements for a number of different devices. In addition, we present a novel physical method using the device's built-in front-facing camera in conjunction with a mirror to jointly calibrate the camera and display. A high degree of consistency between devices is found, but some departures from ideal performance are observed. In spite of this, the effects of calibration errors and display artifacts on estimates of contrast sensitivity are found to be small.
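
    A minimal sketch of the photometer-based gamma check: assuming the display follows L = L_max * (v / v_max)^gamma, the exponent is recovered by a linear fit in log-log space. The drive levels and luminance readings below are illustrative, not measurements from the study.

    ```python
    import numpy as np

    def fit_gamma(drive_levels, luminances):
        """Estimate display gamma from photometer readings (v > 0)."""
        v = np.asarray(drive_levels, dtype=float)
        L = np.asarray(luminances, dtype=float)
        slope, _ = np.polyfit(np.log(v / v.max()), np.log(L / L.max()), 1)
        return slope

    # Hypothetical readings; a typical panel yields gamma near 2.2.
    print(fit_gamma([64, 128, 192, 255], [12.0, 55.0, 140.0, 260.0]))
    ```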

  1. 3-D high-speed imaging of volcanic bomb trajectory in basaltic explosive eruptions

    USGS Publications Warehouse

    Gaudin, D.; Taddeucci, J.; Houghton, Bruce F.; Orr, Tim R.; Andronico, D.; Del Bello, E.; Kueppers, U.; Ricci, T.; Scarlato, P.

    2016-01-01

    Imaging in general, and high-speed imaging in particular, are important emerging tools for the study of explosive volcanic eruptions. However, traditional 2-D video observations cannot measure volcanic ejecta motion toward and away from the camera, strongly hindering our capability to fully determine crucial hazard-related parameters such as explosion directionality and pyroclasts' absolute velocity. In this paper, we use up to three synchronized high-speed cameras to reconstruct pyroclast trajectories in three dimensions. Classical stereographic techniques are adapted to overcome the difficult observation conditions of active volcanic vents, including the large number of overlapping pyroclasts which may change shape in flight, variable lighting and clouding conditions, and lack of direct access to the target. In particular, we use a laser rangefinder to measure the geometry of the filming setup and manually track pyroclasts on the videos. This method reduces uncertainties to 10° in azimuth and dip angle of the pyroclasts, and down to 20% in the absolute velocity estimation. We demonstrate the potential of this approach by three examples: the development of an explosion at Stromboli, a bubble burst at Halema'uma'u lava lake, and an in-flight collision between two bombs at Stromboli.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leary, T.J.; Lamb, A.

    The Department of Energy's Office of Arms Control and Non-Proliferation (NN-20) has developed a suite of airborne remote sensing systems that simultaneously collect coincident data from a US Navy P-3 aircraft. The primary objective of the Airborne Multisensor Pod System (AMPS) Program is "to collect multisensor data that can be used for data research, both to reduce interpretation problems associated with data overload and to develop information products more complete than can be obtained from any single sensor." The sensors are housed in wing-mounted pods and include: a Ku-Band Synthetic Aperture Radar; a CASI Hyperspectral Imager; a Daedalus 3600 Airborne Multispectral Scanner; a Wild Heerbrugg RC-30 motion-compensated large-format camera; various high-resolution, light-intensified and thermal video cameras; and several experimental sensors (e.g., the Portable Hyperspectral Imager for Low-Light Spectroscopy (PHILLS)). Over the past year or so, the Coastal Marine Resource Assessment (CAMRA) group at the Florida Department of Environmental Protection's Marine Research Institute (FMRI) has been working with the Department of Energy through the Naval Research Laboratory to develop applications and products from existing data. Considerable effort has been spent identifying image formats and integration parameters. 2 refs., 3 figs., 2 tabs.

  3. The impact of stereo 3D sports TV broadcasts on user's depth perception and spatial presence experience

    NASA Astrophysics Data System (ADS)

    Weigelt, K.; Wiemeyer, J.

    2014-03-01

    This work examines the impact of content and presentation parameters in 2D versus 3D on depth perception and spatial presence, and provides guidelines for stereoscopic content development for 3D sports TV broadcasts and cognate subjects. Under consideration of depth perception and spatial presence experience, a preliminary study with 8 participants (sports: soccer and boxing) and a main study with 31 participants (sports: soccer and BMX-Miniramp) were performed. The dimension (2D vs. 3D) and camera position (near vs. far) were manipulated for soccer and boxing. In addition for soccer, the field of view (small vs. large) was examined. Moreover, the direction of motion (horizontal vs. depth) was considered for BMX-Miniramp. Subjective assessments, behavioural tests and qualitative interviews were implemented. The results confirm a strong effect of 3D on both depth perception and spatial presence experience as well as selective influences of camera distance and field of view. The results can improve understanding of the perception and experience of 3D TV as a medium. Finally, recommendations are derived on how to use various 3D sports ideally as content for TV broadcasts.

  4. 3-D Velocimetry of Strombolian Explosions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Gaudin, D.; Orr, T. R.; Scarlato, P.; Houghton, B. F.; Del Bello, E.

    2014-12-01

    Using two synchronized high-speed cameras we were able to reconstruct the three-dimensional displacement and velocity field of bomb-sized pyroclasts in Strombolian explosions at Stromboli Volcano. Relatively low-intensity Strombolian-style activity offers a rare opportunity to observe volcanic processes that remain hidden from view during more violent explosive activity. Such processes include the ejection and emplacement of bomb-sized clasts along pure or drag-modified ballistic trajectories, in-flight bomb collision, and gas liberation dynamics. High-speed imaging of Strombolian activity has already opened new windows for the study of the abovementioned processes, but to date has only utilized two-dimensional analysis with limited motion detection and ability to record motion towards or away from the observer. To overcome this limitation, we deployed two synchronized high-speed video cameras at Stromboli. The two cameras, located sixty meters apart, filmed Strombolian explosions at 500 and 1000 frames per second and with different resolutions. Frames from the two cameras were pre-processed and combined into a single video showing frames alternating from one to the other camera. Bomb-sized pyroclasts were then manually identified and tracked in the combined video, together with fixed reference points located as close as possible to the vent. The results from manual tracking were fed to a custom software routine that, knowing the relative position of the vent and cameras, and the field of view of the latter, provided the position of each bomb relative to the reference points. By tracking tens of bombs over five to ten frames at different intervals during one explosion, we were able to reconstruct the three-dimensional evolution of the displacement and velocity fields of bomb-sized pyroclasts during individual Strombolian explosions. Shifting jet directivity and dispersal angle clearly appear from the three-dimensional analysis.
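
    The authors' custom routine is not published; as a generic stand-in, the classical linear (DLT) triangulation below recovers a bomb's 3D position from matched pixel coordinates in the two synchronized views, assuming the 3x4 projection matrices of both cameras are known from the setup geometry. Differencing such positions across frames then yields the 3D velocity field.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one tracked pyroclast.

        P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel
        coordinates of the same bomb in each view. Returns the 3D
        point in the common reference frame.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]   # dehomogenize
    ```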

  5. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    An image block matching algorithm based on motion vectors of correlative pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique which obtains information on the relative motion among frames of dynamic image sequences by means of digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. The matching parameters based on these vectors simultaneously contain the information of the vectors in the transverse and vertical directions of the image blocks. Better matching information can thus be obtained after performing the correlation operation in the oblique direction. An iterative weighted least-squares method is used to eliminate block-matching errors. The weights are related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image can be obtained by weighted least squares from the estimates of blocks chosen evenly across the image. The shaking image can then be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by using simulated annealing in the block-matching search. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is a TMS320C6416 from TI, and a CCD camera with a resolution of 720×576 pixels provided the input video signal. Experimental results show that the algorithm can run on the real-time processing system with accurate matching precision.
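
    A hedged sketch of the weighted least-squares step described above: under a small-angle rigid-motion model, each block's motion vector v_i ≈ theta * J p_i + t constrains a global rotation angle theta and translation t. The linearization and variable names are illustrative, not the authors' exact formulation.

    ```python
    import numpy as np

    def global_motion(points, vectors, weights=None):
        """Weighted least-squares rotation + translation from block
        motion vectors, using the small-angle model v = theta*J*p + t
        with J = [[0, -1], [1, 0]].
        """
        p = np.asarray(points, float)               # (n, 2) block centers
        v = np.asarray(vectors, float).reshape(-1)  # interleaved (vx, vy)
        w = np.ones(len(p)) if weights is None else np.asarray(weights, float)
        A = np.zeros((2 * len(p), 3))
        A[0::2, 0], A[1::2, 0] = -p[:, 1], p[:, 0]  # theta column (J*p)
        A[0::2, 1] = 1.0                            # tx column
        A[1::2, 2] = 1.0                            # ty column
        sw = np.sqrt(np.repeat(w, 2))
        theta, tx, ty = np.linalg.lstsq(A * sw[:, None], v * sw, rcond=None)[0]
        return theta, (tx, ty)
    ```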

  6. The kinelite project. A new powerful motion analyser for spacelab and space station

    NASA Astrophysics Data System (ADS)

    Venet, M.; Pinard, H.; McIntyre, J.; Berthoz, A.; Lacquaniti, F.

    The goal of the Kinelite Project is to develop a space-qualified motion analysis system to be used in space by the scientific community, mainly to support neuroscience protocols. The measurement principle of the Kinelite is to determine, by means of triangulation, the 3D position of small, lightweight, reflective markers positioned at the different points of interest. The scene is illuminated by infrared flashes and the reflected light is acquired by up to 8 precalibrated and synchronized CCD cameras. The main characteristics of the system are: - Camera field of view: 45°, - Number of cameras: 2 to 8, - Acquisition frequency: 25, 50, 100 or 200 Hz, - CCD format: 256 × 256, - Number of markers: up to 64, - 3D accuracy: 2 mm, - Main dimensions: 45 cm × 45 cm × 30 cm, - Mass: 23 kg, - Power consumption: less than 200 W. The Kinelite will first fly aboard the NASA Spacelab; it will be used, during the NEUROLAB mission (4/98), to support the "Frames of Reference and Internal Models" experiment (Principal Investigator: Prof. A. Berthoz; Co-Investigators: J. McIntyre, F. Lacquaniti).

  7. Vision robot with rotational camera for searching ID tags

    NASA Astrophysics Data System (ADS)

    Kimura, Nobutaka; Moriya, Toshio

    2008-02-01

    We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed "a barcode reading robot" which autonomously moved in a warehouse. It located and read barcode ID tags using a camera and a barcode reader while moving. However, motion blurs caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce these blurs. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of barcoded boxes. We verified the effectiveness of our method in an experimental test.
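
    For the simple case of a barcode surface directly abeam of the robot, the appropriate pan rate follows from geometry alone; the one-liner below is an assumed simplification of the paper's derivation, not its exact formula.

    ```python
    def pan_rate(robot_speed_mps, distance_m):
        """Pan angular velocity (rad/s) that holds an abeam target
        stationary in the image: the apparent angular rate of a target
        at distance d seen from a robot moving at v is v / d, so the
        camera pans at that rate in the opposite direction.
        """
        return robot_speed_mps / distance_m

    print(pan_rate(0.5, 2.0))   # 0.25 rad/s at 0.5 m/s, 2 m away
    ```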

  8. Muscle forces analysis in the shoulder mechanism during wheelchair propulsion.

    PubMed

    Lin, Hwai-Ting; Su, Fong-Chin; Wu, Hong-Wen; An, Kai-Nan

    2004-01-01

    This study combines an ergometric wheelchair, a six-camera video motion capture system and a prototype computer graphics based musculoskeletal model (CGMM) to predict shoulder joint loading, muscle contraction force per muscle and the sequence of muscular actions during wheelchair propulsion, and also to provide an animated computer graphics model of the relative interactions. Five healthy male subjects with no history of upper extremity injury participated. A conventional manual wheelchair was equipped with a six-component load cell to collect three-dimensional forces and moments experienced by the wheel, allowing real-time measurement of hand/rim force applied by subjects during normal wheelchair operation. An ExpertVision six-camera video motion capture system collected trajectory data of markers attached on anatomical positions. The CGMM was used to simulate and animate muscle action by using an optimization technique combining observed muscular motions with physiological constraints to estimate muscle contraction forces during wheelchair propulsion. The CGMM provides results that satisfactorily match the predictions of previous work, disregarding minor differences which presumably result from differing experimental conditions, measurement technologies and subjects. Specifically, the CGMM shows that the supraspinatus, infraspinatus, anterior deltoid, pectoralis major and biceps long head are the prime movers during the propulsion phase. The middle and posterior deltoid and supraspinatus muscles are responsible for arm return during the recovery phase. CGMM modelling shows that the rotator cuff and pectoralis major play an important role during wheelchair propulsion, confirming the known risk of injury for these muscles during wheelchair propulsion. The CGMM successfully transforms six-camera video motion capture data into a technically useful and visually interesting animated video model of the shoulder musculoskeletal system. The CGMM further yields accurate estimates of muscular forces during motion, indicating that this prototype modelling and analysis technique will aid in study, analysis and therapy of the mechanics and underlying pathomechanics involved in various musculoskeletal overuse syndromes.

  9. Recognition of Drainage Tunnels during Glacier Lake Outburst Events from Terrestrial Image Sequences

    NASA Astrophysics Data System (ADS)

    Schwalbe, E.; Koschitzki, R.; Maas, H.-G.

    2016-06-01

    In recent years, many glaciers all over the world have been distinctly retreating and thinning. One of the consequences is an increase in so-called glacier lake outburst flood (GLOF) events. The mechanisms ruling such GLOF events are not yet fully understood by glaciologists. Thus, there is a demand for data and measurements that can help to understand and model the phenomena. A main issue is obtaining information about the location and formation of subglacial channels through which some lakes, dammed by a glacier, start to drain. The paper will show how photogrammetric image sequence analysis can be used to collect such data. For the purpose of detecting a subglacial tunnel, a camera has been installed in a pilot study to observe the area of the Colonia Glacier (Northern Patagonian Ice Field) where it dams Lake Cachet II. To verify the hypothesis that the course of the subglacial tunnel is indicated by irregular surface motion patterns during its collapse, the camera acquired image sequences of the glacier surface during several GLOF events. Applying tracking techniques to these image sequences, surface feature motion trajectories could be obtained for a dense raster of glacier points. Since only a single camera has been used for image sequence acquisition, depth information is required to scale the trajectories. Thus, for scaling and georeferencing of the measurements, a GPS-supported photogrammetric network has been measured. The obtained motion fields of the Colonia Glacier deliver information about the glacier's behaviour before, during and after a GLOF event. If the daily vertical motion of the glacier is integrated over a period of several days and projected into a satellite image, the location and shape of the drainage channel underneath the glacier become visible. The high temporal resolution of the motion fields may also allow for an analysis of the tunnel's dynamics in comparison to the changing water level of the lake.

  10. EVA Robotic Assistant Project: Platform Attitude Prediction

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin M.

    2003-01-01

    The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster-than-walking speed outside, but it has no suspension. Its wheels, with inflated rubber tires, are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame, or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The resulting motion of the robot-mounted stereo camera pair as the robot drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement. This has been accomplished in two ways: first, a standalone head stabilizer has been implemented, and second, the estimates have been used to influence the search algorithm of the stereo tracking algorithm. Studies of the image motion of a tracked object indicate that the image motion is suppressed while the robot crosses rough terrain. This work expands the range of speed and surface roughness over which the robot should be able to track and follow a field geologist and accept arm gesture commands from the geologist.
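
    The report does not give the filter equations; a minimal single-axis stand-in for the idea, fusing a rate gyro with an accelerometer-derived angle through a two-state Kalman filter (state = [angle, gyro bias]), might look as follows. All noise parameters are assumed placeholders.

    ```python
    import numpy as np

    class PitchKalman:
        """Minimal 1D Kalman filter: predict with the bias-corrected
        gyro rate, correct with the drift-free accelerometer angle."""

        def __init__(self, q_angle=1e-4, q_bias=1e-6, r_accel=1e-2):
            self.x = np.zeros(2)              # [angle (rad), bias (rad/s)]
            self.P = np.eye(2)
            self.Q = np.diag([q_angle, q_bias])
            self.R = r_accel

        def step(self, gyro_rate, accel_angle, dt):
            # Predict: integrate the bias-corrected gyro rate.
            F = np.array([[1.0, -dt], [0.0, 1.0]])
            self.x = np.array([self.x[0] + dt * (gyro_rate - self.x[1]),
                               self.x[1]])
            self.P = F @ self.P @ F.T + self.Q
            # Update with the noisy but unbiased accelerometer angle.
            H = np.array([1.0, 0.0])
            S = H @ self.P @ H + self.R
            K = self.P @ H / S
            self.x = self.x + K * (accel_angle - self.x[0])
            self.P = (np.eye(2) - np.outer(K, H)) @ self.P
            return self.x[0]                  # filtered pitch estimate
    ```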

  11. Operation of the University of Hawaii 2.2M telescope on Mauna Kea

    NASA Technical Reports Server (NTRS)

    Hall, Donald N. B.

    1991-01-01

    NASA's planetary astronomy program provides part of the funding for the 2.2 meter telescope. The parameters for time on the telescope are laid out. A major instrumental highlight has been the commissioning of a 256 x 256 near-infrared camera which uses a Rockwell NICMOS-3 array. At the f/10 focus, image scales of 0.37 and 0.75 arcsec/pixel are available. A new, high quantum efficiency Tektronix 1024 x 1024 CCD saw first light on the telescope in 1991, and was available regularly from April 1991. Data from both of these detectors are transmitted directly to the Sun workstation for immediate analysis by the observers. The autoguider software was enhanced to permit guided tracking of objects having nonsidereal motions (i.e., solar system objects).

  12. Driven motion and instability of an atmospheric pressure arc

    NASA Astrophysics Data System (ADS)

    Karasik, Max

    Atmospheric pressure arcs are used extensively in applications such as welding and metallurgy. However, comparatively little is known of the physics of such arcs in external magnetic fields and the mechanisms of the instabilities present. In order to address questions of equilibrium and stability of such arcs, an experimental arc furnace is constructed and operated in air with a graphite cathode and steel anode at currents of 100--250 A. The arc is diagnosed with a gated intensified camera and a collimated photodiode array, as well as fast voltage and current probes. Experiments are carried out on the response of the arc to applied transverse DC and AC (up to ≈1 kHz) magnetic fields. The arc is found to deflect parabolically for a DC field and to assume a growing sinusoidal structure for an AC field. A simple analytic two-parameter fluid model of the arc dynamics is derived, in which the inertia of the magnetically pumped cathode jet balances the applied J×B force. Time variation of the applied field allows evaluation of the parameters individually. A fit of the model to the experimental data gives a value for the average jet speed an order of magnitude below Maecker's estimate of the maximum jet speed. A spontaneous instability of the same arc is investigated experimentally and modeled analytically. The presence of the instability is found to depend critically on cathode dimensions. For cylindrical cathodes, instability occurs only for a narrow range of cathode diameters. Cathode spot motion is proposed as the mechanism of the instability. A simple fluid model combining the effect of the cathode spot motion and the inertia of the cathode jet successfully describes the arc shape during low-amplitude instability. The amplitude of cathode spot motion required by the model is in agreement with measurements. The average jet velocity required is approximately equal to that inferred from the transverse magnetic field experiments. Reasons for spot motion and for the cathode geometry dependence are discussed. An exploratory study of the instability of the arc in an applied axial magnetic field is also described. The applicability of the results of the thesis to an industrial steelmaking furnace is considered.

  13. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate interior orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, a support vector machine (SVM) with a radial basis function kernel is employed to model the distortions measured for an Olympus E10 camera system with an aspherical zoom lens, which are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach in correcting image coordinates by modelling the total distortions in the on-the-job calibration process using a limited number of images.
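
    A hedged sketch of the idea using scikit-learn's SVR with an RBF kernel: one regressor per image coordinate learns the distortion field from calibration residuals, and predicted shifts are subtracted at correction time. The synthetic radial-like training data below merely stand in for the measured E10 distortions.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(200, 2))   # normalized image coords
    r2 = (xy ** 2).sum(axis=1)
    dx = 0.05 * xy[:, 0] * r2 + 0.001 * rng.standard_normal(200)

    # RBF-kernel SVR models the x-component of the distortion field;
    # an analogous model would be fit for the y-component.
    model_dx = SVR(kernel="rbf", C=10.0, epsilon=1e-4).fit(xy, dx)
    corrected_x = xy[:, 0] - model_dx.predict(xy)
    ```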

  14. Computer aided photographic engineering

    NASA Technical Reports Server (NTRS)

    Hixson, Jeffrey A.; Rieckhoff, Tom

    1988-01-01

    High speed photography is an excellent source of engineering data but only provides a two-dimensional representation of a three-dimensional event. Multiple cameras can be used to provide data for the third dimension but camera locations are not always available. A solution to this problem is to overlay three-dimensional CAD/CAM models of the hardware being tested onto a film or photographic image, allowing the engineer to measure surface distances, relative motions between components, and surface variations.

  15. Films for Learning: Some Observations on the Present, Past, and Future Role of the Educational Motion Picture.

    ERIC Educational Resources Information Center

    Flory, John

    Although there have been great developments in motion picture technology, such as super 8mm film, magnetic sound, low cost color film, simpler projectors and movie cameras, and cartridge-loading projectors, there is still only limited use of audiovisual materials in the classroom today. This paper suggests some of the possible reasons for the lack…

  16. Uav Cameras: Overview and Geometric Calibration Benchmark

    NASA Astrophysics Data System (ADS)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven from repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. In such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  17. Investigation of the influence of spatial degrees of freedom on thermal infrared measurement

    NASA Astrophysics Data System (ADS)

    Fleuret, Julien R.; Yousefi, Bardia; Lei, Lei; Djupkep Dizeu, Frank Billy; Zhang, Hai; Sfarra, Stefano; Ouellet, Denis; Maldague, Xavier P. V.

    2017-05-01

    Long Wavelength Infrared (LWIR) cameras can provide a representation of a part of the light spectrum that is sensitive to temperature. These cameras, also named Thermal Infrared (TIR) cameras, are powerful tools to detect features that cannot be seen by other imaging technologies. For instance, they enable the detection of defects in materials, of fever and anxiety in mammals, and of many other features for numerous applications. However, the accuracy of thermal cameras can be affected by many parameters; the most critical involves the relative position of the camera with respect to the object of interest. Several models have been proposed in order to minimize the influence of some of these parameters, but they are mostly related to specific applications. Because such models are based on prior information related to context, their applicability to other contexts cannot be easily assessed. The few remaining models are mostly associated with a specific device. In this paper the authors study the influence of the camera position on the measurement accuracy. Modeling the position of the camera relative to the object of interest depends on many parameters. In order to propose a study which is as accurate as possible, the position of the camera is represented as a five-dimensional model. The aim of this study is to investigate and attempt to introduce a model which is as independent of the device as possible.

  18. SU-E-T-570: New Quality Assurance Method Using Motion Tracking for 6D Robotic Couches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheon, W; Cho, J; Ahn, S

    Purpose: To accommodate geometrically accurate patient positioning, a robotic couch that is capable of 6 degrees of freedom has been introduced. However, conventional couch QA methods are not sufficient to enable the necessary accuracy of tests. Therefore, we have developed a camera-based motion detection and geometry calibration system for couch QA. Methods: Employing a Visual-Tracking System (VTS, BonitaB10, Vicon, UK) which tracks infrared reflective (IR) markers, camera calibration was conducted using a 5.7 × 5.7 × 5.7 cm³ cube with IR markers attached at each corner. After positioning a robotic couch at the origin with the cube on the table top, 3D coordinates of the cube's eight corners were acquired by VTS in the VTS coordinate system. Next, positions in reference coordinates (room coordinates) were assigned using the known relation between each point. Finally, camera calibration was completed by finding a transformation matrix between the VTS and reference coordinate systems and applying a pseudo-inverse matrix method. After the calibration, the accuracy of linear and rotational motions as well as couch sagging could be measured by analyzing the continuously acquired data of the cube while the couch moves to a designated position. The accuracy of the developed software was verified through comparison with measurement data from a laser tracker (FARO, Lake Mary, USA) for a robotic couch installed for proton therapy. Results: The VTS system could track couch motion accurately and measure positions in room coordinates. The VTS measurements and laser tracker data agreed within 1% for linear and rotational motions. Because the program analyzes motion in three dimensions, it can also compute couch sagging. Conclusion: The developed QA system provides submillimeter/degree accuracy, which fulfills high-end couch QA requirements. This work was supported by the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning. (2013M2A2A7043507 and 2012M3A9B6055201)
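
    The calibration step described above (a transformation obtained with a pseudo-inverse) can be sketched as a least-squares affine fit between corresponding points; a strictly rigid fit would instead use an orthogonal Procrustes solution. Array shapes and names are assumptions.

    ```python
    import numpy as np

    def fit_transform(src, dst):
        """Least-squares 3x4 transform mapping VTS coordinates to room
        coordinates via the Moore-Penrose pseudoinverse.

        src, dst: (n, 3) corresponding points (e.g., the cube's eight
        corners in both frames, n >= 4), with dst ~= T @ [src; 1].
        """
        src_h = np.hstack([src, np.ones((src.shape[0], 1))])  # (n, 4)
        return dst.T @ np.linalg.pinv(src_h.T)                # (3, 4)
    ```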

  19. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogunmolu, O; Gans, N; Jiang, S

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling the flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e., regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.

  20. Potential for application of an acoustic camera in particle tracking velocimetry.

    PubMed

    Wu, Fu-Chun; Shao, Yun-Chuan; Wang, Chi-Kuei; Liou, Jim

    2008-11-01

    We explored the potential and limitations of applying an acoustic camera as the imaging instrument for particle tracking velocimetry. The strength of the acoustic camera is its usability in low-visibility environments where conventional optical cameras are ineffective, while its applicability is limited by lower temporal and spatial resolutions. We conducted a series of experiments in which acoustic and optical cameras were used to simultaneously image the rotational motion of tracer particles, allowing for a comparison of the acoustic- and optical-based velocities. The results reveal that the greater fluctuations associated with the acoustic-based velocities are primarily attributed to the lower temporal resolution. The positive and negative biases induced by the lower spatial resolution are balanced, with the positive ones greater in magnitude but the negative ones greater in quantity. These biases decrease as the mean particle velocity increases and approach a minimum once the mean velocity exceeds the threshold value that can be sensed by the acoustic camera.

  1. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
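
    A minimal sketch of the described pipeline: frame differencing isolates the laser spot in each view, and the horizontal disparity of the spot centroids gives range via Z = f * B / d for rectified cameras. The threshold and variable names are assumptions.

    ```python
    import numpy as np

    def laser_range(left_pre, left_lit, right_pre, right_lit,
                    focal_px, baseline_m, threshold=30):
        """Range to the laser spot from a rectified stereo pair.

        *_pre: frames captured before laser illumination; *_lit:
        frames containing the spot. Differencing removes everything
        common to both, leaving the spot, whose centroid column is
        found in each view; Z = f * B / disparity.
        """
        def spot_col(pre, lit):
            mask = np.abs(lit.astype(int) - pre.astype(int)) > threshold
            return np.nonzero(mask)[1].mean()   # centroid column (px)

        d = spot_col(left_pre, left_lit) - spot_col(right_pre, right_lit)
        return focal_px * baseline_m / d
    ```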

  2. Simultaneous tracking and regulation visual servoing of wheeled mobile robots with uncalibrated extrinsic parameters

    NASA Astrophysics Data System (ADS)

    Lu, Qun; Yu, Li; Zhang, Dan; Zhang, Xuebo

    2018-01-01

    This paper presents a global adaptive controller that simultaneously solves tracking and regulation for wheeled mobile robots with unknown depth and uncalibrated camera-to-robot extrinsic parameters. The rotational angle and the scaled translation between the current camera frame and the reference camera frame, as well as those between the desired camera frame and the reference camera frame, can be calculated in real time by using pose estimation techniques. A transformed system is first obtained, for which an adaptive controller is then designed to accomplish both tracking and regulation tasks; the controller synthesis is based on Lyapunov's direct method. Finally, the effectiveness of the proposed method is illustrated by a simulation study.

  3. Verification of the test stand for microbolometer camera in accredited laboratory

    NASA Astrophysics Data System (ADS)

    Krupiński, Michal; Bareła, Jaroslaw; Chmielewski, Krzysztof; Kastek, Mariusz

    2017-10-01

    A microbolometer belongs to the group of thermal detectors and consists of a temperature-sensitive resistor which is exposed to the measured radiation flux. A bolometer array employs a pixel structure prepared in silicon technology. The detecting area is defined by the size of a thin membrane, usually made of amorphous silicon (a-Si) or vanadium oxide (VOx). FPAs are made of a multitude of detector elements (for example, 384 × 288), where each individual detector has a different sensitivity and offset due to detector-to-detector spread in the FPA fabrication process; these can additionally change with sensor operating temperature, bias voltage variation, or the temperature of the observed scene. The difference in sensitivity and offset among detectors (which is called non-uniformity), together with the detectors' high sensitivity, produces fixed-pattern noise (FPN) in the produced image. Fixed-pattern noise degrades infrared camera parameters such as sensitivity and NETD. Additionally, it degrades image quality, radiometric accuracy and temperature resolution. In order to objectively compare two infrared cameras, one must measure and compare their parameters on a laboratory test stand. One of the basic parameters for the evaluation of a designed camera is NETD. In order to examine the NETD, parameters such as sensitivity and pixel noise must be measured. To do so, one should register the output signal from the camera in response to the radiation of blackbodies at two different temperatures. The article presents an application and measuring stand for determining the parameters of microbolometer cameras. The prepared measurements were compared with the results of measurements at the Institute of Optoelectronics, MUT, on a METS test stand by CI SYSTEM. This test stand consists of an IR collimator, an IR standard source, a rotating wheel with test patterns, a computer with a video grabber card, and specialized software. The parameters of the thermal cameras were measured according to norms and methods described in the literature.
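
    A minimal sketch of the two-blackbody procedure described above: responsivity is the per-pixel signal change per kelvin between the two blackbody temperatures, temporal noise is the standard deviation over frames, and NETD is their ratio. Array shapes are assumptions.

    ```python
    import numpy as np

    def netd(frames_t1, frames_t2, t1_k, t2_k):
        """NETD from raw frame stacks viewing blackbodies at t1_k, t2_k.

        frames_t1, frames_t2: (n_frames, h, w) camera output stacks.
        Returns the median per-pixel NETD in kelvin.
        """
        s1, s2 = frames_t1.mean(axis=0), frames_t2.mean(axis=0)
        responsivity = (s2 - s1) / (t2_k - t1_k)   # counts per K
        noise = frames_t2.std(axis=0)              # temporal noise (counts)
        return np.median(noise / responsivity)
    ```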

  4. Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. Therefore, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated based on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.

  5. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 mega-pixels and produced high quality images (with significant image processing). In 2005 significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a "pixel war" in which the driving feature of the cameras was the pixel count, such that even moderate-cost (~120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.

  6. Minimum Requirements for Taxicab Security Cameras*

    PubMed Central

    Zeng, Shengke; Amandus, Harlan E.; Amendola, Alfred A.; Newbraugh, Bradley H.; Cantis, Douglas M.; Weaver, Darlene

    2015-01-01

    Problem The homicide rate in the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Methods Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs in various light and cab-seat conditions. Thirteen volunteer photograph evaluators assessed these face photographs and voted for the minimum technical requirements for taxicab security cameras. Results Five worst-case-scenario photographic image quality thresholds were suggested: XGA-format resolution, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, lens distortion of 30%, and a shutter speed of 1/30 second. Practical Applications These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help manufacturers improve camera facial identification capability. PMID:26823992

  7. Stability analysis for a multi-camera photogrammetric system.

    PubMed

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-08-18

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  8. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012

  9. Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation

    NASA Technical Reports Server (NTRS)

    Lee, George

    1992-01-01

    A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moiré, holographic, and heterodyne interferometry techniques were also examined. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.

  10. Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Zuo, Chao; Tao, Tianyang; Hu, Yan; Zhang, Minliang; Chen, Qian; Gu, Guohua

    2018-04-01

    Phase-shifting profilometry (PSP) is a widely used approach to high-accuracy three-dimensional shape measurements. However, when it comes to moving objects, phase errors induced by the movement often result in severe artifacts even though a high-speed camera is in use. From our observations, there are three kinds of motion artifacts: motion ripples, motion-induced phase unwrapping errors, and motion outliers. We present a novel motion-compensated PSP to remove the artifacts for dynamic measurements of rigid objects. The phase error of motion ripples is analyzed for the N-step phase-shifting algorithm and is compensated using the statistical nature of the fringes. The phase unwrapping errors are corrected exploiting adjacent reliable pixels, and the outliers are removed by comparing the original phase map with a smoothed phase map. Compared with the three-step PSP, our method can improve the accuracy by more than 95% for objects in motion.
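
    For reference, the standard N-step phase retrieval the paper builds on can be written compactly; the uniform shift spacing of 2*pi/N and the sign convention below are assumptions.

    ```python
    import numpy as np

    def phase_nstep(frames):
        """Wrapped phase from N-step phase-shifting fringe images.

        frames: (N, h, w) intensities with shifts delta_n = 2*pi*n/N,
        modeled as I_n = A + B*cos(phi + delta_n). Returns phi wrapped
        to (-pi, pi].
        """
        N = frames.shape[0]
        delta = 2 * np.pi * np.arange(N) / N
        num = (frames * np.sin(delta)[:, None, None]).sum(axis=0)
        den = (frames * np.cos(delta)[:, None, None]).sum(axis=0)
        return np.arctan2(-num, den)
    ```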

  11. Automation of the targeting and reflective alignment concept

    NASA Technical Reports Server (NTRS)

    Redfield, Robin C.

    1992-01-01

    The automated alignment system, described herein, employs a reflective, passive (requiring no power) target and includes a PC-based imaging system and one camera mounted on a six degree of freedom robot manipulator. The system detects and corrects for manipulator misalignment in three translational and three rotational directions by employing the Targeting and Reflective Alignment Concept (TRAC), which simplifies alignment by decoupling translational and rotational alignment control. The concept uses information on the camera and the target's relative position based on video feedback from the camera. These relative positions are converted into alignment errors and minimized by motions of the robot. The system is robust to exogenous lighting by virtue of a subtraction algorithm which enables the camera to only see the target. These capabilities are realized with relatively minimal complexity and expense.

  12. Development of real-time motion verification system using in-room optical images for respiratory-gated radiotherapy.

    PubMed

    Park, Yang-Kyun; Son, Tae-geun; Kim, Hwiyoung; Lee, Jaegi; Sung, Wonmo; Kim, Il Han; Lee, Kunwoo; Bang, Young-bong; Ye, Sung-Joon

    2013-09-06

    Phase-based respiratory-gated radiotherapy relies on the reproducibility of patient breathing during the treatment. To monitor the positional reproducibility of patient breathing against a 4D CT simulation, we developed a real-time motion verification system (RMVS) using optical tracking technology. The system in the treatment room was integrated with a real-time position management (RPM) system. To test the system, an anthropomorphic phantom mounted on a motion platform moved with a programmed breathing pattern and then underwent a 4D CT simulation with RPM. The phase-resolved anterior surface lines were extracted from the 4D CT data to constitute 4D reference lines. In the treatment room, three infrared reflective markers were attached to the superior, middle, and inferior parts of the phantom along the body midline, and RMVS tracked those markers using an optical camera system. The real-time phase information extracted from RPM was delivered to RMVS via in-house network software. Thus, the real-time anterior-posterior positions of the markers were simultaneously compared with the 4D reference lines. The technical feasibility of RMVS was evaluated by repeating the above procedure under several scenarios such as the ideal case (with identical motion parameters between simulation and treatment), cycle change, baseline shift, displacement change, and breathing type changes (abdominal or chest breathing). The system's capability for operating under irregular breathing was also investigated using real patient data. The evaluation results showed that RMVS is capable of detecting phase-matching errors between the patient's motion during treatment and the 4D CT simulation. Thus, we concluded that RMVS could be used as an online quality assurance tool for phase-based gating treatments.

  13. Concurrent validity and reliability of using ground reaction force and center of pressure parameters in the determination of leg movement initiation during single leg lift.

    PubMed

    Aldabe, Daniela; de Castro, Marcelo Peduzzi; Milosavljevic, Stephan; Bussey, Melanie Dawn

    2016-09-01

    Evaluation of postural adjustments during single leg lift requires identification of the initiation of heel lift (T1). Measuring T1 by means of a motion analysis system is the most reliable approach. However, this method requires considerable workspace, expensive cameras, and substantial time for data processing and laboratory set-up. The use of ground reaction force (GRF) and centre of pressure (COP) data is an alternative method, as its data processing and set-up are less time consuming. Further, kinetic data are normally collected at sample frequencies above 1000 Hz, whereas kinematic data are commonly captured at 50-200 Hz. This study describes the concurrent validity and reliability of GRF and COP measurements in determining T1, using a motion analysis system as the reference standard. Kinematic and kinetic data during single leg lift were collected from ten participants. GRF and COP data were collected using one and two force plates. The displacement of a single heel marker was captured by ten Vicon© cameras. Kinetic and kinematic data were collected at a sample frequency of 1000 Hz. Data were analysed in two stages: identifying key events in the kinetic data, and assessing the concurrent validity of T1 based on the chosen key events against T1 provided by the kinematic data. The key event presenting the least systematic bias, along with a narrow 95% CI and limits of agreement against the reference standard T1, was the baseline COPy event. The baseline COPy event was obtained using one force plate and presented excellent between-tester reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
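
    A minimal sketch of a baseline-threshold onset detector in the spirit of the baseline COPy key event; the baseline window, threshold multiplier, and hold time are illustrative assumptions, not the paper's exact criteria.

    ```python
    import numpy as np

    def movement_onset(cop_y, fs, baseline_s=0.5, k=3.0, hold_s=0.05):
        """Detect leg-lift initiation from the anteroposterior COP trace.

        Onset = first sample where COPy deviates from its baseline mean by
        more than k standard deviations and stays out for hold_s seconds.
        """
        nb = int(baseline_s * fs)                    # baseline window samples
        mu, sd = cop_y[:nb].mean(), cop_y[:nb].std()
        out = np.abs(cop_y - mu) > k * sd            # beyond-threshold mask
        hold = max(1, int(hold_s * fs))
        for i in range(nb, len(cop_y) - hold):
            if out[i:i + hold].all():
                return i / fs                        # onset time in seconds
        return None                                  # no onset found
    ```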

  14. Self calibrating monocular camera measurement of traffic parameters.

    DOT National Transportation Integrated Search

    2009-12-01

    This proposed project will extend the work of previous projects that have developed algorithms and software to measure traffic speed under adverse conditions using un-calibrated cameras. The present implementation uses the WSDOT CCTV cameras moun...

  15. Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    The emergence of spectral pixel-level color filters has enabled the development of hyperspectral Full Motion Video (FMV) sensors operating in the visible (EO) and infrared (IR) wavelengths. This new class of hyperspectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras can classify materials as well as detect and track spectral signatures continuously in real time, while simultaneously providing an operator the benefit of enhanced-discrimination color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g. detection or classification. The second is presentation of the video to an operator, offering the best display of the content for the task at hand, e.g. spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has been explored only scarcely. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several concepts of operation, including detection and tracking. We also compare the demosaicking results to those of multi-frame super-resolution, as well as to combined multi-frame and multi-band processing.
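
    For a pixel-level mosaic filter array, the simplest "demosaicking" is to de-interleave the raw frame into per-band subimages before any super-resolution step; a minimal sketch assuming a 3x3 (9-band) repeating pattern and frame dimensions divisible by the pattern size.

    ```python
    import numpy as np

    def demosaic_nearest(raw, pattern=3):
        """Split a pattern x pattern spectral mosaic into per-band subimages.

        Each of the pattern**2 bands is sampled on its own subgrid; recovering
        full spatial resolution requires the super-resolution step discussed
        in the abstract. Assumes H and W are multiples of `pattern`.
        """
        bands = [raw[i::pattern, j::pattern]
                 for i in range(pattern) for j in range(pattern)]
        return np.stack(bands)  # shape (pattern**2, H/pattern, W/pattern)
    ```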

  16. [Method for evaluating the positional accuracy of a six-degrees-of-freedom radiotherapy couch using high definition digital cameras].

    PubMed

    Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori

    2011-01-01

    In this study, we proposed and evaluated a positional accuracy assessment method using two high-resolution digital cameras for add-on six-degrees-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle fixed on the 6D couch were taken by the cameras during couch translations and rotations about each axis. The coordinates of the needle in the pictures were obtained by manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. The proposed method was able to evaluate the accuracy of the motion of the 6D couch alone and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.

  17. Relative effects of posture and activity on human height estimation from surveillance footage.

    PubMed

    Ramstrand, Nerrolyn; Ramstrand, Simon; Brolund, Per; Norell, Kristin; Bergström, Peter

    2011-10-10

    Height estimations based on security camera footage are often requested by law enforcement authorities. While valid and reliable techniques have been established to determine vertical distances from video frames, there is a discrepancy between a person's true static height and their height as measured when assuming different postures or when in motion (e.g., walking). The aim of the research presented in this report was to accurately record the height of subjects as they performed a variety of activities typically observed in security camera footage and compare results to height recorded using a standard height measuring device. Forty-six able bodied adults participated in this study and were recorded using a 3D motion analysis system while performing eight different tasks. Height measurements captured using the 3D motion analysis system were compared to static height measurements in order to determine relative differences. It is anticipated that results presented in this report can be used by forensic image analysis experts as a basis for correcting height estimations of people captured on surveillance footage. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  18. In-vivo confirmation of the use of the dart thrower's motion during activities of daily living.

    PubMed

    Brigstocke, G H O; Hearnden, A; Holt, C; Whatling, G

    2014-05-01

    The dart thrower's motion is a wrist rotation along an oblique plane from radial extension to ulnar flexion. We report an in-vivo study to confirm the use of the dart thrower's motion during activities of daily living. Global wrist motion in ten volunteers was recorded using a three-dimensional optoelectronic motion capture system, in which digital infra-red cameras track the movement of retro-reflective marker clusters. Global wrist motion has been approximated to the dart thrower's motion when hammering a nail, throwing a ball, drinking from a glass, pouring from a jug and twisting the lid of a jar, but not when combing hair or manipulating buttons. The dart thrower's motion is the plane of global wrist motion used during most activities of daily living. Arthrodesis of the radiocarpal joint instead of the midcarpal joint will allow better wrist function during most activities of daily living by preserving the dart thrower's motion.

  19. Potential Utility of a 4K Consumer Camera for Surgical Education in Ophthalmology.

    PubMed

    Ichihashi, Tsunetomo; Hirabayashi, Yutaka; Nagahara, Miyuki

    2017-01-01

    Purpose. We evaluated the potential utility of a cost-effective 4K consumer video system for surgical education in ophthalmology. Setting. Tokai University Hachioji Hospital, Tokyo, Japan. Design. Experimental study. Methods. Eyes that underwent cataract surgery, glaucoma surgery, vitreoretinal surgery, or oculoplastic surgery between February 2016 and April 2016 were recorded with 17.2 million pixels using a high-definition digital video camera (LUMIX DMC-GH4, Panasonic, Japan) and with 0.41 million pixels using a conventional analog video camera (MKC-501, Ikegami, Japan). Motion pictures of two cases of each surgery type were evaluated and classified as having poor, normal, or excellent visibility. Results. The 4K video system was easily installed by reading the instructions, without technical expertise. The detail of the surgical picture in the 4K system was greatly improved over that of the conventional pictures, and the visual effect for surgical education was significantly improved. Motion pictures were stored for approximately 11 h on a 512 GB SD memory card. The total price of this system was USD 8000, which is very low compared with a commercial system. Conclusion. This 4K consumer camera was able to record and play back the surgical field with high-definition visibility on a 4K monitor and is a low-cost, high-performing alternative for surgical facilities.

  20. 33 CFR 117.829 - Northeast Cape Fear River.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... maintenance authorized in accordance with Subpart A of this part. (3) Trains shall be controlled so that any... of failure or obstruction of the motion sensors, laser scanners, video cameras or marine-radio...

  1. 33 CFR 117.829 - Northeast Cape Fear River.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... maintenance authorized in accordance with Subpart A of this part. (3) Trains shall be controlled so that any... of failure or obstruction of the motion sensors, laser scanners, video cameras or marine-radio...

  2. Validation of Attitude and Heading Reference System and Microsoft Kinect for Continuous Measurement of Cervical Range of Motion Compared to the Optical Motion Capture System.

    PubMed

    Song, Young Seop; Yang, Kyung Yong; Youn, Kibum; Yoon, Chiyul; Yeom, Jiwoon; Hwang, Hyeoncheol; Lee, Jehee; Kim, Keewon

    2016-08-01

    To compare an optical motion capture system (MoCap), an attitude and heading reference system (AHRS) sensor, and Microsoft Kinect for the continuous measurement of cervical range of motion (ROM). Fifteen healthy adult subjects were asked to sit in front of the Kinect camera with optical markers and AHRS sensors attached to the body, in a room equipped with optical motion capture cameras. Subjects were instructed to independently perform axial rotation followed by flexion/extension and lateral bending. Each movement was repeated 5 times while being measured simultaneously with the 3 devices. Using the MoCap system as the gold standard, the validity of AHRS and Kinect for measuring cervical ROM was assessed by calculating correlation coefficients and Bland-Altman plots with 95% limits of agreement (LoA). MoCap and AHRS showed fair agreement (95% LoA<10°), while MoCap and Kinect showed less favorable agreement (95% LoA>10°) for measuring ROM in all directions. Intraclass correlation coefficient (ICC) values between MoCap and AHRS in the -40° to 40° range were excellent for flexion/extension and lateral bending (ICC>0.9) and fair for axial rotation (ICC>0.8). ICC values between MoCap and the Kinect system in the -40° to 40° range were fair for all motions. Our study showed the feasibility of using AHRS to measure cervical ROM during continuous motion with an acceptable range of error. AHRS and the Kinect system can also be used for continuous monitoring of flexion/extension and lateral bending in the ordinary range.
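
    A minimal sketch of the Bland-Altman agreement computation used here (bias and 95% limits of agreement between two device readings); the 1.96 factor assumes normally distributed differences.

    ```python
    import numpy as np

    def bland_altman(reference, test):
        """Bias and 95% limits of agreement between two measurement devices."""
        d = np.asarray(test, float) - np.asarray(reference, float)
        bias = d.mean()
        half_width = 1.96 * d.std(ddof=1)       # 95% LoA half-width
        return bias, (bias - half_width, bias + half_width)

    # e.g. paired ROM readings (degrees) from MoCap vs. a second device
    print(bland_altman([40.1, 38.5, 41.0, 39.7], [41.0, 39.2, 42.3, 40.1]))
    ```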

  3. Robotic Surgery

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The Automated Endoscopic System for Optimal Positioning, or AESOP, was developed by Computer Motion, Inc. under an SBIR contract from the Jet Propulsion Lab. AESOP is a robotic endoscopic positioning system used to control the motion of a camera during endoscopic surgery. The camera, which is mounted at the end of a robotic arm, previously had to be held in place by the surgical staff. With AESOP the robotic arm can make more precise and consistent movements, and it is voice controlled by the surgeon. It is hoped that this technology can be used in space repair missions which require precision beyond human dexterity. A new generation of the same technology, the ZEUS Robotic Surgical System, can make endoscopic procedures even more successful. ZEUS allows the surgeon to control various instruments held by its robotic arms, providing the precision the procedure requires.

  4. Digital amateur observations of Venus at 0.9μm

    NASA Astrophysics Data System (ADS)

    Kardasis, E.

    2017-09-01

    Venus's atmosphere is extremely dynamic, though it is very difficult to observe any features on it in the visible and even in the near-IR range. Digital observations with planetary cameras in recent years routinely produce high-quality images, especially in the near-infrared (0.7-1 μm), since IR wavelengths are less influenced by Earth's atmosphere and Venus's atmosphere is partially transparent in this spectral region. Continuous observations over a few hours can track dark atmospheric features on the dayside and determine their motion. In this work we present such observations and some dark-feature motion measurements at 0.9 μm. Ground-based observations at this wavelength are rare and are complementary to in situ observations by JAXA's Akatsuki orbiter, which also studies the atmospheric dynamics of Venus in this band with its IR1 camera.

  5. Patient positioning in radiotherapy based on surface imaging using time of flight cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilles, M., E-mail: marlene.gilles@univ-brest.fr

    2016-08-15

    Purpose: To evaluate the patient positioning accuracy in radiotherapy using a stereo-time of flight (ToF)-camera system. Methods: A system using two ToF cameras was used to scan the surface of the patients in order to position them daily on the treatment couch. The obtained point clouds were registered to (a) detect translations applied to the table (intrafraction motion) and (b) predict the displacement to be applied in order to place the patient in its reference position (interfraction motion). The measures provided by this system were compared to the effectively applied translations. The authors analyzed 150 fractions including lung, pelvis/prostate, and head and neck cancer patients. Results: The authors obtained small absolute errors for displacement detection: 0.8 ± 0.7, 0.8 ± 0.7, and 0.7 ± 0.6 mm along the vertical, longitudinal, and lateral axes, respectively, and 0.8 ± 0.7 mm for the total norm displacement. Lung cancer patients presented the largest errors with a respective mean of 1.1 ± 0.9, 0.9 ± 0.9, and 0.8 ± 0.7 mm. Conclusions: The proposed stereo-ToF system allows for sufficient accuracy and faster patient repositioning in radiotherapy. Its capability to track the complete patient surface in real time could allow, in the future, not only for an accurate positioning but also a real time tracking of any patient intrafraction motion (translation, involuntary, and breathing).

  6. Patient positioning in radiotherapy based on surface imaging using time of flight cameras.

    PubMed

    Gilles, M; Fayad, H; Miglierini, P; Clement, J F; Scheib, S; Cozzi, L; Bert, J; Boussion, N; Schick, U; Pradier, O; Visvikis, D

    2016-08-01

    To evaluate the patient positioning accuracy in radiotherapy using a stereo-time of flight (ToF)-camera system. A system using two ToF cameras was used to scan the surface of the patients in order to position them daily on the treatment couch. The obtained point clouds were registered to (a) detect translations applied to the table (intrafraction motion) and (b) predict the displacement to be applied in order to place the patient in its reference position (interfraction motion). The measures provided by this system were compared to the effectively applied translations. The authors analyzed 150 fractions including lung, pelvis/prostate, and head and neck cancer patients. The authors obtained small absolute errors for displacement detection: 0.8 ± 0.7, 0.8 ± 0.7, and 0.7 ± 0.6 mm along the vertical, longitudinal, and lateral axes, respectively, and 0.8 ± 0.7 mm for the total norm displacement. Lung cancer patients presented the largest errors with a respective mean of 1.1 ± 0.9, 0.9 ± 0.9, and 0.8 ± 0.7 mm. The proposed stereo-ToF system allows for sufficient accuracy and faster patient repositioning in radiotherapy. Its capability to track the complete patient surface in real time could allow, in the future, not only for an accurate positioning but also a real time tracking of any patient intrafraction motion (translation, involuntary, and breathing).
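
    For a rigid couch shift, the translation between the daily and reference surface clouds has a closed-form least-squares solution; a minimal correspondence-free sketch, assuming the same surface patch is visible in both scans (not necessarily the registration the authors used).

    ```python
    import numpy as np

    def table_translation(reference_cloud, daily_cloud):
        """Least-squares translation aligning the daily surface to the reference.

        For a pure translation the optimal shift is the difference of centroids.
        Clouds are (N, 3) arrays in couch coordinates; centroids are a reasonable
        correspondence-free surrogate when the same surface is seen in both scans.
        """
        return reference_cloud.mean(axis=0) - daily_cloud.mean(axis=0)
    ```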

  7. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2016-10-01

    study of the resulting videos led to a new prosthetics-use taxonomy that is generalizable to various levels of amputation and terminal devices. The...taxonomy was applied to classification of the recorded videos via custom tagging software with midi controller interface. The software creates...a motion capture studio and video cameras to record accurate and detailed upper body motion during a series of standardized tasks. These tasks are

  8. Power estimation of martial arts movement with different physical, mood, and behavior using motion capture camera

    NASA Astrophysics Data System (ADS)

    Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir; Azraai, Nur Zaidi

    2017-07-01

    In the Malay world, traditional spirit rituals are used for healing practices or in everyday life. Malay martial arts (silat) are no exception: some branches of silat include spirit rituals that are said to help practitioners in combat. In this paper, no ritual is used; instead, we apply a medicinal treatment and change the environment while the performers execute their moves. Two performers (fighters) were selected, one with experience in martial arts training and one without. Motion capture (MOCAP) cameras were used to observe and analyze the movements: eight cameras were placed in the MOCAP room, two on each wall facing the center of the room, so that markers are viewed from every angle and the loss of marker detection on a performer's limbs is prevented. Passive markers were used, which reflect infrared light back to the camera sensor; the infrared is generated by a source around the camera lens. A 60 kg punching bag, hung from an iron bar, served as the target for the performer's punches. Markers were also placed on the punching bag so that its swing when hit could be measured. Each performer performed two moves with the same position and posture under each of three conditions, with the environment changed without the performer's knowledge: the first two punches were thrown in a normal environment, in the second part positive music was played to change the performer's mood, and in the third part a medicine (cream/oil) that makes the skin feel slightly hot was applied to the performer's skin. The process was repeated with the inexperienced performer. The marker positions were analyzed with the Cortex Motion Analysis software, from which the kinetics and kinematics of the performers were estimated. The results show an increase in kinetics in every part due to the environmental changes, and different results for the two performers.
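
    As an illustration of the kind of post-processing described (not the Cortex pipeline itself), marker velocities and translational kinetic energy can be derived from tracked positions by finite differences; the effective segment mass is an assumed input.

    ```python
    import numpy as np

    def segment_kinetics(positions, fs, mass):
        """Velocity and translational kinetic energy of a tracked marker/segment.

        positions: (T, 3) marker trajectory in metres, sampled at fs Hz.
        mass:      effective mass of the segment in kg (assumed known).
        """
        v = np.gradient(positions, 1.0 / fs, axis=0)  # m/s, central differences
        speed = np.linalg.norm(v, axis=1)
        ke = 0.5 * mass * speed ** 2                  # J, translational KE only
        return v, ke
    ```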

  9. Analysis of Brown camera distortion model

    NASA Astrophysics Data System (ADS)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

    Contemporary image acquisition devices introduce optical distortion into the image. This results in pixel displacement and therefore needs to be compensated for many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze the orthogonality of its decentering distortion component with regard to radius. We also report experiments with the camera calibration algorithm included in the OpenCV library, evaluating in particular the stability of distortion parameter estimation.
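
    For concreteness, a sketch of the Brown model's forward mapping in the convention used by OpenCV's calibration routines (three radial coefficients k1..k3 and two decentering/tangential coefficients p1, p2), applied to normalized undistorted coordinates.

    ```python
    import numpy as np

    def brown_distort(x, y, k1, k2, k3, p1, p2):
        """Apply Brown radial + decentering distortion to normalized coords."""
        r2 = x * x + y * y                                  # squared radius
        radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3  # radial factor
        xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        return xd, yd

    # e.g. distort a point near the image corner (normalized coordinates)
    print(brown_distort(0.4, 0.3, -0.2, 0.05, 0.0, 1e-3, -1e-3))
    ```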

  10. Gait in adolescent idiopathic scoliosis: kinematics and electromyographic analysis

    PubMed Central

    Banse, X.; Mousny, M.; Detrembleur, C.

    2009-01-01

    Adolescent idiopathic scoliosis (AIS) is a progressive growth disease that affects spinal anatomy, mobility, and left-right trunk symmetry. Consequently, AIS can modify human locomotion. Very few studies have investigated a simple activity like walking in a cohort of well-defined untreated patients with scoliosis. The first goal of this study is to evaluate the effects of scoliosis and scoliosis severity on kinematic and electromyographic (EMG) gait variables compared to an able-bodied population. The second goal is to look for any asymmetry in these parameters during walking. Thirteen healthy girls and 41 females with untreated AIS, with left thoracolumbar or lumbar primary structural curves were assessed. AIS patients were divided into three clinical subgroups (group 1 < 20°, group 2 between 20 and 40°, and group 3 > 40°). Gait analysis included synchronous bilateral kinematic and EMG measurements. The subjects walked on a treadmill at 4 km/h (comfortable speed). The tridimensional (3D) shoulder, pelvis, and lower limb motions were measured using 22 reflective markers tracked by four infrared cameras. The EMG timing activity was measured using bipolar surface electrodes on quadratus lumborum, erector spinae, gluteus medius, rectus femoris, semitendinosus, tibialis anterior, and gastrocnemius muscles. Statistical comparisons (ANOVA) were performed across groups and sides for kinematic and EMG parameters. The step length was reduced in AIS compared to normal subjects (7% less). Frontal shoulder, pelvis, and hip motion and transversal hip motion were reduced in scoliosis patients (respectively, 21, 27, 28, and 22% less). The EMG recording during walking showed that the quadratus lumborum, erector spinae, gluteus medius, and semitendinosus muscles contracted during a longer part of the stride in scoliotic patients (46% of the stride) compared with normal subjects (35% of the stride). There was no significant difference between scoliosis groups 1, 2, and 3 for any of the kinematic and EMG parameters, meaning that severe scoliosis was not associated with increased differences in gait parameters compared to mild scoliosis. Scoliosis was not associated with any kinematic or EMG left–right asymmetry. In conclusion, scoliosis patients showed significant but slight modifications in gait, even in cases of mild scoliosis. With the naked eye, one could not see any difference from controls, but with powerful gait analysis technology, the pelvic frontal motion (right–left tilting) was reduced, as was the motion in the hips and shoulder. Surprisingly, no asymmetry was noted but the spine seemed dynamically stiffened by the longer contraction time of major spinal and pelvic muscles. Further studies are needed to evaluate the origin and consequences of these observations. PMID:19224255

  11. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three dimensional viewing and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone therein corresponds to unique camera movement parameters such as speed of movement. Speed parameters include constant, increasing, or decreasing speed. Other parameters include panning, tilting, sliding, raising, or lowering of the cameras. Other user interface devices are provided to improve the three dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone and a remote actuator. The pair of visual display glasses are provided to facilitate three dimensional viewing, hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.

  12. Normal human gait patterns in Peruvian individuals: an exploratory assessment using VICON motion capture system

    NASA Astrophysics Data System (ADS)

    Dongo, R.; Moscoso, M.; Callupe, R.; Pajaya, J.; Elías, D.

    2017-11-01

    Gait analysis is of clear clinical relevance. However, the normal gait patterns reported in the foreign literature may differ from those of local individuals. The aim of this study was to determine the normal gait patterns and parameters of Peruvian individuals in order to provide a local reference for clinical assessment and for diagnosing and treating Peruvian people with lower motor neuron injuries. A descriptive study with 34 subjects was conducted to assess their gait cycle. VICON® cameras were used to capture body movements. For the analyses, we calculated spatiotemporal gait parameters and the average angular displacements of the hip, knee, and ankle joints with their respective 95% confidence intervals. The results showed a gait speed of 0.58 m/s, a cadence of 102.1 steps/min, and angular displacements of the hip, knee, and ankle joints all lower than those described in the literature. In the graphs, the gait cycles were close to those reported in previous studies, but the parameters of speed, cadence, and angular displacement were lower than those in the literature. These results could serve as a better reference pattern in the clinical setting.

  13. Kinematic analysis of head, trunk, and pelvic motion during mirror therapy for stroke patients

    PubMed Central

    Kim, Jinmin; Yi, Jaehoon; Song, Chang-Ho

    2017-01-01

    [Purpose] The purpose of this study was to investigate the mirror therapy (MT) condition by analyzing kinematic parameters according to mirror size and angle. [Subjects and Methods] Three hemiparetic stroke patients and five healthy adults participated in this cross-sectional study. Kinematic parameters during MT were collected over a total of 5 trials for each subject (3 mirror angles × 3 mirror sizes). Center of pressure (COP) excursion data were collected by a force plate, and the other kinematic parameters by infrared cameras. [Results] The larger the mirror size and the smaller the angle, the more the overall dependent variables decreased in all participants. In particular, when the virtual reality reflection equipment (VRRE) was used, the values of flexion and lateral tilt were the closest to the midline of all the independent variables, and showed a tendency to move towards the affected side. Based on these results, MT for stroke patients has the disadvantage of shifting weight and leaning towards the unaffected side during therapy. [Conclusion] It therefore appears more clinically effective to apply VRRE, which compensates for these weaknesses and provides more elaborate visual feedback. PMID:29184290

  14. Kinematic analysis of head, trunk, and pelvic motion during mirror therapy for stroke patients.

    PubMed

    Kim, Jinmin; Yi, Jaehoon; Song, Chang-Ho

    2017-10-01

    [Purpose] The purpose of this study was to investigate the mirror therapy (MT) condition by analyzing kinematic parameters according to mirror size and angle. [Subjects and Methods] Three hemiparetic stroke patients and five healthy adults participated in this cross-sectional study. Kinematic parameters during MT were collected over a total of 5 trials for each subject (3 mirror angles × 3 mirror sizes). Center of pressure (COP) excursion data were collected by a force plate, and the other kinematic parameters by infrared cameras. [Results] The larger the mirror size and the smaller the angle, the more the overall dependent variables decreased in all participants. In particular, when the virtual reality reflection equipment (VRRE) was used, the values of flexion and lateral tilt were the closest to the midline of all the independent variables, and showed a tendency to move towards the affected side. Based on these results, MT for stroke patients has the disadvantage of shifting weight and leaning towards the unaffected side during therapy. [Conclusion] It therefore appears more clinically effective to apply VRRE, which compensates for these weaknesses and provides more elaborate visual feedback.

  15. Are camera surveys useful for assessing recruitment in white-tailed deer?

    Treesearch

    M. Colter Chitwood; Marcus A. Lashley; John C. Kilgo; Michael J. Cherry; L. Mike Conner; Mark Vukovich; H. Scott Ray; Charles Ruth; Robert J. Warren; Christopher S. DePerno; Christopher E. Moorman

    2017-01-01

    Camera surveys commonly are used by managers and hunters to estimate white-tailed deer Odocoileus virginianus density and demographic rates. Though studies have documented biases and inaccuracies in the camera survey methodology, camera traps remain popular due to ease of use, cost-effectiveness, and ability to survey large areas. Because recruitment is a key parameter...

  16. Gunshot-wound dynamics model for John F. Kennedy assassination.

    PubMed

    Nalli, Nicholas R

    2018-04-01

    U.S. President John F. Kennedy was assassinated while riding in an open motorcade by a sniper in Dallas, Texas on 22 November 1963. A civilian bystander, Mr. Abraham Zapruder, filmed the motorcade with an 8-mm home movie camera as it drove through Dealey Plaza, inadvertently recording an ≈8-second sequence of events that included a fatal gunshot wound to the President in the head. The accompanying backward motion of the President's head after impact appeared to support later "conspiracy theories" because it was claimed that this was proof of a shot from the front (in addition to one from behind). In this paper, simple one-dimensional dynamical models are uniquely applied to study in detail the fatal shot and the motion of the President's head observed in the film. Using known parameters from the crime scene, explicit force calculations are carried out for determining the projectile's retardation during tissue passage along with the resulting transfer of momentum and kinetic energy (KE). The computed instantaneous KE transfer within the soft tissue is found to be consistent with the formation of a temporary cavity associated with the observed explosion of the head, and subsequent quantitative examination of this phenomenon reveals two delayed forces at play in the backward motion of the President following impact. It is therefore found that the observed motions of President Kennedy in the film are physically consistent with a high-speed projectile impact from the rear of the motorcade, these resulting from an instantaneous forward impulse force, followed by delayed rearward recoil and neuromuscular forces.
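
    The 1-D bookkeeping behind such models is elementary; a sketch with purely illustrative numbers (not the crime-scene parameters used in the paper).

    ```python
    def impact_transfer(m_bullet, v_in, v_out, m_head):
        """1-D momentum and kinetic-energy transfer for a through-and-through hit.

        v_out is the residual projectile speed after tissue passage.
        Units: kg, m/s, J. All values illustrative.
        """
        dp = m_bullet * (v_in - v_out)                 # momentum left in the head
        v_head = dp / m_head                           # resulting forward head speed
        dke = 0.5 * m_bullet * (v_in**2 - v_out**2)    # KE deposited in tissue
        return v_head, dke

    # e.g. a 10 g projectile slowing from 600 to 300 m/s in a 5 kg head
    print(impact_transfer(0.010, 600.0, 300.0, 5.0))   # (0.6 m/s, 1350.0 J)
    ```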

  17. Improved calibration-based non-uniformity correction method for uncooled infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao

    2017-08-01

    With the latest improvements in microbolometer focal plane arrays (FPAs), uncooled infrared (IR) cameras are becoming the most widely used devices in thermography, especially in handheld devices. However, the influences of changing ambient conditions and the non-uniform response of the sensors make it difficult to correct the non-uniformity of an uncooled infrared camera. In this paper, based on the infrared radiation characteristics of a TEC-less uncooled infrared camera, a novel model is proposed for calibration-based non-uniformity correction (NUC). In this model, we use the FPA temperature, together with the responses of the microbolometers under different ambient temperatures, to calculate the correction parameters. Based on the proposed model, the correction parameters can be worked out from calibration measurements under controlled ambient conditions with a uniform blackbody. All correction parameters are determined after the calibration process and then used to correct the non-uniformity of the infrared camera in real time. This paper presents the details of the compensation procedure and the performance of the proposed calibration-based non-uniformity correction method. Our method was evaluated on realistic IR images obtained by a 384x288-pixel uncooled long wave infrared (LWIR) camera operated under changing ambient conditions. The results show that our method can exclude the influence of changing ambient conditions and ensure that the infrared camera performs stably.
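
    For reference, a sketch of the classic two-point NUC that calibration-based methods extend; the paper's model additionally folds in the FPA temperature, which this baseline omits.

    ```python
    import numpy as np

    def two_point_nuc(raw, low_ref, high_ref, t_low, t_high):
        """Two-point non-uniformity correction from blackbody calibration.

        low_ref/high_ref: per-pixel mean responses to uniform blackbodies at
        radiance levels t_low < t_high (calibration measurements).
        raw: a raw frame to correct; returns the corrected frame.
        """
        gain = (t_high - t_low) / (high_ref - low_ref)  # per-pixel gain
        offset = t_low - gain * low_ref                 # per-pixel offset
        return gain * raw + offset
    ```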

  18. Real-time marker-free motion capture system using blob feature analysis

    NASA Astrophysics Data System (ADS)

    Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho

    2005-02-01

    This paper presents a real-time marker-free motion capture system which can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The proposed system captures human motion using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. The 3-dimensional positions of the end-effectors are then restored and tracked using a Kalman filter. Finally, the positions of the intermediate joints are reconstructed using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct the motions of many people wearing various clothes in real time.
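
    A minimal constant-velocity Kalman filter of the kind used to track the triangulated end-effector positions; the process and measurement noise levels are illustrative assumptions, not the paper's tuning.

    ```python
    import numpy as np

    class ConstantVelocityKF:
        """Constant-velocity Kalman filter for one 3-D end-effector."""

        def __init__(self, dt, q=1e-2, r=1e-3):
            self.x = np.zeros(6)                     # state [x y z vx vy vz]
            self.P = np.eye(6)
            self.F = np.eye(6)
            self.F[:3, 3:] = dt * np.eye(3)          # position += velocity * dt
            self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
            self.Q = q * np.eye(6)                   # process noise
            self.R = r * np.eye(3)                   # measurement noise

        def step(self, z):
            # predict
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            # update with the triangulated 3-D blob position z
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(6) - K @ self.H) @ self.P
            return self.x[:3]                        # filtered position
    ```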

  19. Conditions that influence the accuracy of anthropometric parameter estimation for human body segments using shape-from-silhouette

    NASA Astrophysics Data System (ADS)

    Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.

    2005-01-01

    Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments dependent on number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with less than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 and more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
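
    A minimal voxel-carving sketch of shape-from-silhouette construction, assuming calibrated 3x4 projection matrices and binary silhouette masks; a voxel survives only if it projects inside every silhouette.

    ```python
    import numpy as np

    def visual_hull(silhouettes, projections, grid):
        """Carve a voxel grid with silhouettes from calibrated cameras.

        silhouettes: list of (H, W) boolean masks
        projections: list of 3x4 camera projection matrices
        grid:        (N, 3) voxel centre coordinates
        Returns a boolean occupancy vector over the N voxels.
        """
        homog = np.hstack([grid, np.ones((len(grid), 1))])
        occupied = np.ones(len(grid), dtype=bool)
        for mask, P in zip(silhouettes, projections):
            uvw = homog @ P.T                              # project voxels
            u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
            v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
            h, w = mask.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            hit = np.zeros(len(grid), dtype=bool)
            hit[inside] = mask[v[inside], u[inside]]       # inside silhouette?
            occupied &= hit                                # carve away misses
        return occupied
    ```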

  20. Enhancing physics demos using iPhone slow motion

    NASA Astrophysics Data System (ADS)

    Lincoln, James

    2017-12-01

    Slow motion video enhances our ability to perceive and experience the physical world. This can help students and teachers especially in cases of fast moving objects or detailed events that happen too quickly for the eye to follow. As often as possible, demonstrations should be performed by the students themselves and luckily many of them will already have this technology in their pockets. The "S" series of iPhone has the slow motion video feature standard, which also includes simultaneous sound recording (somewhat unusual among slow motion cameras). In this article I share some of my experiences using this feature and provide advice on how to successfully use this technology in the classroom.

  1. Facial motion parameter estimation and error criteria in model-based image coding

    NASA Astrophysics Data System (ADS)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has received extensive attention due to its high subjective image quality and low bit-rates. However, the estimation of object motion parameters remains a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and proposes motion parameter error criteria. The facial motion model comprises three parts: the global 3-D rigid motion of the head; non-rigid translational motion in the jaw area; and local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness, and end-nodes outside the blocks of the eyes and mouth, and the number of feature points is adjusted adaptively. The jaw translational motion is tracked through the changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach to estimating motion parameter error based on the quality of the reconstructed image is suggested, with an area error function and a contour transition-turn rate error function used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  2. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
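
    The ranging analysis described above reduces to pinhole triangulation between the detected laser spot and the calibrated reference point; a sketch with an assumed laser-camera baseline and focal length in pixels.

    ```python
    def laser_range(disparity_px, baseline_m, focal_px):
        """Pinhole triangulation: range to the laser spot from its disparity.

        disparity_px: pixel offset between the detected laser spot and the
                      reference point calibrated for a target at infinity.
        baseline_m:   offset between the laser axis and the camera axis.
        focal_px:     camera focal length expressed in pixels.
        """
        return baseline_m * focal_px / disparity_px  # range in metres

    # e.g. a 25-pixel disparity with a 10 cm offset and f = 1200 px
    print(laser_range(25.0, 0.10, 1200.0))  # 4.8 m
    ```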

  3. Development of a 300,000-pixel ultrahigh-speed high-sensitivity CCD

    NASA Astrophysics Data System (ADS)

    Ohtake, H.; Hayashida, T.; Kitamura, K.; Arai, T.; Yonai, J.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Poggemann, D.; Ruckelshausen, A.; van Kuijk, H.; Bosiers, Jan T.

    2006-02-01

    We are developing an ultrahigh-speed, high-sensitivity broadcast camera that is capable of capturing clear, smooth slow-motion videos even where lighting is limited, such as at professional baseball games played at night. In earlier work, we developed an ultrahigh-speed broadcast color camera [1] using three 80,000-pixel ultrahigh-speed, high-sensitivity CCDs [2]. This camera had about ten times the sensitivity of standard high-speed cameras and enabled an entirely new style of presentation for sports broadcasts and science programs. Most notably, increasing the pixel count is crucially important for applying ultrahigh-speed, high-sensitivity CCDs to HDTV broadcasting. This paper provides a summary of our experimental development aimed at improving the resolution of the CCD even further: a new ultrahigh-speed, high-sensitivity CCD that increases the pixel count four-fold to 300,000 pixels.

  4. Expedition One CDR Shepherd with IMAX camera

    NASA Image and Video Library

    2001-02-11

    STS98-E-5164 (11 February 2001) --- Astronaut William M. (Bill) Shepherd documents activity onboard the newly attached Destiny laboratory using an IMAX motion picture camera. The crews of Atlantis and the International Space Station on February 11 opened the Destiny laboratory and spent the first full day of what are planned to be years of work ahead inside the orbiting science and command center. Shepherd opened the Destiny hatch, and he and Shuttle commander Kenneth D. Cockrell ventured inside at 8:38 a.m. (CST). Members of both crews went to work quickly inside the new module, activating air systems, fire extinguishers, alarm systems, computers and internal communications. The crew also continued equipment transfers from the shuttle to the station and filmed several scenes onboard the station using an IMAX camera. This scene was recorded with a digital still camera.

  5. Motion correction for improved estimation of heart rate using a visual spectrum camera

    NASA Astrophysics Data System (ADS)

    Tarbox, Elizabeth A.; Rios, Christian; Kaur, Balvinder; Meyer, Shaun; Hirt, Lauren; Tran, Vy; Scott, Kaitlyn; Ikonomidou, Vasiliki

    2017-05-01

    Heart rate measurement using a visual spectrum recording of the face has drawn interest over the last few years as a technology with various health and security applications. In our previous work, we showed that it is possible to estimate heart beat timing accurately enough to perform heart rate variability analysis for contactless stress detection. However, a major confounding factor in this approach is the presence of movement, which can interfere with the measurements. To mitigate the effects of movement, in this work we propose the use of face detection and tracking based on the Karhunen-Loève algorithm to counteract measurement errors introduced by normal subject motion, as expected during a common seated conversation setting. We analyze the requirements on image acquisition for the algorithm to work, its performance under different ranges of motion and changes of distance to the camera, and the effect on the acquired signal of illumination changes due to different positioning with respect to light sources. Our results suggest that the effect of face tracking on visual-spectrum-based cardiac signal estimation depends on the amplitude of the motion. While for larger-scale, conversation-induced motion it can significantly improve estimation accuracy, for smaller-scale movements, such as those caused by breathing or talking without major movement, errors in facial tracking may interfere with signal estimation. Overall, employing facial tracking is a crucial step in adapting this technology to real-life situations with satisfactory results.
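
    Downstream of whatever tracker is used (including the Karhunen-Loève-based tracking proposed here), the cardiac signal extraction might look like the following sketch; the ROI box format, the green-channel choice, and the 0.7-4 Hz cardiac band are illustrative assumptions, not the authors' pipeline.

    ```python
    import numpy as np

    def heart_rate_bpm(frames, rois, fps):
        """Estimate heart rate from a tracked face region in visible-band video.

        frames: (T, H, W, 3) uint8 video
        rois:   per-frame (x, y, w, h) integer face boxes from the tracker
        Uses the mean green-channel signal and the FFT peak in 0.7-4 Hz.
        """
        sig = np.array([f[y:y + h, x:x + w, 1].mean()
                        for f, (x, y, w, h) in zip(frames, rois)])
        sig -= sig.mean()                          # remove DC component
        spec = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(len(sig), 1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)     # plausible cardiac band
        return 60.0 * freqs[band][np.argmax(spec[band])]
    ```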

  6. A Radiation-Triggered Surveillance System for UF6 Cylinder Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, Michael M.; Myjak, Mitchell J.

    This report provides background information and representative scenarios for testing a prototype radiation-triggered surveillance system at an operating facility that handles uranium hexafluoride (UF6) cylinders. The safeguards objective is to trigger cameras using radiation, or radiation and motion, rather than motion alone, to reduce significantly the number of image files generated by a motion-triggered system. The authors recommend the use of radiation-triggered surveillance at all facilities where cylinder paths are heavily traversed by personnel. The International Atomic Energy Agency (IAEA) has begun using surveillance cameras in the feed and withdrawal areas of gas centrifuge enrichment plants (GCEPs). The cameras generate imagery using elapsed time or motion, but this creates problems in areas occupied 24/7 by personnel. Either motion- or interval-based triggering generates thousands of review files over the course of a month. Since inspectors must review the files to verify operator material-flow declarations, a plethora of files significantly extends the review process. The primary advantage of radiation-triggered surveillance is the opportunity to obtain full-time cylinder throughput verification versus what presently amounts to part-time verification. Cost savings should be substantial, as the IAEA presently uses frequent unannounced inspections to verify cylinder-throughput declarations. The use of radiation-triggered surveillance allows the IAEA to implement less frequent unannounced inspections for the purpose of flow verification, but its principal advantage is significantly shorter and more effective inspector video reviews.
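
    The trigger logic itself is simple; a sketch of a radiation-or-radiation-AND-motion capture decision with a Poisson-statistics threshold (the k-sigma value and the decision structure are illustrative assumptions, not the prototype's design).

    ```python
    def should_capture(count_rate, background_rate, motion,
                       k_sigma=5.0, require_motion=False):
        """Radiation-(and optionally motion-)triggered capture decision.

        Fires when the gross count rate exceeds background by k_sigma standard
        deviations (Poisson statistics, sigma ~ sqrt(background)); optionally
        also requires coincident motion, as in the radiation-AND-motion mode.
        """
        threshold = background_rate + k_sigma * background_rate ** 0.5
        radiation = count_rate > threshold
        return radiation and (motion or not require_motion)
    ```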

  7. The Multi-Parameter Wireless Sensing System (MPwise): Its Description and Application to Earthquake Risk Mitigation.

    PubMed

    Boxberger, Tobias; Fleming, Kevin; Pittore, Massimiliano; Parolai, Stefano; Pilz, Marco; Mikulla, Stefan

    2017-10-20

    The Multi-Parameter Wireless Sensing (MPwise) system is an innovative instrumental design that allows different sensor types to be combined with relatively high-performance computing and communications components. These units, which incorporate off-the-shelf components, can undertake complex information integration and processing tasks at the individual unit or node level (when used in a network), allowing the establishment of networks that are linked by advanced, robust and rapid communications routing and network topologies. The system (and its predecessors) was originally designed for earthquake risk mitigation, including earthquake early warning (EEW), rapid response actions, structural health monitoring, and site-effect characterization. For EEW, MPwise units are capable of on-site, decentralized, independent analysis of the recorded ground motion and based on this, may issue an appropriate warning, either by the unit itself or transmitted throughout a network by dedicated alarming procedures. The multi-sensor capabilities of the system allow it to be instrumented with standard strong- and weak-motion sensors, broadband sensors, MEMS (namely accelerometers), cameras, temperature and humidity sensors, and GNSS receivers. In this work, the MPwise hardware, software and communications schema are described, as well as an overview of its possible applications. While focusing on earthquake risk mitigation actions, the aim in the future is to expand its capabilities towards a more multi-hazard and risk mitigation role. Overall, MPwise offers considerable flexibility and has great potential in contributing to natural hazard risk mitigation.

  8. The Impact of Immediate Verbal Feedback on the Improvement of Swimming Technique

    PubMed Central

    Zatoń, Krystyna; Szczepan, Stefan

    2014-01-01

    The present research attempts to ascertain the impact of immediate verbal feedback (IVF) on modifications of stroke length (SL). In all swimming styles, stroke length is considered an essential kinematic parameter of the swimming cycle. It is important for swimming mechanics and energetics. If SL shortens while the stroke rate (SR) remains unchanged or decreases, the temporal-spatial structure of swimming is considered erroneous. It results in a lower swimming velocity. Our research included 64 subjects, who were divided into two groups: the experimental – E (n=32) and the control – C (n=32) groups. A pretest and a post-test were conducted. The subjects swam the front crawl over the test distance of 25m at Vmax. Only the E group subjects were provided with IVF aiming to increase their SL. All tests were filmed by two cameras (50 samples•s-1). The kinematic parameters of the swimming cycle were analyzed using the SIMI Reality Motion Systems 2D software (SIMI Reality Motion Systems 2D GmbH, Germany). The movement analysis allowed to determine the average horizontal swimming velocity over 15 meters. The repeated measures analysis of variance ANOVA with a post-hoc Tukey range test demonstrated statistically significant (p<0.05) differences between the two groups in terms of SL and swimming velocity. IVF brought about a 6.93% (Simi method) and a 5.09% (Hay method) increase in SL, as well as a 2.92% increase in swimming velocity. PMID:25114741

  9. Estimation of viscoelastic surface wave parameters using a low cost optical deflection method

    NASA Astrophysics Data System (ADS)

    Brum, J.; Balay, G.; Arzúa, A.; Núñez, I.; Negreira, C.

    2010-01-01

    In this work an optical deflection method was used to study surface vibrations created by a low-frequency source placed on the sample's surface. The optical method consists of directing a laser beam perpendicular to the sample's surface (a gelatine-based phantom). A beam-splitter placed between the laser and the sample projects the reflected beam onto a screen. As the surface moves under the action of the low-frequency source, the laser spot on the screen also moves. Recording this movement with a digital camera allows us to reconstruct the surface motion using the law of reflection. If the scattering from the surface is very strong (as in biological tissue), a lens is placed between the surface and the beam-splitter to collect the scattered light. As a validation method, the surface movement was measured with a 10 MHz ultrasonic transducer placed normal to the surface in pulse-echo mode. The optical measurements were in complete agreement with the acoustical measurements. The optical measurement has two advantages over the acoustic one: two-dimensional motion can be recorded, and it is low cost. Since the acquisition was synchronized and the source-to-beam distance is known, measuring the time of flight yields an estimate of the surface wave velocity, from which the elasticity of the sample can be measured. The authors conclude that a reliable, low-cost optical method for obtaining surface wave parameters of biological tissue was developed and successfully validated.
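
    The geometry is compact enough to state in code: by the law of reflection, a surface tilt of theta deflects the reflected beam by 2*theta, and the wave speed follows from the known source-to-beam distance and the measured delay; a small-angle sketch.

    ```python
    import numpy as np

    def surface_slope(spot_displacement, screen_distance):
        """Surface tilt from the reflected-spot displacement on the screen.

        A tilt theta rotates the reflected beam by 2*theta, so for small
        angles theta = s / (2 * L), with s and L in the same units.
        """
        return np.asarray(spot_displacement) / (2.0 * screen_distance)

    def wave_speed(source_beam_distance, time_of_flight):
        """Surface wave speed from the source-to-beam distance and delay."""
        return source_beam_distance / time_of_flight

    print(surface_slope(0.002, 1.0))   # 1 mrad tilt for a 2 mm spot shift at 1 m
    print(wave_speed(0.03, 0.012))     # 2.5 m/s over 3 cm in 12 ms
    ```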

  10. The Multi-Parameter Wireless Sensing System (MPwise): Its Description and Application to Earthquake Risk Mitigation

    PubMed Central

    Boxberger, Tobias; Fleming, Kevin; Pittore, Massimiliano; Parolai, Stefano; Pilz, Marco; Mikulla, Stefan

    2017-01-01

    The Multi-Parameter Wireless Sensing (MPwise) system is an innovative instrumental design that allows different sensor types to be combined with relatively high-performance computing and communications components. These units, which incorporate off-the-shelf components, can undertake complex information integration and processing tasks at the individual unit or node level (when used in a network), allowing the establishment of networks that are linked by advanced, robust and rapid communications routing and network topologies. The system (and its predecessors) was originally designed for earthquake risk mitigation, including earthquake early warning (EEW), rapid response actions, structural health monitoring, and site-effect characterization. For EEW, MPwise units are capable of on-site, decentralized, independent analysis of the recorded ground motion and based on this, may issue an appropriate warning, either by the unit itself or transmitted throughout a network by dedicated alarming procedures. The multi-sensor capabilities of the system allow it to be instrumented with standard strong- and weak-motion sensors, broadband sensors, MEMS (namely accelerometers), cameras, temperature and humidity sensors, and GNSS receivers. In this work, the MPwise hardware, software and communications schema are described, as well as an overview of its possible applications. While focusing on earthquake risk mitigation actions, the aim in the future is to expand its capabilities towards a more multi-hazard and risk mitigation role. Overall, MPwise offers considerable flexibility and has great potential in contributing to natural hazard risk mitigation. PMID:29053608

  11. Application of Optical Measurement Techniques During Stages of Pregnancy: Use of Phantom High Speed Cameras for Digital Image Correlation (D.I.C.) During Baby Kicking and Abdomen Movements

    NASA Technical Reports Server (NTRS)

    Gradl, Paul

    2016-01-01

    Paired images were collected using a projected pattern instead of the standard painting of a speckle pattern on the subject's abdomen. High-speed cameras were post-triggered after movements were felt. Data were collected at 120 fps, limited by the 60 Hz refresh rate of the projector. To ensure that the kick and movement data were real, a background test with no baby movement was conducted (to correct for breathing and body motion).

  12. Toward Active Control of Noise from Hot Supersonic Jets

    DTIC Science & Technology

    2012-05-14

    was developed that would allow for easy data sharing among the research teams. This format includes the acoustic data along with all calibration ... [figure: (a) Far-Field Array Calibration; (b) MHz Rate PIV Camera Setup] ... A plenoptic camera is a similar setup to determine 3-D motion of the flow using a thick light sheet. 2.3 Update on CFD Progress. In the previous interim

  13. Wind Tunnel Tests of the Space Shuttle Foam Insulation with Simulated Debonded Regions

    DTIC Science & Technology

    1981-04-01

    set identification number; gage sensitivity; calculated gage sensitivity S2 = S1 * f(TGE); material specimen identification designation; free-stream... Color motion pictures (2 cameras) and pre- and posttest color stills recorded any changes in the samples. The movie cameras were operated at... The oblique shock wave generated by the wedge reduces the free-stream Mach number to the desired local Mach number. Since the free-stream

  14. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to carry out lunar surface sampling and to return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a rotating camera platform. Optical images of the sampling area can be obtained by PCAM as two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images, from which the lunar terrain can be reconstructed by photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for calculating the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. The observation program and specific solution methods for the installation parameters are then introduced. The parameter solution accuracy is analyzed using observations obtained in the PCAM scientific validation experiment, which was used to test the PCAM detection process, ground data processing methods, product quality, and so on. The analysis shows that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  15. A Third Arm for the Surgeon

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In laparoscopic surgery, tiny incisions are made in the patient's body and a laparoscope (an optical tube with a camera at the end) is inserted. The camera's image is projected onto two video screens, whose views guide the surgeon through the procedure. AESOP, a medical robot developed by Computer Motion, Inc. with NASA assistance, eliminates the need for a human assistant to operate the camera. The surgeon uses a foot pedal control to move the device, allowing him to use both hands during the surgery. Miscommunication is avoided; AESOP's movement is smooth and steady, and the memory vision is invaluable. Operations can be completed more quickly, and the patient spends less time under anesthesia. AESOP has been approved by the FDA.

  16. FUNCTIONAL ASSESSMENT OF A CAMERA PHONE-BASED WAYFINDING SYSTEM OPERATED BY BLIND AND VISUALLY IMPAIRED USERS

    PubMed Central

    COUGHLAN, JAMES; MANDUCHI, ROBERTO

    2009-01-01

    We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users. PMID:19960101

  17. Laser differential image-motion monitor for characterization of turbulence during free-space optical communication tests.

    PubMed

    Brown, David M; Juarez, Juan C; Brown, Andrea M

    2013-12-01

    A laser differential image-motion monitor (DIMM) system was designed and constructed as part of a turbulence characterization suite during the DARPA free-space optical experimental network experiment (FOENEX) program. The developed link measurement system measures the atmospheric coherence length (r0), atmospheric scintillation, and power in the bucket for the 1550 nm band. DIMM measurements are made with two separate apertures coupled to a single InGaAs camera. The angle of arrival (AoA) for the wavefront at each aperture can be calculated based on focal spot movements imaged by the camera. By utilizing a single camera for the simultaneous measurement of the focal spots, the correlation of the variance in the AoA allows a straightforward computation of r0 as in traditional DIMM systems. Standard measurements of scintillation and power in the bucket are made with the same apertures by redirecting a percentage of the incoming signals to InGaAs detectors integrated with logarithmic amplifiers for high sensitivity and high dynamic range. By leveraging two, small apertures, the instrument forms a small size and weight configuration for mounting to actively tracking laser communication terminals for characterizing link performance.
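
    The r0 computation sketched above can be made concrete under the standard Sarazin-Roddier DIMM relation between the variance of the differential angle of arrival and the Fried parameter; the relation is textbook, but the aperture, baseline and variance values below are illustrative, not taken from the paper:

      import numpy as np

      def r0_from_dimm(var_long, wavelength, D, d):
          # Fried parameter r0 (m) from the longitudinal variance (rad^2) of the
          # differential angle of arrival between two apertures of diameter D (m)
          # separated by baseline d (m) (Sarazin-Roddier relation, valid d > 2D).
          K = 2.0 * wavelength**2 * (0.179 * D**(-1 / 3) - 0.0968 * d**(-1 / 3))
          return (K / var_long) ** (3.0 / 5.0)

      # Illustrative numbers for a 1550 nm band instrument:
      print(r0_from_dimm(var_long=1e-11, wavelength=1550e-9, D=0.05, d=0.20))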

  18. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.

  19. Video Altimeter and Obstruction Detector for an Aircraft

    NASA Technical Reports Server (NTRS)

    Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.

    2013-01-01

    Video-based altimetric and obstruction detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope-based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then, by use of trigonometric relationships among the aircraft velocity, the attitude of the camera, the angular relative motion, and the altitude, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map, showing locations of potential obstructions. The depth map can be used as a real-time hazard display and/or to update an obstruction database.
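
    In the simplest level-flight, nadir-pointing case, the altimetric computation described above reduces to dividing ground speed by the angular rate inferred from pixel velocity; a minimal sketch under those simplifying assumptions (the actual software also folds in camera attitude):

      def altitude_from_pixel_flow(ground_speed_mps, pixel_velocity_px_s, ifov_rad):
          # For a nadir-pointing camera, ground features sweep past at an angular
          # rate omega = v / h, so h = v / omega; omega comes from the measured
          # pixel velocity times the per-pixel instantaneous field of view.
          omega = pixel_velocity_px_s * ifov_rad  # rad/s
          return ground_speed_mps / omega

      # Illustrative: 50 m/s ground speed, 200 px/s image flow, 0.5 mrad/pixel
      print(altitude_from_pixel_flow(50.0, 200.0, 0.5e-3))  # -> 500.0 m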

  20. Camera-pose estimation via projective Newton optimization on the manifold.

    PubMed

    Sarkis, Michel; Diepold, Klaus

    2012-04-01

    Determining the pose of a moving camera is an important task in computer vision. In this paper, we derive a projective Newton algorithm on the manifold to refine the pose estimate of a camera. The main idea is to benefit from the fact that the 3-D rigid motion is described by the special Euclidean group, which is a Riemannian manifold. The latter is equipped with a tangent space defined by the corresponding Lie algebra. This enables us to compute the optimization direction, i.e., the gradient and the Hessian, at each iteration of the projective Newton scheme on the tangent space of the manifold. Then, the motion is updated by projecting back the variables on the manifold itself. We also derive another version of the algorithm that employs homeomorphic parameterization to the special Euclidean group. We test the algorithm on several simulated and real image data sets. Compared with the standard Newton minimization scheme, we are now able to obtain the full numerical formula of the Hessian with a 60% decrease in computational complexity. Compared with Levenberg-Marquardt, the results obtained are more accurate while having a rather similar complexity.
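
    The core manifold idea, computing a step in the tangent space (the Lie algebra) and projecting it back onto SE(3) through the exponential map, can be sketched as follows; the gradient and Hessian are placeholders, since in the paper they come from the reprojection cost:

      import numpy as np
      from scipy.linalg import expm

      def hat(xi):
          # Map a 6-vector twist (v, w) to its 4x4 se(3) matrix.
          v, w = xi[:3], xi[3:]
          W = np.array([[0.0, -w[2], w[1]],
                        [w[2], 0.0, -w[0]],
                        [-w[1], w[0], 0.0]])
          T = np.zeros((4, 4))
          T[:3, :3] = W
          T[:3, 3] = v
          return T

      pose = np.eye(4)                # current camera pose, an element of SE(3)
      g = np.full(6, 1e-3)            # placeholder gradient in the tangent space
      H = np.eye(6)                   # placeholder Hessian approximation
      step = -np.linalg.solve(H, g)   # Newton step in the Lie algebra
      pose = pose @ expm(hat(step))   # project the update back onto SE(3)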

  1. FUNCTIONAL ASSESSMENT OF A CAMERA PHONE-BASED WAYFINDING SYSTEM OPERATED BY BLIND AND VISUALLY IMPAIRED USERS.

    PubMed

    Coughlan, James; Manduchi, Roberto

    2009-06-01

    We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users.

  2. Relationship between Main Civilian Occupation and Army General Classification Test Standard Score. Part 2

    DTIC Science & Technology

    1945-03-07

    [OCR-damaged occupational listing excerpt with AGCT standard scores:] ... Cameraman, Motion Picture (043) 115; Canvas Cover Repairman (044); Car Carpenter, Railway (046); Car Mechanic ... Film Editor, Motion Picture (131); Filter Operator, Water Supply (083); Fingerprinter (307); Fire Fighter (383) 128 ... Mechanic (322); Registered Nurse (225); Repairman, Camera (042); Repairman, Canvas Cover (044); Repairman, Central Office (095

  3. Scanning and storage of electrophoretic records

    DOEpatents

    McKean, Ronald A.; Stiegman, Jeff

    1990-01-01

    An electrophoretic record that includes at least one gel separation is mounted for motion laterally of the separation record. A light source is positioned to illuminate at least a portion of the record, and a linear array camera is positioned to have a field of view of the illuminated portion of the record and orthogonal to the direction of record motion. The elements of the linear array are scanned at increments of motion of the record across the field of view to develop a series of signals corresponding to intensity of light at each element at each scan increment.

  4. Towards motion insensitive EEG-fMRI: Correcting motion-induced voltages and gradient artefact instability in EEG using an fMRI prospective motion correction (PMC) system.

    PubMed

    Maziero, Danilo; Velasco, Tonicarlo R; Hunt, Nigel; Payne, Edwin; Lemieux, Louis; Salmon, Carlos E G; Carmichael, David W

    2016-09-01

    The simultaneous acquisition of electroencephalography and functional magnetic resonance imaging (EEG-fMRI) is a multimodal technique extensively applied for mapping the human brain. However, the quality of EEG data obtained within the MRI environment is strongly affected by subject motion, due to the induction of voltages in addition to artefacts caused by the scanning gradients and the heartbeat. This has limited its application in populations such as paediatric patients or to study epileptic seizure onset. Recent work has used a Moiré-phase grating and an MR-compatible camera to prospectively update image acquisition and improve fMRI quality (prospective motion correction: PMC). In this study, we use this technology to retrospectively reduce the spurious voltages induced by motion in the EEG data acquired inside the MRI scanner, with and without fMRI acquisitions. This was achieved by modelling induced voltages from the tracking system motion parameters: positions and angles, their first derivatives (velocities) and the squared velocities. This model was used to remove the voltages related to the detected motion via a linear regression. Since EEG quality during fMRI relies on a temporally stable gradient artefact (GA) template (calculated by averaging EEG epochs matched to scan volume or slice acquisition), this was evaluated in sessions both with and without motion contamination, and with and without PMC. We demonstrate that our approach is capable of significantly reducing motion-related artefacts with a magnitude of up to 10 mm of translation, 6° of rotation and velocities of 50 mm/s, while preserving physiological information. We also demonstrate that the EEG-GA variance is not increased by the gradient direction changes associated with PMC. Provided a scan slice-based GA template is used (rather than a scan volume GA template), we demonstrate that EEG variance during motion can be suppressed towards levels found when subjects are still. In summary, we show that PMC can be used to dramatically improve EEG quality during large-amplitude movements, while benefiting from previously reported improvements in fMRI quality, and does not affect EEG data quality in the absence of large-amplitude movements. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
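
    The regression step described above can be sketched per EEG channel; a minimal stand-in for the authors' pipeline, using exactly the regressors named in the abstract (positions/angles, velocities, squared velocities):

      import numpy as np

      def remove_motion_artefact(eeg, motion):
          # eeg: (n_samples,) one EEG channel; motion: (n_samples, 6) tracked
          # positions and angles resampled to the EEG rate.
          vel = np.gradient(motion, axis=0)            # first derivatives
          X = np.column_stack([motion, vel, vel**2,
                               np.ones(len(eeg))])     # model + intercept
          beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
          return eeg - X @ beta                        # cleaned channel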

  5. Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects

    DOEpatents

    Lu, Shin-Yee

    1998-01-01

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.

  6. Image system for three dimensional, 360{degree}, time sequence surface mapping of moving objects

    DOEpatents

    Lu, S.Y.

    1998-12-22

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.

  7. The on-orbit calibration of geometric parameters of the Tian-Hui 1 (TH-1) satellite

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Renxiang; Hu, Xin; Su, Zhongbo

    2017-02-01

    The on-orbit calibration of geometric parameters is a key step in improving the location accuracy of satellite images without using Ground Control Points (GCPs). Most methods of on-orbit calibration are based on self-calibration using additional parameters, but different numbers of additional parameters may lead to different results. The triangulation bundle adjustment is another way to calibrate the geometric parameters of the camera, and it can describe the changes in each geometric parameter. When the triangulation bundle adjustment method is applied to calibrate geometric parameters, a prerequisite is that the strip model can avoid systematic deformation caused by the rate of attitude changes. Concerning the stereo camera, the influence of the intersection angle should be considered during calibration. The Equivalent Frame Photo (EFP) bundle adjustment based on the Line-Matrix CCD (LMCCD) image can solve the systematic distortion of the strip model and obtain high location accuracy without using GCPs. In this paper, the triangulation bundle adjustment is used to calibrate the geometric parameters of the TH-1 satellite cameras based on LMCCD imagery. During the bundle adjustment, the three-line array cameras are reconstructed by adopting the principle of inverse triangulation. Finally, the geometric accuracy is validated before and after on-orbit calibration using 5 testing fields. After on-orbit calibration, the 3D geometric accuracy is improved from 170 m to 11.8 m. The results show that the location accuracy of TH-1 without using GCPs is significantly improved by the on-orbit calibration of the geometric parameters.

  8. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation.

    PubMed

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-01

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.
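
    The note does not spell out its pose estimator, but recovering a 6-DOF rigid motion from four tracked beam points is commonly done with an SVD-based (Kabsch) fit, sketched here as one plausible approach:

      import numpy as np

      def rigid_transform(P, Q):
          # Rotation R and translation t mapping reference points P (Nx3) onto
          # measured points Q (Nx3); with the four laser beam points, N = 4.
          cP, cQ = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cP).T @ (Q - cQ)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = cQ - R @ cP
          return R, t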

  9. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young-Keun, E-mail: ykkim@handong.edu; Kim, Kyung-Soo

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  10. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    NASA Astrophysics Data System (ADS)

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-01

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  11. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired from fixed and moving camera perspectives), showing the advantages and drawbacks of each method as well as its preferred domain of application.

  12. Determination of the static friction coefficient from circular motion

    NASA Astrophysics Data System (ADS)

    Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.

    2014-07-01

    This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera recording at 240 frames s-1, and the videos are analyzed using Tracker video-analysis software, allowing the students to dynamically model the motion of the coin. The students have to obtain the static friction coefficient by comparing the centripetal and maximum static friction forces. The experiment only requires simple and inexpensive materials. The dynamics of circular motion and static friction forces are difficult for many students to understand. The proposed laboratory exercise addresses these topics, which are relevant to the physics curriculum.
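
    The comparison the students perform reduces to a one-line balance: at the onset of slipping, the maximum static friction supplies the centripetal force, mu * m * g = m * omega^2 * r, so mu = omega^2 * r / g. A worked example with illustrative numbers:

      import numpy as np

      def static_friction_coefficient(omega_rad_s, radius_m, g=9.81):
          # mu from the slipping condition mu * g = omega^2 * r
          return omega_rad_s**2 * radius_m / g

      # Illustrative: a coin at r = 10 cm slips when the turntable reaches 45 rpm
      omega = 45 * 2 * np.pi / 60                      # rad/s
      print(static_friction_coefficient(omega, 0.10))  # ~0.23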

  13. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    PubMed Central

    Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image, in order to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information. PMID:22778608
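
    The calibration step can be sketched with a hypothetical power-law form for the standard-deviation model; the paper derives its own expression relating exposure time, radiant intensity and distance, so the functional form and the data points below are illustrative only:

      import numpy as np
      from scipy.optimize import curve_fit

      def std_model(X, k, a, b):
          # Hypothetical model: ROI grey-level std vs exposure time t,
          # radiant intensity i and distance d (stand-in for the paper's).
          t, i, d = X
          return k * (t * i)**a / d**b

      # Illustrative calibration set: exposure (s), intensity (a.u.),
      # distance (m), and the measured ROI standard deviation.
      t = np.array([1e-3, 1e-3, 2e-3, 2e-3])
      i = np.array([1.0, 1.0, 1.0, 1.0])
      d = np.array([1.0, 2.0, 1.0, 2.0])
      s = np.array([8.0, 2.1, 15.9, 4.0])
      params, _ = curve_fit(std_model, (t, i, d), s, p0=(1e4, 1.0, 2.0))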

  14. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    PubMed

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image, in order to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.

  15. Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes

    NASA Astrophysics Data System (ADS)

    Teppati Losè, L.; Chiabrando, F.; Spanò, A.

    2018-05-01

    The research presented in this paper is focused on a preliminary evaluation of a 360 multi-camera rig: the possibilities of using the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated, and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for the estimation of the interior orientation parameters of the cameras, from both an operative and a theoretical point of view. The consistency of the six cameras that compose the 360 system was analysed in depth by adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was designed and created, and several topographic measurements were performed in order to have a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras was analysed both in the different phases of the photogrammetric workflow (reprojection errors on single tie points, dense cloud generation, geometrical description of the surveyed object, etc.) and in the stitching of the different images into a single spherical panorama (some considerations on the influence of the camera parameters on the overall quality of the spherical image are also reported in that section).

  16. Analysis of the variation of range parameters of thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2016-10-01

    Measured range characteristics may vary considerably (up to several dozen percent) between different samples of the same camera type. The question is whether the manufacturing process somehow lacks repeatability or the commonly used measurement procedures themselves need improvement. The presented paper attempts to deal with the aforementioned question. The measurement method has been thoroughly analyzed as well as the measurement test bed. Camera components (such as detector and optics) have also been analyzed and their key parameters have been measured, including noise figures of the entire system. Laboratory measurements are the most precise method used to determine range parameters of a thermal camera. However, in order to obtain reliable results several important conditions have to be fulfilled. One must have the test equipment capable of measurement accuracy (uncertainty) significantly better than the magnitudes of measured quantities. The measurements must be performed in a controlled environment thus excluding the influence of varying environmental conditions. The personnel must be well-trained, experienced in testing the thermal imaging devices and familiar with the applied measurement procedures. The measurement data recorded for several dozen of cooled thermal cameras (from one of leading camera manufacturers) have been the basis of the presented analysis. The measurements were conducted in the accredited research laboratory of Institute of Optoelectronics (Military University of Technology).

  17. The potential of low-cost RPAS for multi-view reconstruction of rock cliffs

    NASA Astrophysics Data System (ADS)

    Ettore Guccione, Davide; Thoeni, Klaus; Santise, Marina; Giacomini, Anna; Roncella, Riccardo; Forlani, Gianfranco

    2016-04-01

    RPAS, also known as drones or UAVs, have been used in military applications for many years. Nevertheless, the technology has become accessible to everyone only in recent years (Westoby et al., 2012; Nex and Remondino, 2014). Electric multirotor helicopters or multicopters have become one of the most exciting developments, and several off-the-shelf platforms (including cameras) are now available. In particular, RPAS can provide 3D models of sub-vertical rock faces, which for instance are needed for rockfall hazard assessments along road cuts and very steep mountains. The current work investigates the potential of two low-cost off-the-shelf quadcopters equipped with digital cameras for multi-view reconstruction of sub-vertical rock cliffs. The two platforms used are a DJI Phantom 1 (P1) equipped with a GoPro Hero 3+ (12MP) and a DJI Phantom 3 Professional (P3). The latter comes with an integrated 12MP camera mounted on a 3-axis gimbal. Both platforms cost less than 1,500 € including the camera. The study area is a small rock cliff near the Callaghan Campus of the University of Newcastle (Thoeni et al., 2014). The wall is partly smooth with some evident geological features such as non-persistent joints and sharp edges. Several flights were performed with both cameras set in time-lapse mode. Hence, images were taken automatically, but the flights were performed manually: the investigated rock face is very irregular, which required adjusting the yaw and roll for optimal coverage since the flights were performed very close to the cliff face. The digital images were processed with a commercial SfM software package. Thereby, several processing options and camera networks were investigated in order to define the most accurate configuration. Firstly, the difference between the use of coded ground control targets and natural features was studied. Coded targets generally provide the best accuracy, but they need to be placed on the surface, which is not always possible as rock cliffs are not easily accessible. Nevertheless, natural features can provide a good alternative if chosen wisely. Secondly, the influence of using fixed interior orientation parameters versus self-calibration was investigated. The results show that, for the sensors and camera networks used, self-calibration provides better results. This can mainly be attributed to the fact that the object distance is not constant and rather small (less than 10 m) and that neither camera provides an option for fixing the interior orientation parameters. Finally, the results of both platforms are also compared to a point cloud obtained with a terrestrial laser scanner, with which generally very good agreement is observed. References Nex, F., Remondino, F. (2014) UAV for 3D mapping applications: a review. Applied Geomatics 6(1), 1-15. Thoeni, K., Giacomini, A., Murtagh, R., Kniest, E. (2014) A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5, 573-580. Westoby, M.J., Brasington, J., Glasser, N.F., Hambrey, M.J., Reynolds, J.M. (2012) 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 179, 300-314.

  18. High-Speed Video Analysis in a Conceptual Physics Class

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software.2,3 Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to easily be captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
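
    Extracting the boost acceleration from the digitized footage amounts to a second finite difference of the tracked height; a minimal sketch with synthetic data standing in for student measurements:

      import numpy as np

      def boost_acceleration(y_m, fps):
          # Second finite difference of frame-by-frame heights y_m (m).
          dt = 1.0 / fps
          return np.diff(y_m, n=2) / dt**2

      # Synthetic check at 240 fps with a constant 50 m/s^2 boost:
      fps = 240
      t = np.arange(8) / fps
      y = 0.5 * 50.0 * t**2
      print(boost_acceleration(y, fps))   # -> [50. 50. 50. 50. 50. 50.]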

  19. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, as implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only offers a huge time advantage; the method also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each 15-frame interval. The sharpness was estimated using a derivative-based metric. Then, VisualSfM detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions and, finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre and post models a visualization of the changes (erosion and accumulation areas) and a quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models, so real values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low contrast of the surface, too much motion blur), the sharpness algorithm leads to many more matching features. Hence the point densities of the 3D models are increased, which in turn improves the calculations.
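
    The frame-selection step, keeping the sharpest image out of every 15-frame interval with a derivative-based metric, can be sketched as follows; the mean squared gradient used here is one plausible choice of metric, not necessarily the authors':

      import numpy as np

      def sharpness(gray):
          # Derivative-based sharpness score: mean squared gradient magnitude.
          gy, gx = np.gradient(gray.astype(float))
          return np.mean(gx**2 + gy**2)

      def pick_sharpest(frames, interval=15):
          # From each run of `interval` consecutive frames, keep the sharpest.
          keep = []
          for start in range(0, len(frames), interval):
              chunk = frames[start:start + interval]
              keep.append(max(chunk, key=sharpness))
          return keep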

  20. Applications of Phase-Based Motion Processing

    NASA Technical Reports Server (NTRS)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information for determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization, in both Python and MATLAB.
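
    The phase analysis underlying such techniques can be illustrated with a single complex band-pass filter: the phase of the filter response shifts in proportion to sub-pixel image motion, so differencing the phase between frames exposes tiny motions. A one-orientation, one-scale sketch (a full steerable-pyramid implementation filters at many scales and orientations):

      import numpy as np
      from scipy.signal import fftconvolve

      def gabor_kernel(freq=0.25, sigma=4.0):
          # 1-D complex Gabor: a Gaussian-windowed complex sinusoid whose
          # response phase tracks sub-pixel shifts of structure along x.
          x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
          return np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * x)

      def local_phase_change(frame_a, frame_b):
          # Per-pixel phase change between two grayscale frames for one
          # horizontal frequency band; ~ 2*pi*freq*shift for small motions.
          k = gabor_kernel()[None, :]
          ra = fftconvolve(frame_a.astype(float), k, mode="same")
          rb = fftconvolve(frame_b.astype(float), k, mode="same")
          return np.angle(rb * np.conj(ra))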

  1. On HMI's Mod-L Sequence: Test and Evaluation

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Baldner, Charles; Bogart, R. S.; Bush, R.; Couvidat, S.; Duvall, Thomas L.; Hoeksema, Jon Todd; Norton, Aimee Ann; Scherrer, Philip H.; Schou, Jesper

    2016-05-01

    The HMI Mod-L sequence can produce full Stokes parameters at a cadence of 90 seconds by combining filtergrams from both cameras, the front camera and the side camera. Within the 90 seconds, the front camera takes two sets of Left and Right Circular Polarizations (LCP and RCP) at 6 wavelengths; the side camera takes one set of Linear Polarizations (I+/-Q and I+/-U) at 6 wavelengths. By combining the two cameras, one can obtain the full Stokes parameters [I,Q,U,V] at 6 wavelengths in 90 seconds. In the nominal Mod-C sequence that HMI currently uses, the front camera takes LCP and RCP at a cadence of 45 seconds, while the side camera observes the full Stokes parameters at a cadence of 135 seconds. Mod-L should be better than Mod-C for providing vector magnetic field data because (1) it increases the cadence of the full Stokes observation, which leads to higher temporal resolution of the vector magnetic field measurement; and (2) it decreases noise in the vector magnetic field data because it uses more filtergrams to produce [I,Q,U,V]. There are two potential issues in Mod-L that need to be addressed: (1) scaling the intensity of the two cameras' filtergrams; and (2) whether the current polarization calibration model, which is built for each camera separately, works for the combined data from both cameras. This presentation addresses these questions and discusses them further.
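
    In idealized form, the camera combination reduces to sums and differences of the polarization states per wavelength; the sign conventions below are an assumption for illustration, since the real pipeline applies a full polarization calibration per camera:

      import numpy as np

      def stokes_from_mod_l(lcp, rcp, i_plus_q, i_minus_q, i_plus_u, i_minus_u):
          # Idealized combination assuming LCP/RCP measure I -/+ V (front
          # camera) and the side camera measures I+/-Q and I+/-U.
          I = 0.5 * (lcp + rcp)
          V = 0.5 * (rcp - lcp)
          Q = 0.5 * (i_plus_q - i_minus_q)
          U = 0.5 * (i_plus_u - i_minus_u)
          return np.stack([I, Q, U, V])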

  2. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS, in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707

  3. Three-dimensional kinematic correlates of ball velocity during maximal instep soccer kicking in males.

    PubMed

    Sinclair, Jonathan; Fewtrell, David; Taylor, Paul John; Bottoms, Lindsay; Atkins, Stephen; Hobbs, Sarah Jane

    2014-01-01

    Achieving a high ball velocity is important during soccer shooting, as it gives the goalkeeper less time to react, thus improving a player's chance of scoring. This study aimed to identify important technical aspects of kicking linked to the generation of ball velocity using regression analyses. Maximal instep kicks were obtained from 22 academy-level soccer players using a 10-camera motion capture system sampling at 500 Hz. Three-dimensional kinematics of the lower extremity segments were obtained. Regression analysis was used to identify the kinematic parameters associated with the development of ball velocity. A single biomechanical parameter, the knee extension velocity of the kicking limb at ball contact (adjusted R(2) = 0.39, p ≤ 0.01), was obtained as a significant predictor of ball velocity. This study suggests that sagittal plane knee extension velocity is the strongest contributor to ball velocity and potentially overall kicking performance. It is conceivable, therefore, that players may benefit from exposure to coaching and strength techniques geared towards the improvement of knee extension angular velocity, as highlighted in this study.

  4. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between the consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems that require coordination of vision and robotic motion.

  5. Land Survey from Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Peterman, V.; Mesarič, M.

    2012-07-01

    In this paper we present how we use a quadrocopter unmanned aerial vehicle with an attached camera to perform low-altitude photogrammetric land surveys. We use the quadrocopter to take highly overlapping photos of the area of interest. A "structure from motion" algorithm is implemented to get the parameters of the camera orientations and to generate a sparse point cloud representation of the objects in the photos. Then a patch-based multi-view stereo algorithm is applied to generate a dense point cloud. Ground control points are used to georeference the data. Further processing is applied to generate digital orthophoto maps, digital surface models and digital terrain models, and to assess volumes of various types of material. Practical examples of land survey from a UAV are presented in the paper. We explain how we used our system to monitor the reconstruction of a commercial building, then how our UAV was used to assess the volume of the coal supply for the Ljubljana heating plant. A further example shows the usefulness of low-altitude photogrammetry for the documentation of archaeological excavations. In the final example we present how we used our UAV to prepare an underlay map for a natural gas pipeline's route planning. In the final analysis we conclude that low-altitude photogrammetry can help bridge the gap between laser scanning and classic tachymetric survey, since it offers advantages of both techniques.

  6. Technical Note: Kinect V2 surface filtering during gantry motion for radiotherapy applications.

    PubMed

    Nazir, Souha; Rihana, Sandy; Visvikis, Dimitris; Fayad, Hadi

    2018-04-01

    In radiotherapy, the Kinect V2 camera has recently received a lot of attention concerning many clinical applications, including patient positioning, respiratory motion tracking, and collision detection during the radiotherapy delivery phase. However, issues associated with such applications are related to reflections from some materials and surfaces, which generate an offset in depth measurements, especially during gantry motion. This phenomenon appears in particular when the collimator surface is observed by the camera, resulting in erroneous depth measurements, not only in the Kinect surfaces themselves, but also as a large peak when extracting a 1D respiratory signal from these data. In this paper, we proposed filtering techniques to reduce the noise effect in the Kinect-based 1D respiratory signal, using a trend removal filter, and in the associated 2D surfaces, using a temporal median filter. The filtering process was validated using a phantom in order to simulate a patient undergoing radiotherapy treatment while having access to the ground truth. Our results indicate a better correlation between the reference respiratory signal and its corresponding filtered signal (correlation coefficient of 0.76) than that of the nonfiltered signal (correlation coefficient of 0.13). Furthermore, surface filtering results show a decrease in the mean square distance error (85%) between the reference and the measured point clouds. This work shows a significant noise compensation and surface restitution after surface filtering and therefore a potential use of the Kinect V2 camera for different radiotherapy-based applications, such as respiratory tracking and collision detection. © 2018 American Association of Physicists in Medicine.
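
    One of the two proposed filters, the temporal median over the 2D surfaces, can be sketched directly; the window length is illustrative, and the companion 1D trend-removal filter is omitted for brevity:

      import numpy as np

      def temporal_median_filter(depth_frames, window=5):
          # Suppress transient reflection spikes in a stream of depth maps by
          # taking the per-pixel median over a sliding temporal window.
          # depth_frames: (n_frames, h, w) array from the camera.
          half = window // 2
          out = np.empty_like(depth_frames)
          for k in range(len(depth_frames)):
              lo, hi = max(0, k - half), min(len(depth_frames), k + half + 1)
              out[k] = np.median(depth_frames[lo:hi], axis=0)
          return out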

  7. High-throughput microfluidic line scan imaging for cytological characterization

    NASA Astrophysics Data System (ADS)

    Hutcheson, Joshua A.; Powless, Amy J.; Majid, Aneeka A.; Claycomb, Adair; Fritsch, Ingrid; Balachandran, Kartik; Muldoon, Timothy J.

    2015-03-01

    Imaging cells in a microfluidic chamber with an area scan camera is difficult due to motion blur and data loss during frame readout causing discontinuity of data acquisition as cells move at relatively high speeds through the chamber. We have developed a method to continuously acquire high-resolution images of cells in motion through a microfluidics chamber using a high-speed line scan camera. The sensor acquires images in a line-by-line fashion in order to continuously image moving objects without motion blur. The optical setup comprises an epi-illuminated microscope with a 40X oil immersion, 1.4 NA objective and a 150 mm tube lens focused on a microfluidic channel. Samples containing suspended cells fluorescently stained with 0.01% (w/v) proflavine in saline are introduced into the microfluidics chamber via a syringe pump; illumination is provided by a blue LED (455 nm). Images were taken of samples at the focal plane using an ELiiXA+ 8k/4k monochrome line-scan camera at a line rate of up to 40 kHz. The system's line rate and fluid velocity are tightly controlled to reduce image distortion and are validated using fluorescent microspheres. Image acquisition was controlled via MATLAB's Image Acquisition toolbox. Data sets comprise discrete images of every detectable cell which may be subsequently mined for morphological statistics and definable features by a custom texture analysis algorithm. This high-throughput screening method, comparable to cell counting by flow cytometry, provided efficient examination including counting, classification, and differentiation of saliva, blood, and cultured human cancer cells.
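
    The tight control of line rate against fluid velocity follows from a simple matching condition: one sensor line should correspond to one object-plane pixel of cell travel, so the image is neither stretched nor compressed. A sketch with illustrative optics values (which happen to reproduce the 40 kHz figure quoted above):

      def required_line_rate(flow_velocity_m_s, pixel_pitch_m, magnification):
          # Lines/s at which a moving cell advances one object-plane pixel
          # per line: rate = M * v / pixel_pitch.
          return magnification * flow_velocity_m_s / pixel_pitch_m

      # Illustrative: 40X magnification, 5 um sensor pixels, 5 mm/s flow
      print(required_line_rate(5e-3, 5e-6, 40))   # -> 40000.0 lines/s (40 kHz)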

  8. Patient training in respiratory-gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kini, Vijay R.; Vedam, Subrahmanya S.; Keall, Paul J.

    2003-03-31

    Respiratory gating is used to counter the effects of organ motion during radiotherapy for chest tumors. The effects of variations in patient breathing patterns during a single treatment and from day to day are unknown. We evaluated the feasibility of using patient training tools and their effect on the breathing cycle regularity and reproducibility during respiratory-gated radiotherapy. To monitor respiratory patterns, we used a component of a commercially available respiratory-gated radiotherapy system (Real Time Position Management (RPM) System, Varian Oncology Systems, Palo Alto, CA 94304). This passive marker video tracking system consists of reflective markers placed on the patient's chest or abdomen, which are detected by a wall-mounted video camera. Software installed on a PC interfaced to this camera detects the marker motion digitally and records it. The marker position as a function of time serves as the motion signal that may be used to trigger imaging or treatment. The training tools used were audio prompting and visual feedback, with free breathing as a control. The audio prompting method used instructions to 'breathe in' or 'breathe out' at periodic intervals deduced from patients' own breathing patterns. In the visual feedback method, patients were shown a real-time trace of their abdominal wall motion due to breathing. Using this, they were asked to maintain a constant amplitude of motion. Motion traces of the abdominal wall were recorded for each patient for various maneuvers. Free breathing showed a variable amplitude and frequency. Audio prompting resulted in a reproducible frequency; however, the variability and the magnitude of amplitude increased. Visual feedback gave better control over the amplitude but showed minor variations in frequency. We concluded that training improves the reproducibility of amplitude and frequency of patient breathing cycles. This may increase the accuracy of respiratory-gated radiation therapy.

  9. Motion video analysis using planar parallax

    NASA Astrophysics Data System (ADS)

    Sawhney, Harpreet S.

    1994-04-01

    Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis--for instance independent object motion when the camera itself is moving, figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene which can simplify motion based segmentation. This work is a part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.

  10. Note: Simple hysteresis parameter inspector for camera module with liquid lens

    NASA Astrophysics Data System (ADS)

    Chen, Po-Jui; Liao, Tai-Shan; Hwang, Chi-Hung

    2010-05-01

    A method to inspect the hysteresis parameter is presented in this article. The hysteresis of a whole camera module with a liquid lens can be measured, rather than merely that of a single lens. Because the variation in focal length influences image quality, we propose utilizing the sharpness of images captured from the camera module for hysteresis evaluation. Experiments reveal that the profile of the sharpness hysteresis corresponds to the contact-angle characteristic of the liquid lens. Therefore, it can be inferred that the hysteresis of the camera module is induced by the contact angle of the liquid lens. An inspection process takes only 20 s to complete. Thus, compared with other instruments, this inspection method is more suitable for integration into mass production lines for online quality assurance.
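
    The note does not give an explicit formula, but a sharpness-based hysteresis figure can be sketched as the mean gap between the focus curves recorded while sweeping the liquid-lens drive voltage up and then back down over the same voltages:

      import numpy as np

      def hysteresis_metric(voltages, sharp_up, sharp_down):
          # Mean absolute gap between the up-sweep and down-sweep sharpness
          # curves, sampled at the same drive voltages.
          gap = np.abs(np.asarray(sharp_up) - np.asarray(sharp_down))
          return np.trapz(gap, voltages) / (voltages[-1] - voltages[0])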

  11. Motion Imagery and Robotics Application (MIRA)

    NASA Technical Reports Server (NTRS)

    Martinez, Lindolfo; Rich, Thomas

    2011-01-01

    Objectives include: I. Prototype a camera service leveraging the CCSDS integrated protocol stack (MIRA/SM&C/AMS/DTN): a) CCSDS MIRA Service (new); b) Spacecraft Monitor and Control (SM&C); c) Asynchronous Messaging Service (AMS); d) Delay/Disruption Tolerant Networking (DTN). II. Additional MIRA objectives: a) demo of camera control through ISS using the CCSDS protocol stack (Berlin, May 2011); b) verify that the CCSDS standards stack can provide end-to-end space camera services across ground and space environments; c) test interoperability of various CCSDS protocol standards; d) identify overlaps in the design and implementations of the CCSDS protocol standards; e) identify software incompatibilities in the CCSDS stack interfaces; f) provide redlines to the SM&C, AMS, and DTN working groups; g) enable the CCSDS MIRA service for potential use in ISS Kibo camera commanding; h) assist in the long-term evolution of this entire group of CCSDS standards to TRL 6 or greater.

  12. PRIMAS: a real-time 3D motion-analysis system

    NASA Astrophysics Data System (ADS)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.

  13. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, due to scaling error, was reduced to 0.77 mm, while the correlation of errors with their distance from the origin was reduced from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in scaling compensation similar to that of the surveying method or of direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type which has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
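
    The accuracy figure used throughout, the RMSE between camera-measured and surveyed marker coordinates expressed in a common frame, is a one-liner:

      import numpy as np

      def rmse(measured, reference):
          # measured, reference: (N, 3) marker coordinates in the same frame.
          return np.sqrt(np.mean(np.sum((measured - reference)**2, axis=1)))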

  14. Augmented reality image guidance for minimally invasive coronary artery bypass

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Rueckert, Daniel; Hawkes, David; Casula, Roberto; Hu, Mingxing; Pedro, Ose; Zhang, Dong Ping; Penney, Graeme; Bello, Fernando; Edwards, Philip

    2008-03-01

    We propose a novel system for image guidance in totally endoscopic coronary artery bypass (TECAB). A key requirement is the availability of 2D-3D registration techniques that can deal with non-rigid motion and deformation. Image guidance for TECAB is mainly required before the mechanical stabilization of the heart, thus the most dominant source of non-rigid deformation is the motion of the beating heart. To augment the images in the endoscope of the da Vinci robot, we have to find the transformation from the coordinate system of the preoperative imaging modality to the system of the endoscopic cameras. In a first step we build a 4D motion model of the beating heart. Intraoperatively we can use the ECG or video processing to determine the phase of the cardiac cycle. We can then take the heart surface from the motion model and register it to the stereo-endoscopic images of the da Vinci robot using 2D-3D registration methods. We are investigating robust feature tracking and intensity-based methods for this purpose. Images of the vessels available in the preoperative coordinate system can then be transformed to the camera system and projected into the calibrated endoscope view using two video mixers with chroma keying. It is hoped that the augmented view can improve the efficiency of TECAB surgery and reduce the conversion rate to more conventional procedures.
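
    The final overlay step reduces to mapping a preoperative 3D point through a rigid transform into the endoscope camera frame and projecting it with the calibrated intrinsics. Below is a generic pinhole sketch of that step; the transform and intrinsic matrix are assumed outputs of the 2D-3D registration and the endoscope calibration, respectively, and the numbers are hypothetical.

      import numpy as np

      def project_preop_point(X_preop, T_cam_preop, K):
          """Map a preoperative 3D point into endoscope pixel coordinates.

          X_preop     -- 3-vector in the preoperative (e.g. CT) frame
          T_cam_preop -- 4x4 rigid transform, preoperative frame -> camera frame
                         (assumed output of the 2D-3D registration)
          K           -- 3x3 endoscope intrinsic matrix from calibration
          """
          X_h = np.append(X_preop, 1.0)
          X_cam = T_cam_preop @ X_h            # into the camera frame
          x = K @ X_cam[:3]                    # pinhole projection
          return x[:2] / x[2]                  # pixel coordinates

      # Hypothetical numbers: identity rotation, 100 mm along the optical axis.
      T = np.eye(4); T[2, 3] = 100.0
      K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
      print(project_preop_point(np.array([5.0, -3.0, 0.0]), T, K))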

  15. Piecewise-Planar StereoScan: Sequential Structure and Motion using Plane Primitives.

    PubMed

    Raposo, Carolina; Antunes, Michel; P Barreto, Joao

    2017-08-09

    The article describes a pipeline that receives as input a sequence of stereo images and outputs the camera motion and a Piecewise-Planar Reconstruction (PPR) of the scene. The pipeline, named Piecewise-Planar StereoScan (PPSS), works as follows: the planes in the scene are detected for each stereo view using semi-dense depth estimation; the relative pose is computed by a new closed-form minimal algorithm that uses point correspondences only when the plane detections do not fully constrain the motion; the camera motion and the PPR are jointly refined by alternating between discrete optimization and continuous bundle adjustment; and, finally, the detected 3D planes are segmented in the images using a new framework that handles low texture and visibility issues. PPSS is extensively validated on indoor and outdoor datasets and benchmarked against two popular point-based SfM pipelines. The experiments confirm that plane-based visual odometry is resilient to small image overlap, poor texture, specularity, and perceptual aliasing, situations where the fast LIBVISO2 pipeline fails. The comparison against VisualSfM+CMVS/PMVS shows that, for a similar computational complexity, PPSS is more accurate and provides far more compelling and visually pleasing 3D models. These results strongly suggest that plane primitives are an advantageous alternative to point correspondences for SfM and 3D reconstruction in man-made environments.
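
    The abstract does not detail the plane detection step; one common way to obtain such plane primitives from semi-dense stereo depth is a RANSAC plane fit over the back-projected point cloud, sketched below. The inlier threshold and iteration count are illustrative, and this generic fit stands in for whatever detector PPSS actually uses.

      import numpy as np

      def ransac_plane(points, n_iters=500, thresh=0.02, rng=None):
          """Fit one dominant plane n.p + d = 0 to an N x 3 point cloud."""
          if rng is None:
              rng = np.random.default_rng()
          best = np.zeros(len(points), dtype=bool)
          for _ in range(n_iters):
              p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
              n = np.cross(p1 - p0, p2 - p0)
              if np.linalg.norm(n) < 1e-9:
                  continue                    # degenerate (collinear) sample
              n /= np.linalg.norm(n)
              inliers = np.abs(points @ n - n @ p0) < thresh
              if inliers.sum() > best.sum():
                  best = inliers
          # Least-squares refinement: the plane normal is the singular vector
          # of the centred inliers with the smallest singular value.
          inl = points[best]
          c = inl.mean(axis=0)
          n = np.linalg.svd(inl - c)[2][-1]
          return n, -n @ c, best

      # Synthetic cloud: a slab that is nearly planar in z.
      pts = np.random.default_rng(1).uniform(-1.0, 1.0, size=(500, 3))
      pts[:, 2] *= 0.005
      n, d, inliers = ransac_plane(pts)
      print(n, d, inliers.sum())              # normal close to +/-[0, 0, 1]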

  16. Determining wildlife use of wildlife crossing structures under different scenarios.

    DOT National Transportation Integrated Search

    2012-05-01

    This research evaluated Utah's wildlife crossing structures to help UDOT and the Utah Division of Wildlife Resources assess crossing efficacy. In this study, remote motion-sensing cameras were used at 14 designated wildlife crossing culverts and bri...

  17. An investigation into the use of road drainage structures by wildlife in Maryland.

    DOT National Transportation Integrated Search

    2011-08-01

    The research team documented culvert use by 57 species of vertebrates with both infrared motion-detecting digital game cameras and visual sightings. Species affiliations with culvert characteristics were analyzed using χ² statistics, Canonical ...

  18. Hand-held photomicroscopy system

    NASA Technical Reports Server (NTRS)

    Zabower, H. R.

    1972-01-01

    Photomicroscopy system, with simple optics and any standard microscope objective, is used with any type of motion picture, still, or television camera system. Device performs well under difficult environmental conditions and applies to work in ecological studies, field hospitals, and geological surveys.

  19. Iceland: Eyjafjallajökull Volcano

    Atmospheric Science Data Center

    2013-04-17

    ... causes motion of the plume features between camera views. A quantitative computer analysis is necessary to separate out wind and height ... MD. The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center in Hampton, VA. Image ...

  20. Digital stereophotogrammetry based on circular markers and zooming cameras: evaluation of a method for 3D analysis of small motions in orthopaedic research

    PubMed Central

    2011-01-01

    Background: Orthopaedic research projects focusing on small displacements in a small measurement volume require a radiation-free, three-dimensional motion analysis system. A stereophotogrammetric motion analysis system can track wireless, small, lightweight markers attached to the objects, so the disturbance of the measured objects by the marker tracking can be kept to a minimum. The purpose of this study was to develop and evaluate a non-position-fixed, compact motion analysis system configured for a small measurement volume and able to zoom while tracking small, round, flat markers with respect to a fiducial marker used for camera pose estimation. Methods: The system consisted of two web cameras and the fiducial marker placed in front of them. The markers to track were black circles on a white background. The algorithm to detect the centre of a projected circle on the image plane was described and applied. In order to evaluate the accuracy (mean measurement error) and precision (standard deviation of the measurement error) of the optical measurement system, two experiments were performed: 1) inter-marker distance measurement and 2) marker displacement measurement. Results: In the first experiment, 10 mm distances were measured with a total accuracy of 0.0086 mm and a precision of ± 0.1002 mm. In the second experiment, translations from 0.5 mm to 5 mm were measured with a total accuracy of 0.0038 mm and a precision of ± 0.0461 mm. Rotations of 2.25° were measured with a total accuracy of 0.058° and a precision of ± 0.172°. Conclusions: The description of this non-proprietary measurement device with very good levels of accuracy and precision may open up new, cost-effective applications of stereophotogrammetric analysis in musculoskeletal research projects focusing on the kinematics of small displacements in a small measurement volume. PMID:21284867
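
    The circle-centre detection step can be roughly approximated with standard image moments, as in the OpenCV sketch below. Note that the centroid of the projected ellipse is only an approximation of the projected circle centre under perspective, which is precisely the bias a dedicated algorithm such as the paper's addresses; the threshold choice and the synthetic test image here are illustrative.

      import cv2
      import numpy as np

      def circle_centres(gray):
          """Centroids of dark circular markers on a bright background."""
          # Inverted Otsu threshold so the dark circles become foreground.
          _, mask = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          centres = []
          for c in contours:
              m = cv2.moments(c)
              if m["m00"] > 0:
                  centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
          return centres

      # Synthetic test: one black circle at (120, 80) on a white image.
      img = np.full((200, 200), 255, np.uint8)
      cv2.circle(img, (120, 80), 15, 0, -1)
      print(circle_centres(img))   # approximately [(120.0, 80.0)]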
