Correction And Use Of Jitter In Television Images
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.
1989-01-01
Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. In alternative version, system controls lateral motion of camera to generate stereoscopic views for measuring distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.
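The frame-buffer correction at the heart of the "jitter-miser" amounts to translating each frame opposite to the camera displacement sensed along and across the line of sight. A minimal software sketch, assuming the motion-sensor signals have already been converted to pixel offsets through some calibrated mapping (the mapping itself is not specified in the abstract):

```python
import cv2
import numpy as np

def correct_jitter(frame, dx_px, dy_px):
    """Shift a frame to cancel measured camera jitter.

    dx_px, dy_px: image displacement (pixels) attributed to camera
    motion, e.g. derived from gyro/accelerometer readings through a
    calibrated sensor-to-pixel mapping (assumed known here).
    """
    h, w = frame.shape[:2]
    # Translate the image opposite to the measured jitter.
    M = np.float32([[1, 0, -dx_px], [0, 1, -dy_px]])
    return cv2.warpAffine(frame, M, (w, h))
```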
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded-up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, while also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using modified adjacent-frame compensation. The experimental results showed that the target images were stabilized even as the vibration amplitude of the video became increasingly large.
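The pipeline above (feature matching, outlier rejection, global motion estimation, Kalman smoothing) can be sketched with standard OpenCV building blocks. This is not the authors' implementation: ORB stands in for SURF (which is patented and absent from default OpenCV builds), a similarity model stands in for the modified cascading parameters, and the scaling-aware Kalman model is reduced to a per-parameter scalar filter:

```python
import cv2
import numpy as np

def estimate_global_motion(prev_gray, curr_gray):
    """One motion-estimation step: detect/match features, reject
    mismatches with RANSAC, return the inter-frame motion model."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Similarity model (translation + rotation + scale); RANSAC drops outliers.
    A, inliers = cv2.estimateAffinePartial2D(p1, p2, method=cv2.RANSAC)
    return A  # 2x3 inter-frame motion

class ScalarKalman:
    """Minimal 1-D Kalman filter used to smooth each motion parameter."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r
    def update(self, z):
        self.p += self.q                  # predict
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward measurement
        self.p *= (1 - k)
        return self.x
```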
An improved multi-paths optimization method for video stabilization
NASA Astrophysics Data System (ADS)
Qin, Tao; Zhong, Sheng
2018-03-01
For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one while the cropping ratio and warping ratio of each frame are kept in a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform of adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax: we calculate the space-time correlation of adjacent grid cells and use a Gaussian kernel to weight the motion of adjacent cells. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos, which exhibit casual jitter and parallax, and achieve good results.
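The core operation behind the multi-path optimization described above is a Gaussian-weighted smoothing of per-frame camera-path parameters. A sketch reduced to a single path and a fixed kernel (sigma and radius are illustrative choices, not the paper's values):

```python
import numpy as np

def gaussian_smooth_path(path, sigma=10.0, radius=30):
    """Smooth a 1-D camera-path parameter (e.g. accumulated dx, dy or
    angle per frame) with a Gaussian kernel, a simplified stand-in for
    the paper's Gaussian-weighted multi-path optimization."""
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Edge-pad so the smoothed path keeps the original end behavior.
    padded = np.pad(path, radius, mode='edge')
    return np.convolve(padded, kernel, mode='valid')
```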
Joint Video Stitching and Stabilization from Moving Cameras.
Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef
2016-09-08
In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image-stitching methods to shaking videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a space-time optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, producing features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted for handling scene parallax. Experimental results are provided to demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" was developed for Adobe After Effects CC2015 to show the processed videos.
Image motion compensation by area correlation and centroid tracking of solar surface features
NASA Technical Reports Server (NTRS)
Nein, M. E.; Mcintosh, W. R.; Cumings, N. P.
1983-01-01
An experimental solar correlation tracker was tested and evaluated on a ground-based solar magnetograph. Using sunspots as fixed targets, tracking error signals were derived by which the telescope image was stabilized against wind induced perturbations. Two methods of stabilization were investigated; mechanical stabilization of the image by controlled two-axes motion of an active optical element in the telescope beam, and electronic stabilization by biasing of the electron scan in the recording camera. Both approaches have demonstrated telescope stability of about 0.6 arc sec under random perturbations which can cause the unstabilized image to move up to 120 arc sec at frequencies up to 30 Hz.
Tracking prominent points in image sequences
NASA Astrophysics Data System (ADS)
Hahn, Michael
1994-03-01
Measuring image motion and inferring scene geometry and camera motion are main aspects of image sequence analysis. The determination of image motion and the structure-from-motion problem are tasks that can be addressed independently or in cooperative processes. In this paper we focus on tracking prominent points. High stability, reliability, and accuracy are criteria for the extraction of prominent points. This implies that tracking should work quite well with those features; unfortunately, the reality looks quite different. In the experimental investigations we processed a long sequence of 128 images. This mono sequence is taken in an outdoor environment at the experimental field of Mercedes Benz in Rastatt. Different tracking schemes are explored and the results with respect to stability and quality are reported.
Earth elevation map production and high resolution sensing camera imaging analysis
NASA Astrophysics Data System (ADS)
Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai
2010-11-01
The Earth's digital elevation data, which affect space-camera imaging, have been prepared, and their effect on imaging has been analysed. Based on the image-motion-velocity matching error that the TDI CCD's number of integration stages demands, a statistical experimental method (the Monte Carlo method) is used to calculate the distribution histogram of the Earth's elevation within an image-motion-compensation model that accounts for changes in satellite attitude, orbital angular rate, latitude, longitude, and orbital inclination. Elevation information for the Earth's surface is then read from SRTM data, and an Earth elevation map produced for aerospace electronic cameras is compressed and spliced so that elevation data can be retrieved from flash memory according to the latitude and longitude of the imaging point. When a query point falls between two stored values, linear interpolation is used, which better accommodates rugged mountain and hill terrain. Finally, a deviation framework and camera controller are used to test the behavior of deviation-angle errors, and a TDI CCD camera simulation system, built on a model that maps object points to image points, is used to analyze the imaging MTF and a cross-correlation similarity measure; the simulation system accumulates the horizontal and vertical offsets by which TDI CCD imaging exceeds the corresponding pixel to simulate camera imaging as satellite attitude stability changes. The process is practical: it effectively limits the required camera memory and meets the TDI CCD camera's requirements for image-motion-velocity matching and imaging precision.
NASA Astrophysics Data System (ADS)
Tchernykh, Valerij; Dyblenko, Sergej; Janschek, Klaus; Seifart, Klaus; Harnisch, Bernd
2005-08-01
The cameras commonly used for Earth observation from satellites require high attitude stability during the image acquisition. For some types of cameras (high-resolution "pushbroom" scanners in particular), instantaneous attitude changes of even less than one arcsecond result in significant image distortion and blurring. Especially problematic are the effects of high-frequency attitude variations originating from micro-shocks and vibrations produced by the momentum and reaction wheels, mechanically activated coolers, and steering and deployment mechanisms on board. The resulting high attitude-stability requirements for Earth-observation satellites are one of the main reasons for their complexity and high cost. The novel SmartScan imaging concept, based on an opto-electronic system with no moving parts, offers the promise of high-quality imaging with only moderate satellite attitude stability. SmartScan uses real-time recording of the actual image motion in the focal plane of the camera during frame acquisition to correct the distortions in the image. Exceptional real-time performances with subpixel-accuracy image-motion measurement are provided by an innovative high-speed onboard opto-electronic correlation processor. SmartScan will therefore allow pushbroom scanners to be used for hyper-spectral imaging from satellites and other space platforms not primarily intended for imaging missions, such as micro- and nano-satellites with simplified attitude control, low-orbiting communications satellites, and manned space stations.
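The core measurement SmartScan performs, estimating sub-pixel image motion in the focal plane by correlation, can be illustrated in software with FFT phase correlation; the actual system does this in a dedicated onboard opto-electronic processor, so this is only an illustrative stand-in:

```python
import cv2
import numpy as np

def image_motion(patch_a, patch_b):
    """Sub-pixel shift between two focal-plane patches via FFT phase
    correlation, the same class of correlation measurement SmartScan
    performs in hardware."""
    a = np.float32(patch_a)
    b = np.float32(patch_b)
    (dx, dy), response = cv2.phaseCorrelate(a, b)
    return dx, dy, response  # response: correlation-peak confidence
```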
NASA Astrophysics Data System (ADS)
Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.
2017-08-01
This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower-altitude agents in areas where, for example, GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time; thus, an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.
Visual Control for Multirobot Organized Rendezvous.
Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C
2012-08-01
This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion subject to nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined by an image of the set of robots in that configuration, without any additional information. We propose a homography-based framework, relying on the homography induced by the multirobot system, in which a desired homography defines the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work; the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.
NASA Astrophysics Data System (ADS)
Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.
2014-09-01
Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases, it is practically impossible to prevent camera motion, as for instance, when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to remove the camera movement from the scene measurements.
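The calibrate-then-subtract procedure described above can be condensed to a least-squares fit of a linear map between reference-camera motion and apparent scene motion. A sketch under the assumption that both motions are reported as per-frame pixel displacements:

```python
import numpy as np

def calibrate_mapping(ref_motion, scene_motion):
    """Least-squares fit of a linear map from reference-camera pixel
    motion to apparent scene pixel motion, using frames where the
    scene object is known to be stationary (as the paper prescribes).
    ref_motion, scene_motion: (N, 2) arrays of per-frame (dx, dy)."""
    M, *_ = np.linalg.lstsq(ref_motion, scene_motion, rcond=None)
    return M  # 2x2 mapping

def compensate(scene_motion, ref_motion, M):
    """Subtract the camera-induced apparent motion from the scene
    measurement, leaving the object's true displacement."""
    return scene_motion - ref_motion @ M
```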
JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.
Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun
2017-03-01
Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact, communicate with, and assist each other with our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.
Mechanical Design of the LSST Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordby, Martin; Bowden, Gordon; Foss, Mike
2008-06-13
The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.
HDR video synthesis for vision systems in dynamic scenes
NASA Astrophysics Data System (ADS)
Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried
2016-09-01
High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose an HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to compensate for camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness-adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
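The merging step, weighted averaging of aligned radiance maps, can be sketched as a Debevec-style weighted mean. Alignment, brightness adaptation, and moving-object masking (the paper's other stages) are assumed to have been applied already, and the hat weight is a common choice rather than the paper's exact function:

```python
import numpy as np

def merge_radiance(frames, exposure_times):
    """Weighted average of aligned, differently exposed frames into an
    HDR radiance map. Frames are float arrays in [0, 1]; a hat
    function down-weights near-saturated and near-dark pixels."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for z, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * z - 1.0)  # hat weight, peak at mid-gray
        num += w * (z / t)               # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)
```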
The role of passive avian head stabilization in flapping flight
Pete, Ashley E.; Kress, Daniel; Dimitrov, Marina A.; Lentink, David
2015-01-01
Birds improve vision by stabilizing head position relative to their surroundings, while their body is forced up and down during flapping flight. Stabilization is facilitated by compensatory motion of the sophisticated avian head–neck system. While relative head motion has been studied in stationary and walking birds, little is known about how birds accomplish head stabilization during flapping flight. To unravel this, we approximate the avian neck with a linear mass–spring–damper system for vertical displacements, analogous to proven head stabilization models for walking humans. We corroborate the model's dimensionless natural frequency and damping ratios from high-speed video recordings of whooper swans (Cygnus cygnus) flying over a lake. The data show that flap-induced body oscillations can be passively attenuated through the neck. We find that the passive model robustly attenuates large body oscillations, even in response to head mass and gust perturbations. Our proof of principle shows that bird-inspired drones with flapping wings could record better images with a swan-inspired passive camera suspension. PMID:26311316
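The paper's mass-spring-damper neck model has a standard base-excitation transmissibility, which shows how flap-induced body oscillation is attenuated when its frequency lies above the neck's natural frequency. The parameter values below are placeholders, not the fitted swan data:

```python
import numpy as np

def neck_transmissibility(freq_hz, f_natural_hz, zeta):
    """|head motion / body motion| for a base-excited mass-spring-
    damper neck model; values < 1 mean flap-induced body oscillation
    is passively attenuated at the head."""
    r = freq_hz / f_natural_hz                 # frequency ratio
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r**2) ** 2 + (2 * zeta * r) ** 2
    return np.sqrt(num / den)

# Example: flapping at twice the neck's natural frequency is strongly damped.
print(neck_transmissibility(4.0, 2.0, 0.3))   # ~0.48
```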
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
Feghali, Rosario; Mitiche, Amar
2004-11-01
The purpose of this study is to investigate a method of tracking moving objects with a moving camera. The method simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations simultaneously determine an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real image sequences.
Sinclair, Jonathan K; Vincent, Hayley; Richards, Jim D
2017-01-01
To investigate the effects of a prophylactic knee brace on knee joint kinetics and kinematics during netball specific movements. Repeated measures. Laboratory. Twenty university first team level female netball players. Participants performed three movements, run, cut and vertical jump under two conditions (brace and no-brace). 3-D knee joint kinetics and kinematics were measured using an eight-camera motion analysis system. Knee joint kinetics and kinematics were examined using 2 × 3 repeated measures ANOVA whilst the subjective ratings of comfort and stability were investigated using chi-squared tests. The results showed no differences (p > 0.05) in knee joint kinetics. However the internal/external rotation range of motion was significantly (p < 0.05) reduced when wearing the brace in all movements. The subjective ratings of stability revealed that netballers felt that the knee brace improved knee stability in all movements. Further study is required to determine whether reductions in transverse plane knee range of motion serve to attenuate the risk from injury in netballers. Copyright © 2016 Elsevier Ltd. All rights reserved.
Motion capture for human motion measuring by using single camera with triangle markers
NASA Astrophysics Data System (ADS)
Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi
2005-12-01
This study aims to realize motion capture for measuring 3D human motions using a single camera. Although motion capture using multiple cameras is widely used in the sports, medical, and engineering fields, an optical motion-capture method using one camera has not been established. In this paper, the authors achieve 3D motion capture with one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera-calibration methods produce the 3D coordinate-transformation parameters and a lens-distortion parameter using the modified DLT method. The triangle markers enable calculation of the depth coordinate in the camera frame. Experiments measuring 3D position with the MMC in a cubic measurement space 2 m on each side showed that the average error in the measured centroid of a triangle marker was less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by putting a triangle marker on each human joint, the MMC was able to capture a walking motion, a standing-up motion, and a bending and stretching motion. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate a marker's position by measuring its velocity was proposed in order to improve the accuracy of the MMC.
Heliostat calibration using attached cameras and artificial targets
NASA Astrophysics Data System (ADS)
Burisch, Michael; Sanchez, Marcelino; Olarra, Aitor; Villasante, Cristobal
2016-05-01
The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. To control the heliostats with such a precision requires the accurate knowledge of the motion of each of them. The motion of each heliostat can be described by a set of parameters, most notably the position and axis configuration. These parameters have to be determined individually for each heliostat during a calibration process. With the ongoing development of small sized heliostats, the ability to automatically perform such a calibration becomes more and more crucial as possibly hundreds of thousands of heliostats are involved. Furthermore, efficiency becomes an important factor as small sized heliostats potentially have to be recalibrated far more often, due to the limited stability of the components. In the following we present an automatic calibration procedure using cameras attached to each heliostat which are observing different targets spread throughout the solar field. Based on a number of observations of these targets under different heliostat orientations, the parameters describing the heliostat motion can be estimated with high precision.
Holographic motion picture camera with Doppler shift compensation
NASA Technical Reports Server (NTRS)
Kurtz, R. L. (Inventor)
1976-01-01
A holographic motion picture camera is reported for producing three dimensional images by employing an elliptical optical system. There is provided in one of the beam paths (the object or reference beam path) a motion compensator which enables the camera to photograph faster moving objects.
Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation
NASA Astrophysics Data System (ADS)
Nakata, Robert
Remote sensing has many applications, including surveying and mapping, geophysics exploration, military surveillance, search and rescue and counter-terrorism operations. Remote sensor systems typically use visible image, infrared or radar sensors. Camera based image sensors can provide high spatial resolution but are limited to line-of-sight capture during daylight. Infrared sensors have lower resolution but can operate during darkness. Radar sensors can provide high resolution motion measurements, even when obscured by weather, clouds and smoke and can penetrate walls and collapsed structures constructed with non-metallic materials up to 1 m to 2 m in depth depending on the wavelength and transmitter power level. However, any platform motion will degrade the target signal of interest. In this dissertation, we investigate alternative methodologies to capture platform motion, including a Body Area Network (BAN) that doesn't require external fixed location sensors, allowing full mobility of the user. We also investigated platform stabilization and motion compensation techniques to reduce and remove the signal distortion introduced by the platform motion. We evaluated secondary ultrasonic and radar sensors to stabilize the platform resulting in an average 5 dB of Signal to Interference Ratio (SIR) improvement. We also implemented a Digital Signal Processing (DSP) motion compensation algorithm that improved the SIR by 18 dB on average. These techniques could be deployed on a quadcopter platform and enable the detection of respiratory motion using an onboard radar sensor.
Watanabe, Toshiki; Omata, Sadao; Odamura, Motoki; Okada, Masahumi; Nakamura, Yoshihiko; Yokoyama, Hitoshi
2006-11-01
This study aimed to evaluate our newly developed 3-dimensional digital motion-capture and reconstruction system in an animal experiment setting and to characterize quantitatively the three regional cardiac surface motions, in the left anterior descending artery, right coronary artery, and left circumflex artery, before and after stabilization using a stabilizer. Six pigs underwent a full sternotomy. Three tiny metallic markers (diameter 2 mm) coated with a reflective material were attached on three regional cardiac surfaces (left anterior descending, right coronary, and left circumflex coronary artery regions). These markers were captured by two high-speed digital video cameras (955 frames per second) as 2-dimensional coordinates and reconstructed to 3-dimensional data points (about 480 xyz-position data per second) by a newly developed computer program. The remaining motion after stabilization ranged from 0.4 to 1.01 mm at the left anterior descending, 0.91 to 1.52 mm at the right coronary artery, and 0.53 to 1.14 mm at the left circumflex regions. Significant differences before and after stabilization were evaluated in maximum moving velocity (left anterior descending 456.7 +/- 178.7 vs 306.5 +/- 207.4 mm/s; right coronary artery 574.9 +/- 161.7 vs 446.9 +/- 170.7 mm/s; left circumflex 578.7 +/- 226.7 vs 398.9 +/- 192.6 mm/s; P < .0001) and maximum acceleration (left anterior descending 238.8 +/- 137.4 vs 169.4 +/- 132.7 m/s2; right coronary artery 315.0 +/- 123.9 vs 242.9 +/- 120.6 m/s2; left circumflex 307.9 +/- 151.0 vs 217.2 +/- 132.3 m/s2; P < .0001). This system is useful for a precise quantification of the heart surface movement. This helps us better understand the complexity of the heart, its motion, and the need for developing a better stabilizer for beating heart surgery.
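The reconstruction of 2D marker coordinates from the two high-speed cameras into 3D data points is a standard stereo-triangulation step. A sketch assuming calibrated projection matrices for both cameras (the paper's own reconstruction program is not described in detail):

```python
import cv2
import numpy as np

def reconstruct_markers(P1, P2, pts1, pts2):
    """Triangulate marker centroids seen by two synchronized,
    calibrated cameras into 3-D points, the per-frame reconstruction
    step of such a system. P1, P2: 3x4 projection matrices;
    pts1, pts2: 2xN matched marker centroids (pixels)."""
    X_h = cv2.triangulatePoints(P1, P2, np.float32(pts1), np.float32(pts2))
    return (X_h[:3] / X_h[3]).T  # Nx3 Euclidean coordinates
```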
NASA Technical Reports Server (NTRS)
Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob
2001-01-01
To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics with motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one in y and z). Richards compared the characteristics of different commercially available systems set-up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasistatic methodology.
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
NASA Technical Reports Server (NTRS)
1992-01-01
The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm
NASA Astrophysics Data System (ADS)
Lahamy, H.; Lichti, D.
2011-09-01
Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its hand-motion-tracking capability. The evaluation was assessed in terms of the position accuracy of the tracking trajectory in the x, y and z directions in camera space, and the time difference between image acquisition and image display. Three parameters were analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
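The tracking core evaluated here is the plain mean-shift window update, available directly in OpenCV. How the probability image is derived from the range camera (depth gating, amplitude, etc.) is left open in this sketch and is an assumption:

```python
import cv2
import numpy as np

def track_hand(prob_image, window):
    """One mean-shift update as used in such trackers: shift the
    search window toward the mode of a probability image.
    window: (x, y, w, h) from the previous frame."""
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    n_iter, window = cv2.meanShift(prob_image, window, criteria)
    return window
```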
Motion camera based on a custom vision sensor and an FPGA architecture
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel
1998-09-01
A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated to a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC computer which is used for high level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing like spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.
Low-cost human motion capture system for postural analysis onboard ships
NASA Astrophysics Data System (ADS)
Nocerino, Erica; Ackermann, Sebastiano; Del Pizzo, Silvio; Menna, Fabio; Troisi, Salvatore
2011-07-01
The study of human equilibrium, also known as postural stability, concerns different research sectors (medicine, kinesiology, biomechanics, robotics, sport) and is usually performed employing motion analysis techniques for recording human movements and posture. A wide range of techniques and methodologies has been developed, but the choice of instrumentation and sensors depends on the requirements of the specific application. Postural stability is a topic of great interest for the maritime community, since ship motions can make maintaining an upright stance demanding and difficult, with hazardous consequences for the safety of people onboard. The need to capture the motion of an individual standing on a ship during its daily service rules out the optical systems commonly used for human motion analysis: these sensors are not designed to operate in disadvantageous environmental conditions (water, wetness, saltiness) or under suboptimal lighting. The solution proposed in this study consists of a motion acquisition system that can be easily used onboard ships. It makes use of two different methodologies: (I) motion capture with videogrammetry and (II) motion measurement with an Inertial Measurement Unit (IMU). The developed image-based motion capture system, made up of three low-cost, light and compact video cameras, was validated against a commercial optical system and then used to test the reliability of the inertial sensors. In this paper, the whole process of planning, designing, calibrating, and assessing the accuracy of the motion capture system is reported and discussed. Results from the laboratory tests and preliminary campaigns in the field are presented.
It's not the pixel count, you fool
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2012-01-01
The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have?", for we need as many megapixels as possible since the other guys are killing us with their "umpteen" megapixel pocket-sized digital cameras. And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is the automatic motion control that stabilizes the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase after-sales profits, and don't be surprised if it has been done long ago in some basement lab of a photographic company and, of course, before its time.
Video Image Stabilization and Registration (VISAR) Software
NASA Technical Reports Server (NTRS)
1999-01-01
Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
Independent motion detection with a rival penalized adaptive particle filter
NASA Astrophysics Data System (ADS)
Becker, Stefan; Hübner, Wolfgang; Arens, Michael
2014-10-01
Aggregation of pixel-based motion detection into regions of interest that contain single moving objects is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure that can effectively support high-level image analysis. When applied to static cameras, background-subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When a camera captures a scene, these two motion types are blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical-flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion-compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image that are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution that is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
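The preprocessing stage, ego-motion compensation from sparse optical flow followed by differencing, can be sketched as below. The homography model for camera-induced motion and the specific thresholds are illustrative choices, and the rival-penalized particle filter itself is omitted:

```python
import cv2
import numpy as np

def ego_compensated_difference(prev_gray, curr_gray):
    """Compensate camera ego-motion and difference two frames: track
    sparse features, fit a global homography (the camera-induced
    motion), warp the previous frame, and take the absolute
    difference. Residual intensity marks independently moving objects
    and could feed a particle filter's likelihood."""
    pts = cv2.goodFeaturesToTrack(prev_gray, 400, 0.01, 8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    H, _ = cv2.findHomography(pts[good], nxt[good], cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    prev_warped = cv2.warpPerspective(prev_gray, H, (w, h))
    return cv2.absdiff(curr_gray, prev_warped)
```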
Augmented reality image guidance for minimally invasive coronary artery bypass
NASA Astrophysics Data System (ADS)
Figl, Michael; Rueckert, Daniel; Hawkes, David; Casula, Roberto; Hu, Mingxing; Pedro, Ose; Zhang, Dong Ping; Penney, Graeme; Bello, Fernando; Edwards, Philip
2008-03-01
We propose a novel system for image guidance in totally endoscopic coronary artery bypass (TECAB). A key requirement is the availability of 2D-3D registration techniques that can deal with non-rigid motion and deformation. Image guidance for TECAB is mainly required before the mechanical stabilization of the heart, thus the most dominant source of non-rigid deformation is the motion of the beating heart. To augment the images in the endoscope of the da Vinci robot, we have to find the transformation from the coordinate system of the preoperative imaging modality to the system of the endoscopic cameras. In a first step we build a 4D motion model of the beating heart. Intraoperatively we can use the ECG or video processing to determine the phase of the cardiac cycle. We can then take the heart surface from the motion model and register it to the stereo-endoscopic images of the da Vinci robot using 2D-3D registration methods. We are investigating robust feature tracking and intensity-based methods for this purpose. Images of the vessels available in the preoperative coordinate system can then be transformed to the camera system and projected into the calibrated endoscope view using two video mixers with chroma keying. It is hoped that the augmented view can improve the efficiency of TECAB surgery and reduce the conversion rate to more conventional procedures.
Moving Object Detection on a Vehicle Mounted Back-Up Camera
Kim, Dong-Sun; Kwon, Jinsan
2015-01-01
In the detection of moving objects from vision sources, one usually assumes that the scene has been captured by stationary cameras. In the case of backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle's movement, resulting in ego-motion in the background. This produces mixed motion in the scene and makes it difficult to distinguish between the target objects and background motions. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will produce many false-positive detection results. In this paper, we suggest a procedure to be used with traditional moving object detection methods that relaxes the stationary-camera restriction by introducing additional steps before and after the detection. We also describe the implementation on an FPGA platform along with the algorithm. The target application of this work is a road vehicle's rear-view camera system. PMID:26712761
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitation of single-camera approaches is that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
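The recursive refinement idea, each new range observation tightening the estimate for a tracked feature, can be shown with a scalar Kalman filter. This is a deliberate simplification of the paper's extended Kalman filter, which also couples camera motion and image coordinates:

```python
class RangeFilter:
    """Scalar Kalman filter as a simplified stand-in for the paper's
    extended Kalman filter: each new motion- or stereo-derived range
    measurement refines the feature's range estimate over time."""
    def __init__(self, r0, var0=100.0, meas_var=25.0):
        self.r, self.var, self.meas_var = r0, var0, meas_var
    def update(self, z, process_var=1.0):
        self.var += process_var              # predict: range may drift
        k = self.var / (self.var + self.meas_var)
        self.r += k * (z - self.r)           # blend in new measurement
        self.var *= (1 - k)
        return self.r
```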
Camera Operator and Videographer
ERIC Educational Resources Information Center
Moore, Pam
2007-01-01
Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…
Radiation camera motion correction system
Hoffer, P.B.
1973-12-18
The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)
NASA Astrophysics Data System (ADS)
Santos, C. Almeida; Costa, C. Oliveira; Batista, J.
2016-05-01
The paper describes a kinematic model-based solution to simultaneously estimate the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely long-deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, helping to meet structural health monitoring requirements. Results related to the performance evaluation, obtained by numerical simulation and real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced structure model to impose controlled motions. In both cases, the results obtained with a minimum setup comprising only two cameras and four non-coplanar tracking points showed highly accurate on-line camera calibration and structure full-motion estimation.
Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P
2017-10-13
This study presents design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones, which aims to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for imaging blurs by hand shaking. The compensation proposed is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, which is followed by designing a simple lead-lag controller based on established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation is conducted to show the favorable performance of the designed OIS; i.e., it is able to stabilize the lens holder to the desired position within 0.02 s, which is much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2-2.5 μm, which is commensurate to the very small pixel size found in most of commercial image sensors; thus, significantly minimizing image blur caused by hand shaking.
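A discrete lead-lag compensator of the kind described can be obtained from its continuous form by a bilinear (Tustin) transform. The gain, corner frequencies, and loop rate below are placeholders, not the paper's tuned values:

```python
import numpy as np
from scipy.signal import bilinear, lfilter

# Continuous lead-lag C(s) = K (s + z) / (s + p); placeholder constants.
K, z, p, fs = 2.0, 50.0, 500.0, 10_000.0     # fs: control-loop rate (Hz)
bz, az = bilinear([K, K * z], [1.0, p], fs)  # Tustin discretization

def control_step(error_history):
    """Filter the lens-position error sequence through the digital
    lead-lag compensator to produce VCM drive commands."""
    return lfilter(bz, az, error_history)
```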
NASA Astrophysics Data System (ADS)
Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan
2018-03-01
The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-με strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.
Trajectory of coronary motion and its significance in robotic motion cancellation.
Cattin, Philippe; Dave, Hitendu; Grünenfelder, Jürg; Szekely, Gabor; Turina, Marko; Zünd, Gregor
2004-05-01
To characterize remaining coronary artery motion of beating pig hearts after stabilization with an 'Octopus' using an optical remote analysis technique. Three pigs (40, 60 and 65 kg) underwent full sternotomy after receiving general anesthesia. An 8-bit high-speed black and white video camera (50 frames/s) coupled with a laser sensor (60 μm resolution) was used to capture heart wall motion in all three dimensions. Dopamine infusion was used to deliberately modulate cardiac contractility. Synchronized ECG, blood pressure, airway pressure and video data of the region around the first branching point of the left anterior descending (LAD) coronary artery after Octopus stabilization were captured for stretches of 8 s each. Several sequences of the same region were captured over a period of several minutes. Computerized off-line analysis allowed us to perform minute characterization of the heart wall motion. The movement of the points of interest on the LAD ranged from 0.22 to 0.81 mm in the lateral plane (x/y-axis) and 0.5-2.6 mm out of the plane (z-axis). Fast excursions (>50 μm/s in the lateral plane) occurred corresponding to the QRS complex and the T wave, while slow excursion phases (<50 μm/s in the lateral plane) were observed during the P wave and the ST segment. The trajectories of the points of interest during consecutive cardiac cycles, as well as during cardiac cycles minutes apart, remained comparable (the differences were negligible), provided the hemodynamics remained stable. Inotrope-induced changes in cardiac contractility influenced not only the maximum excursion, but also the shape of the trajectory. Normal positive pressure ventilation displacing the heart in the thoracic cage was evident from the displacement of the reference point of the trajectory. The movement of the coronary artery after stabilization appears to still be significant. Minute characterization of the trajectory of motion could provide the substrate for achieving motion cancellation for existing robotic systems. Velocity plots could also help improve gated cardiac imaging.
Using a Digital Video Camera to Study Motion
ERIC Educational Resources Information Center
Abisdris, Gil; Phaneuf, Alain
2007-01-01
To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…
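The measurement described lends itself to a few lines of analysis: track the ball's vertical position frame by frame, fit a parabola, and read the acceleration off the quadratic coefficient. The sketch below simulates the tracked positions rather than reading a real video, and the 30 fps frame rate is an assumption, not a detail from the article.

```python
# Hedged sketch: estimate g from frame-by-frame vertical positions of a
# ball in free fall. Positions are simulated here; in practice they would
# come from clicking through video frames.
import numpy as np

fps = 30.0                            # assumed camera frame rate
t = np.arange(10) / fps               # timestamps of ten tracked frames
rng = np.random.default_rng(0)
y = 0.5 * 9.81 * t**2 + rng.normal(0.0, 0.002, t.size)  # drop distance (m)

# y(t) = y0 + v0*t + (g/2)*t^2, so a quadratic least-squares fit recovers
# g from twice the leading coefficient.
a, v0, y0 = np.polyfit(t, y, 2)
print(f"estimated g = {2 * a:.2f} m/s^2")  # close to 9.81 for good tracking
```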
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
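The least-mean-squares step described above can be made concrete with a small linear model: for a point P on the static background, the 3D velocity observed by a camera moving with linear velocity v and angular velocity w is -(v + w x P), which is linear in the six unknowns. The sketch below is our reading of that formulation, with illustrative names; the flight system's actual parameterization may differ.

```python
# Hedged sketch: solve for the camera's 6-DOF twist (v, w) from 3D points
# on the static background (stereo) and their observed 3D velocities
# (optical flow propagated through stereo).
import numpy as np

def skew(p):
    """Cross-product matrix so that skew(p) @ q == np.cross(p, q)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def estimate_twist(points, velocities):
    """points: (N,3) static-background points; velocities: (N,3) observed
    motion. For a static point, observed velocity = -(v + w x P), which is
    linear in (v, w), so ordinary least squares applies."""
    A = np.vstack([np.hstack([np.eye(3), -skew(P)]) for P in points])
    b = -velocities.reshape(-1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # v (translation rate), w (rotation rate)
```

Pixels belonging to independently moving objects would appear as large-residual outliers against this model, which is one way a system like the one described can separate its own motion from the motion of other objects.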
Measuring the circular motion of small objects using laser stroboscopic images.
Wang, Hairong; Fu, Y; Du, R
2008-01-01
Measuring the circular motion of a small object, including its displacement, speed, and acceleration, is a challenging task. This paper presents a new method for measuring repetitive and/or nonrepetitive, constant speed and/or variable speed circular motion using laser stroboscopic images. Under stroboscopic illumination, each image taken by an ordinary camera records multioutlines of an object in motion; hence, processing the stroboscopic image will be able to extract the motion information. We built an experiment apparatus consisting of a laser as the light source, a stereomicroscope to magnify the image, and a normal complementary metal oxide semiconductor camera to record the image. As the object is in motion, the stroboscopic illumination generates a speckle pattern on the object that can be recorded by the camera and analyzed by a computer. Experimental results indicate that the stroboscopic imaging is stable under various conditions. Moreover, the characteristics of the motion, including the displacement, the velocity, and the acceleration can be calculated based on the width of speckle marks, the illumination intensity, the duty cycle, and the sampling frequency. Compared with the popular high-speed camera method, the presented method may achieve the same measuring accuracy, but with much reduced cost and complexity.
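The quantitative relationships the authors list can be illustrated with simple arithmetic: each strobe pulse of duration t_on smears a moving feature into a streak of width v*t_on, and consecutive outlines are separated by the distance travelled in one strobe period. The numbers below are hypothetical, and this is our reading of the measurement principle, not the paper's exact formulas.

```python
# Illustrative back-of-envelope for stroboscopic speed measurement.
f_strobe = 200.0     # strobe frequency (Hz), hypothetical
duty = 0.1           # duty cycle, hypothetical
pixel_size = 5e-6    # metres per pixel after magnification correction

t_on = duty / f_strobe  # duration of one illumination pulse (s)

streak_px, spacing_px = 12.0, 60.0  # measured from one stroboscopic image
v_from_streak = streak_px * pixel_size / t_on        # width of one smear
v_from_spacing = spacing_px * pixel_size * f_strobe  # outline-to-outline
print(v_from_streak, v_from_spacing)                 # consistent if motion is uniform
```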
The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second
NASA Technical Reports Server (NTRS)
Miller, Cearcy D
1946-01-01
The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed; details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- photoflash bulb, firecrackers, camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.
Repurposing video recordings for structure motion estimations
NASA Astrophysics Data System (ADS)
Khaloo, Ali; Lattanzi, David
2016-04-01
Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
A multi-criteria approach to camera motion design for volume data animation.
Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
EVA Robotic Assistant Project: Platform Attitude Prediction
NASA Technical Reports Server (NTRS)
Nickels, Kevin M.
2003-01-01
The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster than walking speed outside, but it has no suspension. Its wheels with inflated rubber tires are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The motion of the stereo camera pair mounted on the robot that results from this motion as the robot drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement. This has been accomplished in two ways: first, a standalone head stabilizer has been implemented; and second, the estimates have been used to influence the search algorithm of the stereo tracking algorithm. Studies of a tracked object indicate that its image motion is suppressed while the robot is crossing rough terrain. This work expands the range of speed and surface roughness over which the robot should be able to track and follow a field geologist and accept arm gesture commands from the geologist.
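The project used a full Kalman filter over the rate and acceleration channels; as a simplified single-axis stand-in, the complementary filter below shows the underlying idea of fusing an integrated gyro rate with an accelerometer gravity reference and counter-steering the pan-tilt head. All names and gains are ours, for illustration only.

```python
# Simplified stand-in for the base-motion estimator: fuse a pitch gyro with
# an accelerometer tilt reference, then command the pan-tilt unit with the
# negative of the estimated pitch to hold the cameras steady.
import math

class TiltEstimator:
    def __init__(self, alpha=0.98):
        self.alpha = alpha   # trust placed in the integrated gyro
        self.pitch = 0.0     # estimated base pitch (radians)

    def update(self, gyro_pitch_rate, ax, az, dt):
        gyro_pitch = self.pitch + gyro_pitch_rate * dt  # integrate rate
        accel_pitch = math.atan2(-ax, az)               # gravity reference
        self.pitch = (self.alpha * gyro_pitch
                      + (1.0 - self.alpha) * accel_pitch)
        return -self.pitch   # tilt command that counteracts base motion
```

A Kalman filter replaces the fixed blend factor with time-varying gains derived from sensor noise models, which matters on terrain where accelerometers pick up large non-gravitational accelerations.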
Systems and methods for estimating the structure and motion of an object
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dani, Ashwin P; Dixon, Warren
2015-11-03
In one embodiment, the structure and motion of a stationary object are determined using two images and a linear velocity and linear acceleration of a camera. In another embodiment, the structure and motion of a stationary or moving object are determined using an image and linear and angular velocities of a camera.
Video segmentation and camera motion characterization using compressed data
NASA Astrophysics Data System (ADS)
Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain
1997-10-01
We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
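The least-squares fit mentioned above can be sketched directly: around the image centre, a pan/tilt/zoom motion field is approximately (u, v) = (pan + z*x, tilt + z*y), which is linear in the three unknowns and can be fit to macroblock motion vectors in one call. The formulation below is a common simplification consistent with the abstract, not necessarily the authors' exact model.

```python
# Hedged sketch: fit pan/tilt/zoom parameters to MPEG macroblock motion
# vectors by ordinary least squares.
import numpy as np

def fit_pan_tilt_zoom(xy, uv):
    """xy: (N,2) macroblock centres relative to the image centre;
    uv: (N,2) decoded motion vectors. Returns (pan, tilt, zoom)."""
    N = xy.shape[0]
    A = np.zeros((2 * N, 3))
    A[0::2, 0] = 1.0
    A[0::2, 2] = xy[:, 0]   # u = pan + zoom * x
    A[1::2, 1] = 1.0
    A[1::2, 2] = xy[:, 1]   # v = tilt + zoom * y
    b = uv.reshape(-1)
    (pan, tilt, zoom), *_ = np.linalg.lstsq(A, b, rcond=None)
    return pan, tilt, zoom
```

Because the vectors come straight from the compressed stream, the whole characterization runs without decoding pixels, which is the source of the speed claim.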
NASA Astrophysics Data System (ADS)
Scopatz, Stephen D.; Mendez, Michael; Trent, Randall
2015-05-01
The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets, or apparent moving targets, creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide a corresponding test of MTF (resolution), SNR, and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development as well as in comparing various systems by presenting exact scenes to the cameras in a repeatable way.
NASA Astrophysics Data System (ADS)
Yu, Fei; Hui, Mei; Zhao, Yue-jin
2009-08-01
An image block matching algorithm based on motion vectors of correlated pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique that obtains the relative motion among frames of a dynamic image sequence by digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. These parameters simultaneously carry the vector information of the transverse and vertical directions within the image blocks, so better matching information can be obtained by performing the correlation operation in the oblique direction. An iterative weighted least-squares method is used to eliminate block-matching error; the weights are related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image are obtained by weighted least squares from the estimates of blocks chosen evenly across the image, and the shaking image is then stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by applying simulated annealing to the block-matching search. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is a TI TMS320C6416, and a CCD camera with a definition of 720×576 pixels was chosen as the video input. Experimental results show that the algorithm can be performed on the real-time processing system with accurate matching precision.
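One plausible reading of the oblique-projection idea is sketched below: summing a block along its anti-diagonals yields a 1-D profile whose shift between frames mixes the horizontal and vertical components of motion, so a single 1-D correlation carries information about both. This is an illustration of that reading, not the authors' exact formulation.

```python
# Rough sketch: 1-D oblique-projection block matching.
import numpy as np

def oblique_profile(block):
    """Sum pixel intensities along anti-diagonals (the oblique direction)."""
    h, w = block.shape
    flipped = block[::-1]  # diagonals of the flipped block are anti-diagonals
    return np.array([np.trace(flipped, offset=k) for k in range(-h + 1, w)])

def oblique_shift(block_a, block_b):
    """Correlate the oblique profiles of a block in two frames; the peak
    offset approximates the block's motion projected onto the oblique axis."""
    pa = oblique_profile(block_a)
    pb = oblique_profile(block_b)
    pa, pb = pa - pa.mean(), pb - pb.mean()
    corr = np.correlate(pb, pa, mode="full")
    return corr.argmax() - (len(pa) - 1)
```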
7. MOTION PICTURE CAMERA STAND AT BUILDING 8768. Edwards ...
7. MOTION PICTURE CAMERA STAND AT BUILDING 8768. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Observation Bunkers for Test Stand 1-A, Test Area 1-120, north end of Jupiter Boulevard, Boron, Kern County, CA
Video Completion in Digital Stabilization Task Using Pseudo-Panoramic Technique
NASA Astrophysics Data System (ADS)
Favorskaya, M. N.; Buryachenko, V. V.; Zotin, A. G.; Pakhirka, A. I.
2017-05-01
Video completion is a necessary stage after stabilization of a non-stationary video sequence if it is desirable to make the resolution of the stabilized frames equal that of the original frames. Usually the cropped stabilized frames lose 10-20% of their area, which worsens the visibility of the reconstructed scenes. The extension of the field of view may be needed due to unwanted pan-tilt-zoom camera movement. Our approach prepares a pseudo-panoramic key frame during the stabilization stage as a pre-processing step for the subsequent inpainting. It is based on a multi-layered representation of each frame including the background and objects moving differently. The proposed algorithm involves four steps: background completion, local motion inpainting, local warping, and seamless blending. Our experiments show that the necessity of seamless stitching occurs more often than the local warping step. Therefore, seamless blending was investigated in detail, covering four main categories: feathering-based, pyramid-based, gradient-based, and optimal seam-based blending.
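To make the first of those four blending categories concrete, here is a minimal feathering sketch: pixels in the overlap are combined with weights that ramp linearly across the seam, hiding the transition. The overlap geometry and names are ours, assuming a simple horizontal seam.

```python
# Minimal sketch of feathering-based blending across a vertical seam.
import numpy as np

def feather_blend(left, right, overlap):
    """left, right: (H, W, 3) float images whose last/first `overlap`
    columns depict the same scene region."""
    ramp = np.linspace(1.0, 0.0, overlap)[None, :, None]  # 1 -> 0 across seam
    blended = (left[:, -overlap:] * ramp
               + right[:, :overlap] * (1.0 - ramp))
    return np.concatenate(
        [left[:, :-overlap], blended, right[:, overlap:]], axis=1)
```

Pyramid-, gradient-, and seam-based blending replace the fixed linear ramp with multi-scale, gradient-domain, or per-pixel optimal weighting, at increasing computational cost.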
Visual Target Tracking in the Presence of Unknown Observer Motion
NASA Technical Reports Server (NTRS)
Williams, Stephen; Lu, Thomas
2009-01-01
Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
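The registration stage described above can be sketched with phase correlation, a standard technique for recovering a global inter-frame shift; the paper's Kalman filter would then smooth these raw measurements before per-target tracking. The code below is an illustration of that first stage using OpenCV, not the authors' implementation.

```python
# Sketch: estimate the global, observer-induced image shift between two
# frames with phase correlation; subtracting it isolates target motion.
import cv2
import numpy as np

def observer_shift(frame_prev, frame_next):
    a = np.float32(cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY))
    b = np.float32(cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY))
    (dx, dy), response = cv2.phaseCorrelate(a, b)
    return dx, dy  # global shift attributable to camera (wave) motion
```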
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time.
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact-removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
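As a simplified pixel-domain sketch of the fusion idea (the paper itself operates on JPEG DCT blocks), the snippet below sigmoidally boosts the shorter exposures to lift dark detail, then averages the frames with weights favouring well-exposed pixels. The gain, midpoint, and weighting function are illustrative choices, not the paper's.

```python
# Simplified pixel-domain multi-exposure fusion with sigmoidal boosting.
import numpy as np

def sigmoid_boost(img, gain=8.0, midpoint=0.25):
    """Boost a short-exposure image (float values in [0, 1])."""
    return 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))

def fuse(exposures):
    """exposures: list of (H, W) float images ordered short to long."""
    frames, weights = [], []
    for i, img in enumerate(exposures):
        boosted = sigmoid_boost(img) if i < len(exposures) - 1 else img
        w = np.exp(-((img - 0.5) ** 2) / 0.08)  # favour mid-tone (well-exposed) pixels
        frames.append(boosted)
        weights.append(w)
    frames, weights = np.stack(frames), np.stack(weights)
    return (weights * frames).sum(0) / (weights.sum(0) + 1e-8)
```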
Trained neurons-based motion detection in optical camera communications
NASA Astrophysics Data System (ADS)
Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho
2018-04-01
A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons in a neural network that perform repetitive analysis in order to provide efficient and reliable motion detection in OCC. This efficient motion detection can be considered another functionality of OCC, in addition to the two traditional functionalities of illumination and communication. To verify the proposed TNMD, experiments were conducted in an indoor static downlink OCC, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. The motion is detected by observing the user's finger movement in the form of a centroid through the OCC link via the camera. Unlike conventional trained-neuron approaches, the proposed TNMD is trained not with motion itself but with centroid data samples, thus providing more accurate detection and a far less complex detection algorithm. The experimental results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performance at a transmission distance of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. OCC combined with the proposed TNMD can be considered an efficient indoor system that provides illumination, communication, and motion detection in a convenient smart-home environment.
Motion Imagery and Robotics Application Project (MIRA)
NASA Technical Reports Server (NTRS)
Grubbs, Rodney P.
2010-01-01
This viewgraph presentation describes the Motion Imagery and Robotics Application (MIRA) Project. A detailed description of the MIRA camera service software architecture, encoder features, and on-board communications is presented. A description of a candidate camera under development is also shown.
Towards frameless maskless SRS through real-time 6DoF robotic motion compensation.
Belcher, Andrew H; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D
2017-11-13
Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient's skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system's effectiveness in maintaining the target's 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system's effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. The system's success with volunteers has demonstrated its capability for implementation with frameless and maskless SRS treatments, potentially able to achieve the same or better treatment accuracies compared to traditional frame-based approaches.
B-1 AFT Nacelle Flow Visualization Study
NASA Technical Reports Server (NTRS)
Celniker, Robert
1975-01-01
A 2-month program was conducted to perform engineering evaluation and design tasks to prepare for visualization and photography of the airflow along the aft portion of the B-1 nacelles and nozzles during flight test. Several methods of visualizing the flow were investigated and compared with respect to cost, impact of the device on the flow patterns, suitability for use in the flight environment, and operability throughout the flight. Data were based on a literature search and discussions with the test personnel. Tufts were selected as the flow visualization device in preference to several other devices studied. A tuft installation pattern has been prepared for the right-hand aft nacelle area of B-1 air vehicle No.2. Flight research programs to develop flow visualization devices other than tufts for use in future testing are recommended. A design study was conducted to select a suitable motion picture camera, to select the camera location, and to prepare engineering drawings sufficient to permit installation of the camera. Ten locations on the air vehicle were evaluated before the selection of the location in the horizontal stabilizer actuator fairing. The considerations included cost, camera angle, available volume, environmental control, flutter impact, and interference with antennas or other instrumentation.
Camera systems in human motion analysis for biomedical applications
NASA Astrophysics Data System (ADS)
Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.
2015-05-01
Human Motion Analysis (HMA) has been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, including bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation. This paper provides an extensive review of the camera system of HMA and its taxonomy, including camera types, camera calibration, and camera configuration. The review focuses on evaluating camera system considerations of HMA specifically for biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system of HMA for biomedical applications.
Registration of Large Motion Blurred Images
2016-05-09
in handling the dynamics of the capturing system, for example, a drone. CMOS sensors , used in recent times, when employed in these cameras produce...handling the dynamics of the capturing system, for example, a drone. CMOS sensors , used in recent times, when employed in these cameras produce two types...blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS
Teacher-in-Space Trainees - Arriflex Motion Picture Camera
1985-09-20
S85-40668 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe (left) and Barbara R. Morgan have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Photo credit: NASA
Photogrammetry of Apollo 15 photography, part C
NASA Technical Reports Server (NTRS)
Wu, S. S. C.; Schafer, F. J.; Jordan, R.; Nakata, G. M.; Derick, J. L.
1972-01-01
In the Apollo 15 mission, a mapping camera system, a 61-cm optical-bar high-resolution panoramic camera, and a laser altimeter were used. The panoramic camera is described as having several distortion sources: the cylindrical shape of the negative film surface, the scanning action of the lens, the image motion compensator, and the spacecraft motion. Film products were processed on a specifically designed analytical plotter.
Real time moving scene holographic camera system
NASA Technical Reports Server (NTRS)
Kurtz, R. L. (Inventor)
1973-01-01
A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).
SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, S; Rao, A; Wendt, R
Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, L; M Yang, Y; Nelson, B
Purpose: A novel end-to-end test system using a CCD camera and a scintillator-based phantom (XRV-124, Logos Systems Int'l) capable of measuring the beam-by-beam delivery accuracy of robotic radiosurgery (CyberKnife) was developed and reported in our previous work. This work investigates its application in assessing the motion tracking (Synchrony) accuracy for CyberKnife. Methods: A QA plan with anterior and lateral beams (with 4 different collimator sizes) was created (Multiplan v5.3) for the XRV-124 phantom. The phantom was placed on a motion platform (superior and inferior movement), and the plans were delivered on the CyberKnife M6 system using four motion patterns: static, sine wave, sine wave with 15° phase shift, and a patient breathing pattern composed of 2 cm maximum motion with a 4-second breathing cycle. Under integral recording mode, the time-averaged beam vectors (X, Y, Z) were measured by the phantom and compared with static delivery. In dynamic recording mode, the beam spots were recorded at a rate of 10 frames/second. The beam vector deviation from the average position was evaluated against the various breathing patterns. Results: The average beam positions of the six deliveries with no motion and three deliveries with Synchrony tracking on ideal motion (sine wave without phase shift) all agree within −0.03±0.00, 0.10±0.04, and 0.04±0.03 mm in the X, Y, and Z directions. Radiation beam width (FWHM) variations are within ±0.03 mm. Dynamic video recording showed sub-millimeter tracking stability for both regular and irregular breathing patterns; however, a tracking error of up to 3.5 mm was observed when a 15-degree phase shift was introduced. Conclusion: The XRV-124 system is able to provide 3D and 4D targeting accuracy for CyberKnife delivery with Synchrony. The experimental results showed sub-millimeter delivery in the phantom with excellent correlation of target to breathing motion. The accuracy was degraded when irregular motion and phase shift were introduced.
Richardson-Lucy deblurring for the star scene under a thinning motion path
NASA Astrophysics Data System (ADS)
Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining
2015-05-01
This paper puts emphasis on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the significance of accurate estimation of the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene with a thinning motion-blur model that describes the camera's path. This thinning-path blur-kernel model is more effective at modeling the spatially variant motion blur introduced by the camera's ego motion than conventional blind, kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star point trajectory, and thereby the blur kernel, from the motion-blurred star image. We then detail how the motion-blur model is incorporated into the Richardson-Lucy (RL) deblurring algorithm, which reveals its overall effectiveness. In addition, compared with a conventionally estimated blur kernel, experimental results show that the proposed thinning-based blur kernel offers lower complexity, higher efficiency, and better accuracy, contributing to better restoration of motion-blurred star images.
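To make the RL step concrete, here is the textbook Richardson-Lucy iteration; in the paper's pipeline, `psf` would be the kernel recovered by thinning the star trail. This is the standard algorithm, not the authors' modified variant.

```python
# Textbook Richardson-Lucy deconvolution.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    """blurred: observed image; psf: blur kernel (will be normalized)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]        # adjoint of convolution with psf
    estimate = np.full_like(blurred, 0.5)
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + 1e-12)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```

The multiplicative update preserves non-negativity, which suits star fields where most pixels are near zero and a few are bright points.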
Documenting Western Burrowing Owl Reproduction and Activity Patterns Using Motion-Activated Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Derek B.; Greger, Paul D.
We used motion-activated cameras to monitor the reproduction and patterns of activity of the Burrowing Owl (Athene cunicularia) above ground at 45 burrows in south-central Nevada during the breeding seasons of 1999, 2000, 2001, and 2005. The 37 broods, encompassing 180 young, raised over the four years represented an average of 4.9 young per successful breeding pair. Young and adult owls were detected at the burrow entrance at all times of the day and night, but adults were detected more frequently during afternoon/early evening than were young. Motion-activated cameras require less effort to implement than other techniques. Limitations include photographing only a small percentage of owl activity at the burrow; not detecting the actual number of eggs, young, or number fledged; and not being able to track individual owls over time. Further work is also necessary to compare the accuracy of productivity estimates generated from motion-activated cameras with other techniques.
Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry
NASA Technical Reports Server (NTRS)
Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)
2016-01-01
A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
Concordance of Motion Sensor and Clinician-Rated Fall Risk Scores in Older Adults.
Elledge, Julie
2017-12-01
As the older adult population in the United States continues to grow, developing reliable, valid, and practical methods for identifying fall risk is a high priority. Falls are prevalent in older adults and contribute significantly to morbidity and mortality rates and rising health costs. Identifying at-risk older adults and intervening in a timely manner can reduce falls. Conventional fall risk assessment tools require a health professional trained in the use of each tool for administration and interpretation. Motion sensor technology, which uses three-dimensional cameras to measure patient movements, is promising for assessing older adults' fall risk because it could eliminate or reduce the need for provider oversight. The purpose of this study was to assess the concordance of fall risk scores as measured by a motion sensor device, the OmniVR Virtual Rehabilitation System, with clinician-rated fall risk scores in older adult outpatients undergoing physical rehabilitation. Three standardized fall risk assessments were administered by the OmniVR and by a clinician. Validity of the OmniVR was assessed by measuring the concordance between the two assessment methods. Stability of the OmniVR fall risk ratings was assessed by measuring test-retest reliability. The OmniVR scores showed high concordance with the clinician-rated scores and high stability over time, demonstrating comparability with provider measurements.
Video Image Stabilization and Registration (VISAR) Software
NASA Technical Reports Server (NTRS)
1999-01-01
Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise such as snow. VISAR could also have applications in medical and meteorological imaging. It could steady ultrasound images, which are infamous for their grainy, blurred quality, and it would be especially useful for studying tornadoes, tracking whirling objects and helping to determine a tornado's wind speed. This image shows the two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.
Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path
Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki
2017-01-01
Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in flat regions using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems.
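As a small stand-in for the l1/TV path optimization above, the sketch below smooths an accumulated camera path by gradient descent on ||p - c||^2 + lam * TV(p), with the total-variation term smoothed for differentiability. The paper's solver and its hole-minimization constraints are more elaborate; this only shows the shape of the objective, and all parameters are illustrative.

```python
# Hedged sketch: temporal-TV smoothing of a 1-D camera path.
import numpy as np

def smooth_path(c, lam=2.0, eps=1e-2, lr=0.01, iters=3000):
    """c: (N,) accumulated camera motion (e.g., x-translation per frame).
    Minimizes 0.5*||p - c||^2 + lam * sum sqrt(diff(p)^2 + eps)."""
    p = c.astype(float).copy()
    for _ in range(iters):
        d = np.diff(p)
        g = d / np.sqrt(d * d + eps)     # derivative of the smoothed |d|
        tv_grad = np.zeros_like(p)
        tv_grad[:-1] -= g                # each |p[i+1]-p[i]| touches two entries
        tv_grad[1:] += g
        p -= lr * ((p - c) + lam * tv_grad)
    return p
```

The TV penalty favors piecewise-constant paths, which is why l1-style stabilizers produce the static-then-pan camera moves typical of professional footage rather than a merely low-pass-filtered wobble.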
Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.
Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta
2010-01-01
This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras is placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.
3D kinematic measurement of human movement using low cost fish-eye cameras
NASA Astrophysics Data System (ADS)
Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.
2017-02-01
3D motion capture is difficult when the capturing is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach that uses two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they cannot be well modeled by a pinhole camera model, which makes it difficult to estimate depth information. In this work, to recover the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparison with results from a commercially available Vicon motion capture system.
Developmental Approach for Behavior Learning Using Primitive Motion Skills.
Dawood, Farhan; Loo, Chu Kiong
2018-05-01
Imitation learning through self-exploration is essential in developing sensorimotor skills. Most developmental theories emphasize that social interactions, especially understanding of observed actions, could first be achieved through imitation, yet the discussion on the origin of primitive imitative abilities is often neglected, referring instead to the possibility of its innateness. This paper presents a developmental model of imitation learning based on the hypothesis that a humanoid robot acquires imitative abilities as induced by sensorimotor associative learning through self-exploration. In designing such a learning system, several key issues are addressed: automatic segmentation of observed actions into motion primitives using raw images acquired from the camera, without requiring any kinematic model; incremental learning of spatio-temporal motion sequences that dynamically generates a topological structure in a self-stabilizing manner; organization of the learned data for easy and efficient retrieval using a dynamic associative memory; and the use of segmented motion primitives to generate complex behavior by combining these primitives. In our experiment, the self-posture is acquired by observing the image of the robot's own body posture while performing actions in front of a mirror through body babbling. The complete architecture was evaluated by simulation and by real-robot experiments performed on the DARwIn-OP humanoid robot.
Improved head-controlled TV system produces high-quality remote image
NASA Technical Reports Server (NTRS)
Goertz, R.; Lindberg, J.; Mingesz, D.; Potts, C.
1967-01-01
A manipulator operator uses an improved-resolution TV camera/monitor positioning system to view the remote handling and processing of reactive, flammable, explosive, or contaminated materials. The pan and tilt motions of the camera and monitor are slaved to follow the corresponding motions of the operator's head.
Accuracy of an optical active-marker system to track the relative motion of rigid bodies.
Maletsky, Lorin P; Sun, Junyi; Morton, Nicholas A
2007-01-01
The measurement of relative motion between two moving bones is commonly accomplished for in vitro studies by attaching to each bone a series of either passive or active markers in a fixed orientation to create a rigid body (RB). This work determined the accuracy of motion between two RBs using an Optotrak optical motion capture system with active infrared LEDs. The stationary noise in the system was quantified by recording the apparent change in position with the RBs stationary and found to be 0.04 degrees and 0.03 mm. Incremental 10 degrees rotations and 10-mm translations were made using a more precise tool than the Optotrak. Increasing camera distance decreased the precision or increased the range of values observed for a set motion and increased the error in rotation or bias between the measured and actual rotation. The relative positions of the RBs with respect to the camera-viewing plane had a minimal effect on the kinematics and, therefore, for a given distance in the volume less than or close to the precalibrated camera distance, any motion was similarly reliable. For a typical operating set-up, a 10 degrees rotation showed a bias of 0.05 degrees and a 95% repeatability limit of 0.67 degrees. A 10-mm translation showed a bias of 0.03 mm and a 95% repeatability limit of 0.29 mm. To achieve a high level of accuracy it is important to keep the distance between the cameras and the markers near the distance the cameras are focused to during calibration.
Fabry, Christian; Kaehler, Michael; Herrmann, Sven; Woernle, Christoph; Bader, Rainer
2014-01-01
Tripolar systems have been implanted to reduce the risk of recurrent dislocation. However, little is known about the dynamic behavior of tripolar hip endoprostheses under daily life conditions and the joint stability achieved. Hence, the objective of this biomechanical study was to examine the in vivo dynamics and dislocation behavior of two types of tripolar systems compared to a standard total hip replacement (THR) with the same outer head diameter. Several load cases of daily life activities were applied to an eccentric and a concentric tripolar system by an industrial robot. During testing, the motion of the intermediate component was measured using a stereo camera system. Additionally, their behavior under different dislocation scenarios was investigated in comparison to a standard THR. For the eccentric tripolar system, the intermediate component shifted into moderate valgus positions, regardless of the type of movement. This implant showed the highest resisting torque against dislocation in combination with a large range of motion. In contrast, the concentric tripolar system tended to remain in varus positions and moved primarily after stem contact. According to the results, eccentric tripolar systems can work well under in vivo conditions and increase hip joint stability in comparison to standard THRs.
Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.
Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond
2018-04-01
We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with other two calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.
Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.
Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P
2016-01-01
Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvement, together with decreasing cost, is opening them up for quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were located underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
Marker-less multi-frame motion tracking and compensation in PET-brain imaging
NASA Astrophysics Data System (ADS)
Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.
2015-03-01
In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked and then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to wear markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.
"Teacher in Space" Trainees - Arriflex Motion Picture Camera
1985-09-20
S85-40670 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe and Barbara R. Morgan (out of frame) have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. McAuliffe zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA
Teacher-in-Space Trainees - Arriflex Motion Picture Camera
1985-09-20
S85-40669 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe (left) and Barbara R. Morgan have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Morgan adjusts a lens as a studious McAuliffe looks on. Photo credit: NASA
"Teacher in Space" Trainees - Arriflex Motion Picture Camera
1985-09-20
S85-40671 (18 Sept. 1985) --- The two teachers, Barbara R. Morgan and Sharon Christa McAuliffe (out of frame) have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Morgan zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
Astronaut Walz on flight deck with IMAX camera
1996-11-04
STS079-362-023 (16-26 Sept. 1996) --- Astronaut Carl E. Walz, mission specialist, positions the IMAX camera for a shoot on the flight deck of the Space Shuttle Atlantis. The IMAX project is a collaboration among NASA, the Smithsonian Institution's National Air and Space Museum, IMAX Systems Corporation and the Lockheed Corporation to document in motion picture format significant space activities and promote NASA's educational goals using the IMAX film medium. This system, developed by IMAX of Toronto, uses specially designed 65mm cameras and projectors to record and display very high definition color motion pictures which, accompanied by six-channel high fidelity sound, are displayed on screens in IMAX and OMNIMAX theaters that are up to ten times larger than a conventional screen, producing a feeling of "being there." The 65mm photography is transferred to 70mm motion picture films for showing in IMAX theaters. IMAX cameras have been flown on 14 previous missions.
Visual fatigue modeling for stereoscopic video shot based on camera motion
NASA Astrophysics Data System (ADS)
Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing
2014-11-01
As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects for static cameras and backgrounds. Relative motion should be considered for different camera conditions, determining different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. Visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be indicated according to the proposed algorithm. Compared with conventional algorithms which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
Human silhouette matching based on moment invariants
NASA Astrophysics Data System (ADS)
Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi
2005-07-01
This paper applies silhouette matching based on moment invariants to infer human motion parameters from video sequences captured by a single monocular uncalibrated camera. Currently, there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the input video contents. A standard 3D motion database is built up in advance using marker techniques. Given a video sequence, human silhouettes are extracted along with the viewpoint information of the camera, which is utilized to project the standard 3D motion database onto a 2D one. The video recovery problem is therefore formulated as a matching issue of finding the body pose in the standard 2D library most similar to the one in the video image. The framework is applied to the trampoline sport, where we can obtain complex human motion parameters from single-camera video sequences, and numerous experiments demonstrate that this approach is feasible in the field of monocular video-based 3D motion reconstruction.
Human Age Estimation Method Robust to Camera Sensor and/or Face Movement
Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung
2015-01-01
Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring can be present in face images due to movement of the camera sensor and/or movement of the face during image acquisition. The facial features in captured images can therefore be transformed according to the amount of motion, which causes performance degradation of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient at enhancing age estimation performance compared with systems that do not employ it. PMID:26334282
Scene-aware joint global and local homographic video coding
NASA Astrophysics Data System (ADS)
Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.
2016-09-01
Perspective motion is commonly present in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, due either to low prediction accuracy or to excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piecewise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters of each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (at equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed bit rate savings ranging from 3.7 to 9.1%.
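As a point of reference for the global model discussed above, the following is a minimal sketch (in Python with OpenCV, not the authors' codec) of how an eight-parameter global homography between two consecutive frames is commonly estimated from matched features; the frame file names are placeholder assumptions.

```python
# Minimal sketch: estimating the eight-parameter global homography between
# two frames, the baseline motion model the paper improves upon.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(prev, None)
kp2, des2 = orb.detectAndCompute(curr, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects matches on independently moving objects;
# H has eight free parameters (3x3 up to scale).
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("global homography:\n", H)
```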
Ribeiro-Gomes, Krishna; Hernández-López, David; Ortega, José F; Ballesteros, Rocío; Poblete, Tomás; Moreno, Miguel A
2017-09-23
The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) is becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, due to their low payload capacity, UAVs need to mount light, uncooled thermal cameras, in which the microbolometer is not stabilized to a constant temperature. This makes the camera precision too low for many applications. Additionally, the low contrast of thermal images makes the photogrammetry process inaccurate, which results in large errors in the generation of orthoimages. In this research, we propose new calibration algorithms, based on neural networks, which consider the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter for improving the quality of the photogrammetry process using structure-from-motion software. With the proposed calibration algorithm, the measurement accuracy improved from 3.55 °C with the original camera configuration to 1.37 °C. The implementation of the Wallis filter increased the number of tie-points from 58,000 to 110,000 and decreased the total positioning error from 7.1 m to 1.3 m. PMID:28946606
Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.
Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo
2011-01-01
In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, and thus motion of the targeted area may cause side effects in normal organs. This paper describes our motion detection system with three USB cameras. To reduce the effect of changes in illuminance in the tracking area, we used an infrared light and USB cameras that were sensitive to infrared light. Motion detection was performed by tracking the patient's ears and nose with the three USB cameras, where pattern matching between a predefined template image for each view and the acquired images was done by an exhaustive search method using general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement accuracy of our system was less than 0.7 mm, amounting to less than half of that of our previous system.
NASA Technical Reports Server (NTRS)
Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth
2016-01-01
Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and researchers funded by the National Space Biomedical Research Institute (NSBRI) by developing computational models of exercising with these new advanced exercise device concepts. To validate these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to use traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed using open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means to produce quantitative, comparative motion capture kinematic data. Additionally, data such as the exercise volume required in small spaces such as the Orion capsule can be determined. METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera 3-dimensional reconstruction. Using OpenCV, via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces with commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove the geometric distortion of the lens and sensor (specific to each individual camera). A set of high-contrast markers was placed on the exercising subject (safety also necessitated that they be soft in case they became detached during parabolic flight); small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration, sweeping a wand through the camera scenes simultaneously, was also performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in side-by-side comparison with a traditional motion capture system and also on a parabolic flight.
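To make the pipeline above concrete, the following is a hedged Python/OpenCV sketch of two of the listed steps: checkerboard intrinsic calibration and two-camera triangulation of a marker centroid. File paths, board geometry, and the second camera's pose are illustrative assumptions, not values from the study.

```python
# Sketch of intrinsic calibration plus two-view 3D reconstruction with OpenCV.
import glob
import cv2
import numpy as np

# --- intrinsic calibration from checkerboard images (per camera) ---
pattern = (9, 6)                                 # inner corners (assumed board)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
obj_pts, img_pts = [], []
for fname in glob.glob("calib/cam0_*.png"):      # placeholder image paths
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)
ret, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                         gray.shape[::-1], None, None)

# --- two-view triangulation of an isolated marker centroid ---
# x0, x1: undistorted pixel coordinates of the same marker in two views
x0 = np.array([[320.0], [241.5]])
x1 = np.array([[298.2], [250.7]])
P1 = np.hstack([K, np.zeros((3, 1))])            # camera 0 at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # assumed baseline
X = cv2.triangulatePoints(P1, P2, x0, x1)        # homogeneous 4-vector
print("marker position:", (X[:3] / X[3]).ravel())
```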
Dense depth maps from correspondences derived from perceived motion
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2017-01-01
Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
Gaze control for an active camera system by modeling human pursuit eye movements
NASA Astrophysics Data System (ADS)
Toelg, Sebastian
1992-11-01
The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., those moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition, it exhibits several interesting abilities well known from psychophysics, such as catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.
The Effect of Selected Cinemagraphic Elements on Audience Perception of Mediated Concepts.
ERIC Educational Resources Information Center
Orr, Quinn
This study explores cinemagraphic and visual elements and their interrelations through reinterpretation of previous research and literature. The cinemagraphic elements of visual images (camera angle, camera motion, subject motion, color, and lighting) work as a language requiring a proper grammar for the messages to be conveyed in their…
Time-Lapse Motion Picture Technique Applied to the Study of Geological Processes.
Miller, R D; Crandell, D R
1959-09-25
Light-weight, battery-operated timers were built and coupled to 16-mm motion-picture cameras having apertures controlled by photoelectric cells. The cameras were placed adjacent to Emmons Glacier on Mount Rainier. The film obtained confirms the view that exterior time-lapse photography can be applied to the study of slow-acting geologic processes.
ERIC Educational Resources Information Center
Lee, Victor R.
2015-01-01
Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video,…
1999-06-01
Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise such as snow. VISAR could also have applications in medical and meteorological imaging. It could steady ultrasound images, which are infamous for their grainy, blurred quality. It would also be especially useful for studying tornadoes, tracking whirling objects and helping to determine a tornado's wind speed. This image shows the two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.
Considerations for opto-mechanical vs. digital stabilization in surveillance systems
NASA Astrophysics Data System (ADS)
Kowal, David
2015-05-01
Electro-optical surveillance and reconnaissance systems are frequently mounted on unstable or vibrating platforms such as ships, vehicles, aircraft and masts. Mechanical coupling between the platform and the cameras leads to angular vibration of the line of sight. Image motion during detector and eye integration times leads to image smear and a resulting loss of resolution. Additional effects are wavy images for detectors based on a rolling shutter mechanism and annoying movement of the image at low frequencies. A good stabilization system should yield sub-pixel stabilization errors and meet cost and size requirements. There are two main families of LOS stabilization methods: opto-mechanical stabilization and electronic stabilization. Each family, or a combination of both, can be implemented by a number of different techniques of varying complexity, size and cost leading to different levels of stabilization. Opto-mechanical stabilization is typically based on gyro readings, whereas electronic stabilization is typically based on gyro readings or image registration calculations. A few common stabilization techniques, as well as options for different gimbal arrangements will be described and analyzed. The relative merits and drawbacks of the different techniques and their applicability to specific systems and environments will be discussed. Over the years Controp has developed a large number of stabilized electro-optical payloads. A few examples of payloads with unique stabilization mechanisms will be described.
Sensor fusion of cameras and a laser for city-scale 3D reconstruction.
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-11-04
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.
Fluorescent image tracking velocimeter
Shaffer, Franklin D.
1994-01-01
A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light for producing a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity and the exposures may be coded to derive directional information.
Bio-inspired motion detection in an FPGA-based smart camera module.
Köhler, T; Röchter, F; Lindemann, J P; Möller, R
2009-03-01
Flying insects, despite their relatively coarse vision and tiny nervous system, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10,000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows a flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, such that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation can be performed by the same compact device.
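For readers unfamiliar with the correlation scheme, the sketch below implements a basic Reichardt-type EMD array in NumPy; it mirrors the delay-and-correlate structure described above, but the time constant, sampling rate and receptor spacing are illustrative choices rather than the module's parameters.

```python
# Minimal Reichardt-type elementary motion detector (EMD) array in NumPy.
import numpy as np

def emd_response(frames, tau=0.05, dt=1 / 100, spacing=1):
    """frames: (T, N) luminance signal over T time steps of N receptors."""
    alpha = dt / (tau + dt)            # first-order low-pass coefficient
    lp = np.zeros(frames.shape[1])     # low-passed (delayed) channel state
    out = []
    for f in frames:
        lp = lp + alpha * (f - lp)     # update the delayed channel
        # correlate the delayed signal with the undelayed neighbour,
        # for both directions, and take the opponent difference
        right = lp[:-spacing] * f[spacing:]
        left = f[:-spacing] * lp[spacing:]
        out.append(np.mean(right - left))   # >0 indicates rightward motion
    return np.array(out)

# Usage: a sine grating drifting rightward yields a positive mean response.
x = np.arange(64)
frames = np.stack([np.sin(0.5 * x - 0.02 * t) for t in range(200)])
print(emd_response(frames).mean() > 0)
```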
Multispectral image dissector camera flight test
NASA Technical Reports Server (NTRS)
Johnson, B. L.
1973-01-01
It was demonstrated that the multispectral image dissector camera is able to provide composite pictures of the earth surface from high altitude overflights. An electronic deflection feature was used to inject the gyro error signal into the camera for correction of aircraft motion.
van Dijk, Joris D; van Dalen, Jorn A; Mouden, Mohamed; Ottervanger, Jan Paul; Knollema, Siert; Slump, Cornelis H; Jager, Pieter L
2018-04-01
Correction of motion has become feasible on cadmium-zinc-telluride (CZT)-based SPECT cameras during myocardial perfusion imaging (MPI). Our aim was to quantify the motion and to determine the value of automatic correction using commercially available software. We retrospectively included 83 consecutive patients who underwent stress-rest MPI CZT-SPECT and invasive fractional flow reserve (FFR) measurement. Eight-minute stress acquisitions were reformatted into 1.0- and 20-second bins to detect respiratory motion (RM) and patient motion (PM), respectively. RM and PM were quantified and scans were automatically corrected. Total perfusion deficit (TPD) and SPECT interpretation-normal, equivocal, or abnormal-were compared between the noncorrected and corrected scans. Scans with a changed SPECT interpretation were compared with FFR, the reference standard. Average RM was 2.5 ± 0.4 mm and maximal PM was 4.5 ± 1.3 mm. RM correction influenced the diagnostic outcomes in two patients based on TPD changes ≥7% and in nine patients based on changed visual interpretation. In only four of these patients, the changed SPECT interpretation corresponded with FFR measurements. Correction for PM did not influence the diagnostic outcomes. Respiratory motion and patient motion were small. Motion correction did not appear to improve the diagnostic outcome and, hence, the added value seems limited in MPI using CZT-based SPECT cameras.
NASA Astrophysics Data System (ADS)
Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.
2018-05-01
A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets, and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for single-camera analysis, and 0.07 to 0.19 mm for dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
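As an illustration of the block matching step mentioned above, a simple exhaustive-search, sum-of-absolute-differences (SAD) block matcher in Python follows; the block size and search range are arbitrary example values, and the paper's actual algorithm may differ.

```python
# Exhaustive-search block matching with a SAD criterion (illustrative only).
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Return per-block (dy, dx) displacement of curr's best match in prev."""
    h, w = curr.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = curr[y:y + block, x:x + block].astype(np.int32)
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                        cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
                        sad = np.abs(ref - cand).sum()  # sum of absolute differences
                        if sad < best:
                            best, best_dv = sad, (dy, dx)
            vectors[by, bx] = best_dv
    return vectors
```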
GEMINI-TITAN (GT)-10 - MISC. - INFLIGHT (MILKY WAY) - OUTER SPACE
1966-08-01
S66-45314 (19 July 1966) --- Ultraviolet spectra of stars in the region of the Southern Cross. These objective-grating spectra were obtained by astronauts John W. Young and Michael Collins during the Gemini-10 stand-up EVA on July 19, 1966, with a 70mm Maurer camera and its f/3.3 focal length lens. The spectra extend from 2,200 angstroms to about 4,000 angstroms. The spacecraft was docked to the horizon-stabilized Agena-10, thus giving an apparent field of rotation resulting from the four-degree-per-minute orbital motion during the 20-second exposure time. Photo credit: NASA
GEMINI-TITAN (GT)-10 - MISC. - INFLIGHT (MILKY WAY) - OUTER SPACE
1966-08-01
S66-45328 (19 July 1966) --- Ultraviolet spectra of stars in the Carina-Vela region of the southern Milky Way. These objective-grating spectra were obtained by astronauts John W. Young and Michael Collins during the Gemini-10 stand-up EVA on July 19, 1966, with a 70mm Maurer camera and its f/3.3 focal length lens. The spectra extend from 2,200 angstroms to about 4,000 angstroms. The spacecraft was docked to the horizon-stabilized Agena-10, thus giving an apparent field of rotation resulting from the four-degree-per-minute orbital motion during the 20-second exposure time. Photo credit: NASA
Instrument Pointing Control System for the Stellar Interferometry Mission - Planet Quest
NASA Technical Reports Server (NTRS)
Brugarolas, Paul B.; Kang, Bryan
2006-01-01
This paper describes the high precision Instrument Pointing Control System (PCS) for the Stellar Interferometry Mission (SIM) - Planet Quest. The PCS provides front-end pointing, compensation for spacecraft motion, and feedforward stabilization, which are needed for proper interference. Optical interferometric measurements require very precise pointing (0.03 arcsec, 1-sigma radial) for maximizing the interference pattern visibility. This requirement is achieved by fine pointing control of articulating pointing mirrors with feedback from angle tracking cameras. The overall pointing system design concept is presented. Functional requirements and an acquisition concept are given. Guide and science pointing control loops are discussed. Simulation analyses demonstrate the feasibility of the design.
Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2013-03-01
Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state-of-the-art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from the optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine features [7] were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10] and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known. Dense optical flow results indicated that Brox's method was superior to our multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm; our approach was computationally more efficient, averaging 7.2 sec vs. 38 sec per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
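As a rough illustration of the two ingredients named above, the sketch below computes a dense flow field (OpenCV's Farneback method standing in for the paper's multi-scale flow) and fits a focus of expansion by linear least squares under a dominant-translation assumption; the frame file names and the confidence threshold are placeholders.

```python
# Dense optical flow plus a linear least-squares focus-of-expansion (FOE) fit.
import cv2
import numpy as np

prev = cv2.imread("colon_t0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
curr = cv2.imread("colon_t1.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=4, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)

h, w = flow.shape[:2]
ys, xs = np.mgrid[0:h, 0:w]
u, v = flow[..., 0].ravel(), flow[..., 1].ravel()
x, y = xs.ravel().astype(np.float64), ys.ravel().astype(np.float64)

m = np.hypot(u, v) > 0.5          # keep confident flow vectors only
# Under pure translation each flow vector lies on a ray through the FOE:
# v*(x - x0) - u*(y - y0) = 0  =>  [v, -u] @ [x0, y0] = v*x - u*y
A = np.stack([v[m], -u[m]], axis=1)
b = v[m] * x[m] - u[m] * y[m]
foe, *_ = np.linalg.lstsq(A, b, rcond=None)
print("focus of expansion (pixels):", foe)
```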
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Wenpei; Wu, Jianbo; Yoon, Aram
Atomic motion at grain boundaries is essential to microstructure development, growth and stability of catalysts and other nanostructured materials. However, boundary atomic motion is often too fast to observe in a conventional transmission electron microscope (TEM) and too slow for ultrafast electron microscopy. We report on the entire transformation process of strained Pt icosahedral nanoparticles (ICNPs) into larger FCC crystals, captured at 2.5 ms time resolution using a fast electron camera. Results show slow diffusive dislocation motion at nm/s inside ICNPs and fast surface transformation at μm/s. By characterizing nanoparticle strain, we show that the fast transformation is driven by inhomogeneous surface stress. Interaction with pre-existing defects led to the slowdown of the transformation front inside the nanoparticles. Particle coalescence, assisted by oxygen-induced surface migration at T ≥ 300°C, also played a critical role. Thus, by studying the transformation of Pt ICNPs at high temporal and spatial resolution, we obtain critical insights into the transformation mechanisms of strained Pt nanoparticles.
NASA Astrophysics Data System (ADS)
Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing
2008-02-01
Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with inbuilt cameras. In this paper, we compare the performance of three feature or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare accuracy in tracking, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.
Stability analysis for a multi-camera photogrammetric system.
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-08-18
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
The application of holography as a real-time three-dimensional motion picture camera
NASA Technical Reports Server (NTRS)
Kurtz, R. L.
1973-01-01
A historical introduction to holography is presented, as well as a basic description of sideband holography for stationary objects. A brief theoretical development of both time-dependent and time-independent holography is also provided, along with an analytical and intuitive discussion of a unique holographic arrangement which allows the resolution of front surface detail from an object moving at high speeds. As an application of such a system, a real-time three-dimensional motion picture camera system is discussed and the results of a recent demonstration of the world's first true three-dimensional motion picture are given.
Image deblurring in smartphone devices using built-in inertial measurement sensors
NASA Astrophysics Data System (ADS)
Šindelář, Ondřej; Šroubek, Filip
2013-01-01
Long-exposure handheld photography is degraded by blur, which is difficult to remove without prior information about the camera motion. In this work, we utilize the inertial sensors (accelerometers and gyroscopes) in modern smartphones to record the exact motion trajectory of the smartphone camera during exposure and remove blur from the resulting photograph based on the recorded motion data. The whole system is implemented on the Android platform and embedded in the smartphone device, resulting in a close-to-real-time deblurring algorithm. The performance of the proposed system is demonstrated in real-life scenarios.
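A hedged sketch of the core idea follows: integrating gyroscope samples recorded during the exposure into a pixel-space trajectory and rasterizing it as a blur kernel (PSF) for non-blind deconvolution. The sensor data, sampling rate and focal length below are synthetic placeholders, not the paper's values.

```python
# Gyro samples -> blur trajectory -> point spread function (illustrative).
import numpy as np

focal_px = 3200.0                      # focal length in pixels (assumed)
dt = 1 / 200.0                         # gyro sampling interval (assumed)
rng = np.random.default_rng(0)
gyro = rng.normal(0.0, 0.05, size=(40, 2))   # synthetic (wx, wy) in rad/s

theta = np.cumsum(gyro * dt, axis=0)   # integrated rotation angles (rad)
traj = focal_px * theta                # small-angle pixel displacement

# rasterize the centered trajectory into a normalized PSF kernel
k = 31
psf = np.zeros((k, k))
pts = np.round(traj - traj.mean(axis=0)).astype(int) + k // 2
pts = pts[(pts[:, 0] >= 0) & (pts[:, 0] < k) &
          (pts[:, 1] >= 0) & (pts[:, 1] < k)]
for xpix, ypix in pts:
    psf[ypix, xpix] += 1.0
psf /= psf.sum()
# psf can now be fed to any non-blind deconvolution routine (e.g. Wiener).
```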
Image registration for multi-exposed HDRI and motion deblurring
NASA Astrophysics Data System (ADS)
Lee, Seok; Wey, Ho-Cheon; Lee, Seong-Deok
2009-02-01
In multi-exposure image fusion tasks, alignment is an essential prerequisite to prevent ghost artifacts after blending. Compared to the usual matching problem, registration is more difficult when each image is captured under different photographing conditions. In HDR imaging, we use long- and short-exposure images, which have different brightness and contain over-/under-saturated regions. In the motion deblurring problem, we use a blurred and noisy image pair, and the amount of motion blur varies from one image to another due to the different exposure times. The main difficulty is that the luminance levels of the two images are not in a linear relationship; we cannot perfectly equalize or normalize the brightness of each image, and this leads to unstable and inaccurate alignment results. To solve this problem, we applied a probabilistic measure, mutual information, to represent the similarity between images after alignment. In this paper, we describe the characteristics of multi-exposed input images from the registration point of view and analyze the magnitude of camera hand shake. By exploiting the independence of mutual information from luminance, we propose a fast and practically useful image registration technique for multiple capturing. Our algorithm can be applied to extreme HDR scenes and motion blurred scenes with over a 90% success rate, and its simplicity enables it to be embedded in digital cameras and mobile camera phones. The effectiveness of our registration algorithm is examined by various experiments on real HDR or motion deblurring cases using a hand-held camera.
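The similarity measure at the heart of this approach can be written compactly; the following sketch computes mutual information from a joint histogram, which remains meaningful when the two exposures are not related by a linear brightness mapping (the bin count is an arbitrary choice).

```python
# Mutual information between two grayscale images via a joint histogram.
import numpy as np

def mutual_information(a, b, bins=64):
    """a, b: equally sized grayscale images (uint8 or float)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Registration then searches over candidate shifts/warps for the transform
# that maximizes this score between the long- and short-exposure frames.
```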
Proposed patient motion monitoring system using feature point tracking with a web camera.
Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi
2017-12-01
Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial positions of the markers were used by the program to determine the marker positions in all subsequent frames. The software generates a text file that contains the calculated motion for each frame and saves the video as a compressed audio video interleave (AVI) file. We propose a patient motion monitoring system using a web camera, which is simple and convenient to set up, to increase the safety of treatment delivery.
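A minimal sketch of such a monitoring loop, using OpenCV's pyramidal Lucas-Kanade tracker as the abstract describes, is shown below; the camera index, the corner-based feature initialization and the 3-pixel alarm threshold are illustrative assumptions rather than the authors' settings.

```python
# Web-camera feature tracking with pyramidal Lucas-Kanade and a motion alarm.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # web camera (assumed index 0)
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# initial feature points (here auto-detected corners stand in for the
# user-defined points of the original system)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=20,
                              qualityLevel=0.01, minDistance=10)
ref = pts.copy()                               # reference (initial) positions

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    # largest displacement of any point from its initial position
    motion = np.linalg.norm((pts - ref).reshape(-1, 2), axis=1).max()
    color = (0, 0, 255) if motion > 3.0 else (255, 0, 0)  # red = warning
    for p in pts.reshape(-1, 2):
        cv2.circle(frame, (int(p[0]), int(p[1])), 4, color, -1)
    cv2.imshow("monitor", frame)
    prev_gray = gray
    if cv2.waitKey(1) == 27:                   # Esc quits
        break
```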
The Information Available to a Moving Observer on Shape with Unknown, Isotropic BRDFs.
Chandraker, Manmohan
2016-07-01
Psychophysical studies show that motion cues inform about shape even with unknown reflectance. Recent works in computer vision have considered shape recovery for an object of unknown BRDF using light source or object motions. This paper proposes a theory that addresses the remaining problem of determining shape from the (small or differential) motion of the camera, for unknown isotropic BRDFs. Our theory derives a differential stereo relation that relates camera motion to surface depth, which generalizes traditional Lambertian assumptions. Under orthographic projection, we show that differential stereo may not determine shape for general BRDFs, but suffices to yield an invariant for several restricted (still unknown) BRDFs exhibited by common materials. For the perspective case, we show that differential stereo yields the surface depth for unknown isotropic BRDF and unknown directional lighting, while additional constraints are obtained with restrictions on the BRDF or lighting. The limits imposed by our theory are intrinsic to the shape recovery problem and independent of the choice of reconstruction method. We also illustrate trends shared by theories on shape from differential motion of the light source, object or camera, to relate the hardness of surface reconstruction to the complexity of the imaging setup.
Linearized motion estimation for articulated planes.
Datta, Ankur; Sheikh, Yaser; Kanade, Takeo
2011-04-01
In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
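The linear-algebra core named above, least squares under linearized equality constraints solved through a Karush-Kuhn-Tucker system, can be sketched in a few lines; the matrices below are toy placeholders for the stacked motion equations and articulation constraints.

```python
# Equality-constrained linear least squares via the KKT system.
import numpy as np

def constrained_lstsq(A, b, C, d):
    """minimize ||A x - b||^2 subject to C x = d, via the KKT system:
       [ 2 A^T A   C^T ] [x]   [ 2 A^T b ]
       [   C        0  ] [l] = [    d    ]   (l = Lagrange multipliers)"""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]

# Toy usage: two planes' 1-D motions constrained to be equal (an
# "articulation"); the unconstrained fits would disagree, the KKT
# solution returns the common optimum x1 = x2 = 2.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
b = np.array([1.0, 3.0, 2.0])
C = np.array([[1.0, -1.0]])
d = np.array([0.0])
print(constrained_lstsq(A, b, C, d))
```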
Use of camera drive in stereoscopic display of learning contents of introductory physics
NASA Astrophysics Data System (ADS)
Matsuura, Shu
2011-03-01
Simple 3D physics simulations with stereoscopic display were created as part of introductory physics e-Learning. First, the cameras viewing the 3D world can be made controllable by the user. This enables observation of the system and the motions of objects from any position in the 3D world. Second, cameras can be attached to one of the moving objects in the simulation so as to observe the relative motion of the other objects. With this option, it was found that users perceive velocity and acceleration more sensitively on a stereoscopic display than on a non-stereoscopic 3D display. Simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were displayed in parallel in the same web page. For observation of the stereogram, the images of the two viewports were superimposed using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to better perceive the characteristics of motion.
Human detection and motion analysis at security points
NASA Astrophysics Data System (ADS)
Ozer, I. Burak; Lv, Tiehan; Wolf, Wayne H.
2003-08-01
This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real-time.
Motion parallax in immersive cylindrical display systems
NASA Astrophysics Data System (ADS)
Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.
2012-03-01
Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Therefore, tracking the observer viewpoint has become inevitable in immersive virtual reality (VR) systems (cylindrical screens, CAVE, head mounted displays) used e.g. in the automotive industry (style reviews, architecture design, ergonomics studies) or in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g. vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to an illusory perception of unstable environments, unless a non-unity scale factor is applied to recorded head movements. Besides, cylindrical screens are usually used with static observers due to image distortions when rendering images for viewpoints different from a sweet spot. We developed a technique to compensate these non-linear visual distortions in real time, in an industrial VR setup based on a cylindrical screen projection system. Additionally, to evaluate the amount of discrepancy between visual and extra-retinal cues tolerated without perceptual distortions, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced in this system. The influence of this artificial gain was measured on the gait stability of free-standing participants. Results indicate that gains below unity significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.
Visualizing the history of living spaces.
Ivanov, Yuri; Wren, Christopher; Sorokin, Alexander; Kaur, Ishwinder
2007-01-01
The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.
Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design
2015-10-01
the study. This equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system, which was not part...also completed for relevant members of the study team. 4. The head-mounted camera setup has been established (a modified GoPro Hero 3 with external
Clinical Gait Evaluation of Patients with Lumbar Spine Stenosis.
Sun, Jun; Liu, Yan-Cheng; Yan, Song-Hua; Wang, Sha-Sha; Lester, D Kevin; Zeng, Ji-Zhou; Miao, Jun; Zhang, Kuan
2018-02-01
The third generation Intelligent Device for Energy Expenditure and Activity (IDEEA3, MiniSun, CA) has been developed for clinical gait evaluation, and this study was designed to evaluate the accuracy and reliability of IDEEA3 for gait measurement in lumbar spinal stenosis (LSS) patients. Twelve healthy volunteers were recruited to compare gait cycle, cadence, step length, velocity, and number of steps between a motion analysis system and a high-speed video camera. Twenty hospitalized LSS patients were recruited for the comparison of the five parameters between the IDEEA3 and a GoPro camera. Paired t-tests, the intraclass correlation coefficient (ICC), the concordance correlation coefficient (CCC), and Bland-Altman plots were used for the data analysis. The ratios of GoPro camera results to motion analysis system results, and the ratios of IDEEA3 results to GoPro camera results, were all around 1.00. All P-values of paired t-tests for gait cycle, cadence, step length, and velocity were greater than 0.05, while all the ICC and CCC results were above 0.950 with P < 0.001. The measurements of gait cycle, cadence, step length, velocity, and number of steps with the GoPro camera are highly consistent with those of the motion analysis system, and the IDEEA3 measurements are consistent with the GoPro camera measurements. IDEEA3 can therefore be effectively used for gait measurement in LSS patients. © 2018 Chinese Orthopaedic Association and John Wiley & Sons Australia, Ltd.
Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.
Gakne, Paul Verlaine; O'Keefe, Kyle
2018-04-17
This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions in a deep urban canyon, even in conditions with fewer than four GNSS satellites, with accuracy better than the GNSS-only and the loosely-coupled GNSS/vision solutions by 20 percent and 82 percent (in the worst case), respectively.
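A minimal sketch of the satellite-rejection step described above: project each satellite's azimuth/elevation into the segmented upward-facing image and keep it only if it lands on a sky pixel. The equidistant fisheye model, the 90-degree field of view, and all names here are assumptions for illustration, not the authors' code.

```python
import numpy as np

def satellite_in_sky(sky_mask, az_deg, el_deg, heading_deg=0.0):
    """Return True if a satellite at (azimuth, elevation) projects onto a
    'sky' pixel of the segmented image (sky_mask: HxW bool, True = sky).
    Assumes an equidistant fisheye with ~90 deg total field of view."""
    h, w = sky_mask.shape
    cx, cy = w / 2.0, h / 2.0
    half_fov_deg = 45.0                       # assumed: total FOV ~90 degrees
    r_max = min(cx, cy)
    zenith = 90.0 - el_deg                    # angle from straight up
    r = zenith / half_fov_deg * r_max         # equidistant radial mapping
    theta = np.deg2rad(az_deg - heading_deg)  # azimuth relative to the vehicle
    u = int(round(cx + r * np.sin(theta)))
    v = int(round(cy - r * np.cos(theta)))
    if not (0 <= u < w and 0 <= v < h):
        return False                          # satellite outside the camera FOV
    return bool(sky_mask[v, u])               # True -> open-sky line of sight

mask = np.ones((480, 640), dtype=bool)        # toy mask: everything is sky
print(satellite_in_sky(mask, az_deg=120.0, el_deg=70.0))
```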
A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)
NASA Astrophysics Data System (ADS)
Wan, Chao; Yuan, Fuh-Gwo
2017-04-01
In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time-consuming when inspecting an entire structure. With the popularity of digital cameras and the development of computer vision technology, video cameras offer a viable measurement capability, including higher spatial resolution, remote sensing, and low cost. In this study, a damage detection method based on a high-speed camera is proposed. The system setup comprises a high-speed camera and a line laser, which can capture the out-of-plane displacement of a cantilever beam. A cantilever beam with an artificial crack was excited and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work are discussed.
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
2007-09-01
...the projective camera matrix (P), which is a 3x4 matrix that represents both the intrinsic and extrinsic parameters of a camera. It is used to... K contains the intrinsic parameters of the camera and [R | t] represents the extrinsic parameters of the camera. By definition, the extrinsic... extrinsic parameters are known then the camera is said to be calibrated. If only the intrinsic parameters are known, then the projective camera can...
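To make the decomposition concrete, here is a small NumPy sketch composing P = K[R | t] and projecting a homogeneous world point; all numeric values are illustrative.

```python
import numpy as np

# Intrinsic matrix K (focal lengths and principal point are example values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                         # extrinsic rotation (world -> camera)
t = np.array([[0.0], [0.0], [5.0]])   # extrinsic translation

P = K @ np.hstack([R, t])             # 3x4 projective camera matrix P = K [R | t]

X = np.array([0.5, -0.2, 10.0, 1.0])  # homogeneous world point
x = P @ X
print(x[:2] / x[2])                   # pixel coordinates after perspective division
```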
Chen, Chia-Hsiung; Azari, David; Hu, Yu Hen; Lindstrom, Mary J.; Thelen, Darryl; Yen, Thomas Y.; Radwin, Robert G.
2015-01-01
Objective: Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Background: Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross-correlation template-matching algorithm for tracking a region of interest on the upper extremities. Methods: Ten participants performed a paced load transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N and 17.8 N). Speed and acceleration measured from 2D video were compared against ground-truth measurements using 3D infrared motion capture. Results: The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed and 591 mm/s² for acceleration, and less than 93 mm/s for speed and 656 mm/s² for acceleration when camera pan and tilt were within ±30 degrees. Conclusion: Single-camera 2D video had sufficient accuracy (<100 mm/s) for evaluating HAL. Practitioner Summary: This study demonstrated that, compared against 3D motion capture for a simulated repetitive motion task, 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Governmental Industrial Hygienists Threshold Limit Value® for repetitive motion, provided the camera is located within ±30 degrees of the plane of motion. PMID:25978764
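The cross-correlation template matching described above is available off the shelf in OpenCV; the sketch below tracks a region of interest across frames and differentiates its position into speed. The names, the normalized-correlation variant, and the missing pixel-to-mm calibration are assumptions for illustration, not the study's code.

```python
import cv2
import numpy as np

def track_roi_speed(frames, template, fps):
    """Track a grayscale template by normalized cross-correlation and return
    per-frame speed in pixels/s (the study reported mm/s, which additionally
    requires a pixel-to-mm calibration not shown here)."""
    positions = []
    for frame in frames:                      # frames: iterable of BGR images
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(scores)   # best-matching location
        positions.append(max_loc)
    positions = np.asarray(positions, dtype=float)
    # Finite-difference speed between consecutive frames, scaled by frame rate.
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
```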
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.
Ci, Wenyan; Huang, Yingping
2016-10-17
Visual odometry estimates the ego-motion of an agent (e.g., a vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. A key point for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatches of the KLT algorithm, and a space position constraint is imposed to filter out the moving points from the detected point set. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
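The feature-tracking front end of such a pipeline can be sketched with OpenCV's KLT tracker plus RANSAC-based outlier rejection; here a fundamental-matrix fit stands in for the paper's circle matching and space-position constraint, so this is an approximation of the idea rather than the authors' method.

```python
import cv2
import numpy as np

def tracked_inliers(img0, img1):
    """Detect KLT features in grayscale frame img0, track them into img1,
    and keep the RANSAC inliers as correspondences for ego-motion fitting."""
    p0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                                 qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None)
    p0, p1 = p0[status.ravel() == 1], p1[status.ravel() == 1]
    # RANSAC on a fundamental-matrix model prunes mismatched tracks.
    _, mask = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
    keep = mask.ravel() == 1
    return p0[keep], p1[keep]
```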
Projective Structure from Two Uncalibrated Images: Structure from Motion and Recognition
1992-09-01
...correspondence between points in both views more of a problem, and hence, may make the... (...Maybank 1990). The question, therefore, is why look for... the plane is fixed with respect to the camera coordinate frame. A rigid camera motion, there-... (...1987, Faugeras, Luong and Maybank 1992). The problem of... the second reference plane (assuming the four object points Pj, j = 1, ..., 4,...) (Rieger-Lawton 1985, Faugeras and Maybank 1990, Hildreth 1991, Faugeras...
Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher
2016-01-01
Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, overhead squat, and single leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, and these values improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury risk populations.
Phase-stepped fringe projection by rotation about the camera's perspective center.
Huddart, Y R; Valera, J D; Weston, N J; Featherstone, T C; Moore, A J
2011-09-12
A technique to produce phase steps in a fringe projection system for shape measurement is presented. Phase steps are produced by introducing relative rotation between the object and the fringe projection probe (comprising a projector and camera) about the camera's perspective center. Relative motion of the object in the camera image can be compensated, because it is independent of the distance of the object from the camera, whilst the phase of the projected fringes is stepped due to the motion of the projector with respect to the object. The technique was validated with a static fringe projection system by moving an object on a coordinate measuring machine (CMM). The alternative approach, of rotating a lightweight and robust CMM-mounted fringe projection probe, is discussed. An experimental accuracy of approximately 1.5% of the projected fringe pitch was achieved, limited by the standard phase-stepping algorithms used rather than by the accuracy of the phase steps produced by the new technique.
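As an example of the standard phase-stepping algorithms the abstract refers to, the classical four-step algorithm recovers the wrapped phase from four fringe images captured at 90-degree phase steps (a generic formula, not necessarily the exact algorithm used in the paper).

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images at phase steps 0, 90, 180 and
    270 degrees. With I_k = A + B*cos(phi + delta_k), the differences give
    phi = atan2(I4 - I2, I1 - I3). Returns values in (-pi, pi]."""
    I1, I2, I3, I4 = (np.asarray(I, dtype=float) for I in (I1, I2, I3, I4))
    return np.arctan2(I4 - I2, I1 - I3)
```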
Li, Jin; Liu, Zilong; Liu, Si
2017-02-20
In the on-board photographing process of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of the camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid this degradation of the total MTF caused by vibrations, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM can transform platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiment shows the M2052 manganese-copper alloy is good enough to suppress image motion below 125 Hz, the vibration frequency of satellite platforms. The camera optical system has a higher MTF after vibration suppression with the M2052 material than before.
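For context, two standard motion-MTF forms from the imaging literature show how smear and sinusoidal vibration attenuate the MTF at spatial frequency f; the paper's VMTF model may differ in detail, so these are generic background rather than the authors' equations.

```latex
% Linear image smear of extent d during the exposure:
\mathrm{MTF}_{\mathrm{linear}}(f) = \frac{\sin(\pi d f)}{\pi d f}
% High-frequency sinusoidal vibration of amplitude a (many cycles per
% exposure), with J_0 the zeroth-order Bessel function:
\mathrm{MTF}_{\mathrm{sine}}(f) = J_0\!\left(2\pi a f\right)
```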
a Prompt Methodology to Georeference Complex Hypogea Environments
NASA Astrophysics Data System (ADS)
Troisi, S.; Baiocchi, V.; Del Pizzo, S.; Giannone, F.
2017-02-01
Today, complex underground structures and facilities occupy a wide space beneath our cities, and most of them are unsurveyed; cable ducts and drainage systems are no exception. Furthermore, several inspection operations are performed in critical air conditions that do not allow, or make more difficult, a conventional survey. In this scenario, a prompt methodology to survey and georeference such facilities is often indispensable. A vision-based approach is proposed in this paper; the methodology provides a 3D model of the environment and the path followed by the camera using conventional photogrammetric/Structure-from-Motion software tools. The key role is played by the camera lens: a fisheye system was employed to obtain a very wide field of view (FOV) and therefore high overlap among the frames. The camera geometry corresponds to forward motion along the camera axis; consequently, to avoid instability of the bundle adjustment algorithm, a preliminary calibration of the camera was carried out. A specific case study is reported together with the accuracy achieved.
Real-time full-motion color Flash lidar for target detection and identification
NASA Astrophysics Data System (ADS)
Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt
2015-05-01
Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence compared with 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery, the typical point cloud becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich, geolocated/fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera, and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera, effectively creating a high-definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LiDAR system along with typical results over urban and rural areas collected from both rotary and fixed-wing aircraft. We conclude with a discussion of future work.
Heliostat kinematic system calibration using uncalibrated cameras
NASA Astrophysics Data System (ADS)
Burisch, Michael; Gomez, Luis; Olasolo, David; Villasante, Cristobal
2017-06-01
The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. Controlling the heliostats with such precision requires accurate knowledge of the motion of each of them, modeled as a kinematic system. Determining the parameters of this system for each heliostat with a calibration system is crucial for the efficient operation of the solar field. For small heliostats, being able to perform such a calibration in a fast and automatic manner is imperative, as the solar field potentially contains tens or even hundreds of thousands of them. A calibration system which can rapidly recalibrate a whole solar field would also allow reducing costs: heliostats are generally designed to provide stability over a long period of time, and if this requirement can be relaxed, with any occurring error compensated by adapting the parameters of a model, the cost of the heliostats can be reduced. The presented method describes such an automatic calibration system using uncalibrated cameras rigidly attached to each heliostat. The cameras are used to observe targets spread out through the solar field; based on this, the kinematic system of each heliostat can be estimated with high precision. A comparison of this approach to similar solutions shows the viability of the proposed solution.
Real-time tracking using stereo and motion: Visual perception for space robotics
NASA Technical Reports Server (NTRS)
Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann
1994-01-01
The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.
Terrain shape estimation from optical flow, using Kalman filtering
NASA Astrophysics Data System (ADS)
Hoff, William A.; Sklair, Cheryl W.
1990-01-01
As one moves through a static environment, the visual world as projected on the retina seems to flow past. This apparent motion, called optical flow, can be an important source of depth perception for autonomous robots. An important application is in planetary exploration: the landing vehicle must find a safe landing site in rugged terrain, and an autonomous rover must be able to navigate safely through this terrain. In this paper, we describe a solution to this problem. Image edge points are tracked between frames of a motion sequence, and the range to the points is calculated from the displacement of the edge points and the known motion of the camera. Kalman filtering is used to incrementally improve the range estimates to those points, and provide an estimate of the uncertainty in each range. Errors in camera motion and image point measurement can also be modelled with Kalman filtering. A surface is then interpolated to these points, providing a complete map from which hazards such as steeply sloping areas can be detected. Using the method of extended Kalman filtering, our approach allows arbitrary camera motion. Preliminary results of an implementation are presented, and show that the resulting range accuracy is on the order of 1-2% of the range.
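A scalar sketch of the range-refinement idea: under pure lateral camera translation T, a pinhole camera with focal length f sees an edge point at range Z displace by d = fT/Z pixels, so each frame pair yields a range measurement that a one-state Kalman filter can fuse. All numbers, and the static-scene process model, are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def kalman_range_update(z_est, p_est, z_meas, r_meas, q_process=1e-4):
    """One scalar Kalman step refining a range estimate. The scene is static,
    so the prediction keeps the range and inflates variance by q_process."""
    p_pred = p_est + q_process           # predict: range unchanged, less certain
    k = p_pred / (p_pred + r_meas)       # Kalman gain
    z_new = z_est + k * (z_meas - z_est)
    p_new = (1.0 - k) * p_pred
    return z_new, p_new

# Range measurement from one tracked edge point under lateral translation:
# Z ~ f * T / d, with focal length f (pixels), translation T (m) and image
# displacement d (pixels). Illustrative values only.
f, T, d = 700.0, 0.05, 3.5
z, p = 10.0, 25.0                        # initial range estimate and variance
z, p = kalman_range_update(z, p, f * T / d, r_meas=4.0)
print(z, p)                              # refined range and reduced variance
```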
NASA Astrophysics Data System (ADS)
Lee, Victor R.
2015-04-01
Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video, can be deployed in such a way as to support students' participation in practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and the use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role that the teacher plays in supporting students' thoughtful reflection on and attention to slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.
Samba: a real-time motion capture system using wireless camera sensor networks.
Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai
2014-03-20
There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.
Pose-free structure from motion using depth from motion constraints.
Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G
2011-10-01
Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimating the geometry of a scene is to track scene features over several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned and involves far fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point position coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as is done in the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE
Examining wildlife responses to phenology and wildfire using a landscape-scale camera trap network
Miguel L. Villarreal; Leila Gass; Laura Norman; Joel B. Sankey; Cynthia S. A. Wallace; Dennis McMacken; Jack L. Childs; Roy Petrakis
2013-01-01
Between 2001 and 2009, the Borderlands Jaguar Detection Project deployed 174 camera traps in the mountains of southern Arizona to record jaguar activity. In addition to jaguars, the motion-activated cameras, placed along known wildlife travel routes, recorded occurrences of ~ 20 other animal species. We examined temporal relationships of white-tailed deer (Odocoileus...
Lewis, Jesse S.; Gerber, Brian D.
2014-01-01
Motion-activated cameras are a versatile tool that wildlife biologists can use for sampling wild animal populations to estimate species occurrence. Occupancy modelling provides a flexible framework for the analysis of these data, explicitly recognizing that, given a species occupies an area, the probability of detecting it is often less than one. Despite the number of studies using camera data in an occupancy framework, there is only limited guidance in the scientific literature about survey design trade-offs when using motion-activated cameras. A fuller understanding of these trade-offs will allow researchers to maximise available resources and determine whether the objectives of a monitoring program or research study are achievable. We use an empirical dataset collected from 40 cameras deployed across 160 km2 of the Western Slope of Colorado, USA to explore how survey effort (number of cameras deployed and the length of the sampling period) affects the accuracy and precision (i.e., error) of the occupancy estimate for ten mammal and three virtual species. We do this using a simulation approach where species occupancy and detection parameters were informed by empirical data from motion-activated cameras. A total of 54 survey designs were considered by varying combinations of sites (10–120 cameras) and occasions (20–120 survey days). Our findings demonstrate that increasing total sampling effort generally decreases the error associated with the occupancy estimate, but changing the number of sites or the sampling duration can have very different results, depending on whether a species is spatially common or rare (occupancy = ψ) and easy or hard to detect when available (detection probability = p). For rare species with a low probability of detection (e.g., raccoon and spotted skunk), the required survey effort includes maximizing the number of sites and the number of survey days, often to a level that may be logistically unrealistic for many studies. For common species with low detection (e.g., bobcat and coyote), the most efficient sampling approach was to increase the number of occasions (survey days). However, for common species that are moderately detectable (e.g., cottontail rabbit and mule deer), occupancy could reliably be estimated with comparatively low numbers of cameras over a short sampling period. We provide general guidelines for reliably estimating occupancy across a range of terrestrial species (rare to common: ψ = 0.175–0.970, and low to moderate detectability: p = 0.003–0.200) using motion-activated cameras. Wildlife researchers/managers with limited knowledge of the relative abundance and likelihood of detection of a particular species can apply these guidelines regardless of location. We emphasize the importance of prior biological knowledge, defined objectives and detailed planning (e.g., simulating different study-design scenarios) for designing effective monitoring programs and research studies. PMID:25210658
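The survey-design trade-off is easy to explore by simulation, in the spirit of the study's approach: draw a latent occupancy state per camera site and Bernoulli detections per occasion. The parameter values below are illustrative, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_detection_histories(psi, p, n_sites, n_occasions):
    """Simulate camera-trap detection histories: each site is occupied with
    probability psi; an occupied site yields a detection on each survey
    occasion with probability p."""
    z = rng.random(n_sites) < psi                       # latent occupancy state
    y = (rng.random((n_sites, n_occasions)) < p) & z[:, None]
    return y                                            # sites x occasions

# Naive occupancy (share of sites with >= 1 detection) underestimates psi
# when p is low, which is why the occupancy model corrects for detection.
y = simulate_detection_histories(psi=0.6, p=0.1, n_sites=40, n_occasions=60)
print(y.any(axis=1).mean())
```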
Computational cameras for moving iris recognition
NASA Astrophysics Data System (ADS)
McCloskey, Scott; Venkatesha, Sharath
2015-05-01
Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is in overcoming the effects of occlusions, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features of the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting the changes caused by the objects' movement across frames over time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Detection accuracy depends on the speed of the object as well as on variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously; this is applicable, for example, in wireless sensor networks for surveillance or navigation.
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of a stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. We here circumvented these limitations and allowed participants to move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.
Time-of-flight depth image enhancement using variable integration time
NASA Astrophysics Data System (ADS)
Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong
2013-03-01
Time-of-flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate a distance based on a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times, exploiting an image fusion scheme originally proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows the proposed method can effectively remove motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
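One plausible realisation of the joint-histogram step described above: histogram co-located depth pairs from the short- and long-integration images, then map each short-integration depth bin to its most frequent long-integration counterpart. The binning, ranges, and argmax mapping are assumptions for illustration, not the authors' estimator.

```python
import numpy as np

def depth_transfer_function(d_short, d_long, n_bins=256, d_max=5000.0):
    """Estimate a per-bin mapping from short- to long-integration depths
    (in the same units, e.g. mm) via the joint histogram of co-located
    pixels from the two depth images."""
    h, edges, _ = np.histogram2d(d_short.ravel(), d_long.ravel(),
                                 bins=n_bins, range=[[0, d_max], [0, d_max]])
    centers = 0.5 * (edges[:-1] + edges[1:])
    # For each short-integration depth bin, pick the most frequent
    # long-integration depth (argmax over the joint-histogram row).
    return centers, centers[np.argmax(h, axis=1)]
```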
Robot-assisted general surgery.
Hazey, Jeffrey W; Melvin, W Scott
2004-06-01
With the initiation of laparoscopic techniques in general surgery, we have seen a significant expansion of minimally invasive techniques in the last 16 years. More recently, robotic-assisted laparoscopy has moved into the general surgeon's armamentarium to address some of the shortcomings of laparoscopic surgery. AESOP (Computer Motion, Goleta, CA) addressed the issue of visualization as a robotic camera holder. With the introduction of the ZEUS robotic surgical system (Computer Motion), the ability to remotely operate laparoscopic instruments became a reality. US Food and Drug Administration approval in July 2000 of the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA) further defined the ability of a robotic-assist device to address limitations in laparoscopy. This includes a significant improvement in instrument dexterity, dampening of natural hand tremors, three-dimensional visualization, ergonomics, and camera stability. As experience with robotic technology increased and its applications to advanced laparoscopic procedures have become more understood, more procedures have been performed with robotic assistance. Numerous studies have shown equivalent or improved patient outcomes when robotic-assist devices are used. Initially, robotic-assisted laparoscopic cholecystectomy was deemed safe, and now robotics has been shown to be safe in foregut procedures, including Nissen fundoplication, Heller myotomy, gastric banding procedures, and Roux-en-Y gastric bypass. These techniques have been extrapolated to solid-organ procedures (splenectomy, adrenalectomy, and pancreatic surgery) as well as robotic-assisted laparoscopic colectomy. In this chapter, we review the evolution of robotic technology and its applications in general surgical procedures.
Comparison of three different techniques for camera and motion control of a teleoperated robot.
Doisy, Guillaume; Ronen, Adi; Edan, Yael
2017-01-01
This research evaluates new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance and workload metrics, and their evolution as the participants gained experience with the system, were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin
2013-11-01
The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.
MonoSLAM: real-time single camera SLAM.
Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier
2007-06-01
We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles
Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.
2017-01-01
Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (i.e., bridges, buildings, etc.) are often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985
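For the scale-factor step, one common approach (not necessarily the paper's estimator) is to use a structural feature of known physical length visible in the UAV frame; the sketch below converts pixel displacements to metres under that assumption.

```python
import numpy as np

def scale_factor(p1_px, p2_px, known_length_m):
    """Metres-per-pixel scale from two image points bounding a feature of
    known physical length (names and values are illustrative)."""
    pixel_length = np.linalg.norm(np.asarray(p1_px, float) -
                                  np.asarray(p2_px, float))
    return known_length_m / pixel_length

# Example: two girder endpoints 312 px apart that are 1.5 m apart physically.
s = scale_factor((100, 420), (412, 420), 1.5)
print(s)   # multiply measured pixel displacements by s to get metres
```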
NASA Technical Reports Server (NTRS)
Kim, Won S.; Bejczy, Antal K.
1993-01-01
A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds communication time delay has been demonstrated on a large laboratory scale in May 1993, involving the Jet Propulsion Laboratory as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing set-up. The technique is based on a high-fidelity calibration procedure that enables a high-fidelity overlay of 3-D graphics robot arm and object models over given 2-D TV camera images of robot arm and objects. To generate robot arm motions, the operator can confidently interact in real time with the graphics models of the robot arm and objects overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved by this technique for a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin at the insertion of a 45 cm long tool in the servicing task.
Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L
2013-01-01
In this study we investigate the applicability of underwater 3D motion capture based on submerged video cameras, in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment, and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed hand motion trajectories in different swimming styles and qualitatively compared these with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm; 2D plate: 0.73 mm) was comparable to out-of-water results and highly superior to the classical DLT results (9.74 mm). Among all the swimmers, the hand trajectories of the expert swimmer in each style were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in the motion patterns, and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards quantitative 3D underwater motion analysis.
Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles.
Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F
2016-09-16
Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder, so that the motion of the MAV can be estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated into the MAV's navigation system. First, however, the pose between the two sensors must be known; it is obtained with an improved calibration method proposed here. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.
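The P3P solution used above for both calibration and ego-motion estimation is exposed in OpenCV as a solvePnP flag; the sketch below recovers a camera pose from four 3D-to-2D correspondences (OpenCV's P3P variant expects exactly four points; all coordinates and intrinsics are illustrative, not the paper's data).

```python
import cv2
import numpy as np

# Four 3D points (metres, e.g. in the rangefinder frame) and their pixel
# projections, here generated to be consistent with an identity pose.
object_pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.1],
                       [0.0, 0.5, 2.2], [0.5, 0.5, 2.0]], dtype=np.float32)
image_pts = np.array([[320.0, 240.0], [462.9, 240.0],
                      [320.0, 376.4], [470.0, 390.0]], dtype=np.float32)
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, np.zeros(4),
                              flags=cv2.SOLVEPNP_P3P)
print(ok, rvec.ravel(), tvec.ravel())  # camera pose w.r.t. the 3D points
```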
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2011-01-01
A selection of hands-on experiments from different fields of physics, which happen too fast for the eye or ordinary video cameras to properly observe and analyse, is presented. The phenomena are recorded and analysed using modern high-speed cameras. Two types of cameras were used: the first were rather inexpensive consumer products such as Casio…
Kwon, Young-Hoo; Casebolt, Jeffrey B
2006-07-01
One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
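The geometry behind the refraction error can be summarised with Snell's law at the flat water-air interface; the small-angle form below is standard optics background rather than the authors' derivation.

```latex
% Snell's law at the water--air interface bends each ray, so the linear
% pinhole model assumed by the DLT no longer holds:
n_w \sin\theta_w = n_a \sin\theta_a , \qquad n_w \approx 1.33,\; n_a \approx 1.00
% For near-paraxial rays an underwater point appears at a reduced distance
% from the interface, which produces the systematic reconstruction error:
d_{\mathrm{apparent}} \approx \frac{n_a}{n_w}\, d_{\mathrm{true}}
\approx \frac{d_{\mathrm{true}}}{1.33}
```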
Mapping Land and Water Surface Topography with instantaneous Structure from Motion
NASA Astrophysics Data System (ADS)
Dietrich, J.; Fonstad, M. A.
2012-12-01
Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slowly moving objects; however, objects in motion are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks, and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel, and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene, it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include any case where researchers could benefit from temporal surface modeling, such as mass movements, lava flows, and dam-removal monitoring. The camera system used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers. The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows for manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.
Optimising rigid motion compensation for small animal brain PET imaging
NASA Astrophysics Data System (ADS)
Spangler-Bickell, Matthew G.; Zhou, Lin; Kyme, Andre Z.; De Laat, Bart; Fulton, Roger R.; Nuyts, Johan
2016-10-01
Motion compensation (MC) in PET brain imaging of awake small animals is attracting increased attention in preclinical studies since it avoids the confounding effects of anaesthesia and enables behavioural tests during the scan. A popular MC technique is to use multiple external cameras to track the motion of the animal’s head, which is assumed to be represented by the motion of a marker attached to its forehead. In this study we have explored several methods to improve the experimental setup and the reconstruction procedures of this method: optimising the camera-marker separation; improving the temporal synchronisation between the motion tracker measurements and the list-mode stream; post-acquisition smoothing and interpolation of the motion data; and list-mode reconstruction with appropriately selected subsets. These techniques have been tested and verified on measurements of a moving resolution phantom and brain scans of an awake rat. The proposed techniques improved the reconstructed spatial resolution of the phantom by 27% and of the rat brain by 14%. We suggest a set of optimal parameter values to use for awake animal PET studies and discuss the relative significance of each parameter choice.
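A plausible form of the smoothing/interpolation step described above resamples the tracked head poses at the list-mode event times, using linear interpolation for translation and spherical linear interpolation (slerp) for rotation; this is a sketch of the idea, not the authors' exact procedure.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def poses_at_event_times(track_t, track_R, track_r, event_t):
    """Interpolate tracker poses (rotation matrices track_R, translations
    track_r, at times track_t) to list-mode event timestamps event_t.
    event_t must lie within [track_t[0], track_t[-1]]."""
    slerp = Slerp(track_t, Rotation.from_matrix(track_R))
    R_ev = slerp(event_t).as_matrix()            # slerped rotations
    r_ev = np.stack([np.interp(event_t, track_t, track_r[:, i])
                     for i in range(3)], axis=1)  # linear translations
    return R_ev, r_ev
```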
Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.
Quesada, Luis; León, Alejandro J
2012-10-01
Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge of the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three-degrees-of-freedom motion tracking system that requires no knowledge of the target object and only a single low-budget camera of the kind installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, given that it is opaque, evenly colored, sufficiently contrasting with the background in each frame, and that it does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal does not impose additional constraints; therefore, it allows a market-wide implementation of applications that require the estimation of the three position degrees of freedom of an object.
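A minimal sketch of the kind of color-contrast tracking the abstract describes is given below, using OpenCV. The HSV range is an arbitrary placeholder, and using blob area as a proxy for the third (depth) degree of freedom is our assumption rather than the paper's stated estimator.

```python
import cv2
import numpy as np

def track_object(frame_bgr, lo=(40, 60, 60), hi=(80, 255, 255)):
    """Return (x, y, area) of the largest blob inside an HSV color range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)   # most relevant (largest) object
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    # Centroid gives two DOF; blob area can serve as a crude depth proxy.
    return m["m10"] / m["m00"], m["m01"] / m["m00"], cv2.contourArea(c)
```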
Brownian Movement and Avogadro's Number: A Laboratory Experiment.
ERIC Educational Resources Information Center
Kruglak, Haym
1988-01-01
Reports an experimental procedure for studying Einstein's theory of Brownian movement using commercially available latex microspheres and a video camera. Describes how students can monitor sphere motions and determine Avogadro's number. Uses a black and white video camera, microscope, and TV. (ML)
NASA Technical Reports Server (NTRS)
Steele, P.; Kirch, D.
1975-01-01
In 47 men with arteriographically defined coronary artery disease, comparative studies of left ventricular ejection fraction and segmental wall motion were made with radionuclide data obtained from the image-intensifier camera computer system and with contrast cineventriculography. The radionuclide data were digitized, and the images corresponding to left ventricular end-diastole and end-systole were identified from the left ventricular time-activity curve. The left ventricular end-diastolic and end-systolic images were subtracted to form a silhouette difference image which described wall motion of the anterior and inferior left ventricular segments. The image-intensifier camera allows manipulation of dynamically acquired radionuclide data because of the high count rate and consequently improved resolution of the left ventricular image.
Geffers, H; Sigel, H; Bitter, F; Kampmann, H; Stauch, M; Adam, W E
1976-08-01
Camera-Kinematography is a nearly noninvasive method to investigate regional motion of the myocardium, and it allows evaluation of the function of the heart. About 20 min after injection of 15-20 mCi of 99mTc human serum albumin, when the tracer is distributed homogeneously within the blood pool, data acquisition starts. Myocardial wall motion is represented in an appropriate quasi-three-dimensional form. In this representation, scars can be revealed as "silent" (akinetic) regions and aneurysms by asynchronous motion. Time-activity curves for arbitrarily chosen regions can be calculated and give an equivalent for regional volume changes. 16 patients with an old infarction have been investigated. In fourteen cases the location and extent of regions with abnormal motion could be evaluated. Only two cases of a small posterior wall infarction did not show deviations from the normal contraction pattern.
Multisensory visual servoing by a neural network.
Wei, G Q; Hirzinger, G
1999-01-01
Conventional computer vision methods for determining a robot's end-effector motion based on sensory data need sensor calibration (e.g., camera calibration) and sensor-to-hand calibration (e.g., hand-eye calibration). This involves extensive computation and can be difficult, especially when different kinds of sensors are involved. In this correspondence, we present a neural network approach to the motion determination problem without any calibration. Two kinds of sensory data, namely, camera images and laser range data, are used as the input to a multilayer feedforward network to associate the direct transformation from the sensory data to the required motions. This provides a practical sensor fusion method. Using a recursive motion strategy and in terms of a network correction, we relax the requirement for the exactness of the learned transformation. Another important feature of our work is that the goal position can be changed without having to do network retraining. Experimental results show the effectiveness of our method.
CameraHRV: robust measurement of heart rate variability using a camera
NASA Astrophysics Data System (ADS)
Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh
2018-02-01
The inter-beat interval (the time period of the cardiac cycle) changes slightly with every heartbeat; this variation is measured as Heart Rate Variability (HRV). HRV is presumed to arise from interactions between the parasympathetic and sympathetic nervous systems. Therefore, it is sometimes used as an indicator of an individual's stress level. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging photoplethysmography (iPPG) has made vital sign measurements possible using just a video recording of any exposed skin (such as a person's face). The current signal processing methods for extracting HRV using peak detection perform well for contact-based systems but have poor performance for iPPG signals. The main reason for this poor performance is that current methods are sensitive to the large noise sources often present in iPPG data. Further, current methods are not robust to the motion artifacts common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at the low SNR common in iPPG recordings. CameraHRV combines spatial combination and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal. CameraHRV outperforms other current methods of HRV estimation. Ground truth data were obtained from an FDA-approved pulse oximeter for validation purposes. On iPPG data, CameraHRV showed an error of 6 milliseconds for low-motion and varying skin tone scenarios, a 14% improvement in error. In high-motion scenarios like reading, watching, and talking, the error was 10 milliseconds.
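The frequency-demodulation idea, recovering heart rate variability from the instantaneous frequency of the pulse signal, can be sketched with a Hilbert transform. This is a generic illustration, not the CameraHRV implementation; the 30 Hz frame rate and the synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs = 30.0                                           # camera frame rate (Hz), assumed
t = np.arange(0.0, 30.0, 1 / fs)
f_true = 1.2 + 0.05 * np.sin(2 * np.pi * 0.1 * t)   # ~72 bpm with slow variability
phase = 2 * np.pi * np.cumsum(f_true) / fs
ippg = np.cos(phase) + 0.05 * np.random.randn(t.size)   # noisy iPPG surrogate

# Instantaneous frequency from the analytic signal's unwrapped phase.
inst_freq = np.diff(np.unwrap(np.angle(hilbert(ippg)))) * fs / (2 * np.pi)
print("mean heart rate estimate (bpm):", 60 * inst_freq.mean())
```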
The determination of high-resolution spatio-temporal glacier motion fields from time-lapse sequences
NASA Astrophysics Data System (ADS)
Schwalbe, Ellen; Maas, Hans-Gerd
2017-12-01
This paper presents a comprehensive method for the determination of glacier surface motion vector fields at high spatial and temporal resolution. These vector fields can be derived from monocular terrestrial camera image sequences and are a valuable data source for glaciological analysis of the motion behaviour of glaciers. The measurement concepts for the acquisition of image sequences are presented, and an automated monoscopic image sequence processing chain is developed. Motion vector fields can be derived with high precision by applying automatic subpixel-accuracy image matching techniques on grey value patterns in the image sequences. Well-established matching techniques have been adapted to the special characteristics of the glacier data in order to achieve high reliability in automatic image sequence processing, including the handling of moving shadows as well as motion effects induced by small instabilities in the camera set-up. Suitable geo-referencing techniques were developed to transform image measurements into a reference coordinate system. The result of monoscopic image sequence analysis is a dense raster of glacier surface point trajectories for each image sequence. Each translation vector component in these trajectories can be determined with an accuracy of a few centimetres for points at a distance of several kilometres from the camera. Extensive practical validation experiments have shown that motion vector and trajectory fields derived from monocular image sequences can be used for the determination of high-resolution velocity fields of glaciers, including the analysis of tidal effects on glacier movement, the investigation of a glacier's motion behaviour during calving events, the determination of the position and migration of the grounding line and the detection of subglacial channels during glacier lake outburst floods.
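A minimal sketch of subpixel patch matching between consecutive frames is shown below using phase correlation in OpenCV. The paper's actual matcher is an adapted grey-value technique; the window size and placement here are assumptions.

```python
import cv2
import numpy as np

def patch_shift(img0, img1, y, x, size=64):
    """Subpixel (dx, dy) shift of the patch at (y, x) between two frames."""
    p0 = img0[y:y + size, x:x + size].astype(np.float32)
    p1 = img1[y:y + size, x:x + size].astype(np.float32)
    (dx, dy), response = cv2.phaseCorrelate(p0, p1)
    return dx, dy, response   # response can gate unreliable matches
```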
Development of a Sunspot Tracking System
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
1998-01-01
Large solar flares produce a significant amount of energetic particles which pose a hazard for human activity in space. In the hope of understanding flare mechanisms and thus better predicting solar flares, NASA's Marshall Space Flight Center (MSFC) developed an experimental vector magnetograph (EXVM) polarimeter to measure the Sun's magnetic field. The EXVM will be used to perform ground-based solar observations and will provide a proof of concept for the design of a similar instrument for the Japanese Solar-B space mission. The EXVM typically operates for a period of several minutes. During this time there is image motion due to atmospheric fluctuation and telescope wind loading. To optimize the EXVM performance, an image motion compensation device (sunspot tracker) is needed. The sunspot tracker consists of two parts, an image motion determination system and an image deflection system. For image motion determination, a CCD or CID camera is used to digitize an image, then an algorithm is applied to determine the motion. This motion or error signal is sent to the image deflection system, which moves the image back to its original location. Both of these systems are under development. Two algorithms are available for sunspot tracking which require the use of only one row and one column of image data. To implement these algorithms, two identical independent systems are being developed, one system for each axis of motion. Two CID cameras have been purchased; the data from each camera will be used to determine image motion for each direction. The error signal generated by the tracking algorithm will be sent to an image deflection system consisting of an actuator and a mirror constrained to move about one axis. Magnetostrictive actuators were chosen over piezoelectric actuators to move the mirror, due to their larger driving force and larger range of motion. The actuator and mirror mounts are currently under development.
SOFIA Observatory Performance and Characterization
NASA Technical Reports Server (NTRS)
Temi, Pasquale; Miller, Walter; Dunham, Edward; McLean, Ian; Wolf, Jurgen; Becklin, Eric; Bida, Tom; Brewster, Rick; Casey, Sean; Collins, Peter;
2012-01-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) has recently concluded a set of engineering flights for Observatory performance evaluation. These in-flight opportunities have been viewed as a first comprehensive assessment of the Observatory's performance and will be used to address the development activity that is planned for 2012, as well as to identify additional Observatory upgrades. A series of 8 SOFIA Characterization And Integration (SCAI) flights were conducted from June to December 2011. The HIPO science instrument in conjunction with the DSI Super Fast Diagnostic Camera (SFDC) has been used to evaluate pointing stability, including the image motion due to rigid-body and flexible-body telescope modes as well as possible aero-optical image motion. We report on recent improvements in pointing stability achieved by using an Active Mass Damper system installed on the Telescope Assembly. Measurements and characterization of the shear layer and cavity seeing, as well as image quality evaluation as a function of wavelength, have been performed using the HIPO+FLITECAM Science Instrument configuration (FLIPO). A number of additional tests and measurements have targeted basic Observatory capabilities and requirements including, but not limited to, pointing accuracy, chopper evaluation, and imager sensitivity. SCAI activities included in-flight partial Science Instrument commissioning prior to the use of the instruments as measuring engines. This paper reports on the data collected during the SCAI flights and presents current SOFIA Observatory performance and characterization.
Preplanning and Evaluating Video Documentaries and Features.
ERIC Educational Resources Information Center
Maynard, Riley
1997-01-01
This article presents a ten-part pre-production outline and post-production evaluation that helps communications students more effectively improve video skills. Examines camera movement and motion, camera angle and perspective, lighting, audio, graphics, backgrounds and color, special effects, editing, transitions, and music. Provides a glossary…
Computing camera heading: A study
NASA Astrophysics Data System (ADS)
Zhang, John Jiaxiang
2000-08-01
An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
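The key invariance the abstract relies on, that the visual angle between two projection rays is unaffected by camera rotation, is easy to verify numerically. The toy sketch below is an illustration of that fact, not the paper's algorithm.

```python
import numpy as np

def visual_angle(r1, r2):
    """Angle between two projection rays (radians)."""
    c = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(c, -1.0, 1.0))

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

r1 = np.array([0.1, 0.2, 1.0])    # rays through two image points
r2 = np.array([-0.3, 0.1, 1.0])
R = rot_z(0.7)                    # an arbitrary camera rotation
print(visual_angle(r1, r2), visual_angle(R @ r1, R @ r2))   # identical angles
```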
Algorithm for the stabilization of motion of a bounding vehicle in the flight phase
NASA Technical Reports Server (NTRS)
Lapshin, V. V.
1980-01-01
The unsupported phase of motion of a multileg bounding vehicle is examined. An algorithm for stabilization of the angular motion of the vehicle housing by change of the motion of the legs during flight is constructed. The results of mathematical modelling of the stabilization process by computer are presented.
Dimensional coordinate measurements: application in characterizing cervical spine motion
NASA Astrophysics Data System (ADS)
Zheng, Weilong; Li, Linan; Wang, Shibin; Wang, Zhiyong; Shi, Nianke; Xue, Yuan
2014-06-01
The cervical spine is a complicated part of the human body, and its movements take diverse forms. The motion of each vertebral segment is three-dimensional and is reflected in changes of the angles between joints and in displacements along different directions. Under normal conditions, the cervical spine can flex, extend, laterally flex, and rotate. Because there is no relative motion between measuring marks fixed on one segment of a cervical vertebra, a vertebra with three marked points can be treated as a rigid body. A body's motion in space can be decomposed into translation and rotation around a base point. This study concerns the calculation of the three-dimensional coordinates of marked points attached to the human cervical spine by an optical method. These measurements then allow the calculation of motion parameters for every spine segment. For this study, we chose a three-dimensional measurement method based on binocular stereo vision. The object with marked points is placed in front of two CCD cameras mounted in parallel. Each shot yields two parallax images taken by the different cameras, and three-dimensional measurements are then realized according to the principle of binocular vision. This paper describes the layout of the experimental system and the mathematical model used to obtain the coordinates.
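For the parallel two-camera geometry described, depth follows from disparity as Z = fB/d. The sketch below is a minimal illustration; the focal length, baseline, and pixel coordinates are made-up values, and coordinates are assumed to be measured from the principal point.

```python
import numpy as np

def triangulate_parallel(xl, xr, y, f=1200.0, baseline=0.12):
    """3D point (m) from matched pixels in rectified left/right images.

    xl, xr, y are pixel coordinates measured from the principal point.
    """
    d = xl - xr                   # disparity in pixels (positive for xl > xr)
    Z = f * baseline / d          # depth from similar triangles
    return np.array([(xl / f) * Z, (y / f) * Z, Z])   # left-camera frame

# Example: a marker with 24 px disparity lies ~6 m from the cameras.
print(triangulate_parallel(xl=310.0, xr=286.0, y=40.0))
```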
Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation
2004-12-01
…area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. … Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) … can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the …
A deep proper motion catalog within the Sloan digital sky survey footprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munn, Jeffrey A.; Harris, Hugh C.; Tilleman, Trudy M.
2014-12-01
A new proper motion catalog is presented, combining the Sloan Digital Sky Survey (SDSS) with second epoch observations in the r band within a portion of the SDSS imaging footprint. The new observations were obtained with the 90prime camera on the Steward Observatory Bok 90 inch telescope, and the Array Camera on the U.S. Naval Observatory, Flagstaff Station, 1.3 m telescope. The catalog covers 1098 square degrees to r = 22.0, an additional 1521 square degrees to r = 20.9, plus a further 488 square degrees of lesser quality data. Statistical errors in the proper motions range from 5 mas year⁻¹ at the bright end to 15 mas year⁻¹ at the faint end, for a typical epoch difference of six years. Systematic errors are estimated to be roughly 1 mas year⁻¹ for the Array Camera data, and as much as 2–4 mas year⁻¹ for the 90prime data (though typically less). The catalog also includes a second epoch of r band photometry.
Real-time film recording from stroke-written CRT's
NASA Technical Reports Server (NTRS)
Hunt, R.; Grunwald, A. J.
1980-01-01
Real-time simulation studies often require motion-picture recording of events directly from stroke-written cathode-ray tubes (CRT's). Difficulty presented is prevention of "flicker," which results from lack of synchronization between display sequence on CRT and shutter motion of camera. Programmable method has been devised for phasing display sequence to shutter motion, ensuring flicker-free recordings.
Determination of the Static Friction Coefficient from Circular Motion
ERIC Educational Resources Information Center
Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.
2014-01-01
This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera recording at 240 frames s⁻¹, and the…
ERIC Educational Resources Information Center
Bumpus, Minnette A.
2005-01-01
Motion pictures and television shows can provide mediums to facilitate the learning of management and organizational behavior theories and concepts. Although the motion pictures and television shows cited in the literature cover a broad range of cinematic categories, racial inclusion is limited. The objectives of this article are to document the…
The Motion Picture and the Teaching of English.
ERIC Educational Resources Information Center
Sheridan, Marion C.; And Others
Written to help a viewer watch a motion picture perceptively, this book explains the characteristics of the film as an art form and examines the role of motion pictures in the English curriculum. Specific topics covered include (1) the technical aspects of the production of films (the order of "shots," camera angle, and point of view), (2) the…
Video Image Stabilization and Registration (VISAR) Software
NASA Technical Reports Server (NTRS)
1999-01-01
Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
High speed imaging - An important industrial tool
NASA Technical Reports Server (NTRS)
Moore, Alton; Pinelli, Thomas E.
1986-01-01
High-speed photography, which is a rapid sequence of photographs that allows an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography, 16, 35, and 70 mm film and framing rates between 64 and 12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, T; Ayan, A; Cochran, E
Purpose: To assess the performance of Varian's real-time Optical Surface Monitoring System (OSMS) by measuring relative regular and irregular surface detection accuracy in 6 degrees of motion (6DoM), across multiple installations. Methods: Varian's Intracranial SRS Package includes OSMS, which utilizes 3 HD camera/projector pods to map a patient surface, track intra-fraction motion, and gate the treatment beam if motion exceeds a threshold. To evaluate motion-detection accuracy of OSMS, we recorded shifts of a cube-shaped phantom on a single Varian TrueBeam linear accelerator as known displacements were performed incrementally across 6DoM. A subset of these measurements was repeated on identical OSMS installations. Phantom motion was driven using the TrueBeam treatment couch, and incremented across ±2cm in steps of 0.1mm, 1mm, and 1cm in the cardinal planes, and across ±40° in steps of 0.1°, 1°, and 5° in the rotational (couch kick) direction. Pitch and Roll were evaluated across ±2.5° in steps of 0.1° and 1°. We then repeated this procedure with a frameless SRS setup with a head phantom in a QFix Encompass mask. Results: Preliminary data show OSMS is capable of detecting regular-surfaced phantom displacement within 0.03±0.04mm in the cardinal planes, and within 0.01±0.03° rotation across all planes for multiple installations. In a frameless SRS setup, OSMS is accurate to within 0.10±0.07mm and 0.04±0.07° across 6DoM. Additionally, a reproducible “thermal drift” was observed during the first 15min of monitoring each day, and characterized by recording displacement of a stationary phantom each minute for 25min. Drift settled after 15min to an average delta of 0.26±0.03mm and 0.38±0.03mm from the initial capture in the Y and Z directions, respectively. Conclusion: For both regular surfaces and clinical SRS situations, OSMS exceeds quoted detection accuracy. To reduce error, a warm-up period should be employed to allow camera/projector pod thermal stabilization.
STS-41 crew is briefed on camera equipment during training session at JSC
NASA Technical Reports Server (NTRS)
1990-01-01
STS-41 crewmembers are briefed on camera equipment during a training session at JSC. Trainer Judy M. Alexander explains the use of 16mm motion picture equipment to (left to right) Pilot Robert D. Cabana, Mission Specialist (MS) Bruce E. Melnick, and MS Thomas D. Akers.
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
Ultrafast Imaging of Electronic Motion in Atoms and Molecules
2016-01-12
…100 fs. The charge and duration of the electron pulses were measured with a home-made Faraday cup and a laser-triggered streak camera, respectively. Both are retractable and can measure the beam in situ. The gun was shown to generate pulses…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Lin; Kien Ng, Sook; Zhang, Ying
Purpose: Ultrasound is ideal for real-time monitoring in radiotherapy with high soft tissue contrast, non-ionization, portability, and cost effectiveness. Few studies have investigated the clinical application of real-time ultrasound monitoring for abdominal stereotactic body radiation therapy (SBRT). This study aims to demonstrate the feasibility of real-time monitoring of 3D target motion using 4D ultrasound. Methods: An ultrasound probe holding system was designed to allow the clinician to freely move and lock the ultrasound probe. For the phantom study, an abdominal ultrasound phantom was secured on a 2D programmable respiratory motion stage. One side of the stage was elevated relative to the other side to generate 3D motion. The motion stage made periodic breath-hold movements. Phantom movement tracked by an infrared camera was considered ground truth. For the volunteer study, three healthy subjects underwent the same setup used for abdominal SBRT with active breath control (ABC). 4D ultrasound B-mode images were acquired for both phantom and volunteers for real-time monitoring. 10 breath-hold cycles were monitored for each experiment. For the phantom, the target motion tracked by ultrasound was compared with motion tracked by the infrared camera. For the healthy volunteers, the reproducibility of ABC breath-hold was evaluated. Results: The volunteer study showed the ultrasound system fitted well into the clinical SBRT setup. The reproducibility for 10 breath-holds is less than 2 mm in three directions for all three volunteers. For the phantom study, the motion between inspiration and expiration captured by the camera (ground truth) is 2.35±0.02 mm, 1.28±0.04 mm, 8.85±0.03 mm in the LR, AP, and SI directions, respectively. The motion monitored by ultrasound is 2.21±0.07 mm, 1.32±0.12 mm, 9.10±0.08 mm, respectively. The motion monitoring error in any direction is less than 0.5 mm. Conclusion: The volunteer study proved the clinical feasibility of real-time ultrasound monitoring for abdominal SBRT. The phantom and volunteer ABC studies demonstrated sub-millimeter accuracy of 3D motion monitoring.
Optical Mapping of Membrane Potential and Epicardial Deformation in Beating Hearts.
Zhang, Hanyu; Iijima, Kenichi; Huang, Jian; Walcott, Gregory P; Rogers, Jack M
2016-07-26
Cardiac optical mapping uses potentiometric fluorescent dyes to image membrane potential (Vm). An important limitation of conventional optical mapping is that contraction is usually arrested pharmacologically to prevent motion artifacts from obscuring Vm signals. However, these agents may alter electrophysiology, and by abolishing contraction, also prevent optical mapping from being used to study coupling between electrical and mechanical function. Here, we present a method to simultaneously map Vm and epicardial contraction in the beating heart. Isolated perfused swine hearts were stained with di-4-ANEPPS and fiducial markers were glued to the epicardium for motion tracking. The heart was imaged at 750 Hz with a video camera. Fluorescence was excited with cyan or blue LEDs on alternating camera frames, thus providing a 375-Hz effective sampling rate. Marker tracking enabled the pixel(s) imaging any epicardial site within the marked region to be identified in each camera frame. Cyan- and blue-elicited fluorescence have different sensitivities to Vm, but other signal features, primarily motion artifacts, are common. Thus, taking the ratio of fluorescence emitted by a motion-tracked epicardial site in adjacent frames removes artifacts, leaving Vm (excitation ratiometry). Reconstructed Vm signals were validated by comparison to monophasic action potentials and to conventional optical mapping signals. Binocular imaging with additional video cameras enabled marker motion to be tracked in three dimensions. From these data, epicardial deformation during the cardiac cycle was quantified by computing finite strain fields. We show that the method can simultaneously map Vm and strain in a left-sided working heart preparation and can image changes in both electrical and mechanical function 5 min after the induction of regional ischemia. By allowing high-resolution optical mapping in the absence of electromechanical uncoupling agents, the method relieves a long-standing limitation of optical mapping and has potential to enhance new studies in coupled cardiac electromechanics.
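The excitation-ratiometry step can be illustrated with a toy model in which motion artifacts enter both excitation channels as a common gain that cancels in the ratio. The sensitivities and artifact model below are illustrative assumptions, and the alternating cyan/blue frames are treated as simultaneous for simplicity.

```python
import numpy as np

n = 200
vm = np.sin(np.linspace(0, 4 * np.pi, n))       # surrogate membrane potential
motion_gain = 1.0 + 0.3 * np.random.randn(n)    # artifact common to both channels

f_cyan = motion_gain * (1.0 + 0.10 * vm)        # cyan-excited fluorescence
f_blue = motion_gain * (1.0 - 0.05 * vm)        # blue-excited fluorescence

ratio = f_cyan / f_blue                         # common motion gain cancels
print(np.corrcoef(ratio, vm)[0, 1])             # close to 1: Vm is recovered
```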
Exploring the Language of Films.
ERIC Educational Resources Information Center
Roller, George E.
A film study course written for the Dade County, Fla. public schools is described which covers techniques of motion pictures and their historical development. Techniques include the "language of pictures" (distance shots, angle shots, color, lighting, arrangement), the "language of motion" (camera movement, subject movement),…
Present status of the Japanese Venus climate orbiter
NASA Astrophysics Data System (ADS)
Nakamura, M.; Imamura, T.; Abe, T.; Ishii, N.
The code name of the 24th science spacecraft of ISAS/JAXA is Planet-C. It is the first Venus Climate Orbiter (VCO) of Japan. The Ministry of Finance of Japan finally agreed to start the phase B study of VCO in April 2004. We plan 1-2 years of phase B study followed by 2 years of flight model integration. The spacecraft will be launched between 2009 and 2010. After arriving at Venus, 2 years of operation are expected. VCO will complement ESA's Venus Express mission, which has several spectrometers and will reveal the composition of the Venusian atmosphere. VCO, on the other hand, is designed to reveal the details of the atmospheric motion on Venus and approach the dynamics of the Venusian climate. Cooperation between the Japanese VCO and ESA's Venus Express, within a collaborative framework of U.S., European, and Japanese scientists, is very important. Elucidating the driving mechanism of the 4-day super-rotation is one of our main targets. We have 4 cameras to take snapshots of the planet at different wavelengths: the IR1 camera (1 micrometer), the IR2 camera (2.4 micrometers), the LIR camera (10-12 micrometers), and the UVI camera (340 nm). They are attached to the side panel of the 3-axis-stabilized spacecraft and are directed at Venus with the spacecraft's attitude control. Snapshots are expected to be taken every 2 hours. The spacecraft has an orbit of 300 km x 13 Rv (Venus radii) with 172 degrees inclination. The orbital period is 30 hours. The angular position of the spacecraft on this orbit is synchronized for 20 hours around apoapsis with the global atmospheric circulation at an altitude of 50 km, so the snapshots taken every 2 hours will image the same side of the atmosphere. In addition to these 4 cameras, we have a Lightning and Airglow Camera (LAC) in the visible range, which will be operated when the orbiter is close to the planet.
ERIC Educational Resources Information Center
Li, Kun-Hsien; Lou, Shi-Jer; Tsai, Huei-Yin; Shih, Ru-Chu
2012-01-01
This study aims to explore the effects of applying game-based learning to webcam motion sensor games for autistic students' sensory integration training. The research participants were three autistic students aged from six to ten. A webcam camera as the research tool was connected to internet games to engage in motion sensor…
Xu, Yilei; Roy-Chowdhury, Amit K
2007-05-01
In this paper, we present a theory for combining the effects of motion, illumination, 3D structure, albedo, and camera parameters in a sequence of images obtained by a perspective camera. We show that the set of all Lambertian reflectance functions of a moving object, at any position, illuminated by arbitrarily distant light sources, lies "close" to a bilinear subspace consisting of nine illumination variables and six motion variables. This result implies that, given an arbitrary video sequence, it is possible to recover the 3D structure, motion, and illumination conditions simultaneously using the bilinear subspace formulation. The derivation builds upon existing work on linear subspace representations of reflectance by generalizing it to moving objects. Lighting can change slowly or suddenly, locally or globally, and can originate from a combination of point and extended sources. We experimentally compare the results of our theory with ground truth data and also provide results on real data by using video sequences of a 3D face and the entire human body with various combinations of motion and illumination directions. We also show results of our theory in estimating 3D motion and illumination model parameters from a video sequence.
Simultaneous two-view epipolar geometry estimation and motion segmentation by 4D tensor voting.
Tong, Wai-Shun; Tang, Chi-Keung; Medioni, Gérard
2004-09-01
We address the problem of simultaneous two-view epipolar geometry estimation and motion segmentation from nonstatic scenes. Given a set of noisy image pairs containing matches of n objects, we propose an unconventional, efficient, and robust method, 4D tensor voting, for estimating the unknown n epipolar geometries, and segmenting the static and motion matching pairs into n independent motions. By considering the 4D isotropic and orthogonal joint image space, only two tensor voting passes are needed, and a very high noise to signal ratio (up to five) can be tolerated. Epipolar geometries corresponding to multiple, rigid motions are extracted in succession. Only two uncalibrated frames are needed, and no simplifying assumption (such as affine camera model or homographic model between images) other than the pin-hole camera model is made. Our novel approach consists of propagating a local geometric smoothness constraint in the 4D joint image space, followed by global consistency enforcement for extracting the fundamental matrices corresponding to independent motions. We have performed extensive experiments to compare our method with some representative algorithms to show that better performance on nonstatic scenes are achieved. Results on challenging data sets are presented.
Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles
Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F.
2016-01-01
Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder so that the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated in the MAV’s navigation system. However, first, the knowledge about the pose between both sensors is obtained by proposing an improved calibration method. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results. PMID:27649203
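The pose-from-correspondences step (solving the P3P problem from 3D-to-2D matches) can be sketched with OpenCV's solver. The intrinsics, points, and ground-truth pose below are placeholders, not values from the paper.

```python
import cv2
import numpy as np

K = np.array([[600.0, 0.0, 320.0],     # assumed pinhole intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

obj_pts = np.array([[0.0, 0.0, 0.0],   # 3D points (m), e.g. from the rangefinder
                    [0.2, 0.0, 0.0],
                    [0.0, 0.2, 0.0],
                    [0.2, 0.2, 0.1]])

# Synthesize image observations from a known pose, then recover that pose.
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.1], [0.0], [1.5]])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None, flags=cv2.SOLVEPNP_P3P)
print(ok, rvec.ravel(), tvec.ravel())  # should reproduce the true pose
```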
The phantom robot - Predictive displays for teleoperation with time delay
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Kim, Won S.; Venema, Steven C.
1990-01-01
An enhanced teleoperation technique for time-delayed bilateral teleoperator control is discussed. The control technique selected for time delay is based on the use of a high-fidelity graphics phantom robot that is being controlled in real time (without time delay) against the static task image. Thus, the motion of the phantom robot image on the monitor predicts the motion of the real robot. The real robot's motion will follow the phantom robot's motion on the monitor with the communication time delay implied in the task. Real-time high-fidelity graphics simulation of a PUMA arm is generated and overlaid on the actual camera view of the arm. A simple camera calibration technique is used for calibrated graphics overlay. A preliminary experiment is performed with the predictive display by using a very simple tapping task. The results with this simple task indicate that predictive display enhances the human operator's telemanipulation task performance significantly during free motion when there is a long time delay. It appears, however, that either two-view or stereoscopic predictive displays are necessary for general three-dimensional tasks.
Study of the detail content of Apollo orbital photography
NASA Technical Reports Server (NTRS)
Kinzly, R. E.
1972-01-01
The results achieved during a study of the detail content of Apollo orbital photography are reported. The effects of residual motion smear and of image reproduction processes upon the detail content of lunar surface imagery obtained from the orbiting command module are assessed. Data and conclusions obtained from the Apollo 8, 12, 14 and 15 missions are included. For the Apollo 8, 12 and 14 missions, the bracket-mounted Hasselblad camera had no mechanism internal to the camera for motion compensation. If the motion of the command module were left totally uncompensated, these photographs would exhibit a ground smear varying from 12 to 27 meters depending upon the focal length of the lens and the exposure time. During the photographic sequences, motion compensation was attempted by firing the attitude control system of the spacecraft at a rate compensating for the motion relative to the lunar surface. The residual smear occurring in selected frames of imagery was assessed using edge analysis methods to obtain an achieved modulation transfer function (MTF), which was compared to a baseline MTF.
Dynamics of the formation of an aureole in the bursting of soap films
NASA Astrophysics Data System (ADS)
Liang, N. Y.; Chan, C. K.; Choi, H. J.
1996-10-01
The thickness profiles of the aureole created in the bursting of vertical soap films are studied with a fast line-scan charge-coupled device camera. Detailed dynamics of the aureole are reported. Phenomena of wavelike motions of the bursting rim and detachments of the aureole from the bursting film are also observed. We find that the stability of the aureole increases with the surfactant concentration and is sensitive to the type of surfactant being used. The concentration dependence suggests that the interaction of micelles might be important in the bursting process. Furthermore, the surfactant monolayer in the aureole is found to be highly compressed and behaves like a rigid film. Existing theories of aureole formation cannot account for all the observed phenomena.
Traffic monitoring with distributed smart cameras
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert
2012-01-01
The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Today the automated analysis of traffic situations is still in its infancy--the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software; one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation on a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world coordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for all the different object detection modalities (pedestrians, vehicles) and explains the system setup, its design, and the evaluation results which we have achieved so far.
ERIC Educational Resources Information Center
Ballard, David M.
1990-01-01
Examines the characteristics of three types of motion detectors: Doppler radar, infrared, and ultrasonic wave, and how they are used on school buses to prevent students from being killed by their own school bus. Other safety devices cited are bus crossing arms and a camera monitor system. (MLF)
The algorithm of motion blur image restoration based on PSF half-blind estimation
NASA Astrophysics Data System (ADS)
Chen, Da-Ke; Lin, Zhe
2011-08-01
A novel algorithm for motion-blurred image restoration based on half-blind PSF estimation with the Hough transform is introduced, built on a full analysis of the principle of the TDICCD camera and addressing the problem that using vertical uniform linear motion as the initial PSF value in the IBD algorithm leads to distortion of the restored image. Firstly, a mathematical model of image degradation is established with the a priori information of multi-frame images, and two parameters that have a crucial influence on PSF estimation (motion blur length and angle) are set accordingly. Finally, the restored image is acquired through multiple iterations in the Fourier domain starting from the initial PSF estimate gained by the above method. Experimental results show that the proposed algorithm can not only effectively solve the image distortion problem caused by relative motion between the TDICCD camera and moving objects, but also clearly restore the detail characteristics of the original image.
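A sketch of the standard building blocks implied above, a linear-motion PSF parameterized by blur length and angle followed by Fourier-domain Wiener deconvolution, is given below. This is our assumption of a generic pipeline, not the paper's exact half-blind algorithm.

```python
import numpy as np

def motion_psf(length, angle_deg, size=31):
    """PSF of uniform linear motion: a rotated line segment, normalized."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for r in np.linspace(-length / 2, length / 2, 4 * int(length) + 1):
        yy = int(round(c + r * np.sin(theta)))
        xx = int(round(c + r * np.cos(theta)))
        if 0 <= yy < size and 0 <= xx < size:
            psf[yy, xx] = 1.0
    return psf / psf.sum()

def wiener_deconv(blurred, psf, k=0.01):
    """Fourier-domain Wiener filter with noise-to-signal constant k.

    Note: the output is cyclically shifted by the PSF center offset;
    np.roll can undo this if exact registration matters.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))
```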
Controlling Brownian motion of single protein molecules and single fluorophores in aqueous buffer.
Cohen, Adam E; Moerner, W E
2008-05-12
We present an Anti-Brownian Electrokinetic trap (ABEL trap) capable of trapping individual fluorescently labeled protein molecules in aqueous buffer. The ABEL trap operates by tracking the Brownian motion of a single fluorescent particle in solution, and applying a time-dependent electric field designed to induce an electrokinetic drift that cancels the Brownian motion. The trapping strength of the ABEL trap is limited by the latency of the feedback loop. In previous versions of the trap, this latency was set by the finite frame rate of the camera used for video-tracking. In the present system, the motion of the particle is tracked entirely in hardware (without a camera or image-processing software) using a rapidly rotating laser focus and lock-in detection. The feedback latency is set by the finite rate of arrival of photons. We demonstrate trapping of individual molecules of the protein GroEL in buffer, and we show confinement of single fluorophores of the dye Cy3 in water.
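The feedback principle is easy to simulate: each cycle applies a drift that cancels the most recently observed Brownian displacement, with one cycle of latency. The magnitudes and gain below are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, gain = 10000, 1.0, 0.8        # steps, Brownian kick size, feedback gain

free = np.cumsum(sigma * rng.standard_normal(n))   # untrapped random walk

x, last_obs, xs = 0.0, 0.0, []
for _ in range(n):
    kick = sigma * rng.standard_normal()
    x += kick - gain * last_obs         # drift cancels the last observed position
    last_obs = x                        # observation available one cycle later
    xs.append(x)

print("free RMS:", free.std(), " trapped RMS:", np.std(xs))
```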
A reduced-dimensionality approach to uncovering dyadic modes of body motion in conversations.
Gaziv, Guy; Noy, Lior; Liron, Yuvalal; Alon, Uri
2017-01-01
Face-to-face conversations are central to human communication and a fascinating example of joint action. Beyond verbal content, one of the primary ways in which information is conveyed in conversations is body language. Body motion in natural conversations has been difficult to study precisely due to the large number of coordinates at play. There is need for fresh approaches to analyze and understand the data, in order to ask whether dyads show basic building blocks of coupled motion. Here we present a method for analyzing body motion during joint action using depth-sensing cameras, and use it to analyze a sample of scientific conversations. Our method consists of three steps: defining modes of body motion of individual participants, defining dyadic modes made of combinations of these individual modes, and lastly defining motion motifs as dyadic modes that occur significantly more often than expected given the single-person motion statistics. As a proof-of-concept, we analyze the motion of 12 dyads of scientists measured using two Microsoft Kinect cameras. In our sample, we find that out of many possible modes, only two were motion motifs: synchronized parallel torso motion in which the participants swayed from side to side in sync, and still segments where neither person moved. We find evidence of dyad individuality in the use of motion modes. For a randomly selected subset of 5 dyads, this individuality was maintained for at least 6 months. The present approach to simplify complex motion data and to define motion motifs may be used to understand other joint tasks and interactions. The analysis tools developed here and the motion dataset are publicly available.
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013683 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013687 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013693 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
Imaging of optically diffusive media by use of opto-elastography
NASA Astrophysics Data System (ADS)
Bossy, Emmanuel; Funke, Arik R.; Daoudi, Khalid; Tanter, Mickael; Fink, Mathias; Boccara, Claude
2007-02-01
We present a camera-based optical detection scheme designed to detect the transient motion created by the acoustic radiation force in elastic media. An optically diffusive tissue mimicking phantom was illuminated with coherent laser light, and a high speed camera (2 kHz frame rate) was used to acquire and cross-correlate consecutive speckle patterns. Time-resolved transient decorrelations of the optical speckle were measured as the results of localised motion induced in the medium by the radiation force and subsequent propagating shear waves. As opposed to classical acousto-optic techniques which are sensitive to vibrations induced by compressional waves at ultrasonic frequencies, the proposed technique is sensitive only to the low frequency transient motion induced in the medium by the radiation force. It therefore provides a way to assess both optical and shear mechanical properties.
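The detection principle, a drop in frame-to-frame speckle correlation when the medium moves, can be sketched as follows. The frame data are synthetic and the decorrelation model is an assumption; the experiment used a 2 kHz camera on real speckle.

```python
import numpy as np

def frame_correlation(f0, f1):
    """Zero-lag correlation coefficient between two speckle frames."""
    a = f0.ravel() - f0.mean()
    b = f1.ravel() - f1.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(1)
still = rng.random((64, 64))                       # speckle frame, medium at rest
moved = 0.7 * still + 0.3 * rng.random((64, 64))   # partially decorrelated frame
print(frame_correlation(still, still), frame_correlation(still, moved))
```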
Ultrafast electron microscopy integrated with a direct electron detection camera.
Lee, Young Min; Kim, Young Jae; Kim, Ye-Jin; Kwon, Oh-Hoon
2017-07-01
In the past decade, we have witnessed the rapid growth of the field of ultrafast electron microscopy (UEM), which provides intuitive means to watch atomic and molecular motions of matter. Yet, because of the limited current of the pulsed electron beam resulting from space-charge effects, observations have mainly been made of periodic motions of crystalline structures of hundreds of nanometers or larger, by stroboscopic imaging at high repetition rates. Here, we develop an advanced UEM with robust capabilities for circumventing the present limitations by integrating, for the first time, a direct electron detection camera, which allows for imaging at low repetition rates. This approach is expected to promote UEM to a more powerful platform to visualize molecular and collective motions and dissect fundamental physical, chemical, and materials phenomena in space and time.
Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras
1990-04-01
…poor resolution and a very limited working volume [Wan90]. OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. … [Nor88] Northern Digital. Trade literature on Optotrak, Northern Digital's Three Dimensional Optical Motion Tracking and Analysis System.
Motion Sickness When Driving With a Head-Slaved Camera System
2003-02-01
An HDR imaging method with DTDI technology for push-broom cameras
NASA Astrophysics Data System (ADS)
Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin
2018-03-01
Conventionally, high-dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, it is hard to apply this technique to push-broom remote sensing cameras. For the sake of HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method which can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area-array CMOS (complementary metal oxide semiconductor) sensor with the digital-domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, thereby taking more than one picture with different exposures. A new HDR image is then achieved by fusing the two original images with a simple algorithm. In the experiment, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is relative motion between the camera and the scene.
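The fusion of a short- and long-exposure pair can be sketched generically as below; the paper's own "simple algorithm" is not specified here, so the exposure ratio and saturation weighting are our assumptions.

```python
import numpy as np

def fuse_two_exposures(short_exp, long_exp, ratio=4.0, sat=0.95):
    """Fuse a short/long exposure pair; inputs normalized to [0, 1]."""
    rad_short = short_exp * ratio                       # map short onto long's scale
    w_long = np.clip((sat - long_exp) / sat, 0.0, 1.0)  # down-weight saturated pixels
    return w_long * long_exp + (1.0 - w_long) * rad_short

short_exp = np.array([[0.10, 0.20], [0.05, 0.249]])
long_exp = np.array([[0.40, 0.80], [0.20, 1.00]])       # last pixel saturated
print(fuse_two_exposures(short_exp, long_exp))
```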
Mars Odyssey Observes Martian Moons
2018-02-22
Phobos and Deimos, the moons of Mars, are seen by the Mars Odyssey orbiter's Thermal Emission Imaging System, or THEMIS, camera. The images were taken in visible-wavelength light. THEMIS also recorded thermal-infrared imagery in the same scan. The apparent motion is due to progression of the camera's pointing during the 17-second span of the February 15, 2018, observation, not to motion of the two moons. This was the second observation of Phobos by Mars Odyssey; the first was on September 29, 2017. Researchers have been using THEMIS to examine Mars since early 2002, but the maneuver turning the orbiter around to point the camera at Phobos was developed only recently. The distance to Phobos from Odyssey during the observation was about 3,489 miles (5,615 kilometers). The distance to Deimos from Odyssey during the observation was about 12,222 miles (19,670 kilometers). An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA22248
Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras
NASA Astrophysics Data System (ADS)
Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.
2017-02-01
Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind a flying shuttlecock; they constitute a kind of background noise that makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by the method of stereo imaging with the two high-speed cameras.
Optical Indoor Positioning System Based on TFT Technology.
Gőzse, István
2015-12-24
A novel indoor positioning system is presented in the paper. Similarly to the camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: Apart from its low computational demands, it is insensitive to the disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low.
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
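A minimal sketch of the gaze-centering control law implied above: the offset of the gaze point from the monitor center is turned into pan/tilt rate commands for the camera holder. The gain and dead-band values are illustrative assumptions, not taken from the paper.

def gaze_to_pan_tilt(gaze_px, frame_size, gain=0.1, deadband=40):
    # gaze_px: (x, y) gaze point on the monitor in pixels;
    # frame_size: (width, height) of the video feedback monitor.
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    ex, ey = gaze_px[0] - cx, gaze_px[1] - cy
    pan = gain * ex if abs(ex) > deadband else 0.0    # ignore fixation jitter
    tilt = -gain * ey if abs(ey) > deadband else 0.0
    return pan, tilt                                  # rate commands to the robot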
STS-36 Mission Specialist Thuot operates 16mm camera on OV-104's middeck
1990-03-03
STS-36 Mission Specialist (MS) Pierre J. Thuot operates a 16mm ARRIFLEX motion picture camera mounted on the open airlock hatch via a bracket. Thuot uses the camera to record the activity of his fellow STS-36 crewmembers on the middeck of Atlantis, Orbiter Vehicle (OV) 104. Positioned between the airlock hatch and the starboard wall-mounted sleep restraints, Thuot, wearing a FAIRFAX t-shirt, squints into the camera's eyepiece. Thuot and four other astronauts spent four days, 10 hours and 19 minutes aboard OV-104 for the mission devoted to the Department of Defense (DOD).
STS-36 Mission Specialist Thuot operates 16mm camera on OV-104's middeck
NASA Technical Reports Server (NTRS)
1990-01-01
STS-36 Mission Specialist (MS) Pierre J. Thuot operates a 16mm ARRIFLEX motion picture camera mounted on the open airlock hatch via a bracket. Thuot uses the camera to record the activity of his fellow STS-36 crewmembers on the middeck of Atlantis, Orbiter Vehicle (OV) 104. Positioned between the airlock hatch and the starboard wall-mounted sleep restraints, Thuot, wearing a FAIRFAX t-shirt, squints into the camera's eyepiece. Thuot and four other astronauts spent four days, 10 hours and 19 minutes aboard OV-104 for the mission devoted to the Department of Defense (DOD).
NASA Astrophysics Data System (ADS)
Altug, Erdinc
Our work proposes a vision-based stabilization and output-tracking control method for a model helicopter, as part of our effort to produce a rotorcraft-based autonomous Unmanned Aerial Vehicle (UAV). Because of the desired maneuvering ability, a four-rotor helicopter was chosen as the testbed. In previous research on flying vehicles, vision has usually been used as a secondary sensor. Unlike that work, our goal is to use visual feedback as the main sensor, responsible not only for detecting where the ground objects are but also for helicopter localization. A novel two-camera method is introduced for estimating the full six-degrees-of-freedom (DOF) pose of the helicopter; the system consists of a pan-tilt ground camera and an onboard camera. The pose estimation algorithm is compared through simulation to other methods, such as the four-point and stereo methods, and is shown to be less sensitive to feature-detection errors. Helicopters are highly unstable flying vehicles; although this is good for agility, it makes control harder. To build an autonomous helicopter, two methods of control are studied: one using a series of mode-based, feedback-linearizing controllers and the other using a back-stepping control law. Various simulations with 2D and 3D models demonstrate the implementation of these controllers. We also show global convergence of the 3D quadrotor controller even with large calibration errors or large errors on the image plane. Finally, we present initial flight experiments in which the proposed pose estimation algorithm and nonlinear control techniques were implemented on a remote-controlled helicopter. The helicopter was restricted by a tether to vertical and yaw motions and limited x and y translations.
NASA Astrophysics Data System (ADS)
Laubier, D.; Bodin, P.; Pasquier, H.; Fredon, S.; Levacher, P.; Vola, P.; Buey, T.; Bernardi, P.
2017-11-01
PLATO (PLAnetary Transits and Oscillation of stars) is a candidate for the M3 Medium-size mission of the ESA Cosmic Vision programme (2015-2025 period). It is aimed at detecting Earth-size and Earth-mass planets in the habitable zone of bright stars and characterising them using the transit method and the asteroseismology of their host stars. That means observing more than 100,000 stars brighter than magnitude 11, and more than 1,000,000 brighter than magnitude 13, with a long continuous observing time for 20% of them (2 to 3 years). This yields a need for unusually long-term signal stability. For the brighter stars, the noise requirement is less than 34 ppm hr^(-1/2), from a frequency of 40 mHz down to 20 μHz, including all sources of noise such as the motion of the star images on the detectors and frequency beatings. Those extremely tight requirements result in a payload consisting of 32 synchronised, high-aperture, wide-field-of-view cameras thermally regulated down to -80°C, whose data are combined to increase the signal-to-noise performance. They are split into 4 subsets pointing in 4 directions to widen the total field of view; stars in the centre of that field of view are observed by all 32 cameras. Two extra cameras are used with color filters and provide pointing measurements to the spacecraft Attitude and Orbit Control System (AOCS) loop. The satellite orbits the Sun at the L2 Lagrange point. This paper presents the optical, electronic and electrical, thermal and mechanical designs devised to achieve those requirements, and the results from breadboards developed for the optics, the focal plane, the power supply and the video electronics.
System and method for generating motion corrected tomographic images
Gleason, Shaun S [Knoxville, TN; Goddard, Jr., James S.
2012-05-01
A method and related system for generating motion-corrected tomographic images includes the steps of illuminating a region of interest (ROI) to be imaged, the ROI being part of an unrestrained live subject and having at least three spaced-apart optical markers thereon. Simultaneous images of the markers are acquired from different angles by a first and a second camera. Motion data comprising the 3D position and orientation of the markers relative to an initial reference position are then calculated. Motion-corrected tomographic data are then obtained from the ROI using the motion data, and motion-corrected tomographic images are reconstructed therefrom.
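With at least three non-collinear markers seen by both cameras, the 3D position and orientation relative to the initial reference can be recovered by a least-squares rigid-body fit. A sketch using the standard Kabsch algorithm (one plausible realization, not necessarily the patented method):

import numpy as np

def rigid_transform(ref, cur):
    # Least-squares rotation R and translation t with cur ~ R @ ref + t;
    # ref, cur: N x 3 marker positions, N >= 3 (Kabsch algorithm).
    ref_c = ref - ref.mean(axis=0)
    cur_c = cur - cur.mean(axis=0)
    U, _, Vt = np.linalg.svd(ref_c.T @ cur_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur.mean(axis=0) - R @ ref.mean(axis=0)
    return R, t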
Teasing Apart Complex Motions using VideoPoint
NASA Astrophysics Data System (ADS)
Fischer, Mark
2002-10-01
Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion will be discussed: the two-dimensional motion of an object filmed by a camera that is moving and rotating in the same plane. Methods for extracting the desired object motion will be given, along with suggestions for shooting video clips that are easier to analyze.
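The key correction in such an analysis is mapping positions measured in the moving, rotating camera frame back into the laboratory frame. A minimal 2D sketch, assuming the camera position and rotation angle in each frame are known (e.g., recovered by tracking fixed background points); all names are illustrative.

import numpy as np

def to_lab_frame(p_cam, cam_pos, cam_angle):
    # p_cam: point measured in camera coordinates (2-vector);
    # cam_pos: camera origin in lab coordinates; cam_angle: camera
    # rotation in radians. Returns the point in lab coordinates.
    c, s = np.cos(cam_angle), np.sin(cam_angle)
    R = np.array([[c, -s], [s, c]])   # camera-to-lab rotation
    return cam_pos + R @ p_cam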
Automatic 3D relief acquisition and georeferencing of road sides by low-cost on-motion SfM
NASA Astrophysics Data System (ADS)
Voumard, Jérémie; Bornemann, Perrick; Malet, Jean-Philippe; Derron, Marc-Henri; Jaboyedoff, Michel
2017-04-01
3D terrain relief acquisition is important for a large part of the geosciences. Several methods have been developed to digitize terrain, such as total stations, LiDAR, GNSS and photogrammetry. To digitize road (or rail track) sides over long sections, mobile spatial imaging systems or UAVs are commonly used. In this project, we compare a still fairly new method, the on-motion SfM technique, with traditional terrain digitizing techniques (terrestrial laser scanning, traditional SfM, UAS imaging solutions, GNSS surveying systems and total stations). The on-motion SfM technique generates 3D spatial data by photogrammetric processing of images taken from a moving vehicle. Our mobile system consists of six action cameras placed on a vehicle. Four fisheye cameras mounted on a mast on the vehicle roof are placed 3.2 meters above the ground. Three of them have a GNSS chip providing geotagged images. Two pictures were acquired every second by each camera. 4K-resolution fisheye videos were also used to extract 8.3-Mpx non-geotagged pictures. All these pictures are then processed with the Agisoft PhotoScan Professional software. Results from the on-motion SfM technique are compared with results from classical SfM photogrammetry on a 500-meter-long alpine track, and with mobile laser scanning data on the same road section. First results indicate that slope structures are well observable down to decimetric accuracy. For the georeferencing, the planimetric (XY) accuracy of a few meters is much better than the altimetric (Z) accuracy: there is a Z-coordinate shift of a few tens of meters between the GoPro cameras and the Garmin camera, which makes it necessary to give greater freedom to the altimetric coordinates in the processing software. The benefits of this low-cost on-motion SfM method are: 1) a simple setup to use in the field (easy to switch between vehicle types such as car, train, bike, etc.), 2) a low cost and 3) automatic georeferencing of the 3D point clouds. The main disadvantages are: 1) results that are less accurate than those from a LiDAR system, 2) heavy image processing and 3) a short acquisition range.
ERIC Educational Resources Information Center
Smallman, Kirk
The fundamentals of motion picture photography are introduced with a physiological explanation for the illusion of motion in a film. Film stock formats and emulsions, camera features, and lights are listed and described. Various techniques of exposure control are illustrated in terms of their effects. Photographing action with a stationary or a…
A Vision-Based Motion Sensor for Undergraduate Laboratories.
ERIC Educational Resources Information Center
Salumbides, Edcel John; Maristela, Joyce; Uy, Alfredson; Karremans, Kees
2002-01-01
Introduces an alternative method to determine the mechanics of a moving object that uses computer vision algorithms with a charge-coupled device (CCD) camera as a recording device. Presents two experiments, pendulum motion and terminal velocity, to compare results of the alternative and conventional methods. (YDS)
Automatic acquisition of motion trajectories: tracking hockey players
NASA Astrophysics Data System (ADS)
Okuma, Kenji; Little, James J.; Lowe, David
2003-12-01
Computer systems that can analyze complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in that they contain many cluttered objects of different colors, shapes and sizes, and dynamic in that multiple moving objects interact on a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe, including sports games, air traffic, car traffic, street intersections, and cloud transformations. Our research addresses the challenge of building a descriptive computer system that analyzes scenes of hockey games, where multiple moving players interact with each other on a background that moves constantly due to camera motion. Ultimately, such a system should be able to acquire reliable data by extracting the players' motion as trajectories, query the data by analyzing its descriptive information, and predict the motions of some hockey players based on the results of such queries. Among these three major aspects of the system, we primarily focus on the visual information in the scenes, that is, how to automatically acquire the motion trajectories of hockey players from video. More precisely, we automatically analyze hockey scenes by estimating the parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking the hockey players in those scenes, and constructing a visual description of the data by displaying the players' trajectories. Many technical problems in vision, such as fast and unpredictable player motions and rapid camera motions, make this challenge worth tackling. To the best of our knowledge, no automatic video annotation system for hockey has been developed before. Although there are many obstacles to overcome, our efforts and accomplishments will hopefully establish the infrastructure of an automatic hockey annotation system and become a milestone for research on automatic video annotation in this domain.
Wijenayake, Udaya; Park, Soon-Yong
2017-01-01
Accurate tracking and modeling of internal and external respiratory motion in the thoracic and abdominal regions of the human body is a much-discussed topic in external beam radiotherapy treatment. Errors in target/normal-tissue delineation and dose calculation, and the increased exposure of healthy tissue to high radiation doses, are some of the undesired problems caused by inaccurate tracking of respiratory motion. Many related works have been introduced for respiratory motion modeling, but the majority depend heavily on radiography/fluoroscopy imaging, wearable markers or surgically implanted nodes. We, in this article, propose a new respiratory motion tracking approach that exploits the advantages of an RGB-D camera. First, we create a patient-specific respiratory motion model using principal component analysis (PCA), removing the spatial and temporal noise of the input depth data. This model is then utilized for real-time external respiratory motion measurement with high accuracy. Additionally, we introduce a marker-based depth-frame registration technique that limits the measuring area to an anatomically consistent region, which helps to handle patient movements during treatment. We achieved a 0.97 correlation compared with a spirometer and a 0.53 mm average error relative to a laser line-scanning ground truth. As future work, we will use this accurate measurement of external respiratory motion to generate a correlated motion model that describes the movement of internal tumors. PMID:28792468
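A sketch of the PCA denoising step described above: stack the depth frames as rows, keep only the first few principal modes, and reconstruct. This is generic PCA truncation, not the authors' exact pipeline; the number of retained modes is an illustrative choice.

import numpy as np

def pca_denoise(frames, n_modes=3):
    # frames: T x N array, each row one flattened depth frame.
    mean = frames.mean(axis=0)
    U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
    S[n_modes:] = 0.0                  # discard noise-dominated modes
    return mean + (U * S) @ Vt         # low-rank reconstruction of the sequence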
Motion Evaluation for Rehabilitation Training of the Disabled
NASA Astrophysics Data System (ADS)
Kim, Tae-Young; Park, Jun; Lim, Cheol-Su
In this paper, a motion evaluation technique for rehabilitation training is introduced. Motion recognition technologies have been developed for finding matching motions in a training set, but for evaluating training motion we also need to measure how well, and how much of, the motion has been followed. We employed a finite state machine as the framework for motion evaluation. For similarity analysis, we used weighted angular value differences, although any template matching algorithm may be used. For robustness under illumination changes, IR LEDs and cameras with IR-pass filters were used. The developed technique was successfully used for rehabilitation training of the disabled; therapists appraised the system as practically useful.
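A minimal sketch of the two ingredients named above: a weighted angular-difference similarity measure and a finite state machine whose progress through the key poses of a motion measures how much of it was completed. Thresholds, weights and the state structure are illustrative assumptions.

def similarity(pose, template, weights):
    # Weighted sum of absolute joint-angle differences; smaller = more similar.
    return sum(w * abs(a - b) for a, b, w in zip(pose, template, weights))

class MotionFSM:
    def __init__(self, key_poses, weights, threshold=15.0):
        self.key_poses, self.weights, self.threshold = key_poses, weights, threshold
        self.state = 0                                  # index of next key pose
    def update(self, pose):
        # Advance when the current key pose is matched closely enough.
        if self.state < len(self.key_poses) and \
           similarity(pose, self.key_poses[self.state], self.weights) < self.threshold:
            self.state += 1
        return self.state / len(self.key_poses)        # fraction of motion completed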
NASA Astrophysics Data System (ADS)
Steinmetz, Klaus
1995-05-01
Within the automotive industry, especially in the development and improvement of safety systems, we find many highly accelerated motions that cannot be followed, and consequently cannot be analyzed, by the human eye. For the vehicle safety tests at AUDI, which are performed as crash tests, sled tests and static component tests, 'Stalex', 'Hycam' and 'Locam' cameras are in use. Nowadays automobile production is inconceivable without the use of high-speed cameras.
STS-31 crew activity on the middeck of the Earth-orbiting Discovery, OV-103
1990-04-29
STS031-05-002 (24-29 April 1990) --- A 35mm camera with a "fish eye" lens captured this high-angle image on Discovery's middeck. Astronaut Kathryn D. Sullivan works with the IMAX camera in the foreground, while Astronaut Steven A. Hawley consults a checklist in the corner. An Arriflex motion picture camera records a student ion arc experiment in apparatus mounted on a stowage locker. The experiment was the project of Gregory S. Peterson, currently a student at Utah State University.
Movable Cameras And Monitors For Viewing Telemanipulator
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1993-01-01
Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.
On a novel low cost high accuracy experimental setup for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Discetti, Stefano; Ianiro, Andrea; Astarita, Tommaso; Cardone, Gennaro
2013-07-01
This work deals with the critical aspects of cost reduction in a tomographic PIV setup and with the bias errors introduced into the velocity measurements by the coherent motion of ghost particles. The proposed solution consists of using two independent imaging systems, each composed of three (or more) low-speed single-frame cameras, which can be up to ten times cheaper than double-shutter cameras of the same image quality. Each imaging system is used to reconstruct a particle distribution in the same measurement region, relative to the first and the second exposure, respectively. The reconstructed volumes are then interrogated by cross-correlation to obtain the measured velocity field, as in the standard tomographic PIV implementation. Moreover, unlike standard tomographic PIV, the ghost-particle distributions of the two exposures are uncorrelated, since their spatial distribution depends on camera orientation. For this reason, the proposed solution promises more accurate results, without the bias effect of coherent ghost-particle motion. Guidelines for the implementation and application of the present method are proposed, and its performance is assessed with a parametric study on synthetic experiments. The proposed low-cost system produces much lower modulation than an equivalent three-camera system. Furthermore, the potential accuracy improvement using the Motion Tracking Enhanced MART (Novara et al 2010 Meas. Sci. Technol. 21 035401) is much higher than in the standard implementation of tomographic PIV.
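The interrogation step pairs the two reconstructed volumes by cross-correlation. A minimal sketch of an FFT-based correlation returning the integer-voxel displacement (production codes interrogate small sub-volumes and add sub-voxel peak fitting); periodic boundaries are assumed.

import numpy as np

def displacement(vol_a, vol_b):
    # Peak of the circular cross-correlation gives the shift of vol_b
    # relative to vol_a, in voxels.
    corr = np.fft.ifftn(np.conj(np.fft.fftn(vol_a)) * np.fft.fftn(vol_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above N/2 correspond to negative shifts (FFT wrap-around).
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))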
CCDs in the Mechanics Lab--A Competitive Alternative (Part II).
ERIC Educational Resources Information Center
Pinto, Fabrizio
1995-01-01
Describes a system of interactive astronomy whereby nonscience students are able to acquire their own images from a room remotely linked to a telescope. Briefly discusses some applications of Charge-Coupled Device cameras (CCDs) in teaching free fall, projectile motion, and the motion of the pendulum. (JRH)
Video Analysis of Muscle Motion
ERIC Educational Resources Information Center
Foster, Boyd
2004-01-01
In this article, the author discusses how video cameras can help students in physical education and sport science classes successfully learn and present anatomy and kinesiology content at levels. Video analysis of physical activity is an excellent way to expand student knowledge of muscle location and function, planes and axes of motion, and…
Integrating motion-detection cameras and hair snags for wolverine identification
Audrey J. Magoun; Clinton D. Long; Michael K. Schwartz; Kristine L. Pilgrim; Richard E. Lowell; Patrick Valkenburg
2011-01-01
We developed an integrated system for photographing a wolverine's (Gulo gulo) ventral pattern while concurrently collecting hair for microsatellite DNA genotyping. Our objectives were to 1) test the system on a wild population of wolverines using an array of camera and hair-snag (C&H) stations in forested habitat where wolverines were known to occur, 2)...
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
The recent introduction of inexpensive high-speed cameras offers a new experimental approach to many simple but fast-occurring events in physics. In this paper, the authors present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects…
The World in Slow Motion: Using a High-Speed Camera in a Physics Workshop
ERIC Educational Resources Information Center
Dewanto, Andreas; Lim, Geok Quee; Kuang, Jianhong; Zhang, Jinfeng; Yeo, Ye
2012-01-01
We present a physics workshop for college students to investigate various physical phenomena using high-speed cameras. The technical specifications required, the step-by-step instructions, as well as the practical limitations of the workshop, are discussed. This workshop is also intended to be a novel way to promote physics to Generation-Y…
Analysis of Motorcycle Weave Mode by using Energy Flow Method
NASA Astrophysics Data System (ADS)
Marumo, Yoshitaka; Katayama, Tsuyoshi
The activation mechanism of the motorcycle weave mode is clarified within the framework of the energy flow method, which calculates the energy flow of the mechanical forces in each motion. It is demonstrated that only a few of the roughly 40 mechanical forces affect the stability of the weave mode. Activation of the lateral, yawing and rolling motions destabilizes the weave mode, while activation of the steering motion stabilizes it. A detailed investigation of the energy flow of the steering motion reveals that it plays an important role in clarifying the characteristics of the weave mode: as the steering motion becomes more active, it advances the phase of the front-tire side force, and the weave mode is consequently stabilized. This paper provides a design guide for stabilizing the weave mode while maintaining compatibility with the wobble mode.
Robust Parallel Motion Estimation and Mapping with Stereo Cameras in Underground Infrastructure
NASA Astrophysics Data System (ADS)
Liu, Chun; Li, Zhengning; Zhou, Yuan
2016-06-01
We developed a novel robust motion estimation method for localization and mapping in underground infrastructure using a pre-calibrated rigid stereo camera rig. Localization and mapping in underground infrastructure is important to safety, yet it is also nontrivial, since most underground infrastructure has poor lighting and featureless structure. To overcome these difficulties, we use a parallel system, which is more efficient than the EKF-based SLAM approach since it divides the motion estimation and 3D mapping tasks into separate threads, eliminating the data-association problem that is quite an issue in SLAM. Moreover, the motion estimation thread takes advantage of a state-of-the-art robust visual odometry algorithm, which functions well under low illumination and provides accurate pose information. We designed and built an unmanned vehicle and used it to collect a dataset in an underground garage, on which the parallel system was evaluated. Motion estimation results indicated a relative position error of 0.3%, and 3D mapping results showed a mean position error of 13 cm; off-line processing reduced the position error to 2 cm. This evaluation on actual data shows that our system is capable of robust motion estimation and accurate 3D mapping in poorly illuminated and featureless underground environments.
Techy, Fernando; Mageswaran, Prasath; Colbrunn, Robb W; Bonner, Tara F; McLain, Robert F
2013-05-01
Segmental fixation improves fusion rates and promotes patient mobility by controlling instability after lumbar surgery. Efforts to obtain stability using less invasive techniques have led to the advent of new implants and constructs. A new interspinous fixation device (ISD) has been introduced as a minimally invasive method of stabilizing two adjacent interspinous processes by augmenting an interbody cage in transforaminal interbody fusion. The ISD is intended to replace the standard pedicle screw instrumentation used for posterior fixation. The purpose of this study is to compare the rigidity of these implant systems when supplementing an interbody cage as used in transforaminal lumbar interbody fusion. An in vitro human cadaveric biomechanical study. Seven human cadaver spines (T12 to the sacrum) were mounted in a custom-designed testing apparatus for biomechanical testing using a multiaxial robotic system. A comparison of segmental stiffness was carried out among five conditions: intact spine control; interbody spacer (IBS) alone; interbody cage with ISD; IBS, ISD, and unilateral pedicle screws (unilat); and IBS with bilateral pedicle screws (bilat). An industrial robot (KUKA GmbH, Augsburg, Germany) applied a pure moment (±5 Nm) in flexion-extension (FE), lateral bending (LB), and axial rotation (AR) through an anchor to the T12 vertebral body. The relative vertebral motion was captured using an optoelectronic camera system (Optotrak; Northern Digital, Inc., Waterloo, Ontario, Canada). The load sensor and the camera were synchronized. Maximum rotation was measured at each level and compared with the intact control. Implant constructs were compared with the control and with each other. A statistical analysis was performed using analysis of variance. A comparison between the intact spine and the IBS group showed no significant difference in the range of motion (ROM) in FE, LB, or AR for the operated level, L3-L4. After implantation of the ISD to augment the IBS, there was a significant decrease in the ROM of 74% in FE (p<.001) but no significant change in the ROM in LB and AR. The unilat construct significantly reduced the ROM by 77% compared with FE control (p<.001) and by 55% (p=.002) and 42% (p=.04) in LB and AR, respectively, compared with control. The bilat construct reduced the ROM in FE by 77% (p<.001), LB by 77% (p=.001), and AR by 65% (p=.001) when compared with the control spine. There was no statistically significant difference in the ROM in FE among the stand-alone ISD, unilat, and bilat constructs. However, in both LB and AR, the unilat and the bilat constructs were significantly stiffer (greater reduction in the ROM) than the ISD and IBS combination. The ISD stability in LB and AR was not different from that of the intact control with no instrumentation at all. There was no statistical difference between the stability of the unilat and the bilat constructs in any direction. However, LB and AR in the unilat group produced mean rotations of 3.83°±3.30° and 2.33°±1.33°, respectively, compared with the bilat construct, which limited motion to 1.96°±1.46° and 1.39°±0.73°. There was a trend suggesting that the bilat construct was the most rigid. In FE, the ISD can provide lumbar stability comparable with bilat instrumentation; it provides minimal rigidity in LB and AR when used alone to stabilize the segment after IBS placement.
The unilat and the more typical bilat screw constructs were shown to provide similar levels of stability in all directions after an IBS placement, though the bilat construct showed a trend toward improved stiffness overall.
Stationary motion stability of monocycle on ice surface
NASA Astrophysics Data System (ADS)
Lebedev, Dmitri A.
2018-05-01
The problem of the motion of a one-wheeled vehicle (monocycle) on smooth horizontal ice is considered. The equations of motion are derived in quasi-coordinates in the form of the Euler-Lagrange equations. The set of stationary motions is determined, and the stability of some of them is investigated. The results are compared with those obtained for a similar model of a one-wheeled vehicle moving on a horizontal plane without slipping.
Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle
NASA Astrophysics Data System (ADS)
Ettl, Svenja
2015-04-01
'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.
NASA Technical Reports Server (NTRS)
Rothrock, A M; Spencer, R C; Miller, Cearcy D
1941-01-01
Combustion in a spark-ignition engine was investigated by means of the NACA high-speed motion-picture camera. This camera operates at 40,000 photographs a second and therefore makes possible the study of changes that take place in intervals as short as 0.000025 second. When the motion pictures are projected at the normal speed of 16 frames a second, any rate of movement shown is slowed down 2,500 times. Photographs are presented of normal combustion, of combustion from preignition, and of knock both with and without preignition. The photographs of combustion show that knock may be preceded by a period of exothermic reaction in the end zone that persists for as long as 0.0006 second; the knock itself takes place in 0.00005 second or less.
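The quoted figures are mutually consistent, as this quick check shows (simple arithmetic, not from the report):

$\Delta t = 1/40\,000\ \mathrm{s} = 0.000025\ \mathrm{s} = 25\ \mu\mathrm{s}$ per photograph, and the projection slow-down factor is $40\,000 / 16 = 2500$.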
Scalable Photogrammetric Motion Capture System "mosca": Development and Application
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2015-05-01
A wide variety of applications (from industry to entertainment) needs reliable and accurate 3D information about the motion of an object and its parts. Very often the movement is fast, as in vehicle motion, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes, and vision-based systems have great potential for high accuracy and a high degree of automation owing to progress in image processing and analysis. A scalable, inexpensive motion capture system was developed as a convenient and flexible tool for various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of the captured data. Depending on the application, the system can easily be modified for working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 technical vision cameras to acquire video sequences of object motion. All cameras work in synchronized mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurately calculating the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.
NASA Astrophysics Data System (ADS)
Wang, Yanli; Puria, Sunil; Steele, Charles R.; Ricci, Anthony J.
2018-05-01
Mechanical stimulation of the stereocilia hair bundles of the inner and outer hair cells (IHCs and OHCs, respectively) drives IHC synaptic release and OHC electromotility. The modes of hair-bundle motion can have a dramatic influence on the electrophysiological responses of the hair cells. The in vivo modes of motion are, however, unknown for both IHC and OHC bundles. In this work, we are developing technology to investigate the in situ hair-bundle motion in excised mouse cochleae, for which the hair bundles of the OHCs are embedded in the tectorial membrane but those of the IHCs are not. Motion is generated by pushing onto the stapes at 1 kHz with a glass probe coupled to a piezo stack, and recorded using a high-speed camera at 10,000 frames per second. The motions of individual IHC stereocilia and the cell boundary are analyzed using 2D and 1D Gaussian fitting algorithms, respectively. Preliminary results show that the IHC bundle moves mainly in the radial direction and exhibits a small degree of splay, and that the stereocilia in the second row move less than those in the first row, even in the same focal plane.
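Sub-pixel localization with a Gaussian model can be done cheaply with a three-point estimator on the intensity profile. A 1D sketch (the study's 2D and 1D Gaussian fits are presumably full least-squares fits; this log-parabolic three-point version is a common lightweight variant):

import numpy as np

def gaussian_subpixel_peak(profile):
    # Sub-pixel peak location of a 1D intensity profile from a Gaussian
    # (log-parabola) fit through the maximum and its two neighbours.
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)                       # peak at the border: no fit possible
    ln = np.log(np.maximum(profile[i - 1:i + 2].astype(float), 1e-12))
    denom = ln[0] - 2.0 * ln[1] + ln[2]
    return float(i) if denom == 0.0 else i + 0.5 * (ln[0] - ln[2]) / denom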
Vision-based control for flight relative to dynamic environments
NASA Astrophysics Data System (ADS)
Causey, Ryan Scott
The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capability through more advanced systems that are useful for missions requiring autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent to vision-based state estimation of moving objects with a monocular camera configuration. The process consists of several image processing stages: detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach; the algorithm requires knowledge of the camera motion, a reference motion, and additional feature-point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform, for which the system dynamics are derived to relate vehicle states to image-plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature-point computations and the effects of uncertainty. The third simulation demonstrates the open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.
Evangelista, Dennis J.; Ray, Dylan D.; Hedrick, Tyson L.
2016-01-01
Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts. PMID:27444791
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects
Lambers, Martin; Kolb, Andreas
2017-01-01
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, the automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference and motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single-bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data. PMID:29271888
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.
Bulczak, David; Lambers, Martin; Kolb, Andreas
2017-12-22
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, the automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference and motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single-bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
Photogrammetry System and Method for Determining Relative Motion Between Two Bodies
NASA Technical Reports Server (NTRS)
Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)
2014-01-01
A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.
Keyboard before Head Tracking Depresses User Success in Remote Camera Control
NASA Astrophysics Data System (ADS)
Zhu, Dingyun; Gedeon, Tom; Taylor, Ken
In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two handed joystick control to position and fire the jackhammer, leaving the camera control to either automatic control or require the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue, being a half size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and the use of a Pan-Tilt-Zoom (PTZ) camera. The camera control was via either a keyboard or via head tracking using two different sets of head gestures called “head motion” and “head flicking” for turning camera motion on/off. Our results show that the head motion control was able to provide a comparable performance to using a keyboard, while head flicking was significantly worse. In addition, the sequence of use of the three control methods is highly significant. It appears that use of the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected supports that the worst (by performance) method was disliked by participants. Surprisingly, use of that worst method as the first control method significantly enhanced performance using the other two control methods.
Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge
2014-12-01
Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy; among existing capture systems, optical systems are those with the highest. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented, and its performance and effectiveness are checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimates of the intrinsic and extrinsic parameters are sought using the camera calibration method proposed by Tsai; these parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize an objective function combining two errors: the distance error between two markers placed on a wand, and the position and orientation error of the retroreflective markers of a static calibration object. The true coordinates of the two objects are calibrated on a coordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. The resulting errors are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
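A sketch of the second-stage objective: one residual vector combining wand-length errors and static-object marker errors, minimized over all camera parameters simultaneously. The reconstruct function, which triangulates a marker from all cameras given the current parameter vector, is left abstract, and every name here is illustrative rather than taken from the paper.

import numpy as np
from scipy.optimize import least_squares

def residuals(params, wand_obs, wand_length, static_obs, static_ref, reconstruct):
    res = []
    for obs_a, obs_b in wand_obs:                     # wand marker pairs
        pa, pb = reconstruct(params, obs_a), reconstruct(params, obs_b)
        res.append(np.linalg.norm(pa - pb) - wand_length)
    for obs, ref in zip(static_obs, static_ref):      # static calibration object
        res.extend(reconstruct(params, obs) - ref)    # CMM-calibrated positions
    return np.asarray(res)

# Stage 1 (Tsai) provides params0; stage 2 refines all cameras jointly:
# sol = least_squares(residuals, params0,
#                     args=(wand_obs, L, static_obs, static_ref, reconstruct))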
Real-time image mosaicing for medical applications.
Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth
2007-01-01
In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.
Power estimation of martial arts movement using 3D motion capture camera
NASA Astrophysics Data System (ADS)
Azraai, Nur Zaidi; Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir
2017-06-01
Motion capture (MOCAP) cameras have been widely used in many areas such as biomechanics, physiology, animation and the arts. This project approaches the subject through classical mechanics and extends the application of MOCAP to sports. Most researchers use a force plate, which can only measure the force of impact; we are keen to observe the kinematics of the movement as well. Martial arts are sports that use more than one part of the human body. For this project, the martial art 'Silat' was chosen because of its wide practice in Malaysia. Two performers were selected, one experienced in Silat practice and one with no experience at all, so that the energy and force generated by the performers could be compared. Each performer executed punches with the same posture; two types of punching moves were selected. Before the measurements started, a calibration was performed with a marker-fitted T-stick so that the software knew the area covered by the cameras, reducing analysis errors. A punching bag of mass 60 kg was hung on an iron bar as a target; it was used to determine the impact force of a performer's punch, and optical markers were attached to it so its movement after impact could be observed. Eight cameras were used, two on each wall of a rectangular room of 270 ft2, at different angles; the cameras covered approximately 50 ft2. Only a small area was covered so that less noise would be detected, making the measurements more accurate. Markers were attached along the whole arm under observation. The passive markers used in this project reflect the infrared light generated by the cameras back to the camera sensors, so the marker positions can be detected and shown in the software; using many cameras improves the precision and accuracy of marker localization. The performers' movements were recorded and analyzed using the Cortex motion analysis software, in which the velocity and acceleration of a performer's movement can be measured. With a classical mechanics approach, we estimated the power and force of impact and showed that the experienced performer produces more power and a higher impact force than the inexperienced performer.
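A sketch of the classical-mechanics estimate described above: differentiate the fist marker trajectory twice to obtain velocity and acceleration, then form force F = m*a and instantaneous power P = F·v. The effective striking mass is an assumed parameter, and the function names are illustrative.

import numpy as np

def punch_force_power(positions, dt, mass):
    # positions: N x 3 marker trajectory in metres; dt: frame interval (s);
    # mass: assumed effective striking mass (kg).
    v = np.gradient(positions, dt, axis=0)   # velocity by finite differences
    a = np.gradient(v, dt, axis=0)           # acceleration
    F = mass * a                             # Newton's second law, F = m a
    P = np.einsum('ij,ij->i', F, v)          # instantaneous power, P = F . v
    return F, P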
On the stability of motion of several types of heavy symmetric gyroscopes with damping torques
NASA Astrophysics Data System (ADS)
Ge, Z.-M.; Wu, M.-H.
Sufficient conditions for the stability of motion of several gyroscopes are obtained using Liapunov's direct method. The stability of a 'temporarily' sleeping top with damping torque is considered for the cases of the support being fixed, being in vertical harmonic motion, and being in vertical periodic motion. Sufficient conditions are also obtained for the stability of a heavy symmetric gyroscope with damping torque and motor torque for the cases of regular precession, vertical axis permanent rotation with and without the axis of the outer gimbal being inclined, and the gyroscope being in a Newtonian central gravitational field.
Wright, Cynthia J.; Arnold, Brent L.; Ross, Scott E.
2016-01-01
Context It has been proposed that altered dynamic-control strategies during functional activity such as jump landings may partially explain recurrent instability in individuals with functional ankle instability (FAI). Objective To capture jump-landing time to stabilization (TTS) and ankle motion using a multisegment foot model among FAI, coper, and healthy control individuals. Design Cross-sectional study. Setting Laboratory. Patients or Other Participants Participants were 23 individuals with a history of at least 1 ankle sprain and at least 2 episodes of giving way in the past year (FAI), 23 individuals with a history of a single ankle sprain and no subsequent episodes of instability (copers), and 23 individuals with no history of ankle sprain or instability in their lifetime (controls). Participants were matched for age, height, and weight (age = 23.3 ± 3.8 years, height = 1.71 ± 0.09 m, weight = 69.0 ± 13.7 kg). Intervention(s) Ten single-legged drop jumps were recorded using a 12-camera Vicon MX motion-capture system and a strain-gauge force plate. Main Outcome Measures Mediolateral (ML) and anteroposterior (AP) TTS in seconds, as well as forefoot and hindfoot sagittal- and frontal-plane angles at jump-landing initial contact and at the point of maximum vertical ground reaction force were calculated. Results For the forefoot and hindfoot in the sagittal plane, group differences were present at initial contact (forefoot: P = .043, hindfoot: P = .004). At the hindfoot, individuals with FAI displayed more dorsiflexion than the control and coper groups. Time to stabilization differed among groups (AP TTS: P < .001; ML TTS: P = .040). Anteroposterior TTS was longer in the coper group than in the FAI or control groups, and ML TTS was longer in the FAI group than in the control group. Conclusions During jump landings, copers showed differences in sagittal-plane control, including less plantar flexion at initial contact and increased AP sway during stabilization, which may contribute to increased dynamic stability. PMID:26794631
TH-AB-202-11: Spatial and Rotational Quality Assurance of 6DOF Patient Tracking Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belcher, AH; Liu, X; Grelewicz, Z
2016-06-15
Purpose: External tracking systems used for patient positioning and motion monitoring during radiotherapy are now capable of detecting both translations and rotations (6DOF). In this work, we develop a novel technique to evaluate the 6DOF performance of external motion tracking systems. We apply this methodology to an infrared (IR) marker tracking system and two 3D optical surface mapping systems in a common tumor 6DOF workspace. Methods: An in-house designed and built 6DOF parallel kinematics robotic motion phantom was used to follow input trajectories with sub-millimeter and sub-degree accuracy. The 6DOF positions of the robotic system were then tracked and recorded independently by three optical camera systems. A calibration methodology which associates the motion phantom and camera coordinate frames was first employed, followed by a comprehensive 6DOF trajectory evaluation, which spanned a full range of positions and orientations in a 20×20×16 mm and 5×5×5 degree workspace. The intended input motions were compared to the calibrated 6DOF measured points. Results: The technique found the accuracy of the IR marker tracking system to have maximal root mean square error (RMSE) values of 0.25 mm translationally and 0.09 degrees rotationally, in any one axis, comparing intended 6DOF positions to positions measured by the IR camera. The 6DOF RMSE discrepancy for the first 3D optical surface tracking unit yielded maximal values of 0.60 mm and 0.11 degrees over the same 6DOF volume. An earlier-generation 3D optical surface tracker was observed to have worse tracking capabilities than both the IR camera unit and the newer 3D surface tracking system, with maximal RMSE of 0.74 mm and 0.28 degrees within the same 6DOF evaluation space. Conclusion: The proposed technique was effective at evaluating the performance of 6DOF patient tracking systems. All systems examined exhibited tracking capabilities at the sub-millimeter and sub-degree level within a 6DOF workspace.
Large Scale Structure From Motion for Autonomous Underwater Vehicle Surveys
2004-09-01
Settling dynamics of asymmetric rigid fibers
E.J. Tozzi; C Tim Scott; David Vahey; D.J. Klingenberg
2011-01-01
The three-dimensional motion of asymmetric rigid fibers settling under gravity in a quiescent fluid was experimentally measured using a pair of cameras located on a movable platform. The particle motion typically consisted of an initial transient after which the particle approached a steady rate of rotation about an axis parallel to the acceleration of gravity, with...
Design of a compact low-power human-computer interaction equipment for hand motion
NASA Astrophysics Data System (ADS)
Wu, Xianwei; Jin, Wenguang
2017-01-01
Human-Computer Interaction (HCI) raises demands of convenience, endurance, responsiveness and naturalness. This paper describes the design of a compact, wearable, low-power HCI device applied to gesture recognition. The system combines multi-modal sensing signals, a vision signal and a motion signal, and the equipment carries a depth camera and a motion sensor. The dimensions (40 mm × 30 mm) and structure are compact and portable after tight integration. The system is built on a modular layered framework, which supports real-time collection (60 fps), processing and transmission through synchronized fusion of asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system uses low-power components, manages peripheral states dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes its algorithms using the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm was run on the system. As the results show, the overall energy consumption can be as low as 0.5 W.
Robust object tracking techniques for vision-based 3D motion analysis applications
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.
2016-04-01
Automated and accurate capture of the spatial motion of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the acquired spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for highly accurate 3D measurements. A set of algorithms was developed and tested, both for detecting, identifying and tracking similar targets and for marker-less object motion capture. The results of the algorithm evaluation show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.
NASA Technical Reports Server (NTRS)
Lane, Marc; Hsieh, Cheng; Adams, Lloyd
1989-01-01
In undertaking the design of a 2000-mm focal length camera for the Mariner Mark II series of spacecraft, JPL sought novel materials with the requisite dimensional and thermal stability, outgassing and corrosion resistance, low mass, high stiffness, and moderate cost. Metal-matrix composites and Al-Li alloys have, in addition to excellent mechanical properties and low density, a suitably low coefficient of thermal expansion, high specific stiffness, and good electrical conductivity. The greatest single obstacle to application of these materials to camera structure design is noted to have been the lack of information regarding long-term dimensional stability.
Completely optical orientation determination for an unstabilized aerial three-line camera
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2010-10-01
Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision requires considerable effort unless extensive camera stabilization is used. But stabilization, too, entails high cost, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach is based on previous work on determining the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then be determined reliably using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, the measurements of a high-end navigation system and ground control points are used.
Camera pose estimation for augmented reality in a small indoor dynamic scene
NASA Astrophysics Data System (ADS)
Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad
2017-09-01
Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand and improves the precision of the camera pose and the quality of the 3-D reconstruction of the environment on the other, by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.
Calculation for simulation of archery goal value using a web camera and ultrasonic sensor
NASA Astrophysics Data System (ADS)
Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti
2017-08-01
Development of a digital indoor archery simulator based on embedded systems offers a solution to the limited availability of adequate fields or open space, especially in big cities. Developing the device requires a simulation that calculates the achieved target value, based on parabolic motion defined by the initial velocity and the direction of the arrow toward the target. The simulator device should therefore be complemented with an initial-velocity measuring device using ultrasonic sensors and a direction measuring device using a digital camera. The methodology uses research and development of application software following a modeling and simulation approach. The research objective is to create a simulation application that calculates the achieved value of the target arrows, as a preliminary stage for developing the archery simulator device. Implementing the target-value calculation in an application program yields an archery simulation game that can serve as a reference for developing a digital indoor archery simulator with embedded systems using ultrasonic sensors and web cameras. The simulation calculation was developed by comparing against the outer radius of the circle imaged by a camera from a distance of three meters.
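To make the calculation concrete, the sketch below implements drag-free parabolic motion for an arrow whose initial speed comes from the ultrasonic sensor and whose launch angle comes from the camera. The function names, the 1.5 m release height, and the 6.1 cm ring width are illustrative assumptions, not values from the paper.

    import math

    def arrow_hit_height(v0, angle_deg, distance, g=9.81, release_height=1.5):
        """Height (m) at which the arrow crosses the target plane a given
        horizontal distance away, assuming drag-free projectile motion."""
        angle = math.radians(angle_deg)
        vx = v0 * math.cos(angle)
        if vx <= 0:
            raise ValueError("arrow must move toward the target")
        t = distance / vx  # time of flight to the target plane
        return release_height + v0 * math.sin(angle) * t - 0.5 * g * t * t

    def score_from_offset(offset_m, ring_width_m=0.061):
        """Map radial offset from the target center to a 10..0 archery score."""
        ring = int(offset_m // ring_width_m)
        return max(0, 10 - ring)

    # Example: 50 m/s arrow, 1 degree launch angle, 18 m range, bull at 1.3 m
    offset = abs(arrow_hit_height(50.0, 1.0, 18.0) - 1.3)
    print(score_from_offset(offset))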
Using Wide-Field Meteor Cameras to Actively Engage Students in Science
NASA Astrophysics Data System (ADS)
Kuehn, D. M.; Scales, J. N.
2012-08-01
Astronomy has always afforded teachers an excellent topic to develop students' interest in science. New technology allows the opportunity to inexpensively outfit local school districts with sensitive, wide-field video cameras that can detect and track brighter meteors and other objects. While the data-collection and analysis process can be mostly automated by software, there is substantial human involvement that is necessary in the rejection of spurious detections, in performing dynamics and orbital calculations, and the rare recovery and analysis of fallen meteorites. The continuous monitoring allowed by dedicated wide-field surveillance cameras can provide students with a better understanding of the behavior of the night sky including meteors and meteor showers, stellar motion, the motion of the Sun, Moon, and planets, phases of the Moon, meteorological phenomena, etc. Additionally, some students intrigued by the possibility of UFOs and "alien visitors" may find that actual monitoring data can help them develop methods for identifying "unknown" objects. We currently have two ultra-low light-level surveillance cameras coupled to fish-eye lenses that are actively obtaining data. We have developed curricula suitable for middle or high school students in astronomy and earth science courses and are in the process of testing and revising our materials.
Pinhole/coronograph pointing control system integration and noise reduction analysis
NASA Technical Reports Server (NTRS)
Greene, M.
1981-01-01
The Pinhole Occulter Facility (P/OF) is a Space Shuttle based experiment for the production of solar coronographic and hard X-ray images. The system is basically a pinhole camera utilizing a deployable 50-m flexible boom to separate the pinholes and coronograph shields from the recording devices located in the Shuttle bay. At the distal end of the boom from the Shuttle is a 25 kg mask containing pinholes and coronograph shields. At the proximal end, the detectors, along with the deployable boom, are mounted on the ASPS gimbal pointing system (AGS). The mask must be pointed at the Sun with a high degree of pointing stability and accuracy to align the axes of the detectors with the pinholes and shields. Failure to do so will result in a blurring of the images on the detectors and a loss of resolution. Being a Shuttle based experiment, the system will be subjected to the disturbances of the Shuttle. The worst of these is thruster firing for orbit correction; the Shuttle uses a bang-bang thruster control system to maintain orbit to within preset limits. Other disturbances include man motion, motion induced by other systems, and gravity gradient torques.
Pinhole/coronograph pointing control system integration and noise reduction analysis
NASA Astrophysics Data System (ADS)
Greene, M.
1981-09-01
The Pinhole Occulter Facility (P/OF) is a Space Shuttle based experiment for the production of solar coronographic and hard X-ray images. The system is basically a pinhole camera utilizing a deployable 50-m flexible boom to separate the pinholes and coronograph shields from the recording devices located in the Shuttle bay. At the distal end of the boom from the Shuttle is a 25 kg mask containing pinholes and coronograph shields. At the proximal end, the detectors, along with the deployable boom, are mounted on the ASPS gimbal pointing system (AGS). The mask must be pointed at the Sun with a high degree of pointing stability and accuracy to align the axes of the detectors with the pinholes and shields. Failure to do so will result in a blurring of the images on the detectors and a loss of resolution. Being a Shuttle based experiment, the system will be subjected to the disturbances of the Shuttle. The worst of these is thruster firing for orbit correction; the Shuttle uses a bang-bang thruster control system to maintain orbit to within preset limits. Other disturbances include man motion, motion induced by other systems, and gravity gradient torques.
Real-time Awake Animal Motion Tracking System for SPECT Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon
Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur for each set of images. The system using the three cameras automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to the scanner operation. The system has undergone testing using both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.
Novel health monitoring method using an RGB camera.
Hassan, M A; Malik, A S; Fofi, D; Saad, N; Meriaudeau, F
2017-11-01
In this paper we present a novel health monitoring method by estimating the heart rate and respiratory rate using an RGB camera. The heart rate and the respiratory rate are estimated from the photoplethysmography (PPG) and the respiratory motion. The method mainly operates by using the green spectrum of the RGB camera to generate a multivariate PPG signal to perform multivariate de-noising on the video signal to extract the resultant PPG signal. A periodicity based voting scheme (PVS) was used to measure the heart rate and respiratory rate from the estimated PPG signal. We evaluated our proposed method with a state of the art heart rate measuring method for two scenarios using the MAHNOB-HCI database and a self collected naturalistic environment database. The methods were furthermore evaluated for various scenarios at naturalistic environments such as a motion variance session and a skin tone variance session. Our proposed method operated robustly during the experiments and outperformed the state of the art heart rate measuring methods by compensating the effects of the naturalistic environment.
Optical Indoor Positioning System Based on TFT Technology
Gőzse, István
2015-01-01
A novel indoor positioning system is presented in the paper. Similarly to the camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: Apart from its low computational demands, it is insensitive to the disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low. PMID:26712753
Flexcam Image Capture Viewing and Spot Tracking
NASA Technical Reports Server (NTRS)
Rao, Shanti
2008-01-01
Flexcam software was designed to allow continuous monitoring of the mechanical deformation of the telescope structure at Palomar Observatory. Flexcam allows the user to watch the motion of a star with a low-cost astronomical camera, to measure the motion of the star on the image plane, and to feed this data back into the telescope's control system. This automatic interaction between the camera and a user interface facilitates integration and testing. Flexcam is a CCD image capture and analysis tool for the ST-402 camera from Santa Barbara Instruments Group (SBIG). This program will automatically take a dark exposure and then continuously display corrected images. The image size, bit depth, magnification, exposure time, resolution, and filter are always displayed on the title bar. Flexcam locates the brightest pixel and then computes the centroid position of the pixels falling in a box around that pixel. This tool continuously writes the centroid position to a network file that can be used by other instruments.
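The brightest-pixel-plus-centroid step that Flexcam performs can be sketched in a few lines of NumPy. This is a minimal illustration of the technique, not JPL's implementation; the 15-pixel box half-width is an assumed parameter.

    import numpy as np

    def spot_centroid(image, box=15):
        """Locate the brightest pixel, then return the intensity-weighted
        centroid (x, y) of a (2*box+1)-pixel square around it."""
        img = np.asarray(image, dtype=float)
        r, c = np.unravel_index(np.argmax(img), img.shape)
        r0, r1 = max(r - box, 0), min(r + box + 1, img.shape[0])
        c0, c1 = max(c - box, 0), min(c + box + 1, img.shape[1])
        win = img[r0:r1, c0:c1]
        ys, xs = np.mgrid[r0:r1, c0:c1]
        total = win.sum()
        return (xs * win).sum() / total, (ys * win).sum() / total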
Camera Trajectory fromWide Baseline Images
NASA Astrophysics Data System (ADS)
Havlena, M.; Torii, A.; Pajdla, T.
2008-09-01
Camera trajectory estimation, which is closely related to structure-from-motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens converter, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on converter with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions including MSER, Harris Affine, and Hessian Affine in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in its standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used, because the viewpoint can change a lot between consecutive frames.
Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes feature detection, description, and matching much more time-consuming than for short baseline images and limits the usage to low-frame-rate sequences when operating in real time. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches that is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling, as suggested in prior work, to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models which are each supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Earlier work suggested generating models by randomized sampling, as in RANSAC, but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that work, we do not cast votes directly from each sampled epipolar geometry but from the best epipolar geometries recovered by ordered sampling of RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC requires at 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. In earlier work we introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric constraint, e.g. the ground plane, as is done for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.
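As an illustration of the two-parameter model θ = ar/(1 + br²), the following sketch back-projects an image point to a unit 3D ray in the camera frame. The principal point (cx, cy) and the coefficients a, b are assumed to come from the off-line calibration described above.

    import numpy as np

    def ray_from_pixel(u, v, cx, cy, a, b):
        """Back-project an image point to a unit 3D ray using the
        two-parameter radial model theta = a*r / (1 + b*r**2)."""
        dx, dy = u - cx, v - cy
        r = np.hypot(dx, dy)
        if r == 0:
            return np.array([0.0, 0.0, 1.0])  # point on the optical axis
        theta = a * r / (1.0 + b * r * r)     # angle from the optical axis
        s = np.sin(theta) / r                 # scale of the radial direction
        return np.array([dx * s, dy * s, np.cos(theta)])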
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.
2015-03-01
Non-contact imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six five-minute controlled head motion artifact trials in front of a black and a dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficiently long time windows.
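A minimal sketch of the blind-source-separation idea, assuming the per-camera green-channel means have already been extracted: FastICA widens the channel space, and the component with the strongest spectral peak inside a plausible pulse band is taken as the pulse signal. The band limits and the use of scikit-learn's FastICA are illustrative choices, not the authors' exact pipeline.

    import numpy as np
    from sklearn.decomposition import FastICA

    def pulse_rate_bpm(channel_means, fps, lo=0.75, hi=4.0):
        """Estimate pulse rate from an (n_samples, n_channels) array of
        per-camera green-channel means via ICA and a band-limited FFT peak."""
        sources = FastICA(n_components=channel_means.shape[1],
                          random_state=0).fit_transform(channel_means)
        freqs = np.fft.rfftfreq(sources.shape[0], d=1.0 / fps)
        band = (freqs >= lo) & (freqs <= hi)  # roughly 45..240 bpm
        best_bpm, best_power = 0.0, -1.0
        for s in sources.T:
            spec = np.abs(np.fft.rfft(s - s.mean())) ** 2
            k = np.argmax(spec[band])
            if spec[band][k] > best_power:
                best_power = spec[band][k]
                best_bpm = 60.0 * freqs[band][k]
        return best_bpm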
Major, J.J.; Dzurisin, D.; Schilling, S.P.; Poland, Michael P.
2009-01-01
We present an analysis of lava dome growth during the 2004–2008 eruption of Mount St. Helens using oblique terrestrial images from a network of remotely placed cameras. This underutilized monitoring tool augmented more traditional monitoring techniques, and was used to provide a robust assessment of the nature, pace, and state of the eruption and to quantify the kinematics of dome growth. Eruption monitoring using terrestrial photography began with a single camera deployed at the mouth of the volcano's crater during the first year of activity. Analysis of those images indicates that the average lineal extrusion rate decayed approximately logarithmically from about 8 m/d to about 2 m/d (± 2 m/d) from November 2004 through December 2005, and suggests that the extrusion rate fluctuated on time scales of days to weeks. From May 2006 through September 2007, imagery from multiple cameras deployed around the volcano allowed determination of 3-dimensional motion across the dome complex. Analysis of the multi-camera imagery shows spatially differential, but remarkably steady to gradually slowing, motion, from about 1–2 m/d from May through October 2006, to about 0.2–1.0 m/d from May through September 2007. In contrast to the fluctuations in lineal extrusion rate documented during the first year of eruption, dome motion from May 2006 through September 2007 was monotonic (± 0.10 m/d) to gradually slowing on time scales of weeks to months. The ability to measure spatial and temporal rates of motion of the effusing lava dome from oblique terrestrial photographs provided a significant, and sometimes the sole, means of identifying and quantifying dome growth during the eruption, and it demonstrates the utility of using frequent, long-term terrestrial photography to monitor and study volcanic eruptions.
Science observations with the IUE using the one-gyro mode
NASA Technical Reports Server (NTRS)
Imhoff, C.; Pitts, R.; Arquilla, R.; Shrader, Chris R.; Perez, M. R.; Webb, J.
1990-01-01
The International Ultraviolet Explorer (IUE) attitude control system originally included an inertial reference package containing six gyroscopes for three axis stabilization. The science instrument includes a prime and redundant Field Error Sensor (FES) camera for target acquisition and offset guiding. Since launch, four of the six gyroscopes have failed. The current attitude control system utilizes the remaining two gyros and a Fine Sun Sensor (FSS) for three axis stabilization. When the next gyro fails, a new attitude control system will be uplinked which will rely on the remaining gyro and the FSS for general three axis stabilization. In addition to the FSS, the FES cameras will be required to assist in maintaining fine attitude control during target acquisition. This has required thoroughly determining the characteristics of the FES cameras and the spectrograph aperture plate as well as devising new target acquisition procedures. The results of this work are presented.
Science observations with the IUE using the one-gyro mode
NASA Technical Reports Server (NTRS)
Imhoff, C.; Pitts, R.; Arquilla, R.; Shrader, C.; Perez, M.; Webb, J.
1990-01-01
The International Ultraviolet Explorer (IUE) attitude control system originally included an inertial reference package containing six gyroscopes for three axis stabilization. The science instrument includes a prime and redundant Field Error Sensor (FES) camera for target acquisition and offset guiding. Since launch, four of the six gyroscopes have failed. The current attitude control system utilizes the remaining two gyros and a Fine Sun Sensor (FSS) for three axis stabilization. When the next gyro fails, a new attitude control system will be uplinked, which will rely on the remaining gyro and the FSS for general three axis stabilization. In addition to the FSS, the FES cameras will be required to assist in maintaining fine attitude control during target acquisition. This has required thoroughly determining the characteristics of the FES cameras and the spectrograph aperture plate as well as devising new target acquisition procedures. The results of this work are presented.
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera
Ci, Wenyan; Huang, Yingping
2016-01-01
Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted to a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade–Lucas–Tomasi (KLT) algorithm. Circle matching then removes the outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508
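The KLT detection-and-tracking stage with outlier rejection can be sketched with OpenCV. Here a forward-backward consistency check stands in for the paper's circle matching (which loops through both stereo views), and all thresholds are assumed values.

    import cv2
    import numpy as np

    def klt_matches(prev_gray, cur_gray, fb_thresh=1.0):
        """Detect corners, track them with pyramidal KLT, and reject
        outliers with a forward-backward consistency check."""
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                     qualityLevel=0.01, minDistance=7)
        if p0 is None:
            return np.empty((0, 1, 2)), np.empty((0, 1, 2))
        # Track forward, then backward, and keep points that return home.
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
        p0r, st_b, _ = cv2.calcOpticalFlowPyrLK(cur_gray, prev_gray, p1, None)
        fb_err = np.linalg.norm(p0 - p0r, axis=2).ravel()
        good = (st.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_thresh)
        return p0[good], p1[good]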
Principal axis-based correspondence between multiple cameras for people tracking.
Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve
2006-04-01
Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems which visual surveillance using multiple cameras brings. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.
Cinematic camera emulation using two-dimensional color transforms
NASA Astrophysics Data System (ADS)
McElvain, Jon S.; Gish, Walter
2015-02-01
For cinematic and episodic productions, on-set look management is an important component of the creative process, and involves iterative adjustments of the set, actors, lighting and camera configuration. Instead of using the professional motion picture camera to establish a particular look, the use of a smaller-form-factor DSLR is considered for this purpose due to its increased agility. Because the spectral response characteristics differ between the two camera systems, a camera emulation transform is needed to approximate the behavior of the destination camera. Recently, two-dimensional transforms have been shown to provide high-accuracy conversion of raw camera signals to a defined colorimetric state. In this study, the same formalism is used for camera emulation, whereby a Canon 5D Mark III DSLR is used to approximate the behavior of a Red Epic cinematic camera. The spectral response characteristics of both cameras were measured and used to build 2D as well as 3x3 matrix emulation transforms. When tested on multispectral image databases, the 2D emulation transforms outperform their matrix counterparts, particularly for images containing highly chromatic content.
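The baseline 3x3 matrix emulation transform amounts to a least-squares fit between matching raw signals of the two cameras; the 2D transforms of the study are more elaborate. A minimal NumPy sketch:

    import numpy as np

    def fit_matrix_emulation(src_rgb, dst_rgb):
        """Least-squares 3x3 transform M such that dst ~= M @ src per pixel,
        given matching (n, 3) raw signals from the two cameras."""
        X, *_ = np.linalg.lstsq(src_rgb, dst_rgb, rcond=None)
        return X.T

    def apply_emulation(image_rgb, M):
        """Apply the emulation transform to an (h, w, 3) image."""
        return np.einsum('ij,hwj->hwi', M, image_rgb)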
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaffney, Kelly
Movies have transformed our perception of the world. With slow motion photography, we can see a hummingbird flap its wings, and a bullet pierce an apple. The remarkably small and extremely fast molecular world that determines how your body functions cannot be captured with even the most sophisticated movie camera today. To see chemistry in real time requires a camera capable of seeing molecules that are one ten billionth of a foot with a frame rate of 10 trillion frames per second! SLAC has embarked on the construction of just such a camera. Please join me as I discuss how this molecular movie camera will work and how it will change our perception of the molecular world.
The MicronEye Motion Monitor: A New Tool for Class and Laboratory Demonstrations.
ERIC Educational Resources Information Center
Nissan, M.; And Others
1988-01-01
Describes a special camera that can be directly linked to a computer that has been adapted for studying movement. Discusses capture, processing, and analysis of two-dimensional data with either IBM PC or Apple II computers. Gives examples of a variety of mechanical tests including pendulum motion, air track, and air table. (CW)
Observation of Planetary Motion Using a Digital Camera
ERIC Educational Resources Information Center
Meyn, Jan-Peter
2008-01-01
A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to 8[superscript m] apparent magnitude. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…
Mauntel, Timothy C; Padua, Darin A; Stanley, Laura E; Frank, Barnett S; DiStefano, Lindsay J; Peck, Karen Y; Cameron, Kenneth L; Marshall, Stephen W
2017-11-01
The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle. Our objective was to determine the reliability of an automated markerless motion-capture system for scoring the LESS, in a cross-sectional study at the United States Military Academy. A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg) participated. Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score. We observed reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability with consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons. A markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use the markerless motion-capture system to reliably score the LESS without being limited by the time requirements of manual LESS scoring.
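For reference, the agreement statistics reported above can be computed as follows for binary (error present/absent) LESS item ratings; PABAK is Cohen's kappa with chance agreement fixed at 0.5. This is a generic sketch, not the authors' analysis code.

    import numpy as np

    def kappa_and_pabak(r1, r2):
        """Cohen's kappa and prevalence- and bias-adjusted kappa (PABAK)
        for two binary (0/1) rating vectors of equal length."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        po = np.mean(r1 == r2)                    # observed agreement
        p1, p2 = r1.mean(), r2.mean()
        pe = p1 * p2 + (1 - p1) * (1 - p2)        # chance agreement
        kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
        pabak = 2 * po - 1                        # assumes pe = 0.5
        return kappa, pabak, po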
Local Dynamic Stability Assessment of Motion Impaired Elderly Using Electronic Textile Pants.
Liu, Jian; Lockhart, Thurmon E; Jones, Mark; Martin, Tom
2008-10-01
A clear association has been demonstrated between gait stability and falls in the elderly. Integration of wearable computing and human dynamic stability measures into home automation systems may help differentiate fall-prone individuals in a residential environment. The objective of the current study was to evaluate the capability of an electronic textile (e-textile) pants system to assess local dynamic stability and to differentiate motion-impaired elderly from their healthy counterparts. A pair of e-textile pants comprising numerous e-TAGs at locations corresponding to lower extremity joints was developed to collect acceleration, angular velocity and piezoelectric data. Four motion-impaired elderly together with nine healthy individuals (both young and old) participated in treadmill walking with a motion capture system simultaneously collecting kinematic data. Local dynamic stability, characterized by the maximum Lyapunov exponent, was computed based on vertical acceleration and angular velocity at lower extremity joints for the measurements from both the e-textile and motion capture systems. Results indicated that the motion-impaired elderly had significantly higher maximum Lyapunov exponents (computed from vertical acceleration data) than healthy individuals at the right ankle and hip joints. In addition, maximum Lyapunov exponents assessed by the motion capture system were found to be significantly higher than those assessed by the e-textile system. Despite the difference between these measurement techniques, attaching accelerometers at the ankle and hip joints was shown to be an effective sensor configuration. It was concluded that the e-textile pants system, via dynamic stability assessment, has the potential to identify motion-impaired elderly.
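A compact Rosenstein-style estimator of the maximum Lyapunov exponent from a single acceleration channel is sketched below. The embedding dimension, delay, and fit length are assumed parameters that would normally be chosen from the data (e.g. by false-nearest-neighbors and average mutual information); this is an illustration of the measure, not the study's code.

    import numpy as np

    def max_lyapunov(x, dim=5, tau=10, fit_len=50):
        """Rosenstein-style maximum Lyapunov exponent estimate from a 1-D
        signal, in units of divergence per sample (divide by dt for 1/s)."""
        n = len(x) - (dim - 1) * tau
        emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
        d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        for i in range(n):                        # exclude temporal neighbors
            d[i, max(0, i - tau):min(n, i + tau + 1)] = np.inf
        nn = np.argmin(d, axis=1)                 # nearest neighbor per point
        steps = np.arange(fit_len)
        mean_log = []
        for k in steps:
            valid = (np.arange(n) + k < n) & (nn + k < n)
            sep = np.linalg.norm(emb[np.arange(n)[valid] + k] -
                                 emb[nn[valid] + k], axis=1)
            mean_log.append(np.mean(np.log(sep[sep > 0])))
        return np.polyfit(steps, mean_log, 1)[0]  # slope = divergence rate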
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for earth remote sensors, while vibration of the remote sensors' platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes to utilize soft-sensor technology in image-motion prediction and focuses on algorithm optimization for imaging image-motion prediction. Simulation results indicate that an improved lucky image-motion stabilization algorithm combining a back-propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training and computing speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
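The abstract does not spell out how the BP NN and SVM are combined, so the sketch below simply averages the two regressors on synthetic data as an illustration of the idea; the window size, network width, and SVR settings are all assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    # Hypothetical training data: windows of past image-motion samples
    # (features) and the next-frame displacement (target).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=200)

    nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, y)
    svm = SVR(kernel='rbf', C=10.0).fit(X, y)
    pred = 0.5 * (nn.predict(X) + svm.predict(X))  # simple combination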
Visual acuity, contrast sensitivity, and range performance with compressed motion video
NASA Astrophysics Data System (ADS)
Bijl, Piet; de Vries, Sjoerd C.
2010-10-01
Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, observers judged the orientation of the triangles in order to determine VA and CS thresholds at the 75% correct level. Three camera velocities (0, 1.0, and 2.0 deg/s, or 0, 4.1, and 8.1 pixels/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s, and 1 Mb/s) were used. VA is shown to be rather robust to any combination of motion and compression. CS, however, dramatically decreases when motion is combined with high compression ratios. The measured thresholds were fed into the TOD target acquisition model to predict the effect of motion and compression on acquisition ranges for tactical military vehicles. The effect of compression on static performance is limited, but it is strong with motion video. The data suggest that with the MPEG-2 algorithm, the emphasis is on the preservation of image detail at the cost of contrast loss.
Towards automated assistance for operating home medical devices.
Gao, Zan; Detyniecki, Marcin; Chen, Ming-Yu; Wu, Wen; Hauptmann, Alexander G; Wactlar, Howard D
2010-01-01
To detect errors when subjects operate a home medical device, we observe them with multiple cameras. We then perform action recognition with a robust approach based on explicitly encoding motion information: the algorithm detects interest points and encodes not only their local appearance but also explicitly models local motion. Our goal is to recognize individual human actions in the operation of a home medical device to see if the patient has correctly performed the required actions in the prescribed sequence. Using a specific infusion pump as a test case, requiring 22 operation steps from 6 action classes, our best classifier selects high-likelihood action estimates from the 4 available cameras to obtain an average class recognition rate of 69%.
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
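At the core of any Kalman-based fusion scheme is the predict-update cycle below. The paper's filter additionally handles multiple cameras and occlusion, so this is only the generic linear building block, with all model matrices assumed given.

    import numpy as np

    def kalman_update(x, P, z, F, Q, H, R):
        """One predict-update cycle of a linear Kalman filter fusing a
        new measurement z into state x with covariance P."""
        x = F @ x                        # predict state
        P = F @ P @ F.T + Q              # predict covariance
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ y                    # update state
        P = (np.eye(len(x)) - K @ H) @ P # update covariance
        return x, P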
NASA Technical Reports Server (NTRS)
Donegan, James J; Robinson, Samuel W , Jr; Gates, Ordway, B , jr
1955-01-01
A method is presented for determining the lateral-stability derivatives, transfer-function coefficients, and the modes for lateral motion from frequency-response data for a rigid aircraft. The method is based on the application of the vector technique to the equations of lateral motion, so that the three equations of lateral motion can be separated into six equations. The method of least squares is then applied to the data for each of these equations to yield the coefficients of the equations of lateral motion from which the lateral-stability derivatives and lateral transfer-function coefficients are computed. Two numerical examples are given to demonstrate the use of the method.
Robust real-time extraction of respiratory signals from PET list-mode data.
Salomon, Andre; Zhang, Bin; Olivier, Patrick; Goedicke, Andreas
2018-05-01
Respiratory motion, which typically cannot simply be suspended during PET image acquisition, affects lesion detection and quantitative accuracy inside or in close vicinity to the lungs. Some motion compensation techniques address this issue via pre-sorting ("binning") of the acquired PET data into a set of temporal gates, where each gate is assumed to be minimally affected by respiratory motion. Tracking respiratory motion is typically realized using dedicated hardware (e.g. respiratory belts and digital cameras). Extracting respiratory signals directly from the acquired PET data simplifies the clinical workflow, as it avoids handling additional signal measurement equipment. We introduce a new data-driven method, "combined local motion detection" (CLMD). It uses the time-of-flight (TOF) information provided by state-of-the-art PET scanners in order to enable real-time respiratory signal extraction without additional hardware resources. CLMD applies center-of-mass detection in overlapping regions based on simple back-positioned TOF event sets acquired in short time frames. Following a signal filtering and quality-based pre-selection step, the remaining extracted individual position information over time is then combined to generate a global respiratory signal. The method is evaluated using 7 measured FDG studies from single and multiple scan positions of the thorax region, and it is compared to other software-based methods regarding quantitative accuracy and statistical noise stability. Correlation coefficients around 90% between the reference and the extracted signal have been found for those PET scans where motion-affected features such as tumors or hot regions were present in the PET field-of-view. For PET scans with a quarter of the typically applied radiotracer dose, the CLMD method still provides similarly high correlation coefficients, which indicates its robustness to noise. Each CLMD processing run needed less than 0.4 s in total on a standard multi-core CPU and thus provides a robust and accurate approach enabling real-time processing capabilities using standard PC hardware. © 2018 Institute of Physics and Engineering in Medicine.
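The center-of-mass idea behind CLMD can be sketched as follows, assuming back-positioned event coordinates and timestamps are already available as arrays. The frame length, region count, overlap, and the event-count quality gate are illustrative stand-ins for the paper's filtering and pre-selection steps.

    import numpy as np

    def respiratory_signal(event_pos, event_t, frame_dt=0.25, n_regions=4):
        """Per-frame axial center of mass of back-positioned TOF events in
        overlapping axial regions, averaged into one respiratory trace."""
        t_edges = np.arange(event_t.min(), event_t.max(), frame_dt)
        z = event_pos[:, 2]                       # axial coordinate
        edges = np.linspace(z.min(), z.max(), n_regions + 1)
        half = (edges[1] - edges[0]) / 2          # region overlap margin
        trace = []
        for t0 in t_edges:
            in_frame = (event_t >= t0) & (event_t < t0 + frame_dt)
            coms = []
            for i in range(n_regions):
                sel = in_frame & (z >= edges[i] - half) & (z < edges[i + 1] + half)
                if sel.sum() > 100:               # crude quality pre-selection
                    coms.append(z[sel].mean())
            trace.append(np.mean(coms) if coms else np.nan)
        return t_edges, np.array(trace)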
Robust real-time extraction of respiratory signals from PET list-mode data
NASA Astrophysics Data System (ADS)
Salomon, André; Zhang, Bin; Olivier, Patrick; Goedicke, Andreas
2018-06-01
Respiratory motion, which typically cannot simply be suspended during PET image acquisition, affects lesions’ detection and quantitative accuracy inside or in close vicinity to the lungs. Some motion compensation techniques address this issue via pre-sorting (‘binning’) of the acquired PET data into a set of temporal gates, where each gate is assumed to be minimally affected by respiratory motion. Tracking respiratory motion is typically realized using dedicated hardware (e.g. using respiratory belts and digital cameras). Extracting respiratory signals directly from the acquired PET data simplifies the clinical workflow as it avoids handling additional signal measurement equipment. We introduce a new data-driven method ‘combined local motion detection’ (CLMD). It uses the time-of-flight (TOF) information provided by state-of-the-art PET scanners in order to enable real-time respiratory signal extraction without additional hardware resources. CLMD applies center-of-mass detection in overlapping regions based on simple back-positioned TOF event sets acquired in short time frames. Following a signal filtering and quality-based pre-selection step, the remaining extracted individual position information over time is then combined to generate a global respiratory signal. The method is evaluated using seven measured FDG studies from single and multiple scan positions of the thorax region, and it is compared to other software-based methods regarding quantitative accuracy and statistical noise stability. Correlation coefficients around 90% between the reference and the extracted signal have been found for those PET scans where motion affected features such as tumors or hot regions were present in the PET field-of-view. For PET scans with a quarter of typically applied radiotracer doses, the CLMD method still provides similar high correlation coefficients which indicates its robustness to noise. Each CLMD processing needed less than 0.4 s in total on a standard multi-core CPU and thus provides a robust and accurate approach enabling real-time processing capabilities using standard PC hardware.
Multi-Angle Snowflake Camera Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuefer, Martin; Bailey, J.
2016-07-01
The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
Minimum Requirements for Taxicab Security Cameras.
Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene
2014-07-01
The homicide rate of the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification in taxicabs were determined. The study took more than 10,000 photographs of face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs in various light and cab seat conditions. Thirteen volunteer photograph evaluators evaluated these face photographs and voted for the minimum technical requirements for taxicab security cameras. Five worst-case-scenario image quality thresholds were suggested: XGA-format resolution, highlight dynamic range of 1 EV, twilight dynamic range of 3.3 EV, lens distortion of 30%, and shutter speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets to identify effective taxicab security cameras, and help taxicab security camera manufacturers to improve camera facial identification capability.
NASA Astrophysics Data System (ADS)
Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen
2017-03-01
Preclinical positron emission tomography (PET) imaging in small animals is generally performed under anesthesia to immobilize the animal during scanning. More recently, for rat brain PET studies, methods to scan unrestrained awake rats are being developed in order to avoid the unwanted effects of anesthesia on the brain response. Here, we investigate the use of a projected-structure stereo camera to track the motion of the rat head during the PET scan. The motion information is then used to correct the PET data. The stereo camera calculates a 3D point cloud representation of the scene, and tracking is performed by point cloud matching using the iterative closest point algorithm. The main advantage of the proposed motion tracking is that no intervention, e.g. for marker attachment, is needed. A manually moved microDerenzo phantom experiment and 3 awake rat [18F]FDG experiments were performed to evaluate the proposed tracking method. The tracking accuracy was 0.33 mm rms. After motion-corrected image reconstruction, the microDerenzo phantom was recovered, albeit with some loss of resolution. The reconstructed FWHM of the 2.5 and 3 mm rods increased by 0.94 and 0.51 mm, respectively, in comparison with the motion-free case. In the rat experiments, the average tracking success rate was 64.7%. The correlation of relative brain regional [18F]FDG uptake between the anesthesia and awake scan reconstructions increased from on average 0.291 (not significant) before correction to 0.909 (p < 0.0001) after motion correction. Markerless motion tracking using structured light can be successfully used for tracking the rat head for motion correction in awake rat PET scans.
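Each iteration of iterative-closest-point tracking reduces, once correspondences are fixed, to the classic SVD (Kabsch) rigid-alignment step sketched below; nearest-neighbor matching and the outer iteration loop are omitted for brevity.

    import numpy as np

    def icp_step(src, dst_matched):
        """One rigid-alignment step of ICP: best-fit rotation R and
        translation t mapping (n, 3) src points onto matched dst points."""
        mu_s, mu_d = src.mean(axis=0), dst_matched.mean(axis=0)
        H = (src - mu_s).T @ (dst_matched - mu_d)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        # Guard against reflections so R is a proper rotation.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        return R, t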
Retention of the "first-trial effect" in gait-slip among community-living older adults.
Liu, Xuan; Bhatt, Tanvi; Wang, Shuaijie; Yang, Feng; Pai, Yi-Chung Clive
2017-02-01
"First-trial effect" characterizes the rapid adaptive behavior that changes the performance outcome (from fall to non-fall) after merely a single exposure to postural disturbance. The purpose of this study was to investigate how long the first-trial effect could last. Seventy-five (≥ 65 years) community-dwelling older adults, who were protected by an overhead full body harness system, were retested for a single slip 6-12 months after their initial exposure to a single gait-slip. Subjects' body kinematics that was used to compute their proactive (feedforward) and reactive (feedback) control of stability was recorded by an eight-camera motion analysis system. We found the laboratory falls of subjects on their retest slip were significantly lower than that on the novel initial slip, and the reactive stability of these subjects was also significantly improved. However, the proactive stability of subjects remains unchanged between their initial slip and retest slip. The fall rates and stability control had no difference among the 6-, 9-, and 12-month retest groups, which indicated a maximum retention on 12 months after a single slip in the laboratory. These results highlighted the importance of the "first-trial effect" and suggested that perturbation training is effective for fall prevention, with lower trial doses for a long period (up to 1 year). Therefore, single slip training might benefit those older adults who could not tolerate larger doses in reality.
Identifying sports videos using replay, text, and camera motion features
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.
1999-12-01
Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selective frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
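A toy version of the final classification stage, assuming replay, text, and motion statistics have already been extracted per clip; the feature values and the scikit-learn classifier below are illustrative, not the paper's trained model.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical per-clip features in the spirit of the paper:
    # [replay_count, text_fraction, mean_camera_motion, motion_variance]
    X = np.array([[3, 0.04, 7.2, 2.1],   # sports
                  [0, 0.20, 0.8, 0.2],   # news
                  [1, 0.01, 2.5, 1.0],   # movie
                  [0, 0.02, 1.1, 0.3]])  # documentary
    y = np.array([1, 0, 0, 0])           # 1 = sports

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(clf.predict([[2, 0.05, 6.0, 1.8]]))   # -> [1], i.e. sports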
NASA Astrophysics Data System (ADS)
Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri
2012-01-01
An aerial multiple camera tracking paradigm needs to not only spot unknown targets and track them, but also needs to know how to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion allowing it to find targets in motion, even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques. These include Histogram, Spatiogram and Single Gaussian Model. These are tested by simulating a very large number of target losses in six videos over an interval of 1000 frames each from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints. This is how long a fingerprint is good for when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us if a fingerprint method has better accuracy over longer periods. In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model compared with the null hypothesis of <20%. Additionally, the performance for fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to view point and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
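The single Gaussian model fingerprint amounts to storing a mean color and covariance per target. A minimal sketch follows, with a Bhattacharyya distance for matching at reacquisition; the paper does not specify its distance measure, so that choice is an assumption.

    import numpy as np

    def gaussian_fingerprint(pixels_rgb):
        """Compact target fingerprint: mean and covariance of its (n, 3)
        pixel colors, regularized so the covariance stays invertible."""
        mu = pixels_rgb.mean(axis=0)
        cov = np.cov(pixels_rgb, rowvar=False) + 1e-6 * np.eye(3)
        return mu, cov

    def bhattacharyya(fp1, fp2):
        """Distance between two Gaussian fingerprints (smaller = better)."""
        (mu1, c1), (mu2, c2) = fp1, fp2
        c = (c1 + c2) / 2.0
        dmu = mu1 - mu2
        term1 = dmu @ np.linalg.solve(c, dmu) / 8.0
        term2 = 0.5 * np.log(np.linalg.det(c) /
                             np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
        return term1 + term2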
Precise Image-Based Motion Estimation for Autonomous Small Body Exploration
NASA Technical Reports Server (NTRS)
Johnson, Andrew Edie; Matthies, Larry H.
2000-01-01
We have developed and tested a software algorithm that enables onboard autonomous motion estimation near small bodies using descent camera imagery and laser altimetry. Through simulation and testing, we have shown that visual feature tracking can decrease uncertainty in spacecraft motion to a level that makes landing on small, irregularly shaped, bodies feasible. Possible future work will include qualification of the algorithm as a flight experiment for the Deep Space 4/Champollion comet lander mission currently under study at the Jet Propulsion Laboratory.
On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation
2015-03-01
SWIR: Short Wave Infrared. VisualSFM: Visual Structure from Motion. WPAFB: Wright Patterson Air Force Base. Visual Structure from Motion (VisualSFM) is an application that performs incremental SfM using images of a scene fed into it [20]. [...] too drastically in between frames. When this happens, VisualSFM will begin creating a new model with images that do not fit to the old one.
NASA Astrophysics Data System (ADS)
Bruegge, Carol J.; Val, Sebastian; Diner, David J.; Jovanovic, Veljko; Gray, Ellyn; Di Girolamo, Larry; Zhao, Guangyu
2014-09-01
The Multi-angle Imaging SpectroRadiometer (MISR) has successfully operated on the EOS/Terra spacecraft since 1999. It consists of nine cameras pointing from nadir to a 70.5° view angle, with four spectral channels per camera. Specifications call for a radiometric uncertainty of 3% absolute and 1% relative to the other cameras. To accomplish this, MISR utilizes an on-board calibrator (OBC) to measure camera response changes. Once every two months, the two Spectralon panels are deployed to direct sunlight into the cameras. Six photodiode sets measure the illumination levels, which are compared to MISR raw digital numbers, thus determining the radiometric gain coefficients used in Level 1 data processing. Although panel stability is not required, there has been little detectable change in panel reflectance, attributed to careful preflight handling techniques. The cameras themselves have degraded in radiometric response by 10% since launch, but calibration updates using the detector-based scheme have compensated for these drifts and allowed the radiance products to meet accuracy requirements. Validation using Sahara desert observations shows that there has been a drift of ~1% in the reported nadir-view radiance over a decade, common to all spectral bands.
Bubble driven quasioscillatory translational motion of catalytic micromotors.
Manjare, Manoj; Yang, Bo; Zhao, Y-P
2012-09-21
A new quasioscillatory translational motion has been observed for large Janus catalytic micromotors with a fast CCD camera. This motional behavior is found to coincide with both the bubble growth and burst processes resulting from the catalytic reaction, and the competition between the two processes generates a net forward motion. Detailed physical models have been proposed to describe these processes. It is suggested that the bubble growth process imposes a growth force moving the micromotor forward, while the burst process induces an instantaneous local pressure depression pulling the micromotor backward. The theoretical predictions are consistent with the experimental data.
Bubble Driven Quasioscillatory Translational Motion of Catalytic Micromotors
NASA Astrophysics Data System (ADS)
Manjare, Manoj; Yang, Bo; Zhao, Y.-P.
2012-09-01
A new quasioscillatory translational motion has been observed for large Janus catalytic micromotors with a fast CCD camera. This motional behavior is found to coincide with both the bubble growth and burst processes resulting from the catalytic reaction, and the competition between the two processes generates a net forward motion. Detailed physical models have been proposed to describe these processes. It is suggested that the bubble growth process imposes a growth force moving the micromotor forward, while the burst process induces an instantaneous local pressure depression pulling the micromotor backward. The theoretical predictions are consistent with the experimental data.
Stability basin estimates fall risk from observed kinematics, demonstrated on the Sit-to-Stand task.
Shia, Victor; Moore, Talia Yuki; Holmes, Patrick; Bajcsy, Ruzena; Vasudevan, Ram
2018-04-27
The ability to quantitatively measure stability is essential to ensuring the safety of locomoting systems. While the response to perturbation directly reflects the stability of a motion, this experimental method puts human subjects at risk. Unfortunately, existing indirect methods for estimating stability from unperturbed motion have been shown to have limited predictive power. This paper leverages recent advances in dynamical systems theory to accurately estimate the stability of human motion without requiring perturbation. This approach relies on kinematic observations of a nominal Sit-to-Stand motion to construct an individual-specific dynamic model, input bounds, and feedback control that are then used to compute the set of perturbations from which the model can recover. This set, referred to as the stability basin, was computed for 14 individuals, and was able to successfully differentiate between less and more stable Sit-to-Stand strategies for each individual with greater accuracy than existing methods.
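The basin computation in the paper uses individual-specific models and reachability analysis; the toy sketch below conveys the idea only, sampling perturbations of a crude inverted-pendulum stand-in with bounded feedback (the model, gains, and input bound are all assumptions, not the authors' method):

```python
import numpy as np
from scipy.integrate import solve_ivp

# A toy stand-in for the individual-specific model: an inverted pendulum
# (angle from upright) with bounded "muscle" torque under PD feedback.
G, L, TAU_MAX = 9.81, 1.0, 8.0    # gravity, length, input bound (assumed)
KP, KD = 80.0, 15.0               # feedback gains (assumed)

def dynamics(t, x):
    theta, omega = x
    tau = np.clip(-KP * theta - KD * omega, -TAU_MAX, TAU_MAX)  # bounded control
    return [omega, (G / L) * np.sin(theta) + tau]

def recovers(theta0, omega0, t_end=5.0, tol=1e-2):
    """True if the model returns near upright from this perturbation."""
    sol = solve_ivp(dynamics, (0.0, t_end), [theta0, omega0], rtol=1e-6)
    return abs(sol.y[0, -1]) < tol and abs(sol.y[1, -1]) < tol

# Sample velocity perturbations at upright; the recoverable set approximates
# a one-dimensional slice of the stability basin.
basin = [w for w in np.linspace(-5, 5, 101) if recovers(0.0, w)]
print(f"recoverable omega range: [{min(basin):.2f}, {max(basin):.2f}] rad/s")
```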
Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment
NASA Astrophysics Data System (ADS)
Helmholz, P.; Long, J.; Munsie, T.; Belton, D.
2016-06-01
Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium under water can in turn counteract some of the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates for possible photogrammetric applications. For this paper, a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The tests included controlled handling, where the camera was simply dunked into the water tank at 7 MP and 12 MP resolution, and rough handling at 12 MP resolution, where the camera was shaken as well as removed from its waterproof case. The tests showed that camera stability was maintained, with a maximum standard deviation of the camera constant σc of 0.0031 mm at 7 MP (for an average c of 2.720 mm) and 0.0072 mm at 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, a largest RMS value of only 0.450 mm and a largest maximum residual of only 2.5 mm. For the 12 MP test series, the maximum RMS value was 0.653 mm.
Versatile microsecond movie camera
NASA Astrophysics Data System (ADS)
Dreyfus, R. W.
1980-03-01
A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.
ERIC Educational Resources Information Center
Feng, Yongqiang; Max, Ludo
2014-01-01
Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable…
STS-28 Columbia, OV-102, MS Brown uses ARRIFLEX camera on aft flight deck
1989-08-13
STS028-17-033 (August 1989) --- Astronaut Mark N. Brown, STS-28 mission specialist, pauses from a session of motion-picture photography conducted through one of the aft windows on the flight deck of the Earth-orbiting Space Shuttle Columbia. He is using an Arriflex camera. The horizon of the blue and white appearing Earth and its airglow are visible in the background.
NASA Astrophysics Data System (ADS)
Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an
2017-09-01
High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region of the captured video are reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test validates the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
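A minimal NumPy sketch of the SVD step as described, tested here on a synthetic sub-pixel oscillating edge (the window size, the synthetic signal, and the use of only the first OIB are our assumptions):

```python
import numpy as np

def svd_motion_signal(frames, region):
    """Recover a subtle vibration signal from a high-speed video region.

    frames: (T, H, W) grayscale video as a float array.
    region: (y0, y1, x0, x1) small window on the vibrating object.
    """
    y0, y1, x0, x1 = region
    # Reshape each sub-image into a row vector and stack over time.
    M = frames[:, y0:y1, x0:x1].reshape(frames.shape[0], -1)
    M = M - M.mean(axis=0)              # remove the static background
    # Rows of Vt are orthonormal image bases (OIBs) of the sub-images.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Projecting the sub-images onto the dominant OIB gives the motion
    # signal; with this factorization that projection is U[:, 0] * s[0].
    return s[0] * U[:, 0]

# Synthetic test: a bright edge oscillating by a fraction of a pixel.
T, H, W = 512, 32, 32
t = np.arange(T)
x = np.arange(W)
frames = np.array([1.0 / (1 + np.exp(-(x - 16 - 0.2 * np.sin(0.3 * ti))))
                   for ti in t])[:, None, :].repeat(H, axis=1)
signal = svd_motion_signal(frames, (0, H, 0, W))
print(np.corrcoef(signal, np.sin(0.3 * t))[0, 1])  # close to +/-1
```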
Educational Aspects of the CONCAM Sky Monitoring Project
NASA Astrophysics Data System (ADS)
Nemiroff, R. J.; Rafert, J. B.; Ftaclas, C.; Pereira, W. E.; Perez-Ramirez, D.
2000-12-01
We have built a prototype CONtinuous CAMera (CONCAM) that mates a fisheye lens to a CCD camera run by a laptop computer. Presently, one CONCAM is deployed at Kitt Peak National Observatory and another is being set up on Mauna Kea in Hawaii. CONCAMs can detect stars of visual magnitude 6 near the image center in a two-minute exposure. CONCAMs are weather-proof, take continuous data from 2π steradians on the sky, are programmable over the internet, create data files downloadable over the internet, are small enough to fit inside a briefcase, and cost under $10K. Images archived at http://concam.net can be used to teach many introductory concepts. These include: the rotation of the Earth, the relative location and phase of the Moon, the location and relative motion of planets, the location of the Galactic plane, the motion of Earth satellites, the location and motion of comets, the motion of meteors, the radiant of a meteor shower, the relative locations of interesting stars, and the relative brightness changes of highly variable stars. Concam.net is not meant to replace first hand student observations of the sky, but rather to complement them with classroom-accessible actual-sky-image examples.
Graphics simulation and training aids for advanced teleoperation
NASA Technical Reports Server (NTRS)
Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.
1993-01-01
Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.
Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images
Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung
2013-01-01
Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing these noises. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as optimal ones. The dimension reduction of the features and feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to support vector machine (SVM), which improves the accuracy in detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713
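A compact sketch of the last two stages of the described pipeline, using scikit-learn as a stand-in (the feature dimensions, synthetic data, and RBF kernel are assumptions; the paper's exact SVM settings are not given in the abstract):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical training data: each row concatenates EEG frequency-domain
# features with motion features from the frontal viewing camera.
n, d_eeg, d_cam = 200, 16, 4
X = rng.normal(size=(n, d_eeg + d_cam))
y = rng.integers(0, 2, size=n)          # 1 = head movement, 0 = clean EEG
X[y == 1, :] += 0.8                     # separate the classes a little

# LDA reduces the combined features to one discriminant dimension
# (two classes), and the SVM makes the final movement/no-movement call.
detector = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                         SVC(kernel="rbf"))
detector.fit(X[:150], y[:150])
print("held-out accuracy:", detector.score(X[150:], y[150:]))
```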
Close-range photogrammetry with video cameras
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1985-01-01
Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
Close-Range Photogrammetry with Video Cameras
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1983-01-01
Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
1999-06-01
Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used in defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
Dynamic light scattering microscopy
NASA Astrophysics Data System (ADS)
Dzakpasu, Rhonda
An optical microscope technique, dynamic light scattering microscopy (DLSM), that images dynamically scattered light fluctuation decay rates is introduced. Using physical optics, we show theoretically that, within the optical resolution of the microscope, relative motions between scattering centers are sufficient to produce significant phase variations, resulting in interference intensity fluctuations in the image plane. The time scale for these intensity fluctuations is predicted. The spatial coherence distance defining the average distance between constructive and destructive interference in the image plane is calculated and compared with the pixel size. We experimentally tested DLSM on polystyrene latex nanospheres and living macrophage cells. In order to record these rapid fluctuations on a slow progressive-scan CCD camera, we used a thin laser line of illumination on the sample such that only a single column of pixels in the CCD camera is illuminated. This allowed the use of the rate of the column-by-column readout transfer process as the acquisition rate of the camera. This manipulation increased the data acquisition rate by at least an order of magnitude in comparison to conventional CCD camera rates defined in frames/s. Analysis of the observed fluctuations provides information regarding the rates of motion of the scattering centers. These rates, acquired from each position on the sample, are used to create a spatial map of the fluctuation decay rates. Our experiments show that with this technique we are able to achieve a good signal-to-noise ratio and can monitor fast intensity fluctuations, on the order of milliseconds. DLSM appears to provide dynamic information about fast motions within cells at a sub-optical resolution scale and provides a new kind of spatial contrast.
Spin Stabilized Impulsively Controlled Missile (SSICM)
NASA Astrophysics Data System (ADS)
Crawford, J. I.; Howell, W. M.
1985-12-01
This patent is for the Spin Stabilized Impulsively Controlled Missile (SSICM). SSICM is a missile configuration which employs spin stabilization, nutational motion, impulsive thrusting, and a body-mounted passive or semiactive sensor to achieve very small miss distances against a high-speed moving target. SSICM does not contain an autopilot, control surfaces, a control actuation system, or sensor stabilization gimbals. SSICM spins at a rate sufficient to provide frequency separation between body motions and inertial target motion. Its impulsive thrusters provide near-instantaneous changes in lateral velocity, whereas conventional missiles require a significant time delay to achieve lateral acceleration.
A traffic situation analysis system
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Rosner, Marcin
2011-01-01
The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. For example, embedded vision systems built into vehicles can be used as early warning systems, or stationary camera systems can modify the switching frequency of signals at intersections. Today the automated analysis of traffic situations is still in its infancy; the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully understood by a vision system. We present steps towards such a traffic monitoring system, which is designed to detect potentially dangerous traffic situations, especially incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system is field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in an outdoor-capable housing. Two cameras run vehicle detection software including license plate detection and recognition; one camera runs a complex pedestrian detection and tracking module based on the HOG detection principle. As a supplement, all 3 cameras use additional optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. This work describes the foundation for all 3 object detection modalities (pedestrians, vehicles, license plates), and explains the system setup and its design.
Partial camera automation in an unmanned air vehicle.
Korteling, J E; van der Borg, W
1997-03-01
The present study focused on an intelligent, semiautonomous, interface for a camera operator of a simulated unmanned air vehicle (UAV). This interface used system "knowledge" concerning UAV motion in order to assist a camera operator in tracking an object moving through the landscape below. The semiautomated system compensated for the translations of the UAV relative to the earth. This compensation was accompanied by the appropriate joystick movements ensuring tactile (haptic) feedback of these system interventions. The operator had to superimpose self-initiated joystick manipulations over these system-initiated joystick motions in order to track the motion of a target (a driving truck) relative to the terrain. Tracking data showed that subjects performed substantially better with the active system. Apparently, the subjects had no difficulty in maintaining control, i.e., "following" the active stick while superimposing self-initiated control movements over the system-interventions. Furthermore, tracking performance with an active interface was clearly superior relative to the passive system. The magnitude of this effect was equal to the effect of update-frequency (2-5 Hz) of the monitor image. The benefits of update frequency enhancement and semiautomated tracking were the greatest under difficult steering conditions. Mental workload scores indicated that, for the difficult tracking-dynamics condition, both semiautomation and update frequency increase resulted in less experienced mental effort. For the easier dynamics this effect was only seen for update frequency.
Feng, Yongqiang; Max, Ludo
2014-01-01
Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
A new position measurement system using a motion-capture camera for wind tunnel tests.
Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok
2013-09-13
Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements.
A New Position Measurement System Using a Motion-Capture Camera for Wind Tunnel Tests
Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok
2013-01-01
Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements. PMID:24064600
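The frequency domain decomposition step mentioned above can be sketched in a few lines: build the cross-spectral density matrix of the multi-channel response and take its first singular value at each frequency; peaks mark natural frequencies, and the matching singular vectors approximate mode shapes. The synthetic data and window length below are assumptions:

```python
import numpy as np
from scipy.signal import csd

def fdd_first_singular_values(acc, fs, nperseg=1024):
    """Frequency domain decomposition on multi-channel response records.

    acc: (n_channels, n_samples) displacement/acceleration records.
    Returns frequencies and the first singular value of the cross-spectral
    density matrix at each frequency.
    """
    n_ch = acc.shape[0]
    f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
    return f, s1

# Synthetic two-channel record with a 3 Hz mode.
fs = 256.0
t = np.arange(0, 60, 1 / fs)
mode = np.sin(2 * np.pi * 3.0 * t)
acc = np.vstack([1.0 * mode, 0.6 * mode]) + 0.1 * np.random.randn(2, t.size)
f, s1 = fdd_first_singular_values(acc, fs)
print("peak at %.2f Hz" % f[np.argmax(s1)])
```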
Blind image deblurring based on trained dictionary and curvelet using sparse representation
NASA Astrophysics Data System (ADS)
Feng, Liang; Huang, Qian; Xu, Tingfa; Li, Shao
2015-04-01
Motion blur is one of the most significant and common artifacts causing poor image quality in digital photography, and it can result from many factors. In the imaging process, if the objects in the scene move quickly or the camera moves during the exposure interval (e.g., camera shake or atmospheric turbulence), the image of the scene blurs along the direction of relative motion between the camera and the scene. Recently, the sparse representation model has been widely used in signal and image processing, and it is an effective way to describe natural images. In this article, a new deblurring approach based on sparse representation is proposed. An overcomplete dictionary learned from the training image samples via the KSVD algorithm is designed to represent the latent image. The motion-blur kernel can be treated as a piecewise-smooth function in the image domain, whose support is approximately a thin smooth curve, so we employ curvelets to represent the blur kernel. Both the overcomplete dictionary and the curvelet system yield highly sparse representations, which improves robustness to noise and better satisfies the observer's visual requirements. With these two priors, we construct a restoration model for blurred images and solve the resulting optimization problem with the help of an alternating minimization technique. The experimental results show that the method can preserve the texture of the original images and suppress ringing artifacts effectively.
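KSVD itself is not in scikit-learn, but MiniBatchDictionaryLearning gives the flavor of the dictionary prior: learn an overcomplete patch dictionary, then represent an image by sparse codes over it. The training data, patch size, and dictionary size below are placeholders, and the paper's curvelet kernel prior and alternating minimization are omitted:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

rng = np.random.default_rng(0)
train_img = rng.random((64, 64))                  # stand-in for training samples

# Learn an overcomplete dictionary on zero-mean 8x8 patches.
patches = extract_patches_2d(train_img, (8, 8), max_patches=2000, random_state=0)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                   batch_size=64, random_state=0)
dico.fit(patches)

# Sparse-code the patches of a degraded image and rebuild it.
noisy = train_img + 0.1 * rng.standard_normal(train_img.shape)
p = extract_patches_2d(noisy, (8, 8)).reshape(-1, 64)
means = p.mean(axis=1, keepdims=True)
codes = dico.transform(p - means)                 # sparse representation
recon_patches = (codes @ dico.components_) + means
recon = reconstruct_from_patches_2d(recon_patches.reshape(-1, 8, 8), noisy.shape)
print(float(np.abs(recon - noisy).mean()))
```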
Stabilization of exact nonlinear Timoshenko beams in space by boundary feedback
NASA Astrophysics Data System (ADS)
Do, K. D.
2018-05-01
Boundary feedback controllers are designed to stabilize Timoshenko beams with large translational and rotational motions in space under external disturbances. The exact nonlinear partial differential equations governing the motion of the beams are derived and used in the control design. The designed controllers guarantee global practical asymptotic (and local practical exponential) stability of the beam motions at the reference state. The control design, well-posedness and stability analysis are based on various relationships between the earth-fixed and body-fixed coordinates, Sobolev embeddings, and a Lyapunov-type theorem developed to study well-posedness and stability for a class of evolution systems in Hilbert space. Simulation results are included to illustrate the effectiveness of the proposed control design.
Precise Image-Based Motion Estimation for Autonomous Small Body Exploration
NASA Technical Reports Server (NTRS)
Johnson, Andrew E.; Matthies, Larry H.
1998-01-01
Space science and solar system exploration are driving NASA to develop an array of small body missions ranging in scope from near body flybys to complete sample return. This paper presents an algorithm for onboard motion estimation that will enable the precision guidance necessary for autonomous small body landing. Our techniques are based on automatic feature tracking between a pair of descent camera images followed by two frame motion estimation and scale recovery using laser altimetry data. The output of our algorithm is an estimate of rigid motion (attitude and position) and motion covariance between frames. This motion estimate can be passed directly to the spacecraft guidance and control system to enable rapid execution of safe and precise trajectories.
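A hedged OpenCV sketch of the two-frame idea described here; this is not the JPL flight algorithm: the feature tracker, the RANSAC essential-matrix step, and the simplified altimetry-based scale recovery are generic stand-ins:

```python
import cv2
import numpy as np

def two_frame_motion(img1, img2, K, range1, range2):
    """Estimate inter-frame rigid motion, with scale fixed by laser altimetry.

    img1, img2: consecutive grayscale descent-camera images (uint8).
    K: 3x3 camera intrinsic matrix.  range1, range2: altimeter ranges (m).
    """
    # Track features between the image pair.
    pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=300, qualityLevel=0.01,
                                   minDistance=7)
    pts2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None)
    good1, good2 = pts1[status == 1], pts2[status == 1]

    # Two-frame motion estimation: rotation and unit-norm translation.
    E, mask = cv2.findEssentialMat(good1, good2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, good1, good2, K, mask=mask)

    # Epipolar geometry leaves the translation magnitude unknown; the
    # altimeter's range change along the boresight fixes the metric scale
    # (a simplification of the scale-recovery step in the paper).
    scale = abs(range2 - range1) / (abs(t[2, 0]) + 1e-12)
    return R, scale * t
```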
NASA Technical Reports Server (NTRS)
1978-01-01
The large format camera (LFC) designed as a 30 cm focal length cartographic camera system that employs forward motion compensation in order to achieve the full image resolution provided by its 80 degree field angle lens is described. The feasibility of application of the current LFC design to deployment in the orbiter program as the Orbiter Camera Payload System was assessed and the changes that are necessary to meet such a requirement are discussed. Current design and any proposed design changes were evaluated relative to possible future deployment of the LFC on a free flyer vehicle or in a WB-57F. Preliminary mission interface requirements for the LFC are given.
Influence of camera parameters on the quality of mobile 3D capture
NASA Astrophysics Data System (ADS)
Georgiev, Mihail; Boev, Atanas; Gotchev, Atanas; Hannuksela, Miska
2010-01-01
We investigate the effect of camera de-calibration on the quality of depth estimation. A dense depth map is a format particularly suitable for mobile 3D capture (scalable and screen-independent). However, in a real-world scenario cameras might move from their designated positions (vibrations, temperature-induced bending). For the experiments, we create a test framework, described in the paper. We investigate how mechanical changes affect four different stereo-matching algorithms. We also assess how different geometric corrections (none, motion-compensation-like, full rectification) affect the estimation quality (how much offset can still be compensated with a "crop" over a larger CCD). Finally, we show how the estimated camera pose change (E) relates to stereo matching, which can be used as a "rectification quality" measure.
Distributed Sensing and Processing for Multi-Camera Networks
NASA Astrophysics Data System (ADS)
Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.
Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.
Real-time intra-fraction-motion tracking using the treatment couch: a feasibility study
NASA Astrophysics Data System (ADS)
D'Souza, Warren D.; Naqvi, Shahid A.; Yu, Cedric X.
2005-09-01
Significant differences between planned and delivered treatments may occur due to respiration-induced tumour motion, leading to underdosing of parts of the tumour and overdosing of parts of the surrounding critical structures. Existing methods proposed to counter tumour motion include breath-holds, gating and MLC-based tracking. Breath-holds and gating techniques increase treatment time considerably, whereas MLC-based tracking is limited to two dimensions. We present an alternative solution in which a robotic couch moves in real time in response to organ motion. To demonstrate proof-of-principle, we constructed a miniature adaptive couch model consisting of two movable platforms that simulate tumour motion and couch motion, respectively. These platforms were connected via an electronic feedback loop so that the bottom platform responded to the motion of the top platform. We tested our model with a seven-field step-and-shoot delivery case in which we performed three film-based experiments: (1) static geometry, (2) phantom-only motion and (3) phantom motion with simulated couch motion. Our measurements demonstrate that the miniature couch was able to compensate for phantom motion to the extent that the dose distributions were practically indistinguishable from those in static geometry. Motivated by this initial success, we investigated a real-time couch compensation system consisting of a stereoscopic infra-red camera system interfaced to a robotic couch known as the Hexapod™, which responds in real time to any change in position detected by the cameras. Optical reflectors placed on a solid water phantom were used as surrogates for motion. We tested the effectiveness of couch-based motion compensation for fixed-field and dynamic arc delivery cases. Due to hardware limitations, we performed film-based experiments (1), (2) and (3) with the robotic couch at a phantom motion period of 16 s and a dose rate of 100 MU min⁻¹. Analysis of film measurements showed near-equivalent dose distributions (≤2 mm agreement of corresponding isodose lines) for static geometry and motion-synchronized real-time robotic couch tracking-based radiation delivery.
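A toy discrete-time sketch of the feedback idea behind couch-based compensation: the cameras report the residual target position (with some latency), and the couch is driven to cancel it. The 50 Hz rate, three-sample latency, and integral gain are illustrative assumptions, not the Hexapod controller:

```python
import numpy as np

dt, latency_steps, gain = 0.02, 3, 0.2        # 50 Hz camera, 60 ms lag (assumed)
t = np.arange(0, 16, dt)
tumour = 10.0 * np.sin(2 * np.pi * t / 4.0)   # mm, 4 s breathing period

couch = np.zeros_like(t)
for k in range(1, len(t)):
    # The couch is driven by the most recent (delayed) camera reading of
    # the residual target motion in the room frame.
    d = max(k - latency_steps, 0)
    seen = tumour[d] + couch[d]
    couch[k] = couch[k - 1] - gain * seen

residual = tumour + couch                     # motion left in the beam frame
print("peak residual motion: %.2f mm" % np.abs(residual[200:]).max())
```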
NASA Astrophysics Data System (ADS)
Shmyrov, A.; Shmyrov, V.; Shymanchuk, D.
2017-10-01
This article considers the motion of a celestial body within the restricted three-body problem of the Sun-Earth system. The equations of controlled coupled attitude-orbit motion in the neighborhood of the collinear libration point L1 are investigated. The translational orbital motion of the celestial body is described using Hill's equations of the circular restricted three-body problem of the Sun-Earth system. Rotational orbital motion is described using Euler's dynamic equations and the quaternion kinematic equation. We investigate the stability of the body's rotational orbital motion at relative equilibrium positions, and its stabilization with the proposed control laws in the neighborhood of the collinear libration point L1. To study the stabilization problem, a Lyapunov function is constructed in the form of the sum of the kinetic energy and a special "kinematic function" of the Rodrigues-Hamilton parameters. Numerical modeling of the controlled rotational motion of the celestial body at libration point L1 is carried out. The numerical characteristics of the control parameters and rotational motion are given.
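For reference, a common normalized form of the translational dynamics referred to here (Hill units; the paper's exact formulation may differ) is
$$\ddot{x} - 2\dot{y} = 3x - \frac{x}{r^{3}} + u_x, \qquad \ddot{y} + 2\dot{x} = -\frac{y}{r^{3}} + u_y, \qquad \ddot{z} = -z - \frac{z}{r^{3}} + u_z, \qquad r = \sqrt{x^{2}+y^{2}+z^{2}},$$
where $u$ is the control acceleration. Setting $u = 0$ and all velocities and accelerations to zero recovers the collinear libration points at $x = \pm 3^{-1/3}$ on the Sun-Earth line.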
Observation and analysis of high-speed human motion with frequent occlusion in a large area
NASA Astrophysics Data System (ADS)
Wang, Yuru; Liu, Jiafeng; Liu, Guojun; Tang, Xianglong; Liu, Peng
2009-12-01
The use of computer vision technology in collecting and analyzing statistics during sports matches or training sessions is expected to provide valuable information for tactics improvement. However, the measurements published in the literature so far are either too unreliable to be used in training planning, due to their limitations, or unsuitable for studying high-speed motion in a large area with frequent occlusions. A sports annotation system is introduced in this paper for tracking high-speed non-rigid human motion over a large playing area with the aid of a moving camera, taking short track speed skating competitions as an example. The proposed system is composed of two sub-systems: precise camera motion compensation and accurate motion acquisition. In the video registration step, a distinctive invariant point feature detector (probability density grads detector) and a global-parallax-based matching-point filter are used to provide reliable and robust matching across a large range of affine distortion and illumination change. In the motion acquisition step, a joint color model constrained by the relationship of two regions and a Markov chain Monte Carlo-based joint particle filter are emphasized, dividing the human body into two related key regions. Several field tests are performed to assess measurement errors, including comparison to popular algorithms. With the presented system, position data are obtained on a large 30 m × 60 m rink with root-mean-square error better than 0.3975 m, and velocity and acceleration data with absolute error better than 1.2579 m s⁻¹ and 0.1494 m s⁻², respectively.
3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading
2011-01-01
Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. The current solutions are often not very promising for the patient; thus it would be interesting to measure the dynamic 3D deformation of the whole pelvic bone in order to get a more realistic dataset for a better implant design. We therefore hypothesized that it would be possible to combine a material testing machine with a 3D video motion capturing system, as used in clinical gait analysis, to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capturing system, were fixed to the bony surface of the pelvis specimen. While a dynamic sinusoidal load was applied, the 3D movement of the markers was recorded by the cameras, and afterwards the 3D deformation of the pelvis specimen was computed. The accuracy of the 3D movement of the markers was verified against a 3D displacement curve with a step function generated by a manually driven 3D micro-motion stage. Results The resulting accuracy of the measurement system depended on the number of cameras tracking a marker. During the stationary phase of the calibration procedure, the noise level for a marker seen by two cameras was ±0.036 mm, and ±0.022 mm if tracked by 6 cameras. The detectable 3D movement performed by the 3D micro-motion stage was smaller than the noise level of the 3D video motion capturing system. Therefore the limiting factor of the setup was the noise level, which resulted in a measurement accuracy for the dynamic test setup of ±0.036 mm. Conclusion This 3D test setup opens new possibilities in the dynamic testing of a wide range of materials, such as anatomical specimens, biomaterials, and their combinations. The resulting 3D deformation dataset can be used for a better estimation of the material characteristics of the underlying structures. This is an important factor in reliable biomechanical modelling and simulation as well as in the successful design of complex implants. PMID:21762533
Ambient-Light-Canceling Camera Using Subtraction of Frames
NASA Technical Reports Server (NTRS)
Morookian, John Michael
2004-01-01
The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period. Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and would be illuminated with only ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
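The core subtraction is simple; here is a NumPy sketch of one LED-on/LED-off frame pair (the ROI size, bit depth, and clipping choice are our assumptions):

```python
import numpy as np

def ambient_cancelled(signal_plus_bg, bg_only):
    """Subtract consecutive-frame readouts to cancel the ambient component.

    signal_plus_bg: ROI frame captured with the near-infrared LED on.
    bg_only: the next ROI frame, captured with the LED off.
    Both are unsigned-integer arrays from the same region of interest.
    """
    # Promote to a signed type so the subtraction cannot wrap around,
    # then clip: any negative values are just ambient-light fluctuation.
    diff = signal_plus_bg.astype(np.int32) - bg_only.astype(np.int32)
    return np.clip(diff, 0, None).astype(signal_plus_bg.dtype)

# Usage: alternate LED-on/LED-off frames from the camera's ROI readout.
on_frame = np.random.randint(0, 1024, (64, 64), dtype=np.uint16)
off_frame = np.random.randint(0, 1024, (64, 64), dtype=np.uint16)
eye_signal = ambient_cancelled(on_frame, off_frame)
```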
Walsh, Mark; Peper, Andreas; Bierbaum, Stefanie; Karamanidis, Kiros; Arampatzis, Adamantios
2011-04-01
The present study aimed to investigate the effect of lower extremity muscle fatigue on the dynamic stability control of physically active adults during forward falls. Thirteen participants (body mass: 70.2 kg, height: 175 cm) were instructed to regain balance with a single step after a sudden induced fall from a forward-leaning position before and after the fatigue protocol. The ground reaction forces were collected using four force plates at a sampling rate of 1080 Hz. Kinematic data were recorded with 12 Vicon cameras operating at 120 Hz. Neither the reaction time nor the duration until touchdown showed any differences (p>0.05). The ability of the subjects to prevent falling did not change after the fatigue protocol. In the fatigued condition, the participants demonstrated an increase in knee flexion during the main stance phase and an increased time to decelerate the horizontal CM motion (both p<0.05). Significant (p<0.05) decreases were seen post-fatigue in average horizontal and vertical force and maximum knee and ankle joint moments. The fatigue-related decrease in muscle strength did not affect the margin of stability, the boundary of the base of support or the position of the extrapolated centre of mass during the forward induced falls, indicating an appropriate adjustment of the motor commands to compensate for the deficit in muscle strength.
Floquet stability analysis of the longitudinal dynamics of two hovering model insects
Wu, Jiang Hao; Sun, Mao
2012-01-01
Because of the periodically varying aerodynamic and inertial forces of the flapping wings, a hovering or constant-speed flying insect is a cyclically forcing system, and, generally, the flight is not in a fixed-point equilibrium, but in a cyclic-motion equilibrium. Current stability theory of insect flight is based on the averaged model and treats the flight as a fixed-point equilibrium. In the present study, we treated the flight as a cyclic-motion equilibrium and used the Floquet theory to analyse the longitudinal stability of insect flight. Two hovering model insects were considered—a dronefly and a hawkmoth. The former had relatively high wingbeat frequency and small wing-mass to body-mass ratio, and hence very small amplitude of body oscillation; while the latter had relatively low wingbeat frequency and large wing-mass to body-mass ratio, and hence relatively large amplitude of body oscillation. For comparison, analysis using the averaged-model theory (fixed-point stability analysis) was also made. Results of both the cyclic-motion stability analysis and the fixed-point stability analysis were tested by numerical simulation using complete equations of motion coupled with the Navier–Stokes equations. The Floquet theory (cyclic-motion stability analysis) agreed well with the simulation for both the model dronefly and the model hawkmoth; but the averaged-model theory gave good results only for the dronefly. Thus, for an insect with relatively large body oscillation at wingbeat frequency, cyclic-motion stability analysis is required, and for their control analysis, the existing well-developed control theories for systems of fixed-point equilibrium are no longer applicable and new methods that take the cyclic variation of the flight dynamics into account are needed. PMID:22491980
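The Floquet test used here has a compact numerical form: integrate the linearized periodic dynamics over one wingbeat starting from the identity to obtain the monodromy matrix; the cyclic motion is stable when all its eigenvalues (Floquet multipliers) lie inside the unit circle. The toy periodic system matrix below is an assumption standing in for the insects' linearized flight dynamics:

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 1.0  # wingbeat period (normalized)

def A(t):
    # Toy periodic system matrix; the real A(t) would come from the
    # Navier-Stokes-coupled equations of motion in the paper.
    return np.array([[0.0, 1.0],
                     [-4.0 - 0.5 * np.cos(2 * np.pi * t / T), -0.3]])

def rhs(t, phi_flat):
    # Matrix ODE  dPhi/dt = A(t) Phi, integrated column by column.
    phi = phi_flat.reshape(2, 2)
    return (A(t) @ phi).ravel()

# The fundamental matrix over one period, starting from the identity,
# is the monodromy matrix; its eigenvalues are the Floquet multipliers.
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-9, atol=1e-12)
monodromy = sol.y[:, -1].reshape(2, 2)
multipliers = np.linalg.eigvals(monodromy)
print("Floquet multipliers:", multipliers)
print("cyclic motion stable:", bool(np.all(np.abs(multipliers) < 1.0)))
```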
A Single Camera Motion Capture System for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Okada, Ryuzo; Stenger, Björn
This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.
Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras
Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin
2016-01-01
The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in the fast measurement of large-scale or high-speed moving objects. The innovative line scan technology opens up new possibilities owing to its ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731
Detection of unmanned aerial vehicles using a visible camera system.
Hu, Shuowen; Goldman, Geoffrey H; Borel-Donohue, Christoph C
2017-01-20
Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.
Estimation of velocities via optical flow
NASA Astrophysics Data System (ADS)
Popov, A.; Miller, A.; Miller, B.; Stepanyan, K.
2017-02-01
This article presents an approach to using optical flow (OF) as a general navigation aid, providing information about the vehicle's linear and angular velocities. The term "OF" comes from opto-electronic devices, where it corresponds to a video sequence of images related to the camera motion over static surfaces or sets of objects. Even if the positions of these objects are unknown in advance, one can estimate the camera motion from the video sequence itself together with some metric information, such as the distance between the objects or the range to the surface. This approach is applicable to any passive observation system that is able to produce a sequence of images, such as a radio locator or sonar. Here the UAV application of the OF is considered since it is historically
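As an illustration of the principle (not the authors' estimator), dense optical flow plus a known range gives a first-order linear-velocity estimate; OpenCV's Farneback flow is used below, and the flat-surface, pure-translation, downward-looking assumptions are ours:

```python
import cv2
import numpy as np

def translational_velocity(prev_gray, next_gray, height_m, fps, focal_px):
    """Rough linear-velocity estimate from dense optical flow.

    Assumes a downward-looking camera translating over a flat static
    surface at a known range (e.g., from an altimeter); the mean flow
    then scales directly to metres per second.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    mean_flow = flow.reshape(-1, 2).mean(axis=0)     # pixels per frame
    # Pinhole model: pixel displacement * (range / focal length) gives metres.
    return mean_flow * fps * height_m / focal_px     # m/s, (vx, vy)
```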
Optic flow-based collision-free strategies: From insects to robots.
Serres, Julien R; Ruffier, Franck
2017-09-01
Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e., they cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on the feedback loops which use translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues for explaining flying insects' behaviors during short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight.
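The speed-to-distance ratio mentioned above has a standard closed form: for pure translation at speed $V$, a contrast feature at distance $D$, seen at an angle $\theta$ from the direction of travel, generates a local optic-flow magnitude
$$\omega = \frac{V}{D}\,\sin\theta,$$
so holding $\omega$ constant couples flight speed to clearance without measuring either quantity separately.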
Comparison of epicardial deformation in passive and active isolated rabbit hearts
NASA Astrophysics Data System (ADS)
Ho, Andrew; Tang, Liang; Chiang, Fu-Pen; Lin, Shien-Fong
2007-02-01
Mechanical deformation of isolated rabbit hearts through passive inflation techniques has been a viable way of replicating heart motion, but its relation to the heart's natural active contractions remains unclear. The mechanical properties of the myocardium may show diverse characteristics in tension and compression. In this study, epicardial strain was measured with the assistance of computer-aided speckle interferometry (CASI). CASI tracks the movement of clusters of particles for measuring epicardial deformation. The heart was cannulated and perfused with Tyrode's solution. Silicon carbide particles were applied onto the myocardium to form random speckle pattern images while the heart was allowed to actively contract and stabilize. High-resolution videos (1000 × 1000 pixels) of the left ventricle were taken with a complementary metal oxide semiconductor (CMOS) camera as the heart was actively contracting under electrical pacing at various cycle lengths between 250-800 ms. A latex balloon was then inserted into the left ventricle via the left atrium, and videos were taken as the balloon was repeatedly inflated and deflated at controlled volumes (1-3 ml/cycle). The videos were broken down into frames and analyzed through CASI. Active contractions resulted in non-uniform circular epicardial and uniaxial contractions at different stages of the motion. In contrast, the passive heart demonstrated very uniform expansion and contraction originating from the source of the latex balloon. The motion of the active heart caused variations in deformation and, in comparison to the passive heart, a more irregular displacement field. The active heart demonstrated areas of large displacement and others with relatively no displacement. Application of CASI was able to successfully distinguish the motions of the active and passive hearts.
3-D Velocimetry of Strombolian Explosions
NASA Astrophysics Data System (ADS)
Taddeucci, J.; Gaudin, D.; Orr, T. R.; Scarlato, P.; Houghton, B. F.; Del Bello, E.
2014-12-01
Using two synchronized high-speed cameras we were able to reconstruct the three-dimensional displacement and velocity field of bomb-sized pyroclasts in Strombolian explosions at Stromboli Volcano. Relatively low-intensity Strombolian-style activity offers a rare opportunity to observe volcanic processes that remain hidden from view during more violent explosive activity. Such processes include the ejection and emplacement of bomb-sized clasts along pure or drag-modified ballistic trajectories, in-flight bomb collision, and gas liberation dynamics. High-speed imaging of Strombolian activity has already opened new windows for the study of the abovementioned processes, but to date has only utilized two-dimensional analysis with limited motion detection and ability to record motion towards or away from the observer. To overcome this limitation, we deployed two synchronized high-speed video cameras at Stromboli. The two cameras, located sixty meters apart, filmed Strombolian explosions at 500 and 1000 frames per second and with different resolutions. Frames from the two cameras were pre-processed and combined into a single video showing frames alternating from one to the other camera. Bomb-sized pyroclasts were then manually identified and tracked in the combined video, together with fixed reference points located as close as possible to the vent. The results from manual tracking were fed to a custom software routine that, knowing the relative position of the vent and cameras, and the field of view of the latter, provided the position of each bomb relative to the reference points. By tracking tens of bombs over five to ten frames at different intervals during one explosion, we were able to reconstruct the three-dimensional evolution of the displacement and velocity fields of bomb-sized pyroclasts during individual Strombolian explosions. Shifting jet directivity and dispersal angle clearly appear from the three-dimensional analysis.
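The custom routine described above must recover 3D bomb positions from two calibrated views. A minimal sketch of one standard way to do this, linear (DLT) triangulation, follows; the projection matrices are assumed to be known from the fixed reference points, and this is not claimed to be the authors' exact implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two calibrated cameras.

    P1, P2 : 3x4 projection matrices of the two cameras.
    x1, x2 : (u, v) pixel coordinates of the same bomb in each view.
    Returns the 3D point in the common (e.g., vent-referenced) frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize to metric coordinates

# Velocity then follows from finite differences of successive 3D positions
# divided by the (known, synchronized) inter-frame interval.
```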
The kinelite project. A new powerful motion analyser for spacelab and space station
NASA Astrophysics Data System (ADS)
Venet, M.; Pinard, H.; McIntyre, J.; Berthoz, A.; Lacquaniti, F.
The goal of the Kinelite Project is to develop a space-qualified motion analysis system to be used in space by the scientific community, mainly to support neuroscience protocols. The measurement principle of Kinelite is to determine, by means of triangulation, the 3D position of small, lightweight, reflective markers positioned at the different points of interest. The scene is illuminated by infrared flashes and the reflected light is acquired by up to 8 precalibrated and synchronized CCD cameras. The main characteristics of the system are: - Camera field of view: 45°, - Number of cameras: 2 to 8, - Acquisition frequency: 25, 50, 100 or 200 Hz, - CCD format: 256 × 256, - Number of markers: up to 64, - 3D accuracy: 2 mm, - Main dimensions: 45 cm × 45 cm × 30 cm, - Mass: 23 kg, - Power consumption: less than 200 W. Kinelite will first fly aboard the NASA Spacelab; it will be used, during the NEUROLAB mission (4/98), to support the "Frames of References and Internal Models" experiment (Principal Investigator: Prof. A. Berthoz; Co-Investigators: J. McIntyre, F. Lacquaniti).
Vision robot with rotational camera for searching ID tags
NASA Astrophysics Data System (ADS)
Kimura, Nobutaka; Moriya, Toshio
2008-02-01
We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First, we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and then we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed a barcode reading robot which autonomously moves in a warehouse, locating and reading barcode ID tags using a camera and a barcode reader while moving. However, motion blur caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce this blur. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of the barcoded boxes, as sketched below. We verified the effectiveness of our method in an experimental test.
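The geometry behind that derivation can be sketched in a few lines (the symbols and numbers are ours, not the paper's): a camera translating at speed v parallel to a shelf face at distance d must pan at roughly omega = v / d to keep the nearest barcode stationary in the image.

```python
# Counter-panning rate needed to freeze a point at distance d (perpendicular
# to the travel direction) while translating at speed v: omega = v / d rad/s.
def pan_rate(v_robot, dist_to_boxes):
    return v_robot / dist_to_boxes

print(pan_rate(0.5, 2.0))  # 0.25 rad/s for a robot at 0.5 m/s, boxes at 2 m
```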
Muscle forces analysis in the shoulder mechanism during wheelchair propulsion.
Lin, Hwai-Ting; Su, Fong-Chin; Wu, Hong-Wen; An, Kai-Nan
2004-01-01
This study combines an ergometric wheelchair, a six-camera video motion capture system and a prototype computer graphics based musculoskeletal model (CGMM) to predict shoulder joint loading, muscle contraction force per muscle and the sequence of muscular actions during wheelchair propulsion, and also to provide an animated computer graphics model of the relative interactions. Five healthy male subjects with no history of upper extremity injury participated. A conventional manual wheelchair was equipped with a six-component load cell to collect three-dimensional forces and moments experienced by the wheel, allowing real-time measurement of hand/rim force applied by subjects during normal wheelchair operation. An ExpertVision six-camera video motion capture system collected trajectory data of markers attached on anatomical positions. The CGMM was used to simulate and animate muscle action by using an optimization technique combining observed muscular motions with physiological constraints to estimate muscle contraction forces during wheelchair propulsion. The CGMM provides results that satisfactorily match the predictions of previous work, disregarding minor differences which presumably result from differing experimental conditions, measurement technologies and subjects. Specifically, the CGMM shows that the supraspinatus, infraspinatus, anterior deltoid, pectoralis major and biceps long head are the prime movers during the propulsion phase. The middle and posterior deltoid and supraspinatus muscles are responsible for arm return during the recovery phase. CGMM modelling shows that the rotator cuff and pectoralis major play an important role during wheelchair propulsion, confirming the known risk of injury for these muscles during wheelchair propulsion. The CGMM successfully transforms six-camera video motion capture data into a technically useful and visually interesting animated video model of the shoulder musculoskeletal system. The CGMM further yields accurate estimates of muscular forces during motion, indicating that this prototype modelling and analysis technique will aid in study, analysis and therapy of the mechanics and underlying pathomechanics involved in various musculoskeletal overuse syndromes.
Recognition of Drainage Tunnels during Glacier Lake Outburst Events from Terrestrial Image Sequences
NASA Astrophysics Data System (ADS)
Schwalbe, E.; Koschitzki, R.; Maas, H.-G.
2016-06-01
In recent years, many glaciers all over the world have been distinctly retreating and thinning. One of the consequences of this is the increase of so-called glacier lake outburst flood events (GLOFs). The mechanisms ruling such GLOF events are still not fully understood by glaciologists. Thus, there is a demand for data and measurements that can help to understand and model the phenomena. A main issue is to obtain information about the location and formation of subglacial channels through which some lakes, dammed by a glacier, start to drain. The paper will show how photogrammetric image sequence analysis can be used to collect such data. For the purpose of detecting a subglacial tunnel, a camera was installed in a pilot study to observe the area of the Colonia Glacier (Northern Patagonian Ice Field) where it dams Lake Cachet II. To verify the hypothesis that the course of the subglacial tunnel is indicated by irregular surface motion patterns during its collapse, the camera acquired image sequences of the glacier surface during several GLOF events. Applying tracking techniques to these image sequences, surface feature motion trajectories could be obtained for a dense raster of glacier points. Since only a single camera was used for image sequence acquisition, depth information is required to scale the trajectories. Thus, for scaling and georeferencing of the measurements, a GPS-supported photogrammetric network was measured. The obtained motion fields of the Colonia Glacier deliver information about the glacier's behaviour before, during, and after a GLOF event. If the daily vertical motion of the glacier is integrated over a period of several days and projected into a satellite image, the location and shape of the drainage channel underneath the glacier become visible. The high temporal resolution of the motion fields may also allow for an analysis of the tunnel's dynamics in relation to the changing water level of the lake.
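The scaling step reduces to the pinhole relation: a pixel displacement maps to a metric surface displacement via the depth to the point. A minimal sketch with hypothetical numbers (the paper's actual scaling comes from the photogrammetric network):

```python
# dX = dx_px * Z / f : image displacement (pixels) to metric displacement,
# with Z the depth to the glacier surface point and f the focal length in
# pixel units. Values below are illustrative only.
def pixel_to_metric(dx_px, depth_m, focal_px):
    return dx_px * depth_m / focal_px

print(pixel_to_metric(3.0, 1500.0, 4000.0))  # 3 px at 1.5 km -> ~1.13 m
```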
Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.
Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong
2017-09-01
An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from camera raw images. A variance stabilization transform is widely used to stabilize the noise variance so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise of camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed that finds similar blocks by means of a type-2 fuzzy logic system (FLS). These similar blocks are then averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed algorithm effectively improves image denoising performance, and its average performance is better than that of two state-of-the-art image denoising algorithms in both subjective and objective measures.
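The abstract does not name the specific variance stabilization transform; a common choice for the Poisson-Gaussian noise of camera raw data is the generalized Anscombe transform, sketched here as a representative example rather than the paper's method:

```python
import numpy as np

# Generalized Anscombe transform (GAT): maps Poisson-Gaussian data to
# approximately unit-variance Gaussian data, after which any Gaussian
# denoiser (e.g., a block-matching scheme) can be applied, followed by an
# (ideally unbiased) inverse transform.
# gain  : sensor conversion gain; sigma : std. dev. of the Gaussian read noise.
def generalized_anscombe(z, gain=1.0, sigma=0.0):
    arg = gain * z + (3.0 / 8.0) * gain**2 + sigma**2
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

print(generalized_anscombe(np.array([10.0, 100.0, 1000.0]), gain=2.0, sigma=1.0))
```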
NASA Astrophysics Data System (ADS)
Klaessens, John H.; van der Veen, Albert; Verdaasdonk, Rudolf M.
2017-03-01
Recently, low-cost smartphone-based thermal cameras have been considered for use in clinical settings for monitoring physiological temperature responses such as body temperature change, local inflammation, perfusion changes, or (burn) wound healing. These thermal cameras contain uncooled micro-bolometers with an internal calibration check and have a temperature resolution of 0.1 degree. For clinical applications, a fast quality measurement before use is required (absolute temperature check), and quality control (stability, repeatability, absolute temperature, absolute temperature differences) should be performed regularly. Therefore, a calibrated temperature phantom has been developed based on thermistor heating at both ends of a black-coated metal strip to create a controllable temperature gradient from room temperature (26 °C) up to 100 °C. The absolute temperatures on the strip are determined with 5 software-controlled PT-1000 sensors using lookup tables. In this study, 3 FLIR ONE cameras and one high-end camera were checked with this temperature phantom. The results show relatively good agreement between both the low-cost and high-end cameras and the phantom temperature gradient, with temperature differences of 1 degree up to 6 degrees between the cameras and the phantom. The measurements were repeated to assess absolute temperature and temperature stability over the sensor area. Both low-cost and high-end thermal cameras measured relative temperature changes with high accuracy and absolute temperatures with constant deviations. Low-cost smartphone-based thermal cameras can be a good alternative to high-end thermal cameras for routine clinical measurements, appropriate to the research question, provided regular calibration checks for quality control are performed.
Computer aided photographic engineering
NASA Technical Reports Server (NTRS)
Hixson, Jeffrey A.; Rieckhoff, Tom
1988-01-01
High speed photography is an excellent source of engineering data but only provides a two-dimensional representation of a three-dimensional event. Multiple cameras can be used to provide data for the third dimension but camera locations are not always available. A solution to this problem is to overlay three-dimensional CAD/CAM models of the hardware being tested onto a film or photographic image, allowing the engineer to measure surface distances, relative motions between components, and surface variations.
ERIC Educational Resources Information Center
Flory, John
Although there have been great developments in motion picture technology, such as super 8mm film, magnetic sound, low cost color film, simpler projectors and movie cameras, and cartridge-loading projectors, there is still only limited use of audiovisual materials in the classroom today. This paper suggests some of the possible reasons for the lack…
SU-E-T-570: New Quality Assurance Method Using Motion Tracking for 6D Robotic Couches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheon, W; Cho, J; Ahn, S
Purpose: To accommodate geometrically accurate patient positioning, a robotic couch that is capable of 6 degrees of freedom has been introduced. However, conventional couch QA methods are not sufficient to enable the necessary accuracy of tests. Therefore, we have developed a camera-based motion detection and geometry calibration system for couch QA. Methods: Employing a Visual-Tracking System (VTS, BonitaB10, Vicon, UK) which tracks infrared reflective (IR) markers, camera calibration was conducted using a 5.7 × 5.7 × 5.7 cm³ cube with IR markers attached at each corner. After positioning a robotic couch at the origin with the cube on the table top, 3D coordinates of the cube's eight corners were acquired by VTS in the VTS coordinate system. Next, positions in reference coordinates (room coordinates) were assigned using the known relation between each point. Finally, camera calibration was completed by finding a transformation matrix between the VTS and reference coordinate systems and by applying a pseudo-inverse matrix method, as sketched below. After the calibration, the accuracy of linear and rotational motions as well as couch sagging could be measured by analyzing the continuously acquired data of the cube while the couch moves to a designated position. The accuracy of the developed software was verified through comparison with measurement data from a laser tracker (FARO, Lake Mary, USA) for a robotic couch installed for proton therapy. Results: The VTS system could track couch motion accurately and measure position in room coordinates. The VTS measurements and laser tracker data agreed within 1% for linear and rotational motions. Also, because the program analyzes motion in three dimensions, it can compute couch sagging. Conclusion: The developed QA system provides submillimeter/degree accuracy, which fulfills high-end couch QA requirements. This work was supported by the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning. (2013M2A2A7043507 and 2012M3A9B6055201)
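A minimal sketch of the pseudo-inverse calibration step referenced above, fitting an affine map from VTS coordinates to room coordinates by least squares (variable names are ours, not the abstract's):

```python
import numpy as np

def fit_transform(vts_pts, room_pts):
    """Fit T mapping VTS coordinates to room coordinates.

    vts_pts, room_pts : (N, 3) arrays of corresponding points (here the
    eight cube corners, so N = 8). Returns a (4, 3) matrix T such that
    [x y z 1] @ T approximates the room coordinates in the least-squares
    sense, solved with the Moore-Penrose pseudo-inverse.
    """
    N = vts_pts.shape[0]
    A = np.hstack([vts_pts, np.ones((N, 1))])   # homogeneous VTS coordinates
    return np.linalg.pinv(A) @ room_pts

def to_room(T, pts):
    # Apply the calibrated transform to any tracked points.
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ T
```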
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogunmolu, O; Gans, N; Jiang, S
Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e., regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduced to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with a delay of about 2 seconds between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
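Reduced to its simplest form, the regulation loop described above is a proportional controller closing the Kinect-measured position error through the valve command. The sketch below substitutes a trivial first-order simulation for the real Kinect and valve I/O (gains, rates, and the plant model are all our assumptions, not the study's):

```python
KP = 0.8     # proportional gain (hypothetical tuning)
DT = 0.05    # control period in seconds

state = {"pos": 0.0}          # simulated head displacement in mm

def read_head_position():     # stand-in for the Kinect face measurement
    return state["pos"]

def command_valve(u):         # stand-in for the myRIO valve command;
    state["pos"] += 0.5 * u * DT   # crude first-order IAB response model

def regulate_to(target_mm, steps=200):
    # Inflate when the head sits below the target, deflate when above.
    for _ in range(steps):
        error = target_mm - read_head_position()
        command_valve(KP * error)
    return read_head_position()

print(regulate_to(20.0))      # converges toward the 20 mm setpoint
```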
Potential for application of an acoustic camera in particle tracking velocimetry.
Wu, Fu-Chun; Shao, Yun-Chuan; Wang, Chi-Kuei; Liou, Jim
2008-11-01
We explored the potential and limitations of applying an acoustic camera as the imaging instrument of particle tracking velocimetry. The strength of the acoustic camera is its usability in low-visibility environments where conventional optical cameras are ineffective, while its applicability is limited by lower temporal and spatial resolutions. We conducted a series of experiments in which acoustic and optical cameras were used to simultaneously image the rotational motion of tracer particles, allowing for a comparison of the acoustic- and optical-based velocities. The results reveal that the greater fluctuations associated with the acoustic-based velocities are primarily attributable to the lower temporal resolution. The positive and negative biases induced by the lower spatial resolution are balanced, with the positive ones greater in magnitude but the negative ones greater in quantity. These biases decrease as the mean particle velocity increases and approach a minimum once the mean velocity exceeds the threshold value that can be sensed by the acoustic camera.
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
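A compact sketch of the two processing stages described above: frame differencing to isolate the laser spot, then conversion of the horizontal disparity to range with the standard stereo relation Z = fB/d. Function names and the threshold are illustrative, not from the patent:

```python
import numpy as np

def laser_spot(frame_off, frame_on, thresh=30):
    """Isolate the laser spot by differencing frames with the laser off/on.

    Pixels common to both frames cancel, leaving only the spot; returns the
    spot centroid in pixel coordinates.
    """
    diff = np.abs(frame_on.astype(int) - frame_off.astype(int))
    ys, xs = np.nonzero(diff > thresh)
    return xs.mean(), ys.mean()

def stereo_range(x_left, x_right, focal_px, baseline_m):
    """Range from horizontal disparity: Z = f * B / d."""
    d = x_left - x_right           # disparity between left/right spot centroids
    return focal_px * baseline_m / d

print(stereo_range(412.0, 380.0, 800.0, 0.12))  # ~3 m for a 32 px disparity
```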
NASA Astrophysics Data System (ADS)
Polyakhova, Elena; Shmyrov, Alexander; Shmyrov, Vasily
2018-05-01
Orbital maneuvering in a neighborhood of the collinear libration point L1 of the Sun-Earth system has specific properties, primarily associated with the instability of L1. For a long stay in this region of space, the problem of stabilizing the orbital motion requires a solution. Numerical experiments have shown that stabilization of the motion requires very small control influence in comparison with the gravitational forces. On the other hand, the stabilization time is quite long: months, and possibly years. This makes it highly desirable to use solar pressure forces. In this paper we illustrate the possibilities of a solar sail for solving the stabilization problem in a neighborhood of L1 with the use of a model example.
Time-lapse photogrammetry in geomorphic studies
NASA Astrophysics Data System (ADS)
Eltner, Anette; Kaiser, Andreas
2017-04-01
Image-based approaches to reconstructing the earth surface (Structure from Motion - SfM) are becoming established as a standard technology for high-resolution topographic data, owing among other advantages to their comparative ease of use and flexibility of data generation. Furthermore, the increased spatial resolution has led to implementation across a vast range of applications, from sub-mm to tens-of-km scale. Almost fully automatic calculation of referenced digital elevation models allows for a significant increase of temporal resolution as well, potentially up to sub-second scales. Thereby, the setup of a time-lapse multi-camera system is necessary and different aspects need to be considered: The camera array has to be temporally stable, or potential movements need to be compensated by temporally stable reference targets/areas. The stability of the internal camera geometry has to be considered due to a usually significantly lower number of images of the scene, and thus lower redundancy for parameter estimation, compared to more common SfM applications. Depending on the speed of surface change, synchronisation has to be very accurate. Due to the usual application in the field, changing environmental conditions important for lighting and visual range are also crucial factors to keep in mind. Besides these important considerations, time-lapse photogrammetry holds much potential. The integration of multi-sensor systems, e.g. using thermal cameras, enables the potential detection of other processes not visible with RGB images alone. Furthermore, the implementation of low-cost sensors allows for a significant increase of areal coverage and for setups at locations where a loss of the system cannot be ruled out. The usage of micro-computers offers smart camera triggering, e.g. acquiring images with increased frequency controlled by a rainfall-triggered sensor. In addition, these micro-computers can enable on-site data processing, e.g. recognition of increased surface movement, and thus might be used as a warning system in the case of natural hazards. A large variety of applications are suitable for time-lapse photogrammetry, i.e. change detection of all sorts, e.g. volumetric alterations, movement tracking or roughness changes. The multi-camera systems can be used for slope investigations, soil studies, glacier observation, snow cover measurement, volcanic surveillance or plant growth monitoring. A conceptual workflow is introduced highlighting the limits and potentials of time-lapse photogrammetry.
How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 mega-pixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” where the driving feature of the cameras was the pixel count: even moderate-cost (~$120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.
Some Thoughts on Stability in Nonlinear Periodic Focusing Systems
DOE R&D Accomplishments Database
McMillan, E. M.
1967-09-05
A brief discussion is given of the long-term stability of particle motions through periodic focusing structures containing lumped nonlinear elements. A method is presented whereby one can specify the nonlinear elements in such a way as to generate a variety of structures in which the motion has long-term stability.
Yang, Qiang; Zhang, Jie; Nozato, Koji; Saito, Kenichi; Williams, David R.; Roorda, Austin; Rossi, Ethan A.
2014-01-01
Eye motion is a major impediment to the efficient acquisition of high-resolution retinal images with the adaptive optics (AO) scanning light ophthalmoscope (AOSLO). Here we demonstrate a solution to this problem by implementing both optical stabilization and digital image registration in an AOSLO. We replaced the slow scanning mirror with a two-axis tip/tilt mirror serving the dual functions of slow scanning and optical stabilization. Closed-loop optical stabilization reduced the amplitude of eye-movement-related image motion by a factor of 10–15. The residual RMS error after optical stabilization alone was on the order of the size of foveal cones: ~1.66–2.56 μm or ~0.34–0.53 arcmin with typical fixational eye motion for normal observers. The full implementation, with real-time digital image registration, corrected the residual eye motion after optical stabilization with an accuracy of ~0.20–0.25 μm or ~0.04–0.05 arcmin RMS, which to our knowledge is more accurate than any method previously reported. PMID:25401030
Minimum Requirements for Taxicab Security Cameras*
Zeng, Shengke; Amandus, Harlan E.; Amendola, Alfred A.; Newbraugh, Bradley H.; Cantis, Douglas M.; Weaver, Darlene
2015-01-01
Problem: The homicide rate of the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Methods: Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various light and cab-seat conditions. Thirteen volunteer photograph evaluators evaluated these face photographs and voted on the minimum technical requirements for taxicab security cameras. Results: Five worst-case-scenario photographic image quality thresholds were suggested: a resolution of XGA format, highlight dynamic range of 1 EV, twilight dynamic range of 3.3 EV, lens distortion of 30%, and shutter speed of 1/30 second. Practical Applications: These minimum requirements will help taxicab regulators and fleets to identify effective taxicab security cameras, and help taxicab security camera manufacturers to improve facial identification capability. PMID:26823992
ISS Squat and Deadlift Kinematics on the Advanced Resistive Exercise Device
NASA Technical Reports Server (NTRS)
Newby, N.; Caldwell, E.; Sibonga, J.; Ploutz-Snyder, L.
2014-01-01
Visual assessment of exercise form on the Advanced Resistive Exercise Device (ARED) on orbit is difficult due to the motion of the entire device on its Vibration Isolation System (VIS). The VIS allows for two degrees of device translational motion and one degree of rotational motion. In order to minimize the forces that the VIS must damp in these planes of motion, the floor of the ARED moves as well during exercise to reduce changes in the center of mass of the system. To help trainers and other exercise personnel better assess squat and deadlift form, a tool was developed that removes the VIS motion and creates a stick-figure video of the exerciser. Another goal of the study was to determine whether any useful kinematic information could be obtained from just a single camera. Finally, the use of these data may aid in the interpretation of QCT hip structure data in response to ARED exercises performed in-flight. After obtaining informed consent, four International Space Station (ISS) crewmembers participated in this investigation. Exercise was videotaped using a single camera positioned to view the side of the crewmember during exercise on the ARED. One crewmember wore reflective tape on the toe, heel, ankle, knee, hip, and shoulder joints. This technique was not available for the other three crewmembers, so joint locations were assessed and digitized frame-by-frame by lab personnel. A custom Matlab program was used to assign two-dimensional coordinates to the joint locations throughout exercise. A second custom Matlab program was used to scale the data, calculate joint angles (see the sketch below), estimate the foot center of pressure (COP), approximate normal and shear loads, and create the VIS motion-corrected stick-figure videos. Kinematics for the squat and deadlift vary considerably for the four crewmembers in this investigation. Some have very shallow knee and hip angles, and others have quite large ranges of motion at these joints. Joint angle analysis showed that crewmembers do not return to a normal upright stance during the squat but remain somewhat bent at the hips. COP excursions were quite large during these exercises, covering the entire length of the base of support in most cases. Anterior-posterior shear was very pronounced at the bottom of the squat and deadlift, correlating with a COP shift to the toes at this part of the exercise. The stick-figure videos showing a feet-fixed reference frame have made it visually much easier for exercise personnel and trainers to assess exercise kinematics. Not returning to a fully upright, hips-extended position during squat exercises could have implications for the amount of load that is transmitted axially along the skeleton. The estimated shear loads observed in these crewmembers, along with a concomitant reduction in normal force, may also affect bone loading. The increased shear is likely due to the surprisingly large deviations in COP. Since the footplate on the ARED moves along an arced path, much of the squat and deadlift movement occurs on a tilted foot surface. This leads to COP movements away from the heel. The combination of observed kinematics and estimated kinetics makes squat and deadlift exercises on the ARED distinctly different from their ground-based counterparts. CONCLUSION: This investigation showed that some useful exercise information can be obtained at low cost, using a single video camera that is readily available on ISS. Squat and deadlift kinematics on the ISS ARED differ from ground-based ARED exercise. The amount of COP shift during these exercises sometimes approaches the limit of stability, leading to modifications in the kinematics. The COP movement and altered kinematics likely reduce the bone loading experienced during these exercises. Further, the stick-figure videos may prove to be a useful tool in assisting trainers to identify exercise form and make suggestions for improvements.
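The joint-angle step referenced above reduces, for digitized 2D coordinates, to the dot product of the two segment vectors meeting at the joint. A minimal sketch with hypothetical pixel coordinates (not the study's Matlab code):

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Included angle (degrees) at 'joint' between the two body segments."""
    a = np.asarray(proximal, float) - np.asarray(joint, float)
    b = np.asarray(distal, float) - np.asarray(joint, float)
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# e.g., hip-knee-ankle pixel coordinates give the included knee angle
print(joint_angle((310, 420), (330, 520), (325, 630)))  # ~166 deg, near-straight leg
```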
Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation
NASA Technical Reports Server (NTRS)
Lee, George
1992-01-01
A survey of systems capable of model deformation measurements was conducted. The survey included stereo cameras, scanners, and digitizers. Moiré, holographic, and heterodyne interferometry techniques were also examined. Stereo cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.
Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry
NASA Astrophysics Data System (ADS)
Feng, Shijie; Zuo, Chao; Tao, Tianyang; Hu, Yan; Zhang, Minliang; Chen, Qian; Gu, Guohua
2018-04-01
Phase-shifting profilometry (PSP) is a widely used approach to high-accuracy three-dimensional shape measurement. However, when it comes to moving objects, phase errors induced by the movement often result in severe artifacts even when a high-speed camera is in use. From our observations, there are three kinds of motion artifacts: motion ripples, motion-induced phase unwrapping errors, and motion outliers. We present a novel motion-compensated PSP to remove these artifacts in dynamic measurements of rigid objects. The phase error of motion ripples is analyzed for the N-step phase-shifting algorithm and compensated using the statistical nature of the fringes. The phase unwrapping errors are corrected by exploiting adjacent reliable pixels, and the outliers are removed by comparing the original phase map with a smoothed phase map. Compared with the three-step PSP, our method can improve the accuracy by more than 95% for objects in motion.
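For reference, the N-step phase-shifting algorithm that the compensation above builds on recovers the wrapped phase from N fringe images with equally spaced phase shifts; a textbook sketch follows (sign conventions vary between formulations):

```python
import numpy as np

def n_step_phase(frames):
    """Wrapped phase from N fringe images I_n with shifts 2*pi*n/N.

    phi = -atan2( sum_n I_n sin(2 pi n / N), sum_n I_n cos(2 pi n / N) )
    frames : list of N 2D arrays (the captured fringe images).
    Returns the wrapped phase map in (-pi, pi].
    """
    N = len(frames)
    shifts = 2.0 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(s) for I, s in zip(frames, shifts))
    den = sum(I * np.cos(s) for I, s in zip(frames, shifts))
    return -np.arctan2(num, den)
```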
A One-Axis-Controlled Magnetic Bearing and Its Performance
NASA Astrophysics Data System (ADS)
Li, Lichuan; Shinshi, Tadahiko; Kuroki, Jiro; Shimokohbe, Akira
Magnetic bearings (MBs) are complex machines in which sensors and controllers must be used to stabilize the rotor. A standard MB requires active control of five motion axes, imposing significant complexity and high cost. In this paper we report a very simple MB and its experimental testing. In this MB, the rotor is stabilized by active control of only one motion axis. The other four motion axes are passively stabilized by permanent magnets and appropriate magnetic circuit design. In rotor radial translational motion, which is passively stabilized, a resonant frequency of 205 Hz is achieved for a rotor mass of 11.5 × 10⁻³ kg. This MB features virtually zero control current and zero rotor iron loss (hysteresis and eddy current losses). Although the rotational speed and accuracy are limited by the resonance of the passively stabilized axes, the MB is still suitable for applications where cost is critical but performance is not, such as cooling fans and auxiliary support for aerodynamic bearings.
Calibration Techniques for Accurate Measurements by Underwater Camera Systems
Shortis, Mark
2015-01-01
Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172
Automation of the targeting and reflective alignment concept
NASA Technical Reports Server (NTRS)
Redfield, Robin C.
1992-01-01
The automated alignment system, described herein, employs a reflective, passive (requiring no power) target and includes a PC-based imaging system and one camera mounted on a six degree of freedom robot manipulator. The system detects and corrects for manipulator misalignment in three translational and three rotational directions by employing the Targeting and Reflective Alignment Concept (TRAC), which simplifies alignment by decoupling translational and rotational alignment control. The concept uses information on the camera and the target's relative position based on video feedback from the camera. These relative positions are converted into alignment errors and minimized by motions of the robot. The system is robust to exogenous lighting by virtue of a subtraction algorithm which enables the camera to only see the target. These capabilities are realized with relatively minimal complexity and expense.
A theoretical analysis of airplane longitudinal stability and control as affected by wind shear
NASA Technical Reports Server (NTRS)
Sherman, W. L.
1977-01-01
The longitudinal equations of motion with wind shear terms were used to analyze the stability and motions of a jet transport. A positive wind shear gives a decreasing head wind or changes a head wind into a tail wind. A negative wind shear gives a decreasing tail wind or changes a tail wind into a head wind. It was found that wind shear had very little effect on the short period mode and that negative wind shear, although it affected the phugoid, did not cause stability problems. On the other hand, it was found that positive wind shear can cause the phugoid to become aperiodic and unstable. In this case, a stability boundary for the phugoid was found that is valid for most aircraft at all flight speeds. Calculations of aircraft motions confirmed the results of the stability analysis. It was found that a flight path control automatic pilot and an airspeed control system provide good control in all types of wind shear. Appendixes give equations of motion that include the effects of downdrafts and updrafts and extend the longitudinal equations of motion for shear to six degrees of freedom.
Recent results in visual servoing
NASA Astrophysics Data System (ADS)
Chaumette, François
2008-06-01
Visual servoing techniques consist in using the data provided by a vision sensor in order to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, aerial robots, … but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning tasks, or mobile target tracking, can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing the desired degrees of freedom to be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task (see the sketch below). With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. The selected visual features determine particular properties of the system's behavior, regarding stability, robustness with respect to noise or to calibration errors, the robot's 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field by the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
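The classical control law of this field drives the feature error to zero exponentially; a minimal sketch (the interaction matrix L and feature vectors are assumed to come from the chosen features and a calibration, as discussed above):

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Classical visual servoing law: v = -lambda * pinv(L) * (s - s*).

    L      : interaction matrix relating feature velocities to the camera
             velocity screw (vx, vy, vz, wx, wy, wz).
    s      : current visual feature vector; s_star : desired feature vector.
    Returns the commanded camera velocity screw.
    """
    e = s - s_star
    return -lam * np.linalg.pinv(L) @ e
```

With a well-conditioned L this yields an exponential decrease of the error e, which is precisely the "correct realization of the task" the abstract refers to.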
Breisblatt, W M; Schulman, D S; Follansbee, W P
1991-06-01
A new miniaturized nonimaging radionuclide detector (Cardioscint, Oxford, England) was evaluated for the continuous on-line assessment of left ventricular function. This cesium iodide probe can be placed on the patient's chest and can be interfaced to an IBM compatible personal computer conveniently placed at the patient's bedside. This system can provide a beat-to-beat or gated determination of left ventricular ejection fraction and ST segment analysis. In 28 patients this miniaturized probe was correlated against a high resolution gamma camera study. Over a wide range of ejection fraction (31% to 76%) in patients with and without regional wall motion abnormalities, the correlation between the Cardioscint detector and the gamma camera was excellent (r = 0.94, SEE +/- 2.1). This detector system has high temporal (10 msec) resolution, and comparison of peak filling rate (PFR) and time to peak filling (TPFR) also showed close agreement with the gamma camera (PFR, r = 0.94, SEE +/- 0.17; TPFR, r = 0.92, SEE +/- 6.8). In 18 patients on bed rest the long-term stability of this system for measuring ejection fraction and ST segments was verified. During the monitoring period (108 +/- 28 minutes) only minor changes in ejection fraction occurred (coefficient of variation 0.035 +/- 0.016) and ST segment analysis showed no significant change from baseline. To determine whether continuous on-line measurement of ejection fraction would be useful after coronary angioplasty, 12 patients who had undergone a successful procedure were evaluated for 280 +/- 35 minutes with the Cardioscint system.(ABSTRACT TRUNCATED AT 250 WORDS)
A Surface-Coupled Optical Trap with 1-bp Precision via Active Stabilization
Okoniewski, Stephen R.; Carter, Ashley R.; Perkins, Thomas T.
2017-01-01
Optical traps can measure bead motions with Å-scale precision. However, using this level of precision to infer 1-bp motion of molecular motors along DNA is difficult, since a variety of noise sources degrade instrumental stability. In this chapter, we detail how to improve instrumental stability by (i) minimizing laser pointing, mode, polarization, and intensity noise using an acousto-optical-modulator mediated feedback loop and (ii) minimizing sample motion relative to the optical trap using a 3-axis piezo-electric-stage mediated feedback loop. These active techniques play a critical role in achieving a surface stability of 1 Å in 3D over tens of seconds and a 1-bp stability and precision in a surface-coupled optical trap over a broad bandwidth (Δf = 0.03–2 Hz) at low force (6 pN). These active stabilization techniques can also aid other biophysical assays that would benefit from improved laser stability and/or Å-scale sample stability, such as atomic force microscopy and super-resolution imaging. PMID:27844426
Performance prediction of optical image stabilizer using SVM for shaker-free production line
NASA Astrophysics Data System (ADS)
Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo
2016-04-01
Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under handshaking conditions. However, compared to a non-OIS camera module, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction that is trained with a support vector machine on the following module-characterizing features: noise spectral density of the gyroscope, and optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.
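A minimal sketch of such a classifier, assuming one feature vector per module of [gyro noise spectral density, Hall linearity, actuator linearity, cross-axis movement] and a pass/fail label from the shaker-table reference test; the data here are random placeholders, not the paper's:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 4))            # module-characterizing features (placeholder)
y = rng.integers(0, 2, 200)        # 1 = OIS quality pass, 0 = fail (placeholder)

# RBF-kernel SVM trained offline on shaker-tested modules; on the line,
# new modules are then classified from their measured features alone.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.predict(X[:5]))          # predicted pass/fail for new modules
```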
Compact full-motion video hyperspectral cameras: development, image processing, and applications
NASA Astrophysics Data System (ADS)
Kanaev, A. V.
2015-10-01
The emergence of spectral pixel-level color filters has enabled the development of hyperspectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. This new class of hyperspectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time, while simultaneously providing an operator the benefit of enhanced-discrimination color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g., detection or classification. The second is presentation of the video to an operator, which can offer the best display of the content depending on the performed task, e.g., providing spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel, or they can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several of its concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to combined multi-frame and multi-band processing.
Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori
2011-01-01
In this study, we proposed and evaluated a positional accuracy assessment method using two high-resolution digital cameras for add-on six-degrees-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. The cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle fixed on the 6D couch were taken by the cameras during translational and rotational couch motions along each axis. The coordinates of the needle in the pictures were obtained by manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. The proposed method was able to evaluate the accuracy of the motion of the 6D couch alone and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.
Relative effects of posture and activity on human height estimation from surveillance footage.
Ramstrand, Nerrolyn; Ramstrand, Simon; Brolund, Per; Norell, Kristin; Bergström, Peter
2011-10-10
Height estimations based on security camera footage are often requested by law enforcement authorities. While valid and reliable techniques have been established to determine vertical distances from video frames, there is a discrepancy between a person's true static height and their height as measured when assuming different postures or when in motion (e.g., walking). The aim of the research presented in this report was to accurately record the height of subjects as they performed a variety of activities typically observed in security camera footage and compare results to height recorded using a standard height measuring device. Forty-six able bodied adults participated in this study and were recorded using a 3D motion analysis system while performing eight different tasks. Height measurements captured using the 3D motion analysis system were compared to static height measurements in order to determine relative differences. It is anticipated that results presented in this report can be used by forensic image analysis experts as a basis for correcting height estimations of people captured on surveillance footage. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Moving target feature phenomenology data collection at China Lake
NASA Astrophysics Data System (ADS)
Gross, David C.; Hill, Jeff; Schmitz, James L.
2002-08-01
This paper describes the DARPA Moving Target Feature Phenomenology (MTFP) data collection conducted at the China Lake Naval Weapons Center's Junction Ranch in July 2001. The collection featured both X-band and Ku-band radars positioned on top of Junction Ranch's Parrot Peak. The test included seven targets used in eleven configurations with vehicle motion consisting of circular, straight-line, and 90-degree turning motion. Data was collected at 10-degree and 17-degree depression angles. Key parameters in the collection were polarization, vehicle speed, and road roughness. The collection also included a canonical target positioned at Junction Ranch's tilt-deck turntable. The canonical target included rotating wheels (military truck tire and civilian pick-up truck tire) and a flat plate with variable positioned corner reflectors. The canonical target was also used to simulate a rotating antenna and a vibrating plate. The target vehicles were instrumented with ARDS pods for differential GPS and roll, pitch and yaw measurements. Target motion was also documented using a video camera slaved to the X-band radar antenna and by a video camera operated near the target site.
In-vivo confirmation of the use of the dart thrower's motion during activities of daily living.
Brigstocke, G H O; Hearnden, A; Holt, C; Whatling, G
2014-05-01
The dart thrower's motion is a wrist rotation along an oblique plane from radial extension to ulnar flexion. We report an in-vivo study to confirm the use of the dart thrower's motion during activities of daily living. Global wrist motion in ten volunteers was recorded using a three-dimensional optoelectronic motion capture system, in which digital infra-red cameras track the movement of retro-reflective marker clusters. Global wrist motion has been approximated to the dart thrower's motion when hammering a nail, throwing a ball, drinking from a glass, pouring from a jug and twisting the lid of a jar, but not when combing hair or manipulating buttons. The dart thrower's motion is the plane of global wrist motion used during most activities of daily living. Arthrodesis of the radiocarpal joint instead of the midcarpal joint will allow better wrist function during most activities of daily living by preserving the dart thrower's motion.
Adaptive correlation filter-based video stabilization without accumulative global motion estimation
NASA Astrophysics Data System (ADS)
Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil
2014-12-01
We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information from past input frames and can track the original stabilization position. Because of this model, the proposed method does not need accumulative global motion estimation and can recover the original position even if interframe motion estimation fails. It can also intelligently handle damaged or interrupted video sequences. Moreover, because it is simple and well suited to parallel schemes, we readily implemented it on a commercial field-programmable gate array and on a graphics processing unit board with the compute unified device architecture (CUDA). Experimental results show that the proposed approach is both fast and robust.
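For context, one generic FFT-based way to estimate the global interframe translation that a stabilizer must cancel is phase correlation; the sketch below illustrates that building block only, not the authors' adaptive correlation filter:

```python
import numpy as np

def phase_correlation(f1, f2):
    """Integer-pixel global shift between two grayscale frames.

    The peak of the normalized cross-power spectrum's inverse FFT gives
    the translation of f2 relative to f1.
    """
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks beyond the half-size back to negative shifts
    if dy > f1.shape[0] // 2:
        dy -= f1.shape[0]
    if dx > f1.shape[1] // 2:
        dx -= f1.shape[1]
    return dx, dy
```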
Potential Utility of a 4K Consumer Camera for Surgical Education in Ophthalmology.
Ichihashi, Tsunetomo; Hirabayashi, Yutaka; Nagahara, Miyuki
2017-01-01
Purpose. We evaluated the potential utility of a cost-effective 4K consumer video system for surgical education in ophthalmology. Setting. Tokai University Hachioji Hospital, Tokyo, Japan. Design. Experimental study. Methods. The eyes that underwent cataract surgery, glaucoma surgery, vitreoretinal surgery, or oculoplastic surgery between February 2016 and April 2016 were recorded with 17.2 million pixels using a high-definition digital video camera (LUMIX DMC-GH4, Panasonic, Japan) and with 0.41 million pixels using a conventional analog video camera (MKC-501, Ikegami, Japan). Motion pictures of two cases for each surgery type were evaluated and classified as having poor, normal, or excellent visibility. Results. The 4K video system was easily installed by reading the instructions without technical expertise. The details of the surgical picture in the 4K system were highly improved over those of the conventional pictures, and the visual effects for surgical education were significantly improved. Motion pictures were stored for approximately 11 h with 512 GB SD memory. The total price of this system was USD 8000, which is a very low price compared with a commercial system. Conclusion. This 4K consumer camera was able to record and play back with high-definition surgical field visibility on the 4K monitor and is a low-cost, high-performing alternative for surgical facilities.
4D Animation Reconstruction from Multi-Camera Coordinates Transformation
NASA Astrophysics Data System (ADS)
Jhan, J. P.; Rau, J. Y.; Chou, C. M.
2016-06-01
Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetric techniques are adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for auto-recognition and measurement. The camera orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e. performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, and relative orientation computation shows flexibility for dynamic motion analysis, which is easier and more efficient.
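A 3D conformal (7-parameter similarity) transformation between two epochs of target coordinates can be estimated in closed form with the Kabsch/Umeyama method; the sketch below shows one standard solution, which may differ from the paper's exact formulation:

```python
import numpy as np

def conformal_3d(src, dst):
    """Fit scale s, rotation R, translation t with dst ~ s * R @ src + t.

    src, dst : (N, 3) arrays of corresponding 3D points (rows), N >= 3.
    """
    cs, cd = src.mean(0), dst.mean(0)
    A, B = src - cs, dst - cd                    # centered coordinates
    U, S, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # guard reflections
    R = (U @ D @ Vt).T                           # rotation matrix
    s = (S * np.diag(D)).sum() / (A ** 2).sum()  # similarity scale
    t = cd - s * R @ cs                          # translation vector
    return s, R, t
```

Tracking s, R, t frame by frame yields the rigid-body motion of the ETSP model through the installation steps.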
NASA Astrophysics Data System (ADS)
Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha
2012-09-01
Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if implemented in hardware. The authors propose a real-time look-up-table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time respectively, to construct an improved look-up table (ILUT), and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data and therefore can greatly reduce the cost of the CIS. The method supports not only single image capture but also bracketing to capture three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5 M CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
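A minimal software sketch of the histogram-matching LUT idea described above (per-channel, 8-bit; variable names are ours, and this approximates the ILUT construction rather than reproducing the paper's hardware design):

```python
import numpy as np

def build_lut(short_prev, long_prev):
    """LUT mapping the short-exposure histogram onto the long-exposure one."""
    cdf_s = np.cumsum(np.bincount(short_prev.ravel(), minlength=256))
    cdf_l = np.cumsum(np.bincount(long_prev.ravel(), minlength=256))
    cdf_s = cdf_s / cdf_s[-1]
    cdf_l = cdf_l / cdf_l[-1]
    # for each input level, pick the output level with the nearest CDF value
    return np.searchsorted(cdf_l, cdf_s).clip(0, 255).astype(np.uint8)

def correct(captured, lut):
    # pure per-pixel table look-up: streamable, so no frame buffer is needed
    return lut[captured]
```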
33 CFR 117.829 - Northeast Cape Fear River.
Code of Federal Regulations, 2014 CFR
2014-07-01
... maintenance authorized in accordance with Subpart A of this part. (3) Trains shall be controlled so that any... of failure or obstruction of the motion sensors, laser scanners, video cameras or marine-radio...
33 CFR 117.829 - Northeast Cape Fear River.
Code of Federal Regulations, 2013 CFR
2013-07-01
... maintenance authorized in accordance with Subpart A of this part. (3) Trains shall be controlled so that any... of failure or obstruction of the motion sensors, laser scanners, video cameras or marine-radio...
Crosswind stability of FSAE race car considering the location of the pressure center
NASA Astrophysics Data System (ADS)
Zhao, Lijun; He, Huimin; Wang, Jianfeng; Li, Yaou; Yang, Na; Liu, Yiqun
2017-09-01
An 8-DOF vehicle dynamic model of an FSAE race car was established, including the lateral, pitch, roll, and yaw motions and the rotation of the four tires. Models of the aerodynamic lateral force and of the pressure center were set up based on the vehicle speed and crosswind parameters. The simulation model was built in Simulink to analyse crosswind stability under the straight-line driving condition. Results showed that crosswind strongly influences the yaw velocity and sideslip angle.
Song, Young Seop; Yang, Kyung Yong; Youn, Kibum; Yoon, Chiyul; Yeom, Jiwoon; Hwang, Hyeoncheol; Lee, Jehee; Kim, Keewon
2016-08-01
To compare an optical motion capture system (MoCap), attitude and heading reference system (AHRS) sensors, and the Microsoft Kinect for the continuous measurement of cervical range of motion (ROM). Fifteen healthy adult subjects were asked to sit in front of the Kinect camera, with optical markers and AHRS sensors attached to the body, in a room equipped with optical motion capture cameras. Subjects were instructed to independently perform axial rotation followed by flexion/extension and lateral bending. Each movement was repeated 5 times while being measured simultaneously with the 3 devices. Using the MoCap system as the gold standard, the validity of the AHRS and Kinect measurements of cervical ROM was assessed by calculating correlation coefficients and Bland-Altman plots with 95% limits of agreement (LoA). MoCap and AHRS showed fair agreement (95% LoA<10°), while MoCap and Kinect showed less favorable agreement (95% LoA>10°) for measuring ROM in all directions. Intraclass correlation coefficient (ICC) values between MoCap and AHRS in the -40° to 40° range were excellent for flexion/extension and lateral bending (ICC>0.9) and fair for axial rotation (ICC>0.8). ICC values between MoCap and the Kinect system in the -40° to 40° range were fair for all motions. Our study showed the feasibility of using AHRS to measure cervical ROM during continuous motion with an acceptable range of error. The AHRS and Kinect systems can also be used for continuous monitoring of flexion/extension and lateral bending in the ordinary range.
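For reference, a minimal sketch of the Bland-Altman agreement computation used above, assuming paired angle series from the gold standard and the test device; the function and variable names are illustrative.

```python
import numpy as np

def bland_altman_loa(ref, test):
    """Bias and 95% limits of agreement between two angle series (deg)."""
    diff = np.asarray(test) - np.asarray(ref)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # 95% LoA half-width
    return bias, (bias - half_width, bias + half_width)
```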
NASA Technical Reports Server (NTRS)
2000-01-01
The Automated Endoscopic System for Optimal Positioning, or AESOP, was developed by Computer Motion, Inc. under an SBIR contract from the Jet Propulsion Laboratory. AESOP is a robotic endoscopic positioning system used to control the motion of a camera during endoscopic surgery. The camera, which is mounted at the end of a robotic arm, previously had to be held in place by the surgical staff. With AESOP, the robotic arm can make more precise and consistent movements. AESOP is also voice controlled by the surgeon. It is hoped that this technology can be used in space repair missions which require precision beyond human dexterity. A new generation of the same technology, the ZEUS Robotic Surgical System, can make endoscopic procedures even more successful. ZEUS allows the surgeon to control various instruments in its robotic arms, providing the precision the procedure requires.
Digital amateur observations of Venus at 0.9μm
NASA Astrophysics Data System (ADS)
Kardasis, E.
2017-09-01
Venus' atmosphere is extremely dynamic, though it is very difficult to observe any features on it in the visible and even in the near-IR range. Digital observations with planetary cameras in recent years routinely produce high-quality images, especially in the near-infrared (0.7-1 μm), since IR wavelengths are less influenced by Earth's atmosphere and Venus' atmosphere is partially transparent in this spectral region. Continuous observations over a few hours may track dark atmospheric features on the dayside and determine their motion. In this work we present such observations and some dark-feature motion measurements at 0.9 μm. Ground-based observations at this wavelength are rare and are complementary to in situ observations by JAXA's Akatsuki orbiter, which studies the atmospheric dynamics of Venus in this band with the IR1 camera.
Patient positioning in radiotherapy based on surface imaging using time of flight cameras.
Gilles, M; Fayad, H; Miglierini, P; Clement, J F; Scheib, S; Cozzi, L; Bert, J; Boussion, N; Schick, U; Pradier, O; Visvikis, D
2016-08-01
To evaluate the patient positioning accuracy in radiotherapy using a stereo-time of flight (ToF)-camera system. A system using two ToF cameras was used to scan the surface of the patients in order to position them daily on the treatment couch. The obtained point clouds were registered to (a) detect translations applied to the table (intrafraction motion) and (b) predict the displacement to be applied in order to place the patient in its reference position (interfraction motion). The measures provided by this system were compared to the effectively applied translations. The authors analyzed 150 fractions including lung, pelvis/prostate, and head and neck cancer patients. The authors obtained small absolute errors for displacement detection: 0.8 ± 0.7, 0.8 ± 0.7, and 0.7 ± 0.6 mm along the vertical, longitudinal, and lateral axes, respectively, and 0.8 ± 0.7 mm for the total norm displacement. Lung cancer patients presented the largest errors with a respective mean of 1.1 ± 0.9, 0.9 ± 0.9, and 0.8 ± 0.7 mm. The proposed stereo-ToF system allows for sufficient accuracy and faster patient repositioning in radiotherapy. Its capability to track the complete patient surface in real time could allow, in the future, not only for an accurate positioning but also a real time tracking of any patient intrafraction motion (translation, involuntary, and breathing).
Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design
2016-10-01
study of the resulting videos led to a new prosthetics-use taxonomy that is generalizable to various levels of amputation and terminal devices. The...taxonomy was applied to classification of the recorded videos via custom tagging software with midi controller interface. The software creates...a motion capture studio and video cameras to record accurate and detailed upper body motion during a series of standardized tasks. These tasks are
NASA Astrophysics Data System (ADS)
Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir; Azraai, Nur Zaidi
2017-07-01
In the Malay world, there are traditional spirit rituals used in healing practices and in everyday life. Malay martial arts (silat) are no exception: some branches of silat include spirit rituals that practitioners say help them in combat. In this paper we do not use any ritual; instead, we apply a topical medicine and change the environment while the performers execute their moves. Two performers (fighters) were selected, one with experience in martial arts training and one without. Motion capture (MOCAP) cameras were used to observe and analyze the movements: eight cameras were placed in the MOCAP room, two on each wall, all facing the center of the room so that every angle is covered. This prevents losing track of the markers stamped on the performers' limbs. Passive markers were used, which reflect infrared light back to the camera sensor; the infrared is generated by sources around the camera lens. A 60 kg punching bag hung from an iron bar served as the target for the performers' punches. Markers were also stamped on the punching bag so that its swing when hit could be measured. Each performer performed two moves with the same position and posture under each of three conditions, with the environment changed without the performer's knowledge: the first two punches in a normal environment, the second set while positive music was played to change the performer's mood, and the third after a medicine (cream/oil) that makes the skin feel slightly hot was applied. The process was repeated for the performer with no experience. The marker positions were analyzed with the Cortex Motion Analysis software, from which the kinetics and kinematics of the performers can be estimated. The results show an increase in kinetics for every body part due to the environment changes, with different results for the two performers.
Variation in detection among passive infrared triggered-cameras used in wildlife research
Damm, Philip E.; Grand, James B.; Barnett, Steven W.
2010-01-01
Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.
Ghorbanpour, Arsalan; Azghani, Mahmoud Reza; Taghipour, Mohammad; Salahzadeh, Zahra; Ghaderi, Fariba; Oskouei, Ali E
2018-04-01
[Purpose] The aim of this study was to compare the effects of McGill stabilization exercises and conventional physiotherapy on pain, functional disability, and active back flexion and extension range of motion in patients with chronic non-specific low back pain. [Subjects and Methods] Thirty-four patients with chronic non-specific low back pain were randomly assigned to a McGill stabilization exercises group (n=17) or a conventional physiotherapy group (n=17). In both groups, patients performed the corresponding exercises for six weeks. The visual analog scale (VAS), the Quebec Low Back Pain Disability Scale questionnaire, and an inclinometer were used to measure pain, functional disability, and active back flexion and extension range of motion, respectively. [Results] Statistically significant improvements were observed in pain, functional disability, and active back extension range of motion in the McGill stabilization exercises group. However, active back flexion range of motion was the only clinical measure that statistically increased in patients who performed conventional physiotherapy. There was no significant difference in clinical characteristics between the two groups. [Conclusion] The results of this study indicated that McGill stabilization exercises and conventional physiotherapy provided approximately similar improvements in pain, functional disability, and active back range of motion in patients with chronic non-specific low back pain. However, it appears that McGill stabilization exercises provide an additional benefit, especially in pain and functional disability improvement.
The stability of individual patterns of autonomic responses to motion sickness stimulation
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.; Toscano, William B.; Naifeh, Karen H.
1990-01-01
As part of a program to develop a treatment for motion sickness based on self-regulation of autonomic nervous system (ANS) activity, this study examined the stability of an individual's pattern of ANS responses to motion sickness stimulation on repeated occasions. Motion sickness symptoms were induced in 58 people during two rotating-chair tests. The physiological responses measured were heart rate, finger pulse volume, respiration rate, and skin conductance. Using standard scores, the stability of responses of specific magnitudes across both tests was examined. Correlational analyses, analysis of variance, and a components-of-variance analysis all revealed marked, but quite stable, individual differences in ANS responses to both mild and severe motion sickness. These findings confirm the prior observation that people are sufficiently unique in their ANS responses to motion sickness provocation to make it necessary to individually tailor self-regulation training. Further, these data support the contention that individual ANS patterns are sufficiently consistent from test to test to serve as an objective indicator of individual motion sickness malaise levels.
NASA Astrophysics Data System (ADS)
Rodriguez, Steven; Jaworski, Justin
2017-11-01
The impact of above-rated wave-induced motions on the stability of floating offshore wind turbine near-wakes is studied numerically. The rotor near-wake is generated using a lifting-line free vortex wake method, which is strongly coupled to a finite element solver for kinematically nonlinear blade deformations. A synthetic time series of relatively high-amplitude, high-frequency platform motions, representative of above-rated conditions for the NREL 5 MW reference wind turbine, is imposed on the rotor structure. To evaluate the impact of these above-rated conditions, a linear stability analysis is first performed on the near-wake generated by a fixed-tower wind turbine configuration at above-rated inflow conditions. The platform motion is then introduced via the synthetic time series, and a stability analysis is performed on the wake generated by the floating offshore wind turbine at the same above-rated inflow conditions. The stability trends (disturbance modes versus the divergence rate of vortex structures) of the two analyses are compared to identify the impact that above-rated wave-induced structural motions have on the stability of the floating offshore wind turbine wake.
Real-time marker-free motion capture system using blob feature analysis
NASA Astrophysics Data System (ADS)
Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho
2005-02-01
This paper presents a real-time marker-free motion capture system which can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The proposed system captures human motions using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. The 3-dimensional positions of the end-effectors are then reconstructed and tracked using a Kalman filter. Finally, the positions of the intermediate joints are reconstructed using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct the motions of many people wearing various clothes in real time.
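For the tracking stage, a minimal constant-velocity Kalman filter over 3D blob centroids might look like the following; the frame rate and noise covariances are tuning assumptions, not values from the paper.

```python
import numpy as np

dt = 1.0 / 30.0                               # assumed camera frame interval
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # constant-velocity state model
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
Q = 1e-3 * np.eye(6)                          # process noise (assumption)
R = 1e-2 * np.eye(3)                          # measurement noise (assumption)

x = np.zeros(6)                               # [px, py, pz, vx, vy, vz]
P = np.eye(6)

def kalman_step(z):
    """One predict/update cycle for a blob centroid measurement z (3,)."""
    global x, P
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(6) - K @ H) @ P_pred
    return x[:3]                              # filtered end-effector position
```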
LabVIEW application for motion tracking using USB camera
NASA Astrophysics Data System (ADS)
Rob, R.; Tirian, G. O.; Panoiu, M.
2017-05-01
The technical state of the contact line and of the additional equipment in electric rail transport is very important for the repair and maintenance of the contact line. During operation, the pantograph motion must stay within standard limits. This paper proposes a LabVIEW application that can track the motion of a laboratory pantograph in real time and acquire the tracking images. A USB webcam connected to a computer acquires the desired images. The laboratory pantograph contains an automatic system which simulates the real motion. The tracked parameters are the horizontal motion (zigzag) and the vertical motion, which can be studied in separate diagrams. The LabVIEW application requires the appropriate vision-development toolkits; the paper therefore describes the subroutines programmed for real-time image acquisition and for data processing.
Enhancing physics demos using iPhone slow motion
NASA Astrophysics Data System (ADS)
Lincoln, James
2017-12-01
Slow motion video enhances our ability to perceive and experience the physical world. This can help students and teachers especially in cases of fast moving objects or detailed events that happen too quickly for the eye to follow. As often as possible, demonstrations should be performed by the students themselves and luckily many of them will already have this technology in their pockets. The "S" series of iPhone has the slow motion video feature standard, which also includes simultaneous sound recording (somewhat unusual among slow motion cameras). In this article I share some of my experiences using this feature and provide advice on how to successfully use this technology in the classroom.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility in various fields, such as telerobotics and applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source, such as a laser, mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis, and yaw axis of the video camera based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser and then eliminating common pixels from the subsequent digital image, which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine the range to the target.
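A minimal sketch of the two steps the patent describes (isolating the laser spot by frame differencing, then ranging from its disparity), assuming a simple parallel-axis camera/laser geometry; thresholds and names are illustrative, not from the patent.

```python
import numpy as np

def find_laser_spot(frame_off, frame_on, thresh=40):
    """Locate the laser spot by differencing frames with the laser off/on."""
    diff = frame_on.astype(int) - frame_off.astype(int)
    ys, xs = np.nonzero(diff > thresh)      # pixels brightened by the laser
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()             # spot centroid (pixels)

def range_from_disparity(spot_x, ref_x, baseline_m, focal_px):
    """Triangulate range from the spot's pixel disparity to the reference
    point, given the camera/laser baseline (parallel-axis pinhole model)."""
    disparity = abs(spot_x - ref_x)
    return baseline_m * focal_px / disparity  # meters
```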
Development of a 300,000-pixel ultrahigh-speed high-sensitivity CCD
NASA Astrophysics Data System (ADS)
Ohtake, H.; Hayashida, T.; Kitamura, K.; Arai, T.; Yonai, J.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Poggemann, D.; Ruckelshausen, A.; van Kuijk, H.; Bosiers, Jan T.
2006-02-01
We are developing an ultrahigh-speed, high-sensitivity broadcast camera that is capable of capturing clear, smooth slow-motion videos even where lighting is limited, such as at professional baseball games played at night. In earlier work, we developed an ultrahigh-speed broadcast color camera [1] using three 80,000-pixel ultrahigh-speed, high-sensitivity CCDs [2]. This camera had about ten times the sensitivity of standard high-speed cameras, and enabled an entirely new style of presentation for sports broadcasts and science programs. Most notably, increasing the pixel count is crucially important for applying ultrahigh-speed, high-sensitivity CCDs to HDTV broadcasting. This paper provides a summary of our experimental development aimed at improving the resolution of the CCD even further: a new ultrahigh-speed, high-sensitivity CCD that increases the pixel count four-fold, to 300,000 pixels.
Expedition One CDR Shepherd with IMAX camera
2001-02-11
STS98-E-5164 (11 February 2001) --- Astronaut William M. (Bill) Shepherd documents activity onboard the newly attached Destiny laboratory using an IMAX motion picture camera. The crews of Atlantis and the International Space Station on February 11 opened the Destiny laboratory and spent the first full day of what are planned to be years of work ahead inside the orbiting science and command center. Shepherd opened the Destiny hatch, and he and Shuttle commander Kenneth D. Cockrell ventured inside at 8:38 a.m. (CST). Members of both crews went to work quickly inside the new module, activating air systems, fire extinguishers, alarm systems, computers and internal communications. The crew also continued equipment transfers from the shuttle to the station and filmed several scenes onboard the station using an IMAX camera. This scene was recorded with a digital still camera.
NASA Technical Reports Server (NTRS)
Gunter, E. J.; Humphris, R. R.; Springer, H.
1983-01-01
In this paper, some of the effects of unbalance on the nonlinear response and stability of flexible rotor-bearing systems are presented from both a theoretical and an experimental standpoint. In a linear system operating above its stability threshold, the amplitude of motion grows exponentially with time and the orbits become unbounded. In an actual system, this is not necessarily the case. The actual amplitudes of motion may be bounded due to various nonlinear effects in the system. These nonlinear effects cause limit cycles of motion. Nonlinear effects are inherent in fluid-film bearings and seals. Other contributors to nonlinear effects are shafts, couplings, and foundations. In addition to affecting the threshold of stability, the nonlinear effects can cause jump phenomena to occur not only at the critical speeds, but also at stability onset or restabilization speeds.
Motion correction for improved estimation of heart rate using a visual spectrum camera
NASA Astrophysics Data System (ADS)
Tarbox, Elizabeth A.; Rios, Christian; Kaur, Balvinder; Meyer, Shaun; Hirt, Lauren; Tran, Vy; Scott, Kaitlyn; Ikonomidou, Vasiliki
2017-05-01
Heart rate measurement using a visual-spectrum recording of the face has drawn interest over the last few years as a technology with various health and security applications. In our previous work, we have shown that it is possible to estimate heartbeat timing accurately enough to perform heart rate variability analysis for contactless stress detection. However, a major confounding factor in this approach is the presence of movement, which can interfere with the measurements. To mitigate the effects of movement, in this work we propose the use of face detection and tracking based on the Karhunen-Loewe algorithm to counteract measurement errors introduced by normal subject motion, as expected in a common seated-conversation setting. We analyze the requirements on image acquisition for the algorithm to work and its performance under different ranges of motion and changes of distance to the camera, as well as the effect of illumination changes due to different positioning with respect to light sources. Our results suggest that the effect of face tracking on visual-spectrum-based cardiac signal estimation depends on the amplitude of the motion. For larger-scale conversation-induced motion it can significantly improve estimation accuracy, whereas for smaller-scale movements, such as those caused by breathing or talking without major movement, errors in facial tracking may interfere with signal estimation. Overall, employing facial tracking is a crucial step in adapting this technology to real-life situations with satisfactory results.
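To make the pipeline concrete, here is a hedged sketch of the signal-extraction stage. It substitutes OpenCV's stock Haar face detector for the paper's Karhunen-Loewe-based tracking and simply averages the green channel over the detected face region; the file name and parameters are illustrative.

```python
import cv2

cap = cv2.VideoCapture("face.mp4")            # hypothetical input clip
face = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = face.detectMultiScale(gray, 1.3, 5)
    if len(boxes):
        x, y, w, h = boxes[0]                 # per-frame re-detection as a
        roi = frame[y:y + h, x:x + w]         # crude stand-in for tracking
        signal.append(roi[:, :, 1].mean())    # mean green-channel value
# detrend and band-pass `signal` (e.g., 0.7-4 Hz) before beat detection
```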
A Radiation-Triggered Surveillance System for UF6 Cylinder Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, Michael M.; Myjak, Mitchell J.
This report provides background information and representative scenarios for testing a prototype radiation-triggered surveillance system at an operating facility that handles uranium hexafluoride (UF6) cylinders. The safeguards objective is to trigger cameras using radiation, or radiation and motion, rather than motion alone, to reduce significantly the number of image files generated by a motion-triggered system. The authors recommend the use of radiation-triggered surveillance at all facilities where cylinder paths are heavily traversed by personnel. The International Atomic Energy Agency (IAEA) has begun using surveillance cameras in the feed and withdrawal areas of gas centrifuge enrichment plants (GCEPs). The cameras generate imagery using elapsed time or motion, but this creates problems in areas occupied 24/7 by personnel. Either motion- or interval-based triggering generates thousands of review files over the course of a month. Since inspectors must review the files to verify operator material-flow declarations, a plethora of files significantly extends the review process. The primary advantage of radiation-triggered surveillance is the opportunity to obtain full-time cylinder throughput verification versus what presently amounts to part-time verification. Cost savings should be substantial, as the IAEA presently uses frequent unannounced inspections to verify cylinder-throughput declarations. The use of radiation-triggered surveillance allows the IAEA to implement less frequent unannounced inspections for the purpose of flow verification, but its principal advantage is significantly shorter and more effective inspector video reviews.
NASA Technical Reports Server (NTRS)
Gradl, Paul
2016-01-01
Paired images were collected using a projected pattern instead of the standard painted speckle pattern on the subject's abdomen. High-speed cameras were post-triggered after movements were felt. Data were collected at 120 fps, limited by the 60 Hz refresh frequency of the projector. To ensure that the kick and movement data were real, a background test with no baby movement was conducted (to correct for breathing and body motion).
Methods and new approaches to the calculation of physiological parameters by videodensitometry
NASA Technical Reports Server (NTRS)
Kedem, D.; Londstrom, D. P.; Rhea, T. C., Jr.; Nelson, J. H.; Price, R. R.; Smith, C. W.; Graham, T. P., Jr.; Brill, A. B.; Kedem, D.
1976-01-01
A complex system featuring a video camera connected to a video disk, a cine (medical motion picture) camera, and a PDP-9 computer with various input/output facilities has been developed. This system enables quantitative analysis of various functions recorded in clinical studies. Several studies are described, such as heart chamber volume calculations, left ventricular ejection fraction, and blood flow through the lungs, as well as the possibility of obtaining information about blood flow and constrictions in small cross-section vessels.
Toward Active Control of Noise from Hot Supersonic Jets
2012-05-14
was developed that would allow for easy data sharing among the research teams. This format includes the acoustic data along with all calibration information. [Figure residue: (a) Far-Field Array Calibration; (b) MHz-Rate PIV Camera Setup.] A plenoptic camera is a similar setup to determine 3-D motion of the flow using a thick light sheet. 2.3 Update on CFD Progress: In the previous interim
Wind Tunnel Tests of the Space Shuttle Foam Insulation with Simulated Debonded Regions
1981-04-01
[Nomenclature residue: data set identification number; gage sensitivity; calculated gage sensitivity S2 = S1 * f(TGE); material specimen identification designation; free-stream...] Color motion pictures (2 cameras) and pre- and posttest color stills recorded any changes in the samples. The movie cameras were operated at... The oblique shock wave generated by the wedge reduces the free-stream Mach number to the desired local Mach number. Since the free-stream
SeaVipers- Computer Vision and Inertial Position/Reference Sensor System (CVIPRSS)
2015-08-01
uses an Inertial Measurement Unit (IMU) to detect changes in roll, pitch, and yaw (x-, y-, and z-axis movement). We use a 9DOF Razor IMU from SparkFun... inertial measurement unit (IMU) and cameras that are hardware synchronized to provide close coupling. Several fast food companies, Internet giants like... light cameras [32]. 4.1.4 Inertial Measurement Unit: To assist the PTU in video stabilization for the camera and aiming the rangefinder, SeaVipers
NASA Technical Reports Server (NTRS)
1994-01-01
In laparoscopic surgery, tiny incisions are made in the patient's body and a laparoscope (an optical tube with a camera at the end) is inserted. The camera's image is projected onto two video screens, whose views guide the surgeon through the procedure. AESOP, a medical robot developed by Computer Motion, Inc. with NASA assistance, eliminates the need for a human assistant to operate the camera. The surgeon uses a foot pedal control to move the device, allowing him to use both hands during the surgery. Miscommunication is avoided; AESOP's movement is smooth and steady, and the memory vision is invaluable. Operations can be completed more quickly, and the patient spends less time under anesthesia. AESOP has been approved by the FDA.
Stability of Dynamical Systems with Discontinuous Motions:
NASA Astrophysics Data System (ADS)
Michel, Anthony N.; Hou, Ling
In this paper we present a stability theory for discontinuous dynamical systems (DDS): continuous-time systems whose motions are not necessarily continuous with respect to time. We show that this theory is not only applicable in the analysis of DDS, but also in the analysis of continuous dynamical systems (continuous-time systems whose motions are continuous with respect to time), discrete-time dynamical systems (systems whose motions are defined at discrete points in time) and hybrid dynamical systems (HDS) (systems whose descriptions involve simultaneously continuous-time and discrete-time). We show that the stability results for DDS are in general less conservative than the corresponding well-known classical Lyapunov results for continuous dynamical systems and discrete-time dynamical systems. Although the DDS stability results are applicable to general dynamical systems defined on metric spaces (divorced from any kind of description by differential equations, or any other kinds of equations), we confine ourselves to finite-dimensional dynamical systems defined by ordinary differential equations and difference equations, to make this paper as widely accessible as possible. We present only sample results, namely, results for uniform asymptotic stability in the large.
Brown, David M; Juarez, Juan C; Brown, Andrea M
2013-12-01
A laser differential image-motion monitor (DIMM) system was designed and constructed as part of a turbulence characterization suite during the DARPA free-space optical experimental network experiment (FOENEX) program. The developed link measurement system measures the atmospheric coherence length (r0), atmospheric scintillation, and power in the bucket for the 1550 nm band. DIMM measurements are made with two separate apertures coupled to a single InGaAs camera. The angle of arrival (AoA) for the wavefront at each aperture can be calculated based on focal spot movements imaged by the camera. By utilizing a single camera for the simultaneous measurement of the focal spots, the correlation of the variance in the AoA allows a straightforward computation of r0 as in traditional DIMM systems. Standard measurements of scintillation and power in the bucket are made with the same apertures by redirecting a percentage of the incoming signals to InGaAs detectors integrated with logarithmic amplifiers for high sensitivity and high dynamic range. By leveraging two, small apertures, the instrument forms a small size and weight configuration for mounting to actively tracking laser communication terminals for characterizing link performance.
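As a sketch of the r0 computation from differential spot motion, assuming the centroid angles of arrival have already been extracted from the camera images; the conversion uses the commonly cited Sarazin-Roddier longitudinal approximation, which may differ from the exact processing in this instrument.

```python
import numpy as np

def r0_from_dimm(dx, wavelength, D, d):
    """Estimate the Fried parameter r0 from DIMM spot motion.

    dx: array of longitudinal differential angle-of-arrival values
        between the two apertures, in radians (one per frame).
    wavelength, D (aperture diameter), d (aperture separation): meters.
    """
    var_l = np.var(dx, ddof=1)
    # Sarazin & Roddier (1990) longitudinal coefficient
    K = 2.0 * (0.179 * D ** (-1 / 3) - 0.0968 * d ** (-1 / 3))
    return (K * wavelength ** 2 / var_l) ** (3 / 5)   # meters
```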
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.
1994-01-01
Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
Video Altimeter and Obstruction Detector for an Aircraft
NASA Technical Reports Server (NTRS)
Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.
2013-01-01
Video-based altimetric and obstruction-detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope-based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then, by use of trigonometric relationships among the aircraft velocity, the attitude of the camera, the angular relative motion, and the altitude, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map showing locations of potential obstructions. The depth map can be used as a real-time hazard display and/or to update an obstruction database.
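For level flight with a nadir-pointing camera, the core altimetric relation reduces to h = v / ω; a minimal sketch with attitude corrections omitted and illustrative names:

```python
def altitude_from_flow(ground_speed, pixel_rate, ifov):
    """Altitude from the optical flow of the ground (nadir camera).

    ground_speed: aircraft speed over ground (m/s), e.g., from GPS
    pixel_rate:   measured image motion of the ground (pixels/s)
    ifov:         angular size of one pixel (rad/pixel)

    The ground sweeps past at angular rate w = v / h, so h = v / w.
    """
    angular_rate = pixel_rate * ifov     # rad/s
    return ground_speed / angular_rate   # meters
```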
Camera-pose estimation via projective Newton optimization on the manifold.
Sarkis, Michel; Diepold, Klaus
2012-04-01
Determining the pose of a moving camera is an important task in computer vision. In this paper, we derive a projective Newton algorithm on the manifold to refine the pose estimate of a camera. The main idea is to benefit from the fact that the 3-D rigid motion is described by the special Euclidean group, which is a Riemannian manifold. The latter is equipped with a tangent space defined by the corresponding Lie algebra. This enables us to compute the optimization direction, i.e., the gradient and the Hessian, at each iteration of the projective Newton scheme on the tangent space of the manifold. Then, the motion is updated by projecting back the variables on the manifold itself. We also derive another version of the algorithm that employs homeomorphic parameterization to the special Euclidean group. We test the algorithm on several simulated and real image data sets. Compared with the standard Newton minimization scheme, we are now able to obtain the full numerical formula of the Hessian with a 60% decrease in computational complexity. Compared with Levenberg-Marquardt, the results obtained are more accurate while having a rather similar complexity.
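A sketch of the manifold machinery the abstract relies on: the se(3) exponential map used to project a tangent-space Newton step back onto the special Euclidean group. The update rule in the trailing comment is a generic scheme, not the paper's exact algorithm.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def se3_exp(xi):
    """Exponential map from a twist xi = (w, v) in se(3) to SE(3)."""
    w, v = xi[:3], xi[3:]
    th = np.linalg.norm(w)
    W = hat(w)
    if th < 1e-10:
        R, V = np.eye(3), np.eye(3)
    else:
        R = (np.eye(3) + np.sin(th) / th * W
             + (1 - np.cos(th)) / th ** 2 * W @ W)        # Rodrigues formula
        V = (np.eye(3) + (1 - np.cos(th)) / th ** 2 * W
             + (th - np.sin(th)) / th ** 3 * W @ W)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

# Newton-style step: solve H @ delta = -g on the tangent space, then
# re-project onto the manifold:  T = se3_exp(delta) @ T
```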
Coughlan, James; Manduchi, Roberto
2009-06-01
We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users.
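One plausible way such a system turns a detected marker into a distance is the pinhole size-to-range relation; this sketch is an assumption for illustration, not necessarily the authors' exact computation.

```python
def marker_distance(marker_px, marker_m, focal_px):
    """Pinhole-model range to a color marker of known physical size.

    marker_px: detected marker width in the image (pixels)
    marker_m:  true marker width (meters)
    focal_px:  camera focal length (pixels)
    """
    return focal_px * marker_m / marker_px   # meters
```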
1945-03-07
[Garbled OCR of a 1945 military occupational-specialty listing; recoverable entries include: Cameraman, Motion Picture (043); Film Editor, Motion Picture (131); Fingerprinter (307); Fire Fighter (383); Registered Nurse (225); Repairman, Camera (042); Repairman, Canvas Cover (044); Repairman, Central Office (095).]
Scanning and storage of electrophoretic records
McKean, Ronald A.; Stiegman, Jeff
1990-01-01
An electrophoretic record that includes at least one gel separation is mounted for motion laterally of the separation record. A light source is positioned to illuminate at least a portion of the record, and a linear array camera is positioned to have a field of view of the illuminated portion of the record and orthogonal to the direction of record motion. The elements of the linear array are scanned at increments of motion of the record across the field of view to develop a series of signals corresponding to intensity of light at each element at each scan increment.
Full-frame video stabilization with motion inpainting.
Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung
2006-07-01
Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
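A hedged sketch of the completion idea, using a single global homography to pull pixels from a neighboring frame; the paper goes further, with local alignment, motion inpainting for spatio-temporal consistency, and deblurring.

```python
import cv2
import numpy as np

def fill_from_neighbor(cur, neighbor):
    """Fill undefined (black) pixels of a stabilized frame by warping a
    neighboring frame into alignment (global-alignment step only)."""
    g1 = cv2.cvtColor(neighbor, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    m = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[x.queryIdx].pt for x in m]).reshape(-1, 1, 2)
    dst = np.float32([k2[x.trainIdx].pt for x in m]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(neighbor, H, (cur.shape[1], cur.shape[0]))
    hole = cur == 0                     # pixels left empty after stabilization
    out = cur.copy()
    out[hole] = warped[hole]
    return out
```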
A Surface-Coupled Optical Trap with 1-bp Precision via Active Stabilization.
Okoniewski, Stephen R; Carter, Ashley R; Perkins, Thomas T
2017-01-01
Optical traps can measure bead motions with Å-scale precision. However, using this level of precision to infer 1-bp motion of molecular motors along DNA is difficult, since a variety of noise sources degrade instrumental stability. In this chapter, we detail how to improve instrumental stability by (1) minimizing laser pointing, mode, polarization, and intensity noise using an acousto-optical-modulator mediated feedback loop and (2) minimizing sample motion relative to the optical trap using a three-axis piezo-electric-stage mediated feedback loop. These active techniques play a critical role in achieving a surface stability of 1 Å in 3D over tens of seconds and a 1-bp stability and precision in a surface-coupled optical trap over a broad bandwidth (Δf = 0.03-2 Hz) at low force (6 pN). These active stabilization techniques can also aid other biophysical assays that would benefit from improved laser stability and/or Å-scale sample stability, such as atomic force microscopy and super-resolution imaging.
Image system for three dimensional, 360°, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
Synthesis of a controller for stabilizing the motion of a rigid body about a fixed point
NASA Astrophysics Data System (ADS)
Zabolotnov, Yu. M.; Lobanov, A. A.
2017-05-01
A method for the approximate design of an optimal controller for stabilizing the motion of a rigid body about a fixed point is considered. It is assumed that the rigid body motion is close to the motion in the classical Lagrange case. The method is based on the combined use of the Bellman dynamic programming principle and the averaging method. The latter is used to solve the Hamilton-Jacobi-Bellman equation approximately, which permits synthesizing the controller. The proposed method for controller design can be used in many problems close to the problem of motion of the Lagrange top (the motion of a rigid body in the atmosphere, the motion of a rigid body fastened to a cable during deployment of an orbital cable system, etc.).
Kim, Young-Keun; Kim, Kyung-Soo
2014-10-01
Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.
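As a partial illustration, the sketch below recovers three of the six DOF (roll, pitch, and standoff) by fitting a plane to the laser beam points on the container face; recovering the full 6-DOF motion requires the scanners' 2D profile information, and all names here are illustrative.

```python
import numpy as np

def plane_pose(points):
    """Fit a plane z = a*x + b*y + c to laser beam points on a container
    face and return (roll, pitch, standoff) in the sensor frame.

    points: (N, 3) array, N >= 3, of beam intersection points (meters).
    """
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0])
    n /= np.linalg.norm(n)            # unit normal of the fitted plane
    roll = np.arctan2(n[1], n[2])     # tilt about the sensor x-axis
    pitch = -np.arctan2(n[0], n[2])   # tilt about the sensor y-axis
    return roll, pitch, c             # c = range along z at x = y = 0
```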
Dynamic visual attention: motion direction versus motion magnitude
NASA Astrophysics Data System (ADS)
Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.
2008-02-01
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity, and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspectives), showing the advantages and drawbacks of each method as well as its preferred domain of application.
Determination of the static friction coefficient from circular motion
NASA Astrophysics Data System (ADS)
Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.
2014-07-01
This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera at 240 frames per second, and the videos are analyzed using the Tracker video-analysis software, allowing the students to dynamically model the motion of the coin. The students obtain the static friction coefficient by equating the centripetal and maximum static friction forces. The experiment only requires simple and inexpensive materials. The dynamics of circular motion and static friction forces are difficult for many students to understand. The proposed laboratory exercise addresses these topics, which are relevant to the physics curriculum.
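The analysis reduces to equating maximum static friction with the centripetal force at the radius where the coin slips, giving mu_s = omega^2 * r / g; a worked example with illustrative numbers:

```python
import numpy as np

# At the slipping radius r_max, maximum static friction just supplies
# the centripetal force:  mu_s * m * g = m * omega**2 * r_max
omega = 2 * np.pi * (33.3 / 60.0)   # e.g., a 33.3 rpm turntable (assumption)
r_max = 0.12                        # radius where the coin slips (m, assumption)
g = 9.81
mu_s = omega ** 2 * r_max / g
print(f"mu_s = {mu_s:.2f}")         # ~0.15 for these illustrative numbers
```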
Some Thoughts on Stability in Nonlinear Periodic Focusing Systems [Addendum
DOE R&D Accomplishments Database
McMillan, Edwin M.
1968-03-29
Addendum to September 5, 1967 report with the same title and with the abstract: A brief discussion is given of the long-term stability of particle motions through periodic focusing structures containing lumped nonlinear elements. A method is presented whereby one can specify the nonlinear elements in such a way as to generate a variety of structures in which the motion has long-term stability.
High-Speed Video Analysis in a Conceptual Physics Class
NASA Astrophysics Data System (ADS)
Desbien, Dwain M.
2011-09-01
The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software [2,3]. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be easily captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper discusses using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class, in particular the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
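For the rocket-launch analysis, the digitized positions reduce to finite-difference kinematics; a minimal sketch, with the frame rate and smoothing left as assumptions:

```python
import numpy as np

def boost_acceleration(y_m, fps):
    """Velocity and acceleration from frame-by-frame positions.

    y_m: 1-D array of rocket altitudes (m) digitized from the video
    fps: capture rate of the high-speed camera (frames/s), e.g., 240
    """
    t = np.arange(len(y_m)) / fps
    v = np.gradient(y_m, t)   # central differences
    a = np.gradient(v, t)     # noisy; smooth y_m first in practice
    return t, v, a
```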
3D Surface Reconstruction and Volume Calculation of Rills
NASA Astrophysics Data System (ADS)
Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.
2015-04-01
We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only yields a huge time advantage, it also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each interval of 15 frames, estimating sharpness with a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions, and finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre and post models allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so they must be converted to real-world values via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low surface contrast, excessive motion blur), the sharpness algorithm leads to many more matching features. Hence the point densities of the 3D models are increased, which in turn improves the difference calculations.
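The frame-selection step can be sketched as follows, using variance of the Laplacian as one possible derivative-based sharpness metric (the paper does not specify its exact metric):

```python
import cv2

def sharpest_frames(video_path, interval=15):
    """Keep the sharpest frame out of every `interval` frames, scoring
    sharpness with the variance of the Laplacian."""
    cap = cv2.VideoCapture(video_path)
    best, keep, i = (-1.0, None), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()
        if score > best[0]:
            best = (score, frame)
        i += 1
        if i % interval == 0:
            keep.append(best[1])
            best = (-1.0, None)
    return keep   # frames to feed into VisualSfM
```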
Phenology cameras observing boreal ecosystems of Finland
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali
2016-04-01
Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow extracting key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We show results on the stability of camera-derived color signals and, based on that, discuss the applicability of cameras for monitoring time-dependent phenomena. We also present comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the possibility of cameras to monitor both above- and below-canopy phenology and snow.
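A common way such camera networks quantify canopy greenness is the green chromatic coordinate; a minimal sketch, with the image reader and region-of-interest handling as assumptions:

```python
from imageio.v3 import imread   # or any image reader

def green_chromatic_coordinate(path, roi):
    """Mean green chromatic coordinate GCC = G / (R + G + B) over a
    region of interest; a standard greenness index for phenology cameras.

    roi: (y0, y1, x0, x1) pixel bounds of the canopy region.
    """
    img = imread(path).astype(float)
    y0, y1, x0, x1 = roi
    r, g, b = (img[y0:y1, x0:x1, c].mean() for c in range(3))
    return g / (r + g + b)
```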
Applications of Phase-Based Motion Processing
NASA Technical Reports Server (NTRS)
Branch, Nicholas A.; Stewart, Eric C.
2018-01-01
Image pyramids provide useful information for determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique, and the necessary software, to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented, in both Python and MATLAB, for quickly identifying structural response through optical flow and phase visualization.
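The core idea, local phase shifts as a proxy for sub-pixel motion, can be sketched with a single oriented band-pass filter in the Fourier domain (a one-orientation stand-in for the full steerable pyramid; the band-pass cutoff is illustrative):

```python
import numpy as np

def phase_change_map(frame_a, frame_b):
    """Per-pixel phase change of high-frequency content between two
    grayscale frames (2D float arrays).

    Keeping only positive horizontal frequencies yields a complex
    quadrature signal, so phase differences encode horizontal motion.
    """
    Fa, Fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    h, w = frame_a.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    mask = (fx > 0) & (np.hypot(fy, fx) > 0.05)  # oriented high-pass band
    a = np.fft.ifft2(Fa * mask)
    b = np.fft.ifft2(Fb * mask)
    return np.angle(b * np.conj(a))  # phase change in radians

```

Displaying this map over time highlights the regions of largest motion without performing any magnification.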
NASA Technical Reports Server (NTRS)
Bainum, P. M.; Kumar, V. K.
1980-01-01
The dynamics and stability of large orbiting flexible beams, platforms, and dish-type structures oriented along the local horizontal are treated both analytically and numerically. It is assumed that such structures could be gravitationally stabilized by attaching a rigid, lightweight dumbbell at the center of mass by a spring-loaded hinge that could also provide viscous damping. For the beam, the small-amplitude in-plane pitch motion, the dumbbell librational motion, and the antisymmetric elastic modes are all coupled. The three-dimensional equations of motion for a circular flat plate and a shallow spherical shell in orbit with a two-degree-of-freedom gimballed dumbbell are also developed and show that only those elastic modes described by a single nodal diameter line are influenced by the dumbbell motion. Stability criteria are developed for all the examples, and a sensitivity study of the system response characteristics to the key system parameters is carried out.
NASA Astrophysics Data System (ADS)
Pavlov, A. I.; Maciejewski, A. J.
2003-08-01
We use the alternative MEGNO (Mean Exponential Growth of Nearby Orbits) technique developed by Cincotta and Simó to study the stability of orbital-rotational motions for plane oscillations and three-dimensional rotations. We present a detailed numerical-analytical study of a rigid body in the case where the proper rotation of the body is synchronized with its orbital motion as 3:2 (Mercurian-type synchronism). For plane rotations, the loss of stability of the periodic solution that corresponds to the 3:2 resonance is shown to be soft, which should be taken into account when estimating the upper limit for the ellipticity of Mercury. In studying stable and chaotic translational-rotational motions, we point out that the MEGNO criterion can be used effectively. This criterion gives a clear picture of the resonant structures and allows the calculations to be conveniently presented as MEGNO stability maps for multidimensional systems. We developed an appropriate software package.
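MEGNO is computed by integrating the variational (tangent) equations alongside the motion and accumulating a time-weighted average of the tangent vector's logarithmic growth rate; a minimal sketch for a toy one-degree-of-freedom pendulum (standing in for the full orbital-rotational equations, which the abstract does not reproduce):

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

def rhs(t, s):
    """Pendulum state (theta, omega) plus tangent vector (dth, dom)."""
    theta, omega, dth, dom = s
    return [omega, -np.sin(theta), dom, -np.cos(theta) * dth]

def mean_megno(theta0, omega0, t_max=2000.0, n=40000):
    sol = solve_ivp(rhs, (0.0, t_max), [theta0, omega0, 1e-8, 0.0],
                    t_eval=np.linspace(1e-3, t_max, n),
                    rtol=1e-10, atol=1e-12)
    th, om, dth, dom = sol.y
    t = sol.t
    # (delta . delta_dot) / (delta . delta), weighted by time s
    num = dth * dom + dom * (-np.cos(th) * dth)
    integrand = num / (dth**2 + dom**2) * t
    Y = 2.0 * cumulative_trapezoid(integrand, t, initial=0.0) / t
    Ybar = cumulative_trapezoid(Y, t, initial=0.0) / t
    return Ybar[-1]  # oscillates near 2 for regular motion, grows for chaos
```

A MEGNO stability map is then just this value evaluated over a grid of initial conditions or system parameters.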
Analysis of Brown camera distortion model
NASA Astrophysics Data System (ADS)
Nowakowski, Artur; Skarbek, Władysław
2013-10-01
Contemporary image acquisition devices introduce optical distortion into the image. This results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually described by the Brown distortion model, whose parameters can be estimated as part of the camera calibration task. In this paper we describe the original model and its dependencies, and analyze the orthogonality, with regard to radius, of its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of the distortion parameter estimates is evaluated.
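For reference, the Brown model combines a radial polynomial with a decentering (tangential) term; a minimal sketch in the parameterization OpenCV also uses for its calibration routines:

```python
import numpy as np

def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Apply the Brown distortion model to normalized image coordinates.

    (k1, k2, k3): radial coefficients; (p1, p2): decentering coefficients.
    """
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```

In OpenCV these coefficients correspond to the distortion vector (k1, k2, p1, p2, k3) returned by cv2.calibrateCamera.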
NASA Technical Reports Server (NTRS)
Sternfield, Leonard
1951-01-01
A theoretical investigation has been made to determine the effect of nonlinear stability derivatives on the lateral stability of an airplane. Motions were calculated on the assumption that the directional-stability and damping-in-yawing derivatives are functions of the angle of sideslip. The application of the Laplace transform to the calculation of airplane motions when certain types of nonlinear derivatives are present is described in detail. The types of nonlinearities assumed correspond to the condition in which the values of the directional-stability and damping-in-yawing derivatives are zero for small angles of sideslip.
2D/3D Visual Tracker for Rover Mast
NASA Technical Reports Server (NTRS)
Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria
2006-01-01
A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation result indicates the appropriate match. The program could serve as a core for building application programs for systems that require coordination of vision and robotic motion.
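The depth-scaled matching step can be sketched in a few lines, assuming OpenCV (the flight program's exact scaling and search logic are not published in this summary):

```python
import cv2

def scaled_template_match(image, template, template_depth, candidate_depth):
    """Correlation-based template matching with depth-ratio scaling."""
    scale = template_depth / candidate_depth  # nearer target -> larger template
    t = cv2.resize(template, None, fx=scale, fy=scale,
                   interpolation=cv2.INTER_LINEAR)
    response = cv2.matchTemplate(image, t, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(response)
    return best_loc, best_score  # top-left corner and score of the best match
```

Running this over depths sampled across the search window reproduces the "repeat over the entire search window, keep the best correlation" behavior described above.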
Video pulse rate variability analysis in stationary and motion conditions.
Melchor Rodríguez, Angel; Ramos-Castro, J
2018-01-29
In the last few years, some studies have measured heart rate (HR) or heart rate variability (HRV) parameters using a video camera. This technique focuses on measuring the small changes in skin colour caused by blood perfusion. To date, most of these works have obtained HRV parameters in stationary conditions, and there are practically no studies that obtain these parameters in motion scenarios with an in-depth statistical analysis. In this study, a video pulse rate variability (PRV) analysis is conducted by measuring the pulse-to-pulse (PP) intervals in stationary and motion conditions. Firstly, given the importance of the sampling rate in a PRV analysis and the low frame rate of commercial cameras, we analyzed two camera models to evaluate their measurement performance. We propose a selective tracking method using the Viola-Jones and KLT algorithms, with the aim of carrying out a robust video PRV analysis in stationary and motion conditions. Data and results of the proposed method are contrasted with those reported in the state of the art. The webcam achieved better results in the performance analysis of the video cameras. In stationary conditions, high correlation values were obtained for the PRV parameters, with results above 0.9. The PP time series achieved an RMSE (mean ± standard deviation) of 19.45 ± 5.52 ms (1.70 ± 0.75 bpm). In the motion analysis, most of the PRV parameters also achieved good correlation results, but with lower values than in stationary conditions. The PP time series presented an RMSE of 21.56 ± 6.41 ms (1.79 ± 0.63 bpm). The statistical analysis showed good agreement between the reference system and the proposed method. In stationary conditions, the results for the PRV parameters were improved by our method in comparison with data reported in related works. An overall comparative analysis of PRV parameters in motion conditions was more limited due to the lack of studies or studies containing insufficient data analysis. Based on the results, the proposed method could provide a low-cost, contactless and reliable alternative for measuring HR or PRV parameters in non-clinical environments.
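The Viola-Jones plus KLT combination is available off the shelf in OpenCV; a minimal sketch of such selective tracking (the cascade file and parameters are illustrative defaults, not the authors' configuration):

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def init_tracking(gray):
    """Detect the face once (Viola-Jones), then seed KLT corner features."""
    x, y, w, h = face_cascade.detectMultiScale(gray, 1.3, 5)[0]
    pts = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w], maxCorners=50,
                                  qualityLevel=0.01, minDistance=5)
    return pts + np.array([[x, y]], dtype=np.float32)  # full-frame coords

def track(prev_gray, gray, pts):
    """Propagate feature points to the next frame with KLT optical flow."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    return new_pts[status.flatten() == 1]
```

The mean colour of the skin region bounded by the tracked points then provides the photoplethysmographic signal from which the PP intervals are extracted.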
Chinnadurai, Sathya K; Spodnick, Gary; Degernes, Laurel; DeVoe, Ryan S; Marcellin-Little, Denis J
2009-12-01
An extracapsular stabilization technique was used to repair cruciate ligament ruptures in a trumpeter hornbill (Bycanistes bucinator) and an African grey parrot (Psittacus erithacus). The hornbill demonstrated cranial drawer motion and severe rotational instability of the stifle from ruptures of the cranial and caudal cruciate ligaments and the stifle joint capsule. The luxation was reduced, and the fibula was cranially transposed in relation to the tibiotarsus and anchored with 2 positive-profile threaded acrylic pins. A lateral extracapsular stabilization was then performed. The African grey parrot had a traumatic stifle luxation, and an open reduction and a lateral extracapsular stabilization were performed. Both birds regained function of the affected leg by 1 month after surgery. Extracapsular stabilization allows motion of the stifle joint to be maintained during the postoperative recovery period, an advantage over rigid stabilization. Maintaining motion in the stifle joint facilitates physical therapy and can aid in full recovery after avian stifle injuries.
Mageswaran, Prasath; Techy, Fernando; Colbrunn, Robb W; Bonner, Tara F; McLain, Robert F
2012-09-01
The object of this study was to evaluate the effect of hybrid dynamic stabilization on adjacent levels of the lumbar spine. Seven human spine specimens from T-12 to the sacrum were used. The following conditions were implemented: 1) intact spine; 2) fusion of L4-5 with bilateral pedicle screws and titanium rods; and 3) supplementation of the L4-5 fusion with pedicle screw dynamic stabilization constructs at L3-4, with the purpose of protecting the L3-4 level from excessive range of motion (ROM) and of creating a smoother motion transition to the rest of the lumbar spine. An industrial robot was used to apply a continuous pure moment (± 2 Nm) in flexion-extension with and without a follower load, in lateral bending, and in axial rotation. Intersegmental rotations of the fused, dynamically stabilized, and adjacent levels were measured and compared. In flexion-extension only, the rigid instrumentation at L4-5 caused a 78% decrease in the segment's ROM when compared with the intact specimen. To compensate, it caused an increase in motion at the adjacent levels L1-2 (45.6%) and L2-3 (23.2%) only. The placement of the dynamic construct at L3-4 decreased the operated level's ROM by 80.4% (stability similar to that of the fusion at L4-5) when compared with the intact specimen, and caused a significant increase in motion at all tested adjacent levels. In flexion-extension with a follower load, instrumentation at L4-5 affected only a subadjacent level, L5-sacrum (52.0%), while causing a reduction in motion at the operated level (L4-5, -76.4%). The dynamic construct caused a significant increase in motion at the adjacent levels T12-L1 (44.9%), L1-2 (57.3%), and L5-sacrum (83.9%), while motion at the operated level (L3-4) was reduced by 76.7%. In lateral bending, instrumentation at L4-5 increased motion only at T12-L1 (22.8%). The dynamic construct at L3-4 caused an increase in motion at T12-L1 (69.9%), L1-2 (59.4%), L2-3 (44.7%), and L5-sacrum (43.7%). In axial rotation, only the placement of the dynamic construct at L3-4 caused a significant increase in motion of the adjacent levels L2-3 (25.1%) and L5-sacrum (31.4%). The dynamic stabilization system displayed stability characteristics similar to those of a solid, all-metal construct. Its addition of the supra-adjacent level (L3-4) to the fusion (L4-5) did protect the adjacent level from excessive motion. However, it essentially transformed a 1-level lumbar fusion into a 2-level lumbar fusion, with exponential transfer of motion to the fewer remaining discs.
Crosnier, Emilie A; Keogh, Patrick S; Miles, Anthony W
2016-08-01
The hip joint is subjected to cyclic loading and motion during activities of daily living, and this can induce micromotions at the bone-implant interface of cementless total hip replacements. Initial stability has been identified as a crucial factor in achieving osseointegration and long-term survival. While fixation of femoral stems achieves good clinical results, the fixation of acetabular components remains a challenge. In vitro methods for assessing cup stability keep the hip joint in a fixed position, overlooking the effect of hip motion. The effect of hip motion on cup micromotion was investigated using a hip motion simulator replicating hip flexion-extension and a six-degrees-of-freedom measurement system. The results show an increase in cup micromotion under dynamic hip motion compared to static flexion. This highlights the need to incorporate hip motion and measure all degrees of freedom when assessing cup micromotion. In addition, comparison of two press-fit acetabular cups with different surface coatings suggested similar stability between the two cups. This new method provides a basis for a more representative protocol for future pre-clinical evaluation of different cup designs. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming
2018-06-01
This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one, called the tracking camera, is used for tracking the positions of the MBVS, and the other, called the working camera, is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the mobility of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using one working camera avoids a drawback of multi-camera networks, namely that variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of the multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.
Technical Note: Kinect V2 surface filtering during gantry motion for radiotherapy applications.
Nazir, Souha; Rihana, Sandy; Visvikis, Dimitris; Fayad, Hadi
2018-04-01
In radiotherapy, the Kinect V2 camera has recently received a lot of attention for many clinical applications, including patient positioning, respiratory motion tracking, and collision detection during the radiotherapy delivery phase. However, issues associated with such applications relate to reflections from some materials and surfaces, which generate an offset in the depth measurements, especially during gantry motion. This phenomenon appears in particular when the collimator surface is observed by the camera, resulting in erroneous depth measurements, not only in the Kinect surfaces themselves but also as a large peak when extracting a 1D respiratory signal from these data. In this paper, we propose filtering techniques to reduce the noise effect in the Kinect-based 1D respiratory signal, using a trend-removal filter, and in the associated 2D surfaces, using a temporal median filter. The filtering process was validated using a phantom to simulate a patient undergoing radiotherapy treatment while providing a ground truth. Our results indicate a better correlation between the reference respiratory signal and its corresponding filtered signal (correlation coefficient of 0.76) than for the nonfiltered signal (correlation coefficient of 0.13). Furthermore, the surface filtering results show a decrease in the mean square distance error (85%) between the reference and the measured point clouds. This work shows a significant noise compensation and surface restitution after surface filtering and therefore a potential use of the Kinect V2 camera for different radiotherapy-based applications, such as respiratory tracking and collision detection. © 2018 American Association of Physicists in Medicine.
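Both filters are standard; a minimal sketch with NumPy/SciPy (a linear detrend stands in for the paper's trend-removal filter, which the abstract does not specify; the window size is illustrative):

```python
import numpy as np
from scipy.signal import detrend

def median_filter_surfaces(depth_frames, window=5):
    """Per-pixel temporal median over a sliding window of depth frames.

    depth_frames: array of shape (T, H, W). Reflection-induced outliers
    are transient, so a temporal median suppresses them.
    """
    T = depth_frames.shape[0]
    out = np.empty_like(depth_frames)
    for t in range(T):
        lo, hi = max(0, t - window // 2), min(T, t + window // 2 + 1)
        out[t] = np.median(depth_frames[lo:hi], axis=0)
    return out

def remove_trend(respiratory_signal):
    """Trend removal on the 1D respiratory trace (linear detrend stand-in)."""
    return detrend(respiratory_signal)
```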
High-throughput microfluidic line scan imaging for cytological characterization
NASA Astrophysics Data System (ADS)
Hutcheson, Joshua A.; Powless, Amy J.; Majid, Aneeka A.; Claycomb, Adair; Fritsch, Ingrid; Balachandran, Kartik; Muldoon, Timothy J.
2015-03-01
Imaging cells in a microfluidic chamber with an area-scan camera is difficult: motion blur and data loss during frame readout cause discontinuities in data acquisition as cells move at relatively high speed through the chamber. We have developed a method to continuously acquire high-resolution images of cells in motion through a microfluidic chamber using a high-speed line-scan camera. The sensor acquires images line by line in order to continuously image moving objects without motion blur. The optical setup comprises an epi-illuminated microscope with a 40X, 1.4 NA oil-immersion objective and a 150 mm tube lens focused on a microfluidic channel. Samples containing suspended cells fluorescently stained with 0.01% (w/v) proflavine in saline are introduced into the microfluidic chamber via a syringe pump; illumination is provided by a blue LED (455 nm). Images were taken of samples at the focal plane using an ELiiXA+ 8k/4k monochrome line-scan camera at a line rate of up to 40 kHz. The system's line rate and the fluid velocity are tightly controlled to reduce image distortion and are validated using fluorescent microspheres. Image acquisition was controlled via MATLAB's Image Acquisition toolbox. Data sets comprise discrete images of every detectable cell, which may subsequently be mined for morphological statistics and definable features by a custom texture analysis algorithm. This high-throughput screening method, comparable to cell counting by flow cytometry, provided efficient examination, including counting, classification, and differentiation of saliva, blood, and cultured human cancer cells.
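Distortion-free line-scan imaging requires the line rate to match the image-plane speed of the cells, so that one line period corresponds to one pixel of object motion; a back-of-the-envelope sketch (the pixel pitch and flow speed here are illustrative, not the paper's values):

```python
magnification = 40.0       # objective magnification
pixel_pitch_um = 7.0       # sensor pixel size (illustrative)
flow_speed_um_s = 7000.0   # object-plane speed of the cells (illustrative)

# Object-plane distance swept per line so that each line maps to one pixel:
sample_um = pixel_pitch_um / magnification        # 0.175 um per line
line_rate_hz = flow_speed_um_s / sample_um        # 40,000 lines per second
print(f"required line rate: {line_rate_hz / 1e3:.1f} kHz")
```

With these illustrative numbers the required rate is 40 kHz, the camera's quoted maximum; a mismatch between line rate and flow speed stretches or compresses the reconstructed cells along the flow axis.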
Calibration Procedures in Mid Format Camera Setups
NASA Astrophysics Data System (ADS)
Pivnicka, F.; Kemper, G.; Geissler, S.
2012-07-01
A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow as well, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform, and the specific properties of mid-format cameras make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera next to the IMU, two lever arms have to be measured to millimetre accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need to use rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; for these, a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted next to the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is then floating. We therefore have to deal with an additional data stream: the movements of the stabilizer, used to correct the floating lever-arm distances. If the post-processing of the GPS/IMU data, taking the floating levers into account, delivers the expected result, the lever arms between IMU and camera can be applied. However, there remains a misalignment (boresight angle) that must be evaluated by a photogrammetric process using advanced tools, e.g. in Bingo. Once all these parameters have been determined, the system is ready for projects without, or with only a few, ground control points. But what effect does directly applying the achieved direct orientation values have on the photogrammetric process, compared with an aerial triangulation based on proper tie-point matching? The paper aims to show the steps to be taken by potential users and gives a quality estimate of the importance and influence of the various calibration and adjustment steps.
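The lever-arm bookkeeping reduces to rotating body-frame offsets into the mapping frame with the post-processed attitude; a minimal sketch (frame names and argument conventions are illustrative):

```python
import numpy as np

def camera_projection_centre(antenna_pos, R_body_to_map,
                             lever_antenna_to_imu, lever_imu_to_camera):
    """Shift a GPS antenna position to the camera projection centre.

    antenna_pos: antenna coordinates in the mapping frame.
    R_body_to_map: 3x3 attitude matrix from GPS/IMU post-processing.
    Lever arms are measured in the body (IMU) frame, in metres.
    """
    lever = np.asarray(lever_antenna_to_imu) + np.asarray(lever_imu_to_camera)
    return np.asarray(antenna_pos) + R_body_to_map @ lever
```

With a floating stabilizer, the antenna-to-IMU lever (or the attitude matrix) must additionally be corrected at each epoch from the recorded stabilizer movements before this shift is applied.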
Patient training in respiratory-gated radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kini, Vijay R.; Vedam, Subrahmanya S.; Keall, Paul J.
2003-03-31
Respiratory gating is used to counter the effects of organ motion during radiotherapy for chest tumors. The effects of variations in patient breathing patterns during a single treatment and from day to day are unknown. We evaluated the feasibility of using patient training tools and their effect on the regularity and reproducibility of the breathing cycle during respiratory-gated radiotherapy. To monitor respiratory patterns, we used a component of a commercially available respiratory-gated radiotherapy system (Real-Time Position Management [RPM] System, Varian Oncology Systems, Palo Alto, CA 94304). This passive-marker video tracking system consists of reflective markers placed on the patient's chest or abdomen, which are detected by a wall-mounted video camera. Software installed on a PC interfaced to this camera detects the marker motion digitally and records it. The marker position as a function of time serves as the motion signal that may be used to trigger imaging or treatment. The training tools used were audio prompting and visual feedback, with free breathing as a control. The audio prompting method used instructions to 'breathe in' or 'breathe out' at periodic intervals deduced from the patients' own breathing patterns. In the visual feedback method, patients were shown a real-time trace of their abdominal wall motion due to breathing and were asked to maintain a constant amplitude of motion. Motion traces of the abdominal wall were recorded for each patient for the various maneuvers. Free breathing showed a variable amplitude and frequency. Audio prompting resulted in a reproducible frequency; however, the variability and magnitude of the amplitude increased. Visual feedback gave better control over the amplitude but showed minor variations in frequency. We concluded that training improves the reproducibility of the amplitude and frequency of patient breathing cycles. This may increase the accuracy of respiratory-gated radiation therapy.
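The recorded marker trace gates the beam when the signal stays inside a preset amplitude window; a minimal sketch of such amplitude-based gating (the window limits are illustrative; the RPM system's internal logic is not described here):

```python
import numpy as np

def gating_mask(marker_amplitude, low, high):
    """Beam-on mask: True where the marker trace is inside the gating window."""
    a = np.asarray(marker_amplitude)
    return (a >= low) & (a <= high)

# Duty cycle: the fraction of the breathing cycle available for treatment.
# duty = gating_mask(trace, low=0.2, high=0.5).mean()
```

More regular breathing, as produced by the training tools above, keeps the trace inside the window more predictably and thus raises the achievable duty cycle.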
Motion video analysis using planar parallax
NASA Astrophysics Data System (ADS)
Sawhney, Harpreet S.
1994-04-01
Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis: for instance, independent object motion when the camera itself is moving, or figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane plus the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene, which can simplify motion-based segmentation. This work is part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.
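The plane-plus-parallax decomposition can be sketched in two steps: register the reference plane between frames with a homography, then treat the residual motion as parallax; a minimal sketch with OpenCV, assuming grayscale frames and known point matches on the plane:

```python
import cv2
import numpy as np

def plane_plus_parallax(frame_a, frame_b, plane_pts_a, plane_pts_b):
    """Homography of the reference plane plus residual (parallax) flow."""
    H, _ = cv2.findHomography(plane_pts_a, plane_pts_b, cv2.RANSAC)
    h, w = frame_b.shape[:2]
    warped = cv2.warpPerspective(frame_a, H, (w, h))
    # After plane registration, the remaining motion is the parallax of
    # the non-planar scene components (plus noise on the plane itself).
    flow = cv2.calcOpticalFlowFarneback(warped, frame_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return H, flow
```

Pixels with large residual flow are either off the reference plane or moving independently, which is exactly the cue used for the event detection described above.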
Hummingbirds control hovering flight by stabilizing visual motion.
Goller, Benjamin; Altshuler, Douglas L
2014-12-23
Relatively little is known about how sensory information is used for controlling flight in birds. A powerful method is to immerse an animal in a dynamic virtual reality environment to examine behavioral responses. Here, we investigated the role of vision during free-flight hovering in hummingbirds to determine how optic flow (image movement across the retina) is used to control body position. We filmed hummingbirds hovering in front of a projection screen with the prediction that projecting moving patterns would disrupt hovering stability but stationary patterns would allow the hummingbird to stabilize position. When hovering in the presence of moving gratings and spirals, hummingbirds lost positional stability and responded to the specific orientation of the moving visual stimulus. There was no loss of stability with stationary versions of the same stimulus patterns. When exposed to a single stimulus many times or to a weakened stimulus that combined a moving spiral with a stationary checkerboard, the response to looming motion declined. However, even minimal visual motion was sufficient to cause a loss of positional stability despite prominent stationary features. Collectively, these experiments demonstrate that hummingbirds control hovering position by stabilizing motions in their visual field. The high sensitivity and persistence of this disruptive response is surprising, given that the hummingbird brain is highly specialized for sensory processing and spatial mapping, providing other potential mechanisms for controlling position.
NASA Technical Reports Server (NTRS)
Curfman, Howard J., Jr.
1955-01-01
The effects of two nonlinear stability derivatives on the longitudinal motions of an aircraft have been investigated through theoretical and analog results. Nonlinear variations of the pitching-moment and lift coefficients with angle of attack were considered. Analog results of aircraft motions in response to step elevator deflections and to the action of proportional control systems are presented. The occurrence of continuous hunting oscillations was predicted and demonstrated for the attitude stabilization system with proportional control for certain nonlinear pitching-moment variations and autopilot adjustments.
Determination of Global Stability of the Slosh Motion in a Spacecraft via Numerical Experiment
NASA Astrophysics Data System (ADS)
Kang, Ja-Young
2003-12-01
The global stability of the attitude motion of a spin-stabilized space vehicle is investigated by numerical experiment. In a previous study, a stationary solution and a particular resonant condition for a given model were found by analytical methods, but that analysis failed to represent the system stability for parameter values near and away from the stationary points. Accordingly, as an extension of the previous work, this study performs numerical experiments to investigate the stability of the system across the parameter space and determines the stable and unstable regions of the system's design parameters.
NASA Technical Reports Server (NTRS)
Hui, W. H.
1985-01-01
Bifurcation theory is used to analyze the nonlinear dynamic stability characteristics of an aircraft in single-degree-of-freedom motion. The requisite moment of the aerodynamic forces in the equations of motion is shown to be representable in a form equivalent to the response to finite-amplitude oscillations, and it is shown how this information can be deduced from the case of infinitesimal-amplitude oscillations. The bifurcation theory analysis reveals that when the bifurcation parameter is increased beyond a critical value at which the aerodynamic damping vanishes, new solutions representing finite-amplitude periodic motions bifurcate from the previously stable steady motion. The sign of a simple criterion, cast in terms of aerodynamic properties, determines whether the bifurcating solutions are stable or unstable. For the pitching motion of flat-plate airfoils flying at supersonic/hypersonic speed, and for the oscillation of flaps at transonic speed, the bifurcation is subcritical, implying either that exchanges of stability between steady and periodic motion are accompanied by hysteresis phenomena, or that potentially large aperiodic departures from steady motion may develop.
STABILITY OF A CABLE-MOORED AIRSHIP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, M.D.
1959-03-01
Equations of motion for the longitudinal and directional stability of a cable-moored airship were analyzed on an analog computer by subjecting the configuration to a horizontal ramp-type gust and observing the induced motion. The results indicate that the proposed configuration is dynamically stable (essentially overdamped), and that it possesses a sufficiently high degree of stability to permit a 20-percent reduction in the fin planform area governing stability in the longitudinal direction. However, this reduction cannot be accomplished by decreasing the appropriate area of the proposed three-fin tail, because such a decrease causes the airship to become directionally unstable. Aerodynamically, the desired area reduction can best be effected by the use of a four-fin tail. Motions are calculated for various modifications to the proposed configuration.
Attitude dynamics of spin-stabilized satellites with flexible appendages
NASA Technical Reports Server (NTRS)
Renard, M. L.
1973-01-01
Equations of motion and computer programs have been developed for analyzing the motion of a spin-stabilized spacecraft having long, flexible appendages. Stability charts were derived, and can be redrawn with the desired accuracy for any particular set of design parameters. Simulation graphs of the variables of interest are readily obtainable online using the program FLEXAT. Finally, applications to actual satellites, such as UK-4 and IMP-1, have been considered.
Investigating plasma viscosity with fast framing photography in the ZaP-HD Flow Z-Pinch experiment
NASA Astrophysics Data System (ADS)
Weed, Jonathan Robert
The ZaP-HD Flow Z-Pinch experiment investigates the stabilizing effect of sheared axial flows while scaling toward a high-energy-density laboratory plasma (HEDLP > 100 GPa). Stabilizing flows may persist until viscous forces dissipate a sheared flow profile. Plasma viscosity is investigated by measuring scale lengths in turbulence intentionally introduced in the plasma flow. A boron nitride turbulence-tripping probe excites small scale length turbulence in the plasma, and fast framing optical cameras are used to study time-evolved turbulent structures and viscous dissipation. A Hadland Imacon 790 fast framing camera is modified for digital image capture, but features insufficient resolution to study turbulent structures. A Shimadzu HPV-X camera captures the evolution of turbulent structures with great spatial and temporal resolution, but is unable to resolve the anticipated Kolmogorov scale in ZaP-HD as predicted by a simplified pinch model.
Motion Imagery and Robotics Application (MIRA)
NASA Technical Reports Server (NTRS)
Martinez, Lindolfo; Rich, Thomas
2011-01-01
Objectives include: I. Prototype a camera service leveraging the CCSDS integrated protocol stack (MIRA/SM&C/AMS/DTN): a) CCSDS MIRA Service (new). b) Spacecraft Monitor and Control (SM&C). c) Asynchronous Messaging Service (AMS). d) Delay/Disruption Tolerant Networking (DTN). II. Additional MIRA objectives: a) Demo of camera control through ISS using the CCSDS protocol stack (Berlin, May 2011). b) Verify that the CCSDS standards stack can provide end-to-end space camera services across ground and space environments. c) Test interoperability of various CCSDS protocol standards. d) Identify overlaps in the design and implementations of the CCSDS protocol standards. e) Identify software incompatibilities in the CCSDS stack interfaces. f) Provide redlines to the SM&C, AMS, and DTN working groups. g) Enable the CCSDS MIRA service for potential use in ISS Kibo camera commanding. h) Assist in the long-term evolution of this entire group of CCSDS standards to TRL 6 or greater.
PRIMAS: a real-time 3D motion-analysis system
NASA Astrophysics Data System (ADS)
Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans
1994-03-01
The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
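Given calibrated projection matrices for two cameras and matched 2D marker detections, the 3D reconstruction step is a linear triangulation; a minimal sketch with OpenCV (the PRIMAS system's own algorithm is not spelled out in the abstract):

```python
import cv2
import numpy as np

def reconstruct_markers(P1, P2, pts1, pts2):
    """Triangulate matched marker centroids from two calibrated cameras.

    P1, P2: 3x4 projection matrices; pts1, pts2: 2xN arrays of pixel coords.
    """
    X_h = cv2.triangulatePoints(P1, P2, pts1.astype(np.float64),
                                pts2.astype(np.float64))
    return (X_h[:3] / X_h[3]).T  # N x 3 marker positions
```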
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision, with broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis, and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction, and stereo matching. In the binocular stereo camera calibration module, the internal parameters of a single camera are obtained using the checkerboard method of Zhang Zhengyou. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between matching points and 3D object points can be built using the calibrated camera parameters, which yields the 3D information.
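SGBM is available directly in OpenCV; a minimal sketch for a rectified stereo pair (file names and parameter values are illustrative defaults, not the paper's settings):

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-Global Block Matching on a rectified pair.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                             P1=8 * 5 * 5, P2=32 * 5 * 5,
                             uniquenessRatio=10, speckleWindowSize=100,
                             speckleRange=2)
disparity = sgbm.compute(left, right).astype("float32") / 16.0  # in pixels

# With the Zhang calibration, depth = focal_length * baseline / disparity.
```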
A novel validation and calibration method for motion capture systems based on micro-triangulation.
Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M
2018-06-06
Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of engineering surveying reference measurements of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, caused by a scaling error, was reduced to 0.77 mm, while the correlation of the errors with distance from the origin fell from 0.855 to 0.209. A simpler but less accurate absolute-accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in a scaling compensation similar to that of the surveying method or of direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type that has not been, and cannot be, studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
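A scaling error of this kind can be compensated with a single least-squares scale factor between the camera-system coordinates and the reference coordinates; a minimal sketch of the generic correction (not necessarily the authors' exact procedure):

```python
import numpy as np

def scale_compensation(measured, reference):
    """Best uniform scale s minimizing ||s * measured - reference||^2.

    Both (N, 3) point sets must be expressed relative to a common origin.
    """
    m, r = np.asarray(measured), np.asarray(reference)
    s = np.sum(m * r) / np.sum(m * m)
    rmse = np.sqrt(np.mean(np.sum((s * m - r) ** 2, axis=1)))
    return s, rmse
```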
Piecewise-Planar StereoScan: Sequential Structure and Motion using Plane Primitives.
Raposo, Carolina; Antunes, Michel; Barreto, Joao P.
2017-08-09
The article describes a pipeline that receives as input a sequence of stereo images, and outputs the camera motion and a Piecewise-Planar Reconstruction (PPR) of the scene. The pipeline, named Piecewise-Planar StereoScan (PPSS), works as follows: the planes in the scene are detected for each stereo view using semi-dense depth estimation; the relative pose is computed by a new closed-form minimal algorithm that only uses point correspondences whenever plane detections do not fully constrain the motion; the camera motion and the PPR are jointly refined by alternating between discrete optimization and continuous bundle adjustment; and, finally, the detected 3D planes are segmented in images using a new framework that handles low texture and visibility issues. PPSS is extensively validated in indoor and outdoor datasets, and benchmarked against two popular point-based SfM pipelines. The experiments confirm that plane-based visual odometry is resilient to situations of small image overlap, poor texture, specularity, and perceptual aliasing where the fast LIBVISO2 pipeline fails. The comparison against VisualSfM+CMVS/PMVS shows that, for a similar computational complexity, PPSS is more accurate and provides much more compelling and visually pleasant 3D models. These results strongly suggest that plane primitives are an advantageous alternative to point correspondences for applications of SfM and 3D reconstruction in man-made environments.
Determining wildlife use of wildlife crossing structures under different scenarios.
DOT National Transportation Integrated Search
2012-05-01
This research evaluated Utah's wildlife crossing structures to help UDOT and the Utah Division of Wildlife Resources assess crossing efficacy. In this study, remote motion-sensed cameras were used at 14 designated wildlife crossing culverts and bri...
An investigation into the use of road drainage structures by wildlife in Maryland.
DOT National Transportation Integrated Search
2011-08-01
The research team documented culvert use by 57 species of vertebrates with both infrared motion-detecting digital game cameras and visual sightings. Species affiliations with culvert characteristics were analyzed using χ² statistics, Canonical ...
Hand-held photomicroscopy system
NASA Technical Reports Server (NTRS)
Zabower, H. R.
1972-01-01
Photomicroscopy system, with simple optics and any standard microscope objective, is used with any type of motion picture, still, or television camera system. Device performs well under difficult environmental conditions and applies to work in ecological studies, field hospitals, and geological surveys.
Iceland: Eyjafjallajökull Volcano
Atmospheric Science Data Center
2013-04-17
... causes motion of the plume features between camera views. A quantitative computer analysis is necessary to separate out wind and height ... The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center in Hampton, VA.
2011-01-01
Background Orthopaedic research projects focusing on small displacements in a small measurement volume require a radiation-free, three-dimensional motion analysis system. A stereophotogrammetric motion analysis system can track wireless, small, lightweight markers attached to the objects, so that the disturbance of the measured objects by the marker tracking is kept to a minimum. The purpose of this study was to develop and evaluate a compact, non-position-fixed motion analysis system configured for a small measurement volume and able to zoom while tracking small, round, flat markers with respect to a fiducial marker used for camera pose estimation. Methods The system consisted of two web cameras and the fiducial marker placed in front of them. The markers to be tracked were black circles on a white background. The algorithm to detect the centre of the projected circle on the image plane is described and applied. In order to evaluate the accuracy (mean measurement error) and precision (standard deviation of the measurement error) of the optical measurement system, two experiments were performed: 1) inter-marker distance measurement and 2) marker displacement measurement. Results In the first experiment, 10 mm distances were measured with a total accuracy of 0.0086 mm and a precision of ± 0.1002 mm. In the second experiment, translations from 0.5 mm to 5 mm were measured with a total accuracy of 0.0038 mm and a precision of ± 0.0461 mm. Rotations of 2.25° were measured with an overall accuracy of 0.058° and a precision of ± 0.172°. Conclusions The description of this non-proprietary measurement device with very good levels of accuracy and precision may provide opportunities for new, cost-effective applications of stereophotogrammetric analysis in musculoskeletal research projects focusing on the kinematics of small displacements in a small measurement volume. PMID:21284867
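The circle-centre detection step can be sketched with standard tools: threshold the dark markers, extract contours, and fit ellipses, taking each ellipse centre as the marker position; a hedged sketch assuming OpenCV and high-contrast black-on-white markers (the paper derives a more careful estimate of the projected circle's centre):

```python
import cv2

def marker_centres(gray):
    """Approximate image-plane centres of black circular markers.

    Note: the centre of a projected circle is not exactly the fitted
    ellipse centre; the correction is omitted in this sketch.
    """
    _, bw = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    centres = []
    for c in contours:
        if len(c) >= 5 and cv2.contourArea(c) > 20:  # fitEllipse needs 5+ pts
            (cx, cy), _, _ = cv2.fitEllipse(c)
            centres.append((cx, cy))
    return centres
```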
Miura, Hideharu; Ozawa, Shuichi; Hayata, Masahiro; Tsuda, Shintaro; Yamada, Kiyoshi; Nagata, Yasushi
2016-09-08
We proposed a simple visual method for evaluating the dynamic tumor tracking (DTT) accuracy of a gimbal mechanism using a light field. A single photon beam was set with a field size of 30 × 30 mm2 at a gantry angle of 90°. The center of a cube phantom was set up at the isocenter of a motion table, and 4D modeling was performed based on the tumor and infrared (IR) marker motion. After 4D modeling, the cube phantom was replaced with a sheet of paper, which was placed perpendicularly, and a light field was projected on the sheet of paper. The light field was recorded using a web camera in a treatment room that was as dark as possible. Calculated images from each image obtained using the camera were summed to compose a total summation image. Sinusoidal motion sequences were produced by moving the phantom with a fixed amplitude of 20 mm and different breathing periods of 2, 4, 6, and 8 s. The light field was projected on the sheet of paper under three conditions: with the moving phantom and DTT based on the motion of the phantom, with the moving phantom and non-DTT, and with a stationary phantom for comparison. The tracking errors measured using the light field were 1.12 ± 0.72, 0.31 ± 0.19, 0.27 ± 0.12, and 0.15 ± 0.09 mm for breathing periods of 2, 4, 6, and 8 s, respectively. The tracking accuracy showed a dependence on the breathing period. We proposed a simple quality assurance (QA) process for the tracking accuracy of a gimbal mechanism system using a light field and web camera. Our method can assess the tracking accuracy using a light field without irradiation and clearly visualize distributions like film dosimetry. © 2016 The Authors.
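The summation image is easy to reproduce: every webcam frame is accumulated so the travelling field edges trace out the tracking envelope; a minimal sketch with OpenCV (the file name is illustrative):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("light_field_tracking.avi")  # illustrative file name
total = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
    total = gray if total is None else total + gray  # accumulate each frame
cap.release()

# Normalize the accumulated image for display, like a film-dosimetry map.
total = cv2.normalize(total, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("summation_image.png", total)
```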
Restoration of motion blurred images
NASA Astrophysics Data System (ADS)
Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.
2017-08-01
Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance imperfections of imaging systems, quantization errors, atmospheric turbulence, or relative motion between the camera and objects. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of the captured image in order to first estimate the degradation parameters and then restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of the accuracy of image restoration given by an objective criterion.
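Once the blur length and angle are estimated, a typical linear restoration filter is Wiener deconvolution with a linear-motion point spread function; a minimal sketch (the constant noise-to-signal ratio k is illustrative, and the paper's estimation stage is not reproduced here):

```python
import numpy as np

def motion_psf(length, angle_deg, size):
    """Linear motion-blur PSF: a unit-mass line segment (length < size)."""
    psf = np.zeros((size, size))
    c, a = size // 2, np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2.0, length / 2.0, int(4 * length)):
        psf[int(round(c + t * np.sin(a))), int(round(c + t * np.cos(a)))] = 1.0
    return psf / psf.sum()

def wiener_restore(blurred, psf, k=0.01):
    """Wiener deconvolution with a constant noise-to-signal ratio k."""
    psf_pad = np.zeros(blurred.shape)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    # Centre the PSF at the origin so the restored image is not shifted.
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                      axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(F))
```

The estimation stage exploits the fact that a linear blur imprints near-periodic zeros on the spectrum, whose spacing and orientation give the blur length and angle.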
Smart security system for Indian rail wagons using IOT
NASA Astrophysics Data System (ADS)
Bhanuteja, S.; Shilpi, S.; Pragna, K.; Arun, M.
2017-11-01
The objective of this project is to create a security system for the goods that are carried in open-top freight trains. The most efficient way to secure anything from thieves is continuous observation. For continuous observation of the open-top freight train, a camera module has been used. A passive infrared (PIR) sensor has been used to detect motion, i.e., to sense the movement of people, animals, or objects. Whenever motion is detected by the PIR sensor, the camera takes a picture of that instant. The picture is sent to the Raspberry Pi, which runs a skin detection algorithm and determines whether the motion was caused by a human. If it was, the picture is uploaded to Dropbox, where any official can review it. Existing systems have CCTV installed at various critical locations like bridges and railway stations, but they do not provide continuous observation. This paper describes a security system that provides continuous observation for open-top freight trains so that goods can be carried safely to their destination.
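The trigger loop on the Raspberry Pi can be sketched with the standard GPIO and camera libraries (the pin number, paths, and the skin-detection helper are illustrative; the paper's exact wiring and code are not given):

```python
import time
import RPi.GPIO as GPIO
from picamera import PiCamera

PIR_PIN = 17  # illustrative GPIO pin for the PIR sensor output
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)
camera = PiCamera()

try:
    while True:
        GPIO.wait_for_edge(PIR_PIN, GPIO.RISING)  # block until motion
        path = f"/home/pi/captures/{int(time.time())}.jpg"
        camera.capture(path)
        # A hypothetical contains_skin(path) would run the skin-detection
        # step and decide whether to upload the image for inspection.
finally:
    GPIO.cleanup()
```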
NASA Astrophysics Data System (ADS)
To, T.; Nguyen, D.; Tran, G.
2015-04-01
Vietnam's heritage system has declined because of poor conservation conditions. Sustainable development requires firm control, spatial planning, and reasonable investment. Moreover, in the field of Cultural Heritage, automated photogrammetric systems based on Structure from Motion (SfM) techniques are widely used. With their potential for high resolution, low cost, large field of view, ease of use, rapidity, and completeness, the derivation of 3D metric information from structure-and-motion images is receiving great attention. In addition, heritage objects in the form of 3D physical models are recorded not only for documentation purposes, but also for historical interpretation, restoration, and cultural and educational purposes. The study presents the archaeological documentation of the One Pillar Pagoda in Hanoi, Vietnam. The data were acquired with a Canon EOS 550D digital camera (CMOS APS-C sensor, 22.3 x 14.9 mm). Camera calibration and orientation were carried out with the VisualSFM, CMPMVS (Multi-View Reconstruction), and SURE (Photogrammetric Surface Reconstruction from Imagery) software. The final result is a scaled 3D model of the One Pillar Pagoda, displayed in different views in the MeshLab software.
Aerodynamic Stability and Performance of Next-Generation Parachutes for Mars Descent
NASA Technical Reports Server (NTRS)
Gonyea, Keir C.; Tanner, Christopher L.; Clark, Ian G.; Kushner, Laura K.; Schairer, Edward T.; Braun, Robert D.
2013-01-01
The Low Density Supersonic Decelerator Project is developing a next-generation supersonic parachute for use on future Mars missions. In order to determine the new parachute configuration, a wind tunnel test was conducted at the National Full-Scale Aerodynamics Complex 80- by 120-foot wind tunnel at the NASA Ames Research Center. The goal of the wind tunnel test was to quantitatively determine the aerodynamic stability and performance of various canopy configurations in order to help select the design to be flown on the Supersonic Flight Dynamics tests. Parachute configurations included the disk-gap-band, ringsail, and ringsail-variant designs referred to as the disksail and starsail. During the wind tunnel test, digital cameras captured synchronized image streams of the parachute from three directions. Stereo photogrammetric processing was performed on the image data to track the position of the vent of the canopy throughout each run. The position data were processed to determine the geometric angular history of the parachute, which was then used to calculate the total angle of attack and its derivatives at each instant in time. Static and dynamic moment coefficients were extracted from these data using a parameter estimation method involving the one-dimensional equation of motion for the rotation of a parachute. The coefficients were calculated over all of the available canopy states to reconstruct moment coefficient curves as a function of total angle of attack. From the stability curves, useful metrics such as the trim total angle of attack and the pitch stiffness at the trim angle could be determined. These stability metrics were assessed in the context of the parachute's drag load and geometric porosity. While there was generally an inverse relationship between the drag load and the stability of the canopy, the data showed that it was possible to obtain stability properties similar to those of the disk-gap-band with slightly higher drag loads by appropriately tailoring the geometric porosity distribution.
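The parameter-estimation step can be viewed as a least-squares fit of the one-dimensional rotational equation of motion, I α̈ = q S d Cm(α, α̇), to the reconstructed angle history; a hedged sketch assuming a polynomial static term plus a linear damping term (the test's actual model and estimation method are more involved):

```python
import numpy as np

def fit_moment_coefficients(t, alpha, inertia, q_dyn, S, d, order=3):
    """Least-squares fit of I * alpha_ddot = q S d * Cm(alpha, alpha_dot).

    Returns polynomial static-moment coefficients and a damping coefficient.
    """
    alpha_dot = np.gradient(alpha, t)
    alpha_ddot = np.gradient(alpha_dot, t)
    cm = inertia * alpha_ddot / (q_dyn * S * d)   # instantaneous Cm samples
    A = np.column_stack([alpha**i for i in range(order + 1)] + [alpha_dot])
    coeffs, *_ = np.linalg.lstsq(A, cm, rcond=None)
    return coeffs[:-1], coeffs[-1]                # static terms, damping term

# Trim angle: root of the static polynomial; pitch stiffness: its slope there.
```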
Flight dynamics of axisymmetric rotating bodies in an air medium
NASA Astrophysics Data System (ADS)
Borisenok, I. T.; Lokshin, B. Ia.; Privalov, V. A.
1984-04-01
The free flight motion of a rigid axisymmetric body due to the action of its own weight, aerodynamic effects (autorotation), and possible reactive forces is examined. It is assumed that the central ellipsoid of inertia of the body is an ellipsoid of rotation about the axis of symmetry, and that the center of gravity is at the geometric center of the body. The region of stability of vertical descent is approximated by dividing a system of characteristic equations into fast and slow parts. It is shown that, for given gyroscopic forces, the presence of the nonconservative Magnus moment may lead to a loss of stability of this type of motion. The stability of the case of planar motion, where the Magnus force and weight form an equilibrium force system, and of the case of spiral motion is considered. Stability is also studied for the case of the center of mass at an arbitrary point on the axis of symmetry, and for the case of an axisymmetric body not having an equatorial plane of symmetry. Conditions for the equilibrium and precession stability of a rotating parachute in a wind tunnel are identified.
A role of abdomen in butterfly's flapping flight
NASA Astrophysics Data System (ADS)
Jayakumar, Jeeva; Senda, Kei; Yokoyama, Naoto
2017-11-01
Butterfly forward flight with periodic flapping motion is longitudinally unstable, and control of the thoracic pitching angle is essential to stabilize the flight. This study aims to understand the roles that abdominal motion plays in the pitching stability of butterfly flapping flight by using a two-dimensional model. The control of the thoracic pitching angle by abdominal motion is an underactuated problem because of the limit on the abdominal angle. The control input of the thorax-abdomen joint torque is obtained by hierarchical sliding mode control in this study. Numerical simulations reveal that control by abdominal motion provides short-term pitching stabilization in the butterfly's flight. Moreover, the control input due to a large thorax-abdomen joint torque can counteract a quite large perturbation and can return the pitching attitude to the periodic trajectory within a short recovery time. These observations are consistent with biologists' view that living butterflies use their abdomens as rudders. On the other hand, the abdominal control mostly fails in long-term pitching stabilization, because it cannot directly alter the aerodynamic forces. Control for long-term pitching stabilization will also be discussed.
Examining wildlife responses to phenology and wildfire using a landscape-scale camera trap network
Villarreal, Miguel L.; Gass, Leila; Norman, Laura; Sankey, Joel B.; Wallace, Cynthia S.A.; McMacken, Dennis; Childs, Jack L.; Petrakis, Roy E.
2012-01-01
Between 2001 and 2009, the Borderlands Jaguar Detection Project deployed 174 camera traps in the mountains of southern Arizona to record jaguar activity. In addition to jaguars, the motion-activated cameras, placed along known wildlife travel routes, recorded occurrences of ~ 20 other animal species. We examined temporal relationships of white-tailed deer (Odocoileus virginianus) and javelina (Pecari tajacu) to landscape phenology (as measured by monthly Normalized Difference Vegetation Index data) and the timing of wildfire (Alambre Fire of 2007). Mixed model analyses suggest that temporal dynamics of these two species were related to vegetation phenology and natural disturbance in the Sky Island region, information important for wildlife managers faced with uncertainty regarding changing climate and disturbance regimes.
Integrated evaluation of visually induced motion sickness in terms of autonomic nervous regulation.
Kiryu, Tohru; Tada, Gen; Toyama, Hiroshi; Iijima, Atsuhiko
2008-01-01
To evaluate visually induced motion sickness, we integrated subjective and objective responses in terms of autonomic nervous regulation. Twenty-seven subjects viewed a 2-min-long first-person-view video section five times (10 min in total) continuously. The measured biosignals (RR interval, respiration, and blood pressure) were used to estimate indices related to autonomic nervous activity (ANA). We then determined trigger points and sensation sections based on the time-varying behavior of the ANA-related indices. We found that a suitable combination of biosignals can represent the symptoms of visually induced motion sickness. Based on this combination, integrating trigger points and subjective scores allowed us to represent the time distribution of subjective responses during visual exposure, and helps us to understand what types of camera motions cause visually induced motion sickness.
Determination of Stability and Translation in a Boundary Layer
NASA Technical Reports Server (NTRS)
Crepeau, John; Tobak, Murray
1996-01-01
A method for reducing the infinite degrees of freedom inherent in fluid motion to a manageable number of modes for analyzing fluid motion is presented. The concepts behind the center manifold technique are used. The study of the Blasius boundary layer and a precise description of stability within the flow field are discussed.
Visual Neuroscience: Unique Neural System for Flight Stabilization in Hummingbirds.
Ibbotson, M R
2017-01-23
The pretectal visual motion processing area in the hummingbird brain is unlike that in other birds: instead of emphasizing detection of horizontal movements, it codes for motion in all directions through 360°, possibly offering precise visual stability control during hovering. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rivera, Gabriel; Rivera, Angela R. V.; Blob, Richard W.
2011-01-01
Hydrodynamic stability is the ability to resist recoil motions of the body produced by destabilizing forces. Previous studies have suggested that recoil motions can decrease locomotor performance, efficiency and sensory perception and that swimming animals might utilize kinematic strategies or possess morphological adaptations that reduce recoil motions and produce more stable trajectories. We used high-speed video to assess hydrodynamic stability during rectilinear swimming in the freshwater painted turtle (Chrysemys picta). Parameters of vertical stability (heave and pitch) were non-cyclic and variable, whereas measures of lateral stability (sideslip and yaw) showed repeatable cyclic patterns. In addition, because freshwater and marine turtles use different swimming styles, we tested the effects of propulsive mode on hydrodynamic stability during rectilinear swimming, by comparing our data from painted turtles with previously collected data from two species of marine turtle (Caretta caretta and Chelonia mydas). Painted turtles had higher levels of stability than both species of marine turtle for six of the eight parameters tested, highlighting potential disadvantages associated with ‘aquatic flight’. Finally, available data on hydrodynamic stability of other rigid-bodied vertebrates indicate that turtles are less stable than boxfish and pufferfish. PMID:21389201
Effects of camera location on the reconstruction of 3D flare trajectory with two cameras
NASA Astrophysics Data System (ADS)
Özsaraç, Seçkin; Yeşilkaya, Muhammed
2015-05-01
Flares are used as valuable electronic-warfare assets in the battle against infrared-guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point seen by multiple cameras is a common problem. Camera placement, camera calibration, determination of corresponding pixels between the images of different cameras, and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on flare-trajectory estimation performance by simulation. First, the 3D trajectories of a flare and of the aircraft that dispenses it are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, the image-plane coordinates of the flare on both cameras are computed using the field-of-view (FOV) values. To increase the fidelity of the simulation, we use two sources of error: one models the uncertainties in the determination of the camera view vectors, i.e., the orientations of the cameras are measured with noise; the second models the imperfections of corresponding-pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated from the corresponding pixel indices, the view vectors, and the FOV of the cameras by triangulation. All of these steps are repeated for different relative camera placements so that the optimum estimation-error performance is found for the given aircraft and flare trajectories.
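To make the triangulation step concrete, here is a minimal midpoint-method sketch under the paper's setup of two cameras with known positions and (noisy) view vectors; the function name and numeric values are illustrative, not from the paper.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two viewing rays.

    c1, c2: (3,) camera centers; d1, d2: (3,) unit line-of-sight vectors."""
    b = c2 - c1
    k = d1 @ d2
    denom = 1.0 - k ** 2                 # zero only if the rays are parallel
    t1 = (b @ d1 - k * (b @ d2)) / denom
    t2 = (k * (b @ d1) - b @ d2) / denom
    p1 = c1 + t1 * d1                    # closest point on ray 1
    p2 = c2 + t2 * d2                    # closest point on ray 2
    return 0.5 * (p1 + p2)

# Example: two cameras 2 km apart viewing a flare near (500, 800, 300) m.
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([2000.0, 0.0, 0.0])
target = np.array([500.0, 800.0, 300.0])
d1 = (target - c1) / np.linalg.norm(target - c1)
d2 = (target - c2) / np.linalg.norm(target - c2)
print(triangulate_midpoint(c1, d1, c2, d2))  # ~ [500, 800, 300]
```

With noisy view vectors the two rays no longer intersect, which is exactly why a midpoint (or least-squares) estimate is used and why the relative camera geometry drives the estimation error.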
Robust real-time horizon detection in full-motion video
NASA Astrophysics Data System (ADS)
Young, Grace B.; Bagnall, Bryan; Lane, Corey; Parameswaran, Shibin
2014-06-01
The ability to detect the horizon in real time in full-motion video is an important capability for real-time processing of full-motion videos for purposes such as object detection, recognition, and other video/image segmentation applications. In this paper, we propose a method for real-time horizon detection designed to be used as a front-end processing unit for a real-time marine object detection system that carries out object detection and tracking on full-motion videos captured by ship- or harbor-mounted cameras, Unmanned Aerial Vehicles (UAVs), or any other means of surveillance for Maritime Domain Awareness (MDA). Unlike existing horizon-detection work, we cannot assume a priori the angle or nature (e.g., a straight line) of the horizon, owing to the nature of the application domain and the data. Therefore, the proposed real-time algorithm is designed to identify the horizon at any angle and irrespective of objects appearing close to and/or occluding the horizon line (e.g., trees, vehicles at a distance), by accounting for its non-linear nature. We use a simple two-stage hierarchical methodology, leveraging color-based features, to quickly isolate the region of the image containing the horizon and then perform a more fine-grained horizon-detection operation. We present real-time horizon-detection results obtained with our algorithm on real-world full-motion video data from a variety of surveillance sensors, such as UAVs and ship-mounted cameras, confirming the real-time applicability of this method and its ability to detect the horizon with no a priori assumptions.
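A minimal coarse-to-fine sketch of such a two-stage, color-based search is given below; the blue-dominance feature and the band width are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

def detect_horizon(rgb):
    """rgb: (H, W, 3) float image in [0, 1]; returns a horizon row per column."""
    h, w, _ = rgb.shape
    # Stage 1 (coarse): find the row where a sky/non-sky color feature
    # changes fastest, averaged across the image width.
    blueness = rgb[..., 2] - 0.5 * (rgb[..., 0] + rgb[..., 1])
    row_profile = blueness.mean(axis=1)
    coarse_row = int(np.argmax(np.abs(np.diff(row_profile))))
    band = slice(max(0, coarse_row - h // 10), min(h, coarse_row + h // 10))
    # Stage 2 (fine): within the narrow band, take the strongest vertical
    # transition per column, so the horizon may be non-linear or occluded.
    band_grad = np.abs(np.diff(blueness[band], axis=0))
    return band.start + np.argmax(band_grad, axis=0)   # (W,) row indices
```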
NASA Technical Reports Server (NTRS)
Curtiss, H. C., Jr.; Komatsuzaki, T.; Traybar, J. J.
1979-01-01
The influence of single-loop feedbacks used to improve the stability of the system is considered. Reduced-order dynamic models are employed where appropriate to promote physical insight. The influence of fuselage freedom on aeroelastic stability, and the influence of airframe flexibility on the low-frequency modes of motion relevant to the stability and control characteristics of the vehicle, are examined.
Integrating a Motion Base into a CAVE Automatic Virtual Environment: Phase 1
2001-07-01
To achieve this, a CAVE system must perform well in the following motion-related areas: visual gaze stability, simulator sickness, realism (or face validity), and performance validity. Visual gaze stability, the ability to maintain eye fixation on a particular target, depends upon human reflexes such as the vestibulo-ocular reflex (VOR) and the optokinetic nystagmus (OKN). The VOR is a reflex that counter-rotates the eye relative to the head.
NASA Technical Reports Server (NTRS)
Campbell, John P; Mckinney, Marion O
1952-01-01
A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.
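As a worked illustration of the period-and-damping step (with hypothetical coefficients, not values from the report): the oscillatory roots of the lateral characteristic equation give the period from their imaginary part and the time to half amplitude from their real part.

```python
import numpy as np

# Hypothetical lateral characteristic polynomial (quartic in the root s).
roots = np.roots([1.0, 7.0, 24.0, 58.0, 40.0])
for r in roots:
    if r.imag > 1e-9:                        # one root per complex-conjugate pair
        period = 2 * np.pi / r.imag          # oscillation period
        t_half = np.log(2) / -r.real         # time to half amplitude (stable root)
        print(f"root {r:.2f}: period {period:.2f}, time to half amplitude {t_half:.2f}")
```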
Operational tracking of lava lake surface motion at Kīlauea Volcano, Hawai‘i
Patrick, Matthew R.; Orr, Tim R.
2018-03-08
Surface motion is an important component of lava lake behavior, but previous studies of lake motion have been focused on short time intervals. In this study, we implement the first continuous, real-time operational routine for tracking lava lake surface motion, applying the technique to the persistent lava lake in Halema‘uma‘u Crater at the summit of Kīlauea Volcano, Hawai‘i. We measure lake motion by using images from a fixed thermal camera positioned on the crater rim, transmitting images to the Hawaiian Volcano Observatory (HVO) in real time. We use an existing optical flow toolbox in Matlab to calculate motion vectors, and we track the position of lava upwelling in the lake, as well as the intensity of spattering on the lake surface. Over the past 2 years, real-time tracking of lava lake surface motion at Halema‘uma‘u has been an important part of monitoring the lake’s activity, serving as another valuable tool in the volcano monitoring suite at HVO.
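The sketch below shows the core of such a routine, using OpenCV's Farneback dense optical flow in Python rather than the Matlab toolbox the authors used; the file names and the divergence-based upwelling heuristic are illustrative assumptions.

```python
import cv2
import numpy as np

prev = cv2.imread("thermal_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
curr = cv2.imread("thermal_t1.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
u, v = flow[..., 0], flow[..., 1]                          # (H, W) motion vectors
speed = np.hypot(u, v)                                     # pixels per frame

# A lake upwelling acts as a source in the surface flow: a region of strongly
# positive divergence. Take the divergence peak as a crude upwelling location.
div = np.gradient(u, axis=1) + np.gradient(v, axis=0)
iy, ix = np.unravel_index(np.argmax(div), div.shape)
print(f"candidate upwelling at pixel ({ix}, {iy}); peak speed {speed.max():.1f} px/frame")
```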
3-D Flow Visualization with a Light-field Camera
NASA Astrophysics Data System (ADS)
Thurow, B.
2012-12-01
Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, three-component (3C) velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3D structure of the turbulent boundary layer.
[Figure captions: Schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera; this information can be used to computationally refocus an image after it has been acquired. Instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.]
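The velocity-extraction step pairs interrogation volumes from the two reconstructed particle fields and finds the displacement at the peak of their 3D cross-correlation; a minimal FFT-based sketch (illustrative, not the authors' exact code) follows.

```python
import numpy as np

def displacement(vol_a, vol_b):
    """Voxel displacement between two (Z, Y, X) interrogation volumes."""
    a = vol_a - vol_a.mean()
    b = vol_b - vol_b.mean()
    corr = np.fft.ifftn(np.fft.fftn(a).conj() * np.fft.fftn(b)).real
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    dims = np.array(corr.shape)
    shift[shift > dims // 2] -= dims[shift > dims // 2]  # wrap to signed shifts
    return shift  # (dz, dy, dx) in voxels; divide by the pulse separation for velocity
```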
Six-degrees-of-freedom sensing based on pictures taken by single camera.
Zhongke, Li; Yong, Wang; Yongyuan, Qin; Peijun, Lu
2005-02-01
Two six-degrees-of-freedom sensing methods are presented. In the first method, three laser beams are employed to set up a Cartesian frame on a rigid body, and a screen is adopted to form diffuse spots. In the second method, two superimposed grid screens and two laser beams are used. A CCD camera is used to take photographs in both methods. Both approaches provide a simple, error-free method of continuously recording the positions and attitudes of a rigid body in motion.
Variable-focus liquid lens for miniature cameras
NASA Astrophysics Data System (ADS)
Kuiper, S.; Hendriks, B. H. W.
2004-08-01
The meniscus between two immiscible liquids can be used as an optical lens. A change in curvature of this meniscus by electrowetting leads to a change in focal distance. It is demonstrated that two liquids in a tube form a self-centered lens with a high optical quality. The motion of the lens during a focusing action was studied by observation through the transparent tube wall. Finally, a miniature achromatic camera module was designed and constructed based on this adjustable lens, showing that it is excellently suited for use in portable applications.
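The underlying control relation is the Young-Lippmann equation (standard electrowetting theory, not quoted from the paper):

```latex
\cos\theta(V) = \cos\theta_0 + \frac{\varepsilon_0 \varepsilon_r}{2\,\gamma\, d}\, V^2
```

where $\theta_0$ is the zero-voltage contact angle, $\gamma$ the interfacial tension between the liquids, $d$ and $\varepsilon_r$ the thickness and relative permittivity of the insulating coating, and $V$ the applied voltage; changing $\theta$ changes the meniscus curvature and hence the focal distance.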
NASA Astrophysics Data System (ADS)
Kang, Sungil; Roh, Annah; Nam, Bodam; Hong, Hyunki
2011-12-01
This paper presents a novel vision system for people detection using an omnidirectional camera mounted on a mobile robot. In order to determine regions of interest (ROIs), we compute a dense optical-flow map using graphics processing units, which enables us to check consistency with the ego-motion of the robot in a dynamic environment. Shape-based classification algorithms are employed to sort the ROIs into human beings and non-humans. The experimental results show that the proposed system detects people more precisely than previous methods.
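A minimal sketch of the ROI step follows: pixels whose dense optical flow deviates from the flow predicted by the robot's ego-motion are flagged as independently moving. The predicted flow and the thresholds are placeholder assumptions; the GPU implementation and the shape classifier are omitted.

```python
import cv2
import numpy as np

def motion_rois(prev_gray, curr_gray, predicted_flow, thresh=2.0, min_area=200):
    """Bounding boxes of regions moving inconsistently with robot ego-motion."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    residual = np.linalg.norm(flow - predicted_flow, axis=2)   # px/frame
    mask = (residual > thresh).astype(np.uint8)                # independent motion
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)           # (x, y, w, h)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```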
View of Arabella, one of the two Skylab 3 spiders used in experiment
NASA Technical Reports Server (NTRS)
1973-01-01
A close-up view of Arabella, one of the two Skylab 3 common cross spiders 'Araneus diadematus,' and the web it had spun in the zero gravity of space aboard the Skylab space station cluster in Earth orbit. This is a photographic reproduction made from a color television transmission aboard Skylab. Arabella and Anita were housed in an enclosure onto which a motion picture camera and a still camera were attached to record the spiders' attempts to build a web in the weightless environment.