A simple demonstration when studying the equivalence principle
NASA Astrophysics Data System (ADS)
Mayer, Valery; Varaksina, Ekaterina
2016-06-01
The paper proposes a lecture experiment that can be demonstrated when studying the equivalence principle formulated by Albert Einstein. The demonstration consists of creating stroboscopic photographs of a ball moving along a parabola in Earth's gravitational field. In the first experiment, a camera is stationary relative to Earth's surface. In the second, the camera falls freely downwards with the ball, allowing students to see that the ball moves uniformly and rectilinearly relative to the frame of reference of the freely falling camera. The equivalence principle explains this result, as it is always possible to introduce a local inertial frame of reference for a small region of a gravitational field in which the effects of space-time curvature are negligible.
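A one-line kinematic check (standard projectile mechanics, not taken from the paper) makes the observed result explicit: subtracting the camera's free-fall motion from the ball's parabolic trajectory leaves uniform rectilinear motion.

```latex
% Ball launched with velocity (v_x, v_y) in the ground frame:
\[ x(t) = v_x t, \qquad y(t) = v_y t - \tfrac{1}{2} g t^2 \]
% Camera released from rest at the same instant, falling freely:
\[ y_c(t) = -\tfrac{1}{2} g t^2 \]
% Ball as seen from the falling camera -- the quadratic terms cancel:
\[ x'(t) = v_x t, \qquad y'(t) = y(t) - y_c(t) = v_y t \]
```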
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
Target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image...fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The...moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.
Mapping Land and Water Surface Topography with instantaneous Structure from Motion
NASA Astrophysics Data System (ADS)
Dietrich, J.; Fonstad, M. A.
2012-12-01
Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slowly moving objects. However, objects in motion are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks, and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include anything where researchers could benefit from temporal surface modeling, such as mass movements, lava flows, and dam-removal monitoring. The camera system that was used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers. The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows for manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.
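As a small illustration of the surface-differencing step mentioned above, the sketch below differences two co-registered water-surface DSMs; the file names and grid assumptions are hypothetical, not from the study.

```python
import numpy as np

# Two co-registered water-surface DSMs (metres) on the same grid;
# the .npy files are placeholders for illustration only.
dsm_t1 = np.load("water_surface_t1.npy")
dsm_t2 = np.load("water_surface_t2.npy")

change = dsm_t2 - dsm_t1              # positive values = water surface rise
valid = ~np.isnan(change)             # ignore cells with no data
print("mean change [m]:", change[valid].mean())
print("max rise    [m]:", change[valid].max())
```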
Suitability of digital camcorders for virtual reality image data capture
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola; Maas, Hans-Gerd
1998-12-01
Today's consumer-market digital camcorders offer features which make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine vision type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides the discussion of the technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of a means to synchronize multiple devices, which limits their suitability for 3-D motion data capture. Moreover, the standard video format is interlaced, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision-like equipment), this functionality could probably be included by the manufacturers at almost zero cost.
Capturing the plenoptic function in a swipe
NASA Astrophysics Data System (ADS)
Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi
2016-09-01
Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single-lens camera with the shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.
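Schematically (our notation, not the paper's), the swiped image is the integral of the views seen from each point along the straight camera path, which is why it encodes, rather than destroys, multi-viewpoint information:

```latex
% B(x): swiped (blurred) image; I(x; s): view from camera position s along
% the straight path of length S, with the shutter held open throughout.
\[ B(x) \;=\; \int_{0}^{S} I(x;\, s)\, \mathrm{d}s \]
```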
Video quality of 3G videophones for telephone cardiopulmonary resuscitation.
Tränkler, Uwe; Hagen, Oddvar; Horsch, Alexander
2008-01-01
We simulated a cardiopulmonary resuscitation (CPR) scene with a manikin and used two 3G videophones on the caller's side to transmit video to a laptop PC. Five observers (two doctors with experience in emergency medicine and three paramedics) evaluated the video. They judged whether the manikin was breathing and whether they would give advice for CPR; they also graded the confidence of their decision-making. Breathing was only visible from certain orientations of the videophones, at distances below 150 cm with good illumination and a still background. Since the phones produced a degradation in colours and shadows, detection of breathing mainly depended on moving contours. Low camera positioning produced better results than having the camera high up. Darkness, shaking of the camera and a moving background made detection of breathing almost impossible. The video from the two 3G videophones that were tested was of sufficient quality for telephone CPR provided that camera orientation, distance, illumination and background were carefully chosen. Thus it seems possible to use 3G videophones for emergency calls involving CPR. However, further studies on the required video quality in different scenarios are necessary.
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, such as those found in smart phones. This paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.
Nonholonomic camera-space manipulation using cameras mounted on a mobile base
NASA Astrophysics Data System (ADS)
Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun
1998-10-01
The body of work called `Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D `success' of the desired motion, i.e., the end effector of the manipulator engages a target at a particular location with a particular orientation, is guaranteed when there is camera space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera space manipulation by considering the problem of a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental `path dependent' nature of nonholonomic kinematics. This work focuses on the sensor space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.
Generating Stereoscopic Television Images With One Camera
NASA Technical Reports Server (NTRS)
Coan, Paul P.
1996-01-01
Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
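A toy sketch of the delay-line idea described above: each incoming frame is paired with the frame captured a fixed number of frames earlier, mimicking the left-eye/right-eye pairing of a single translating camera. The frame source and delay length are illustrative assumptions, not values from the brief.

```python
from collections import deque
import numpy as np

def stereo_pairs(frames, delay_frames):
    """Pair each frame with the frame captured `delay_frames` earlier,
    yielding (earlier, current) tuples that stand in for left/right views."""
    buffer = deque(maxlen=delay_frames)
    for frame in frames:
        if len(buffer) == delay_frames:
            yield buffer[0], frame        # (left-eye image, right-eye image)
        buffer.append(frame)

# Dummy frames standing in for the laterally translating camera's video.
frames = (np.full((480, 640), i, dtype=np.uint8) for i in range(10))
for left, right in stereo_pairs(frames, delay_frames=3):
    pass  # display or encode the stereo pair here
```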
Predicting Moves-on-Stills for Comic Art Using Viewer Gaze Data.
Jain, Eakta; Sheikh, Yaser; Hodgins, Jessica
2016-01-01
Comic art consists of a sequence of panels of different shapes and sizes that visually communicate the narrative to the reader. The move-on-stills technique allows such still images to be retargeted for digital displays via camera moves. Today, moves-on-stills can be created by software applications given user-provided parameters for each desired camera move. The proposed algorithm uses viewer gaze as input to computationally predict camera move parameters. The authors demonstrate their algorithm on various comic book panels and evaluate its performance by comparing their results with a professional DVD.
NASA Astrophysics Data System (ADS)
Scopatz, Stephen D.; Mendez, Michael; Trent, Randall
2015-05-01
The projection of controlled moving targets is key to the quantitative testing of video capture and post processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets or apparent moving targets creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons delivered in fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed that correspond to MTF (resolution), SNR and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measure of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development as well as in comparing various systems by presenting the exact scenes to the cameras in a repeatable way.
Constrained space camera assembly
Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.
1999-01-01
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.
Detecting method of subjects' 3D positions and experimental advanced camera control system
NASA Astrophysics Data System (ADS)
Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi
1997-04-01
Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method of detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
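To make the two-camera 3D position step concrete, here is a minimal triangulation sketch using OpenCV; the projection matrices, normalized image coordinates, and baseline are toy values, and the colour-based subject detection described in the abstract is assumed to have already produced the image coordinates.

```python
import numpy as np
import cv2

# Toy projection matrices for two calibrated sensor cameras
# (identity intrinsics, 0.5 m horizontal baseline) -- illustrative only.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Normalized image coordinates of the tracked subject in each camera (2xN),
# assumed to come from the colour-based detection stage.
pts1 = np.array([[0.12], [0.05]])
pts2 = np.array([[0.02], [0.05]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                    # 3D subject position (m)
print("subject position:", X)                     # ~[0.6, 0.25, 5.0]
```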
Non-iterative volumetric particle reconstruction near moving bodies
NASA Astrophysics Data System (ADS)
Mendelson, Leah; Techet, Alexandra
2017-11-01
When multi-camera 3D PIV experiments are performed around a moving body, the body often obscures visibility of regions of interest in the flow field in a subset of cameras. We evaluate the performance of non-iterative particle reconstruction algorithms used for synthetic aperture PIV (SAPIV) in these partially-occluded regions. We show that when partial occlusions are present, the quality and availability of 3D tracer particle information depends on the number of cameras and reconstruction procedure used. Based on these findings, we introduce an improved non-iterative reconstruction routine for SAPIV around bodies. The reconstruction procedure combines binary masks, already required for reconstruction of the body's 3D visual hull, and a minimum line-of-sight algorithm. This approach accounts for partial occlusions without performing separate processing for each possible subset of cameras. We combine this reconstruction procedure with three-dimensional imaging on both sides of the free surface to reveal multi-fin wake interactions generated by a jumping archer fish. Sufficient particle reconstruction in near-body regions is crucial to resolving the wake structures of upstream fins (i.e., dorsal and anal fins) before and during interactions with the caudal tail.
Real time moving scene holographic camera system
NASA Technical Reports Server (NTRS)
Kurtz, R. L. (Inventor)
1973-01-01
A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).
Rogers, B.T. Jr.; Davis, W.C.
1957-12-17
This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirror and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is thereby possible.
Constrained space camera assembly
Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.
1999-05-11
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.
Using external sensors in solution of SLAM task
NASA Astrophysics Data System (ADS)
Provkov, V. S.; Starodubtsev, I. S.
2018-05-01
This article describes the SLAM and PTAM spatial-orientation algorithms and their strengths and weaknesses. Based on the SLAM method, a method was developed that uses an RGBD camera and additional sensors: an accelerometer, a gyroscope, and a magnetometer. The investigated orientation methods have their advantages when moving along a straight trajectory or when rotating a moving platform. As a result of experiments and a weighted linear combination of the positions obtained from the data of the RGBD camera and the nine-axis sensor, it became possible to improve the accuracy of the original algorithm even when using a constant as the weight function. In the future, it is planned to develop an algorithm for the dynamic construction of a weight function, as a result of which an increase in the accuracy of the algorithm is expected.
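The weighted linear combination of the camera and inertial position estimates can be sketched as below; the weight value and variable names are illustrative assumptions, since the article only states that even a constant weight improved accuracy.

```python
import numpy as np

def fuse_positions(p_camera, p_imu, w=0.7):
    """Weighted linear combination of the RGBD-camera position estimate and
    the position integrated from the nine-axis sensor.  A constant weight is
    the simplest case; a dynamic weight function would replace `w`."""
    return w * np.asarray(p_camera, float) + (1.0 - w) * np.asarray(p_imu, float)

# Illustrative position estimates in metres.
print(fuse_positions([1.02, 0.48, 0.00], [0.95, 0.52, 0.01]))
```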
Moving Object Detection on a Vehicle Mounted Back-Up Camera
Kim, Dong-Sun; Kwon, Jinsan
2015-01-01
In the detection of moving objects from vision sources one usually assumes that the scene has been captured by stationary cameras. In the case of backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle's movement, resulting in ego-motion on the background. This results in mixed motion in the scene, and makes it difficult to distinguish between the target objects and background motions. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will lead to many false-positive detection results. In this paper, we suggest a procedure to be used with the traditional moving object detection methods that relaxes the stationary-camera restriction, by introducing additional steps before and after the detection. We also describe the implementation on an FPGA platform along with the algorithm. The target application of this suggestion is use with a road vehicle's rear-view camera system. PMID:26712761
Design and realization of an AEC&AGC system for the CCD aerial camera
NASA Astrophysics Data System (ADS)
Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun
2015-08-01
An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. The usual AEC and AGC algorithms are not suitable for the aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic Gamma correction is applied before the image is output, so that the image is better suited to viewing and analysis by human eyes. The AEC and AGC system can avoid underexposure, overexposure, or image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment speed, high adaptability, and high reliability in severe and complex environments.
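A minimal sketch of the kind of exposure/gain decision the abstract describes, assuming the shutter is first capped by a motion-blur budget and any remaining brightness deficit is made up with analogue gain; all parameter names and numbers here are illustrative, not the paper's.

```python
def adjust_exposure_gain(mean_brightness, ground_speed_mps,
                         target_brightness=128.0,
                         min_exposure_s=1/20000, max_exposure_s=1/2000,
                         gsd_m=0.1, max_blur_px=0.5):
    """Pick an electronic-shutter time bounded by a motion-blur budget,
    then compensate the remaining brightness error with analogue gain."""
    # Longest exposure that keeps forward motion under max_blur_px pixels.
    blur_limited = max_blur_px * gsd_m / max(ground_speed_mps, 1e-6)
    exposure = max(min_exposure_s, min(max_exposure_s, blur_limited))

    # Simple linear brightness model: gain makes up what the shutter cannot.
    gain = max(1.0, target_brightness / max(mean_brightness, 1.0))
    return exposure, gain

print(adjust_exposure_gain(mean_brightness=60, ground_speed_mps=150))
```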
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
Feghali, Rosario; Mitiche, Amar
2004-11-01
The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.
NASA Astrophysics Data System (ADS)
Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.
2014-09-01
Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases, it is practically impossible to prevent camera motion, as for instance, when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to remove the camera movement from the scene measurements.
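The ex post facto correction can be pictured with the small sketch below: a linear map from reference-camera pixel motion to apparent scene motion is fitted on calibration data (when the scene object is known to be stationary) and then subtracted during measurement. The file names and the purely linear model are illustrative assumptions.

```python
import numpy as np

# Calibration interval: the scene object is stationary, so apparent scene
# displacements are caused entirely by camera motion seen by the reference
# camera.  The .npy files below are placeholders (arrays of shape (N, 2), px).
ref_cal   = np.load("reference_displacements_cal.npy")
scene_cal = np.load("scene_displacements_cal.npy")

# Least-squares linear map A: reference-camera motion -> apparent scene motion.
A, *_ = np.linalg.lstsq(ref_cal, scene_cal, rcond=None)

# Measurement interval: subtract the predicted camera-induced motion.
ref_meas   = np.load("reference_displacements_meas.npy")
scene_meas = np.load("scene_displacements_meas.npy")
object_motion = scene_meas - ref_meas @ A
```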
Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung
2017-02-01
A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate selection interface for a ROI, surgeons can also obtain a detailed local view as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features (P < 0.01). The success rate of ROI selection diminished as the number of separated regions increased. However, separated regions up to 12 with a region size of 160 × 160 pixels were selected with no failure. Surgical tasks on a phantom model and a cadaver were attempted to verify the feasibility in a clinical environment. Hands-free endoscope manipulation without releasing the instruments in hand was achieved. The proposed method requires only a small, low-cost camera and image processing. The technique enables surgeons to perform solo surgeries without a camera assistant.
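As a rough illustration of the feature-tracking step, the sketch below matches accelerated-KAZE (AKAZE) features between two consecutive mini-camera frames with OpenCV and reports the median image shift; the file names are placeholders and the median-shift heuristic is our simplification, not the paper's pose-estimation pipeline.

```python
import cv2
import numpy as np

# Two consecutive mini-camera frames (placeholder file names).
prev = cv2.imread("mini_cam_prev.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("mini_cam_curr.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(prev, None)
kp2, des2 = akaze.detectAndCompute(curr, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Median feature displacement as a crude estimate of instrument movement,
# which would then drive the ROI selection in 'selection mode'.
shifts = np.array([np.array(kp2[m.trainIdx].pt) - np.array(kp1[m.queryIdx].pt)
                   for m in matches])
dx, dy = np.median(shifts, axis=0)
print(f"estimated image shift: ({dx:.1f}, {dy:.1f}) px")
```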
Development of a table tennis robot for ball interception using visual feedback
NASA Astrophysics Data System (ADS)
Parnichkun, Manukid; Thalagoda, Janitha A.
2016-07-01
This paper presents a concept for intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), which are simplified in such a way that the system is able to perform the task within the bounded limits. It employs computer vision to localize the ball. For ball identification, Colour Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. Because the computation time of image processing from the web cameras is long, it is not possible to intercept the table tennis ball using image processing alone. Therefore a projectile motion model is employed to predict the final destination of the ball.
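A minimal projectile-motion prediction of the kind the abstract relies on might look like the sketch below (drag and spin ignored; the coordinate convention, interception plane, and all numbers are illustrative assumptions, not the authors' model).

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def predict_interception(p0, v0, x_plane):
    """Predict when and where a ball at position p0 (m) with velocity v0 (m/s)
    crosses the vertical plane x = x_plane, using drag-free projectile motion."""
    t = (x_plane - p0[0]) / v0[0]              # time of flight to the plane
    y = p0[1] + v0[1] * t                      # lateral position
    z = p0[2] + v0[2] * t - 0.5 * G * t**2     # height under gravity
    return t, np.array([x_plane, y, z])

t_hit, p_hit = predict_interception(p0=[0.0, 0.0, 0.3],
                                    v0=[4.0, 0.2, 1.5],
                                    x_plane=2.0)
print(f"intercept in {t_hit:.2f} s at {p_hit}")
```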
Development of Automated Tracking System with Active Cameras for Figure Skating
NASA Astrophysics Data System (ADS)
Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi
This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which can leave an object partially or fully occluded in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
Robust drone detection for day/night counter-UAV with static VIS and SWIR cameras
NASA Astrophysics Data System (ADS)
Müller, Thomas
2017-05-01
Recent progress in the development of unmanned aerial vehicles (UAVs) has led to more and more situations in which drones like quadrocopters or octocopters pose a potentially serious threat or could be used as a powerful tool for illegal activities. Therefore, counter-UAV systems are required in a lot of applications to detect approaching drones as early as possible. In this paper, an efficient and robust algorithm is presented for UAV detection using static VIS and SWIR cameras. Whereas VIS cameras with a high resolution enable UAVs to be detected in the daytime at greater distances, surveillance at night can be performed with a SWIR camera. First, a background estimation and structural adaptive change detection process detects movements and other changes in the observed scene. Afterwards, the local density of changes is computed and used for background density learning and to build up the foreground model; the two models are compared in order to finally obtain the UAV alarm result. The density model is used to filter out noise effects, on the one hand. On the other hand, moving scene parts like moving leaves in the wind or driving cars on a street can easily be learned in order to mask such areas out and suppress false alarms there. This scene learning is done automatically, simply by processing without UAVs, in order to capture the normal situation. The given results document the performance of the presented approach in VIS and SWIR in different situations.
Sequential Monte Carlo Instant Radiosity.
Hedman, Peter; Karras, Tero; Lehtinen, Jaakko
2017-05-01
Instant Radiosity and its derivatives are interactive methods for efficiently estimating global (indirect) illumination. They represent the last indirect bounce of illumination before the camera as the composite radiance field emitted by a set of virtual point light sources (VPLs). In complex scenes, current algorithms suffer from a difficult combination of two issues: it remains a challenge to distribute VPLs in a manner that simultaneously gives a high-quality indirect illumination solution for each frame, and to do so in a temporally coherent manner. We address both issues by building, and maintaining over time, an adaptive and temporally coherent distribution of VPLs in locations where they bring indirect light to the image. We introduce a novel heuristic sampling method that strives to only move as few of the VPLs between frames as possible. The result is, to the best of our knowledge, the first interactive global illumination algorithm that works in complex, highly-occluded scenes, suffers little from temporal flickering, supports moving cameras and light sources, and is output-sensitive in the sense that it places VPLs in locations that matter most to the final result.
First stereo video dataset with ground truth for remote car pose estimation using satellite markers
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Pierini, Marco
2018-04-01
Leading causes of PTW (Powered Two-Wheeler) crashes and near misses in urban areas are failures or delays in predicting the changing trajectories of other vehicles. Regrettably, misperception by both car drivers and motorcycle riders results in fatal or serious consequences for riders. Intelligent vehicles could provide early warning about possible collisions, helping to avoid the crash. There is evidence that stereo cameras can be used for estimating the heading angle of other vehicles, which is key to anticipating their imminent location, but there is limited heading ground truth data available in the public domain. Consequently, we employed a marker-based technique for creating ground truth of car pose and created a dataset∗ for computer vision benchmarking purposes. This dataset of a moving vehicle, collected from a statically mounted stereo camera, is a simplification of a complex and dynamic reality, which serves as a test bed for car pose estimation algorithms. The dataset contains the accurate pose of the moving obstacle, and realistic imagery including texture-less and non-Lambertian surfaces (e.g. reflectance and transparency).
Independent motion detection with a rival penalized adaptive particle filter
NASA Astrophysics Data System (ADS)
Becker, Stefan; Hübner, Wolfgang; Arens, Michael
2014-10-01
Aggregation of pixel-based motion detection into regions of interest, which include views of single moving objects in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high-level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera is the result of two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are adversely blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is concentrated on image regions showing strong amplitudes in the difference image that are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution which is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz, so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting of fast-moving objects by exploiting the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular for surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational-methods optimization algorithm to map the optical flow fields computed from the different wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to those of other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
Inspecting rapidly moving surfaces for small defects using CNN cameras
NASA Astrophysics Data System (ADS)
Blug, Andreas; Carl, Daniel; Höfler, Heinrich
2013-04-01
A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing processes, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm in real time. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time with frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates on line cameras of between 360 and 880 kHz, far beyond what available line cameras offer. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN based system outperforms conventional image processing systems by an order of magnitude.
Holographic motion picture camera with Doppler shift compensation
NASA Technical Reports Server (NTRS)
Kurtz, R. L. (Inventor)
1976-01-01
A holographic motion picture camera is reported for producing three dimensional images by employing an elliptical optical system. There is provided in one of the beam paths (the object or reference beam path) a motion compensator which enables the camera to photograph faster moving objects.
Measuring Positions of Objects using Two or More Cameras
NASA Technical Reports Server (NTRS)
Klinko, Steve; Lane, John; Nelson, Christopher
2008-01-01
An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras. This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
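For reference, the two camera models named above are usually written as follows (standard textbook forms, not equations taken from the brief): the perspective model scales by depth, while the affine model drops that dependence when the object's depth range is small compared with its distance.

```latex
% Perspective (point) camera: world point (X, Y, Z) projects through the
% intrinsic matrix K and pose (R, t), with depth-dependent scale \lambda.
\[ \lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
   = K \, [\, R \mid t \,] \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \]
% Affine camera: the projection becomes linear, with no division by depth.
\[ \begin{pmatrix} u \\ v \end{pmatrix}
   = M_{2\times 3} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + t_{2\times 1} \]
```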
Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheriyadat, Anil M.
Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that involves identifying multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, resulting in partial trajectories that span coherent 3D regions in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach in the algorithm is to identify all possible coherent motion regions, then extract a subset of motion regions based on an innovative measure to automatically locate moving objects in crowded environments. The software reports a snapshot of the object, a count, and derived statistics (count over time) from input video streams. The software can directly process videos streamed over the internet or directly from a hardware device (camera).
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried by UAVs. The pixels of moving targets in the HD video taken by a UAV are always in the minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the algorithm prevents running it at the higher resolutions of the frame. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm for UAV video. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets from the HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough to solve the problem.
Intraocular camera for retinal prostheses: Refractive and diffractive lens systems
NASA Astrophysics Data System (ADS)
Hauer, Michelle Christine
The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.
Servo-controlled intravital microscope system
NASA Technical Reports Server (NTRS)
Mansour, M. N.; Wayland, H. J.; Chapman, C. P. (Inventor)
1975-01-01
A microscope system is described for viewing an area of living body tissue that is rapidly moving, by maintaining the same area in the field of view and in focus. A focus-sensing portion of the system includes two video cameras onto which the viewed image is projected, one camera being slightly in front of the image plane and the other slightly behind it. A focus-sensing circuit for each camera differentiates certain high-frequency components of the video signal and then detects them and passes them through a low-pass filter, to provide DC focus signals whose magnitudes represent the degree of focus. An error signal, equal to the difference between the focus signals, drives a servo that moves the microscope objective so that an in-focus view is delivered to an image viewing/recording camera.
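A digital analogue of the focus-error computation described above might look like the following sketch; the derivative-based focus measure and the demo arrays are our illustrative stand-ins for the analogue differentiate/detect/low-pass chain.

```python
import numpy as np

def focus_measure(frame):
    """High-frequency content of a frame, reduced to a single focus value:
    a digital stand-in for differentiate -> detect -> low-pass filter."""
    high_freq = np.diff(frame.astype(float), axis=1)   # horizontal derivative
    return np.mean(np.abs(high_freq))                  # rectify and average

def focus_error(frame_front, frame_behind):
    """Error signal for the objective servo: the difference between the focus
    values of the cameras slightly in front of and behind the image plane."""
    return focus_measure(frame_front) - focus_measure(frame_behind)

# Demo with random images standing in for the two camera feeds.
rng = np.random.default_rng(0)
print(focus_error(rng.random((64, 64)), rng.random((64, 64))))
```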
Light field analysis and its applications in adaptive optics and surveillance systems
NASA Astrophysics Data System (ADS)
Eslami, Mohammed Ali
An image can only be as good as the optics of a camera or any other imaging system allows it to be. An imaging system is merely a transformation that takes a 3D world coordinate to a 2D image plane. This can be done through both linear and non-linear transfer functions. Depending on the application at hand, it is easier to use some models of imaging systems over others. The most well-known models for optical systems are the 1) pinhole model, 2) thin-lens model and 3) thick-lens model. Using light-field analysis, the connection between these different models is described. A novel figure of merit is presented for choosing one optical model over another for certain applications. After analyzing these optical systems, their applications in plenoptic cameras for adaptive optics are introduced. A new technique to use a plenoptic camera to extract information about a localized distorted planar wave front is described. CODEV simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of angles that can be extracted, assuming a paraxial imaging system. As a final application, a novel dual-PTZ surveillance system to track a target through space is presented. 22X optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship based on encoder readouts rather than complicated image processing algorithms for real-time target tracking. As the target moves out of a region of interest in the master camera, the camera is moved to force the target back into the region of interest. Once the master camera is moved, a precalibrated lookup table is interpolated to compute the relationship between the master and slave cameras. The homography that relates the pixels of the master camera to the pan/tilt settings of the slave camera then continues to follow the planar trajectories of targets as they move through space with high accuracy.
A method of real-time detection for distant moving obstacles by monocular vision
NASA Astrophysics Data System (ADS)
Jia, Bao-zhi; Zhu, Ming
2013-12-01
In this paper, we propose an approach for detecting distant moving obstacles, such as cars and bicycles, with a monocular camera that cooperates with ultrasonic sensors under low-cost conditions. We aim to detect distant obstacles that move toward our autonomous navigation car, in order to raise an alarm and keep away from them. Frame differencing is applied to find obstacles after compensating for the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level indicating whether it is coming closer. Results on an open dataset and on our own autonomous navigation car show that the method is effective for real-time detection of distant moving obstacles.
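A minimal sketch of ego-motion-compensated frame differencing, assuming OpenCV and a roughly planar or distant background so that a single homography approximates the camera-induced motion; the feature counts and thresholds below are illustrative, not the paper's values.

```python
import cv2
import numpy as np

def moving_obstacle_mask(prev_gray, curr_gray, diff_thresh=25):
    # Track sparse corners to estimate the camera's ego-motion as a homography.
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400, qualityLevel=0.01, minDistance=8)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None)
    good0 = pts0[status.flatten() == 1]
    good1 = pts1[status.flatten() == 1]
    H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
    # Warp the previous frame so the static background aligns with the current frame.
    h, w = curr_gray.shape
    warped = cv2.warpPerspective(prev_gray, H, (w, h))
    # Residual differences are dominated by independently moving obstacles.
    diff = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```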
A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications
Fu, Bo; Pitter, Mark C.; Russell, Noah A.
2011-01-01
Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside the ROI. We have addressed this problem by building a camera system using a randomly addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full-frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera can image moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
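The interplay between the low-speed full frame and the high-speed ROI can be sketched as follows; this is hypothetical Python/NumPy, and the brightest-blob criterion is only a stand-in for whatever target statistic the real system uses to recalculate ROI positions.

```python
import numpy as np

def update_roi(full_frame, roi_size, prev_center, search_radius=40):
    """Re-centre the high-speed ROI on the brightest pixel near the previous target position."""
    y0, x0 = prev_center
    r = search_radius
    top0, left0 = max(0, y0 - r), max(0, x0 - r)
    window = full_frame[top0:y0 + r, left0:x0 + r]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    new_center = (top0 + dy, left0 + dx)
    # ROI is returned as (top, left, height, width) for the sensor's address generator.
    half_h, half_w = roi_size[0] // 2, roi_size[1] // 2
    return new_center, (new_center[0] - half_h, new_center[1] - half_w, roi_size[0], roi_size[1])
```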
Xu, Yilei; Roy-Chowdhury, Amit K
2007-05-01
In this paper, we present a theory for combining the effects of motion, illumination, 3D structure, albedo, and camera parameters in a sequence of images obtained by a perspective camera. We show that the set of all Lambertian reflectance functions of a moving object, at any position, illuminated by arbitrarily distant light sources, lies "close" to a bilinear subspace consisting of nine illumination variables and six motion variables. This result implies that, given an arbitrary video sequence, it is possible to recover the 3D structure, motion, and illumination conditions simultaneously using the bilinear subspace formulation. The derivation builds upon existing work on linear subspace representations of reflectance by generalizing it to moving objects. Lighting can change slowly or suddenly, locally or globally, and can originate from a combination of point and extended sources. We experimentally compare the results of our theory with ground truth data and also provide results on real data by using video sequences of a 3D face and the entire human body with various combinations of motion and illumination directions. We also show results of our theory in estimating 3D motion and illumination model parameters from a video sequence.
Anomaly Detection in Moving-Camera Video Sequences Using Principal Subspace Analysis
Thomaz, Lucas A.; Jardim, Eric; da Silva, Allan F.; ...
2017-10-16
This study presents a family of algorithms based on sparse decompositions that detect anomalies in video sequences obtained from slow moving cameras. These algorithms start by computing the union of subspaces that best represents all the frames from a reference (anomaly free) video as a low-rank projection plus a sparse residue. Then, they perform a low-rank representation of a target (possibly anomalous) video by taking advantage of both the union of subspaces and the sparse residue computed from the reference video. Such algorithms provide good detection results while at the same time obviating the need for previous video synchronization. However, this is obtained at the cost of a large computational complexity, which hinders their applicability. Another contribution of this paper approaches this problem by using intrinsic properties of the obtained data representation in order to restrict the search space to the most relevant subspaces, providing computational complexity gains of up to two orders of magnitude. The developed algorithms are shown to cope well with videos acquired in challenging scenarios, as verified by the analysis of 59 videos from the VDAO database that comprises videos with abandoned objects in a cluttered industrial scenario.
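The core idea, projecting target frames onto a subspace learned from the anomaly-free reference video and thresholding the sparse residue, can be illustrated with a deliberately simplified single-subspace sketch. This is hypothetical Python/NumPy; the published algorithms use a union of subspaces and sparse decompositions, so the rank and threshold here are illustrative only.

```python
import numpy as np

def reference_subspace(ref_frames, rank=20):
    """Fit a low-rank subspace to the anomaly-free reference frames (flattened to columns)."""
    X = np.stack([f.ravel() for f in ref_frames], axis=1).astype(float)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank]

def anomaly_map(frame, U, thresh=30.0):
    """Project a target frame onto the reference subspace; large residues flag anomalies."""
    x = frame.ravel().astype(float)
    residue = x - U @ (U.T @ x)
    return np.abs(residue).reshape(frame.shape) > thresh
```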
Wired and Wireless Camera Triggering with Arduino
NASA Astrophysics Data System (ADS)
Kauhanen, H.; Rönnholm, P.
2017-10-01
Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise-filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS), which is suitable for measuring the triggering accuracy of global-shutter cameras. As a result, the wired system exhibited a mean triggering time difference of 8.91 μs between two cameras. The corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. The presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.
Watching elderly and disabled person's physical condition by remotely controlled monorail robot
NASA Astrophysics Data System (ADS)
Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru
2001-10-01
We are developing a nursing system using robots and cameras. The cameras are mounted on a remotely controlled monorail robot that moves inside a room and watches the elderly. Constant attention must be paid to the elderly at home or in nursing homes, which places a heavy burden on care staff; the purpose of our system is to help those staff and improve this situation. A host computer directs the monorail robot to move in front of the elderly person using images taken by cameras on the ceiling. A CCD camera mounted on the monorail robot takes pictures of the person's facial expression and movements. The robot sends the images to the host computer, which checks whether anything unusual has happened. We propose a simple calibration method for positioning the monorail robot to track the movements of the elderly and keep their faces at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image-processing algorithm.
Using virtual environment for autonomous vehicle algorithm validation
NASA Astrophysics Data System (ADS)
Levinskis, Aleksandrs
2018-04-01
This paper describes the possible use of a modern game engine for validating and proving algorithm design concepts. A simple visual odometry algorithm is presented to demonstrate the concept and walk through all workflow stages. Some stages involve a Kalman filter that estimates the optical-flow velocity as well as the position of a camera mounted on the vehicle body. In particular, the Unreal Engine 4 game engine is used to generate optical-flow patterns and a ground-truth path. Optical flow is determined with the Horn and Schunck method (a minimal sketch is given below). It is shown that such a method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to ground truth, depending on the optical-flow pattern. The displacement RMS error is calculated between the estimated and actual positions.
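As a reference point for the optical-flow stage, a minimal Horn-Schunck implementation is sketched below. This is hypothetical Python/NumPy/SciPy; the kernels, regularization weight alpha, and iteration count are standard textbook choices, not the paper's.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Dense Horn-Schunck optical flow (u, v) between two grayscale frames."""
    im1, im2 = im1.astype(float), im2.astype(float)
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])   # spatial derivative kernels
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2 - im1, 0.25 * np.ones((2, 2)))    # temporal derivative
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0  # neighbourhood average
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg, v_avg = convolve(u, avg), convolve(v, avg)
        common = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * common
        v = v_avg - Iy * common
    return u, v
```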
Computational cameras for moving iris recognition
NASA Astrophysics Data System (ADS)
McCloskey, Scott; Venkatesha, Sharath
2015-05-01
Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.
NASA Technical Reports Server (NTRS)
Hilton, Kevin; Karl, Chad; Litherland, Mark; Ritchie, David; Sun, Nancy
1992-01-01
The dust control group designed a system to restrict dust that is disturbed by the Enabler during its operation from interfering with astronaut or camera visibility. This design also considers the many different wheel positions made possible through the use of articulation joints that provide the steering and wheel pitching for the Enabler. The system uses a combination of brushes and fenders to restrict the dust when the vehicle is moving in either direction and in a turn. This design also allows for ease of maintenance as well as accessibility of the remainder of the vehicle.
Camera-Only Kinematics for Small Lunar Rovers
NASA Astrophysics Data System (ADS)
Fang, E.; Suresh, S.; Whittaker, W.
2016-11-01
Knowledge of the kinematic state of rovers is critical. Existing methods add sensors and wiring to moving parts, which can fail and adds mass and volume. This research presents a method to optically determine kinematic state using a single camera.
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a dynamic object. We developed a linear-array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the conventional binocular system, the linear-array CCD binocular system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion with accurate results. This paper mainly introduces the composition and principle of the linear-array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction steps. The system consists of two linear-array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and reconstructed in 3-D. The linear-array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of great significance for measuring the 3-D morphology of moving objects.
Miura, Hideharu; Ozawa, Shuichi; Hayata, Masahiro; Tsuda, Shintaro; Yamada, Kiyoshi; Nagata, Yasushi
2016-09-08
We proposed a simple visual method for evaluating the dynamic tumor tracking (DTT) accuracy of a gimbal mechanism using a light field. A single photon beam was set with a field size of 30 × 30 mm² at a gantry angle of 90°. The center of a cube phantom was set up at the isocenter of a motion table, and 4D modeling was performed based on the tumor and infrared (IR) marker motion. After 4D modeling, the cube phantom was replaced with a sheet of paper, which was placed perpendicularly, and a light field was projected onto the sheet of paper. The light field was recorded using a web camera in a treatment room that was kept as dark as possible. Calculated images from each frame obtained with the camera were summed to compose a total summation image. Sinusoidal motion sequences were produced by moving the phantom with a fixed amplitude of 20 mm and breathing periods of 2, 4, 6, and 8 s. The light field was projected on the sheet of paper under three conditions: with the moving phantom and DTT based on the motion of the phantom, with the moving phantom and no DTT, and with a stationary phantom for comparison. The tracking errors measured using the light field were 1.12 ± 0.72, 0.31 ± 0.19, 0.27 ± 0.12, and 0.15 ± 0.09 mm for breathing periods of 2, 4, 6, and 8 s, respectively. The tracking accuracy thus depended on the breathing period. We proposed a simple quality assurance (QA) process for the tracking accuracy of a gimbal mechanism system using a light field and a web camera. Our method can assess the tracking accuracy using a light field without irradiation and can clearly visualize distributions, as in film dosimetry. © 2016 The Authors.
The Proof of the ``Vortex Theory of Matter''
NASA Astrophysics Data System (ADS)
Moon, Russell
2009-11-01
According to the Vortex Theory, protons and electrons are three-dimensional holes connected by fourth-dimensional vortices. It was further theorized that when photons are absorbed then readmitted by atoms, the photon is absorbed into the proton, moves through the fourth-dimensional vortex, then reemerges back into three-dimensional space through the electron. To prove this hypothesis, an experiment was conducted using a hollow aluminum sphere containing a powerful permanent magnet suspended directly above a zinc plate. Ultraviolet light was then shined upon the zinc. The zinc emits electrons via the photoelectric effect that are attracted to the surface of the aluminum sphere. The sphere was removed from above the zinc plate and repositioned above a sensitive infrared digital camera in another room. The ball and camera were placed within a darkened box inside a Faraday cage. Light was shined upon the zinc plate and the picture taken by the camera was observed. When the light was turned on above the zinc plate in one room, the camera recorded increased light coming from the surface of the sphere within the other room; when the light was turned off, the intensity of the infrared light coming from the surface of the sphere was suddenly diminished. Five other tests were then performed to eliminate other possible explanations such as quantum-entangled electrons.
The Proof of the ``Vortex Theory of Matter''
NASA Astrophysics Data System (ADS)
Gridnev, Konstantin; Moon, Russell; Vasiliev, Victor
2009-11-01
According to the Vortex Theory, protons and electrons are three-dimensional holes connected by fourth-dimensional vortices. It was further theorized that when photons are absorbed then readmitted by atoms, the photon is absorbed into the proton, moves through the fourth-dimensional vortex, then reemerges back into three-dimensional space through the electron^2. To prove this hypothesis, an experiment was conducted using a hollow aluminum sphere containing a powerful permanent magnet suspended directly above a zinc plate. Ultraviolet light was then shined upon the zinc. The zinc emits electrons via the photoelectric effect that are attracted to the surface of the aluminum sphere. The sphere was removed from above the zinc plate and repositioned above a sensitive infrared digital camera in another room. The ball and camera were placed within a darkened box inside a Faraday cage. Light was shined upon the zinc plate and the picture taken by the camera was observed. When the light was turned on above the zinc plate in one room, the camera recorded increased light coming from the surface of the sphere within the other room; when the light was turned off, the intensity of the infrared light coming from the surface of the sphere was suddenly diminished. Five other tests were then performed to eliminate other possible explanations such as quantum-entangled electrons.
The Proof of the ``Vortex Theory of Matter''
NASA Astrophysics Data System (ADS)
Gridnev, Konstantin; Moon, Russell; Vasiliev, Victor
2009-10-01
According to the Vortex Theory, protons and electrons are three-dimensional holes connected by fourth-dimensional vortices. It was further theorized that when photons are absorbed then readmitted by atoms, the photon is absorbed into the proton, moves through the fourth-dimensional vortex, then reemerges back into three-dimensional space through the electron^2. To prove this hypothesis, an experiment was conducted using a hollow aluminum sphere containing a powerful permanent magnet suspended directly above a zinc plate. Ultraviolet light was then shined upon the zinc. The zinc emits electrons via the photoelectric effect that are attracted to the surface of the aluminum sphere. The sphere was removed from above the zinc plate and repositioned above a sensitive infrared digital camera in another room. The ball and camera were placed within a darkened box inside a Faraday cage. Light was shined upon the zinc plate and the picture taken by the camera was observed. When the light was turned on above the zinc plate in one room, the camera recorded increased light coming from the surface of the sphere within the other room; when the light was turned off, the intensity of the infrared light coming from the surface of the sphere was suddenly diminished. Five other tests were then performed to eliminate other possible explanations such as quantum-entangled electrons.
The Proof of the ``Vortex Theory of Matter''
NASA Astrophysics Data System (ADS)
Moon, Russell; Gridnev, Konstantin; Vasiliev, Victor
2010-02-01
According to the Vortex Theory, protons and electrons are three-dimensional holes connected by fourth-dimensional vortices. It was further theorized that when photons are absorbed then readmitted by atoms, the photon is absorbed into the proton, moves through the fourth-dimensional vortex, then reemerges back into three-dimensional space through the electron. To prove this hypothesis, an experiment was conducted using a hollow aluminum sphere containing a powerful permanent magnet suspended directly above a zinc plate. Ultraviolet light was then shined upon the zinc. The zinc emits electrons via the photoelectric effect that are attracted to the surface of the aluminum sphere. The sphere was removed from above the zinc plate and repositioned above a sensitive infrared digital camera in another room. The ball and camera were placed within a darkened box inside a Faraday cage. Light was shined upon the zinc plate and the picture taken by the camera was observed. When the light was turned on above the zinc plate in one room, the camera recorded increased light coming from the surface of the sphere within the other room; when the light was turned off, the intensity of the infrared light coming from the surface of the sphere was suddenly diminished. Five other tests were then performed to eliminate other possible explanations such as quantum-entangled electrons.
New continuous recording procedure of holographic information on transient phenomena
NASA Astrophysics Data System (ADS)
Nagayama, Kunihito; Nishihara, H. Keith; Murakami, Terutoshi
1992-09-01
A new method for continuous recording of holographic information, 'streak holography,' is proposed. This kind of record can be useful for velocity and acceleration measurements as well as for observing a moving object whose trajectory cannot be predicted in advance. A very high-speed camera system has been designed and constructed for streak holography. A ring-shaped, 100-mm-diameter film has been cut out from high-resolution sheet film and mounted on a thin duralumin disk, which is driven to rotate directly by an air-turbine spindle. The attainable streak velocity is 0.3 mm/µs. The direct film-drive mechanism makes it possible to use a relay lens system of extremely small f-number. The feasibility of the camera system has been demonstrated by observing several transient events, such as the forced oscillation of a wire and the free fall of small glass particles, using an argon-ion laser as the light source.
A View of Opportunity's Dance Moves
NASA Technical Reports Server (NTRS)
2004-01-01
This rear hazard-avoidance camera image taken by the Mars Exploration Rover Opportunity on the 37th martian day, or sol, of its mission (March 2, 2004) shows the tracks left by the rover during its latest 'dance,' or series of maneuvers, around the rock outcrop near its landing site. Note the view of the lander to the far left and the light-colored outcrop below the horizon. The rear solar panels, located above the rear hazard-avoidance cameras, are captured in the uppermost part of the image.
Since driving off the lander, Opportunity has traveled along the entire outcrop, trenched, and completed a U-turn to revisit scientifically rich spots. Two of these spots are the rock regions dubbed 'El Capitan' and 'Last Chance.' Scientists have used the instruments on the rover's arm to conclude that this area of Mars was once soaked in water for extended amounts of time, possibly providing an environment favorable for life.
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image-plane reference system is translated into coordinates on the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, the high-resolution long-distance tracking, and the automatic collection of biometric data such as a person's face clip for recognition purposes.
NASA Astrophysics Data System (ADS)
Ciurapiński, Wieslaw; Dulski, Rafal; Kastek, Mariusz; Szustakowski, Mieczyslaw; Bieszczad, Grzegorz; Życzkowski, Marek; Trzaskawka, Piotr; Piszczek, Marek
2009-09-01
The paper presents the concept of a multispectral protection system for perimeter protection of stationary and moving objects. The system consists of an active ground radar and thermal and visible cameras. The radar allows the system to locate potential intruders and to control the observation area of the system cameras. The multisensor construction of the system significantly improves the probability of intruder detection and reduces false alarms. The final decision of the system is made using image data. The data fusion method used in the system is presented. The system operates under the control of the FLIR Nexus system. Nexus offers complete technology and components to create network-based, high-end integrated systems for security and surveillance applications. Based on a unique "plug and play" architecture, the system provides flexible and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface, it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of the data fusion process, and obtain detailed information about detected intruders over a digital map. The system provides high-level applications and reduces operator workload with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification, and alarm triggering.
C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors
NASA Astrophysics Data System (ADS)
Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David
2018-02-01
After developing the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive-optics wavefront sensing, First Light Imaging moved to fast SWIR cameras with the development of the C-RED One and C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with subelectron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible by the use of an e-APD infrared focal-plane array, a genuinely disruptive technology in imaging. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on board using an FPGA. We present its performance and describe its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera, called C-RED 2, with unprecedented performance in terms of noise, dark current, and readout speed, based on the SNAKE SWIR detector from Sofradir. The characteristics and performance of C-RED 2 are described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.
Blood pulsation measurement using cameras operating in visible light: limitations.
Koprowski, Robert
2016-10-03
The paper presents an automatic method for the analysis and processing of images from a camera operating in visible light. This analysis applies to images containing the human facial area (or body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed method of image analysis and processing comprises three stages: (1) image pre-processing, providing image filtration and stabilization (object location tracking); (2) main image processing, providing segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis, consisting of filtration, FFT (fast Fourier transform) analysis, and pulse calculation. The presented algorithm and method for measuring the pulse rate have the following advantages: (1) the measurement is non-contact and non-invasive; (2) it can be carried out using almost any camera, including webcams; (3) the object can be tracked in the scene, which allows the heart rate to be measured while the patient is moving; (4) for a minimum of 40,000 pixels, the measurement error is less than ±2 beats per minute for p < 0.01 in sunlight, or slightly larger (±3 beats per minute) in artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
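The third stage (signal analysis) amounts to finding the dominant frequency of the skin-brightness signal within a physiological band. A minimal sketch, assuming the per-frame mean green-channel value over the segmented skin area has already been extracted; this is hypothetical Python/NumPy, and the band limits are illustrative.

```python
import numpy as np

def pulse_rate_bpm(green_means, fps):
    """Estimate heart rate from the mean skin brightness over time via an FFT peak."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                                   # remove the dc component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)             # plausible heart-rate band: 42-180 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```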
3-dimensional telepresence system for a robotic environment
Anderson, Matthew O.; McKay, Mark D.
2000-01-01
A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three dimensional viewing and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone therein corresponds to unique camera movement parameters such as speed of movement. Speed parameters include constant speed, or increasing or decreasing. Other parameters include pan, tilt, slide, raise or lowering of the cameras. Other user interface devices are provided to improve the three dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone and a remote actuator. The pair of visual display glasses are provided to facilitate three dimensional viewing, hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.
Spirit Captures Two Dust Devils On the Move
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site] Figure 1 Annotated At the Gusev site recently, skies have been very dusty, and on its 421st sol (March 10, 2005) NASA's Mars Exploration Rover Spirit spied two dust devils in action. This is an image from the rover's navigation camera. Views of the Gusev landing region from orbit show many dark streaks across the landscape -- tracks where dust devils have removed surface dust to show relatively darker soil below -- but this is the first time Spirit has photographed an active dust devil. Scientists are considering several causes of these small phenomena. Dust devils often occur when the Sun heats the surface of Mars. Warmed soil and rocks heat the layer of atmosphere closest to the surface, and the warm air rises in a whirling motion, stirring dust up from the surface like a miniature tornado. Another possibility is that a flow structure might develop over craters as wind speeds increase. As winds pick up, turbulence eddies and rotating columns of air form. As these columns grow in diameter they become taller and gain rotational speed. Eventually they become self-sustaining and the wind blows them down range. One sol before this image was taken, power output from Spirit's solar panels went up by about 50 percent when the amount of dust on the panels decreased. Was this a coincidence, or did a helpful dust devil pass over Spirit and lift off some of the dust? By comparing the separate images from the rover's different cameras, team members estimate that the dust devils moved about 500 meters (1,640 feet) in the 155 seconds between the navigation camera and hazard-avoidance camera frames; that equates to about 3 meters per second (7 miles per hour). The dust devils appear to be about 1,100 meters (almost three-quarters of a mile) from the rover.
NASA Astrophysics Data System (ADS)
Minamoto, Masahiko; Matsunaga, Katsuya
1999-05-01
Operator performance while using a remote controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet mounted display (HMD), and rotating stereo camera connected and slaved to the head orientation of a free moving stereo HMD. Results showed that the head- slaved system provided the best performance.
NASA Astrophysics Data System (ADS)
Camplani, M.; Malizia, A.; Gelfusa, M.; Barbato, F.; Antonelli, L.; Poggi, L. A.; Ciparisse, J. F.; Salgado, L.; Richetta, M.; Gaudio, P.
2016-01-01
In this paper, a preliminary shadowgraph-based analysis of dust particle re-suspension due to a loss of vacuum accident (LOVA) in ITER-like nuclear fusion reactors is presented. Dust particles are produced through different mechanisms in nuclear fusion devices, and one of the main issues is that they can be re-suspended by events such as a LOVA. Shadowgraphy is based on an expanded, collimated beam of light, emitted by a laser or a lamp, directed transversely to the flow field. In the STARDUST facility, the dust moves in the flow and causes variations of the refractive index that can be detected with a CCD camera. The STARDUST fast-camera setup makes it possible to detect and track dust particles moving in the vessel and thereby to obtain the velocity field of the mobilized dust. In particular, the acquired images are processed so that, in each frame, the moving dust particles are detected by applying a background subtraction technique based on the mixture-of-Gaussians algorithm. The resulting foreground masks are then filtered with morphological operations. Finally, a multi-object tracking algorithm is used to track the detected particles over the experiment. For each particle, a Kalman filter-based tracker is applied; the particle dynamics are described by taking position, velocity, and acceleration as state variables. The results demonstrate that it is possible to obtain the dust particles' velocity field during a LOVA by automatically processing the data obtained with the shadowgraph approach.
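The detection part of such a pipeline, mixture-of-Gaussians background subtraction followed by morphological filtering and connected-component extraction, can be sketched as follows. This is hypothetical Python/OpenCV with illustrative parameters; the per-particle Kalman tracker described above is omitted for brevity.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

def detect_particles(frame_gray, min_area=4):
    """Return centroids of moving dust particles in one frame.

    Foreground mask via mixture-of-Gaussians background subtraction, cleaned with a
    morphological opening; connected components give per-particle centroids for a tracker.
    """
    mask = subtractor.apply(frame_gray)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(c) for i, c in enumerate(centroids[1:], start=1)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```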
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV (azimuth). A seventh camera, also with a FOV of 92°, is installed normal to the plane of the other 6 cameras giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST
1993-12-06
S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Vision robot with rotational camera for searching ID tags
NASA Astrophysics Data System (ADS)
Kimura, Nobutaka; Moriya, Toshio
2008-02-01
We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed "a barcode reading robot" which autonomously moved in a warehouse. It located and read barcode ID tags using a camera and a barcode reader while moving. However, motion blurs caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce these blurs. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of barcoded boxes. We verified the effectiveness of our method in an experimental test.
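The appropriate pan rotation velocity follows from simple geometry: to keep a surface at distance d approximately stationary in the image while the robot translates at speed v parallel to it, the camera must pan at roughly v/d radians per second. A one-line sketch in hypothetical Python; the small-angle approximation is mine, and the exact derivation in the paper may differ.

```python
def pan_rate(translational_velocity_m_s, distance_to_boxes_m):
    """Pan angular velocity (rad/s) that approximately cancels translational motion blur
    for a surface at the given distance: omega ~= v / d (small-angle approximation)."""
    return translational_velocity_m_s / distance_to_boxes_m
```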
Teasing Apart Complex Motions using VideoPoint
NASA Astrophysics Data System (ADS)
Fischer, Mark
2002-10-01
Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane, will be discussed. Methods for extracting the desired object motion will be given, as well as suggestions for shooting more easily analyzable video clips.
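Extracting the object's motion in the laboratory frame amounts to undoing the camera's own translation and in-plane rotation at each frame. A minimal sketch in hypothetical Python/NumPy, assuming the camera position and rotation angle per frame have already been estimated from fixed background reference points visible in the clip.

```python
import numpy as np

def to_lab_frame(p_cam, cam_pos, cam_angle):
    """Convert a tracked point from the (moving, rotating) camera frame to the lab frame.

    p_cam     : (x, y) of the object measured in the camera's image-aligned coordinates
    cam_pos   : (x, y) of the camera in the lab frame for this frame
    cam_angle : in-plane rotation of the camera (radians) for this frame
    """
    c, s = np.cos(cam_angle), np.sin(cam_angle)
    R = np.array([[c, -s], [s, c]])
    return np.asarray(cam_pos) + R @ np.asarray(p_cam)
```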
FieldSAFE: Dataset for Obstacle Detection in Agriculture.
Kragh, Mikkel Fly; Christiansen, Peter; Laursen, Morten Stigaard; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik; Jørgensen, Rasmus Nyholm
2017-11-09
In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.
Algorithms for detection of objects in image sequences captured from an airborne imaging system
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak
1995-01-01
This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding the pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to the independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. The position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.
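The state-estimation step can be illustrated with a simplified linear constant-velocity Kalman filter. This is hypothetical Python/NumPy; the paper uses an extended Kalman filter with a more elaborate measurement model, so the filter below is a stand-in with illustrative noise parameters.

```python
import numpy as np

class ConstantVelocityKF:
    """Simplified stand-in for the extended Kalman filter: state (x, y, vx, vy) in world
    coordinates, with measured object positions (x, y) as observations."""

    def __init__(self, dt, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = q * np.eye(4)          # process noise (illustrative)
        self.R = r * np.eye(2)          # measurement noise (illustrative)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured position z = (x, y).
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x                   # estimated position and velocity
```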
Pettit runs a drill while looking through a camera mounted on the Nadir window in the U.S. Lab
2003-04-05
ISS006-E-44305 (5 April 2003) --- Astronaut Donald R. Pettit, Expedition Six NASA ISS science officer, runs a drill while looking through a camera mounted on the nadir window in the Destiny laboratory on the International Space Station (ISS). The device is called a barn door tracker. The drill turns the screw, which moves the camera and its spotting scope.
Red ball ranging optimization based on dual camera ranging method
NASA Astrophysics Data System (ADS)
Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung
2018-05-01
In this paper, the process by which a NAO robot positions and moves toward a target red ball using its camera system is analyzed and improved with a dual-camera ranging method. The single-camera ranging method adopted by the NAO robot was first studied and tested experimentally. Since the existing error of the current NAO robot is not a single variable, the experiments were divided into two parts to obtain more accurate single-camera ranging data: forward ranging and backward ranging. Two USB cameras were then used in our experiments; the Hough circle transform was used to identify the ball, and the HSV color space was used to identify the red color. Our results showed that the dual-camera ranging method reduced the variance of the ball-tracking error from 0.68 to 0.20.
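A minimal sketch of the dual-camera ranging step, assuming rectified, horizontally displaced cameras so the classical depth = f·B/disparity relation applies, with the Hough circle transform locating the ball in each view. This is hypothetical Python/OpenCV with illustrative parameters, not the authors' implementation.

```python
import cv2

def ball_center(gray):
    """Locate the (already colour-segmented) ball with the Hough circle transform."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=0)
    return None if circles is None else circles[0, 0, :2]   # (x, y) of strongest circle

def range_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Classical rectified two-camera ranging: depth = f * B / disparity."""
    disparity = float(x_left - x_right)
    return focal_px * baseline_m / disparity
```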
1. Mechanics Shop. NE corner. Camera pointed SW. This building ...
1. Mechanics Shop. NE corner. Camera pointed SW. This building was the original Paddock barn when the track opened in 1933 and was later moved to this site south of the Paddock. See the historic photo WA-201-4-8. (July 1993) - Longacres, Mechanic's Shop, 1621 Southwest Sixteenth Street, Renton, King County, WA
Dual beam optical interferometer
NASA Technical Reports Server (NTRS)
Gutierrez, Roman C. (Inventor)
2003-01-01
A dual beam interferometer device is disclosed that enables moving an optics module in a direction, which changes the path lengths of two beams of light. The two beams reflect off a surface of an object and generate different speckle patterns detected by an element, such as a camera. The camera detects a characteristic of the surface.
NASA Astrophysics Data System (ADS)
Fan, Shuzhen; Qi, Feng; Notake, Takashi; Nawata, Kouji; Matsukawa, Takeshi; Takida, Yuma; Minamide, Hiroaki
2014-03-01
Real-time terahertz (THz) wave imaging has wide applications in areas such as security, industry, biology, medicine, pharmacy, and arts. In this letter, we report on real-time room-temperature THz imaging by nonlinear optical frequency up-conversion in organic 4-dimethylamino-N'-methyl-4'-stilbazolium tosylate crystal. The active projection-imaging system consisted of (1) THz wave generation, (2) THz-near-infrared hybrid optics, (3) THz wave up-conversion, and (4) an InGaAs camera working at 60 frames per second. The pumping laser system consisted of two optical parametric oscillators pumped by a nano-second frequency-doubled Nd:YAG laser. THz-wave images of handmade samples at 19.3 THz were taken, and videos of a sample moving and a ruler stuck with a black polyethylene film moving were supplied online to show real-time ability. Thanks to the high speed and high responsivity of this technology, real-time THz imaging with a higher signal-to-noise ratio than a commercially available THz micro-bolometer camera was proven to be feasible. By changing the phase-matching condition, i.e., by changing the wavelength of the pumping laser, we suggest THz imaging with a narrow THz frequency band of interest in a wide range from approximately 2 to 30 THz is possible.
Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video
NASA Astrophysics Data System (ADS)
Liu, Yang; Peng, Xiaodong; Zhou, Wugen; Liu, Bo; Gerndt, Andreas
2018-06-01
In this paper, we propose a template-based 3D surface reconstruction system for non-rigid deformable objects from a monocular video sequence. First, we generate a semi-dense template of the target object with a structure-from-motion method applied to a video subsequence. This subsequence can be captured by a rigidly moving camera observing the static target object, or by a static camera observing the rigidly moving target object. Then, with the reference template mesh as input and following the framework of classical template-based methods, we solve an energy minimization problem to obtain the correspondence between the template and every frame, yielding a time-varying mesh that represents the deformation of the object. The energy combines a photometric cost, temporal and spatial smoothness costs, and an as-rigid-as-possible cost that allows elastic deformation. In this paper, an easy and controllable solution to generate the semi-dense template for complex objects is presented. In addition, we use an efficient iterative Schur-based linear solver for the energy minimization problem. The experimental evaluation presents qualitative reconstruction results for deforming objects on real sequences. Compared with results obtained from other templates, the reconstructions based on our template are more accurate and detailed in certain regions. The experiments also show that our linear solver is more efficient than a traditional conjugate-gradient-based solver.
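In a schematic form (the symbols and weights below are illustrative, not the paper's exact notation), the per-frame energy over the template vertices combines the four costs named above:

```latex
E(\mathcal{V}_t) = E_{\mathrm{photo}}(\mathcal{V}_t)
  + \lambda_{s}\, E_{\mathrm{spatial}}(\mathcal{V}_t)
  + \lambda_{t}\, E_{\mathrm{temporal}}(\mathcal{V}_t, \mathcal{V}_{t-1})
  + \lambda_{r}\, E_{\mathrm{arap}}(\mathcal{V}_t, \mathcal{V}_0)
```

Here \(\mathcal{V}_t\) denotes the deformed mesh at frame \(t\), \(\mathcal{V}_0\) the reference template, and the \(\lambda\) weights balance the smoothness and as-rigid-as-possible terms against the photometric term.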
Fuzzy logic control for camera tracking system
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant
1992-01-01
A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows its movements to be automated and programmed. An IR eye-tracking system has been integrated with this control interface to implement an intelligent, autonomous, eye-gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop is moved, based on data from the eye-tracking interface, to keep the region around the user's gaze point at the center of the video feedback monitor. This setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
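The gaze-based repositioning logic can be sketched as a dead-band proportional controller. This is hypothetical Python; the dead-band size and gain are illustrative, and the real system issues Aesop motion commands rather than returning velocities.

```python
def recenter_command(gaze_px, frame_size, deadband=0.15, gain=0.5):
    """Pan/tilt velocity command that nudges the camera so the gaze point drifts back
    toward the image centre; no command is issued inside the central dead-band."""
    w, h = frame_size
    ex = (gaze_px[0] - w / 2.0) / (w / 2.0)   # normalised horizontal offset, -1..1
    ey = (gaze_px[1] - h / 2.0) / (h / 2.0)   # normalised vertical offset, -1..1
    pan = gain * ex if abs(ex) > deadband else 0.0
    tilt = gain * ey if abs(ey) > deadband else 0.0
    return pan, tilt
```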
Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras
NASA Astrophysics Data System (ADS)
Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.
2017-02-01
Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to the position where the shuttlecock will fall and hits it back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; they act as background noise and make it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
Real-time moving objects detection and tracking from airborne infrared camera
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2017-10-01
Detecting and tracking moving objects in real time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing, and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene, and pursuing objects of interest. To fully exploit this potential, versatile solutions are needed; however, most of those in the literature work only under specific conditions regarding the scenario, the characteristics of the moving objects, or the aircraft movements. To overcome these limitations, we propose a novel approach based on a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: a detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and self-deletion of the targeted objects; a registration stage, in which the positions of the detected objects are coherently reported in a common reference frame by exploiting the INS data; and a tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed for use in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences recreating different environments and different aircraft movements. Promising results have been obtained, both in terms of detection and false-alarm rates and in terms of accuracy in the estimation of the objects' position and velocity. In addition, for each frame, the detection and tracking map is generated by the algorithm before the acquisition of the subsequent frame, proving its capability to work in real time.
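The detection stage's local statistic can be illustrated with a simple local mean/standard-deviation test. This is hypothetical Python/NumPy/SciPy; the window size and threshold k are illustrative, and the actual statistic used in the paper may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detection_map(frame, win=15, k=3.0):
    """Coarse detection map: flag pixels that deviate from the local mean by more than
    k local standard deviations (fast to compute and reasonably robust to noise)."""
    f = frame.astype(float)
    mean = uniform_filter(f, win)
    mean_sq = uniform_filter(f * f, win)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 1e-6))
    return np.abs(f - mean) > k * std
```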
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
NASA Astrophysics Data System (ADS)
Mazurek, Przemysław
2013-09-01
Matchmoving (match moving) is the process of estimating camera movements so that acquired video footage can later be integrated with computer graphics. The estimation of movements is possible using pattern recognition and 2D and 3D tracking algorithms. The main problem for the workflow is the partial occlusion of markers by the actor, which makes manual rotoscoping necessary to fix the chroma-keyed footage. In this paper, the partial-occlusion problem is solved using newly devised, selectively active electronic markers. A sensor network with multiple infrared links detects the occlusion state (none, partial, full) and switches the LED-based markers accordingly.
The Orbital Maneuvering Vehicle Training Facility visual system concept
NASA Technical Reports Server (NTRS)
Williams, Keith
1989-01-01
The purpose of the Orbital Maneuvering Vehicle (OMV) Training Facility (OTF) is to provide effective training for OMV pilots. A critical part of the training environment is the Visual System, which will simulate the video scenes produced by the OMV Closed-Circuit Television (CCTV) system. The simulation will include camera models, dynamic target models, moving appendages, and scene degradation due to the compression/decompression of the video signal. Video system malfunctions will also be provided to ensure that the pilot is ready to meet all challenges the real world might present. One possible visual system configuration for the training facility that will meet existing requirements is described.
Real-time object detection, tracking and occlusion reasoning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divakaran, Ajay; Yu, Qian; Tamrakar, Amir
A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
Systems and methods for estimating the structure and motion of an object
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dani, Ashwin P; Dixon, Warren
2015-11-03
In one embodiment, the structure and motion of a stationary object are determined using two images and a linear velocity and linear acceleration of a camera. In another embodiment, the structure and motion of a stationary or moving object are determined using an image and linear and angular velocities of a camera.
Mars Odyssey from Two Distances in One Image
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site] Figure 1: Why There are Two Images of Odyssey NASA's Mars Odyssey spacecraft appears twice in the same frame in this image from the Mars Orbiter Camera aboard NASA's Mars Global Surveyor. The camera's successful imaging of Odyssey and of the European Space Agency's Mars Express in April 2005 produced the first pictures of any spacecraft orbiting Mars taken by another spacecraft orbiting Mars. Mars Global Surveyor and Mars Odyssey are both in nearly circular, near-polar orbits. Odyssey is in an orbit slightly higher than that of Global Surveyor in order to preclude the possibility of a collision. However, the two spacecraft occasionally come as close together as 15 kilometers (9 miles). The images were obtained by the Mars Global Surveyor operations teams at Lockheed Martin Space Systems, Denver; JPL and Malin Space Science Systems. The two views of Mars Odyssey in this image were acquired a little under 7.5 seconds apart as Odyssey receded from a close flyby of Mars Global Surveyor. The geometry of the flyby (see Figure 1) and the camera's way of acquiring an image line-by-line resulted in the two views of Odyssey in the same frame. The first view (right) was taken when Odyssey was about 90 kilometers (56 miles) from Global Surveyor and moving more rapidly than Global Surveyor was rotating, as seen from Global Surveyor. A few seconds later, Odyssey was farther away -- about 135 kilometers (84 miles) -- and appeared to be moving more slowly. In this second view of Odyssey (left), the Mars Orbiter Camera's field-of-view overtook Odyssey. The Mars Orbiter Camera can resolve features on the surface of Mars as small as a few meters or yards across from Mars Global Surveyor's orbital altitude of 350 to 405 kilometers (217 to 252 miles). From a distance of 100 kilometers (62 miles), the camera would be able to resolve features substantially smaller than 1 meter or yard across. Mars Odyssey was launched on April 7, 2001, and reached Mars on Oct. 24, 2001. Mars Global Surveyor left Earth on Nov. 7, 1996, and arrived in Mars orbit on Sept. 12, 1997. Both orbiters are in an extended mission phase, both have relayed data from the Mars Exploration Rovers, and both are continuing to return exciting new results from Mars. JPL, a division of the California Institute of Technology, Pasadena, manages both missions for NASA's Science Mission Directorate, Washington, D.C.
Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.
Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond
2018-04-01
We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with other two calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.
Optimizing a neural network for detection of moving vehicles in video
NASA Astrophysics Data System (ADS)
Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri
2017-10-01
In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
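A minimal sketch of coupling a per-frame convolutional feature extractor with an LSTM over a short clip is shown below, in the spirit of the static-detector plus recurrent multi-frame analysis described above. The architecture, layer sizes and class labels are illustrative assumptions in PyTorch, not the network trained in the paper.

```python
import torch
import torch.nn as nn

class CnnLstmDetector(nn.Module):
    """Toy per-frame CNN features followed by an LSTM over the frame sequence."""
    def __init__(self, feat_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # e.g. moving vehicle vs. background

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(feats)
        return self.head(seq[:, -1])               # classify using the last time step

# usage sketch on random data: 2 clips of 8 frames each
model = CnnLstmDetector()
scores = model(torch.randn(2, 8, 3, 64, 64))
```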
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
First Carlsberg Meridian Telescope (CMT) CCD Catalogue.
NASA Astrophysics Data System (ADS)
Bélizon, F.; Muiños, J. L.; Vallejo, M.; Evans, D. W.; Irwin, M.; Helmer, L.
2003-11-01
The Carlsberg Meridian Telescope (CMT) is a telescope owned by Copenhagen University Observatory (CUO). It was installed in the Spanish observatory of El Roque de los Muchachos on the island of La Palma (Canary Islands) in 1984. It is operated jointly by the CUO, the Institute of Astronomy, Cambridge (IoA) and the Real Instituto y Observatorio de la Armada of Spain (ROA) in the framework of an international agreement. From 1984 to 1998 the instrument was equipped with a moving-slit micrometer, and its observations yielded a series of 11 catalogues, the `Carlsberg Meridian Catalogue La Palma (CMC No. 1-11)'. Since 1997, the telescope has been controlled remotely via the Internet, with the three institutions sharing this remote control in periods of approximately three months. In 1998, the CMT was upgraded by installing a commercial Spectrasource CCD camera as its sensor, to test the possibility of observing meridian transits in drift-scan mode. Once this was shown to be possible, a second CCD camera with better performance, built in the CUO workshop, was installed in 1999. The Spectrasource camera was loaned to ROA by CUO and is now installed in the San Fernando Automatic Meridian Circle in San Juan (CMASF). In 1999, observations began for a sky survey from -3deg to +30deg in declination. In July 2002, a first release of the survey was published, with the positions of the observed stars in the band between -3deg and +3deg in declination. This oral communication will present this first release of the survey.
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
2006-02-01
Recently, the number of security monitoring cameras has been increasing rapidly. However, it is normally difficult to know when and where we are monitored by these cameras and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that considers privacy protection. We state requirements for monitoring systems in this framework and propose a possible implementation that satisfies them. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image containing the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer includes only the unrecognizable or invisible objects. We also introduce a so-called "special viewer" in order to decrypt and display the original objects. This special viewer can be used by a limited set of users when necessary, for example for crime investigation. The special viewer allows the user to choose which objects to decode and display. Moreover, in the proposed system, real-time processing can be performed, since no future frame is needed to generate a bitstream.
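The mask-then-encrypt idea can be illustrated as follows. This sketch simply blanks the detected object region and encrypts the original pixels with a symmetric key (using the cryptography package's Fernet as a stand-in); the paper instead embeds the encrypted data as a watermark in the JPEG bitstream, which is not reproduced here, and the function names are hypothetical.

```python
import numpy as np
from cryptography.fernet import Fernet

def protect_region(frame, box, key):
    """Blank out a detected object and return an encrypted copy of its pixels,
    which a 'special viewer' holding the key could later restore."""
    x, y, w, h = box
    patch = frame[y:y + h, x:x + w].copy()
    frame[y:y + h, x:x + w] = 0                      # make the object "invisible"
    token = Fernet(key).encrypt(patch.tobytes())
    meta = {"box": box, "shape": patch.shape, "dtype": str(patch.dtype)}
    return frame, token, meta

def restore_region(frame, token, meta, key):
    """Decrypt and re-insert the original object (the 'special viewer' side)."""
    x, y, w, h = meta["box"]
    patch = np.frombuffer(Fernet(key).decrypt(token),
                          dtype=meta["dtype"]).reshape(meta["shape"])
    frame[y:y + h, x:x + w] = patch
    return frame

key = Fernet.generate_key()                          # held only by authorized users
```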
Clancy, Neil T.; Stoyanov, Danail; James, David R. C.; Di Marco, Aimee; Sauvage, Vincent; Clark, James; Yang, Guang-Zhong; Elson, Daniel S.
2012-01-01
Sequential multispectral imaging is an acquisition technique that involves collecting images of a target at different wavelengths to compile a spectrum for each pixel. In surgical applications it suffers from low illumination levels and motion artefacts. A three-channel rigid endoscope system has been developed that allows simultaneous recording of stereoscopic and multispectral images. Salient features on the tissue surface may be tracked during the acquisition in the stereo cameras and, using multiple-camera triangulation techniques, this information is used to align the multispectral images automatically even though the tissue or camera is moving. This paper describes a detailed validation of the set-up in a controlled experiment before presenting the first in vivo use of the device in a porcine minimally invasive surgical procedure. Multispectral images of the large bowel were acquired and used to extract the relative concentration of haemoglobin in the tissue despite motion due to breathing during the acquisition. Using the stereoscopic information it was also possible to overlay the multispectral information on the reconstructed 3D surface. This experiment demonstrates the ability of this system to measure blood perfusion changes in the tissue during surgery and its potential use as a platform for other sequential imaging modalities. PMID:23082296
Whose point-of-view is it anyway?
NASA Astrophysics Data System (ADS)
Garvey, Gregory P.
2011-03-01
Shared virtual worlds such as Second Life privilege a single point-of-view, namely that of the user. When logged into Second Life a user sees the virtual world from a default viewpoint, which is from slightly above and behind the user's avatar (the user's alter ego 'in-world'). This point-of-view is as if the user were viewing his or her avatar using a camera floating a few feet behind it. In fact it is possible to set the view as if you were seeing the world through the eyes of your avatar, or you can even move the camera completely independently of your avatar. A change in point-of-view means more than just a different camera position. The practice of using multiple avatars requires a transformation of identity and personality. When a user 'enacts' the identity of a particular avatar, their 'real' personality is masked by the assumed personality. The technology of virtual worlds permits a change of point-of-view and also facilitates a change in identity. Does this cause any psychological distress? Or is the ability to be someone else and see a world (a game, a virtual world) through a different set of eyes somehow liberating and even beneficial?
Multicolor pyrometer for materials processing in space
NASA Technical Reports Server (NTRS)
Frish, M. B.; Frank, J.; Baker, J. E.; Foutter, R. R.; Beerman, H.; Allen, M. G.
1990-01-01
This report documents the work performed by Physical Sciences Inc. (PSI), under contract to NASA JPL, during a 2.5-year SBIR Phase 2 Program. The program goals were to design, construct, and program a prototype passive imaging pyrometer capable of measuring, as accurately as possible, and controlling the temperature distribution across the surface of a moving object suspended in space. These goals were achieved and the instrument was delivered to JPL in November 1989. The pyrometer utilizes an optical system which operates at short wavelengths compared to the peak of the black-body spectrum for the temperature range of interest, thus minimizing errors associated with a lack of knowledge about the heated sample's emissivity. To cover temperatures from 900 to 2500 K, six wavelengths are available. The preferred wavelength for measurement of a particular temperature decreases as the temperature increases. Images at all six wavelengths are projected onto a single CCD camera concurrently. The camera and optical system have been calibrated to relate the measured intensity at each pixel to the temperature of the heated object. The output of the camera is digitized by a frame grabber installed in a personal computer and analyzed automatically to yield temperature information. The data can be used in a feedback loop to alter the status of computer-activated switches and thereby control a heating system.
Realization of the ergonomics design and automatic control of the fundus cameras
NASA Astrophysics Data System (ADS)
Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye
2012-12-01
The principles of ergonomic design in fundus cameras call for improving patient comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, a lateral movement of the binocular with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects the patient's fundus images automatically whether or not their eyes are ametropic. Finally, a moving visual target is developed for expanding the field of view of the fundus images.
4D Light Field Imaging System Using Programmable Aperture
NASA Technical Reports Server (NTRS)
Bae, Youngsam
2012-01-01
Complete depth information can be extracted from analyzing all angles of light rays emanating from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal panel [LC (liquid crystal) panel, similar to ones commonly used in the display industry] to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture. A digital micromirror device (DMD) will replace the liquid crystal. This will be qualified for harsh environments for the 4D light field imaging. This will enable an imager to record near-complete stereo information. The approach to building a proof-of-concept is to use existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified. The lens system will be arranged so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded-aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any light transmission loss (which would be expected from an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuation than an LC panel can. Robots need near-complete stereo images for their autonomous navigation, manipulation, and depth approximation. The imaging system can provide visual feedback
Camera Systems Rapidly Scan Large Structures
NASA Technical Reports Server (NTRS)
2013-01-01
Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.
Intelligent person identification system using stereo camera-based height and stride estimation
NASA Astrophysics Data System (ADS)
Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo
2005-05-01
In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by using a threshold value in the YCbCr color model. By correlating the segmented face area with the right input image, the location coordinates of the target face can be acquired, and these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. Using this calculated vertical distance and the angles of the pan and tilt, the target's real position in world space can be acquired, and from it the target's height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
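The geometry behind the distance, height and stride computation can be illustrated with a simplified pinhole plus pan/tilt model. The sketch below assumes a known focal length, baseline and camera height; all numeric values are placeholders, not the parameters used in the paper.

```python
import numpy as np

def stereo_distance(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity under the pinhole model: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def world_position(distance_m, pan_rad, tilt_rad, camera_height_m):
    """Convert range plus pan/tilt angles into ground-plane coordinates and
    height above the floor (simplified geometry, camera at the origin)."""
    x = distance_m * np.cos(tilt_rad) * np.sin(pan_rad)
    y = distance_m * np.cos(tilt_rad) * np.cos(pan_rad)
    z = camera_height_m + distance_m * np.sin(tilt_rad)
    return np.array([x, y, z])

# e.g. head and feet measurements give height; successive foot positions give stride
head = world_position(stereo_distance(22.0, 800.0, 0.12), 0.10, 0.05, 2.5)
feet = world_position(stereo_distance(25.0, 800.0, 0.12), 0.10, -0.35, 2.5)
height_estimate = head[2] - feet[2]
```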
NASA Astrophysics Data System (ADS)
Shahbazi, M.; Sattari, M.; Homayouni, S.; Saadatseresht, M.
2012-07-01
Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for detection and 3D localization of various objects from a moving platform. On the other hand, automatic traffic sign recognition from an equipped mobile platform has recently been a challenging issue for both intelligent transportation and municipal database collection. However, there are several inevitable problems inherent to all recognition methods that rely completely on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Distinct from others, the developed MMS comprises one range camera based on Photonic Mixer Device (PMD) technology and one standard 2D digital camera. The system benefits from algorithms that detect, recognize and localize traffic signs by fusing the shape, color and object information from both range and intensity images. For the calibration stage, a self-calibration method based on integrated bundle adjustment via a joint setup with the digital camera is applied for PMD camera calibration. As a result, improvements of 83% in the RMS of the range error and 72% in the RMS of the coordinate residuals for the PMD camera, over those achieved with basic calibration, are realized in independent accuracy assessments. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filter (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system, along with the proposed techniques, leads to 90% true-positive recognition and an average 3D positioning accuracy of 12 centimetres.
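GPS/INS integration with an EKF follows the usual predict-update cycle. The minimal constant-velocity filter below is a generic illustration of that cycle, with an assumed state, noise levels and measurement model; it is not the filter actually tuned for this MMS.

```python
import numpy as np

class SimpleEKF:
    """Minimal constant-velocity filter fusing INS-predicted motion with GPS fixes."""
    def __init__(self, x0, P0, q=0.5, r=2.0):
        self.x, self.P = np.asarray(x0, float), np.asarray(P0, float)
        self.Q = q * np.eye(4)                     # process noise
        self.R = r * np.eye(2)                     # GPS measurement noise
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # GPS observes position only

    def predict(self, dt, accel):
        F = np.eye(4); F[0, 2] = F[1, 3] = dt      # state: [x, y, vx, vy]
        self.x = F @ self.x + np.array([0, 0, accel[0] * dt, accel[1] * dt])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, gps_xy):
        y = np.asarray(gps_xy, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

ekf = SimpleEKF(x0=[0, 0, 0, 0], P0=np.eye(4))
ekf.predict(dt=0.1, accel=(0.2, 0.0))              # INS-derived acceleration
ekf.update(gps_xy=(0.05, 0.01))                    # GPS fix
```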
Determination of feature generation methods for PTZ camera object tracking
NASA Astrophysics Data System (ADS)
Doyle, Daniel D.; Black, Jonathan T.
2012-06-01
Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase the learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images in which the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
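Two of the compared techniques, MoG background subtraction and Farneback dense optical flow, are available directly in OpenCV. The snippet below shows a possible way to run them side by side on a video; the file name and parameter values are placeholders, not those used in the paper's tests.

```python
import cv2

cap = cv2.VideoCapture("ptz_sequence.mp4")          # hypothetical test video
mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Background-subtraction candidates (most reliable while the camera is static)
    fg_mask = mog.apply(frame)

    # Dense optical flow between consecutive frames (Farneback)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = cv2.magnitude(flow[..., 0], flow[..., 1])  # per-pixel motion magnitude
    prev_gray = gray
```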
Describing a Robot's Workspace Using a Sequence of Views from a Moving Camera.
Hong, T H; Shneier, M O
1985-06-01
This correspondence describes a method of building and maintaining a spatial representation for the workspace of a robot, using a sensor that moves about in the world. From the known camera position at which an image is obtained, and two-dimensional silhouettes of the image, a series of cones is projected to describe the possible positions of the objects in the space. When an object is seen from several viewpoints, the intersections of the cones constrain the position and size of the object. After several views have been processed, the representation of the object begins to resemble its true shape. At all times, the spatial representation contains the best guess at the true situation in the world with uncertainties in position and shape explicitly represented. An octree is used as the data structure for the representation. It not only provides a relatively compact representation, but also allows fast access to information and enables large parts of the workspace to be ignored. The purpose of constructing this representation is not so much to recognize objects as to describe the volumes in the workspace that are occupied and those that are empty. This enables trajectory planning to be carried out, and also provides a means of spatially indexing objects without needing to represent the objects at an extremely fine resolution. The spatial representation is one part of a more complex representation of the workspace used by the sensory system of a robot manipulator in understanding its environment.
Spatial imaging of UV emission from Jupiter and Saturn
NASA Technical Reports Server (NTRS)
Clarke, J. T.; Moos, H. W.
1981-01-01
Spatial imaging with the IUE is accomplished both by moving one of the apertures in a series of exposures and within the large aperture in a single exposure. The image of the field of view subtended by the large aperture is focussed directly onto the detector camera face at each wavelength; since the spatial resolution of the instrument is 5 to 6 arc sec and the aperture extends 23.0 by 10.3 arc sec, imaging both parallel and perpendicular to dispersion is possible in a single exposure. The correction for the sensitivity variation along the slit at 1216 A is obtained from exposures of diffuse geocoronal H Ly alpha emission. The relative size of the aperture superimposed on the apparent discs of Jupiter and Saturn in typical observation is illustrated. By moving the planet image 10 to 20 arc sec along the major axis of the aperture (which is constrained to point roughly north-south) maps of the discs of these planets are obtained with 6 arc sec spatial resolution.
Positron emission particle tracking using a modular positron camera
NASA Astrophysics Data System (ADS)
Parker, D. J.; Leadbeater, T. W.; Fan, X.; Hausard, M. N.; Ingram, A.; Yang, Z.
2009-06-01
The technique of positron emission particle tracking (PEPT), developed at Birmingham in the early 1990s, enables a radioactively labelled tracer particle to be accurately tracked as it moves between the detectors of a "positron camera". In 1999 the original Birmingham positron camera, which consisted of a pair of MWPCs, was replaced by a system comprising two NaI(Tl) gamma camera heads operating in coincidence. This system has been successfully used for PEPT studies of a wide range of granular and fluid flow processes. More recently a modular positron camera has been developed using a number of the bismuth germanate (BGO) block detectors from standard PET scanners (CTI ECAT 930 and 950 series). This camera has flexible geometry, is transportable, and is capable of delivering high data rates. This paper presents simple models of its performance, and initial experience of its use in a range of geometries and applications.
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
Velocity and Structure Estimation of a Moving Object Using a Moving Monocular Camera
2006-01-01
map the Euclidean position of static landmarks or visual features in the environment. Recent applications of this technique include aerial... From Motion in a Piecewise Planar Environment," International Journal of Pattern Recognition and Artificial Intelligence, Vol. 2, No. 3, pp. 485-508, 1988. [9] J. M. Ferryman, S. J. Maybank, and A. D. Worrall, "Visual Surveillance for Moving Vehicles," Intl. Journal of Computer Vision, Vol. 37, No
Concave Surround Optics for Rapid Multi-View Imaging
2006-11-01
thus is amenable to capturing dynamic events, avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high... hard to assemble and calibrate. In this paper we present an optical system capable of rapidly moving the viewpoint around a scene. Our system... flexibility, large camera arrays are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically
Enhancing physics demos using iPhone slow motion
NASA Astrophysics Data System (ADS)
Lincoln, James
2017-12-01
Slow motion video enhances our ability to perceive and experience the physical world. This can help students and teachers especially in cases of fast moving objects or detailed events that happen too quickly for the eye to follow. As often as possible, demonstrations should be performed by the students themselves and luckily many of them will already have this technology in their pockets. The "S" series of iPhone has the slow motion video feature standard, which also includes simultaneous sound recording (somewhat unusual among slow motion cameras). In this article I share some of my experiences using this feature and provide advice on how to successfully use this technology in the classroom.
Visual environment recognition for robot path planning using template matched filters
NASA Astrophysics Data System (ADS)
Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto
2017-08-01
A visual approach to environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path among multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of the accuracy of environment recognition and the efficiency of path planning computation.
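The potential-field update that such an algorithm optimizes can be sketched as a simple attractive/repulsive gradient step. In the sketch below the gains are fixed by hand for illustration, whereas the pseudo-bacterial algorithm in the paper evolves them with evolutionary computation.

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.1):
    """One gradient step on an attractive/repulsive potential field."""
    force = k_att * (goal - pos)                         # attractive term toward goal
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < d0:                                       # repulsion only inside radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * (pos - obs)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 5.2])]
for _ in range(200):                                     # robot walks around the obstacle
    pos = potential_step(pos, goal, obstacles)
```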
Automatic Recognition Of Moving Objects And Its Application To A Robot For Picking Asparagus
NASA Astrophysics Data System (ADS)
Baylou, P.; Amor, B. El Hadj; Bousseau, G.
1983-10-01
After a brief description of the robot for picking white asparagus, a statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. The localisation was done stereometrically with the help of two cameras. As the robot carrying the vision-localisation system moves, the images are altered and the decision criteria modified. A study of the images of mobile objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon was performed in order to determine the modifications concerning object shapes, thresholding levels and decision parameters as a function of the robot speed.
Robust human detection, tracking, and recognition in crowded urban areas
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2014-06-01
In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach to human detection, the color features are obtained by taking the differences of the R, G, B spectra and converting R, G, B to HSV (Hue, Saturation, Value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking process includes: 1) color- and intensity-feature-matched track candidate selection; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection to reduce the probability of false tracking; and 4) forward position prediction based on previous moving speed and direction, to continue tracking even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many tracked targets. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different angles, and to continue tracking the same person from the second camera even if the person moves out of the field of view (FOV) of the first camera ('tracking relay'). Finally, the multiple cameras at different view poses have been geo-rectified to a nadir view plane and geo-registered with Google Earth (or another GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked humans, for pin-point targeting and for a top view of total human motion activity over a large area. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans. Our algorithms can simultaneously track more than 100 human targets with an average tracking period (time length) longer than that of the current state of the art.
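The color-feature plus morphological-filtering detection step can be prototyped in a few lines of OpenCV. The HSV thresholds, kernel size and minimum area below are arbitrary illustrative values, not the tuned parameters of the described system.

```python
import cv2
import numpy as np

def detect_candidates(frame_bgr, lo=(0, 60, 60), hi=(179, 255, 255), min_area=50):
    """Colour-based candidate detection followed by morphological clean-up."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # keep bounding boxes of components large enough to be a person candidate
    return [stats[i, :4] for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```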
Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping
NASA Astrophysics Data System (ADS)
Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.
2016-06-01
High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches may be needed, either because higher resolution is required or the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole, pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras have high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed by a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos, with a time lapse of 0.5 or 1 second, and adjusting the speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure from motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example for soil erosion. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which results, together with the image collection, in a very fast field work. If an improved accuracy is needed, since image resolution is 1/4 cm, it can be achieved using a total station for the control point survey, although the field work time increases.
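The quoted ground sampling distance follows directly from the nadir pinhole relation GSD = H·p/f. A small helper makes the arithmetic explicit; the focal length and pixel pitch below are plausible action-camera values assumed for illustration, not the exact GoPro specification.

```python
def ground_sampling_distance(height_m, focal_mm, pixel_pitch_um):
    """Nadir pinhole approximation: GSD = H * p / f."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

# e.g. camera ~4 m above ground, short focal length, small sensor pixels
gsd_m = ground_sampling_distance(height_m=4.0, focal_mm=3.0, pixel_pitch_um=1.55)
print(f"GSD = {gsd_m * 1000:.1f} mm/pixel")   # roughly 2 mm, consistent with the text
```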
a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.
Dexter, F
2000-10-01
We examined how to program an operating room (OR) information system to assist the OR manager in deciding whether to move the last case of the day in one OR to another OR that is empty, to decrease overtime labor costs. We first developed a statistical strategy to predict whether moving the case would decrease overtime labor costs for first-shift nurses and anesthesia providers. The strategy was based on using historical case duration data stored in a surgical services information system. Second, we estimated the incremental overtime labor costs achieved if our strategy was used for moving cases versus movement of cases by an OR manager who knew in advance exactly how long each case would last. We found that if our strategy was used to decide whether to move cases, then, depending on parameter values, only 2.0 to 4.3 more minutes of overtime would be required per case than if the OR manager had perfect retrospective knowledge of case durations. The use of other information technologies to assist in the decision of whether to move a case, such as real-time patient tracking information systems, closed-circuit cameras, or graphical airport-style displays, can, on average, reduce overtime by no more than 2 to 4 minutes per case that can be moved.
Dynamics of Meddies Interaction With Submarine Mountains
NASA Astrophysics Data System (ADS)
Cenedese, A.; Espa, S.; Sciarra, R.; Cicerani, S.
The dynamics of MEDDIES (Mediterranean Eddies) impinging on submarine mountains has been experimentally analyzed in both f-plane and β-plane conditions in order to validate in situ observations of the geophysical phenomenon (Richardson P.L., Bower A.S. & Zenk W., 2000). Experiments were performed using a rotating tank equipped with a co-rotating video camera, which allows flow visualizations to be taken. The tank has a square section (L = 88 cm) and is filled with pure water (T ≈ 18.0 °C). Cyclonic vortices are generated by placing ice cubes on the upper surface of the tank (Cenedese C., 2000), and the mountain is simulated using cylinders with differently shaped sections. We analyzed two impact typologies: a vortex advected by a uniform background flow, in which the experiment is performed by moving an obstacle against a motionless vortex in an f-plane framework, with a video camera fixed over the obstacle and moving with it; and a self-propelled vortex, in which the beta effect induced by a sloping bottom allows the vortex to move by itself and impinge on a fixed obstacle. Our aim is to investigate the possible scenarios corresponding to frontal and glancing collision events and the influence of impact and geometrical parameters (obstacle size, D, and shape; vortex size, R; distance between the center of the vortex and the horizontal axis of the obstacle) leading to vortex destruction, vortex bifurcation or changes in vortex structure. Lagrangian trajectories of individual tracers (styrene particles) released on the fluid surface have been reconstructed in the tank reference frame using the PTV technique (Cenedese A., Querzoli G., 2000). These particles are assumed to act as a passive scalar, i.e. their influence on the fluid motion can be considered negligible. By interpolating Lagrangian velocities over a regular grid, we obtained the Eulerian flow fields. It is then possible to evaluate the vorticity distribution and to investigate its evolution during the impact event. REFERENCES: Richardson P.L., Bower A.S. & Zenk W. (2000) 'A census of Meddies tracked by floats', Progress in Oceanography, 45, 209-250. Cenedese C. (2000) 'Mesoscale vortices colliding with a seamount', J. Geophys. Res. Cenedese A., Querzoli G. (2000) 'Particle Tracking Velocimetry: measuring in the Lagrangian reference frame', in: Particle Image Velocimetry and Associated Techniques, Lecture Series 2000-01, von Karman Institute for Fluid Dynamics.
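Computing vorticity from gridded Eulerian velocities, as done after interpolating the PTV trajectories onto a regular grid, amounts to a finite-difference curl. The NumPy sketch below illustrates it on a toy solid-body rotation field; the grid and rotation rate are made up, not experimental values.

```python
import numpy as np

def vorticity(u, v, dx, dy):
    """Vertical vorticity w_z = dv/dx - du/dy from gridded Eulerian velocities."""
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    return dvdx - dudy

# toy solid-body rotation: u = -omega*y, v = omega*x  ->  w_z = 2*omega everywhere
y, x = np.mgrid[-1:1:50j, -1:1:50j]
omega = 0.7
w = vorticity(-omega * y, omega * x, x[0, 1] - x[0, 0], y[1, 0] - y[0, 0])
print(w.mean())   # close to 1.4
```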
Design and control of 2-axis tilting actuator for endoscope using ionic polymer metal composites
NASA Astrophysics Data System (ADS)
Kim, Sung-Joo; Kim, Chul-Jin; Park, No-Cheol; Yang, Hyun-Seok; Park, Young-Pil
2009-03-01
In the field of endoscopy, in order to overcome the limitations of conventional endoscopy, the capsule endoscope has been developed and has recently been applied in clinical practice. However, since the capsule endoscope moves passively through the GI tract by peristalsis, the direction of its head, which carries the camera, cannot be controlled, and symptoms of disease may be missed. Therefore, in this thesis, a 2-axis tilting actuator for endoscopes, based on ionic polymer metal composites (IPMC), is presented. To be applicable to a capsule endoscope, the actuator material must satisfy constraints on size, energy consumption and working voltage. Since IPMC is an emerging material that exhibits large bending deflection at low voltage, consumes little energy and can be fabricated in any size or shape, it was selected as the actuator. The system tilts the camera module of the endoscope to reduce the invisible area of the intestines, with a target tilting angle of 5 degrees for each axis. To control the tilting angle, an LQR controller and a full-order observer are designed.
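An LQR gain for a single tilt axis can be obtained from the continuous-time algebraic Riccati equation. The sketch below uses SciPy on a toy second-order tilt model whose parameters are assumed for illustration and are not an identified IPMC actuator model; the full-order observer is omitted.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: K = R^{-1} B^T P, with P solving the Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy single-axis model with state [tilt angle, angular rate]
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])        # light damping (illustrative)
B = np.array([[0.0],
              [2.0]])              # actuator input gain (illustrative)
K = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[0.1]]))

# closed-loop poles of A - B K should have negative real parts (stable tilt control)
print(np.linalg.eigvals(A - B @ K))
```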
Thermal human phantom for testing of millimeter wave cameras
NASA Astrophysics Data System (ADS)
Palka, Norbert; Ryniec, Radoslaw; Piszczek, Marek; Szustakowski, Mieczyslaw; Zyczkowski, Marek; Kowalski, Marcin
2012-06-01
Screening cameras working in the millimetre band are gaining more and more interest in the security community, mainly due to their capability of finding items hidden under clothes. The performance of commercially available passive cameras is still limited by insufficient resolution and contrast in comparison to other wavelengths (visible or infrared range). Testing of such cameras usually requires persons carrying guns, bombs or knives. Such persons can have different clothes or body temperatures, which makes the measurements even more ambiguous. To avoid such situations we built a moving phantom of the human body. The phantom consists of a polystyrene manikin covered with a number of small water-filled pipes, which were then coated with a silicone "skin". The veins (pipes) are filled with water heated to 37 °C to obtain the same temperature as the human body. The phantom is made of non-metallic materials and is placed on a moving, wirelessly controlled platform with four wheels. The phantom can be dressed with a set of ordinary clothes and can be equipped with dangerous (guns, bombs) and non-dangerous items. For tests we used a passive commercially available camera, the TS4 from ThruVision Systems Ltd., operating at 250 GHz. We compared images of the phantom and of a person and obtained good similarity for both the undressed and the dressed cases. We also tested the phantom with different sets of clothes and hidden items and obtained good agreement with images of persons.
Hardware accelerator design for tracking in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil
2011-10-01
Smart cameras are important components in video analysis. For video analysis, smart cameras need to detect interesting moving objects, track such objects from frame to frame, and perform analysis of object tracks in real time. Therefore, the use of real-time tracking is prominent in smart cameras. A software implementation of the tracking algorithm on a general-purpose processor (such as a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.
Influence of camera parameters on the quality of mobile 3D capture
NASA Astrophysics Data System (ADS)
Georgiev, Mihail; Boev, Atanas; Gotchev, Atanas; Hannuksela, Miska
2010-01-01
We investigate the effect of camera de-calibration on the quality of depth estimation. A dense depth map is a format particularly suitable for mobile 3D capture (scalable and screen independent). However, in real-world scenarios cameras might move from their designated positions (due to vibrations or temperature-induced bending). For the experiments, we created a test framework, described in the paper. We investigate how such mechanical changes affect four different stereo-matching algorithms. We also assess how different geometric corrections (none, motion-compensation-like, full rectification) affect the estimation quality (how much offset can still be compensated with a "crop" over a larger CCD). Finally, we show how the estimated camera pose change (E) relates to stereo matching, which can be used as a "rectification quality" measure.
Jung, Jaehoon; Yoon, Inhye; Paik, Joonki
2016-01-01
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L
2013-01-01
In this study we aim at investigating the applicability of underwater 3D motion capture based on submerged video cameras in terms of 3D accuracy analysis and trajectory reconstruction. Static points with classical direct linear transform (DLT) solution, a moving wand with bundle adjustment and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed the hand motion trajectories in different swimming styles and qualitatively compared this with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm - 2D plate: 0.73 mm) was comparable to out of water results and highly superior to the classical DLT results (9.74 mm). Among all the swimmers, the hands' trajectories of the expert swimmer in the style were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in terms of the motion patterns and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards the quantitative 3D underwater motion analysis.
2015-10-30
During its closest-ever dive past the active south polar region of Saturn's moon Enceladus, NASA's Cassini spacecraft rapidly shuttered its imaging cameras to capture glimpses of the fast-moving terrain below.
3D imaging studies of rigid-fiber sedimentation
NASA Astrophysics Data System (ADS)
Vahey, David W.; Tozzi, Emilio J.; Scott, C. Tim; Klingenberg, Daniel J.
2011-03-01
Fibers are industrially important particles that experience coupling between rotational and translational motion during sedimentation. This leads to helical trajectories that have yet to be accurately predicted or measured. Sedimentation experiments and hydrodynamic analysis were performed on 11 copper "fibers" of average length 10.3 mm and diameter 0.20 mm. Each fiber contained three linear but non-coplanar segments. Fiber dimensions were measured by imaging their 2D projections on three planes. The fibers were sequentially released into silicone oil contained in a transparent cylinder of square cross section. Identical, synchronized cameras were mounted to a moveable platform and imaged the cylinder from orthogonal directions. The cameras were fixed in position during the time that a fiber remained in the field of view. Subsequently, the cameras were controllably moved to the next lower field of view. The trajectories of descending fibers were followed over distances up to 250 mm. Custom software was written to extract fiber orientation and trajectory from the 3D images. Fibers with similar terminal velocity often had significantly different terminal angular velocities. Both were well-predicted by theory. The radius of the helical trajectory was hard to predict when angular velocity was high, probably reflecting uncertainties in fiber shape, initial velocity, and fluid conditions associated with launch. Nevertheless, lateral excursion of fibers during sedimentation was reasonably predicted by fiber curl and asymmetry, suggesting the possibility of sorting fibers according to their shape.
Two-Camera Acquisition and Tracking of a Flying Target
NASA Technical Reports Server (NTRS)
Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter
2008-01-01
A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable by use of the stationary camera.
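The hand-off from the wide-field camera to the gimballed camera rests on a pixel-to-azimuth/elevation calibration of the stationary camera. The sketch below uses a simple linear calibration table as a stand-in (a real all-sky calibration would be nonlinear and fitted to reference sources); the coefficients and dictionary keys are purely illustrative.

```python
import numpy as np

def pixel_to_azel(px, py, calib):
    """Approximate azimuth/elevation of a target seen by the all-sky camera,
    using a per-camera linear calibration (assumed for illustration)."""
    az = calib["az0"] + calib["daz_dx"] * px + calib["daz_dy"] * py
    el = calib["el0"] + calib["del_dx"] * px + calib["del_dy"] * py
    return az, el

# hand the coarse direction to the gimballed narrow-field camera
calib = {"az0": 0.0, "daz_dx": 0.05, "daz_dy": 0.0,
         "el0": 0.0, "del_dx": 0.0, "del_dy": 0.05}   # degrees per pixel (illustrative)
az, el = pixel_to_azel(320, 240, calib)
gimbal_command = {"pan_deg": az, "tilt_deg": el}       # narrow camera then locks on
```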
Multi-frame image processing with panning cameras and moving subjects
NASA Astrophysics Data System (ADS)
Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric
2014-06-01
Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
Application of infrared camera to bituminous concrete pavements: measuring vehicle
NASA Astrophysics Data System (ADS)
Janků, Michal; Stryk, Josef
2017-09-01
Infrared thermography (IR) has been used for decades in certain fields, but the technological level of the measuring devices has not been sufficient for some applications. In recent years, good-quality thermal cameras with high resolution and very high thermal sensitivity have appeared on the market. This development in measuring technology has opened infrared thermography to new fields and to a larger number of users. This article describes research in progress at the Transport Research Centre focused on the use of infrared thermography for diagnostics of bituminous road pavements. A measuring vehicle, equipped with a thermal camera, a digital camera and a GPS sensor, was designed for pavement diagnostics. New, highly sensitive thermal cameras make it possible to measure very small temperature differences from a moving vehicle. This study shows the potential of high-speed inspection without lane closures using IR thermography.
Free-viewpoint video of human actors using multiple handheld Kinects.
Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian
2013-10-01
We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization over spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors in general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.
1995-03-23
A diver tests a secondary camera and maneuvering platform in Marshall's Neutral Buoyancy Simulator (NBS). The secondary camera will be beneficial for recording repairs and other extravehicular activities (EVAs) the astronauts will perform while servicing the Hubble Space Telescope (HST). The maneuvering platform was developed to give the astronauts something to stand on while performing maintenance tasks. These platforms were designed to be mobile so that the astronauts could move them to accommodate different work sites.
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as well as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.
2017-05-01
One urgent security problem is the detection of objects concealed inside the human body. For safety reasons, X-rays cannot be used widely or frequently for such detection, so we propose to use a THz camera and an IR camera instead. Here we continue to explore the use of an IR camera for detecting a temperature trace on the human body. In contrast to a passive THz camera, the IR camera does not reveal an object under clothing very clearly, which is a serious disadvantage for security applications based on IR imaging. To find possible ways of overcoming this disadvantage, we performed experiments with an IR camera produced by FLIR and developed a novel approach to computer processing of the images it captures. The approach increases the effective temperature resolution of the IR camera as well as the effective sensitivity of the human eye, so that changes in body temperature become visible through clothing. We analyze IR images of a person who drinks water and eats chocolate, and we follow the temperature trace on the skin caused by temperature changes inside the body. Additional experiments observe the temperature trace of objects placed behind thick overalls. The demonstrated results are important for the detection of forbidden objects concealed inside the human body by non-destructive inspection without the use of X-rays.
Ground moving target geo-location from monocular camera mounted on a micro air vehicle
NASA Astrophysics Data System (ADS)
Guo, Li; Ang, Haisong; Zheng, Xiangming
2011-08-01
The usual approaches to unmanned air vehicle (UAV)-to-ground target geo-location impose severe constraints on the system, such as stationary objects, an accurate geo-referenced terrain database, or a ground-plane assumption. Micro air vehicles (MAVs) are characterized by low-altitude flight, limited payload, and low-accuracy onboard sensors. Accordingly, a method was developed to determine the location of a ground moving target imaged from the air using a monocular camera mounted on an MAV. The method eliminates the need for a terrain database (elevation maps) and for altimeters that provide the MAV's and target's altitudes; instead, it requires only the MAV flight state provided by its onboard navigation system, which comprises an inertial measurement unit (IMU) and a global positioning system (GPS) receiver. The key is obtaining accurate information on the altitude of the ground moving target. First, an optical-flow method extracts static background feature points. Within a local region around the target in the current image, features lying on the same plane as the target are extracted and retained as aiding features. An inverse-velocity method then computes the locations of these points by combining them with the aircraft state. The target's altitude, calculated from the positions of these aiding features, is combined with the aircraft state and the target's image coordinates to geo-locate the target. A Bayesian estimation framework is employed to suppress noise from the camera, IMU, and GPS. First, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution for estimating the aircraft state and the locations of the aiding features that define the moving target's local environment. Second, an unscented transformation (UT) determines the estimated mean and covariance of the target location from the aircraft state and the aiding-feature locations, and passes them to a Kalman filter (KF) that tracks the moving target. Experimental results show that the method can geo-locate a moving target instantaneously after a single operator click and achieves 15-meter accuracy for an MAV flying 200 meters above the ground.
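The core geometric step, once the target's altitude has been estimated from the co-planar aiding features, is intersecting the camera ray through the target pixel with a horizontal plane at that altitude. Below is a minimal sketch of that step, assuming a pinhole model with intrinsics K, a camera-to-NED rotation derived from the IMU, and an NED position from GPS; it is not the paper's actual code.

```python
import numpy as np

# Sketch: geo-locate a ground target by intersecting the pixel ray with a
# horizontal plane at the target's estimated altitude (assumed conventions).

def pixel_to_ray_ned(u, v, K, R_cam_to_ned):
    """Unit ray in local NED coordinates for pixel (u, v)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_ned = R_cam_to_ned @ ray_cam
    return ray_ned / np.linalg.norm(ray_ned)

def geolocate(u, v, K, R_cam_to_ned, mav_pos_ned, target_alt):
    """
    mav_pos_ned: MAV position (north, east, down) from GPS/IMU.
    target_alt:  target altitude estimated from co-planar aiding features.
    Returns the target position in NED, or None if the ray never reaches the plane.
    """
    ray = pixel_to_ray_ned(u, v, K, R_cam_to_ned)
    target_down = -target_alt              # NED "down" is negative altitude
    if abs(ray[2]) < 1e-9:
        return None
    s = (target_down - mav_pos_ned[2]) / ray[2]
    if s <= 0:
        return None                        # target plane is behind the camera
    return mav_pos_ned + s * ray
```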
An intelligent space for mobile robot localization using a multi-camera system.
Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel
2014-08-15
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Since the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
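For the per-camera intrinsic step of such a calibration, a conventional chessboard routine can be run on the images of the robot-carried pattern. The sketch below is a hedged illustration using OpenCV; the board dimensions, square size, and image lists are assumptions, and the paper's joint calibration/localization optimization with odometry is not reproduced here.

```python
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners of the assumed chessboard
SQUARE = 0.03      # square size in metres (assumed)

# 3D coordinates of the board corners in the board frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def calibrate_camera(image_files):
    """Intrinsic calibration of one camera from images of the moving pattern."""
    obj_points, img_points, size = [], [], None
    for path in image_files:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, size, None, None)
    return rms, K, dist
```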
Satellite markers: a simple method for ground truth car pose on stereo video
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco
2018-04-01
Artificial prediction of the future location of other cars is a must in the context of advanced safety systems. Remote estimation of car pose, and particularly of its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain 3D information about a scene. Ground truth in this specific context is referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task involving the combination of different kinds of sensors. The novelty of this paper is a method to generate ground-truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system while it is moving, because the system is subject to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for 3D ground truth generation. In our case study, we focus on accurate heading-angle estimation of a moving car under realistic imagery. As outcomes, our satellite-marker method provides accurate car pose at frame level, and the instantaneous spatial orientation of each camera at frame level.
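When the 3D layout of markers on the car is known, a frame-level pose (and thus a heading angle) can be recovered from their image positions with a standard perspective-n-point solver. The following sketch is illustrative only, assuming calibrated intrinsics and a body frame with +X pointing forward; it is not the authors' satellite-marker pipeline.

```python
import cv2
import numpy as np

def car_pose_from_markers(marker_img_pts, marker_car_pts, K, dist):
    """
    marker_img_pts: Nx2 pixel coordinates of the detected markers.
    marker_car_pts: Nx3 marker coordinates in the car body frame (metres).
    Returns (rvec, tvec, heading_deg) of the car in the camera frame, or None.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_car_pts, np.float32),
        np.asarray(marker_img_pts, np.float32),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    # Heading: the car's forward axis (assumed +X of the body frame)
    # projected onto the camera's horizontal plane.
    fwd = R[:, 0]
    heading_deg = np.degrees(np.arctan2(fwd[0], fwd[2]))
    return rvec, tvec, heading_deg
```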
Navigation of a care and welfare robot
NASA Astrophysics Data System (ADS)
Yukawa, Toshihiro; Hosoya, Osamu; Saito, Naoki; Okano, Hideharu
2005-12-01
In this paper, we propose the development of a robot that can perform nursing tasks in a hospital. In a narrow environment such as a sickroom or a hallway, the robot must be able to move freely in arbitrary directions. Therefore, the robot needs to have high controllability and the capability to make precise movements. Our robot can recognize a line by using cameras and can be controlled along the reference directions by comparison with the original cell-map information; furthermore, it moves safely on the basis of an original center-line established permanently in the building. Communication between the robot and a centralized control center enables the robot's autonomous movement in the hospital. Through a navigation system using cell-map information, the robot is able to perform nursing tasks smoothly by changing the camera angle.
Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.
2015-01-01
Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834
Vision-based control for flight relative to dynamic environments
NASA Astrophysics Data System (ADS)
Causey, Ryan Scott
The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capability through more advanced systems that are useful for missions requiring autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature-point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform, for which the system dynamics are derived to relate vehicle states to image-plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature-point computations and the effects of uncertainty. The third simulation demonstrates open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.
NASA Astrophysics Data System (ADS)
Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir; Azraai, Nur Zaidi
2017-07-01
In the Malay world, traditional spirit rituals are used in healing practices or in everyday life. In some branches of the Malay martial art silat, such rituals are said to aid practitioners in combat. In this paper, no ritual is used; instead, we apply a topical medicine and change the environment while the subjects perform. Two performers (fighters) were selected, one with experience in martial-arts training and one without. A motion capture (MOCAP) camera system was used to observe and analyze their movements. Eight cameras were placed in the MOCAP room, two on each wall, all facing the center of the room so that every angle is covered; this helps prevent loss of detection of the markers attached to the performers' limbs. Passive markers were used, which reflect infrared light back to the camera sensor; the infrared is generated by sources around each camera lens. A 60 kg punching bag hung from an iron bar served as the target for the performers' punches. Markers were also attached to the punching bag so that its swing when struck could be measured. Each performer executed two moves with the same position and posture under each of three conditions, with the environmental change made without the performer's knowledge: the first two punches under normal conditions, the second pair while positive music was played to alter the performer's mood, and the third after a medicinal cream/oil producing a mild warming sensation was applied to the skin. The process was repeated for the performer with no experience. The marker positions were analyzed with the Cortex Motion Analysis software, from which the kinetics and kinematics of the performers were estimated. The results show an increase in the kinetic measures for each body part as the environment changed, with different results for the two performers.
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.
2012-01-01
As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinates of the image bounds are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. In order to make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to which the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
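The ground-to-image mapping described above can be sketched as projecting each DEM post through a pinhole camera model and sampling the image there. The snippet below is a simplified illustration under assumed pose conventions; the actual code also restricts the computation to a region of interest and handles visibility and resampling more carefully.

```python
import numpy as np

def georectify(image, K, R_world_to_cam, cam_pos, dem_xyz):
    """
    dem_xyz: (H, W, 3) array of world coordinates for each DEM post.
    Returns an (H, W) array holding the image value projected onto each post
    (NaN where the post is behind the camera or outside the frame).
    """
    h, w = dem_xyz.shape[:2]
    pts = dem_xyz.reshape(-1, 3) - cam_pos            # world -> camera-centred
    cam = (R_world_to_cam @ pts.T).T                  # rotate into camera axes
    out = np.full(h * w, np.nan)
    in_front = cam[:, 2] > 0                          # keep points ahead of the camera
    uv = (K @ cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                       # perspective division
    u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < image.shape[1]) & (v >= 0) & (v < image.shape[0])
    idx = np.nonzero(in_front)[0][valid]
    out[idx] = image[v[valid], u[valid]]              # "paint" the ground with pixels
    return out.reshape(h, w)
```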
NASA Astrophysics Data System (ADS)
Sensui, Takayuki
2012-10-01
Although digitalization has tripled the scale of the consumer-class camera market, extreme reductions in the prices of fixed-lens cameras have reduced profitability. As a result, a number of manufacturers have entered the market for system DSCs, i.e. digital still cameras with interchangeable lenses, where large profit margins are possible, and many high-ratio zoom lenses with image stabilization functions have been released. Quiet actuators are another indispensable component. A design with little degradation in performance due to all types of errors is preferred, for a good balance in terms of size, lens performance, and the ratio of quality to sub-standard products. Decentering sensitivity of the moving groups, such as that caused by tilting, is especially important. In addition, image stabilization mechanisms actively shift lens groups. Development of high-ratio zoom lenses with a vibration reduction mechanism is confronted by the challenge of reduced performance due to decentering, making control of decentering sensitivity between lens groups essential. While there are a number of ways to align lenses (axial alignment), shock resistance and the ability to stand up to environmental conditions must also be considered. Naturally, it is very difficult, if not impossible, to make lenses smaller and achieve low decentering sensitivity at the same time. A 4-group zoom construction is beneficial for making lenses smaller, but its decentering sensitivity is greater. A 5-group zoom configuration makes smaller lenses more difficult, but it enables lower decentering sensitivities. At Nikon, the most advantageous construction is selected for each lens based on its specifications. The AF-S DX NIKKOR 18-200mm f/3.5-5.6G ED VR II and AF-S NIKKOR 28-300mm f/3.5-5.6G ED VR are excellent examples of this.
A new computerized moving stage for optical microscopes
NASA Astrophysics Data System (ADS)
Hatiboglu, Can Ulas; Akin, Serhat
2004-06-01
Measurements of microscope stage movements in the x and y directions are of importance for some stereological methods. Traditionally, the length of stage movements is measured with differing precision and accuracy using a suitable motorized stage, a microscope and software. Such equipment is generally expensive and not readily available in many laboratories. Another challenging problem is adaptability: existing motorized stages often cannot be used with every available light microscope. This paper describes a simple and cheap programmable moving stage that can be used with the microscopes available on the market. The movements of the stage are controlled by two servo-motors and a controller chip via Java-based image processing software. With the developed motorized stage and a microscope equipped with a CCD camera, the software allows complete coverage of the specimens with minimum overlap, eliminating the optical strain associated with counting hundreds of images through an eyepiece, in a quick and precise fashion. The uses and accuracy of the developed stage are demonstrated using thin sections obtained from a limestone core plug.
High Speed Photographic Analysis Of Railgun Plasmas
NASA Astrophysics Data System (ADS)
Macintyre, I. B.
1985-02-01
Various experiments are underway at the Materials Research Laboratories, Australian Department of Defence, to develop a theory for the behaviour and propulsion action of plasmas in rail guns. Optical recording and imaging devices, with their low vulnerability to the effects of magnetic and electric fields present in the vicinity of electromagnetic launchers, have proven useful as diagnostic tools. This paper describes photoinstrumentation systems developed to provide visual qualitative assessment of the behaviour of plasma travelling along the bore of railgun launchers. In addition, a quantitative system is incorporated providing continuous data (on a microsecond time scale) of (a) Length of plasma during flight along the launcher bore. (b) Velocity of plasma. (c) Distribution of plasma with respect to time after creation. (d) Plasma intensity profile as it travels along the launcher bore. The evolution of the techniques used is discussed. Two systems were employed. The first utilized a modified high speed streak camera to record the light emitted from the plasma, through specially prepared fibre optic cables. The fibre faces external to the bore were then imaged onto moving film. The technique involved the insertion of fibres through the launcher body to enable the plasma to be viewed at discrete positions as it travelled along the launcher bore. Camera configuration, fibre optic preparation and experimental results are outlined. The second system utilized high speed streak and framing photography in conjunction with accurate sensitometric control procedures on the recording film. The two cameras recorded the plasma travelling along the bore of a specially designed transparent launcher. The streak camera, fitted with a precise slit size, recorded a streak image of the upper brightness range of the plasma as it travelled along the launcher's bore. The framing camera recorded an overall view of the launcher and the plasma path, to the maximum possible, governed by the film's ability to reproduce the plasma's brightness range. The instrumentation configuration, calibration, and film measurement using microdensitometer scanning techniques to evaluate inbore plasma behaviour, are also presented.
Phenology cameras observing boreal ecosystems of Finland
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali
2016-04-01
Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow key ecological features and moments to be extracted from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We will show results on the stability of camera-derived color signals and, based on that, discuss the applicability of cameras for monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras for supporting phenological observations derived from satellites, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.
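A typical camera-derived color signal of the kind discussed here is the green chromatic coordinate (GCC) averaged over a canopy region of interest. The sketch below shows one plausible way to compute it; the ROI coordinates and file handling are placeholders rather than the network's actual processing chain.

```python
import numpy as np
from PIL import Image

def gcc(image_path, roi):
    """Green chromatic coordinate G/(R+G+B) over a region of interest.

    roi = (row0, row1, col0, col1) delimiting the canopy area in the image.
    """
    rgb = np.asarray(Image.open(image_path), dtype=float)
    r0, r1, c0, c1 = roi
    patch = rgb[r0:r1, c0:c1, :3]
    r, g, b = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()
    return g / (r + g + b)

# Example: build a daily time series from (date, filename) pairs.
# series = [(date, gcc(fname, (200, 600, 100, 900))) for date, fname in images]
```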
Improved Scanners for Microscopic Hyperspectral Imaging
NASA Technical Reports Server (NTRS)
Mao, Chengye
2009-01-01
Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary to either move the specimen relative to the optical assembly that includes the microscope or else move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. In one version, the window would be a slit, the CCD would contain a one-dimensional array of pixels, and the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion. The image built up by scanning in this case would be an ordinary (non-spectral) image. In another version, the optics of which are depicted in the lower part of the figure, the spatial window would be a slit, the CCD would contain a two-dimensional array of pixels, the slit image would be refocused onto the CCD by a relay-lens pair consisting of a collimating and a focusing lens, and a prism-grating-prism optical spectrometer would be placed between the collimating and focusing lenses. Consequently, the image on the CCD would be spatially resolved along the slit axis and spectrally resolved along the axis perpendicular to the slit. As in the first-mentioned version, the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion.
Reticle stage based linear dosimeter
Berger, Kurt W [Livermore, CA
2007-03-27
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for photolithography systems that includes: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle imaged is projected from the ringfield camera.
Frequently Asked Questions about Digital Mammography
... in digital cameras, which convert x-rays into electrical signals. The electrical signals are used to produce images of the ... DBT? Digital breast tomosynthesis is a relatively new technology. In DBT, the X-ray tube moves in ...
Adaptive optics with pupil tracking for high resolution retinal imaging
Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris
2012-01-01
Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577
NASA Technical Reports Server (NTRS)
1994-01-01
In laparoscopic surgery, tiny incisions are made in the patient's body and a laparoscope (an optical tube with a camera at the end) is inserted. The camera's image is projected onto two video screens, whose views guide the surgeon through the procedure. AESOP, a medical robot developed by Computer Motion, Inc. with NASA assistance, eliminates the need for a human assistant to operate the camera. The surgeon uses a foot pedal control to move the device, allowing him to use both hands during the surgery. Miscommunication is avoided; AESOP's movement is smooth and steady, and the memory vision is invaluable. Operations can be completed more quickly, and the patient spends less time under anesthesia. AESOP has been approved by the FDA.
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.
Ci, Wenyan; Huang, Yingping
2016-10-17
Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatching in the KLT algorithm. A space position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
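The front end of such a pipeline, KLT feature tracking followed by RANSAC-based outlier rejection and relative pose recovery, can be sketched with OpenCV as below. This is an illustrative monocular simplification under assumed intrinsics K; the paper's stereo objective function, circle matching, and space position constraint are not reproduced.

```python
import cv2
import numpy as np

def ego_motion(prev_gray, cur_gray, K):
    """Rotation and (up-to-scale) translation between two frames."""
    # KLT: detect corners in the previous frame and track them into the current one.
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=800,
                                   qualityLevel=0.01, minDistance=8)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts0, None)
    good = status.ravel() == 1
    p0, p1 = pts0[good].reshape(-1, 2), pts1[good].reshape(-1, 2)
    # RANSAC essential-matrix estimation rejects mismatches and moving points.
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t
```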
NASA Astrophysics Data System (ADS)
Sobue, Shinichi; Yamazaki, Junichi; Matsumoto, Shuichi; Konishi, Hisahiro; Maejima, Hironori; Sasaki, Susumu; Kato, Manabu; Mitsuhashi, Seiji; Tachino, Junichi
The lunar explorer SELENE (also called KAGUYA) carried thirteen scientific mission instruments to reveal the origin and evolution of the Moon and to investigate its possible future utilization. In addition to the scientific instruments, a high-definition TV (HDTV) camera provided by the Japan Broadcasting Corporation (NHK) was carried on KAGUYA to promote public outreach. We usually use housekeeping telemetry data to derive the satellite attitude, along with orbit determination and propagated information. However, it takes time to derive this information, since orbit determination and propagation require the use of an orbital model. When a malfunction of a KAGUYA reaction wheel occurred, correct attitude information was not available, which meant that a correct orbit determination could not be obtained in a timely fashion. However, when we checked the HDTV movies, we found that horizon information on the lunar surface derived from the HDTV moving images, used as a horizon sensor, was very useful for determining the attitude of KAGUYA. We then compared this information with the attitude information derived from orbital telemetry to validate the accuracy of the HDTV-derived estimates. The comparison showed good agreement for the pitch attitude, and we could estimate the pitch-angle change during KAGUYA mission operations simply and quickly. In this study, we show the usefulness of the HDTV camera as a horizon sensor.
Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P
2017-10-13
This study presents design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones, which aims to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for imaging blurs by hand shaking. The compensation proposed is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, which is followed by designing a simple lead-lag controller based on the established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation is conducted to show the favorable performance of the designed OIS; i.e., it is able to stabilize the lens holder to the desired position within 0.02 s, which is much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2-2.5 μm, which is commensurate with the very small pixel size found in most commercial image sensors, thus significantly minimizing image blur caused by hand shaking.
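A discrete lead-lag compensator of the kind described can be obtained by bilinear (Tustin) discretization of C(s) = K(s + z)/(s + p) and implemented as a one-line difference equation, which is what makes it cheap enough for FPGA computation. The sketch below uses placeholder gain, zero, pole, and sample-time values, not the paper's tuned parameters.

```python
# Minimal sketch of a discrete lead-lag compensator via the Tustin transform.
# Gain, zero, pole and sample time are illustrative placeholders.

class LeadLag:
    def __init__(self, K=2.0, zero=50.0, pole=500.0, dt=1.0 / 10000.0):
        a = 2.0 / dt  # bilinear transform: s ~ a * (1 - z^-1) / (1 + z^-1)
        self.b0 = K * (a + zero) / (a + pole)
        self.b1 = K * (zero - a) / (a + pole)
        self.a1 = (pole - a) / (a + pole)
        self.prev_e = 0.0
        self.prev_u = 0.0

    def step(self, error):
        # u[n] = b0*e[n] + b1*e[n-1] - a1*u[n-1]
        u = self.b0 * error + self.b1 * self.prev_e - self.a1 * self.prev_u
        self.prev_e, self.prev_u = error, u
        return u

# In use, error = desired lens position (from the shake estimate) minus the
# measured lens position; the output would drive the VCM current command.
```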
Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
Darmanis, Spyridon; Toms, Andrew; Durman, Robert; Moore, Donna; Eyres, Keith
2007-07-01
The aim was to reduce the operating time in computer-assisted navigated total knee replacement (TKR) by improving communication between the infrared camera and the trackers placed on the patient. The innovation involves placing a routinely used laser pointer on top of the camera, so that the infrared cameras focus precisely on the trackers located on the knee to be operated on. A prospective randomized study was performed involving 40 patients divided into two groups, A and B. Both groups underwent navigated TKR, but for group B patients a laser pointer was used to improve the targeting capabilities of the cameras. Without the laser pointer, the camera had to be moved a mean of 9.2 times in order to identify the trackers. With the introduction of the laser pointer, this was reduced to 0.9 times. Accordingly, the additional mean time required without the laser pointer was 11.6 minutes. Time delays are a major problem in computer-assisted surgery, and our technical suggestion can contribute towards reducing the delays associated with this particular application.
NASA Astrophysics Data System (ADS)
Naimark, Michael
1997-05-01
Two immersive virtual environments produced as art installations investigate 'sense of place' in different but complementary ways. One is a stereoscopic moviemap, the other a stereoscopic panorama. Moviemaps are interactive systems which allow 'travel' along pre-recorded routes with some control over speed and direction. Panoramas are 360-degree visual representations dating back to the late 18th century but which have recently experienced renewed interest due to 'virtual reality' systems. Moviemaps allow 'moving around' while panoramas allow 'looking around,' but to date there has been little or no attempt to produce either in stereo from camera-based material. 'See Banff!' is a stereoscopic moviemap about landscape, tourism, and growth in the Canadian Rocky Mountains. It was filmed with twin 16 mm cameras and displayed as a single-user experience housed in a cabinet resembling a century-old kinetoscope, with a crank on the side for 'moving through' the material. 'Be Now Here (Welcome to the Neighborhood)' (1995-6) is a stereoscopic panorama filmed in public gathering places around the world, based upon the UNESCO World Heritage 'In Danger' list. It was filmed with twin 35 mm motion picture cameras on a rotating tripod and displayed using a synchronized rotating floor.
Compact 3D Camera for Shake-the-Box Particle Tracking
NASA Astrophysics Data System (ADS)
Hesseling, Christina; Michaelis, Dirk; Schneiders, Jan
2017-11-01
Time-resolved 3D-particle tracking usually requires the time-consuming optical setup and calibration of 3 to 4 cameras. Here, a compact four-camera housing has been developed. The performance of the system using Shake-the-Box processing (Schanz et al. 2016) is characterized. It is shown that the stereo-base is large enough for sensible 3D velocity measurements. Results from successful experiments in water flows using LED illumination are presented. For large-scale wind tunnel measurements, an even more compact version of the system is mounted on a robotic arm. Once calibrated for a specific measurement volume, the necessity for recalibration is eliminated even when the system moves around. Co-axial illumination is provided through an optical fiber in the middle of the housing, illuminating the full measurement volume from one viewing direction. Helium-filled soap bubbles are used to ensure sufficient particle image intensity. This way, the measurement probe can be moved around complex 3D-objects. By automatic scanning and stitching of recorded particle tracks, the detailed time-averaged flow field of a full volume of cubic meters in size is recorded and processed. Results from an experiment at TU-Delft of the flow field around a cyclist are shown.
Science, conservation, and camera traps
Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas
2011-01-01
Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
8. Elevation view of midsection of west wall of South ...
8. Elevation view of midsection of west wall of South Section. This photo makes an extended elevation panorama with photos WA-116-E-7 and WA-116-E-9. Note that Crane No. 42 also appears in photo WA-116-E-7. This is because the crane moved in the time that the photographer moved the camera. - Puget Sound Naval Shipyard, Drydock No. 3, Farragut Avenue, Bremerton, Kitsap County, WA
Phase-stepped fringe projection by rotation about the camera's perspective center.
Huddart, Y R; Valera, J D; Weston, N J; Featherstone, T C; Moore, A J
2011-09-12
A technique to produce phase steps in a fringe projection system for shape measurement is presented. Phase steps are produced by introducing relative rotation between the object and the fringe projection probe (comprising a projector and camera) about the camera's perspective center. Relative motion of the object in the camera image can be compensated, because it is independent of the distance of the object from the camera, whilst the phase of the projected fringes is stepped due to the motion of the projector with respect to the object. The technique was validated with a static fringe projection system by moving an object on a coordinate measuring machine (CMM). The alternative approach, of rotating a lightweight and robust CMM-mounted fringe projection probe, is discussed. An experimental accuracy of approximately 1.5% of the projected fringe pitch was achieved, limited by the standard phase-stepping algorithms used rather than by the accuracy of the phase steps produced by the new technique.
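Once the rotation-induced phase steps have been applied and the object motion compensated in the camera image, the wrapped phase can be recovered with a standard phase-stepping formula. The sketch below shows the common four-step (π/2) variant as an illustration only; as noted above, the choice of standard phase-stepping algorithm, not the step generation itself, limited the achieved accuracy.

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase (radians, in [-pi, pi]) from four fringe images stepped by pi/2.

    With I_k = A + B*cos(phi + k*pi/2): (I3 - I1) = 2B*sin(phi), (I0 - I2) = 2B*cos(phi).
    """
    return np.arctan2(I3.astype(float) - I1.astype(float),
                      I0.astype(float) - I2.astype(float))

# The wrapped phase is then unwrapped (e.g. with a quality-guided algorithm) and
# converted to height using the calibrated projector-camera geometry.
```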
Changing requirements and solutions for unattended ground sensors
NASA Astrophysics Data System (ADS)
Prado, Gervasio; Johnson, Robert
2007-10-01
Unattended Ground Sensors (UGS) were first used to monitor Viet Cong activity along the Ho Chi Minh Trail in the 1960's. In the 1980's, significant improvement in the capabilities of UGS became possible with the development of digital signal processors; this led to their use as fire control devices for smart munitions (for example: the Wide Area Mine) and later to monitor the movements of mobile missile launchers. In these applications, the targets of interest were large military vehicles with strong acoustic, seismic and magnetic signatures. Currently, the requirements imposed by new terrorist threats and illegal border crossings have changed the emphasis to the monitoring of light vehicles and foot traffic. These new requirements have changed the way UGS are used. To improve performance against targets with lower emissions, sensors are used in multi-modal arrangements. Non-imaging sensors (acoustic, seismic, magnetic and passive infrared) are now being used principally as activity sensors to cue imagers and remote cameras. The availability of better imaging technology has made imagers the preferred source of "actionable intelligence". Infrared cameras are now based on un-cooled detector-arrays that have made their application in UGS possible in terms of their cost and power consumption. Visible light imagers are also more sensitive extending their utility well beyond twilight. The imagers are equipped with sophisticated image processing capabilities (image enhancement, moving target detection and tracking, image compression). Various commercial satellite services now provide relatively inexpensive long-range communications and the Internet provides fast worldwide access to the data.
Low-cost panoramic infrared surveillance system
NASA Astrophysics Data System (ADS)
Kecskes, Ian; Engel, Ezra; Wolfe, Christopher M.; Thomson, George
2017-05-01
A nighttime surveillance concept consisting of a single-surface omnidirectional mirror assembly and an uncooled vanadium oxide (VOx) longwave infrared (LWIR) camera has been developed. This configuration provides a continuous field of view spanning 360° in azimuth and more than 110° in elevation. Both the camera and the mirror are readily available, off-the-shelf, inexpensive products. The mirror assembly is marketed for use in the visible spectrum and requires only minor modifications to function in the LWIR spectrum. The compactness and portability of this optical package offer significant advantages over many existing infrared surveillance systems. The developed system was evaluated on its ability to detect moving, human-sized heat sources at ranges between 10 m and 70 m. Raw camera images captured by the system are converted from rectangular coordinates in the camera focal plane to polar coordinates and then unwrapped into the user's azimuth and elevation system. Digital background subtraction and color mapping are applied to the images to increase the user's ability to extract moving items from background clutter. A second optical system consisting of a commercially available 50 mm f/1.2 ATHERM lens and a second LWIR camera is used to examine the details of objects of interest identified using the panoramic imager. A description of the components of the proof of concept is given, followed by a presentation of raw images taken by the panoramic LWIR imager. The method by which these images are analyzed is described, and the results are presented side-by-side with the output of the 50 mm LWIR imager and a panoramic visible light imager. Finally, a discussion of the concept and its future development is given.
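A minimal sketch of the unwrapping and background-subtraction steps described above follows; the mirror center, radius, panorama size, and detection threshold are placeholder values, and the running-mean background model is an assumed stand-in for the authors' processing.

```python
# Sketch: unwrap a raw omnidirectional LWIR frame and flag moving warm objects.
import cv2
import numpy as np

class PanoramicDetector:
    def __init__(self, center=(320, 256), max_radius=250, out_size=(1024, 256),
                 alpha=0.05, thresh=8.0):
        self.center, self.max_radius, self.out_size = center, max_radius, out_size
        self.alpha, self.thresh = alpha, thresh
        self.background = None

    def unwrap(self, frame):
        # Donut-shaped mirror image -> strip: x = azimuth, y = radial distance,
        # which maps monotonically to elevation for this mirror geometry.
        return cv2.warpPolar(frame, self.out_size, self.center, self.max_radius,
                             cv2.WARP_POLAR_LINEAR + cv2.INTER_LINEAR)

    def detect(self, frame):
        pano = self.unwrap(frame).astype(np.float32)
        if self.background is None:
            self.background = pano.copy()
        mask = (np.abs(pano - self.background) > self.thresh).astype(np.uint8) * 255
        self.background = (1 - self.alpha) * self.background + self.alpha * pano
        return pano, mask     # unwrapped panorama and moving-object mask
```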
Understanding Visible Perception
NASA Technical Reports Server (NTRS)
2003-01-01
One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without having to affect the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible (above). Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that: Eye movement can predict human perceptual performance, smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and common factors can make both smooth pursuit and visual perception produce errors in motor responses.
Park, J H; Garipov, G K; Jeon, J A; Khrenov, B A; Kim, J E; Kim, M; Kim, Y K; Lee, C-H; Lee, J; Na, G W; Nam, S; Park, I H; Park, Y-S
2008-12-08
We introduce a novel telescope consisting of a pinhole-like camera with rotatable MEMS micromirrors substituting for pinholes. The design is ideal for observations of transient luminous phenomena or fast-moving objects, such as upper atmospheric lightning and bright gamma ray bursts. The advantage of the MEMS "obscura telescope" over conventional cameras is that it is capable both of searching for events over a wide field of view, and fast zooming to allow detailed investigation of the structure of events. It is also able to track the triggering object to investigate its space-time development, and to center the interesting portion of the image on the photodetector array. We present the proposed system and the test results for the MEMS obscura telescope which has a field of view of 11.3 degrees, sixteen times zoom-in and tracking within 1 ms. (c) 2008 Optical Society of America
Positron emission particle tracking and its application to granular media
NASA Astrophysics Data System (ADS)
Parker, D. J.
2017-05-01
Positron emission particle tracking (PEPT) is a technique for tracking a single radioactively labelled particle. Accurate 3D tracking is possible even when the particle is moving at high speed inside a dense opaque system. In many cases, tracking a single particle within a granular system provides sufficient information to determine the time-averaged behaviour of the entire granular system. After a general introduction, this paper describes the detector systems (PET scanners and positron cameras) used to record PEPT data, the techniques used to label particles, and the algorithms used to process the data. This paper concentrates on the use of PEPT for studying granular systems: the focus is mainly on work at Birmingham, but reference is also made to work from other centres, and options for wider diversification are suggested.
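The PEPT location algorithms referred to above essentially find the point closest to the set of detected coincidence lines while discarding corrupted events; the following is a generic least-squares sketch of one such location step, not the Birmingham code.

```python
# Illustrative PEPT-style location step: least-squares point closest to a set of
# coincidence lines, iteratively discarding the worst-fitting fraction of lines.
import numpy as np

def closest_point(points, dirs):
    """points, dirs: (N, 3) arrays; dirs must be unit vectors along each line."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        P = np.eye(3) - np.outer(d, d)      # projector onto the plane normal to d
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

def pept_locate(points, dirs, keep=0.5, iters=3):
    for _ in range(iters):
        x = closest_point(points, dirs)
        r = np.array([np.linalg.norm((np.eye(3) - np.outer(d, d)) @ (x - p))
                      for p, d in zip(points, dirs)])     # distance of x to each line
        order = np.argsort(r)[: max(4, int(keep * len(r)))]
        points, dirs = points[order], dirs[order]          # drop the outlier lines
    return closest_point(points, dirs)
```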
Reitsamer, H; Groiss, H P; Franz, M; Pflug, R
2000-01-31
We present a computer-guided microelectrode positioning system that is routinely used in our laboratory for intracellular electrophysiology and functional staining of retinal neurons. Wholemount preparations of isolated retina are kept in a superfusion chamber on the stage of an inverted microscope. Cells and layers of the retina are visualized by Nomarski interference contrast using infrared light in combination with a CCD camera system. After a five-point calibration has been performed, the electrode can be guided to any point inside the calibrated volume without moving the retina. Electrode deviations from target cells can be corrected by the software, further improving the precision of the system. The good visibility of cells avoids prelabeling with fluorescent dyes and makes it possible to work under completely dark-adapted conditions.
Improving land vehicle situational awareness using a distributed aperture system
NASA Astrophysics Data System (ADS)
Fortin, Jean; Bias, Jason; Wells, Ashley; Riddle, Larry; van der Wal, Gooitzen; Piacentino, Mike; Mandelbaum, Robert
2005-05-01
U.S. Army Research, Development, and Engineering Command (RDECOM) Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (NVESD) has performed early work to develop a Distributed Aperture System (DAS). The DAS aims at improving the situational awareness of armored fighting vehicle crews under closed-hatch conditions. The concept is based on a plurality of sensors configured to create a day and night dome of surveillance coupled with heads up displays slaved to the operator's head to give a "glass turret" feel. State-of-the-art image processing is used to produce multiple seamless hemispherical views simultaneously available to the vehicle commander, crew members and dismounting infantry. On-the-move automatic cueing of multiple moving/pop-up low silhouette threats is also done with the possibility to save/revisit/share past events. As a first step in this development program, a contract was awarded to United Defense to further develop the Eagle VisionTM system. The second-generation prototype features two camera heads, each comprising four high-resolution (2048x1536) color sensors, and each covering a field of view of 270°hx150°v. High-bandwidth digital links interface the camera heads with a field programmable gate array (FPGA) based custom processor developed by Sarnoff Corporation. The processor computes the hemispherical stitch and warp functions required for real-time, low latency, immersive viewing (360°hx120°v, 30° down) and generates up to six simultaneous extended graphics array (XGA) video outputs for independent display either on a helmet-mounted display (with associated head tracking device) or a flat panel display (and joystick). The prototype is currently in its last stage of development and will be integrated on a vehicle for user evaluation and testing. Near-term improvements include the replacement of the color camera heads with a pixel-level fused combination of uncooled long wave infrared (LWIR) and low light level intensified imagery. It is believed that the DAS will significantly increase situational awareness by providing the users with a day and night, wide area coverage, immersive visualization capability.
A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming
2018-06-01
This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera, and the other is called the working camera. The tracking camera is used for tracking the positions of the MBVS, and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the mobility of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using a single working camera avoids a drawback of multi-camera networks, in which variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.
2016-09-15
NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this movie showing just over four Saturn days. With Cassini's orbit being moved closer to the planet in preparation for the mission's 2017 finale, scientists took this final opportunity to capture a long movie in which the planet's full disk fit into a single wide-angle camera frame. Visible at top is the giant hexagon-shaped jet stream that surrounds the planet's north pole. Each side of this huge shape is slightly wider than Earth. The resolution of the 250 natural color wide-angle camera frames comprising this movie is 512x512 pixels, rather than the camera's full resolution of 1024x1024 pixels. Cassini's imaging cameras have the ability to take reduced-size images like these in order to decrease the amount of data storage space required for an observation. The spacecraft began acquiring this sequence of images just after it obtained the images to make a three-panel color mosaic. When it began taking images for this movie sequence, Cassini was 1,847,000 miles (2,973,000 kilometers) from Saturn, with an image scale of 355 kilometers per pixel. When it finished gathering the images, the spacecraft had moved 171,000 miles (275,000 kilometers) closer to the planet, with an image scale of 200 miles (322 kilometers) per pixel. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21047
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.
2016-09-01
One urgent security problem is the detection of objects concealed inside the human body. Obviously, for safety reasons, X-rays cannot be used widely and often for such object detection. Three years ago, we demonstrated the possibility of seeing a temperature trace on the skin of the human body, induced by eating food or drinking water, using a passive THz camera. However, such a camera is very expensive, so in practice it would be very convenient if an IR camera could be used for this purpose. In contrast to the passive THz camera, the IR camera does not allow one to see an object under clothing if the image it produces is used directly. This is, of course, a significant disadvantage for security applications based on the IR camera. To overcome this disadvantage, we develop a novel approach to the computer processing of IR camera images. It allows us to increase the effective temperature resolution of the IR camera as well as the effective sensitivity of the human eye to the displayed images. As a consequence, it becomes possible to see changes of human body temperature through clothing. We analyze IR images of a person who drinks water and eats chocolate. We follow the temperature trace on the skin of the human body caused by the changing temperature inside the body. Some experiments were also made with body temperature measured through a T-shirt. The results shown are very important for the detection of forbidden objects concealed inside the human body by non-destructive inspection without the use of X-rays.
NASA Astrophysics Data System (ADS)
Hanel, A.; Stilla, U.
2017-05-01
Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field of view between environment and interior cameras, and often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
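A hedged sketch of the per-frame pose estimation step (camera pose from ground control points of the respective point cloud) is shown below using OpenCV's solvePnP; the intrinsics and the synthetic control points are invented purely for illustration.

```python
# Sketch: estimate one vehicle-camera pose from ground control points.
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])              # assumed intrinsics
dist = np.zeros(5)                           # assume pre-undistorted images

# Synthetic GCPs in the vehicle frame and a "true" pose, so the sketch is self-checking.
rng = np.random.default_rng(0)
object_pts = rng.uniform([-2, -1, 4], [2, 2, 8], size=(8, 3))
rvec_true = np.array([0.05, -0.10, 0.02])
tvec_true = np.array([0.3, -0.1, 1.5])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)
print(ok, (-R.T @ tvec).ravel())             # camera centre in the vehicle frame
```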
Wei, Hsiang-Chun; Su, Guo-Dung John
2012-01-01
Conventional camera modules with image sensors manipulate the focus or zoom by moving lenses. Although motors, such as voice-coil motors, can move the lens sets precisely, large volume, high power consumption, and long moving time are critical issues for motor-type camera modules. A deformable mirror (DM) provides a good opportunity to improve these issues. The DM is a reflective type optical component which can alter the optical power to focus the lights on the two dimensional optical image sensors. It can make the camera system operate rapidly. Ionic polymer metal composite (IPMC) is a promising electro-actuated polymer material that can be used in micromachining devices because of its large deformation with low actuation voltage. We developed a convenient simulation model based on Young's modulus and Poisson's ratio. We divided an ion exchange polymer, also known as Nafion®, into two virtual layers in the simulation model: one was expansive and the other was contractive, caused by opposite constant surface forces on each surface of the elements. Therefore, the deformation for different IPMC shapes can be described more easily. A standard experiment of voltage vs. tip displacement was used to verify the proposed modeling. Finally, a gear shaped IPMC actuator was designed and tested. Optical power of the IPMC deformable mirror is experimentally demonstrated to be 17 diopters with two volts. The needed voltage was about two orders lower than conventional silicon deformable mirrors and about one order lower than the liquid lens. PMID:23112648
Usachev with docking probe in Destiny module
2001-05-30
ISS002-E-6576 (30 May 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, moves a docking probe through the Destiny Laboratory on the International Space Station (ISS). The image was recorded with a digital still camera.
NASA Astrophysics Data System (ADS)
Watanabe, Jun-Ichi; Honda, Mitsuhiko; Ishiguro, Masateru; Ootsubo, Takafumi; Sarugaku, Yuki; Kadono, Toshihiko; Sakon, Itsuki; Fuse, Tetsuharu; Takato, Naruhisa; Furusho, Reiko
2009-08-01
Mid-infrared 8--25μm imaging and spectroscopic observations of the comet 17P/Holmes in the early phase of its outburst in brightness were performed on 2007 October 25--28UT using the Cooled Mid-Infrared Camera and Spectrometer (COMICS) on the 8.2-m Subaru Telescope. We detected an isolated dust cloud that moved toward the south-west direction from the nucleus. The 11.2μm peak of a crystalline silicate feature onto a broad amorphous silicate feature was also detected both in the central condensation of the nucleus and an isolated dust cloud. The color temperature of the isolated dust cloud was estimated to be ˜200K, which is slightly higher than the black-body temperature. Our analysis of the motion indicates that the isolated cloud moved anti-sunward. We propose several possibilities for the motion of the cloud: fluffy dust particles in the isolated cloud started to depart from the nucleus due to radiation pressure almost as soon as the main outburst occurred, or dust particles moved by some other anti-sunward forces, such as a rocket effect and photophoresis when the surrounding dust coma became optically thin. The origin and the nature of the isolated dust cloud are discussed in this paper.
Using Visual Odometry to Estimate Position and Attitude
NASA Technical Reports Server (NTRS)
Maimone, Mark; Cheng, Yang; Matthies, Larry; Schoppers, Marcel; Olson, Clark
2007-01-01
A computer program in the guidance system of a mobile robot generates estimates of the position and attitude of the robot, using features of the terrain on which the robot is moving, by processing digitized images acquired by a stereoscopic pair of electronic cameras mounted rigidly on the robot. Developed for use in localizing the Mars Exploration Rover (MER) vehicles on Martian terrain, the program can also be used for similar purposes on terrestrial robots moving in sufficiently visually textured environments: examples include low-flying robotic aircraft and wheeled robots moving on rocky terrain or inside buildings. In simplified terms, the program automatically detects visual features and tracks them across stereoscopic pairs of images acquired by the cameras. The 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion. Testing has shown that by use of this software, the error in the estimate of the position of the robot can be limited to no more than 2 percent of the distance traveled, provided that the terrain is sufficiently rich in features. This software has proven extremely useful on the MER vehicles during driving on sandy and highly sloped terrains on Mars.
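The MER software is stereo-based and uses robust motion estimation; the simplified monocular sketch below only illustrates the detect-track-estimate pattern described above, assuming grayscale frames and a known intrinsic matrix K.

```python
# Simplified two-frame, monocular analogue of the feature-track-then-estimate-motion
# idea (not the MER stereo pipeline).
import cv2
import numpy as np

def relative_motion(img0, img1, K):
    """img0, img1: consecutive grayscale frames; K: 3x3 intrinsic matrix."""
    pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
    good = status.ravel() == 1
    p0, p1 = pts0[good], pts1[good]
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t          # rotation and unit translation direction between frames

# Usage: R, t = relative_motion(prev_gray, curr_gray, K)
```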
Semantic Information Extraction of Lanes Based on Onboard Camera Videos
NASA Astrophysics Data System (ADS)
Tang, L.; Deng, T.; Ren, C.
2018-04-01
In the field of autonomous driving, semantic information about lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of semantic information from onboard camera videos. The proposed method first detects the edges of lanes from the grayscale gradient direction and improves the probabilistic Hough transform to fit them; it then uses the vanishing-point principle to calculate the lane geometrical position, and uses lane characteristics to extract lane semantic information through decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
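An illustrative sketch of the first two steps (gradient-based edge detection and probabilistic Hough fitting) follows; the thresholds are placeholders, and the paper's improved Hough fitting and decision-tree classifier are not reproduced.

```python
# Sketch: lane edge detection and probabilistic Hough line fitting.
import cv2
import numpy as np

def detect_lane_segments(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 60, 180)                  # gradient-based edge map
    h, w = edges.shape
    edges[: h // 2, :] = 0                            # keep only the road half of the image
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=40, maxLineGap=20)
    return [] if segments is None else segments.reshape(-1, 4)

# The fitted segments would then be intersected to estimate the vanishing point,
# and lane attributes (solid/dashed, colour) fed to the semantic classifier.
```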
Event-Driven Random-Access-Windowing CCD Imaging System
NASA Technical Reports Server (NTRS)
Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William
2004-01-01
A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
NASA Technical Reports Server (NTRS)
Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.
2011-01-01
We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.
Moving vehicles segmentation based on Gaussian motion model
NASA Astrophysics Data System (ADS)
Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.
2005-07-01
Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyze the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. Firstly, we propose an adaptive background update method in which the background is updated according to changes in illumination conditions and thus adapts sensitively to illumination changes. Secondly, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of the moving pixels are modeled as a Gaussian and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results for several typical scenes show that the proposed model can detect moving vehicles correctly and is immune to the influence of moving objects caused by waving trees and camera vibration.
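As a rough sketch of the motion-modelling idea, the snippet below maintains a running Gaussian over foreground motion vectors and keeps only vectors consistent with it; the paper's on-line EM over an adaptive model is simplified here to a single running Gaussian with placeholder constants.

```python
# Sketch: running Gaussian over per-pixel motion vectors (simplified stand-in for
# the paper's on-line EM update).
import numpy as np

class MotionGaussian:
    def __init__(self, alpha=0.05):
        self.alpha = alpha
        self.mean = np.zeros(2)
        self.var = np.ones(2)

    def update(self, vectors):
        """vectors: (N, 2) array of motion vectors of foreground pixels (e.g. optical flow)."""
        m = vectors.mean(axis=0)
        v = vectors.var(axis=0) + 1e-6
        self.mean = (1 - self.alpha) * self.mean + self.alpha * m
        self.var = (1 - self.alpha) * self.var + self.alpha * v

    def is_vehicle(self, vectors, k=2.5):
        """Flag vectors within k standard deviations of the learned dominant motion."""
        z = np.abs(vectors - self.mean) / np.sqrt(self.var)
        return np.all(z < k, axis=1)
```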
NASA Technical Reports Server (NTRS)
Everett, Louis J.
1994-01-01
The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector. In the robotics literature this is known as the sensor registration problem. The registration problem is important to solve if TRAC images need to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, then the robot may not be able to grapple an object even though the TRAC sensor indicates it should. In addition, if the camera is significantly far from the alignment it is expected to be at, TRAC may give incorrect feedback for the control of the robot. A simple example is if the robot operator thinks the camera is right side up but the camera is actually upside down, the camera feedback will tell the operator to move in an incorrect direction. The automatic calibration algorithm requires the operator to translate and rotate the robot arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.
A view of the ET camera on STS-112
NASA Technical Reports Server (NTRS)
2002-01-01
KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.
A view of the ET camera on STS-112
NASA Technical Reports Server (NTRS)
2002-01-01
KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.
The guidance methodology of a new automatic guided laser theodolite system
NASA Astrophysics Data System (ADS)
Zhang, Zili; Zhu, Jigui; Zhou, Hu; Ye, Shenghua
2008-12-01
Spatial coordinate measurement systems such as theodolites, laser trackers and total stations have wide application in manufacturing and certification processes. The traditional operation of theodolites is manual and time-consuming, which does not meet the needs of online industrial measurement; laser trackers and total stations require reflective targets and therefore cannot realize noncontact, automatic measurement. A new automatic guided laser theodolite system is presented to achieve automatic and noncontact measurement with high precision and efficiency. It is comprised of two sub-systems: the basic measurement system and the control and guidance system. The former is formed by two laser motorized theodolites that accomplish the fundamental measurement tasks, while the latter consists of a camera and vision system unit mounted on a mechanical displacement unit to provide azimuth information for the measured points. The mechanical displacement unit can rotate horizontally and vertically to direct the camera to the desired orientation so that the camera can scan every measured point in the measuring field; the azimuth of the corresponding point is then calculated so that the laser motorized theodolites can move accordingly to aim at it. In this paper the whole system composition and measuring principle are analyzed, and the emphasis is laid on the guidance methodology by which the laser points from the theodolites move towards the measured points. The guidance process is implemented based on the coordinate transformation between the basic measurement system and the control and guidance system. From the field-of-view angle of the vision system unit and the world coordinates of the control and guidance system obtained through coordinate transformation, the azimuth information of the measurement area that the camera points at can be attained. The momentary horizontal and vertical changes of the mechanical displacement movement are also considered and calculated to provide real-time azimuth information of the pointed measurement area, by which the motorized theodolites move accordingly. This methodology realizes the predetermined location of the laser points within the camera-pointed scope, so it accelerates the measuring process and implements approximate guidance instead of manual operation. The simulation results show that the proposed method of automatic guidance is effective and feasible, providing good tracking performance for the predetermined location of laser points.
MonoSLAM: real-time single camera SLAM.
Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier
2007-06-01
We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
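A hedged sketch of the constant-velocity ("smooth camera movement") prediction step used by filters of this kind is given below; the full EKF covariance propagation and the feature measurement update that MonoSLAM performs are omitted.

```python
# Sketch: constant-velocity motion-model prediction for a camera pose
# (position r, orientation quaternion q, linear velocity v, angular velocity w).
import numpy as np

def quat_mul(a, b):                 # Hamilton product, (w, x, y, z) convention
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_from_rotvec(w, dt):
    angle = np.linalg.norm(w) * dt
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = w / np.linalg.norm(w)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def predict(r, q, v, w, dt):
    """Propagate the pose forward assuming the velocities stay constant over dt."""
    r_new = r + v * dt
    q_new = quat_mul(q, quat_from_rotvec(w, dt))
    return r_new, q_new / np.linalg.norm(q_new), v, w
```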
Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles
Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.
2017-01-01
Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (e.g., bridges and buildings) is often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985
Motion camera based on a custom vision sensor and an FPGA architecture
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel
1998-09-01
A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated to a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC computer which is used for high level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing like spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event- address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.
Localization and Mapping Using a Non-Central Catadioptric Camera System
NASA Astrophysics Data System (ADS)
Khurana, M.; Armenakis, C.
2018-05-01
This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms, owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system which consists of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and a camera into a catadioptric system. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be considered as one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku
2017-01-01
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
Joint Video Stitching and Stabilization from Moving Cameras.
Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef
2016-09-08
In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image stitching methods to shaking videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a spatio-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, which produces features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted for the handling of scene parallax. Experimental results are provided to demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" has been developed for Adobe After Effects CC 2015 to show the processed videos.
Foreground extraction for moving RGBD cameras
NASA Astrophysics Data System (ADS)
Junejo, Imran N.; Ahmed, Naveed
2017-02-01
In this paper, we propose a simple method to perform foreground extraction for a moving RGBD camera. These cameras have now been available for quite some time, and their popularity is primarily due to their low cost and ease of availability. Although the field of foreground extraction or background subtraction has been explored by computer vision researchers for a long time, depth-based subtraction is relatively new and has not yet been extensively addressed. Most current methods make heavy use of geometric reconstruction, making the solutions quite restrictive. In this paper, we make novel use of RGB and RGBD data: from the RGB frame, we extract corner features (FAST) and then represent these features with the histogram of oriented gradients (HoG) descriptor. We train a non-linear SVM on these descriptors. During the test phase, we make use of the fact that the foreground object has a distinct depth ordering with respect to the rest of the scene. That is, we use the positively classified FAST features on the test frame to initiate a region growing that obtains an accurate segmentation of the foreground object from just the RGBD data. We demonstrate the proposed method on synthetic datasets, with encouraging quantitative and qualitative results.
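The training side of the described pipeline (FAST corners, HOG descriptors, non-linear SVM) might look roughly like the sketch below; the patch size, HOG geometry, and the source of training labels are assumptions, and the depth-guided region growing used at test time is not shown.

```python
# Sketch: FAST corners + HOG descriptors + SVM classification of foreground corners.
import cv2
import numpy as np
from sklearn.svm import SVC

PATCH = 32                                    # assumed patch size around each corner
hog = cv2.HOGDescriptor((PATCH, PATCH), (16, 16), (8, 8), (8, 8), 9)
fast = cv2.FastFeatureDetector_create(threshold=25)

def corner_descriptors(gray):
    """FAST corners plus a HOG descriptor of the patch around each corner."""
    points, feats = [], []
    for kp in fast.detect(gray, None):
        x, y = map(int, kp.pt)
        patch = gray[y - PATCH // 2: y + PATCH // 2, x - PATCH // 2: x + PATCH // 2]
        if patch.shape == (PATCH, PATCH):
            points.append((x, y))
            feats.append(hog.compute(patch).ravel())
    return points, np.array(feats)

def train_foreground_svm(gray_frames, foreground_masks):
    """Label each corner by whether it falls inside a given foreground mask."""
    X, y = [], []
    for gray, mask in zip(gray_frames, foreground_masks):
        pts, feats = corner_descriptors(gray)
        for (px, py), f in zip(pts, feats):
            X.append(f)
            y.append(int(mask[py, px] > 0))
    return SVC(kernel="rbf", gamma="scale").fit(np.array(X), np.array(y))
```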
Forensics for flatbed scanners
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Franz, Elke; Winkler, Antje
2007-02-01
Within this article, we investigate possibilities for identifying the origin of images acquired with flatbed scanners. A current method for the identification of digital cameras takes advantage of image sensor noise, strictly speaking, the spatial noise. Since flatbed scanners and digital cameras use similar technologies, the utilization of image sensor noise for identifying the origin of scanned images seems to be possible. As characterization of flatbed scanner noise, we considered array reference patterns and sensor line reference patterns. However, there are particularities of flatbed scanners which we expect to influence the identification. This was confirmed by extensive tests: Identification was possible to a certain degree, but less reliable than digital camera identification. In additional tests, we simulated the influence of flatfielding and down scaling as examples for such particularities of flatbed scanners on digital camera identification. One can conclude from the results achieved so far that identifying flatbed scanners is possible. However, since the analyzed methods are not able to determine the image origin in all cases, further investigations are necessary.
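One of the noise characterizations mentioned above, the sensor line reference pattern, can be sketched roughly as follows: denoise several scans from the same device, average the residuals, and collapse them along the scan direction. The denoising filter and the row/column orientation are assumptions, not the authors' exact procedure.

```python
# Sketch: sensor-noise reference patterns for scanner identification.
import numpy as np
from scipy.ndimage import gaussian_filter

def line_reference_pattern(scans):
    """scans: list of 2D grayscale arrays acquired with the same flatbed scanner."""
    residuals = [s.astype(float) - gaussian_filter(s.astype(float), sigma=2)
                 for s in scans]
    avg_residual = np.mean(residuals, axis=0)     # array reference pattern
    return avg_residual.mean(axis=0)              # collapse along the scan direction

def correlation(pattern_a, pattern_b):
    """Normalized correlation between a test image's pattern and a reference pattern."""
    a = pattern_a - pattern_a.mean()
    b = pattern_b - pattern_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```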
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2015-05-01
As is well known, the application of passive THz cameras to security problems is a very promising approach. It allows concealed objects to be seen without contact with a person, and the camera poses no danger to the person. In previous papers, we demonstrated a new possibility of using the passive THz camera to observe a temperature difference on the human skin when this difference is caused by different temperatures inside the body. To prove the validity of our statement, we perform a similar physical experiment using an IR camera. We show that a temperature trace appears on the skin of the human body, caused by the change of temperature inside the body due to water drinking. We use both the software supplied with a commercially available IR camera manufactured by Flir Corp. and our own computer code for processing these images. Using both codes, we clearly demonstrate the change of human body skin temperature induced by water drinking. The phenomena shown are very important for the detection of forbidden samples and substances concealed inside the human body using non-destructive inspection without X-rays; earlier, we demonstrated this possibility using THz radiation. The experiments carried out can be applied to counter-terrorism problems. We developed original filters for the computer processing of images captured by IR cameras; their application results in an enhancement of the effective temperature resolution of the cameras.
Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System
NASA Astrophysics Data System (ADS)
Stebner, K.; Wieden, A.
2014-03-01
Dynamic camera systems with moving parts are difficult to handle in the photogrammetric workflow, because it is not ensured that the dynamics are constant over the recording period. Minimal changes in the camera's orientation greatly influence the projection of oblique images. In this publication these effects - originating from the kinematic chain of a dynamic camera system - are analysed and validated. A member of the Modular Airborne Camera System family - MACS-TumbleCam - consisting of a vertical-viewing and a tumbling oblique camera was used for this investigation. The focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of the oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique dataset ranges from 1.5 pixels when using all images of the image block to 0.3 pixels when using only adjacent images.
Motionless active depth from defocus system using smart optics for camera autofocus applications
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2016-04-01
This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.
2001-04-07
ISS002-E-5511 (07 April 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, pauses from moving through the Node 1 / Unity module of the International Space Station (ISS) to pose for a photograph. This image was recorded with a digital still camera.
ISS Passes over Hurricane_Irma_GMT248-1510
2017-09-05
The International Space Station’s external cameras captured a dramatic view of Hurricane Irma as it moved across the Atlantic Ocean Sept. 5. The National Hurricane Center had recently upgraded Irma to a Category 5 storm with hurricane warnings issued across the Caribbean.
Streak camera receiver definition study
NASA Technical Reports Server (NTRS)
Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.
1990-01-01
Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
Vehicular camera pedestrian detection research
NASA Astrophysics Data System (ADS)
Liu, Jiahui
2018-03-01
With the rapid development of science and technology, highway traffic and transportation have become far more convenient. At the same time, however, traffic safety accidents occur more and more frequently in China, so protecting people's safety and property while facilitating travel has become a top priority. Real-time, accurate information about pedestrians and the driving environment is obtained through a vehicular camera, which is used to detect and track moving targets ahead of the vehicle; this is a popular topic in intelligent vehicle safety, autonomous navigation, and traffic system research. Based on pedestrian video obtained by the vehicular camera, this paper studies pedestrian detection and tracking and the associated algorithms.
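A common off-the-shelf baseline for this kind of on-board pedestrian detection is OpenCV's HOG descriptor with its bundled pedestrian SVM; the sketch below is that generic baseline, not the detector evaluated in the paper.

```python
# Sketch: HOG + linear-SVM pedestrian detection baseline.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame_bgr):
    """Return bounding boxes of likely pedestrians in a single video frame."""
    boxes, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return [(x, y, w, h) for (x, y, w, h), s in zip(boxes, weights) if float(s) > 0.5]
```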
Bennett, C.L.
1996-07-23
An imaging Fourier transform spectrometer is described having a Fourier transform infrared spectrometer providing a series of images to a focal plane array camera. The focal plane array camera is clocked to a multiple of zero crossing occurrences as caused by a moving mirror of the Fourier transform infrared spectrometer and as detected by a laser detector such that the frame capture rate of the focal plane array camera corresponds to a multiple of the zero crossing rate of the Fourier transform infrared spectrometer. The images are transmitted to a computer for processing such that representations of the images as viewed in the light of an arbitrary spectral "fingerprint" pattern can be displayed on a monitor or otherwise stored and manipulated by the computer. 2 figs.
Privacy Protection by Masking Moving Objects for Security Cameras
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
Because of the increasing number of security cameras, it is crucial to establish a system that protects the privacy of objects in the recorded images. To this end, we propose a framework of image processing and data hiding for security monitoring and privacy protection. First, we state the requirements of the proposed monitoring systems and suggest a possible implementation that satisfies those requirements. The underlying concept of our proposed framework is as follows: (1) in the recorded images, the objects whose privacy should be protected are deteriorated by appropriate image processing; (2) the original objects are encrypted and watermarked into the output image, which is encoded using an image compression standard; (3) real-time processing is performed such that no future frame is required to generate an output bitstream. It should be noted that in this framework, anyone can observe the decoded image that includes the deteriorated objects that are unrecognizable or invisible. On the other hand, for crime investigation, this system allows a limited number of users to observe the original objects by using a special viewer that decrypts and decodes the watermarked objects with a decoding password. Moreover, the special viewer allows us to select the objects to be decoded and displayed. We provide an implementation example, experimental results, and performance evaluations to support our proposed framework.
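A simplified sketch of step (1), deteriorating moving objects before the stream is published, is shown below using background subtraction and pixelation; the encryption and watermarking of the original pixels (steps 2 and 3) are not implemented here.

```python
# Sketch: pixelate moving objects so the published stream protects their privacy.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def mask_moving_objects(frame, block=16, min_area=400):
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = frame.copy()
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        roi = out[y:y + h, x:x + w]
        small = cv2.resize(roi, (max(1, w // block), max(1, h // block)))
        out[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                           interpolation=cv2.INTER_NEAREST)
    return out   # safe to publish; the original pixels would be encrypted and watermarked
```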
Space telescope low scattered light camera - A model
NASA Technical Reports Server (NTRS)
Breckinridge, J. B.; Kuper, T. G.; Shack, R. V.
1982-01-01
A design approach for a camera to be used with the space telescope is given. Camera optics relay the system pupil onto an annular Gaussian ring apodizing mask to control scattered light. One- and two-dimensional models of ripple on the primary mirror were calculated. Scattered light calculations using ripple amplitudes between wavelength/20 and wavelength/200, with spatial correlations of the ripple across the primary mirror between 0.2 and 2.0 centimeters, indicate that the detection of an object a billion times fainter than a bright source in the field is possible. Detection of a Jovian-type planet in orbit about alpha Centauri with a camera on the space telescope may be possible.
Methods for identification of images acquired with digital cameras
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki
2001-02-01
From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
ERIC Educational Resources Information Center
Lanier, Jaron
2001-01-01
Describes tele-immersion, a new medium for human interaction enabled by digital technologies. It combines the display and interaction techniques of virtual reality with new vision technologies that transcend the traditional limitations of a camera. Tele-immersion stations observe people as moving sculptures without favoring a single point of view.…
Who cares about a camera if you are not speeding?
DOT National Transportation Integrated Search
1999-06-19
Speeding is a hazard on both busy highways and city streets, but regular police enforcement does not work very well since dense and fast moving traffic makes it both difficult and dangerous for officers to make traditional traffic stops. The paper di...
Real-time color measurement using active illuminant
NASA Astrophysics Data System (ADS)
Tominaga, Shoji; Horiuchi, Takahiko; Yoshimura, Akihiko
2010-01-01
This paper proposes a method for real-time color measurement using an active illuminant. A synchronous measurement system is constructed by combining a high-speed active spectral light source and a high-speed monochrome camera. The light source is a programmable spectral source capable of emitting an arbitrary spectrum at high speed. The essential advantage of this system is that it captures spectral images at high frame rates without using filters. The new method of real-time colorimetry differs from traditional methods based on colorimeters or spectrometers. We project the color-matching functions onto an object surface as spectral illuminants. Then we can obtain the CIE-XYZ tristimulus values directly from the camera outputs at every point on the surface. We describe the principle of our colorimetric technique based on projection of the color-matching functions and the procedure for realizing a real-time measurement system for a moving object. In an experiment, we examine the performance of real-time color measurement for a static object and a moving object.
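A minimal sketch of the measurement principle: if the source emits spectra shaped like the CIE color-matching functions, each monochrome frame is, up to one global scale factor, the corresponding tristimulus image. The function names and the white-reference calibration step below are assumptions for illustration.

```python
# Frames captured under x-bar, y-bar, z-bar shaped illuminants -> XYZ images.
import numpy as np

def tristimulus_from_frames(frame_x, frame_y, frame_z, white_patch_mask,
                            white_Y=100.0):
    # One global scale factor, fixed so the white reference has Y = white_Y.
    k = white_Y / frame_y[white_patch_mask].mean()
    X = k * frame_x
    Y = k * frame_y
    Z = k * frame_z
    return X, Y, Z

# Chromaticity at any pixel then follows as x = X/(X+Y+Z), y = Y/(X+Y+Z).
```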
Online tracking of outdoor lighting variations for augmented reality with moving cameras.
Liu, Yanli; Granier, Xavier
2012-04-01
In augmented reality, one of the key tasks for achieving a convincing visual consistency between virtual objects and video scenes is to have coherent illumination along the whole sequence. As outdoor illumination is largely dependent on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature points extracted from each frame. To address the inevitable feature misalignments, a set of constraints is introduced to select the most reliable points. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated by an optimization process. We validate our technique on a set of real-life videos and show that the results with our estimations are visually coherent along the video sequences.
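A hedged sketch of the kind of per-frame estimate involved: with a simple Lambertian model, the intensity of a planar feature point is roughly a*max(0, n·s) + b, where a and b stand in for the relative sunlight and skylight intensities; solving for (a, b) over the reliable feature points gives one estimate per frame. The paper's reliability constraints and spatio-temporal optimization are not reproduced here.

```python
import numpy as np

def estimate_sun_sky(intensities, normals, sun_dir):
    """intensities: (N,), normals: (N, 3), sun_dir: (3,) unit vector."""
    shading = np.clip(normals @ sun_dir, 0.0, None)          # max(0, n . s)
    A = np.stack([shading, np.ones_like(shading)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return a, b   # relative sunlight and skylight intensities
```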
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.
2014-06-01
As is well known, application of the passive THz camera to security problems is very promising: it allows seeing concealed objects without contact with a person, and the camera is not dangerous to a person. We demonstrate a new possibility of using the passive THz camera to observe a temperature difference on the human skin when this difference is caused by different temperatures inside the body. We discuss physical experiments in which a person drinks hot, warm, and cold water, and eats. After computer processing of images captured by the passive THz camera TS4, we can see a pronounced temperature trace on the skin of the human body. To confirm the validity of our statement, we carried out a similar physical experiment using an IR camera. Our investigation extends the field of application of the passive THz camera to the detection of objects concealed in the human body, because the difference in temperature between the object and parts of the human body is reflected on the human skin. However, modern passive THz cameras do not have enough temperature resolution to see this difference. That is why we use computer processing to enhance the camera resolution for this application. We consider images produced by passive THz cameras manufactured by Microsemi Corp. and ThruVision Corp.
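The abstract does not specify the processing used; one common, hedged way to make small temperature differences visible in noisy frames is temporal averaging (noise falls roughly with the square root of the number of frames) followed by a percentile contrast stretch, sketched below.

```python
import numpy as np

def enhance(frames, lo_pct=2, hi_pct=98):
    # Average many co-registered frames, then stretch the central intensity range.
    avg = np.mean(np.asarray(frames, dtype=np.float64), axis=0)
    lo, hi = np.percentile(avg, [lo_pct, hi_pct])
    return np.clip((avg - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```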
Speed cameras for the prevention of road traffic injuries and deaths.
Wilson, Cecilia; Willis, Charlene; Hendrikz, Joan K; Le Brocque, Robyne; Bellamy, Nicholas
2010-11-10
It is estimated that by 2020, road traffic crashes will have moved from ninth to third in the world ranking of burden of disease, as measured in disability adjusted life years. The prevention of road traffic injuries is of global public health importance. Measures aimed at reducing traffic speed are considered essential to preventing road injuries; the use of speed cameras is one such measure. To assess whether the use of speed cameras reduces the incidence of speeding, road traffic crashes, injuries and deaths, we searched the following electronic databases covering all available years up to March 2010: the Cochrane Library, MEDLINE (WebSPIRS), EMBASE (WebSPIRS), TRANSPORT, IRRD (International Road Research Documentation), TRANSDOC (European Conference of Ministers of Transport databases), Web of Science (Science and Social Science Citation Index), PsycINFO, CINAHL, EconLit, WHO database, Sociological Abstracts, Dissertation Abstracts, and Index to Theses. Randomised controlled trials, interrupted time series and controlled before-after studies that assessed the impact of speed cameras on speeding, road crashes, crashes causing injury and fatalities were eligible for inclusion. We independently screened studies for inclusion, extracted data, assessed methodological quality, reported study authors' outcomes and, where possible, calculated standardised results based on the information available in each study. Due to considerable heterogeneity between and within included studies, a meta-analysis was not appropriate. Thirty-five studies met the inclusion criteria. Compared with controls, the relative reduction in average speed ranged from 1% to 15% and the reduction in the proportion of vehicles speeding ranged from 14% to 65%. In the vicinity of camera sites, the pre/post reductions ranged from 8% to 49% for all crashes and 11% to 44% for fatal and serious injury crashes. Compared with controls, the relative improvement in pre/post injury crash proportions ranged from 8% to 50%. Despite the methodological limitations and the variability in the degree of signal to noise effect, the consistency of reported reductions in speed and crash outcomes across all studies shows that speed cameras are a worthwhile intervention for reducing the number of road traffic injuries and deaths. However, whilst the evidence base clearly demonstrates a positive direction of the effect, the overall magnitude of this effect is currently not deducible due to heterogeneity and a lack of methodological rigour. More studies of a scientifically rigorous and homogeneous nature are necessary to determine the magnitude of this effect.
SEOS frame camera applications study
NASA Technical Reports Server (NTRS)
1974-01-01
A research and development satellite is discussed which will provide opportunities for observation of transient phenomena that fall within the fixed viewing circle of the spacecraft. Possible applications of frame cameras for SEOS are evaluated, and the computed lens characteristics for each camera are listed.
HIGH SPEED KERR CELL FRAMING CAMERA
Goss, W.C.; Gilley, L.F.
1964-01-01
The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 x 10^-8 seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High dynamic range imaging is an important technology for photoelectric information acquisition: it provides a higher dynamic range and more image detail, and better reflects the real environment's light and color information. Currently, high dynamic range image synthesis based on different-exposure image sequences cannot adapt to dynamic scenes; it fails to overcome the effects of moving targets, resulting in ghosting. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system was proposed. Firstly, different-exposure image sequences were captured with the camera array; the deviation between images was obtained using a derivative optical flow method based on color gradients, and the images were aligned. Then, the high dynamic range image fusion weighting function was established by combining the inverse camera response function with the deviation between images, and applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images in dynamic scenes and achieves good results.
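A minimal sketch of the radiance-domain fusion this builds on: each aligned exposure is linearized with the inverse camera response, divided by its exposure time, and blended with weights that favour well-exposed pixels. The optical-flow alignment and the paper's deviation-based weighting are assumed to have been applied beforehand; the hat weight below is a common stand-in.

```python
import numpy as np

def fuse_hdr(images, exposure_times, inv_crf):
    """images: list of aligned uint8 arrays; inv_crf: 256-entry numpy lookup table."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        # Hat weight: 1 at mid-gray, 0 at the clipped ends of the range.
        w = 1.0 - np.abs(img.astype(np.float64) / 255.0 - 0.5) * 2.0
        radiance = inv_crf[img] / t        # linearize, then normalize by exposure
        num += w * radiance
        den += w
    return num / np.maximum(den, 1e-6)
```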
2001-11-26
KENNEDY SPACE CENTER, Fla. -- A piece of equipment for the Hubble Space Telescope servicing mission is moved inside Hangar AE, Cape Canaveral. In the canister is the Advanced Camera for Surveys (ACS). The ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and the Space Telescope Science Institute. The goal of the mission, STS-109, is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002.
Machine vision based teleoperation aid
NASA Technical Reports Server (NTRS)
Hoff, William A.; Gatrell, Lance B.; Spofford, John R.
1991-01-01
When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive (requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator). A machine vision based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators was conducted and showed that task accuracies were significantly greater with than without this aid.
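An illustrative sketch (not the authors' system) of the pose step: the object's pose is recovered from known 3D model points and their detected image locations with OpenCV's solvePnP, and the translation error is formatted as overlay text. Model points, image points and camera intrinsics are assumed inputs.

```python
import cv2
import numpy as np

def object_pose(model_pts, image_pts, K, dist=None):
    """model_pts: (N, 3) object-frame points; image_pts: (N, 2) pixel detections."""
    ok, rvec, tvec = cv2.solvePnP(np.asarray(model_pts, np.float64),
                                  np.asarray(image_pts, np.float64),
                                  K, dist)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec

def overlay_text(tvec, desired_tvec):
    # Difference between current and desired object position, camera frame.
    dx, dy, dz = np.asarray(desired_tvec) - tvec.ravel()
    return f"move tool dx={dx:+.1f} dy={dy:+.1f} dz={dz:+.1f} (mm, camera frame)"
```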
Sensor fusion of cameras and a laser for city-scale 3D reconstruction.
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-11-04
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.
[Virtual reality in ophthalmological education].
Wagner, C; Schill, M; Hennen, M; Männer, R; Jendritza, B; Knorz, M C; Bender, H J
2001-04-01
We present a computer-based medical training workstation for the simulation of intraocular eye surgery. The surgeon manipulates two original instruments inside a mechanical model of the eye. The instrument positions are tracked by CCD cameras and monitored by a PC which renders the scenery using a computer-graphic model of the eye and the instruments. The simulator incorporates a model of the operation table, a mechanical eye, three CCD cameras for the position tracking, the stereo display, and a computer. The three cameras are mounted under the operation table from where they can observe the interior of the mechanical eye. Using small markers the cameras recognize the instruments and the eye. Their position and orientation in space is determined by stereoscopic back projection. The simulation runs with more than 20 frames per second and provides a realistic impression of the surgery. It includes the cold light source which can be moved inside the eye and the shadow of the instruments on the retina which is important for navigational purposes.
Fast Fiber-Coupled Imaging Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas
HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full scale 1024 pixel 100 MegaFrames/s fiber coupled camera with 12 or 14 bits, and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber optically-coupled, imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100 pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority over increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit-depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was $53.31, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first generation prototype system. We experimentally observed backlit high speed fan blades in initial camera testing and then followed that with full movies and streak images of free flowing high speed plasma jets (at 30-50 km/s). Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques are inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024 channel camera at its own facility, and a second plasma community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.
Senthil Kumar, S; Suresh Babu, S S; Anand, P; Dheva Shantha Kumari, G
2012-06-01
The purpose of our study was to fabricate an in-house web-camera-based automatic continuous patient-movement monitoring device and to control the movement of patients during EXRT. The web-camera-based patient movement monitoring device consists of a computer, a digital web camera, a mounting system, a breaker circuit, a speaker, and a visual indicator. The computer is used to control and analyze the patient movement using indigenously developed software. The speaker and the visual indicator are placed in the console room to indicate the positional displacement of the patient. Studies were conducted on a phantom and 150 patients with different types of cancers. Our preliminary clinical results indicate that our device is highly reliable and can accurately report small movements of the patients in all directions. The results demonstrated that the device was able to detect patient movements with a sensitivity of about 1 mm. When a patient moves, the receiver activates the circuit and an audible warning sound is produced in the console room. Through real-time measurements, an audible alarm can alert the radiation technologist to stop the treatment if the user-defined positional threshold is violated. Simultaneously, the electrical circuit to the teletherapy machine will be activated and radiation will be halted. Patient movement during the course of radiotherapy was studied. The beam is halted automatically when the threshold level of the system is exceeded. By using the threshold provided in the system, it is possible to monitor the patient continuously within certain fixed limits. An additional benefit is that it has reduced the tension and stress of a treatment team associated with treating patients who are not immobilized. It also enables the technologists to do their work more efficiently, because they don't have to continuously monitor patients with as much scrutiny as was required. © 2012 American Association of Physicists in Medicine.
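A hedged sketch of the threshold-based detection the abstract describes, using simple frame differencing; the pixel-to-millimetre scale, the thresholds and the interlock call are placeholders, not the authors' implementation.

```python
import cv2

PIXELS_PER_MM = 4.0        # assumed calibration
THRESHOLD_MM = 1.0         # alarm threshold matching the stated sensitivity

def displacement_mm(reference_gray, current_gray):
    diff = cv2.absdiff(reference_gray, current_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    moved = cv2.findNonZero(mask)
    if moved is None:
        return 0.0
    # Extent of the changed region as a crude displacement estimate.
    x, y, w, h = cv2.boundingRect(moved)
    return max(w, h) / PIXELS_PER_MM

# In the monitoring loop: if displacement_mm(ref, cur) > THRESHOLD_MM,
# sound the console alarm and signal the teletherapy beam interlock.
```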
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site] Figure 1 Annotated
At the Gusev site recently, skies have been very dusty, and on its 421st sol (March 10, 2005) NASA's Mars Exploration Rover Spirit spied two dust devils in action. This pair of images is from the rover's rear hazard-avoidance camera. Views of the Gusev landing region from orbit show many dark streaks across the landscape -- tracks where dust devils have removed surface dust to show relatively darker soil below -- but this is the first time Spirit has photographed an active dust devil. Scientists are considering several causes of these small phenomena. Dust devils often occur when the Sun heats the surface of Mars. Warmed soil and rocks heat the layer of atmosphere closest to the surface, and the warm air rises in a whirling motion, stirring dust up from the surface like a miniature tornado. Another possibility is that a flow structure might develop over craters as wind speeds increase. As winds pick up, turbulence eddies and rotating columns of air form. As these columns grow in diameter they become taller and gain rotational speed. Eventually they become self-sustaining and the wind blows them down range. One sol before this image was taken, power output from Spirit's solar panels went up by about 50 percent when the amount of dust on the panels decreased. Was this a coincidence, or did a helpful dust devil pass over Spirit and lift off some of the dust? By comparing the separate images from the rover's different cameras, team members estimate that the dust devils moved about 500 meters (1,640 feet) in the 155 seconds between the navigation camera and hazard-avoidance camera frames; that equates to about 3 meters per second (7 miles per hour). The dust devils appear to be about 1,100 meters (almost three-quarters of a mile) from the rover.
NASA Astrophysics Data System (ADS)
Haubeck, K.; Prinz, T.
2013-08-01
The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but - when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions - they also raise the problem that two single aerial images do not always have the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, however, the accuracy of the DTM depends directly on the UAV flight altitude; the standard error model sketched below makes this dependence explicit.
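For a normal-case stereo pair the dependence can be written explicitly; with symbols assumed here (Z flying height above ground, B stereo base, f focal length in pixels, sigma_p the parallax measurement precision):

```latex
\sigma_Z \;\approx\; \frac{Z^{2}}{B\,f}\,\sigma_{p}
        \;=\; \frac{Z}{B}\cdot\frac{Z}{f}\,\sigma_{p}
```

so with the small, fixed base B of such a stereo camera the height error grows roughly quadratically with flying height Z.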
The application of holography as a real-time three-dimensional motion picture camera
NASA Technical Reports Server (NTRS)
Kurtz, R. L.
1973-01-01
A historical introduction to holography is presented, as well as a basic description of sideband holography for stationary objects. A brief theoretical development of both time-dependent and time-independent holography is also provided, along with an analytical and intuitive discussion of a unique holographic arrangement which allows the resolution of front surface detail from an object moving at high speeds. As an application of such a system, a real-time three-dimensional motion picture camera system is discussed and the results of a recent demonstration of the world's first true three-dimensional motion picture are given.
Lunar UV-visible-IR mapping interferometric spectrometer
NASA Technical Reports Server (NTRS)
Smith, W. Hayden; Haskin, L.; Korotev, R.; Arvidson, R.; Mckinnon, W.; Hapke, B.; Larson, S.; Lucey, P.
1992-01-01
An ultraviolet-visible-infrared mapping digital array scanned interferometer for lunar compositional surveys was developed. The research has defined a no-moving-parts, low-weight and low-power, high-throughput, and electronically adaptable digital array scanned interferometer that achieves measurement objectives encompassing and improving upon all the requirements defined by the LEXSWIG for lunar mineralogical investigation. In addition, LUMIS provides new and important ultraviolet spectral mapping, a high-spatial-resolution line scan camera, and multispectral camera capabilities. An instrument configuration optimized for spectral mapping and imaging of the lunar surface is described, and spectral results in support of the instrument design are provided.
Gaze control for an active camera system by modeling human pursuit eye movements
NASA Astrophysics Data System (ADS)
Toelg, Sebastian
1992-11-01
The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., those moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition it exhibits several interesting abilities well known from psychophysics, such as catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.
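A hedged sketch of the pursuit-style feedback described: the pan/tilt velocity command combines the target's image velocity (smooth pursuit) with a gain on its position error, while a large error triggers a saccade-like jump instead. Gains and thresholds are illustrative, not from the paper.

```python
def gaze_command(err_px, target_vel_px, k_pos=0.5, k_vel=1.0, saccade_thresh=80):
    """err_px: target offset from the image centre; target_vel_px: its image velocity."""
    if abs(err_px) > saccade_thresh:
        return ("saccade", err_px)                                 # reposition in one jump
    return ("smooth", k_vel * target_vel_px + k_pos * err_px)      # pursuit velocity command
```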
Surveyor 3: Bacterium isolated from lunar retrieved television camera
NASA Technical Reports Server (NTRS)
Mitchell, F. J.; Ellis, W. L.
1972-01-01
Microbial analysis was the first of several studies of the retrieved camera and was performed immediately after the camera was opened. The emphasis of the analysis was placed upon isolating microorganisms that could be potentially pathogenic for man. Every step in the retrieval of the Surveyor 3 television camera was analyzed for possible contamination sources, including camera contact by the astronauts, ingassing in the lunar and command modules during the mission or at splashdown, and handling during quarantine, disassembly, and analysis at the Lunar Receiving Laboratory.
Photogrammetry of the Vatican Obelisk Using the Sun
NASA Astrophysics Data System (ADS)
Sigismondi, Costantino
2016-05-01
The Vatican Obelisk was moved to St. Peter's Square by Domenico Fontana in 1586. The measurement of its height using the shadow on the floor is compared with Fontana's original data and with a digital photo, corrected for perspective. A pincushion distortion is found in the camera.
A Vision-Based Motion Sensor for Undergraduate Laboratories.
ERIC Educational Resources Information Center
Salumbides, Edcel John; Maristela, Joyce; Uy, Alfredson; Karremans, Kees
2002-01-01
Introduces an alternative method to determine the mechanics of a moving object that uses computer vision algorithms with a charge-coupled device (CCD) camera as a recording device. Presents two experiments, pendulum motion and terminal velocity, to compare results of the alternative and conventional methods. (YDS)
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA). Read more: www.nasa.gov/feature/goddard/from-a-million-miles-away-na...
2002-09-26
KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.
2002-09-26
KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.
Low Cost Efficient Deliverying Video Surveillance Service to Moving Guard for Smart Home.
Gualotuña, Tatiana; Macías, Elsa; Suárez, Álvaro; Fonseca C., Efraín R.; Rivadeneira, Andrés
2018-03-01
Low-cost video surveillance systems are attractive for Smart Home applications (especially in emerging economies). Those systems use the flexibility of the Internet of Things to operate the video camera only when an intrusion is detected. We are the only ones that focus on the design of protocols based on intelligent agents to communicate the video of an intrusion in real time to the guards over wireless or mobile networks. The goal is to communicate, in real time, the video to the guards, who may be moving towards the smart home. However, this communication suffers from sporadic disruptions that make control difficult and drastically reduce user satisfaction and the operability of the system. In a novel way, we have designed a generic software architecture based on design patterns that can be adapted to any hardware in a simple way. The hardware used is of very low economic cost; the software frameworks are free. In the experimental tests we have shown that it is possible to communicate to the moving guard intrusion notifications (by e-mail and by instant messaging) and the first video frames in less than 20 s. In addition, we automatically recovered the video frames lost in the disruptions in a way transparent to the user, we supported vertical handover processes, and we could save energy of the smartphone's battery. Most important, however, was the high satisfaction of the people who used the system.
NASA Astrophysics Data System (ADS)
Dan, Luo; Ohya, Jun
2010-02-01
Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body parts, and the hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were performed to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in the hand blob. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.
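A hedged sketch of the PCA-based recognition step: trajectories resampled to a fixed length are projected onto principal components learned from the training set, and a query is labelled by its nearest neighbour in that subspace. This illustrates the general approach, not the authors' exact feature set.

```python
import numpy as np
from sklearn.decomposition import PCA

def train(trajectories, labels, n_components=10):
    # Each trajectory: (T, 2) array resampled to the same length T.
    X = np.stack([t.ravel() for t in trajectories])
    pca = PCA(n_components=n_components).fit(X)
    return pca, pca.transform(X), np.asarray(labels)

def classify(trajectory, pca, train_proj, train_labels):
    q = pca.transform(trajectory.ravel()[None, :])
    return train_labels[np.argmin(np.linalg.norm(train_proj - q, axis=1))]
```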
Blind image deblurring based on trained dictionary and curvelet using sparse representation
NASA Astrophysics Data System (ADS)
Feng, Liang; Huang, Qian; Xu, Tingfa; Li, Shao
2015-04-01
Motion blur is one of the most significant and common artifacts causing poor image quality in digital photography, and many factors contribute to it. In the imaging process, if objects move quickly in the scene or the camera moves during the exposure interval, the image of the scene blurs along the direction of relative motion between the camera and the scene, e.g. due to camera shake or atmospheric turbulence. Recently, the sparse representation model has been widely used in signal and image processing as an effective way to describe natural images. In this article, a new deblurring approach based on sparse representation is proposed. An overcomplete dictionary learned from training image samples via the K-SVD algorithm is designed to represent the latent image. The motion-blur kernel can be treated as a piecewise smooth function in the image domain, whose support is approximately a thin smooth curve, so we employ curvelets to represent the blur kernel. Both the overcomplete dictionary and the curvelet system are highly sparse, which improves robustness to noise and better satisfies the observer's visual demands. With these two priors, we construct a restoration model for blurred images and solve the optimization problem with the help of an alternating minimization technique. The experimental results show that the method preserves the texture of the original images and effectively suppresses ringing artifacts.
Multi-view video segmentation and tracking for video surveillance
NASA Astrophysics Data System (ADS)
Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj
2009-05-01
Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Correspondent objects are extracted through a homography transform from one view to the other and vice versa. Having found the correspondent objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between the two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
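A sketch of the cross-view correspondence step, assuming a homography H relating the two views (e.g. via their common ground plane) obtained from a prior calibration with matched reference points; object footprint points detected in one camera are mapped into the other to find their correspondents.

```python
import cv2
import numpy as np

def transfer_points(points_view1, H):
    """points_view1: (N, 2) pixel coordinates in view 1 -> (N, 2) in view 2."""
    pts = np.asarray(points_view1, dtype=np.float64).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# H can be estimated once from >= 4 matched ground-plane points:
# H, _ = cv2.findHomography(pts_view1, pts_view2, cv2.RANSAC)
```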
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Vega, Julio; Perdices, Eduardo; Cañas, José M.
2013-01-01
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
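A greatly simplified, hedged sketch of the recursive idea: track the inverse range rho of one feature with a scalar Kalman filter, where each new image taken after a known camera translation b provides a disparity-like measurement d ≈ f·b·rho. The paper's filter operates on image coordinates with an extended Kalman filter; this only conveys the flavour.

```python
def kalman_range_update(rho, P, d_meas, f, b, meas_var=1.0, process_var=1e-6):
    """rho: inverse-range estimate; P: its variance; d_meas: measured disparity (px)."""
    P = P + process_var                      # predict (rho assumed constant)
    H = f * b                                # measurement model: d = f * b * rho
    S = H * P * H + meas_var
    K = P * H / S
    rho = rho + K * (d_meas - H * rho)       # update with the new measurement
    P = (1.0 - K * H) * P
    return rho, P                            # range estimate = 1.0 / rho
```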
Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds
NASA Astrophysics Data System (ADS)
Boerner, R.; Kröhnert, M.
2016-06-01
3D point clouds, acquired by state-of-the-art terrestrial laser scanning techniques (TLS), provide spatial information with accuracies up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images and point cloud data, by fitting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data back-projected to a synthetic projection centre whose exterior orientation parameters match the parameters of the image, assuming an ideal distortion-free camera.
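A sketch of generating such a synthetic image: every 3D point is projected through a pinhole model placed at the assumed exterior orientation of the real camera, and the nearest point per pixel wins (a simple z-buffer). Intrinsics and per-point colours or intensities are assumed inputs; occlusion filling and the paper's matching step are omitted.

```python
import numpy as np

def synthetic_image(points, colors, R, t, K, width, height):
    """points: (N, 3) world coords; colors: (N, 3) uint8; R, t: world->camera pose."""
    cam = (R @ points.T + t.reshape(3, 1)).T           # world -> camera frame
    in_front = cam[:, 2] > 0
    cam, cols = cam[in_front], colors[in_front]
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    img = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    for ui, vi, zi, ci in zip(u[ok], v[ok], cam[ok, 2], cols[ok]):
        if zi < zbuf[vi, ui]:                           # keep the nearest point per pixel
            zbuf[vi, ui] = zi
            img[vi, ui] = ci
    return img
```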
Signal Collection Processing Enhancements
2004-04-01
APPROVED: /s/ ALFREDO VEGA IRIZARRY Project Engineer FOR THE DIRECTOR: /s/ JOSEPH CAMERA, Chief...SPONSORING / MONITORING AGENCY REPORT NUMBER AFRL-IF-RS-TR-2004-108 11. SUPPLEMENTARY NOTES AFRL Project Engineer: Alfredo Vega Irizzary...Mercury representative, Emilio Velilla, suggested several actions to better diagnose the problem. Both boards were moved to different PCI slots. The
Urban Terrain Modeling for Augmented Reality Applications
2001-01-01
pointing (Maybank-92). Almost all such systems are designed to extract the geometry of buildings and to texture these to provide models that can be... Maybank, S. and Faugeras, O. (1992). A Theory of Self-Calibration of a Moving Camera, International Journal of Computer Vision, 8(2):123-151
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.
Mars Global Surveyor MOC Images
NASA Technical Reports Server (NTRS)
1999-01-01
Images of several dust devils were captured by the Mars Orbiter Camera (MOC) during its global geodesy campaign. The images shown were taken two days apart, May 13, 1999 and May 15, 1999. Dust devils are columnar vortices of wind that move across the landscape and pick up dust. They look like mini tornadoes.
NASA Astrophysics Data System (ADS)
Tanada, Jun
1992-08-01
Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras, the specifications were different from those for the cameras of the present-day system, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to the present system. In addition, more-efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.
Detection of obstacles on runway using Ego-Motion compensation and tracking of significant features
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar (Principal Investigator); Camps, Octavia (Principal Investigator); Gandhi, Tarak; Devadiga, Sadashiva
1996-01-01
This report describes a method for obstacle detection on a runway for autonomous navigation and landing of an aircraft. Detection is done in the presence of extraneous features such as tiremarks. Suitable features are extracted from the image and warping using approximately known camera and plane parameters is performed in order to compensate ego-motion as far as possible. Residual disparity after warping is estimated using an optical flow algorithm. Features are tracked from frame to frame so as to obtain more reliable estimates of their motion. Corrections are made to motion parameters with the residual disparities using a robust method, and features having large residual disparities are signaled as obstacles. Sensitivity analysis of the procedure is also studied. Nelson's optical flow constraint is proposed to separate moving obstacles from stationary ones. A Bayesian framework is used at every stage so that the confidence in the estimates can be determined.
Stray light suppression in the Goddard IRAM 2-Millimeter Observer (GISMO)
NASA Astrophysics Data System (ADS)
Sharp, E. H.; Benford, D. J.; Fixsen, D. J.; Moseley, S. H.; Staguhn, J. G.; Wollack, E. J.
2012-09-01
The Goddard-IRAM Superconducting 2 Millimeter Observer (GISMO) is an 8x16 Transition Edge Sensor (TES) array of bolometers built as a pathfinder for TES detector development efforts at NASA Goddard Space Flight Center. GISMO has been used annually at the Institut de Radioastronomie Millimétrique (IRAM) 30 meter telescope since 2007 under engineering time and was opened in the spring of 2012 to the general astronomical community. The spring deployment provided an opportunity to modify elements of the room temperature optics before moving the instrument to its new permanent position in the telescope receiver cabin. This allowed for the possibility to extend the cryostat, introduce improved cold baffling and thus further optimize the stray light performance for final astronomical use of the instrument, which has been completed and validated. We will demonstrate and discuss several of the methods used to quantify and limit the influence of stray light in the GISMO camera.
Stray Light Suppression in the Goddard IRAM 2-Millimeter Observer (GISMO)
NASA Technical Reports Server (NTRS)
Sharp, E. H.; Benford, D. J.; Fixsen, D. J.; Moseley, S. H.; Staguhn, J. G.; Wollack, E. J.
2012-01-01
The Goddard-IRAM Superconducting 2 Millimeter Observer (GISMO) is an 8x16 Transition Edge Sensor (TES) array of bolometers built as a pathfinder for TES detector development efforts at NASA Goddard Space Flight Center. GISMO has been used annually at the Institut de Radioastronomie Millimétrique (IRAM) 30 meter telescope since 2007 under engineering time and was opened in the spring of 2012 to the general astronomical community. The spring deployment provided an opportunity to modify elements of the room temperature optics before moving the instrument to its new permanent position in the telescope receiver cabin. This allowed for the possibility to extend the cryostat, introduce improved cold baffling and thus further optimize the stray light performance for final astronomical use of the instrument, which has been completed and validated. We will demonstrate and discuss several of the methods used to quantify and limit the influence of stray light in the GISMO camera.
NASA Astrophysics Data System (ADS)
Selker, Ted
1983-05-01
A lens-focusing system using a hardware model of a retina (Reticon RL256 light-sensitive array) with a low-cost processor (8085 with 512 bytes of ROM and 512 bytes of RAM) was built. This system was developed and tested on a variety of visual stimuli to demonstrate that: a) an algorithm which moves a lens to maximize the sum of the differences in light level on adjacent light sensors will converge to best focus in all but contrived situations; this is a simpler algorithm than any previously suggested; b) it is feasible to use unmodified video sensor arrays with inexpensive processors to aid video camera use; in the future, software could be developed to extend the processor's usefulness, possibly to track an actor by panning and zooming to give a camera operator increased ease of framing; c) lateral inhibition is an adequate basis for determining best focus, which supports a simple anatomically motivated model of how our brain focuses our eyes.
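A hedged sketch of the adjacent-sensor difference focus measure described above, combined with a simple hill-climbing lens drive. The functions read_sensor_line() and move_lens() are hypothetical stand-ins for the hardware; step counts and sizes are illustrative.

```python
import numpy as np

def focus_measure(line):
    """Sum of absolute differences between adjacent sensor elements."""
    line = np.asarray(line, dtype=float)
    return np.abs(np.diff(line)).sum()

def hill_climb_focus(read_sensor_line, move_lens, steps=50, step_size=1):
    """Move the lens in the direction that increases the focus measure."""
    best = focus_measure(read_sensor_line())
    direction = +1
    for _ in range(steps):
        move_lens(direction * step_size)
        score = focus_measure(read_sensor_line())
        if score < best:
            direction = -direction            # overshot: reverse direction
            move_lens(direction * step_size)  # step back toward the peak
        else:
            best = score
    return best
```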
NASA Technical Reports Server (NTRS)
Vaughan, O. H., Jr.
1990-01-01
Information on the data obtained from the Mesoscale Lightning Experiment flown on STS-26 is provided. The experiment used onboard TV cameras and a 35 mm film camera to obtain data. Data from the 35 mm camera are presented. During the mission, the crew had difficulty locating the various targets of opportunity with the TV cameras. To obtain as much data as possible in the short observational timeline allowed due to other commitments, the crew opted to use the hand-held 35 mm camera.
Detecting Phase Boundaries in Hard-Sphere Suspensions
NASA Technical Reports Server (NTRS)
McDowell, Mark; Rogers, Richard B.; Gray, Elizabeth
2009-01-01
A special image-data-processing technique has been developed for use in experiments that involve observation, via optical microscopes equipped with electronic cameras, of moving boundaries between the colloidal-solid and colloidal-liquid phases of colloidal suspensions of monodisperse hard spheres. During an experiment, it is necessary to adjust the position of a microscope to keep the phase boundary within view. A boundary typically moves at a speed of the order of microns per hour. Because an experiment can last days or even weeks, it is impractical to require human intervention to keep the phase boundary in view. The present image-data-processing technique yields results within a computation time short enough to enable generation of automated-microscope-positioning commands to track the moving phase boundary.
Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment
NASA Astrophysics Data System (ADS)
Helmholz, P.; Long, J.; Munsie, T.; Belton, D.
2016-06-01
Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, often costing less than 500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium below water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included handling the camera in a controlled manner, where the camera was only dunked into the water tank, using 7MP and 12MP resolution, and rough handling, where the camera was shaken as well as removed from the waterproof case, using 12MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7MP (for an average c of 2.720 mm) and 0.0072 mm for 12MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7MP test series, a largest rms value of only 0.450 mm and a largest maximal residual of only 2.5 mm. For the 12MP test series the maximum rms value is 0.653 mm.
Improving the color fidelity of cameras for advanced television systems
NASA Astrophysics Data System (ADS)
Kollarits, Richard V.; Gibbon, David C.
1992-08-01
In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. This comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to those of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five-band camera is 2.5 times smaller than that obtained from the three-band camera. Visual comparison of color matches on a characterized color monitor indicates that the five-band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five-band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal-handling complexity outside the camera. Likewise, it is possible to construct a five-band camera using only three sensors as in conventional cameras. The principal drawback of the five-band camera is the reduction in effective camera sensitivity by about 3/4 of an f-stop.
Evaluation of Moving Object Detection Based on Various Input Noise Using Fixed Camera
NASA Astrophysics Data System (ADS)
Kiaee, N.; Hashemizadeh, E.; Zarrinpanjeh, N.
2017-09-01
Detecting and tracking objects in video has been a research area of interest in the field of image processing and computer vision. This paper evaluates the performance of a novel method for an object detection algorithm in video sequences. This process helps us to know the advantage of the method being used. The proposed framework compares the correct and wrong detection percentages of the algorithm. The method was evaluated with data collected in the field of urban transport, including cars and pedestrians in a fixed-camera situation. The results show that the accuracy of the algorithm decreases as image resolution is reduced.
NASA Technical Reports Server (NTRS)
Johnson, R. W.; Hall, J. B., Jr.
1977-01-01
Ocean dumping of waste materials is a significant environmental concern in the New York Bight. One of these waste materials, sewage sludge, was monitored in an experiment conducted in the New York Bight on September 22, 1975. Remote sensing over controlled sewage sludge dumping included an 11-band multispectral scanner, five multispectral cameras and one mapping camera. Concurrent in situ water samples were taken and acoustical measurements were made of the sewage sludge plumes. Data were obtained for sewage sludge plumes resulting from line (moving barge) and spot (stationary barge) dumps. Multiple aircraft overpasses were made to evaluate temporal effects on the plume signature.
Cloud and aerosol polarimetric imager
NASA Astrophysics Data System (ADS)
Zhang, Junqiang; Shao, Jianbing; Yan, Changxiang
2014-02-01
Cloud and Aerosol Polarimetric Imager (CAPI), China's first onboard cloud and aerosol polarimetric detector, is developed to obtain cloud and aerosol data of the atmosphere and to retrieve aerosol optical and microphysical properties, increasing the retrieval precision of greenhouse gases (GHGs). The instrument is neither a POLarization and Directionality of the Earth's Reflectances (POLDER) nor a Directional Polarimetric Camera (DPC) type polarized camera. It is a multispectral push-broom system using linear detectors, and can acquire spectral data in 5 bands, from ultraviolet (UV) to SWIR, of the same ground feature at the same time without any moving structure. This paper describes the CAPI instrument characteristics, composition, calibration, and recent development.
Phase transitions in traffic flow on multilane roads.
Kerner, Boris S; Klenov, Sergey L
2009-11-01
Based on empirical and numerical analyses of vehicular traffic, the physics of spatiotemporal phase transitions in traffic flow on multilane roads is revealed. The complex dynamics of moving jams observed in single vehicle data measured by video cameras on American highways is explained by the nucleation-interruption effect in synchronized flow, i.e., the spontaneous nucleation of a narrow moving jam with the subsequent jam dissolution. We find that (i) lane changing, vehicle merging from on-ramps, and vehicle leaving to off-ramps result in the different traffic phases (free flow, synchronized flow, and wide moving jams) occurring and coexisting in different road lanes as well as in diverse phase transitions between the traffic phases; (ii) in synchronized flow, the phase transitions are responsible for a non-regular moving jam dynamics that explains measured single vehicle data: moving jams emerge and dissolve randomly at various road locations in different lanes; (iii) the phase transitions also result in diverse expanded general congested patterns occurring at closely located bottlenecks.
Control Program for an Optical-Calibration Robot
NASA Technical Reports Server (NTRS)
Johnston, Albert
2005-01-01
A computer program provides semiautomatic control of a moveable robot used to perform optical calibration of video-camera-based optoelectronic sensor systems that will be used to guide automated rendezvous maneuvers of spacecraft. The function of the robot is to move a target and hold it at specified positions. With the help of limit switches, the software first centers or finds the target. Then the target is moved to a starting position. Thereafter, with the help of an intuitive graphical user interface, an operator types in coordinates of specified positions, and the software responds by commanding the robot to move the target to the positions. The software has capabilities for correcting errors and for recording data from the guidance-sensor system being calibrated. The software can also command that the target be moved in a predetermined sequence of motions between specified positions and can be run in an advanced control mode in which, among other things, the target can be moved beyond the limits set by the limit switches.
Optimising Camera Traps for Monitoring Small Mammals
Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce
2013-01-01
Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats ( Mustela erminea ), feral cats (Felis catus) and hedgehogs ( Erinaceus europaeus ). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790
Depth-Based Detection of Standing-Pigs in Moving Noise Environments.
Kim, Jinseong; Chung, Yeonwoo; Choi, Younchang; Sa, Jaewon; Kim, Heegon; Chung, Yongwha; Park, Daihee; Kim, Hakjae
2017-11-29
In a surveillance camera environment, the detection of standing-pigs in real-time is an important issue towards the final goal of 24-h tracking of individual pigs. In this study, we focus on depth-based detection of standing-pigs with "moving noises", which appear every night in a commercial pig farm, but have not been reported yet. We first apply a spatiotemporal interpolation technique to remove the moving noises occurring in the depth images. Then, we detect the standing-pigs by utilizing the undefined depth values around them. Our experimental results show that this method is effective for detecting standing-pigs at night, in terms of both cost-effectiveness (using a low-cost Kinect depth sensor) and accuracy (i.e., 94.47%), even with severe moving noises occluding up to half of an input depth image. Furthermore, without any time-consuming technique, the proposed method can be executed in real-time.
Robust Motion Vision For A Vehicle Moving On A Plane
NASA Astrophysics Data System (ADS)
Moni, Shankar; Weldon, E. J.
1987-05-01
A vehicle equipped with a computer vision system moves on a plane. We show that, subject to certain constraints, the system can determine the motion of the vehicle (one rotational and two translational degrees of freedom) and the depth of the scene in front of the vehicle. The constraints include limits on the speed of the vehicle, presence of texture on the plane, and absence of pitch and roll in the vehicular motion. It is possible to decouple the problems of finding the vehicle's motion and the depth of the scene in front of the vehicle by using two rigidly connected cameras. One views a field with known depth (i.e. the ground plane) and estimates the motion parameters, and the other determines the depth map knowing the motion parameters. The motion is constrained to be planar to increase robustness. We use a least-squares method of fitting the vehicle motion to observed brightness gradients. With this method, no correspondence between image points needs to be established, and information from the entire image is used in calculating motion. The algorithm performs very reliably on real image sequences and these results have been included. The results compare favourably to the performance of the algorithm of Negahdaripour and Horn [2], where six degrees of freedom are assumed.
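A simplified sketch of the direct, correspondence-free least-squares idea: fit a single global image translation (u, v) to brightness gradients over the whole image via the brightness-constancy constraint. The paper's method additionally recovers a rotation and exploits the planar-ground constraint; those terms are omitted here.

```python
import numpy as np

def global_translation(I0, I1):
    """Least-squares fit of a global image translation (u, v) between two frames."""
    I0 = I0.astype(float); I1 = I1.astype(float)
    Iy, Ix = np.gradient(I0)          # spatial brightness gradients
    It = I1 - I0                      # temporal brightness difference
    # Brightness constancy at every pixel: Ix*u + Iy*v + It = 0.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```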
Automatic alignment method for calibration of hydrometers
NASA Astrophysics Data System (ADS)
Lee, Y. J.; Chang, K. H.; Chon, J. C.; Oh, C. Y.
2004-04-01
This paper presents a new method to automatically align specific scale-marks for the calibration of hydrometers. A hydrometer calibration system adopting the new method consists of a vision system, a stepping motor, and software to control the system. The vision system is composed of a CCD camera and a frame grabber, and is used to acquire images. The stepping motor moves the camera, which is attached to the vessel containing a reference liquid, along the hydrometer. The operating program has two main functions: to process images from the camera to find the position of the horizontal plane and to control the stepping motor for the alignment of the horizontal plane with a particular scale-mark. Any system adopting this automatic alignment method is a convenient and precise means of calibrating a hydrometer. The performance of the proposed method is illustrated by comparing the calibration results using the automatic alignment method with those obtained using the manual method.
Ultrafast Imaging using Spectral Resonance Modulation
NASA Astrophysics Data System (ADS)
Huang, Eric; Ma, Qian; Liu, Zhaowei
2016-04-01
CCD cameras are ubiquitous in research labs, industry, and hospitals for a huge variety of applications, but there are many dynamic processes in nature that unfold too quickly to be captured. Although tradeoffs can be made between exposure time, sensitivity, and area of interest, ultimately the speed limit of a CCD camera is constrained by the electronic readout rate of the sensors. One potential way to improve the imaging speed is with compressive sensing (CS), a technique that allows for a reduction in the number of measurements needed to record an image. However, most CS imaging methods require spatial light modulators (SLMs), which are subject to mechanical speed limitations. Here, we demonstrate an etalon array based SLM without any moving elements that is unconstrained by either mechanical or electronic speed limitations. This novel spectral resonance modulator (SRM) shows great potential in an ultrafast compressive single pixel camera.
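A minimal numpy sketch of the compressive-sensing idea behind a single-pixel camera: M random modulation patterns produce M scalar measurements y = Phi @ x with M much smaller than the pixel count N, and a sparse image is recovered by iterative soft thresholding (ISTA). The etalon-array modulator itself is not modeled; the sizes, sparsity level, and regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 96, 8                        # pixels, measurements, nonzeros
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(size=K)
Phi = rng.normal(size=(M, N)) / np.sqrt(M)  # random modulation patterns
y = Phi @ x                                 # single-pixel measurements

def ista(Phi, y, lam=0.01, iters=500):
    """Iterative soft thresholding for min 0.5*||Phi z - y||^2 + lam*||z||_1."""
    L = np.linalg.norm(Phi, 2) ** 2         # Lipschitz constant of the gradient
    z = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = z - (Phi.T @ (Phi @ z - y)) / L
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return z

x_hat = ista(Phi, y)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```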
Bennett, Charles L.
1996-01-01
An imaging Fourier transform spectrometer (10, 210) having a Fourier transform infrared spectrometer (12) providing a series of images (40) to a focal plane array camera (38). The focal plane array camera (38) is clocked to a multiple of zero crossing occurrences as caused by a moving mirror (18) of the Fourier transform infrared spectrometer (12) and as detected by a laser detector (50) such that the frame capture rate of the focal plane array camera (38) corresponds to a multiple of the zero crossing rate of the Fourier transform infrared spectrometer (12). The images (40) are transmitted to a computer (45) for processing such that representations of the images (40) as viewed in the light of an arbitrary spectral "fingerprint" pattern can be displayed on a monitor (60) or otherwise stored and manipulated by the computer (45).
Optical Indoor Positioning System Based on TFT Technology.
Gőzse, István
2015-12-24
A novel indoor positioning system is presented in the paper. Similarly to the camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: Apart from its low computational demands, it is insensitive to the disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low.
Goyal, Anish; Myers, Travis; Wang, Christine A; Kelly, Michael; Tyrrell, Brian; Gokden, B; Sanchez, Antonio; Turner, George; Capasso, Federico
2014-06-16
We demonstrate active hyperspectral imaging using a quantum-cascade laser (QCL) array as the illumination source and a digital-pixel focal-plane-array (DFPA) camera as the receiver. The multi-wavelength QCL array used in this work comprises 15 individually addressable QCLs in which the beams from all lasers are spatially overlapped using wavelength beam combining (WBC). The DFPA camera was configured to integrate the laser light reflected from the sample and to perform on-chip subtraction of the passive thermal background. A 27-frame hyperspectral image was acquired of a liquid contaminant on a diffuse gold surface at a range of 5 meters. The measured spectral reflectance closely matches the calculated reflectance. Furthermore, the high-speed capabilities of the system were demonstrated by capturing differential reflectance images of sand and KClO3 particles that were moving at speeds of up to 10 m/s.
Investigation of the influence of spatial degrees of freedom on thermal infrared measurement
NASA Astrophysics Data System (ADS)
Fleuret, Julien R.; Yousefi, Bardia; Lei, Lei; Djupkep Dizeu, Frank Billy; Zhang, Hai; Sfarra, Stefano; Ouellet, Denis; Maldague, Xavier P. V.
2017-05-01
Long Wavelength Infrared (LWIR) cameras can provide a representation of a part of the light spectrum that is sensitive to temperature. These cameras, also named Thermal Infrared (TIR) cameras, are powerful tools to detect features that cannot be seen by other imaging technologies. For instance, they enable defect detection in materials, fever and anxiety detection in mammals, and many other features for numerous applications. However, the accuracy of thermal cameras can be affected by many parameters; the most critical involves the relative position of the camera with respect to the object of interest. Several models have been proposed in order to minimize the influence of some of the parameters, but they are mostly related to specific applications. Because such models are based on prior information related to context, their applicability to other contexts cannot be easily assessed. The few remaining models are mostly associated with a specific device. In this paper the authors study the influence of the camera position on the measurement accuracy. Modeling the position of the camera relative to the object of interest depends on many parameters. In order to propose a study which is as accurate as possible, the position of the camera is represented as a five-dimensional model. The aim of this study is to investigate and attempt to introduce a model which is as independent from the device as possible.
High-performance camera module for fast quality inspection in industrial printing applications
NASA Astrophysics Data System (ADS)
Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
Today, printing products which must meet highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
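A generic sketch of one of the per-pixel preprocessing steps mentioned above, flat-field correction; the camera implements this in fixed-point FPGA logic, but the arithmetic is the same. The 8-bit output range is an assumption for illustration.

```python
import numpy as np

def flat_field_correct(raw, dark, flat, eps=1e-6):
    """raw: image to correct; dark: dark frame; flat: flat-field (white) frame.
    Assumes 8-bit images; corrected = (raw - dark) * mean(flat - dark) / (flat - dark)."""
    gain = flat.astype(float) - dark.astype(float)
    gain = np.maximum(gain, eps)                 # avoid division by zero
    corrected = (raw.astype(float) - dark) * gain.mean() / gain
    return np.clip(corrected, 0, 255).astype(np.uint8)
```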
Optical design of space cameras for automated rendezvous and docking systems
NASA Astrophysics Data System (ADS)
Zhu, X.
2018-05-01
Visible cameras are essential components of a space automated rendezvous and docking (AR&D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, autonomous landing and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs that focus on high resolution and color rendition. In comparison, space AR&D cameras, while not required to have extremely high resolution and color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts and separate lenses for narrow and wide field-of-view (FOV) are normally used in order to meet high reliability requirements. Cemented lens elements are usually avoided due to wide temperature swings and outgassing requirements in the space environment. The lenses should be designed with exceptional stray light performance and minimum lens flare, given intense sunlight and the lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow FOV (NFOV) lens and a wide FOV (WFOV) lens for an AR&D visible camera system. The lenses are designed by using the ZEMAX program; the stray light performance and the lens baffles are simulated by using the TracePro program. This paper discusses general requirements for space AR&D camera lenses and the specific measures for lenses to meet the space environmental requirements.
Automated tracking of a figure skater by using PTZ cameras
NASA Astrophysics Data System (ADS)
Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi
2009-08-01
In this paper, a system for automated real-time tracking of a figure skater moving on an ice rink by using PTZ cameras is presented. This system is intended to support training in skating, for example, as a tool for recording and evaluating motion performances. In the processing procedure of the system, an ice rink region is first extracted from a video image by a region growing method, then one of the hole components in the obtained rink region is extracted as the skater region. If no hole component exists, the skater region is estimated from horizontal and vertical intensity projections of the rink region. Each camera is automatically panned and/or tilted so as to keep the skater region near the center of the image, and also zoomed so as to keep the height of the skater region within an appropriate range. In experiments using 5 practical video images of skating, the extraction rate of the skater region was almost 90%, and tracking with camera control was successful in almost all of the cases used here.
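A hedged sketch of the camera-control step only: pan and tilt proportionally to the skater centroid's offset from the image center, and zoom to keep the skater height within a target band. The send_ptz() function, gains, and height band are hypothetical, not values from the paper.

```python
def ptz_update(bbox, image_size, send_ptz,
               k_pan=0.05, k_tilt=0.05, h_min=0.25, h_max=0.45, zoom_step=0.1):
    """bbox: (x, y, w, h) of the skater region in pixels; image_size: (width, height)."""
    x, y, w, h = bbox
    img_w, img_h = image_size
    err_x = (x + w / 2) - img_w / 2        # horizontal offset from image center
    err_y = (y + h / 2) - img_h / 2        # vertical offset from image center
    rel_h = h / img_h                      # skater height relative to the frame
    zoom = zoom_step if rel_h < h_min else (-zoom_step if rel_h > h_max else 0.0)
    send_ptz(pan=k_pan * err_x, tilt=k_tilt * err_y, zoom=zoom)
```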
Remote gaze tracking system on a large display.
Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun
2013-10-07
We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
Remote Gaze Tracking System on a Large Display
Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun
2013-01-01
We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351
The Economics of Notebook Universities
ERIC Educational Resources Information Center
Bryan, John M. S.
2007-01-01
In the fall of 2006, students could purchase an entry-level notebook computer with a 15-inch LCD for $500. This price crossed an important threshold, moving notebooks into the range of consumer electronics--the category of phenomena that fuels mass consumer trends such as cell phones, digital cameras, and iPods. Most colleges and universities…
Large Scale Structure From Motion for Autonomous Underwater Vehicle Surveys
2004-09-01
Optimal UAV Path Planning for Tracking a Moving Ground Vehicle with a Gimbaled Camera
2014-03-27
Verifying the Hanging Chain Model
ERIC Educational Resources Information Center
Karls, Michael A.
2013-01-01
The wave equation with variable tension is a classic partial differential equation that can be used to describe the horizontal displacements of a vertical hanging chain with one end fixed and the other end free to move. Using a web camera and TRACKER software to record displacement data from a vibrating hanging chain, we verify a modified version…
ERIC Educational Resources Information Center
Fontes, Kris
2008-01-01
Not every art department is fortunate enough to have access to digital cameras and image-editing software, but if a scanner, computer, and printer are available, students can create some imaginative and surreal work. This high-school level lesson begins with a discussion of self-portraits, and then moves to students creating images by scanning…
A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor
Kanwal, Nadia; Bostanci, Erkan; Currie, Keith; Clark, Adrian F.
2015-01-01
For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example of this, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low cost and physically unobtrusive sensors such as a camera and an infrared sensor. The system is based around corners and depth values from Kinect's infrared sensor. Obstacles are found in images from a camera using corner detection, while input from the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately. PMID:27057135
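A hedged sketch of the fusion step: obstacles are taken as image corners whose depth is below a safety threshold, and the less obstructed half of the frame is suggested as the direction to move. The thresholds and the use of OpenCV corner detection are illustrative assumptions, not the paper's exact parameters.

```python
import cv2
import numpy as np

def suggest_direction(gray, depth_m, safe_dist=1.2, max_blocked=5):
    """gray: grayscale camera image; depth_m: per-pixel depth in meters."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return "move forward"
    h, w = gray.shape
    left = right = 0
    for (cx, cy) in corners.reshape(-1, 2):
        if depth_m[int(cy), int(cx)] < safe_dist:   # corner is a close obstacle
            if cx < w / 2:
                left += 1
            else:
                right += 1
    if left <= max_blocked and right <= max_blocked:
        return "move forward"
    if left > max_blocked and right > max_blocked:
        return "stop"
    return "move right" if left > right else "move left"
```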
Television image compression and small animal remote monitoring
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Jackson, Robert W.
1990-01-01
It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent), for monitoring the general health and status of small animals within their illuminated (lights on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.
Live video monitoring robot controlled by web over internet
NASA Astrophysics Data System (ADS)
Lokanath, M.; Akhil Sai, Guruju
2017-11-01
The future is all about robots; robots can perform tasks where humans cannot. Robots have huge applications in military and industrial areas for lifting heavy weights, for accurate placements, and for repeating the same task many times where humans are not efficient. Generally, a robot is a mix of electronic, electrical and mechanical engineering and can do tasks automatically on its own or under the supervision of humans. The camera is the eye of the robot, called robovision; it helps in monitoring the security system and can also reach places where the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web control for the robot to move left, right, forward and backward while streaming video. As we move toward the smart environment, or IoT (Internet of Things) of smart devices, the system developed here connects over the internet and can be operated with a smart mobile phone using a web browser. The Raspberry Pi model B chip acts as the heart of this robot system; the necessary motors and the surveillance camera (R Pi 2) are connected to the Raspberry Pi.
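A minimal sketch of the web-control idea using Flask and RPi.GPIO on a Raspberry Pi. The GPIO pin numbers and two-motor wiring are assumptions, and the video-streaming part (e.g., from the Pi camera) is omitted; this is not the authors' code.

```python
from flask import Flask
import RPi.GPIO as GPIO

LEFT_FWD, LEFT_BWD, RIGHT_FWD, RIGHT_BWD = 17, 18, 22, 23  # assumed BCM pins
GPIO.setmode(GPIO.BCM)
for pin in (LEFT_FWD, LEFT_BWD, RIGHT_FWD, RIGHT_BWD):
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

MOVES = {  # which pins to drive high for each command
    "front": (LEFT_FWD, RIGHT_FWD),
    "back":  (LEFT_BWD, RIGHT_BWD),
    "left":  (RIGHT_FWD,),
    "right": (LEFT_FWD,),
    "stop":  (),
}

app = Flask(__name__)

@app.route("/move/<command>")
def move(command):
    if command not in MOVES:
        return "unknown command", 404
    for pin in (LEFT_FWD, LEFT_BWD, RIGHT_FWD, RIGHT_BWD):
        GPIO.output(pin, GPIO.LOW)      # stop all motors first
    for pin in MOVES[command]:
        GPIO.output(pin, GPIO.HIGH)     # then drive the selected motors
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # control page reachable from a browser
```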
Visual EKF-SLAM from Heterogeneous Landmarks †
Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L.
2016-01-01
Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping from vision performs both the spatial and temporal fusion of these data on a map when a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks), the comparison between landmark parametrizations and the evaluation of how the heterogeneity improves the accuracy of the camera localization, the development of a front-end active-search process for linear landmarks integrated into SLAM, and the experimentation methodology. PMID:27070602
Moving target feature phenomenology data collection at China Lake
NASA Astrophysics Data System (ADS)
Gross, David C.; Hill, Jeff; Schmitz, James L.
2002-08-01
This paper describes the DARPA Moving Target Feature Phenomenology (MTFP) data collection conducted at the China Lake Naval Weapons Center's Junction Ranch in July 2001. The collection featured both X-band and Ku-band radars positioned on top of Junction Ranch's Parrot Peak. The test included seven targets used in eleven configurations with vehicle motion consisting of circular, straight-line, and 90-degree turning motion. Data was collected at 10-degree and 17-degree depression angles. Key parameters in the collection were polarization, vehicle speed, and road roughness. The collection also included a canonical target positioned at Junction Ranch's tilt-deck turntable. The canonical target included rotating wheels (military truck tire and civilian pick-up truck tire) and a flat plate with variable positioned corner reflectors. The canonical target was also used to simulate a rotating antenna and a vibrating plate. The target vehicles were instrumented with ARDS pods for differential GPS and roll, pitch and yaw measurements. Target motion was also documented using a video camera slaved to the X-band radar antenna and by a video camera operated near the target site.
2006-01-16
KENNEDY SPACE CENTER, FLA. - On Complex 41 at Cape Canaveral Air Force Station, the Atlas V expendable launch vehicle with the New Horizons spacecraft moves with the launcher umbilical tower to the pad. The liftoff is scheduled for 1:24 p.m. EST Jan. 17. After its launch aboard the Atlas V, the compact, 1,050-pound piano-sized probe will get a boost from a kick-stage solid propellant motor for its journey to Pluto. New Horizons will be the fastest spacecraft ever launched, reaching lunar orbit distance in just nine hours and passing Jupiter 13 months later. The New Horizons science payload, developed under direction of Southwest Research Institute, includes imaging infrared and ultraviolet spectrometers, a multi-color camera, a long-range telescopic camera, two particle spectrometers, a space-dust detector and a radio science experiment. The dust counter was designed and built by students at the University of Colorado, Boulder. A launch before Feb. 3 allows New Horizons to fly past Jupiter in early 2007 and use the planet’s gravity as a slingshot toward Pluto. The Jupiter flyby trims the trip to Pluto by as many as five years and provides opportunities to test the spacecraft’s instruments and flyby capabilities on the Jupiter system. New Horizons could reach the Pluto system as early as mid-2015, conducting a five-month-long study possible only from the close-up vantage of a spacecraft.
2006-01-16
KENNEDY SPACE CENTER, FLA. - On Complex 41 at Cape Canaveral Air Force Station, the Atlas V expendable launch vehicle with the New Horizons spacecraft is being moved from the Vertical Integration Facility to the pad. The liftoff is scheduled for 1:24 p.m. EST Jan. 17. After its launch aboard the Atlas V, the compact, 1,050-pound piano-sized probe will get a boost from a kick-stage solid propellant motor for its journey to Pluto. New Horizons will be the fastest spacecraft ever launched, reaching lunar orbit distance in just nine hours and passing Jupiter 13 months later. The New Horizons science payload, developed under direction of Southwest Research Institute, includes imaging infrared and ultraviolet spectrometers, a multi-color camera, a long-range telescopic camera, two particle spectrometers, a space-dust detector and a radio science experiment. The dust counter was designed and built by students at the University of Colorado, Boulder. A launch before Feb. 3 allows New Horizons to fly past Jupiter in early 2007 and use the planet’s gravity as a slingshot toward Pluto. The Jupiter flyby trims the trip to Pluto by as many as five years and provides opportunities to test the spacecraft’s instruments and flyby capabilities on the Jupiter system. New Horizons could reach the Pluto system as early as mid-2015, conducting a five-month-long study possible only from the close-up vantage of a spacecraft.
Designing components using smartMOVE electroactive polymer technology
NASA Astrophysics Data System (ADS)
Rosenthal, Marcus; Weaber, Chris; Polyakov, Ilya; Zarrabi, Al; Gise, Peter
2008-03-01
Designing components using SmartMOVE™ electroactive polymer technology requires an understanding of the basic operation principles and the necessary design tools for integration into actuator, sensor and energy generation applications. Artificial Muscle, Inc. is collaborating with OEMs to develop customized solutions for their applications using smartMOVE. SmartMOVE is an advanced and elegant way to obtain almost any kind of movement using dielectric elastomer electroactive polymers. Integration of this technology offers the unique capability to create highly precise and customized motion for devices and systems that require actuation. Applications of SmartMOVE include linear actuators for medical, consumer and industrial applications, such as pumps, valves, optical or haptic devices. This paper will present design guidelines for selecting a smartMOVE actuator design to match the stroke, force, power, size, speed, environmental and reliability requirements for a range of applications. Power supply and controller design and selection will also be introduced. An overview of some of the most versatile configuration options will be presented with performance comparisons. A case example will include the selection, optimization, and performance overview of a smartMOVE actuator for the cell phone camera auto-focus and proportional valve applications.
C-RED one: ultra-high speed wavefront sensing in the infrared made possible
NASA Astrophysics Data System (ADS)
Gach, J.-L.; Feautrier, Philippe; Stadler, Eric; Greffe, Timothee; Clop, Fabien; Lemarchand, Stéphane; Carmignani, Thomas; Boutolleau, David; Baker, Ian
2016-07-01
First Light Imaging's CRED-ONE infrared camera is capable of capturing up to 3500 full frames per second with a subelectron readout noise. This breakthrough has been made possible thanks to the use of an e-APD infrared focal plane array which is a real disruptive technology in imagery. We will show the performances of the camera, its main features and compare them to other high performance wavefront sensing cameras like OCAM2 in the visible and in the infrared. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944.
Development of a Sunspot Tracking System
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
1998-01-01
Large solar flares produce a significant amount of energetic particles which pose a hazard for human activity in space. In the hope of understanding flare mechanisms and thus better predicting solar flares, NASA's Marshall Space Flight Center (MSFC) developed an experimental vector magnetograph (EXVM) polarimeter to measure the Sun's magnetic field. The EXVM will be used to perform ground-based solar observations and will provide a proof of concept for the design of a similar instrument for the Japanese Solar-B space mission. The EXVM typically operates for a period of several minutes. During this time there is image motion due to atmospheric fluctuation and telescope wind loading. To optimize the EXVM performance an image motion compensation device (sunspot tracker) is needed. The sunspot tracker consists of two parts, an image motion determination system and an image deflection system. For image motion determination a CCD or CID camera is used to digitize an image, then an algorithm is applied to determine the motion. This motion or error signal is sent to the image deflection system which moves the image back to its original location. Both of these systems are under development. Two algorithms are available for sunspot tracking which require the use of only one row and one column of image data. To implement these algorithms, two identical independent systems are being developed, one system for each axis of motion. Two CID cameras have been purchased; the data from each camera will be used to determine image motion for each direction. The error signal generated by the tracking algorithm will be sent to an image deflection system consisting of an actuator and a mirror constrained to move about one axis. Magnetostrictive actuators were chosen over piezoelectrics to move the mirror due to their larger driving force and larger range of motion. The actuator and mirror mounts are currently under development.
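A hedged sketch of per-axis image-motion estimation from a single line of data: the shift between a reference row (or column) and the current one is taken as the peak of their 1D cross-correlation. This mirrors the "one row and one column" idea above but is not the specific MSFC algorithm.

```python
import numpy as np

def line_shift(reference, current):
    """Estimate the integer shift (pixels) of `current` relative to `reference`."""
    ref = reference - reference.mean()
    cur = current - current.mean()
    corr = np.correlate(cur, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# One estimate per axis: a row gives the x error signal, a column gives y.
# err_x = line_shift(ref_row, frame[row_idx, :])
# err_y = line_shift(ref_col, frame[:, col_idx])
```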
Automatic acquisition of motion trajectories: tracking hockey players
NASA Astrophysics Data System (ADS)
Okuma, Kenji; Little, James J.; Lowe, David
2003-12-01
Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as their trajectories, querying them by analyzing the descriptive information of data, and predicting the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More accurately, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision such as fast and unpredictable players' motions and rapid camera motions make our challenge worth tackling. To the best of our knowledge, there have not been any automatic video annotation systems for hockey developed in the past. Although there are many obstacles to overcome, our efforts and accomplishments would hopefully establish the infrastructure of the automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.
The Effect of Transition Type in Multi-View 360° Media.
MacQuarrie, Andrew; Steed, Anthony
2018-04-01
360° images and video have become extremely popular formats for immersive displays, due in large part to the technical ease of content production. While many experiences use a single camera viewpoint, an increasing number of experiences use multiple camera locations. In such multi-view 360° media (MV360M) systems, a visual effect is required when the user transitions from one camera location to another. This effect can take several forms, such as a cut or an image-based warp, and the choice of effect may impact many aspects of the experience, including issues related to enjoyment and scene understanding. To investigate the effect of transition types on immersive MV360M experiences, a repeated-measures experiment was conducted with 31 participants. Wearing a head-mounted display, participants explored four static scenes, for which multiple 360° images and a reconstructed 3D model were available. Three transition types were examined: teleport, a linear move through a 3D model of the scene, and an image-based transition using a Möbius transformation. The metrics investigated included spatial awareness, users' movement profiles, transition preference and the subjective feeling of moving through the space. Results indicate that there was no significant difference between transition types in terms of spatial awareness, while significant differences were found for users' movement profiles, with participants taking 1.6 seconds longer to select their next location following a teleport transition. The model and Möbius transitions were significantly better in terms of creating the feeling of moving through the space. Preference was also significantly different, with model and teleport transitions being preferred over Möbius transitions. Our results indicate that trade-offs between transitions will require content creators to think carefully about what aspects they consider to be most important when producing MV360M experiences.
NASA Technical Reports Server (NTRS)
Vaughan, Andrew T. (Inventor); Riedel, Joseph E. (Inventor)
2016-01-01
A single, compact, lower power deep space positioning system (DPS) configured to determine a location of a spacecraft anywhere in the solar system, and provide state information relative to Earth, Sun, or any remote object. For example, the DPS includes a first camera and, possibly, a second camera configured to capture a plurality of navigation images to determine a state of a spacecraft in a solar system. The second camera is located behind, or adjacent to, a secondary reflector of a first camera in a body of a telescope.
Observations of the Perseids 2013 using SPOSH cameras
NASA Astrophysics Data System (ADS)
Margonis, A.; Elgner, S.; Christou, A.; Oberst, J.; Flohrer, J.
2013-09-01
Earth is constantly bombarded by debris, most of which disintegrates in the upper atmosphere. The collision of a dust particle, having a mass of approximately 1 g or larger, with the Earth's atmosphere results in a visible streak of light in the night sky, called a meteor. Comets produce new meteoroids each time they come close to the Sun due to sublimation processes. These fresh particles move around the Sun in orbits similar to their parent comet, forming meteoroid streams. For this reason, the intersection of Earth's orbital path with different comets gives rise to a number of meteor showers throughout the year. The Perseids are one of the most prominent annual meteor showers occurring every summer, having their origin in the Halley-type comet 109P/Swift-Tuttle. The dense core of this stream passes Earth's orbit on the 12th of August, when more than 100 meteors per hour can be seen by a single observer under ideal conditions. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR), together with the Armagh Observatory, organize meteor campaigns every summer observing the activity of the Perseids meteor shower. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [2], which has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract. The camera was designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera is equipped with a highly sensitive back-illuminated CCD chip having a pixel resolution of 1024x1024. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal), making the monitoring of nearly the whole night sky possible (Fig. 1). This year the observations will take place between the 3rd and 10th of August to cover the meteor activity of the Perseids just before their maximum. The SPOSH cameras will be deployed at two remote sites located at high altitudes on the Greek Peloponnese peninsula. The baseline of ∼50 km between the two observing stations ensures a large overlapping area of the cameras' fields of view, allowing the triangulation of approximately every meteor captured by the two observing systems. The acquired data will be reduced using dedicated software developed at TUB and DLR. Assuming a successful campaign, statistics, trajectories and photometric properties of the processed double-station meteors will be presented at the conference. Furthermore, a first-order statistical analysis of the meteors processed during the 2012 and the new 2013 campaigns will be presented [1].
Use of camera drive in stereoscopic display of learning contents of introductory physics
NASA Astrophysics Data System (ADS)
Matsuura, Shu
2011-03-01
Simple 3D physics simulations with stereoscopic display were created for a part of introductory physics e-Learning. First, the cameras viewing the 3D world can be made controllable by the user. This enables the user to observe the system and the motions of objects from any position in the 3D world. Second, cameras can be attached to one of the moving objects in the simulation so as to observe the relative motion of other objects. With this option, it was found that users perceive velocity and acceleration more sensibly on a stereoscopic display than on a non-stereoscopic 3D display. Simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were displayed in parallel in the same web page. For observation of the stereogram, the images of the two viewports were superimposed using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to perceive the characteristics of motion better.
Neuromorphic Event-Based 3D Pose Estimation
Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.
2016-01-01
Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
Online phase measuring profilometry for rectilinear moving object by image correction
NASA Astrophysics Data System (ADS)
Yuan, Han; Cao, Yi-Ping; Chen, Chen; Wang, Ya-Pin
2015-11-01
In phase measuring profilometry (PMP), the object must be static for point-to-point reconstruction from the captured deformed patterns. When the object is moving rectilinearly online, however, differences in the size and pixel position of the object across the captured deformed patterns violate this point-to-point requirement. We propose an online PMP based on image correction to measure the three-dimensional shape of a rectilinearly moving object. In the proposed method, the deformed patterns captured by a charge-coupled device (CCD) camera are first reprojected from the oblique view to an aerial view and then translated based on feature points of the object. This makes the object appear stationary in the deformed patterns. Experimental results show the feasibility and efficiency of the proposed method.
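A minimal sketch of the correction idea described above, using OpenCV: warp each captured pattern from the oblique camera view to a common top-down (aerial) view with a perspective transform, then translate it so a tracked feature point of the moving object lands at a fixed reference pixel. The corner correspondences and feature coordinates are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def correct_frame(frame, oblique_corners, aerial_corners, feat_xy, ref_xy, out_size):
    """Reproject an oblique view to an aerial view, then re-center on the object."""
    H = cv2.getPerspectiveTransform(np.float32(oblique_corners),
                                    np.float32(aerial_corners))
    aerial = cv2.warpPerspective(frame, H, out_size)
    # Translate so the object's feature point sits at the reference pixel
    dx, dy = ref_xy[0] - feat_xy[0], ref_xy[1] - feat_xy[1]
    T = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(aerial, T, out_size)

# Placeholder geometry: four reference-plane corners seen obliquely vs. top-down
oblique = [(102, 80), (910, 95), (980, 700), (60, 690)]
aerial = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
frame = np.zeros((768, 1024), np.uint8)   # stand-in for a captured deformed pattern
corrected = correct_frame(frame, oblique, aerial,
                          feat_xy=(400, 300), ref_xy=(512, 384), out_size=(1024, 768))
```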
3D shape measurement of moving object with FFT-based spatial matching
NASA Astrophysics Data System (ADS)
Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun
2018-03-01
This work presents a new technique for 3D shape measurement of a moving object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance from multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between the projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
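The displacement-estimation step can be illustrated with a 1D FFT-based phase correlation between corresponding image rows from two captured frames; the peak of the correlation gives the shift in pixels. This is a generic sketch of the FFT matching idea, not the authors' exact algorithm.

```python
import numpy as np

def shift_1d(row_a, row_b):
    """Estimate the integer shift of row_b relative to row_a via phase correlation."""
    A, B = np.fft.fft(row_a), np.fft.fft(row_b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # normalize -> phase correlation
    corr = np.real(np.fft.ifft(cross))
    k = int(np.argmax(corr))
    n = len(row_a)
    return k if k <= n // 2 else k - n      # wrap to a signed displacement

# Synthetic check: a copy of a random signal shifted by +37 samples
rng = np.random.default_rng(0)
a = rng.standard_normal(1024)
b = np.roll(a, 37)
print(shift_1d(a, b))   # prints -37; the sign follows this correlation convention
```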
Calibration method for video and radiation imagers
Cunningham, Mark F [Oak Ridge, TN; Fabris, Lorenzo [Knoxville, TN; Gee, Timothy F [Oak Ridge, TN; Goddard, Jr., James S.; Karnowski, Thomas P [Knoxville, TN; Ziock, Klaus-peter [Clinton, TN
2011-07-05
The relationship between the high energy radiation imager pixel (HERIP) coordinate and the real-world x-coordinate is determined by a least-squares fit between the HERIP x-coordinate and the measured real-world x-coordinates of calibration markers that emit high energy radiation and reflect visible light. Upon calibration, a high energy radiation imager pixel position may be determined based on a real-world coordinate of a moving vehicle. Further, a scale parameter for said high energy radiation imager may be determined based on the real-world coordinate. The scale parameter depends on the y-coordinate of the moving vehicle as provided by a visible light camera. The high energy radiation imager may be employed to detect radiation from moving vehicles in multiple lanes, which correspondingly have different distances to the high energy radiation imager.
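A minimal sketch of the calibration fit described above: a straight-line least-squares fit between marker pixel x-coordinates from the radiation imager and the surveyed real-world x-coordinates, so that a vehicle's real-world x position (from the visible camera) can later be mapped to an imager pixel. The marker values are hypothetical.

```python
import numpy as np

# Hypothetical calibration markers: imager pixel x vs. surveyed real-world x (meters)
pixel_x = np.array([12.0, 45.0, 81.0, 118.0, 152.0])
world_x = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])

# Least-squares line: pixel_x ~= a * world_x + b
a, b = np.polyfit(world_x, pixel_x, deg=1)

def world_to_pixel(x_world):
    """Predict the radiation-imager pixel column for a vehicle at x_world."""
    return a * x_world + b

print(world_to_pixel(1.5))
```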
Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.
Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P
2016-01-01
Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvement, together with decreasing costs, is opening them up to three-dimensional (3D) motion analysis for the quantitative study of sport gestures and the evaluation of athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since both land and underwater cameras are required. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frame rate: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
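The accuracy check reported above (reconstructed inter-marker distances versus the known bar length) amounts to a simple comparison; a minimal sketch with placeholder reconstructed coordinates is shown below.

```python
import numpy as np

BAR_LENGTH_MM = 250.0   # hypothetical known inter-marker distance

# Hypothetical reconstructed 3D marker pairs (mm), one row per bar pose
marker_a = np.array([[0.0, 0.0, 0.0], [100.0, 50.0, 20.0], [300.0, 120.0, 80.0]])
marker_b = marker_a + np.array([249.1, 0.0, 0.0])   # nearly one bar length apart

recon_lengths = np.linalg.norm(marker_b - marker_a, axis=1)
errors = recon_lengths - BAR_LENGTH_MM
print("mean abs error [mm]:", np.mean(np.abs(errors)))
print("RMS error [mm]:", np.sqrt(np.mean(errors**2)))
```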
Real-time person detection in low-resolution thermal infrared imagery with MSER and CNNs
NASA Astrophysics Data System (ADS)
Herrmann, Christian; Müller, Thomas; Willersinn, Dieter; Beyerer, Jürgen
2016-10-01
In many camera-based systems, person detection and localization is an important step for safety and security applications such as search and rescue, reconnaissance, surveillance, or driver assistance. Long-wave infrared (LWIR) imagery promises to simplify this task because it is less affected by background clutter or illumination changes. In contrast to much related work, we make no assumptions about any movement of persons or the camera, i.e. persons may stand still and the camera may move, or any combination thereof. Furthermore, persons may appear at arbitrary near or far distances from the camera, leading to low-resolution persons at far distances. To address this task, we propose a two-stage system, consisting of a proposal generation method and a classifier to verify whether the detected proposals really are persons. Instead of using all possible proposals, as sliding-window approaches do, we apply Maximally Stable Extremal Regions (MSER) and classify the detected proposals afterwards with a Convolutional Neural Network (CNN). The MSER algorithm acts as a hot-spot detector when applied to LWIR imagery. Because the body temperature of persons is usually higher than the background, they appear as hot spots in the image. However, the MSER algorithm is unable to distinguish between different kinds of hot spots; other LWIR sources such as windows, animals or vehicles will be detected, too. Still, by applying MSER, the number of proposals is reduced significantly in comparison to a sliding-window approach, which allows us to employ the highly discriminative deep neural network classifiers that have recently proven themselves in applications such as face recognition and image content classification. We suggest using a CNN as the classifier for the detected hot spots and train it to discriminate between person hot spots and all other hot spots. We specifically design a CNN that is suitable for the low-resolution person hot spots common in LWIR imagery applications and is capable of fast classification. Evaluation on several different LWIR person detection datasets shows an error rate reduction of up to 80 percent compared to previous approaches consisting of MSER, local image descriptors and a standard classifier such as an SVM or boosted decision trees. Further time measurements show that the proposed processing chain is capable of real-time person detection in LWIR camera streams.
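A minimal sketch of the two-stage pipeline (hot-spot proposals via MSER, then per-proposal classification), using OpenCV's MSER detector and a placeholder classify() standing in for the CNN; it is not the authors' trained network.

```python
import cv2
import numpy as np

def classify(patch):
    """Placeholder for the CNN person/non-person classifier (hypothetical)."""
    return patch.mean() > 128          # toy decision, not a trained model

def detect_persons(lwir_gray):
    """Stage 1: MSER hot-spot proposals. Stage 2: classify each proposal."""
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(lwir_gray)
    detections = []
    for (x, y, w, h) in bboxes:
        patch = cv2.resize(lwir_gray[y:y + h, x:x + w], (32, 32))
        if classify(patch):
            detections.append((x, y, w, h))
    return detections

lwir = np.zeros((480, 640), np.uint8)  # stand-in for an LWIR frame
print(detect_persons(lwir))
```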
Sniper detection using infrared camera: technical possibilities and limitations
NASA Astrophysics Data System (ADS)
Kastek, M.; Dulski, R.; Trzaskawka, P.; Bieszczad, G.
2010-04-01
The paper discusses the technical possibilities of building an effective system for sniper detection using infrared cameras. The phenomena that make it possible to detect sniper activity in the infrared spectrum are described, and the physical limitations are analyzed. Both cooled and uncooled detectors were considered. Three phases of sniper activity were taken into consideration: before, during and after the shot. On the basis of experimental data, the target parameters essential for assessing the capability of an infrared camera to detect sniper activity were determined. A sniper's body and the muzzle flash were analyzed as targets. Detection ranges were simulated for an assumed sniper-detection scenario. An infrared sniper detection system capable of fulfilling these requirements is discussed, and the results of the analysis and simulations are presented.
Applications of Action Cam Sensors in the Archaeological Yard
NASA Astrophysics Data System (ADS)
Pepe, M.; Ackermann, S.; Fregonese, L.; Fassi, F.; Adami, A.
2018-05-01
In recent years, special digital cameras called "action cameras" or "action cams" have become popular due to their low price, small size, light weight, robustness and capacity to record videos and photos even in extreme environmental conditions. Indeed, these cameras have been designed mainly to capture sport action and to work in the presence of dirt, bumps, or underwater and at different external temperatures. High-resolution digital single-lens reflex (DSLR) cameras are usually preferred in the photogrammetric field: beyond sensor resolution, the combination of such cameras with fixed lenses of low distortion is preferred for accurate 3D measurements. By contrast, action cameras have small wide-angle lenses, with lower performance in terms of sensor resolution, lens quality and distortion. However, considering the ability of action cameras to acquire under conditions that may prove difficult for standard DSLR cameras, and their lower price, they can be considered a possible and interesting approach for documenting the state of the places during archaeological excavation activities. In this paper, the influence of lens radial distortion and chromatic aberration on this type of camera in self-calibration mode is investigated, and their application in the field of Cultural Heritage is evaluated and discussed. Using a suitable technique, it has been possible to improve the accuracy of the 3D model obtained from action cam images. Case studies show the quality and the utility of this type of sensor in the survey of archaeological artefacts.
A comparison of moving object detection methods for real-time moving object detection
NASA Astrophysics Data System (ADS)
Roshan, Aditya; Zhang, Yun
2014-06-01
Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed across the globe for moving object detection, but it is very difficult to find one that works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time detection, and most of those are further limited by the number of objects and the scene complexity they can handle. This paper evaluates the four most commonly used moving object detection methods: background subtraction, Gaussian mixture model, wavelet-based and optical flow-based methods. The work is based on an evaluation of these four methods using two different sets of cameras and two different scenes. The methods were implemented in MATLAB and the results are compared in terms of completeness of the detected objects, noise, sensitivity to lighting changes, processing time, etc. After the comparison, it is observed that the optical flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be used for real-time moving object detection.
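For reference, one of the four families compared above, background subtraction with a Gaussian mixture model, takes only a few lines in OpenCV; this generic sketch (the file name is hypothetical) is not the paper's MATLAB implementation.

```python
import cv2

cap = cv2.VideoCapture("scene.avi")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)           # per-pixel foreground/background label
    mask = cv2.medianBlur(mask, 5)           # suppress isolated noise pixels
    # [-2] keeps the contour list across OpenCV 3.x/4.x return conventions
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
    # boxes now holds the moving-object bounding rectangles for this frame
cap.release()
```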
Fine particles on mars: Observations with the viking 1 lander cameras
Mutch, T.A.; Arvidson, R. E.; Binder, A.B.; Huck, F.O.; Levinthal, E.C.; Liebes, S.; Morris, E.C.; Nummedal, D.; Pollack, James B.; Sagan, C.
1976-01-01
Drifts of fine-grained sediment are present in the vicinity of the Viking 1 lander. Many drifts occur in the lees of large boulders. Morphologic analysis indicates that the last dynamic event was one of general deflation for at least some drifts. Particle cohesion implies that there is a distinct small-particle upturn in the threshold velocity-particle size curve; the apparent absence of the most easily moved particles (150 micrometers in diameter) may be due to their preferential transport to other regions or their preferential collisional destruction. A twilight rescan with lander cameras indicates a substantial amount of red dust with mean radius on the order of 1 micrometer in the atmosphere.
Experimental study of 3-D structure and evolution of foam
NASA Astrophysics Data System (ADS)
Thoroddsen, S. T.; Tan, E.; Bauer, J. M.
1998-11-01
Liquid foam coarsens due to diffusion of gas between adjacent foam cells. This evolution process is slow, but leads to rapid topological changes taking place during localized rearrangements of Plateau borders or disappearance of small cells. We are developing a new imaging technique to construct the three-dimensional topology of real soap foam contained in a small glass container. The technique uses 3 video cameras equipped with lenses having narrow depth-of-field. These cameras are moved with respect to the container, in effect obtaining numerous slices through the foam. Preliminary experimental results showing typical rearrangement events will also be presented. These events involve for example disappearance of either triangular or rectangular cell faces.
Sequential detection of web defects
Eichel, Paul H.; Sleefe, Gerard E.; Stalker, K. Terry; Yee, Amy A.
2001-01-01
A system for detecting defects on a moving web having a sequential series of identical frames uses an imaging device to form a real-time camera image of a frame and a comparator to compare elements of the camera image with corresponding elements of an image of an exemplar frame. The comparator provides an acceptable indication if the pair of elements is determined to be statistically identical, and a defective indication if the pair of elements is determined to be statistically not identical. If the pair of elements is neither acceptable nor defective, the comparator recursively compares the element of said exemplar frame with corresponding elements of other frames on said web until one of the acceptable or defective indications occurs.
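A minimal sketch of the three-way element comparison described above: a per-element test with two thresholds, where values between the thresholds are left undecided and deferred to a comparison against another frame. The thresholds and noise model are assumptions, not the patent's parameters.

```python
import numpy as np

ACCEPT_T, DEFECT_T = 2.0, 4.0     # hypothetical thresholds in noise-sigma units

def compare_elements(test, exemplar, sigma):
    """Return per-element labels: 0=acceptable, 1=defective, -1=undecided."""
    z = np.abs(test.astype(float) - exemplar.astype(float)) / sigma
    labels = np.full(test.shape, -1, dtype=int)
    labels[z <= ACCEPT_T] = 0
    labels[z >= DEFECT_T] = 1
    return labels

def sequential_decision(test, exemplars, sigma):
    """Resolve undecided elements by recursing over further exemplar frames."""
    labels = compare_elements(test, exemplars[0], sigma)
    for ref in exemplars[1:]:
        undecided = labels == -1
        if not undecided.any():
            break
        labels[undecided] = compare_elements(test, ref, sigma)[undecided]
    return labels
```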
1998-10-30
This picture of Neptune was produced from the last whole planet images taken through the green and orange filters on NASA's Voyager 2 narrow angle camera. The images were taken at a range of 4.4 million miles from the planet, 4 days and 20 hours before closest approach. The picture shows the Great Dark Spot and its companion bright smudge; on the west limb the fast moving bright feature called Scooter and the little dark spot are visible. These clouds were seen to persist for as long as Voyager's cameras could resolve them. North of these, a bright cloud band similar to the south polar streak may be seen. http://photojournal.jpl.nasa.gov/catalog/PIA01492
NASA Astrophysics Data System (ADS)
Wolszczak, Piotr; Łygas, Krystian; Litak, Grzegorz
2018-07-01
This study investigates dynamic responses of a nonlinear vibration energy harvester. The nonlinear mechanical resonator consists of a flexible beam moving like an inverted pendulum between amplitude limiters. It is coupled with a piezoelectric converter, and excited kinematically. Consequently, the mechanical energy input is converted into the electrical power output on the loading resistor included in an electric circuit attached to the piezoelectric electrodes. The curvature of beam mode shapes as well as deflection of the whole beam are examined using a high speed camera. The visual identification results are compared with the voltage output generated by the piezoelectric element for corresponding frequency sweeps and analyzed by the Hilbert transform.
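The Hilbert-transform analysis mentioned above (extracting an envelope and instantaneous frequency from the piezoelectric voltage during a frequency sweep) can be sketched with scipy; the signal here is synthetic, not the harvester data.

```python
import numpy as np
from scipy.signal import hilbert

fs = 5000.0                                    # sampling rate [Hz], assumed
t = np.arange(0, 10, 1 / fs)
# Synthetic stand-in for the piezo voltage during a sweep: slowly growing chirp
voltage = (0.1 + 0.05 * t) * np.sin(2 * np.pi * (5 * t + 1.5 * t**2))

analytic = hilbert(voltage)
envelope = np.abs(analytic)                    # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs  # instantaneous frequency [Hz]
print(envelope.max(), inst_freq.mean())
```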
11. CALIFORNIA-TYPE DEPRESSION BEAM: Photocopy of photograph showing a California-type ...
11. CALIFORNIA-TYPE DEPRESSION BEAM: Photocopy of photograph showing a California-type depression beam positioned in its yokes. A car would approach the beam moving towards the camera. Note the open access cover, pulleys, counterweight hatchcover, and the wooden construction of the beam. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA
Calibration Techniques for Accurate Measurements by Underwater Camera Systems
Shortis, Mark
2015-01-01
Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172
NASA Astrophysics Data System (ADS)
Yoon, J. S.; Culligan, P. J.; Germaine, J. T.
2003-12-01
Subsurface colloid behavior has recently drawn attention because colloids are suspected of enhancing contaminant transport in groundwater systems. To better understand the processes by which colloids move through the subsurface, and in particular the vadose zone, a new technique that enables real-time visualization of colloid particles as they move through a porous medium has been developed. This visualization technique involves the use of laser-induced fluorescent particles and digital image processing to directly observe particles moving through a porous medium consisting of soda-lime glass beads and water in a transparent experimental box of 10.0 cm × 27.9 cm × 2.38 cm. Colloid particles are simulated using commercially available micron-sized particles that fluoresce under argon-ion laser light. The fluorescent light given off from the particles is captured through a camera filter, which lets through only the emitted wavelength of the colloid particles. The intensity of the emitted light is proportional to the colloid particle concentration. The images of colloid movement are captured by a MagnaFire digital camera, a cooled CCD digital camera produced by Optronics. This camera enables real-time capture of images to a computer, thereby allowing the images to be processed immediately. The images taken by the camera are analyzed by the ImagePro software from Media Cybernetics, which contains a range of counting, sizing, measuring, and image enhancement tools for image processing. Laboratory experiments using the new technique have demonstrated the existence of both irreversible and reversible sites for colloid entrapment during uniform saturated flow in a homogeneous porous medium. These tests have also shown a dependence of colloid entrapment on velocity. Models for colloid transport currently available in the literature have proven to be inadequate predictors for the experimental observations, despite the simplicity of the system studied. To further extend the work, the visualization technique has been developed for use on the geo-centrifuge. The advantage that the geo-centrifuge has for investigating subsurface colloid behavior is the ability to simulate unsaturated transport mechanisms under well-simulated field moisture profiles and in shortened periods of time. A series of tests to investigate colloid transport during uniform saturated flow is being used to examine basic scaling laws for colloid transport under enhanced gravity. The paper will describe the new visualization technique, its use in geo-centrifuge testing and observations on scaling relationships for colloid transport during geo-centrifuge experiments. Although the visualization technique has been developed for investigating subsurface colloid behavior, it does have application in other areas of investigation, including the investigation of microbial behavior in the subsurface.
Computing camera heading: A study
NASA Astrophysics Data System (ADS)
Zhang, John Jiaxiang
2000-08-01
An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
Hu, Wen; McCartt, Anne T
2016-09-01
In May 2007, Montgomery County, Maryland, implemented an automated speed enforcement program, with cameras allowed on residential streets with speed limits of 35 mph or lower and in school zones. In 2009, the state speed camera law increased the enforcement threshold from 11 to 12 mph over the speed limit and restricted school zone enforcement hours. In 2012, the county began using a corridor approach, in which cameras were periodically moved along the length of a roadway segment. The long-term effects of the speed camera program on travel speeds, public attitudes, and crashes were evaluated. Changes in travel speeds at camera sites from 6 months before the program began to 7½ years after were compared with changes in speeds at control sites in the nearby Virginia counties of Fairfax and Arlington. A telephone survey of Montgomery County drivers was conducted in Fall 2014 to examine attitudes and experiences related to automated speed enforcement. Using data on crashes during 2004-2013, logistic regression models examined the program's effects on the likelihood that a crash involved an incapacitating or fatal injury on camera-eligible roads and on potential spillover roads in Montgomery County, using crashes in Fairfax County on similar roads as controls. About 7½ years after the program began, speed cameras were associated with a 10% reduction in mean speeds and a 62% reduction in the likelihood that a vehicle was traveling more than 10 mph above the speed limit at camera sites. When interviewed in Fall 2014, 95% of drivers were aware of the camera program, 62% favored it, and most had received a camera ticket or knew someone else who had. The overall effect of the camera program in its modified form, including both the law change and the corridor approach, was a 39% reduction in the likelihood that a crash resulted in an incapacitating or fatal injury. Speed cameras alone were associated with a 19% reduction in the likelihood that a crash resulted in an incapacitating or fatal injury, the law change was associated with a nonsignificant 8% increase, and the corridor approach provided an additional 30% reduction over and above the cameras. This study adds to the evidence that speed cameras can reduce speeding, which can lead to reductions in speeding-related crashes and crashes involving serious injuries or fatalities.
Video coding for next-generation surveillance systems
NASA Astrophysics Data System (ADS)
Klasen, Lena M.; Fahlander, Olov
1997-02-01
Video is used as the recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linkoping University, Image Coding Group. The accuracy of the results of these forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of obtaining reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way to get around the problem of poor recording is to record all camera outputs digitally and simultaneously. It is also very important to build such a system bearing in mind that image processing analysis methods are becoming more important as a complement to the human eye. Using one or more cameras produces a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less suited to them. Our goal is to propose a possible system that is the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information, resolution, etc., in order to secure the efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of this next generation of digital surveillance systems are discussed in this paper.
Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.
Gakne, Paul Verlaine; O'Keefe, Kyle
2018-04-17
This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions and better accuracy than the GNSS-only and the loosely-coupled GNSS/vision, 20 percent and 82 percent (in the worst case) respectively, in a deep urban canyon, even in conditions with fewer than four GNSS satellites.
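A minimal sketch of the satellite-rejection step described above: project each satellite's azimuth/elevation into the upward-facing fisheye image (an equidistant model is assumed here) and keep it only if the binary sky mask from segmentation is "sky" at that pixel. The camera model and mask are assumptions, not the authors' calibration.

```python
import numpy as np

def sat_to_pixel(az_deg, el_deg, cx, cy, f_pix):
    """Equidistant fisheye projection of a satellite direction (assumed model)."""
    theta = np.radians(90.0 - el_deg)        # zenith angle
    r = f_pix * theta                        # equidistant: r = f * theta
    az = np.radians(az_deg)
    return cx + r * np.sin(az), cy - r * np.cos(az)

def usable_satellites(sats, sky_mask, cx, cy, f_pix):
    """Keep only satellites whose line of sight falls on open sky."""
    keep = []
    h, w = sky_mask.shape
    for prn, az, el in sats:
        u, v = sat_to_pixel(az, el, cx, cy, f_pix)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h and sky_mask[vi, ui]:
            keep.append(prn)
    return keep

# Hypothetical inputs: (PRN, azimuth deg, elevation deg) and a dummy sky mask
sats = [(5, 40.0, 70.0), (12, 210.0, 15.0), (23, 120.0, 45.0)]
mask = np.ones((960, 1280), dtype=bool)      # stand-in segmentation result
print(usable_satellites(sats, mask, cx=640.0, cy=480.0, f_pix=300.0))
```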
Alali, Sanaz; Gribble, Adam; Vitkin, I Alex
2016-03-01
A new polarimetry method is demonstrated to image the entire Mueller matrix of a turbid sample using four photoelastic modulators (PEMs) and a charge coupled device (CCD) camera, with no moving parts. Accurate wide-field imaging is enabled with a field-programmable gate array (FPGA) optical gating technique and an evolutionary algorithm (EA) that optimizes imaging times. This technique accurately and rapidly measured the Mueller matrices of air, polarization elements, and turbid phantoms. The system should prove advantageous for Mueller matrix analysis of turbid samples (e.g., biological tissues) over large fields of view, in less than a second.
DOE Office of Scientific and Technical Information (OSTI.GOV)
School of Materials Science and Engineering, State Key Lab for Materials Processing and Die & Mold Technology, Huazhong University of Science and Technology, Wuhan 430074, China; Department of Physics, University of California Berkeley, Berkeley, California 94720, USA; Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, USA
2014-12-11
Past research has revealed the propagation of dense, asymmetric ionization zones in both high- and low-current magnetron discharges. Here we report on the direction reversal of ionization zone propagation as observed with fast cameras. At high currents, zones move in the E × B direction with velocities of 10³ to 10⁴ m/s. At lower currents, however, ionization zones are observed to move in the opposite, -E × B direction, with velocities of ~10³ m/s. It is proposed that the direction reversal is associated with the local balance of ionization and the supply of neutrals in the ionization zone.
Physiologically Modulating Videogames or Simulations which Use Motion-Sensing Input Devices
NASA Technical Reports Server (NTRS)
Blanson, Nina Marie (Inventor); Stephens, Chad L. (Inventor); Pope, Alan T. (Inventor)
2017-01-01
New types of controllers allow a player to make inputs to a video game or simulation by moving the entire controller itself or by gesturing or by moving the player's body in whole or in part. This capability is typically accomplished using a wireless input device having accelerometers, gyroscopes, and a camera. The present invention exploits these wireless motion-sensing technologies to modulate the player's movement inputs to the videogame based upon physiological signals. Such biofeedback-modulated video games train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies enhance personal improvement, not just the diversion, of the user.
Return Beam Vidicon (RBV) panchromatic two-camera subsystem for LANDSAT-C
NASA Technical Reports Server (NTRS)
1977-01-01
A two-inch Return Beam Vidicon (RBV) panchromatic two-camera subsystem, together with spare components, was designed and fabricated for the LANDSAT-C satellite; the design was based on the Landsat 1 and 2 RBV camera system. The purpose of the RBV subsystem is to acquire high-resolution pictures of the Earth for mapping applications. Where possible, residual LANDSAT 1 and 2 equipment was utilized.
Polarized fluorescence for skin cancer diagnostic with a multi-aperture camera
NASA Astrophysics Data System (ADS)
Kandimalla, Haripriya; Ramella-Roman, Jessica C.
2008-02-01
Polarized fluorescence has shown promising results in the assessment of skin cancer margins. Researchers have used tetracycline and cross-polarization imaging for nonmelanoma skin cancer demarcation, as well as investigating endogenous polarized fluorescence of skin. In this paper we present a new instrument for polarized fluorescence imaging, able to calculate the full fluorescence Stokes vector in one snapshot. The core of our system is a multi-aperture camera constructed with a two-by-two lenslet array. Three of the lenses have polarizing elements in front of them, oriented at 0°, +45° and 90° with respect to the light source polarization. A flash lamp combined with a polarizer parallel to the source-camera-sample plane and a UV filter is used as the excitation source. A blue filter in front of the camera system is used to collect only the fluorescent emission of interest and filter out the incident light. In-vitro tests of endogenous and exogenous polarized fluorescence on collagen-rich material such as bovine tendon were performed and the Stokes vector of the polarized fluorescence calculated. The system has the advantage of eliminating motion artifacts by collecting the different polarization states and the Stokes vector in a single snapshot.
NASA Astrophysics Data System (ADS)
Gamadia, Mark Noel
In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add new features and improve existing ones in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events as undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass-filter-based passive AF method. This method is widely used to realize AF in the camera industry: a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieving superior AF performance in both good and low lighting conditions, based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras are presented to further illustrate the real-world AF performance gains achieved by the developed approach. The major contribution of this dissertation is that the developed auto-focusing approach can be successfully used by camera manufacturers in the development of the AF feature in future generations of digital still cameras and camera phones.
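A minimal sketch of the passive AF idea discussed above: a sharpness measure (here, variance of the Laplacian as a common stand-in for a band-pass measure) maximized by a coarse-to-fine search over focus-lens positions. The step sizes and the capture_at() hook are hypothetical, not the dissertation's Filter-Switching parameters.

```python
import cv2

def sharpness(image_gray):
    """Band-pass-style focus measure: variance of the Laplacian (a common proxy)."""
    return cv2.Laplacian(image_gray, cv2.CV_64F).var()

def capture_at(lens_pos):
    """Hypothetical camera hook: move the focus lens and grab a grayscale frame."""
    raise NotImplementedError("replace with actual camera/lens control")

def autofocus(lens_min=0, lens_max=1000, coarse_step=50, fine_step=5):
    """Coarse-to-fine search for the lens position maximizing the focus measure."""
    coarse = range(lens_min, lens_max + 1, coarse_step)
    best = max(coarse, key=lambda p: sharpness(capture_at(p)))
    lo = max(lens_min, best - coarse_step)
    hi = min(lens_max, best + coarse_step)
    fine = range(lo, hi + 1, fine_step)
    return max(fine, key=lambda p: sharpness(capture_at(p)))
```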
Homography-based multiple-camera person-tracking
NASA Astrophysics Data System (ADS)
Turk, Matthew R.
2009-01-01
Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances. Full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second. A faster implementation would be suitable for real-world use at typical video frame rates.
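A minimal sketch of the homography step described above: corresponding ground-plane (feet) points gathered from two overlapping views are fed to a robust RANSAC fit, and the resulting homography maps a new target's feet location from one view into the other so labels can be transferred. The point values are placeholders.

```python
import cv2
import numpy as np

# Hypothetical corresponding feet locations "dropped" by tracked targets (pixels)
pts_cam_a = np.float32([[120, 400], [300, 420], [510, 390], [640, 450],
                        [200, 470], [420, 500]])
pts_cam_b = np.float32([[ 90, 380], [260, 410], [480, 400], [610, 460],
                        [170, 455], [390, 495]])

# Robust plane-induced homography from camera A's image plane to camera B's
H, inliers = cv2.findHomography(pts_cam_a, pts_cam_b, cv2.RANSAC, 3.0)

def transfer_feet(feet_xy_a):
    """Map a feet point observed in camera A into camera B for label matching."""
    p = np.float32([[feet_xy_a]])          # shape (1, 1, 2) for perspectiveTransform
    return cv2.perspectiveTransform(p, H)[0, 0]

print(transfer_feet((350, 430)))
```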
Application of infrared uncooled cameras in surveillance systems
NASA Astrophysics Data System (ADS)
Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.
2013-10-01
The recent necessity to protect military bases, convoys and patrols has given serious impetus to the development of multisensor security systems for perimeter protection. One of the most important devices used in such systems is the IR camera. The paper discusses the technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and on the observed scene itself. The use of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, while simultaneously decreasing the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities for detecting a human being. Commercially available IR cameras capable of achieving the desired ranges are compared. The required spatial resolution for detection, recognition and identification was calculated. The simulation of detection ranges was done using a new model for predicting target acquisition performance which uses the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model bounds range performance with image quality. The scope of the presented analysis is limited to the estimation of detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition and identification range calculations were made, and the results for devices with selected technical specifications were compared and discussed.
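A rough sketch of the kind of range estimate mentioned above, using the classic Johnson-criteria style relation (resolvable cycles across the target's critical dimension) rather than the full TTP metric; the criterion values and camera parameters are illustrative assumptions, and atmospheric effects are ignored.

```python
# Illustrative uncooled-LWIR camera assumptions
PIXEL_PITCH_M = 17e-6        # detector pixel pitch [m]
FOCAL_LENGTH_M = 0.05        # lens focal length [m]
IFOV_RAD = PIXEL_PITCH_M / FOCAL_LENGTH_M   # instantaneous field of view

# Johnson-style cycle criteria (commonly quoted 50%-probability values)
CYCLES = {"detection": 0.75, "recognition": 3.0, "identification": 6.0}

def max_range(target_dim_m, task):
    """Range at which the target spans enough resolvable cycles for the task.
    One cycle is taken as two pixels (Nyquist)."""
    cycles_needed = CYCLES[task]
    angular_cycle = 2.0 * IFOV_RAD           # one line pair on the detector
    return target_dim_m / (cycles_needed * angular_cycle)

human_critical_dim = 0.75    # [m], a typical critical dimension for a standing person
for task in CYCLES:
    print(task, round(max_range(human_critical_dim, task)), "m")
```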
Back-to-Back Martian Dust Storms
2017-03-09
This frame from a movie clip of hundreds of images from NASA's Mars Reconnaissance Orbiter shows a global map of Mars with atmospheric changes from Feb. 18, 2017 through March 6, 2017, a period when two regional-scale dust storms appeared. It combines hundreds of images from the Mars Color Imager (MARCI) camera on NASA's Mars Reconnaissance Orbiter. The date for each map in the series is given at upper left. Dust storms appear as pale tan. In the opening frames, one appears left of center, near the top (north) of the map, then grows in size as it moves south, eventually spreading to about half the width of the map after reaching the southern hemisphere. As the dust from that first storm becomes more diffuse in the south, another storm appears near the center of the map in the final frames. In viewing the movie, it helps to understand some of the artifacts produced by the nature of MARCI images when seen in animation. MARCI acquires images in swaths from pole-to-pole during the dayside portion of each orbit. The camera can cover the entire planet in just over 12 orbits, and takes about one day to accumulate this coverage. The individual swaths for each day are assembled into a false-color, map-projected mosaic for the day. Equally spaced blurry areas that run from south-to-north result from the high off-nadir viewing geometry in those parts of each swath, a product of the spacecraft's low orbit. Portions with sharper-looking details are the central part of an image, viewing more directly downward through less atmosphere than the obliquely viewed portions. MARCI has a 180-degree field of view, and Mars fills about 78 percent of that field of view when the camera is pointed down at the planet. However, the Mars Reconnaissance Orbiter often is pointed to one side or the other off its orbital track in order to acquire targeted observations by other imaging systems on the spacecraft. When such rolls exceed about 20 degrees, gaps occur in the mosaic of MARCI swaths. Other dark gaps appear where data are missing. It isn't easy to see the actual dust motion in the atmosphere in these images, owing to the apparent motion of these artifacts. However, by concentrating on specific surface features (craters, prominent ice deposits, etc.) and looking for the tan clouds of dust, it is possible to see where the storms start and how they grow, move and eventually dissipate. Movies are available at http://photojournal.jpl.nasa.gov/catalog/PIA21484
Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan
NASA Astrophysics Data System (ADS)
Pichette, Julien; Charle, Wouter; Lambrechts, Andy
2017-02-01
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor belt applications. Translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of a linescan sensor enabling fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048x3652x150 in spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.
Automated exterior inspection of an aircraft with a pan-tilt-zoom camera mounted on a mobile robot
NASA Astrophysics Data System (ADS)
Jovančević, Igor; Larnier, Stanislas; Orteu, Jean-José; Sentenac, Thierry
2015-11-01
This paper deals with an automated preflight aircraft inspection using a pan-tilt-zoom camera mounted on a mobile robot moving autonomously around the aircraft. The general topic is an image processing framework for the detection and exterior inspection of different types of items, such as a closed or unlatched door, a mechanical defect on the engine, the integrity of the empennage, or damage caused by impacts or cracks. The detection step allows the system to focus on the regions of interest and point the camera toward the item to be checked. It is based on the detection of regular shapes, such as rounded-corner rectangles, circles, and ellipses. The inspection task relies on clues such as the uniformity of isolated image regions, the convexity of segmented shapes, and the periodicity of the image intensity signal. The approach is applied to the inspection of four items of the Airbus A320: the oxygen bay handle, air-inlet vent, static ports, and fan blades. The results are promising and demonstrate the feasibility of an automated exterior inspection.
CCD Camera Lens Interface for Real-Time Theodolite Alignment
NASA Technical Reports Server (NTRS)
Wake, Shane; Scott, V. Stanley, III
2012-01-01
Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.
Spacecraft 3D Augmented Reality Mobile App
NASA Technical Reports Server (NTRS)
Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.
2013-01-01
The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.
3D medical thermography device
NASA Astrophysics Data System (ADS)
Moghadam, Peyman
2015-05-01
In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data is acquired in motion, thus it provides multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement which can vary due to a variety of factors such as angle of incidence, distance between the device and the subject and environmental sensor data or other factors influencing a confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.
Optical Indoor Positioning System Based on TFT Technology
Gőzse, István
2015-01-01
A novel indoor positioning system is presented in the paper. Similarly to the camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: Apart from its low computational demands, it is insensitive to the disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low. PMID:26712753
View of the ISS stack as seen during the fly-around by the STS-96 crew
2017-04-20
S96-E-5218 (3 June 1999) --- Partially silhouetted over clouds and a wide expanse of ocean waters, the unmanned International Space Station (ISS) moves away from the Space Shuttle Discovery. An electronic still camera (ESC) was aimed through aft flight deck windows to capture the image at 23:01:00 GMT, June 3, 1999.
The Input-Interface of Webcam Applied in 3D Virtual Reality Systems
ERIC Educational Resources Information Center
Sun, Huey-Min; Cheng, Wen-Lin
2009-01-01
Our research explores a virtual reality application based on a Web camera (Webcam) input interface. The interface can replace the mouse for controlling the directional intention of a user by the method of frame difference. We divide each frame from the Webcam into nine grids and make use of background registration to compute the moving object. In order to…
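A minimal sketch of the frame-difference idea described above: difference consecutive webcam frames, threshold, and report which of a 3×3 grid of cells contains the most motion as the user's direction intention. The grid mapping and thresholds are assumptions, not the paper's exact scheme.

```python
import cv2
import numpy as np

def motion_cell(prev_gray, curr_gray, thresh=25):
    """Return (row, col) of the 3x3 grid cell with the most frame-difference motion."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    h, w = mask.shape
    counts = np.array([[np.count_nonzero(mask[r*h//3:(r+1)*h//3, c*w//3:(c+1)*w//3])
                        for c in range(3)] for r in range(3)])
    return np.unravel_index(np.argmax(counts), counts.shape)

cap = cv2.VideoCapture(0)                 # default webcam
ok, prev = cap.read()
while ok:
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(motion_cell(prev_gray, gray))   # e.g. (0, 1) could map to "up"; mapping is app-defined
    prev = frame
cap.release()
```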
3. Elevation view of entire midsection using ultrawide angle lens. ...
3. Elevation view of entire midsection using ultrawide angle lens. Note opened south doors and closed north doors. The following photo WA-203-C-4 is similar except the camera position was moved right to include the slope of the south end. - Puget Sound Naval Shipyard, Munitions Storage Bunker, Naval Ammunitions Depot, South of Campbell Trail, Bremerton, Kitsap County, WA
Gap Acceptance During Lane Changes by Large-Truck Drivers—An Image-Based Analysis
Nobukawa, Kazutoshi; Bao, Shan; LeBlanc, David J.; Zhao, Ding; Peng, Huei; Pan, Christopher S.
2016-01-01
This paper presents an analysis of rearward gap acceptance characteristics of drivers of large trucks in highway lane change scenarios. The range between the vehicles was inferred from camera images using the estimated lane width obtained from the lane tracking camera as the reference. Six-hundred lane change events were acquired from a large-scale naturalistic driving data set. The kinematic variables from the image-based gap analysis were filtered by the weighted linear least squares in order to extrapolate them at the lane change time. In addition, the time-to-collision and required deceleration were computed, and potential safety threshold values are provided. The resulting range and range rate distributions showed directional discrepancies, i.e., in left lane changes, large trucks are often slower than other vehicles in the target lane, whereas they are usually faster in right lane changes. Video observations have confirmed that major motivations for changing lanes are different depending on the direction of move, i.e., moving to the left (faster) lane occurs due to a slower vehicle ahead or a merging vehicle on the right-hand side, whereas right lane changes are frequently made to return to the original lane after passing. PMID:26924947
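The time-to-collision and required-deceleration quantities mentioned above follow from the filtered range and range rate at the lane-change time; a minimal sketch of the standard formulas is below (values are illustrative, not from the data set).

```python
def time_to_collision(range_m, range_rate_mps):
    """TTC [s]; defined only when the gap is closing (range rate negative)."""
    if range_rate_mps >= 0:
        return float("inf")
    return range_m / -range_rate_mps

def required_deceleration(range_m, range_rate_mps):
    """Constant deceleration [m/s^2] needed to stop closing just as the gap vanishes."""
    if range_rate_mps >= 0:
        return 0.0
    return range_rate_mps ** 2 / (2.0 * range_m)

# Illustrative rearward gap at lane-change time: 20 m, closing at 3 m/s
print(time_to_collision(20.0, -3.0))        # 6.67 s
print(required_deceleration(20.0, -3.0))    # 0.225 m/s^2
```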
NASA Astrophysics Data System (ADS)
Wang, Yanli; Puria, Sunil; Steele, Charles R.; Ricci, Anthony J.
2018-05-01
Mechanical stimulation of the stereocilia hair bundles of the inner and outer hair cells (IHCs and OHCs, respectively) drives IHC synaptic release and OHC electromotility. The modes of hair-bundle motion can have a dramatic influence on the electrophysiological responses of the hair cells. The in vivo modes of motion are, however, unknown for both IHC and OHC bundles. In this work, we are developing technology to investigate the in situ hair-bundle motion in excised mouse cochleae, for which the hair bundles of the OHCs are embedded in the tectorial membrane but those of the IHCs are not. Motion is generated by pushing onto the stapes at 1 kHz with a glass probe coupled to a piezo stack, and recorded using a high-speed camera at 10,000 frames per second. The motions of individual IHC stereocilia and the cell boundary are analyzed using 2D and 1D Gaussian fitting algorithms, respectively. Preliminary results show that the IHC bundle moves mainly in the radial direction and exhibits a small degree of splay, and that the stereocilia in the second row move less than those in the first row, even in the same focal plane.
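The 1D Gaussian fitting step can be sketched as follows (assuming SciPy); the profile extraction and initial-guess heuristics are illustrative, not the authors' analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss1d(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def fit_stereocilium_position(profile):
    """Fit a 1D Gaussian to an intensity profile taken across a stereocilium
    and return the sub-pixel centre position (mu)."""
    x = np.arange(profile.size, dtype=float)
    p0 = [profile.max() - profile.min(), float(np.argmax(profile)),
          2.0, float(profile.min())]
    popt, _ = curve_fit(gauss1d, x, profile, p0=p0)
    return popt[1]   # sub-pixel position of the peak
```

Repeating the fit frame by frame on the 10,000 fps image sequence would yield a displacement trace for each stereocilium.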
Gap Acceptance During Lane Changes by Large-Truck Drivers-An Image-Based Analysis.
Nobukawa, Kazutoshi; Bao, Shan; LeBlanc, David J; Zhao, Ding; Peng, Huei; Pan, Christopher S
2016-03-01
This paper presents an analysis of rearward gap acceptance characteristics of drivers of large trucks in highway lane change scenarios. The range between the vehicles was inferred from camera images using the estimated lane width obtained from the lane tracking camera as the reference. Six-hundred lane change events were acquired from a large-scale naturalistic driving data set. The kinematic variables from the image-based gap analysis were filtered by the weighted linear least squares in order to extrapolate them at the lane change time. In addition, the time-to-collision and required deceleration were computed, and potential safety threshold values are provided. The resulting range and range rate distributions showed directional discrepancies, i.e., in left lane changes, large trucks are often slower than other vehicles in the target lane, whereas they are usually faster in right lane changes. Video observations have confirmed that major motivations for changing lanes are different depending on the direction of move, i.e., moving to the left (faster) lane occurs due to a slower vehicle ahead or a merging vehicle on the right-hand side, whereas right lane changes are frequently made to return to the original lane after passing.
High-throughput microfluidic line scan imaging for cytological characterization
NASA Astrophysics Data System (ADS)
Hutcheson, Joshua A.; Powless, Amy J.; Majid, Aneeka A.; Claycomb, Adair; Fritsch, Ingrid; Balachandran, Kartik; Muldoon, Timothy J.
2015-03-01
Imaging cells in a microfluidic chamber with an area scan camera is difficult due to motion blur and data loss during frame readout causing discontinuity of data acquisition as cells move at relatively high speeds through the chamber. We have developed a method to continuously acquire high-resolution images of cells in motion through a microfluidics chamber using a high-speed line scan camera. The sensor acquires images in a line-by-line fashion in order to continuously image moving objects without motion blur. The optical setup comprises an epi-illuminated microscope with a 40X oil immersion, 1.4 NA objective and a 150 mm tube lens focused on a microfluidic channel. Samples containing suspended cells fluorescently stained with 0.01% (w/v) proflavine in saline are introduced into the microfluidics chamber via a syringe pump; illumination is provided by a blue LED (455 nm). Images were taken of samples at the focal plane using an ELiiXA+ 8k/4k monochrome line-scan camera at a line rate of up to 40 kHz. The system's line rate and fluid velocity are tightly controlled to reduce image distortion and are validated using fluorescent microspheres. Image acquisition was controlled via MATLAB's Image Acquisition toolbox. Data sets comprise discrete images of every detectable cell which may be subsequently mined for morphological statistics and definable features by a custom texture analysis algorithm. This high-throughput screening method, comparable to cell counting by flow cytometry, provided efficient examination including counting, classification, and differentiation of saliva, blood, and cultured human cancer cells.
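The matching of line rate to fluid velocity mentioned above follows from simple geometry. A back-of-the-envelope sketch; the 5 µm pixel pitch and 5 mm/s flow speed in the example are assumptions chosen only to reproduce the 40x magnification and roughly 40 kHz line rate quoted in the abstract:

```python
def required_line_rate(flow_velocity_m_s, magnification, pixel_pitch_m):
    """Line rate (lines/s) needed so that a cell advances exactly one
    projected pixel between successive line exposures:
    rate = v * M / pixel_pitch."""
    return flow_velocity_m_s * magnification / pixel_pitch_m

# Example: 5 mm/s flow, 40x objective, 5 um pixels -> 40,000 lines/s
print(required_line_rate(5e-3, 40, 5e-6))
```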
NASA Astrophysics Data System (ADS)
Crause, Lisa A.; Carter, Dave; Daniels, Alroy; Evans, Geoff; Fourie, Piet; Gilbank, David; Hendricks, Malcolm; Koorts, Willie; Lategan, Deon; Loubser, Egan; Mouries, Sharon; O'Connor, James E.; O'Donoghue, Darragh E.; Potter, Stephen; Sass, Craig; Sickafoose, Amanda A.; Stoffels, John; Swanevelder, Pieter; Titus, Keegan; van Gend, Carel; Visser, Martin; Worters, Hannah L.
2016-08-01
SpUpNIC (Spectrograph Upgrade: Newly Improved Cassegrain) is the extensively upgraded Cassegrain Spectrograph on the South African Astronomical Observatory's 74-inch (1.9-m) telescope. The inverse-Cassegrain collimator mirrors and woefully inefficient Maksutov-Cassegrain camera optics have been replaced, along with the CCD and SDSU controller. All moving mechanisms are now governed by a programmable logic controller, allowing remote configuration of the instrument via an intuitive new graphical user interface. The new collimator produces a larger beam to match the optically faster Folded-Schmidt camera design and nine surface-relief diffraction gratings offer various wavelength ranges and resolutions across the optical domain. The new camera optics (a fused silica Schmidt plate, a slotted fold flat and a spherically figured primary mirror, both Zerodur, and a fused silica field-flattener lens forming the cryostat window) reduce the camera's central obscuration to increase the instrument throughput. The physically larger and more sensitive CCD extends the available wavelength range; weak arc lines are now detectable down to 325 nm and the red end extends beyond one micron. A rear-of-slit viewing camera has streamlined the observing process by enabling accurate target placement on the slit and facilitating telescope focus optimisation. An interactive quick-look data reduction tool further enhances the user-friendliness of SpUpNI
Testbed for remote telepresence research
NASA Astrophysics Data System (ADS)
Adnan, Sarmad; Cheatham, John B., Jr.
1992-11-01
Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.
A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.
Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C
2017-02-07
The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.
Small Orbital Stereo Tracking Camera Technology Development
NASA Technical Reports Server (NTRS)
Bryan, Tom; Macleod, Todd; Gagliano, Larry
2015-01-01
On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew; it poses a major risk of MOD damage to ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, knowledge of the physical threat to vehicle and crew is needed in order to design the proper level of MOD impact shielding and the proper mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Use of the ISS for in-situ orbital debris tracking development provides attitude, power, data and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, and could enhance safety on and around the ISS; some of the technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity of the International Space Station. It will demonstrate on-orbit optical tracking (in situ) of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and by using twin cameras we can provide stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
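The stereo ranging role of the twin cameras can be illustrated with the standard pinhole triangulation relation; the focal length, baseline and disparity values below are purely illustrative:

```python
def stereo_range(disparity_px, focal_length_px, baseline_m):
    """Pinhole stereo triangulation: range = f * B / disparity.
    disparity_px is the pixel offset of the same debris spot between the two
    camera images; focal_length_px is the focal length expressed in pixels."""
    return focal_length_px * baseline_m / disparity_px

# Example (illustrative numbers only): 2 px disparity, f = 8000 px, B = 1 m
print(stereo_range(2.0, 8000.0, 1.0))   # -> 4000 m
```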
Improving accuracy of Plenoptic PIV using two light field cameras
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Timothy
2017-11-01
Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set-up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a 2nd camera improves the accuracy in all three directions and nearly eliminates any differences between them. This improvement is illustrated using both synthetic and real experiments conducted on a vortex ring using both one and two plenoptic cameras.
Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras
NASA Astrophysics Data System (ADS)
Quinn, Mark Kenneth
2018-05-01
Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
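Dual-component (ratiometric) PSP processing is commonly reduced to a polynomial calibration between the two-colour intensity ratio and pressure; the Stern–Volmer-type form below is a generic sketch, not necessarily the calibration used in this study:

```python
import numpy as np

def fit_psp_calibration(intensity_ratio, pressure_ratio, order=2):
    """Calibrate pressure against the two-colour intensity ratio:
    P/P_ref = c0 + c1*(I_ref/I) + c2*(I_ref/I)**2  (Stern-Volmer-type form),
    using ratio/pressure pairs measured in a static calibration chamber."""
    return np.polyfit(intensity_ratio, pressure_ratio, order)

def pressure_map(coeffs, ratio_image, p_ref):
    """Convert a per-pixel intensity-ratio image into absolute pressure."""
    return p_ref * np.polyval(coeffs, ratio_image)
```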
Video Mosaicking for Inspection of Gas Pipelines
NASA Technical Reports Server (NTRS)
Magruder, Darby; Chien, Chiun-Hong
2005-01-01
A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television Standards Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image. The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences: Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2). The computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, parameters of the geometric mapping between the circular view of a fisheye lens and pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera. The software also compensates for the decrease in illumination with distance from the ring of LEDs.
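The precomputed lookup-table unwarping described above can be sketched with a simplified radial model of the fisheye projection of the pipe wall; the mapping below assumes image radius grows with axial distance and is not the calibrated mapping used in the actual system:

```python
import numpy as np

def build_unwarp_lut(out_h, out_w, cx, cy, r_near, r_far):
    """Precompute a lookup table mapping each pixel of the unwarped
    (circumference x axial-distance) pipe-wall image to a source pixel in the
    fisheye frame.  Simplified model: image radius ~ axial distance along the
    pipe, polar angle ~ circumferential position around the pipe."""
    v, u = np.mgrid[0:out_h, 0:out_w]
    theta = 2.0 * np.pi * u / out_w                    # around the pipe
    radius = r_near + (r_far - r_near) * v / out_h     # along the pipe axis
    map_x = (cx + radius * np.cos(theta)).astype(np.float32)
    map_y = (cy + radius * np.sin(theta)).astype(np.float32)
    return map_x, map_y

# at run time: unwarped = cv2.remap(fisheye_frame, map_x, map_y, cv2.INTER_LINEAR)
```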
A Vision System For A Mars Rover
NASA Astrophysics Data System (ADS)
Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.
1987-01-01
A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
Tails and streams around the Galactic globular clusters NGC 1851, NGC 1904, NGC 2298 and NGC 2808
NASA Astrophysics Data System (ADS)
Carballo-Bello, Julio A.; Martínez-Delgado, David; Navarrete, Camila; Catelan, Márcio; Muñoz, Ricardo R.; Antoja, Teresa; Sollima, Antonio
2018-02-01
We present Dark Energy Camera imaging for the peculiar Galactic globular clusters NGC 1851, NGC 1904 (M 79), NGC 2298 and NGC 2808. Our deep photometry reveals that all the clusters have an important contribution of stars beyond their King tidal radii and present tails with different morphologies. We have also explored the surroundings of the clusters where the presence of the Canis Major overdensity and/or the low Galactic latitude Monoceros ring at d⊙ ˜ 8 kpc is evident. A second stellar system is found at d⊙ ˜ 17 kpc and spans at least 18 deg × 15 deg in the sky. As one of the possible scenarios to explain that feature, we propose that the unveiled system is part of Monoceros explained as a density wave moving towards the outer Milky Way. Alternatively, the unveiled system might be connected with other known halo substructures or associated with the progenitor dwarf galaxy of NGC 1851 and NGC 1904, which are widely considered accreted globular clusters.
Automatic road sign detection and classification based on support vector machines and HOG descriptors
NASA Astrophysics Data System (ADS)
Adam, A.; Ioannidis, C.
2014-05-01
This paper examines the detection and classification of road signs in color images acquired by a low-cost camera mounted on a moving vehicle. A new method for the detection and classification of road signs is proposed: color-based detection is used first to locate regions of interest, and a circular Hough transform is then applied to complete detection by taking advantage of the shape properties of the road signs. The regions of interest are finally represented using HOG descriptors and are fed into trained Support Vector Machines (SVMs) in order to be recognized. For the training procedure, a database with several training examples depicting Greek road signs has been developed. Many experiments have been conducted and are presented to measure the efficiency of the proposed methodology, especially under adverse weather conditions and poor illumination. For the experiments, training datasets consisting of different numbers of examples were used and the results are presented, along with some possible extensions of this work.
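A compact OpenCV sketch of the described pipeline (colour thresholding, circular Hough transform, HOG features, SVM classification); the colour ranges, Hough parameters and the pre-trained model file road_sign_svm.xml are placeholders:

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)  # win, block, stride, cell, bins
svm = cv2.ml.SVM_load("road_sign_svm.xml")   # hypothetical pre-trained model file

def detect_and_classify(bgr):
    """Colour threshold -> circular Hough transform -> HOG + SVM per candidate."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))        # coarse red mask
    red |= cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    circles = cv2.HoughCircles(red, cv2.HOUGH_GRADIENT, dp=2, minDist=40,
                               param1=120, param2=30, minRadius=10, maxRadius=80)
    results = []
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            roi = bgr[max(y - r, 0):y + r, max(x - r, 0):x + r]
            if roi.size == 0:
                continue
            patch = cv2.resize(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), (64, 64))
            features = hog.compute(patch).reshape(1, -1)
            _, label = svm.predict(features)
            results.append(((x, y, r), int(label[0, 0])))
    return results   # list of ((centre_x, centre_y, radius), class_id)
```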
2006-01-16
KENNEDY SPACE CENTER, FLA. - On Complex 41 at Cape Canaveral Air Force Station, the Atlas V expendable launch vehicle with the New Horizons spacecraft moves with the launcher umbilical tower between lightning masts on its way to the launch pad. The liftoff is scheduled for 1:24 p.m. EST Jan. 17. After its launch aboard the Atlas V, the compact, 1,050-pound piano-sized probe will get a boost from a kick-stage solid propellant motor for its journey to Pluto. New Horizons will be the fastest spacecraft ever launched, reaching lunar orbit distance in just nine hours and passing Jupiter 13 months later. The New Horizons science payload, developed under direction of Southwest Research Institute, includes imaging infrared and ultraviolet spectrometers, a multi-color camera, a long-range telescopic camera, two particle spectrometers, a space-dust detector and a radio science experiment. The dust counter was designed and built by students at the University of Colorado, Boulder. A launch before Feb. 3 allows New Horizons to fly past Jupiter in early 2007 and use the planet’s gravity as a slingshot toward Pluto. The Jupiter flyby trims the trip to Pluto by as many as five years and provides opportunities to test the spacecraft’s instruments and flyby capabilities on the Jupiter system. New Horizons could reach the Pluto system as early as mid-2015, conducting a five-month-long study possible only from the close-up vantage of a spacecraft.
2006-01-16
KENNEDY SPACE CENTER, FLA. - On Complex 41 at Cape Canaveral Air Force Station, the Atlas V expendable launch vehicle with the New Horizons spacecraft has been moved to the pad. Umbilicals have been attached. Seen near the rocket are lightning masts that support the catenary wire used to provide lightning protection. Liftoff is scheduled for 1:24 p.m. EST Jan. 17. After its launch aboard the Atlas V, the compact, 1,050-pound piano-sized probe will get a boost from a kick-stage solid propellant motor for its journey to Pluto. New Horizons will be the fastest spacecraft ever launched, reaching lunar orbit distance in just nine hours and passing Jupiter 13 months later. The New Horizons science payload, developed under direction of Southwest Research Institute, includes imaging infrared and ultraviolet spectrometers, a multi-color camera, a long-range telescopic camera, two particle spectrometers, a space-dust detector and a radio science experiment. The dust counter was designed and built by students at the University of Colorado, Boulder. A launch before Feb. 3 allows New Horizons to fly past Jupiter in early 2007 and use the planet’s gravity as a slingshot toward Pluto. The Jupiter flyby trims the trip to Pluto by as many as five years and provides opportunities to test the spacecraft’s instruments and flyby capabilities on the Jupiter system. New Horizons could reach the Pluto system as early as mid-2015, conducting a five-month-long study possible only from the close-up vantage of a spacecraft.
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera
Ci, Wenyan; Huang, Yingping
2016-01-01
Visual odometry estimates the ego-motion of an agent (e.g., a vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. A circle-matching step follows to remove the outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508
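The feature-tracking and RANSAC outlier-rejection stages can be illustrated with a simplified monocular variant based on the essential matrix; this stand-in omits the stereo depth, circle matching and Levenberg–Marquardt minimization described in the abstract:

```python
import cv2
import numpy as np

def ego_motion_step(prev_gray, curr_gray, K):
    """Track KLT features between frames, reject outliers with RANSAC while
    estimating the essential matrix, and recover the camera rotation and
    (unit-scale) translation."""
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=800,
                                   qualityLevel=0.01, minDistance=8)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None)
    good0 = pts0[status.ravel() == 1]
    good1 = pts1[status.ravel() == 1]
    E, inliers = cv2.findEssentialMat(good0, good1, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
    return R, t, int(inliers.sum())
```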
Robust and fast pedestrian detection method for far-infrared automotive driving assistance systems
NASA Astrophysics Data System (ADS)
Liu, Qiong; Zhuang, Jiajun; Ma, Jun
2013-09-01
Although considerable effort has been devoted to night-time pedestrian detection for automotive driving assistance systems in recent years, robust and real-time pedestrian detection is by no means a trivial task and is still underway due to the moving cameras, uncontrolled outdoor environments, wide range of possible pedestrian presentations and the stringent performance criteria for automotive applications. This paper presents an alternative night-time pedestrian detection method using a monocular far-infrared (FIR) camera, which includes two modules (regions-of-interest (ROI) generation and pedestrian recognition) in a cascade fashion. Pixel-gradient oriented vertical projection is first proposed to estimate the vertical image stripes that might contain pedestrians, and then local thresholding image segmentation is adopted to generate ROIs more accurately within the estimated vertical stripes. A novel descriptor called PEWHOG (pyramid entropy weighted histograms of oriented gradients) is proposed to represent FIR pedestrians in the recognition module. Specifically, PEWHOG is used to capture both the local object shape described by the entropy weighted distribution of oriented gradient histograms and its pyramid spatial layout. Then PEWHOG is fed to a three-branch structured classifier using support vector machines (SVM) with a histogram intersection kernel (HIK). An off-line training procedure combining both bootstrapping and early-stopping strategies is introduced to generate a more robust classifier by exploiting hard negative samples iteratively. Finally, multi-frame validation is utilized to suppress some transient false positives. Experimental results on FIR video sequences from various scenarios demonstrate that the presented method is effective and promising.
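The ROI-generation idea of projecting gradient magnitude onto image columns can be sketched as follows; the relative threshold and minimum stripe width are illustrative assumptions:

```python
import cv2
import numpy as np

def candidate_stripes(fir_gray, rel_thresh=0.4, min_width=8):
    """Pixel-gradient oriented vertical projection: sum the gradient magnitude
    over each image column and keep runs of columns whose projection exceeds a
    fraction of the maximum; each run is a vertical stripe that may contain a
    pedestrian."""
    gx = cv2.Sobel(fir_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(fir_gray, cv2.CV_32F, 0, 1)
    projection = np.sqrt(gx ** 2 + gy ** 2).sum(axis=0)
    mask = projection > rel_thresh * projection.max()
    stripes, start = [], None
    for col, hot in enumerate(np.append(mask, False)):
        if hot and start is None:
            start = col
        elif not hot and start is not None:
            if col - start >= min_width:
                stripes.append((start, col))
            start = None
    return stripes   # list of (first_column, last_column) intervals
```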
In-flight performance of the Faint Object Camera of the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Greenfield, P.; Paresce, F.; Baxter, D.; Hodge, P.; Hook, R.; Jakobsen, P.; Jedrzejewski, R.; Nota, A.; Sparks, W. B.; Towers, N.
1991-01-01
An overview of the Faint Object Camera and its performance to date is presented. In particular, the detector's efficiency, the spatial uniformity of response, distortion characteristics, detector and sky background, detector linearity, spectrography, and operation are discussed. The effect of the severe spherical aberration of the telescope's primary mirror on the camera's point spread function is reviewed, as well as the impact it has on the camera's general performance. The scientific implications of the performance and the spherical aberration are outlined, with emphasis on possible remedies for spherical aberration, hardware remedies, and stellar population studies.
NASA Astrophysics Data System (ADS)
Nara, Shunsuke; Takahashi, Satoru
In this paper, we develop an observation device to measure the working radius of a crane truck. The device has a single CCD camera, a laser range finder and two AC servo motors. First, in order to measure the working radius, we need an algorithm for crane hook recognition. We therefore attach a cross mark to the crane hook and recognize the mark instead of the hook itself. Further, for the observation device, we construct a PI control system with an extended Kalman filter to track the moving cross mark. Through experiments, we show the usefulness of our device, including the new mark-tracking control system.
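A minimal sketch of the PI tracking loop implied above, driving pan and tilt so the detected cross mark stays at the image centre; the gains and sign conventions are assumptions, and the extended Kalman filter prediction stage is omitted:

```python
class PanTiltPI:
    """Simple PI controller that drives two servo axes so the detected cross
    mark stays at the image centre."""
    def __init__(self, kp=0.004, ki=0.0008):
        self.kp, self.ki = kp, ki
        self.integral = [0.0, 0.0]

    def update(self, mark_px, image_size, dt):
        cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
        error = (mark_px[0] - cx, mark_px[1] - cy)      # pixel error from centre
        cmd = []
        for axis in range(2):
            self.integral[axis] += error[axis] * dt
            cmd.append(self.kp * error[axis] + self.ki * self.integral[axis])
        return cmd   # pan and tilt velocity commands (sign convention assumed)
```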
Final Report for the Advanced Camera for Surveys (ACS)
NASA Technical Reports Server (NTRS)
2004-01-01
ACS was launched aboard the Space Shuttle Columbia just before dawn on March 1, 2002. At the time of liftoff, the Hubble Space Telescope (HST) was reflecting the early morning sun as it moved across the sky. After successfully docking with HST, several components were replaced. One of the components was the Advanced Camera for Surveys built by Ball Aerospace & Technologies Corp. (BATC) in Boulder, Colorado. Over the life of the HST contract at BATC, hundreds of employees had the pleasure of working on the concept, design, fabrication, assembly, and test of ACS. Those employees thank NASA - Goddard Space Flight Center and the science team at Johns Hopkins University (JHU) for the opportunity to participate in building a great science instrument for HST.
Etalon Array Reconstructive Spectrometry
NASA Astrophysics Data System (ADS)
Huang, Eric; Ma, Qian; Liu, Zhaowei
2017-01-01
Compact spectrometers are crucial in areas where size and weight may need to be minimized. These types of spectrometers often contain no moving parts, which makes for an instrument that can be highly durable. With the recent proliferation in low-cost and high-resolution cameras, camera-based spectrometry methods have the potential to make portable spectrometers small, ubiquitous, and cheap. Here, we demonstrate a novel method for compact spectrometry that uses an array of etalons to perform spectral encoding, and uses a reconstruction algorithm to recover the incident spectrum. This spectrometer has the unique capability for both high resolution and a large working bandwidth without sacrificing sensitivity, and we anticipate that its simplicity makes it an excellent candidate whenever a compact, robust, and flexible spectrometry solution is needed.
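Reconstructive spectrometry of this kind can be posed as a linear inverse problem: each etalon's reading is the inner product of its calibrated transmission spectrum with the unknown input spectrum. Below is a ridge-regularised least-squares sketch; the paper's actual reconstruction algorithm is not specified here:

```python
import numpy as np

def reconstruct_spectrum(T, measurements, reg=1e-3):
    """Recover the incident spectrum s from camera readings m = T @ s, where
    row i of T is the calibrated transmission spectrum of etalon i.
    Ridge regularisation keeps the inversion stable when the number of etalons
    is smaller than the number of spectral bins."""
    n_bins = T.shape[1]
    A = T.T @ T + reg * np.eye(n_bins)
    s = np.linalg.solve(A, T.T @ measurements)
    return np.clip(s, 0.0, None)    # physical spectra are non-negative
```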
Mastcam Telephoto of a Martian Dune Downwind Face
2016-01-04
This view combines multiple images from the telephoto-lens camera of the Mast Camera (Mastcam) on NASA's Curiosity Mars rover to reveal fine details of the downwind face of "Namib Dune." The site is part of the dark-sand "Bagnold Dunes" field along the northwestern flank of Mount Sharp. Images taken from orbit have shown that dunes in the Bagnold field move as much as about 3 feet (1 meter) per Earth year. Sand on this face of Namib Dune has cascaded down a slope of about 26 to 28 degrees. The top of the face is about 13 to 17 feet (4 to 5 meters) above the rocky ground at its base. http://photojournal.jpl.nasa.gov/catalog/PIA20283
Securing quality of camera-based biomedical optics
NASA Astrophysics Data System (ADS)
Guse, Frank; Kasper, Axel; Zinter, Bob
2009-02-01
As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee their products meet required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management marks critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.
Surveillance of a 2D Plane Area with 3D Deployed Cameras
Fu, Yi-Ge; Zhou, Jie; Deng, Lei
2014-01-01
As the use of camera networks has expanded, camera placement to satisfy some quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, a cost that is as low as possible, etc.) has become an important problem. The discrete camera deployment problem is NP-hard and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under some more realistic assumptions: (1) deploy the cameras in the 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to get a maximum visual coverage under more constraints, such as the field of view (FOV) of the cameras and the minimum resolution constraints. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regularization term in the cost function. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353
Patterned Video Sensors For Low Vision
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1996-01-01
Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns are proposed to partly compensate for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.
Electro-optical system for gunshot detection: analysis, concept, and performance
NASA Astrophysics Data System (ADS)
Kastek, M.; Dulski, R.; Madura, H.; Trzaskawka, P.; Bieszczad, G.; Sosnowski, T.
2011-08-01
The paper discusses technical possibilities for building an effective electro-optical sensor unit for sniper detection using infrared cameras. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is in a multi-sensor sniper and shot detection system. First, an analysis is presented of three distinct phases of sniper activity: before, during and after the shot. On the basis of experimental data, the parameters defining the relevant sniper signatures were determined, which are essential in assessing the capability of an infrared camera to detect sniper activity. A sniper's body and muzzle flash were analyzed as targets, and descriptions of the phenomena that make it possible to detect sniper activities in the infrared spectrum, as well as an analysis of physical limitations, were provided. The analyzed infrared systems were simulated using NVTherm software. Calculations were performed for several cameras equipped with different lenses and detector types, and detection ranges were simulated for selected sniper detection scenarios. After the analysis of the simulation results, the technical specifications of an infrared sniper detection system required to provide the assumed detection range were discussed. Finally, an infrared camera setup is proposed that can detect a sniper at a range of 1000 meters.
A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-07-03
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
HDR video synthesis for vision systems in dynamic scenes
NASA Astrophysics Data System (ADS)
Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried
2016-09-01
High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness-adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
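The core fusion step, warping each differently exposed frame with a global homography and averaging radiance estimates with exposure-dependent weights, can be sketched as follows; the hat-shaped weighting function is an illustrative choice, not the published one:

```python
import cv2
import numpy as np

def fuse_hdr(frames, exposures, H_list):
    """Warp each differently exposed frame to the reference view with its
    global homography, convert to relative radiance by dividing by exposure
    time, and average with weights that favour mid-range (well exposed) pixels."""
    ref_h, ref_w = frames[0].shape[:2]
    acc = np.zeros((ref_h, ref_w, 3), np.float32)
    wsum = np.zeros((ref_h, ref_w, 1), np.float32)
    for img, t, H in zip(frames, exposures, H_list):
        warped = cv2.warpPerspective(img, H, (ref_w, ref_h)).astype(np.float32)
        weight = 1.0 - np.abs(warped / 255.0 - 0.5) * 2.0   # hat function on intensity
        weight = weight.mean(axis=2, keepdims=True)
        acc += weight * (warped / t)                        # per-frame radiance estimate
        wsum += weight
    return acc / np.maximum(wsum, 1e-6)
```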
A novel fully integrated handheld gamma camera
NASA Astrophysics Data System (ADS)
Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.
2016-10-01
In this paper, we present an innovative, fully integrated handheld gamma camera, namely designed to gather in the same device the gamma ray detector with the display and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for radiopharmaceuticals fast imaging. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The prototype proposed consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed a very low power readout electronics and a dedicated analog to digital conversion system. One of the most critical aspects we faced designing the prototype was the low power consumption, which is mandatory to develop a battery operated device. We have applied this detection device in the lymphoscintigraphy technique (sentinel lymph node mapping) comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results obtained confirm a rapid response of the device and an adequate spatial resolution for the use in the scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small organ imaging, but it could be easily combined into surgical navigation systems.
Overview of Digital Forensics Algorithms in DSLR Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread use of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, a pressing task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the imaging process inside the camera. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
Comparison Between RGB and Rgb-D Cameras for Supporting Low-Cost Gnss Urban Navigation
NASA Astrophysics Data System (ADS)
Rossi, L.; De Gaetani, C. I.; Pagliari, D.; Realini, E.; Reguzzoni, M.; Pinto, L.
2018-05-01
A pure GNSS navigation is often unreliable in urban areas because of the presence of obstructions, thus preventing a correct reception of the satellite signal. The bridging between GNSS outages, as well as the vehicle attitude reconstruction, can be recovered by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated. The focus is on the comparison between the use of RGB-D or RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low-cost, easiness of use and raw data accessibility. The latter has been selected for the high-quality of the acquired images and for the possibility of mounting fixed focal length lenses with a lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. Depending on the visual data acquisition system, the filter is different because RGB-D cameras acquire both RGB and depth data, allowing to solve the scale problem, which is instead typical of image-only solutions. The two systems and filtering approaches were assessed by ad-hoc experimental tests, showing that the use of a Kinect device for supporting a u-blox low-cost receiver led to a trajectory with a decimeter accuracy, that is 15 % better than the one obtained when using the Canon EOS M camera.
Investigation of the spreading of diesel injection jets using a new high-speed 3D drum camera
NASA Astrophysics Data System (ADS)
Eisfeld, Fritz
1997-05-01
To improve combustion in the diesel engine, it is important that the combustion chamber is filled evenly with fuel and fuel vapor. The spatial spreading of the injection jet can be investigated with optical methods. Therefore, a 3D drum camera was developed to capture this spatial event. The camera and the first results of investigations of different injection nozzles are described.
Development of a Compton camera for safeguards applications in a pyroprocessing facility
NASA Astrophysics Data System (ADS)
Park, Jin Hyung; Kim, Young Su; Kim, Chan Hyeong; Seo, Hee; Park, Se-Hwan; Kim, Ho-Dong
2014-11-01
The Compton camera has a potential to be used for localizing nuclear materials in a large pyroprocessing facility due to its unique Compton kinematics-based electronic collimation method. Our R&D group, KAERI, and Hanyang University have made an effort to develop a scintillation-detector-based large-area Compton camera for safeguards applications. In the present study, a series of Monte Carlo simulations was performed with Geant4 in order to examine the effect of the detector parameters and the feasibility of using a Compton camera to obtain an image of the nuclear material distribution. Based on the simulation study, experimental studies were performed to assess the possibility of Compton imaging in accordance with the type of the crystal. Two different types of Compton cameras were fabricated and tested with a pixelated type of LYSO (Ce) and a monolithic type of NaI(Tl). The conclusions of this study as a design rule for a large-area Compton camera can be summarized as follows: 1) The energy resolution, rather than position resolution, of the component detector was the limiting factor for the imaging resolution, 2) the Compton imaging system needs to be placed as close as possible to the source location, and 3) both pixelated and monolithic types of crystals can be utilized; however, the monolithic types, require a stochastic-method-based position-estimating algorithm for improving the position resolution.
NASA Astrophysics Data System (ADS)
Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji
2012-03-01
We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.
QuadCam - A Quadruple Polarimetric Camera for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Skuljan, J.
A specialised quadruple polarimetric camera for space situational awareness, QuadCam, has been built at the Defence Technology Agency (DTA), New Zealand, as part of collaboration with the Defence Science and Technology Laboratory (Dstl), United Kingdom. The design was based on a similar system originally developed at Dstl, with some significant modifications for improved performance. The system is made up of four identical CCD cameras looking in the same direction, but in a different plane of polarisation at 0, 45, 90 and 135 degrees with respect to the reference plane. A standard set of Stokes parameters can be derived from the four images in order to describe the state of polarisation of an object captured in the field of view. The modified design of the DTA QuadCam makes use of four small Raspberry Pi computers, so that each camera is controlled by its own computer in order to speed up the readout process and ensure that the four individual frames are taken simultaneously (to within 100-200 microseconds). In addition, a new firmware was requested from the camera manufacturer so that an output signal is generated to indicate the state of the camera shutter. A specialised GPS unit (also developed at DTA) is then used to monitor the shutter signals from the four cameras and record the actual time of exposure to an accuracy of about 100 microseconds. This makes the system well suited for the observation of fast-moving objects in the low Earth orbit (LEO). The QuadCam is currently mounted on a Paramount MEII robotic telescope mount at the newly built DTA space situational awareness observatory located on Whangaparaoa Peninsula near Auckland, New Zealand. The system will be used for tracking satellites in low Earth orbit and geostationary belt as well. The performance of the camera has been evaluated and a series of test images have been collected in order to derive the polarimetric signatures for selected satellites.
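The Stokes parameters mentioned above follow from the standard linear-polarimetry relations applied per pixel to the four co-registered images; a minimal sketch, assuming the images are already aligned and flat-fielded:

```python
import numpy as np

def stokes_from_quadcam(i0, i45, i90, i135):
    """Linear Stokes parameters from four co-aligned images taken through
    polarisers at 0, 45, 90 and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)        # total intensity
    s1 = i0 - i90                             # horizontal vs vertical component
    s2 = i45 - i135                           # +45 vs -45 degree component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)   # degree of linear polarisation
    aolp = 0.5 * np.arctan2(s2, s1)                        # angle of linear polarisation
    return s0, s1, s2, dolp, aolp
```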
Measuring SO2 ship emissions with an ultraviolet imaging camera
NASA Astrophysics Data System (ADS)
Prata, A. J.
2014-05-01
Over the last few years fast-sampling ultraviolet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where measurements of SO2 emissions from cruise ships were made, and at the port of Rotterdam, Netherlands, measuring emissions from more than 10 different container and cargo ships. In all cases SO2 path concentrations could be estimated and emission rates determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction of single-filter UV imagery, a requirement for fast-sampling (> 10 Hz) from a single camera. Despite the ease of use and ability to determine SO2 emission rates from the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that currently the technology needs further development to serve as a method to monitor ship emissions for regulatory purposes. A dual-camera system or a single, dual-filter camera is required in order to properly correct for the effects of particulates in ship plumes.
Zoom system without moving element by using two liquid crystal lenses with spherical electrode
NASA Astrophysics Data System (ADS)
Yang, Ren-Kai; Lin, Chia-Ping; Su, Guo-Dung J.
2017-08-01
A traditional zoom system is composed of several elements moving relative to other components to achieve zooming. Unlike a traditional system, an electrically controlled zoom system with liquid crystal (LC) lenses is demonstrated in this paper. To achieve zooming, we apply two LC lenses whose optical power is controlled by voltage to replace the two moving lenses in a traditional zoom system; together the two LC lenses form a simple zoom system. We found that with such spherical electrodes we could operate the LC lenses over a voltage range from 31 V to 53 V for a 3X tunability in optical power. For each LC lens we use a concave spherical electrode, which provides a lower operating voltage and good tunability in optical power. Given this operating voltage and compact size, the zoom system, with a zoom ratio of approximately 3:1, could be applied to mobile phones, cameras and other applications.
Direct Evidence for Vision-based Control of Flight Speed in Budgerigars.
Schiffner, Ingo; Srinivasan, Mandyam V
2015-06-05
We have investigated whether, and, if so, how birds use vision to regulate the speed of their flight. Budgerigars, Melopsittacus undulatus, were filmed in 3-D using high-speed video cameras as they flew along a 25 m tunnel in which stationary or moving vertically oriented black and white stripes were projected on the side walls. We found that the birds increased their flight speed when the stripes were moved in the birds' flight direction, but decreased it only marginally when the stripes were moved in the opposite direction. The results provide the first direct evidence that Budgerigars use cues based on optic flow to regulate their flight speed. However, unlike the situation in flying insects, it appears that the control of flight speed in Budgerigars is direction-specific. It does not rely solely on cues derived from optic flow, but may also be determined by energy constraints.
End effector of the Discovery's RMS with tools moves toward Syncom-IV
1985-04-17
51D-44-046 (17 April 1985) --- The Space Shuttle Discovery's Remote Manipulator System (RMS) arm and two specially designed extensions move toward the troubled Syncom-IV (LEASAT) communications satellite during a station keeping mode of the two spacecraft in Earth orbit. Inside the Shuttle's cabin, astronaut Rhea Seddon, 51D mission specialist, controlled the Canadian-built arm in an attempt to move an external lever on the satellite. Crewmembers learned of the satellite's problems shortly after it was deployed from the cargo bay on April 13, 1985. The arm achieved physical contact with the lever as planned. However, the satellite did not respond to the contact as hoped. A 70mm handheld Hasselblad camera, aimed through Discovery's windows, recorded this frame -- one of the first to be released to news media following return of the seven-member crew on April 17, 1985.
Space trajectory calculation based on G-sensor
NASA Astrophysics Data System (ADS)
Xu, Biya; Zhan, Yinwei; Shao, Yang
2017-08-01
At present, most research in the field of human body posture recognition uses cameras or portable acceleration sensors to collect data, without making full use of the mobile phones around us. In this paper, the G-sensor built into a mobile phone is used to collect data. After processing the data with a moving average filter and integrating the acceleration, the three-dimensional spatial coordinates of joint points can be obtained accurately.
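A minimal sketch of the processing chain described above (moving-average filtering followed by double integration of the acceleration) might look as follows; the sampling rate and signal values are placeholders, and no drift correction is included.

    import numpy as np

    def moving_average(signal, window=5):
        """Moving-average filter to suppress accelerometer noise."""
        kernel = np.ones(window) / window
        return np.convolve(signal, kernel, mode='same')

    def integrate_twice(accel, dt):
        """Trapezoidal integration of acceleration to velocity and position."""
        vel = np.concatenate(([0.0], np.cumsum(0.5 * (accel[:-1] + accel[1:]) * dt)))
        pos = np.concatenate(([0.0], np.cumsum(0.5 * (vel[:-1] + vel[1:]) * dt)))
        return vel, pos

    # One axis of G-sensor data sampled at 100 Hz (placeholder values).
    acc_x = moving_average(np.random.normal(0.0, 0.05, size=1000))
    vel_x, pos_x = integrate_twice(acc_x, dt=0.01)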
Detection of Humans and Light Vehicles Using Acoustic-to-Seismic Coupling
2009-08-31
microphones, video cameras (regular and infrared), magnetic sensors, and active Doppler radar and sonar systems. These sensors could be located at... sonar systems due to dramatic absorption/reflection of electromagnetic/ultrasonic waves [8,9]. 6...engine was turned off, and the car continued moving. This eliminated the engine sound. A PCB microphone, 377B41, with preamplifier, 426A30, and with
Sophie in the Snow: A Simple Approach to Datalogging and Modelling in Physics
ERIC Educational Resources Information Center
Oldknow, Adrian; Huyton, Pip; Galloway, Ian
2010-01-01
Most students now have access to devices such as digital cameras and mobile phones that are capable of taking short video clips outdoors. Such clips can be used with powerful ICT tools, such as Tracker, Excel and TI-Nspire, to extract time and coordinate data about a moving object, to produce scattergrams and to fit models. In this article we…
The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.
Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco
2015-01-01
Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly adopting cameras in a wide variety of locations and applications: traffic monitoring, parking lot surveillance, vehicles and smart spaces. These cameras provide data every day that must be analysed in an effective way. Recent advances in sensor manufacturing, communications and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area and traffic surveillance. While dense camera networks, in which most cameras have large overlapping fields of view, have been studied extensively, we focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, so most cameras do not overlap each other's field of view. This setting is challenging because of the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this paper we present a comprehensive survey of recent results addressing topology learning, object appearance modelling and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.
Design and Development of a High Speed Sorting System Based on Machine Vision Guiding
NASA Astrophysics Data System (ADS)
Zhang, Wenchang; Mei, Jiangping; Ding, Yabin
In this paper, a vision-based control strategy to perform high-speed pick-and-place tasks on an automated production line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper to grasp disordered objects from one moving conveyor and place them on another in order. A CCD camera takes one picture every time the conveyor moves a distance ds. Object positions and shapes are obtained after image processing. A target tracking method based on "servo motor + synchronous conveyor" is used to perform the high-speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-control strategy.
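The conveyor-tracking idea can be illustrated with a one-line kinematic prediction: the pick position of an object is its position at detection plus the belt displacement since then. The sketch below is a simplified illustration with made-up numbers, not the paper's servo-synchronised implementation.

    def predicted_position(x_detected_m, t_detected_s, belt_speed_m_s, t_now_s):
        """Position of an object along the conveyor at time t_now, assuming
        constant belt speed since it was detected by the camera."""
        return x_detected_m + belt_speed_m_s * (t_now_s - t_detected_s)

    # Example: an object detected 0.30 m into the frame 0.5 s ago on a
    # 0.8 m/s belt should now be near the 0.70 m mark of the pick window.
    print(predicted_position(0.30, 0.0, 0.8, 0.5))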
Further Analysis on the Mystery of the Surveyor III Dust Deposits
NASA Technical Reports Server (NTRS)
Metzger, Philip; Hintze, Paul; Trigwell, Steven; Lane, John
2012-01-01
The Apollo 12 lunar module (LM) landing near the Surveyor III spacecraft at the end of 1969 has remained the primary experimental verification of the predicted physics of plume ejecta effects from a rocket engine interacting with the surface of the moon. This was made possible by the return of the Surveyor III camera housing by the Apollo 12 astronauts, allowing detailed analysis of the composition of dust deposited by the LM plume. It was soon realized after the initial analysis of the camera housing that the LM plume tended to remove more dust than it had deposited. In the present study, coupons from the camera housing have been reexamined. In addition, plume effects recorded in landing videos from each Apollo mission have been studied for possible clues.
NASA Astrophysics Data System (ADS)
Swain, Pradyumna; Mark, David
2004-09-01
The emergence of curved CCD detectors as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral ultra-sensitive imaging with much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy, conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors provides a simplification of the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised in the presence of curved CCD applications, in conjunction with large format cameras and curved mosaics, including three dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associative wide field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are in the process of evaluating the electro-optic performance of a curved wafer scale CCD imager. Detailed ray trace modeling and experimental electro-optical data performance obtained from the curved imager will be presented at the conference.
NASA Astrophysics Data System (ADS)
Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.
2013-09-01
Recent ground testing of a wide area camera system and automated star removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in low Earth orbit to provide automated and continuous monitoring of deep space with high refresh rates, and with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offer a means to identify specific satellites, to note changes in orientation and operational mode, and to cue other SSA assets for higher resolution queries. The data processing approach may also be applied to larger-aperture, higher resolution camera systems to extend the sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field-of-view, <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun. A single camera pointed at the GEO belt provided a continuous night-long record of the intensity and location of more than 50 GEO objects detected within the camera's 60-degree field-of-view, with a detection sensitivity similar to the camera's shot noise limit of Mv=13.7. Performance is anticipated to scale with aperture area, allowing the detection of dimmer objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and an image processing algorithm that exploits the different angular velocities of celestial objects and SOs. Principal Components Analysis (PCA) is used to filter out all objects moving with the velocity of the celestial frame of reference. The resulting filtered images are projected back into an Earth-centered frame of reference, or into any other relevant frame of reference, and co-added to form a series of images of the GEO objects as a function of time. The PCA approach not only removes the celestial background, but it also removes systematic variations in system calibration, sensor pointing, and atmospheric conditions. The resulting images are shot-noise limited, and can be exploited to automatically identify deep space objects, produce approximate state vectors, and track their locations and intensities as a function of time.
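As an illustration of the PCA-based background suppression idea, the sketch below removes the leading principal components from a stack of co-registered frames, which captures structure that is static in the celestial frame; it is a generic numpy example, not the WASSS flight or ground processing code, and the number of components retained is arbitrary.

    import numpy as np

    def remove_sidereal_background(frames, n_components=3):
        """Suppress structure that is static in the celestial frame by removing
        the leading principal components of a stack of co-registered frames.
        frames: array of shape (n_frames, height, width)."""
        n, h, w = frames.shape
        x = frames.reshape(n, h * w).astype(float)
        xc = x - x.mean(axis=0)
        # SVD gives the principal components of the temporal image stack.
        u, s, vt = np.linalg.svd(xc, full_matrices=False)
        background = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
        residual = xc - background
        return residual.reshape(n, h, w)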
Advanced EVA Suit Camera System Development Project
NASA Technical Reports Server (NTRS)
Mock, Kyla
2016-01-01
The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was also spent creating a case for the original interface board that is already being used. This design is being done by use of Creo 2. Due to time constraints, I may not be able to complete the 3-D printing portion of this design, but I was able to use my knowledge of the interface board and Altium Design to help in the task. As a side project, I assisted another intern in selecting and programming a microprocessor to control linear actuators. These linear actuators will be used to move various increments of polyethylene for controlled radiation testing. For this, we began the software portion of the project using the Arduino's coding environment to control an Arduino Due and H-Bridge components. Along with the obvious learning of computer programs such as Altium Design and Creo 2, I also acquired more skills with networking and collaborating with others, being able to multi-task because of responsibilities to work on various projects, and how to set realistic goals in the work place. Like many internship projects, this project will be continued and improved, so I also had the chance to improve my organization and communication skills as I documented all of my meetings and research. As a result of my internship at JSC, I desire to continue a career with NASA, whether that be through another internship or possibly a co-op. I am excited to return to my university and continue my education in electrical engineering because of all of my experiences at JSC.
Rock Slope Monitoring from 4D Time-Lapse Structure from Motion Analysis
NASA Astrophysics Data System (ADS)
Kromer, Ryan; Abellan, Antonio; Chyz, Alex; Hutchinson, Jean
2017-04-01
Structure from Motion (SfM) photogrammetry has become an important tool for studying earth surface processes because of its flexibility, ease of use, low cost and its capability of producing high quality 3-D surface models. A major benefit of SfM is that model accuracy is fit for purpose and surveys can be designed to meet a large range of spatial and temporal scales. In the Earth sciences, research in time-lapse SfM photogrammetry or videogrammetry is an area that is difficult to undertake due to complexities in acquiring, processing and managing large 4D datasets and represents an area with significant advancement potential (Eltner et al. 2016). In this study, we investigate the potential of 4D time-lapse SfM to monitor unstable rock slopes. We tested an array of statically mounted cameras collecting time-lapse photos of a limestone rock slope located along a highway in Canada. Our setup consisted of 8 DSLR cameras with 50 mm prime lenses spaced 2-3 m apart at a distance of 10 m from the slope. The portion of the rock slope monitored was 20 m wide and 6 m high. We collected data in four phases, each having 50 photographs taken simultaneously by each camera. The first phase of photographs was taken of the stable slope. In each successive phase, we gradually moved small, discrete blocks within the rock slope by 5-15 mm, simulating pre-failure deformation of rockfall. During the last phase we also removed discrete rock blocks, simulating rockfall. We used Agisoft Photoscan's 4D processing functionality and timeline tools to create 3D point clouds from the time-lapse photographs. These tools have the benefit of attaining more accurate photo alignments as a greater number of photos is used. For change detection, we used the 4D filtering and calibration technique proposed by Kromer et al. (2015), which takes advantage of high degrees of spatial and temporal point redundancy to decrease measurement uncertainty. Preliminary results show that it is possible to attain more accurate 3D models using time-lapse photos taken from an array of cameras than photos taken from a single camera from multiple positions. For this survey setup, it was possible to detect mm- to cm-level changes, which is of sufficient accuracy to detect the pre-failure stage of rockfalls, as well as small rockfall events. Additionally, cameras mounted in a static array can be operated remotely and automatically. Time-lapse SfM photogrammetry can be a cost effective alternative to terrestrial laser scanning for rockfall prone areas and facilitates the study of surface processes with high spatial and temporal detail. We gratefully acknowledge support from the NSERC collaborative research and development grant. References Eltner, A., Kaiser, A., Castillo, C., Rock, G., Neugirg, F., Abellán, A. Image-based surface reconstruction in geomorphometry—Merits, limits and developments. Earth Surf. Dyn. 2016, 4, 359-389. Kromer, R. A., Abellán, A., Hutchinson, D. J., Lato, M., Edwards, T., & Jaboyedoff, M. A 4D filtering and calibration technique for small-scale point cloud change detection with a terrestrial laser scanner. Remote Sensing 2015, 7(10), 13029-13052.
Moving object localization using optical flow for pedestrian detection from a moving vehicle.
Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun
2014-01-01
This paper presents a pedestrian detection method from a moving vehicle using optical flow and histograms of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region representing the same optical flow after compensating for the egomotion of the camera. To obtain the optical flow, two consecutive images are divided into grid cells of 14 × 14 pixels; each cell is then tracked in the current frame to find the corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is performed according to each corresponding cell in the consecutive images, so that conformed optical flows are extracted. The regions of moving objects are detected as transformed objects, which are different from the previously registered background. A morphological process is applied to obtain the candidate human regions. In order to recognize the object, HOG features are extracted on the candidate region and classified using a linear support vector machine (SVM). The HOG feature vectors are used as input to the linear SVM to classify the given input into pedestrian/non-pedestrian. The proposed method was tested in a moving vehicle and also confirmed through experiments using a pedestrian dataset. It shows a significant improvement compared with the original HOG using the ETHZ pedestrian dataset.
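A minimal sketch of the classification stage (HOG features fed to a linear SVM) is shown below; it assumes the scikit-image and scikit-learn libraries are available and uses generic parameter values, so it only approximates the descriptor settings used in the paper.

    import numpy as np
    from skimage.feature import hog        # assumed available
    from sklearn.svm import LinearSVC      # assumed available

    def hog_features(patch):
        """HOG descriptor for a fixed-size grayscale candidate region (e.g. 64x128)."""
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    def train_classifier(train_patches, labels):
        """Train a linear SVM on HOG features; labels: 1 = pedestrian, 0 = non-pedestrian."""
        clf = LinearSVC()
        clf.fit(np.array([hog_features(p) for p in train_patches]), labels)
        return clf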
Visualization of high speed liquid jet impaction on a moving surface.
Guo, Yuchen; Green, Sheldon
2015-04-17
Two apparatuses for examining liquid jet impingement on a high-speed moving surface are described: an air cannon device (for examining surface speeds between 0 and 25 m/sec) and a spinning disk device (for examining surface speeds between 15 and 100 m/sec). The air cannon linear traverse is a pneumatic energy-powered system that is designed to accelerate a metal rail surface mounted on top of a wooden projectile. A pressurized cylinder fitted with a solenoid valve rapidly releases pressurized air into the barrel, forcing the projectile down the cannon barrel. The projectile travels beneath a spray nozzle, which impinges a liquid jet onto its metal upper surface, and the projectile then hits a stopping mechanism. A camera records the jet impingement, and a pressure transducer records the spray nozzle backpressure. The spinning disk set-up consists of a steel disk that reaches speeds of 500 to 3,000 rpm via a variable frequency drive (VFD) motor. A spray system similar to that of the air cannon generates a liquid jet that impinges onto the spinning disc, and cameras placed at several optical access points record the jet impingement. Video recordings of jet impingement processes are recorded and examined to determine whether the outcome of impingement is splash, splatter, or deposition. The apparatuses are the first that involve the high speed impingement of low-Reynolds-number liquid jets on high speed moving surfaces. In addition to its rail industry applications, the described technique may be used for technical and industrial purposes such as steelmaking and may be relevant to high-speed 3D printing.
Huang, Hsu-Chia; Lee, Yen-Tung; Chen, Wen-Yeo; Liang, Caleb
2017-01-01
Self-location—the sense of where I am in space—provides an experiential anchor for one's interaction with the environment. In the studies of full-body illusions, many researchers have defined self-location solely in terms of body-location—the subjective feeling of where my body is. Although this view is useful, there is an issue regarding whether it can fully accommodate the role of 1PP-location—the sense of where my first-person perspective is located in space. In this study, we investigate self-location by comparing body-location and 1PP-location: using a head-mounted display (HMD) and a stereo camera, the subjects watched their own body standing in front of them and received tactile stimulations. We manipulated their senses of body-location and 1PP-location in three different conditions: the participants standing still (Basic condition), asking them to move forward (Walking condition), and swiftly moving the stereo camera away from their body (Visual condition). In the Walking condition, the participants watched their body moving away from their 1PP. In the Visual condition, the scene seen via the HMD was systematically receding. Our data show that, under different manipulations of movement, the spatial unity between 1PP-location and body-location can be temporarily interrupted. Interestingly, we also observed a “double-body effect.” We further suggest that it is better to consider body-location and 1PP-location as interrelated but distinct factors that jointly support the sense of self-location. PMID:28352241
Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask
NASA Astrophysics Data System (ADS)
Morel, Sébastien
2004-09-01
A new concept of photon counting camera for fast and low-light-level imaging applications is introduced. The possible spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (photo-event spot) localized in an (x,y) image plane. It is actually an evolution of the existing "PAPA" (Precision Analog Photon Address) camera that was designed for visible photons. This improvement comes from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or photomultiplier tubes, alternatively) downstream of the mask. After a detailed explanation of this camera concept, which we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions to build such a camera.
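Since the photo-event address is read out as a Gray code, decoding it reduces to the standard Gray-to-binary conversion sketched below; this is a generic illustration, not the DIAMICON electronics, and the example bit pattern is arbitrary.

    def gray_to_binary(gray):
        """Convert a Gray-coded integer (one bit per detector channel) to the
        plain binary photo-event address."""
        binary = gray
        mask = gray >> 1
        while mask:
            binary ^= mask
            mask >>= 1
        return binary

    # Example: channels reading the Gray pattern 0b1101 correspond to
    # position index 9 along that axis.
    print(gray_to_binary(0b1101))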
Ultraviolet Viewing with a Television Camera.
ERIC Educational Resources Information Center
Eisner, Thomas; And Others
1988-01-01
Reports on a portable video color camera that is fully suited for seeing ultraviolet images and offers some expanded viewing possibilities. Discusses the basic technique, specialized viewing, and the instructional value of this system of viewing reflectance patterns of flowers and insects that are invisible to the unaided eye. (CW)
Making Connections with Digital Data
ERIC Educational Resources Information Center
Leonard, William; Bassett, Rick; Clinger, Alicia; Edmondson, Elizabeth; Horton, Robert
2004-01-01
State-of-the-art digital cameras open up enormous possibilities in the science classroom, especially when used as data collectors. Because most high school students are not fully formal thinkers, the digital camera can provide a much richer learning experience than traditional observation. Data taken through digital images can make the…
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
Flow visualization by mobile phone cameras
NASA Astrophysics Data System (ADS)
Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.
2016-06-01
Mobile smart phones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also applications for fun and recreation. In this respect, mobile phones now include relatively fast (up to 240 Hz) cameras to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility of making use of this development, and of the widespread availability of these cameras, for velocity measurements in industrial or technical applications and for fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and determine bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
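A simplistic PIV evaluation of the kind discussed can be reduced to finding the cross-correlation peak between corresponding interrogation windows of two frames. The sketch below uses scipy's correlate2d (assumed available) and ignores sub-pixel peak fitting, window overlap and outlier validation.

    import numpy as np
    from scipy.signal import correlate2d   # assumed available

    def piv_displacement(window_a, window_b):
        """Displacement (dy, dx) of an interrogation window between two frames,
        estimated from the peak of the 2-D cross-correlation."""
        a = window_a - window_a.mean()
        b = window_b - window_b.mean()
        corr = correlate2d(b, a, mode='full')
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        dy = peak[0] - (a.shape[0] - 1)
        dx = peak[1] - (a.shape[1] - 1)
        return dy, dx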
Active learning in camera calibration through vision measurement application
NASA Astrophysics Data System (ADS)
Li, Xiaoqin; Guo, Jierong; Wang, Xianchun; Liu, Changqing; Cao, Binfang
2017-08-01
Since cameras are increasingly used in scientific applications as well as in applications requiring precise visual information, effective calibration of such cameras is becoming more important. There are many reasons why the measurements of objects are not accurate. The largest is that the lens has a distortion. Another detrimental influence on the evaluation accuracy is caused by perspective distortions in the image, which occur whenever we cannot mount the camera perpendicular to the objects we want to measure. Overall, it is very important for students to understand how to correct lens distortions, that is, camera calibration. If the camera is calibrated, the images are rectified, and it is then possible to obtain undistorted measurements in world coordinates. This paper presents how students can develop a sense of active learning for the mathematical camera model besides the theoretical scientific basics. The authors present theoretical and practical lectures which have the goal of deepening the students' understanding of the mathematical models of area scan cameras and of building practical vision measurement processes by themselves.
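For a classroom exercise of the kind described, the calibration and rectification steps could be carried out with OpenCV (assumed available), as sketched below; the object/image point lists would come from detected calibration-target corners, and the snippet omits the target detection itself.

    import cv2   # OpenCV, assumed available

    def calibrate_and_undistort(object_points, image_points, image_size, img):
        """Estimate the camera matrix and lens distortion coefficients from
        calibration-target correspondences, then rectify a distorted image."""
        rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
            object_points, image_points, image_size, None, None)
        undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
        return undistorted, camera_matrix, dist_coeffs, rms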
Retinal fundus imaging with a plenoptic sensor
NASA Astrophysics Data System (ADS)
Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos
2018-02-01
Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera where an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve the depth perception and eliminate the need to manually refocus on the instruments during the surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses which projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image where everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
Positive-Buoyancy Rover for Under Ice Mobility
NASA Technical Reports Server (NTRS)
Leichty, John M.; Klesh, Andrew T.; Berisford, Daniel F.; Matthews, Jaret B.; Hand, Kevin P.
2013-01-01
A buoyant rover has been developed to traverse the underside of ice-covered lakes and seas. The rover operates at the ice/water interface and permits direct observation and measurement of processes affecting freeze- over and thaw events in lake and marine environments. Operating along the 2- D ice-water interface simplifies many aspects of underwater exploration, especially when compared to submersibles, which have difficulty in station-keeping and precision mobility. The buoyant rover consists of an all aluminum body with two aluminum sawtooth wheels. The two independent body segments are sandwiched between four actuators that permit isolation of wheel movement from movement of the central tether spool. For normal operations, the wheels move while the tether spool feeds out line and the cameras on each segment maintain a user-controlled fixed position. Typically one camera targets the ice/water interface and one camera looks down to the lake floor to identify seep sources. Each wheel can be operated independently for precision turning and adjustments. The rover is controlled by a touch- tablet interface and wireless goggles enable real-time viewing of video streamed from the rover cameras. The buoyant rover was successfully deployed and tested during an October 2012 field campaign to investigate methane trapped in ice in lakes along the North Slope of Alaska.
Endoscopic add-on stiffness probe for real-time soft surface characterisation in MIS.
Faragasso, A; Stilli, A; Bimbo, J; Noh, Y; Liu, H; Nanayakkara, T; Dasgupta, P; Wurdemann, H A; Althoefer, K
2014-01-01
This paper explores a novel stiffness sensor which is mounted on the tip of a laparoscopic camera. The proposed device is able to compute stiffness when interacting with soft surfaces. The sensor can be used in Minimally Invasive Surgery, for instance, to localise tumor tissue, which commonly has a higher stiffness when compared to healthy tissue. The purely mechanical sensor structure utilizes the functionality of an endoscopic camera to the maximum by visually analyzing the behavior of trackers within the field of view. Two pairs of spheres (used as easily identifiable features in the camera images) are connected to two springs with known but different spring constants. Four individual indenters attached to the spheres are used to palpate the surface. During palpation, the spheres move linearly towards the objective lens (i.e. the distance between lens and spheres changes), resulting in variations of their diameters in the camera images. By relating the measured diameters to the different spring constants, a mathematical model determines the surface stiffness in real time. Tests were performed using a surgical endoscope to palpate silicone phantoms of different stiffness. Results show that the accuracy of the sensing system increases with the softness of the examined tissue.
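A simplified version of the underlying mechanical model can be written down directly: if both indenters are pressed to the same depth and each spring force equals the tissue contact force, the two visually measured compressions give the surface stiffness in closed form. The sketch below encodes that simplified series-spring model and is not necessarily the exact model developed in the paper.

    def surface_stiffness(k1, x1, k2, x2):
        """Simplified series-spring model: two indenters on springs with
        constants k1, k2 compress by x1, x2 (inferred from the imaged sphere
        diameters) while pushed to a common depth d against the tissue.
        Equal contact force k_i*x_i = kt*(d - x_i) for both indenters yields
        kt without needing d explicitly."""
        return (k1 * x1 - k2 * x2) / (x2 - x1)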
Automatic source camera identification using the intrinsic lens radial distortion
NASA Astrophysics Data System (ADS)
Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.
2006-11-01
Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.
Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir
2016-06-01
This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
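The paper's calibration combines FPFH feature matching for the coarse alignment with ICP for refinement. As a generic illustration of the refinement stage only, here is a minimal point-to-point ICP in numpy/scipy (scipy assumed available) with identity initialisation and no outlier rejection, which is far simpler than the full pipeline.

    import numpy as np
    from scipy.spatial import cKDTree   # assumed available

    def icp(source, target, iterations=30):
        """Minimal point-to-point ICP: align source (Nx3) to target (Mx3) and
        return the accumulated 4x4 rigid transform."""
        tree = cKDTree(target)
        src = source.copy()
        T = np.eye(4)
        for _ in range(iterations):
            _, idx = tree.query(src)                 # nearest-neighbour correspondences
            tgt = target[idx]
            cs, ct = src.mean(axis=0), tgt.mean(axis=0)
            u, _, vt = np.linalg.svd((src - cs).T @ (tgt - ct))
            r = vt.T @ u.T
            if np.linalg.det(r) < 0:                 # avoid reflections
                vt[-1] *= -1
                r = vt.T @ u.T
            t = ct - r @ cs
            src = src @ r.T + t
            step = np.eye(4)
            step[:3, :3] = r
            step[:3, 3] = t
            T = step @ T
        return T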
NASA Astrophysics Data System (ADS)
Do, Trong Hop; Yoo, Myungsik
2018-01-01
This paper proposes a vehicle positioning system using LED street lights and two rolling shutter CMOS sensor cameras. In this system, identification codes for the LED street lights are transmitted to camera-equipped vehicles through a visible light communication (VLC) channel. Given that the camera parameters are known, the positions of the vehicles are determined based on the geometric relationship between the coordinates of the LEDs in the images and their real world coordinates, which are obtained through the LED identification codes. The main contributions of the paper are twofold. First, the collinear arrangement of the LED street lights makes traditional camera-based positioning algorithms fail to determine the position of the vehicles. In this paper, an algorithm is proposed to fuse data received from the two cameras attached to the vehicles in order to solve the collinearity problem of the LEDs. Second, the rolling shutter mechanism of the CMOS sensors combined with the movement of the vehicles creates image artifacts that may severely degrade the positioning accuracy. This paper also proposes a method to compensate for the rolling shutter artifact, and a high positioning accuracy can be achieved even when the vehicle is moving at high speeds. The performance of the proposed positioning system corresponding to different system parameters is examined by conducting Matlab simulations. Small-scale experiments are also conducted to study the performance of the proposed algorithm in real applications.
Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras
NASA Astrophysics Data System (ADS)
El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid
2015-03-01
In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system consisting of two unattached cameras to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of matching interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. In this way, all projection matrices are estimated, the matches between consecutive images are detected and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, more suitable for this kind of camera movement, was applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
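The triangulation step referred to above can be illustrated with the standard linear (DLT) method: each observation of a point in a view with a known 3x4 projection matrix contributes two rows to a homogeneous system whose least-squares solution is the 3D point. A minimal numpy sketch follows; it is a textbook formulation, not the authors' implementation.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3-D point from its pixel
        projections x1, x2 in two views with 3x4 projection matrices P1, P2."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]                 # homogeneous solution (smallest singular value)
        return X[:3] / X[3]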
Ripples in Rocks Point to Water
NASA Technical Reports Server (NTRS)
2004-01-01
This image taken by the Mars Exploration Rover Opportunity's panoramic camera shows the rock nicknamed 'Last Chance,' which lies within the outcrop near the rover's landing site at Meridiani Planum, Mars. The image provides evidence for a geologic feature known as ripple cross-stratification. At the base of the rock, layers can be seen dipping downward to the right. The bedding that contains these dipping layers is only one to two centimeters (0.4 to 0.8 inches) thick. In the upper right corner of the rock, layers also dip to the right, but exhibit a weak 'concave-up' geometry. These two features -- the thin, cross-stratified bedding combined with the possible concave geometry -- suggest small ripples with sinuous crest lines. Although wind can produce ripples, they rarely have sinuous crest lines and never form steep, dipping layers at this small scale. The most probable explanation for these ripples is that they were formed in the presence of moving water.
Crossbedding Evidence for Underwater Origin
Interpretations of cross-lamination patterns presented as clues to this martian rock's origin under flowing water are marked on images taken by the panoramic camera and microscopic imager on NASA's Opportunity. The red arrows (Figure 1) point to features suggesting cross-lamination within the rock called 'Last Chance' taken at a distance of 4.5 meters (15 feet) during Opportunity's 17th sol (February 10, 2004). The inferred sets of fine layers at angles to each other (cross-laminae) are up to 1.4 centimeters (half an inch) thick. For scale, the distance between two vertical cracks in the rock is about 7 centimeters (2.8 inches). The feature indicated by the middle red arrow suggests a pattern called trough cross-lamination, likely produced when flowing water shaped sinuous ripples in underwater sediment and pushed the ripples to migrate in one direction. The direction of the ancient flow would have been either toward or away from the line of sight from this perspective. The lower and upper red arrows point to cross-lamina sets that are consistent with underwater ripples in the sediment having moved in water that was flowing left to right from this perspective. The yellow arrows (Figure 2) indicate places in the panoramic camera view that correlate with places in the microscope's view of the same rock. The microscopic view (Figure 3) is a mosaic of some of the 152 microscopic imager frames of 'Last Chance' that Opportunity took on sols 39 and 40 (March 3 and 4, 2004). Figure 4 shows cross-lamination expressed by lines that trend downward from left to right, traced with black lines in the interpretive overlay. These cross-lamination lines are consistent with dipping planes that would have formed surfaces on the down-current side of migrating ripples. Interpretive blue lines indicate boundaries between possible sets of cross-laminae.
Simulating the Performance of Ground-Based Optical Asteroid Surveys
NASA Astrophysics Data System (ADS)
Christensen, Eric J.; Shelly, Frank C.; Gibbs, Alex R.; Grauer, Albert D.; Hill, Richard E.; Johnson, Jess A.; Kowalski, Richard A.; Larson, Stephen M.
2014-11-01
We are developing a set of asteroid survey simulation tools in order to estimate the capability of existing and planned ground-based optical surveys, and to test a variety of possible survey cadences and strategies. The survey simulator is composed of several layers, including a model population of solar system objects and an orbital integrator, a site-specific atmospheric model (including inputs for seeing, haze and seasonal cloud cover), a model telescope (with a complete optical path to estimate throughput), a model camera (including FOV, pixel scale, and focal plane fill factor) and model source extraction and moving object detection layers with tunable detection requirements. We have also developed a flexible survey cadence planning tool to automatically generate nightly survey plans. Inputs to the cadence planner include camera properties (FOV, readout time), telescope limits (horizon, declination, hour angle, lunar and zenithal avoidance), preferred and restricted survey regions in RA/Dec, ecliptic, and Galactic coordinate systems, and recent coverage by other asteroid surveys. Simulated surveys are created for a subset of current and previous NEO surveys (LINEAR, Pan-STARRS and the three Catalina Sky Survey telescopes), and compared against the actual performance of these surveys in order to validate the model’s performance. The simulator tracks objects within the FOV of any pointing that were not discovered (e.g. too few observations, too trailed, focal plane array gaps, too fast or slow), thus dividing the population into “discoverable” and “discovered” subsets, to inform possible survey design changes. Ongoing and future work includes generating a realistic “known” subset of the model NEO population, running multiple independent simulated surveys in coordinated and uncoordinated modes, and testing various cadences to find optimal strategies for detecting NEO sub-populations. These tools can also assist in quantifying the efficiency of novel yet unverified survey cadences (e.g. the baseline LSST cadence) that sparsely spread the observations required for detection over several days or weeks.
Plume propagation direction determination with SO2 cameras
NASA Astrophysics Data System (ADS)
Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich
2017-03-01
SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary results of SO2 camera measurements are time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density, but also the distance between the camera and the volcanic plume, has to be precisely known. This is because cameras only measure angular extents of objects while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction and thus the camera-plume distance are not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50 %. This is a source of error which is independent of the frequently quoted (approximate) compensation of apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles. Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known location of the SO2 source (i.e. volcanic vent) and camera position, the camera-plume distance can be determined. Besides being able to determine the plume propagation direction and thus the wind direction in the plume region directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes. In addition to theoretical studies we applied our method to SO2 flux measurements at Mt Etna and demonstrate that we obtain considerably more precise (up to a factor of 2 error reduction) SO2 fluxes. We conclude that studies on SO2 flux variability become more reliable by excluding the possible influences of propagation direction variations.
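The sensitivity to the camera-plume distance can be illustrated with a simple scaling argument: the plume-projected pixel size grows linearly with distance, and an image-derived plume speed does too, so a camera-speed-based flux scales with the distance squared. The sketch below encodes only this first-order scaling and is not the geometric correction developed in the paper.

    def flux_error_from_distance(true_dist_m, assumed_dist_m, speed_from_camera=True):
        """Relative SO2 flux error caused by a wrong camera-plume distance.
        Pixel size scales linearly with distance; if the plume speed is also
        derived from the images it scales with distance too, so the flux
        scales with the distance squared."""
        ratio = assumed_dist_m / true_dist_m
        factor = ratio ** 2 if speed_from_camera else ratio
        return factor - 1.0   # e.g. +0.44 means a 44 % overestimate

    # A 20 % overestimate of the distance inflates a camera-speed-based flux by ~44 %.
    print(flux_error_from_distance(10e3, 12e3))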
An inexpensive programmable illumination microscope with active feedback.
Tompkins, Nathan; Fraden, Seth
2016-02-01
We have developed a programmable illumination system capable of tracking and illuminating numerous objects simultaneously using only low-cost and reused optical components. The active feedback control software allows for a closed-loop system that tracks and perturbs objects of interest automatically. Our system uses a static stage where the objects of interest are tracked computationally as they move across the field of view allowing for a large number of simultaneous experiments. An algorithmically determined illumination pattern can be applied anywhere in the field of view with simultaneous imaging and perturbation using different colors of light to enable spatially and temporally structured illumination. Our system consists of a consumer projector, camera, 35-mm camera lens, and a small number of other optical and scaffolding components. The entire apparatus can be assembled for under $4,000.
Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras
Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin
2016-01-01
The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731
Incidents Prediction in Road Junctions Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Hajji, Tarik; Alami Hassani, Aicha; Ouazzani Jamil, Mohammed
2018-05-01
The implementation of an incident detection system (IDS) is an indispensable operation in the analysis of road traffic. However, the IDS can in no case replace classical monitoring by the human eye. The aim of this work is to increase the probability of detecting and predicting incidents in camera-monitored areas, given that these areas are monitored by many cameras but few supervisors. Our solution is to use Artificial Neural Networks (ANN) to analyze the trajectories of moving objects in captured images. We first propose a model of the trajectories and their characteristics, then we build a learning database of valid and invalid trajectories, and finally we carry out a comparative study to find the artificial neural network architecture that maximizes the recognition rate of valid and invalid trajectories.
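A minimal sketch of the general idea, not the authors' network or features: each trajectory is summarized by a few hand-picked descriptors and a small feed-forward network is trained to separate valid from invalid trajectories. The synthetic trajectories, the feature set, and the network size are assumptions for illustration only.

```python
# Hedged sketch: describe each trajectory by simple features and train a small
# feed-forward network to label it valid/invalid. Everything below (features,
# synthetic data, network size) is an illustrative assumption.
import numpy as np
from sklearn.neural_network import MLPClassifier

def trajectory_features(points):
    """points: (N, 2) array of image coordinates sampled over time."""
    steps = np.diff(points, axis=0)
    speeds = np.linalg.norm(steps, axis=1)
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    return np.array([
        speeds.mean(),                            # mean speed
        speeds.std(),                             # speed variability
        np.abs(np.diff(headings)).mean(),         # mean turning angle
        np.linalg.norm(points[-1] - points[0]),   # net displacement
    ])

rng = np.random.default_rng(0)
def synth(valid, n=50):
    """Toy trajectories: 'valid' ones move smoothly, 'invalid' ones jitter."""
    base = np.cumsum(np.ones((n, 2)) * 2.0, axis=0)
    noise = rng.normal(0, 0.5 if valid else 6.0, size=(n, 2))
    return base + noise

X = np.array([trajectory_features(synth(v)) for v in [True, False] * 100])
y = np.array([1, 0] * 100)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```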
FROM THE HISTORY OF PHYSICS: Georgii L'vovich Shnirman: designer of fast-response instruments
NASA Astrophysics Data System (ADS)
Bashilov, I. P.
1994-07-01
A biography is given of the outstanding Russian scientist Georgii L'vovich Shnirman, whose scientific life had been 'top secret'. He was an experimental physicist and instrument designer, the founder of many branches of the Soviet instrument-making industry, the originator of a theory of electric methods of integration and differentiation, a theory of astasisation of pendulums, and also of original measurement methods. He was the originator and designer of automatic systems for the control of the measuring apparatus used at nuclear test sites and of automatic seismic station systems employed in monitoring nuclear tests. He also designed the first loop oscilloscopes in the Soviet Union, high-speed photographic and cine cameras (streak cameras, etc.), and many other unique instruments, including some mounted on moving objects.
View of the SBS-4 communications satellite in orbit above the earth
1984-08-30
41D-39-068 (1 Sept 1984) --- Quickly moving away from the Space Shuttle Discovery is the Telstar 3 communications satellite, deployed September 1, 1984. The 41-D crew successfully completed three satellite placements, of which this was the last. Telstar was the second 41-D deployed satellite to be equipped with a payload assist module (PAM-D). The frame was exposed with a 70mm camera.
ERIC Educational Resources Information Center
Ochsner, Karl
2010-01-01
Students are moving away from content consumption to content production. Short movies are uploaded onto video social networking sites and shared around the world. Unfortunately they usually contain little to no educational value, lack a narrative and are rarely created in the science classroom. According to new Arizona Technology standards and…
Optico-photographic measurements of airplane deformations
NASA Technical Reports Server (NTRS)
Kussner, Hans Georg
1931-01-01
The deformation of aircraft wings is measured by photographically recording a series of bright shots on a moving paper band sensitive to light. Alternating deformations, especially vibrations, can thus be measured in operation, unaffected by inertia. A handy recording camera, the optograph, was developed by the static division of the D.V.L. (German Experimental Institute for Aeronautics) for the employment of this method of measurement on airplanes in flight.
From Image Analysis to Computer Vision: Motives, Methods, and Milestones.
1998-07-01
images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial...photographs; but by the 1960’s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence...scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision
Apollo 16 lunar module 'Orion' photographed from distance during EVA
NASA Technical Reports Server (NTRS)
1972-01-01
The Apollo 16 Lunar Module 'Orion' is photographed from a distance by Astronaut Charles M. Duke Jr., lunar module pilot, aboard the moving Lunar Roving Vehicle. Astronauts Duke and John W. Young, commander, were returning from the third Apollo 16 extravehicular activity (EVA-3). The RCA color television camera mounted on the LRV is in the foreground. A portion of the LRV's high-gain antenna is at top left.
Foam Experiment Hardware are Flown on Microgravity Rocket MAXUS 4
NASA Astrophysics Data System (ADS)
Lockowandt, C.; Löth, K.; Jansson, O.; Holm, P.; Lundin, M.; Schneider, H.; Larsson, B.
2002-01-01
The Foam module was developed by Swedish Space Corporation and was used for performing foam experiments on the sounding rocket MAXUS 4 launched from Esrange on 29 April 2001. The development and launch of the module were financed by ESA. Four different foam experiments were performed: two on aqueous foams by Doctor Michele Adler from LPMDI, University of Marne la Vallée, Paris, and two on non-aqueous foams by Doctor Bengt Kronberg from YKI, Institute for Surface Chemistry, Stockholm. The foam was generated in four separate foam systems and monitored in microgravity with CCD cameras. The purpose of the experiment was to generate and study the foam in microgravity. Owing to the absence of gravity there is no drainage in the foam, so processes in the foam can be studied without drainage effects. Four solutions with various stabilities were investigated. The aqueous solutions contained water, SDS (sodium dodecyl sulphate) and dodecanol. The organic solutions contained ethylene glycol, a cationic surfactant (cetyl trimethyl ammonium bromide, CTAB) and decanol. Carbon dioxide was used to generate the aqueous foam and nitrogen was used to generate the organic foam. The experiment system comprised four complete, independent systems, each with an injection unit, experiment chamber and gas system. The main part of the experiment system is the experiment chamber, where the foam is generated and monitored. The chamber's inner dimensions are 50x50x50 mm and its front and back walls are made of glass. The front window is used for monitoring the foam and the back window is used for back illumination. The front glass has etched crosses on the inside as reference points. At the bottom of the cell is a glass frit and at the top is a gas inlet/outlet. The foam was generated by injecting the experiment liquid into the glass frit at the bottom of the experiment chamber. Simultaneously, gas was blown through the glass frit and a small amount of foam was generated. This procedure was performed at 10 bar. The pressure in the experiment chamber was then lowered to approximately 0.1 bar to expand the foam into a dry foam that filled the experiment chamber. The foam was regenerated during flight by pressurising the cell and repeating the foam generation procedure. The module had 4 individual experiment chambers for the four different solutions. The four experiment chambers were controlled individually, with individual experiment parameters and procedures. The gas system comprises on/off valves and adjustable valves to control the pressure, gas flow and liquid flow during foam generation. The gas system can be divided into four sections, each section serving one experiment chamber. The sections are partly connected in two pairs with common inlet and outlet. Each pair is supplied by a 1 l gas bottle filled to a pressure of 40 bar and a pressure regulator lowering the pressure from 40 bar to 10 bar. Two sections are connected to the same outlet. The gas outlets from the experiment chambers are connected to two symmetrically placed outlets on the outer structure, with diffusers so as not to disturb the g-levels. The foam in each experiment chamber was monitored with one tomography camera and one overview camera (8 CCD cameras in total). The tomography camera is placed on a translation table, which makes it possible to move it in the depth direction of the experiment chamber. The video signals from the 8 CCD cameras were stored onboard with two DV recorders. Two video signals were also transmitted to ground for real-time evaluation and operation of the experiment.
The camera signal transmitted to ground could be selected by telecommand. With the help of the tomography system it was possible to take sequences of images of the foam at different depths. These sequences of images are used for constructing a 3-D model of the foam after flight. The overview camera has a fixed position and a field of view that covers the entire experiment chamber. This camera is used for monitoring the generation of foam and the overall behaviour of the foam. The experiment was performed successfully, with foam generation in all 4 experiment chambers. Foam was also regenerated during flight with telecommands. The experiment data are under evaluation.
A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-01-01
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
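The paper's two-step initialization scheme is not reproduced here; for orientation, the sketch below shows the widely used inverse-depth parameterization, one standard way to initialize a landmark in a filter-based monocular SLAM system from a single bearing-only observation. The camera intrinsics and the inverse-depth prior are illustrative values.

```python
# Reference sketch of inverse-depth landmark initialization (a common scheme,
# not the paper's two-step method). A single pixel observation becomes a
# 6-vector state (anchor position, azimuth, elevation, inverse depth) with a
# broad prior on the inverse depth. K and the priors are assumed values.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],    # assumed pinhole intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def init_inverse_depth_landmark(pixel, cam_pos, cam_R,
                                rho0=0.1, sigma_rho=0.5):
    """Return the landmark state (x, y, z, azimuth, elevation, rho) and the
    inverse-depth prior variance from one bearing-only observation."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = cam_R @ ray_cam                     # rotate ray into world frame
    azimuth = np.arctan2(ray_world[0], ray_world[2])
    elevation = np.arctan2(-ray_world[1], np.hypot(ray_world[0], ray_world[2]))
    state = np.array([cam_pos[0], cam_pos[1], cam_pos[2],
                      azimuth, elevation, rho0])    # rho0: prior inverse depth
    return state, sigma_rho ** 2

state, var = init_inverse_depth_landmark(pixel=(400, 220),
                                         cam_pos=np.zeros(3),
                                         cam_R=np.eye(3))
print(state, var)
```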
Adams, Noah S.; Smith, Collin; Plumb, John M.; Hansen, Gabriel S.; Beeman, John W.
2015-07-06
This report describes the initial year of a 2-year study to determine the feasibility of using acoustic cameras to monitor fish movements to help inform decisions about fish passage at Cougar Dam near Springfield, Oregon. Specifically, we used acoustic cameras to measure fish presence, travel speed, and direction adjacent to the water temperature control tower in the forebay of Cougar Dam during the spring (May, June, and July) and fall (September, October, and November) of 2013. Cougar Dam is a high-head flood-control dam, and the water temperature control tower enables depth-specific water withdrawals to facilitate adjustment of water temperatures released downstream of the dam. The acoustic cameras were positioned at the upstream entrance of the tower to monitor free-ranging subyearling and yearling-size juvenile Chinook salmon (Oncorhynchus tshawytscha). Because of the large size discrepancy, we could distinguish juvenile Chinook salmon from their predators, which enabled us to measure predators and prey in areas adjacent to the entrance of the tower. We used linear models to quantify and assess operational and environmental factors—such as time of day, discharge, and water temperature—that may influence juvenile Chinook salmon movements within the beam of the acoustic cameras. Although extensive milling behavior of fish near the structure may have masked directed movement of fish and added unpredictability to fish movement models, the acoustic-camera technology enabled us to ascertain the general behavior of discrete size classes of fish. Fish travel speed, direction of travel, and counts of fish moving toward the water temperature control tower primarily were influenced by the amount of water being discharged through the dam.
High frequency modal identification on noisy high-speed camera data
NASA Astrophysics Data System (ADS)
Javh, Jaka; Slavič, Janko; Boltežar, Miha
2018-01-01
Vibration measurements using optical full-field systems based on high-speed footage are typically heavily burdened by noise, as the displacement amplitudes of the vibrating structures are often very small (in the range of micrometers, depending on the structure). The modal information is troublesome to measure as the structure's response is close to, or below, the noise level of the camera-based measurement system. This paper demonstrates modal parameter identification for such noisy measurements. It is shown that by using the Least-Squares Complex-Frequency method combined with the Least-Squares Frequency-Domain method, identification at high frequencies is still possible. By additionally incorporating a more precise sensor to identify the eigenvalues, a hybrid accelerometer/high-speed camera mode shape identification is possible even below the noise floor. An accelerometer measurement is used to identify the eigenvalues, while the camera measurement is used to produce the full-field mode shapes close to 10 kHz. The identified modal parameters improve the quality of the measured modal data and serve as a reduced model of the structure's dynamics.
A compressed sensing X-ray camera with a multilayer architecture
NASA Astrophysics Data System (ADS)
Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.
2018-01-01
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
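The circuit-level ROPS scheme is not reproduced here, but the reason sparse sampling can work is standard compressed sensing. The sketch below measures a synthetic sparse "hit" vector through a random matrix with far fewer measurements than pixels and recovers it with plain iterative soft thresholding (ISTA); sizes, the regularization weight, and the support threshold are arbitrary choices.

```python
# Minimal compressed-sensing demo (not the paper's readout): a sparse signal
# is measured through a random matrix and recovered with ISTA.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                 # pixels, measurements, true hits
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
y = A @ x_true                              # compressed measurements

def ista(A, y, lam=0.01, iters=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)     # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
print("recovered support:", np.flatnonzero(x_hat > 0.1))
print("true support:     ", np.flatnonzero(x_true))
```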
Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras
Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong
2014-01-01
Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect the robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679
Localizing people in crosswalks with a moving handheld camera: proof of concept
NASA Astrophysics Data System (ADS)
Lalonde, Marc; Chapdelaine, Claude; Foucher, Samuel
2015-02-01
Although the tracking of people or objects in uncontrolled environments has received considerable attention in the literature, the accurate localization of a subject with respect to a reference ground plane remains a major issue. This study describes an early prototype for the tracking and localization of pedestrians with a handheld camera. One application envisioned here is to analyze the trajectories of blind people going across long crosswalks when following different audio signals as a guide. This kind of study is generally conducted manually, with an observer following a subject and logging his/her current position at regular time intervals with respect to a white grid painted on the ground. This study aims at automating the manual logging activity: with a marker attached to the subject's foot, a video of the crossing is recorded by a person following the subject, and a semi-automatic tool analyzes the video and estimates the trajectory of the marker with respect to the painted markings. Challenges include robustness to variations in lighting conditions (shadows, etc.), occlusions, and changes in camera viewpoint. Results are promising when compared to GNSS measurements.
McCurdy, Neil J.; Griswold, William G; Lenert, Leslie A.
2005-01-01
The first moments at a disaster scene are chaotic. The command center initially operates with little knowledge of hazards, geography and casualties, building up knowledge of the event slowly as information trickles in by voice radio channels. RealityFlythrough is a tele-presence system that stitches together live video feeds in real-time, using the principle of visual closure, to give command center personnel the illusion of being able to explore the scene interactively by moving smoothly between the video feeds. Using RealityFlythrough, medical, fire, law enforcement, hazardous materials, and engineering experts may be able to achieve situational awareness earlier, and better manage scarce resources. The RealityFlythrough system is composed of camera units with off-the-shelf GPS and orientation systems and a server/viewing station that offers access to images collected by the camera units in real time by position/orientation. In initial field testing using an experimental mesh 802.11 wireless network, two camera unit operators were able to create an interactive image of a simulated disaster scene in about five minutes. PMID:16779092
Miniature Spatial Heterodyne Raman Spectrometer with a Cell Phone Camera Detector.
Barnett, Patrick D; Angel, S Michael
2017-05-01
A spatial heterodyne Raman spectrometer (SHRS) with millimeter-sized optics has been coupled with a standard cell phone camera as a detector for Raman measurements. The SHRS is a dispersive-based interferometer with no moving parts and the design is amenable to miniaturization while maintaining high resolution and large spectral range. In this paper, a SHRS with 2.5 mm diffraction gratings has been developed with 17.5 cm-1 theoretical spectral resolution. The footprint of the SHRS is orders of magnitude smaller than the footprint of charge-coupled device (CCD) detectors typically employed in Raman spectrometers, thus smaller detectors are being explored to shrink the entire spectrometer package. This paper describes the performance of a SHRS with 2.5 mm wide diffraction gratings and a cell phone camera detector, using only the cell phone's built-in optics to couple the output of the SHRS to the sensor. Raman spectra of a variety of samples measured with the cell phone are compared to measurements made using the same miniature SHRS with high-quality imaging optics and a high-quality, scientific-grade, thermoelectrically cooled CCD.
NASA Technical Reports Server (NTRS)
2005-01-01
Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. This time-lapse composite, acquired the evening of Spirit's martian sol 590 (Aug. 30, 2005) from a perch atop 'Husband Hill' in Gusev Crater, shows Phobos, the brighter moon, on the left, and Deimos, the dimmer moon, on the right. In this sequence of images obtained every 170 seconds, both moons move from top to bottom. The bright star Aldebaran forms a trail on the right, along with some other stars in the constellation Taurus. Most of the other streaks in the image mark the collision of cosmic rays with pixels in the camera. Scientists will use images of the two moons to better map their orbital positions, learn more about their composition, and monitor the presence of nighttime clouds or haze. Spirit took the six images that make up this composite using Spirit's panoramic camera with the camera's broadband filter, which was designed specifically for acquiring images under low-light conditions.
An electronic pan/tilt/zoom camera system
NASA Technical Reports Server (NTRS)
Zimmermann, Steve; Martin, H. Lee
1991-01-01
A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan-tilt-rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
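The original device performed the correction in dedicated hardware; the sketch below shows the same mapping in software under an assumed equidistant fisheye model (r = f·theta): for each pixel of a virtual pan/tilt/zoom view, the viewing ray is rotated and projected back into fisheye image coordinates, which could then be sampled by interpolation. All parameters are illustrative.

```python
# Software sketch of fisheye dewarping for a virtual pan/tilt/zoom view,
# assuming an equidistant fisheye projection (r = f * theta). Not the original
# hardware implementation; focal lengths and image geometry are made up.
import numpy as np

def ptz_to_fisheye(out_shape, pan, tilt, zoom, fisheye_center, f_fisheye):
    """Return (u, v) fisheye coordinates for every pixel of the virtual view."""
    h, w = out_shape
    fx = zoom * w                                  # virtual focal length
    ys, xs = np.mgrid[0:h, 0:w]
    rays = np.stack([(xs - w / 2) / fx, (ys - h / 2) / fx, np.ones((h, w))], -1)
    # Rotate rays by pan (about y) then tilt (about x).
    cp, sp, ct, st = np.cos(pan), np.sin(pan), np.cos(tilt), np.sin(tilt)
    R = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]) @ \
        np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    rays = rays @ R.T
    theta = np.arccos(np.clip(rays[..., 2] / np.linalg.norm(rays, axis=-1), -1, 1))
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    r = f_fisheye * theta                          # equidistant projection
    u = fisheye_center[0] + r * np.cos(phi)
    v = fisheye_center[1] + r * np.sin(phi)
    return u, v

u, v = ptz_to_fisheye((240, 320), pan=0.4, tilt=0.2, zoom=1.0,
                      fisheye_center=(512, 512), f_fisheye=320)
print(u.shape, float(u.mean()), float(v.mean()))
```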
Estimation of color modification in digital images by CFA pattern change.
Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-03-10
Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Observation of Possible Lava Tube Skylights by SELENE cameras
NASA Astrophysics Data System (ADS)
Haruyama, Junichi; Hiesinger, Harald; van der Bogert, Carolyn
We have discovered three deep hole structures on the Moon in images from the Terrain Camera and Multi-band Imager on SELENE. These holes have large depth-to-diameter ratios: the Marius Hills Hole (MHH) is 65 m in diameter and 88-90 m deep, the Mare Tranquillitatis Hole (MTH) is 120 x 110 m in diameter and 180 m deep, and the Mare Ingenii Hole (MIH) is 140 x 110 m in diameter and deeper than 90 m. Neither volcanic material erupted from the holes nor dike-related pit craters are seen around them. They are possible lava tube skylights. These holes, and the tubes possibly connected to them, are of great scientific interest and have high potential as sites for lunar bases.
NASA Astrophysics Data System (ADS)
Maroto, Oscar; Diez-Merino, Laura; Carbonell, Jordi; Tomàs, Albert; Reyes, Marcos; Joven-Alvarez, Enrique; Martín, Yolanda; Morales de los Ríos, J. A.; del Peral, Luis; Rodríguez-Frías, M. D.
2014-07-01
The Japanese Experiment Module (JEM) Extreme Universe Space Observatory (EUSO) will be launched and attached to the Japanese module of the International Space Station (ISS). Its aim is to observe UV photon tracks produced by ultra-high energy cosmic rays developing in the atmosphere and producing extensive air showers. The key element of the instrument is a very wide-field, very fast, large-lens telescope that can detect extreme energy particles with energy above 10^19 eV. The Atmospheric Monitoring System (AMS), comprising, among others, the Infrared Camera (IRCAM), which is the Spanish contribution, plays a fundamental role in the understanding of the atmospheric conditions in the Field of View (FoV) of the telescope. It is used to detect the temperature of clouds and to obtain the cloud coverage and cloud top altitude during the observation period of the JEM-EUSO main instrument. SENER is responsible for the preliminary design of the Front End Electronics (FEE) of the Infrared Camera, based on an uncooled microbolometer, and the manufacturing and verification of the prototype model. This paper describes the flight design drivers and key factors to achieve the target features, namely, detector biasing with electrical noise better than 100 μV from 1 Hz to 10 MHz, temperature control of the microbolometer from 10°C to 40°C with stability better than 10 mK over 4.8 hours, low-noise high-bandwidth amplifier adaptation of the microbolometer output to differential input before analog-to-digital conversion, housekeeping generation, microbolometer control, and image accumulation for noise reduction. It also shows the modifications implemented in the FEE prototype design to perform a trade-off of different technologies, such as the convenience of using linear or switched regulation for the temperature control, the possibility to check the camera performance when both the microbolometer and the analog electronics are moved further away from the power and digital electronics, and the addition of switching regulators to demonstrate that the design is immune to the electrical noise the switching converters introduce. Finally, the results obtained during the verification phase are presented: FEE limitations, verification results, including FEE noise for each channel and its equivalent NETD and the microbolometer temperature stability achieved, the technologies trade-off, lessons learnt, and design improvements to implement in future project phases.
Höflin, F; Ledermann, H; Noelpp, U; Weinreich, R; Rösler, H
1989-12-01
There is a recent need to study glucose metabolism of the heart in ischemic, as well as in "hibernating or stunned" myocardium, and compare it with that in perfusion studies. In non-positron emission tomography centers, positron imaging is possible with a standard Anger-type camera if proper collimation and adequate shielding of the camera crystal can be achieved. For the study with fast-decaying isotopes, seven-pinhole tomography (7PHT), a limited-angle method designed for transaxial tomography of the left ventricle using a nonrotating camera, is well suited, because projections are acquired simultaneously. Individual adjustment (patient supine) of the camera's view axis (CAx) with the left ventricular axis (LVAx) gives excellent results: sensitivity for CHD 82%, specificity 72% in a prospective 201Tl study (48 patients, x-ray coronarography as reference). Good alignment of CAx with LVAx is also achieved with the patient prone in LAO in a hammock above the camera surface. In this setting additional lead shielding of the camera is possible using a table reinforced with 5 cm of lead with a central hole for the 7PH-collimator, which has a special lead inlay. This allows utilization of the 511 keV emitter 18F-FDG, which, with a half-life of 109 minutes, can be transported a reasonable distance from the production site. System sensitivity and resolution for 18F were found to be comparable to those for 201Tl, 99mTc, and 123I using a phantom. First clinical examinations after 201Tl stress/redistribution studies showed increased 18F-FDG uptake in ischemic heart segments, as well as in "hibernating" nonperfused or "stunned" myocardium.
A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares
NASA Technical Reports Server (NTRS)
Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.
1989-01-01
Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X Ray Burst Spectrometer, with x ray instruments which will be available on the Gamma Ray Observatory and eventually with the Gamma Ray Imaging Device (GRID), and with the High Resolution Gamma-Ray and Hard X Ray Spectrometer (HIREGS) which are being developed for the Max '91 program. The digital camera has recently proven to be successful as a one camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway as are analyses of the campaign data.
Robot acting on moving bodies (RAMBO): Preliminary results
NASA Technical Reports Server (NTRS)
Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madju; Harwood, David
1989-01-01
A robot system called RAMBO is being developed. It is equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a moving object. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to relative locations near the object sufficient for achieving the tasks. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Then trajectories are created using parametric cubic splines between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
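As a small illustration of one step named above, the sketch below builds a parametric cubic (Hermite) segment between an initial and a goal tool state with prescribed end velocities, the kind of smooth piece a trajectory planner can stitch together. It is not RAMBO's planner; positions and velocities are made-up values.

```python
# Parametric cubic (Hermite) segment between two tool states with given end
# velocities. Illustrative sketch only; values are arbitrary.
import numpy as np

def cubic_hermite(p0, p1, v0, v1, n=50):
    """Cubic segment from p0 to p1 with end velocities v0, v1 (per unit time)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * v0 + h01 * p1 + h11 * v1

p0, p1 = np.array([0.0, 0.0, 0.5]), np.array([0.3, 0.2, 0.4])   # tool positions (m)
v0, v1 = np.zeros(3), np.array([0.05, 0.0, 0.0])                # end velocities
path = cubic_hermite(p0, p1, v0, v1)
print(path[0], path[-1])    # starts at p0, ends at p1
```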
Experimental demonstration of a retro-reflective laser communication link on a mobile platform
NASA Astrophysics Data System (ADS)
Nikulin, Vladimir V.; Malowicki, John E.; Khandekar, Rahul M.; Skormin, Victor A.; Legare, David J.
2010-02-01
Successful pointing, acquisition, and tracking (PAT) are crucial for the implementation of laser communication links between ground and aerial vehicles. This technology has advantages over the traditional radio frequency communication, thus justifying the research efforts presented in this paper. The authors have been successful in the development of a high precision, agile, digitally controlled two-degree-of-freedom electromechanical system for positioning of optical instruments, cameras, telescopes, and communication lasers. The centerpiece of this system is a robotic manipulator capable of singularity-free operation throughout the full hemisphere range of yaw/pitch motion. The availability of efficient two-degree-of-freedom positioning facilitated the development of an optical platform stabilization system capable of rejecting resident vibrations with the angular and frequency range consistent with those caused by a ground vehicle moving on a rough terrain. This technology is being utilized for the development of a duplex mobile PAT system demonstrator that would provide valuable feedback for the development of practical laser communication systems intended for fleets of moving ground, and possibly aerial, vehicles. In this paper, a tracking system providing optical connectivity between stationary and mobile ground platforms is described. It utilizes mechanical manipulator to perform optical platform stabilization and initial beam positioning, and optical tracking for maintaining the line-of-sight communication. Particular system components and the challenges of their integration are described. The results of field testing of the resultant system under practical conditions are presented.
SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, S; Rao, A; Wendt, R
Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
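A hedged sketch of the frame-to-frame step described in the Methods, assembled from standard OpenCV building blocks rather than the authors' code: sparse features are tracked between consecutive endoscope frames, the essential matrix gives the relative rotation and translation (up to scale), and the tracked points are triangulated to recover structure. The camera matrix K is an assumed calibration.

```python
# Frame-to-frame camera tracking sketch using standard OpenCV calls; not the
# authors' implementation. K below is an assumed endoscope calibration.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def frame_to_frame(prev_gray, cur_gray):
    # Detect features in the previous frame and track them into the current one.
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts0, None)
    good0 = pts0[status.ravel() == 1].reshape(-1, 2)
    good1 = pts1[status.ravel() == 1].reshape(-1, 2)
    # Estimate the essential matrix and recover the relative pose (scale-free).
    E, _ = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K)
    # Triangulate the tracked points to rebuild surrounding structure (up to scale).
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P0, P1, good0.T, good1.T)
    pts3d = (pts4d[:3] / pts4d[3]).T
    return R, t, pts3d
```

With consecutive grayscale frames prev_gray and cur_gray, frame_to_frame(prev_gray, cur_gray) returns a relative pose and a point cloud up to an overall scale; as the abstract notes, tying the recovered path to absolute patient geometry is a separate, second component.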
Micro-Imagers for Spaceborne Cell-Growth Experiments
NASA Technical Reports Server (NTRS)
Behar, Alberto; Matthews, Janet; SaintAnge, Beverly; Tanabe, Helen
2006-01-01
A document discusses selected aspects of a continuing effort to develop five micro-imagers for both still and video monitoring of cell cultures to be grown aboard the International Space Station. The approach taken in this effort is to modify and augment pre-existing electronic micro-cameras. Each such camera includes an image-detector integrated-circuit chip, signal-conditioning and image-compression circuitry, and connections for receiving power from, and exchanging data with, external electronic equipment. Four white and four multicolor light-emitting diodes are to be added to each camera for illuminating the specimens to be monitored. The lens used in the original version of each camera is to be replaced with a shorter-focal-length, more-compact singlet lens to make it possible to fit the camera into the limited space allocated to it. Initially, the lenses in the five cameras are to have different focal lengths: the focal lengths are to be 1, 1.5, 2, 2.5, and 3 cm. Once one of the focal lengths is determined to be the most nearly optimum, the remaining four cameras are to be fitted with lenses of that focal length.
An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories
NASA Astrophysics Data System (ADS)
Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji
2008-11-01
We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens. One CCD was located at each of the two outputs of the beam splitter. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was sequentially switched. This increased the recording capacity to 288 images, an increase of a factor of two over that of the conventional ultrahigh-speed camera. A problem with the camera was that the incident light on each CCD was reduced by a factor of two by the beam splitter. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using a beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.
Experimental results on the enhanced backscatter phenomenon and its dynamics
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Nelson, William; Ko, Jonathan; Davis, Christopher C.
2014-10-01
Enhanced backscatter effects have long been predicted theoretically and experimentally demonstrated. The reciprocity of a turbulent channel generates a group of paired rays with identical trajectory and phase information that leads to a region in phase space with double intensity and scintillation index. Though simulation work based on phase screen models has demonstrated the existence of the phenomenon, few experimental results have been published describing its characteristics, and possible applications of the enhanced backscatter phenomenon are still unclear. With the development of commercially available high powered lasers and advanced cameras with high frame rates, we have successfully captured the enhanced backscatter effects from different reflection surfaces. In addition to static observations, we have also tilted and pre-distorted the transmitted beam at various frequencies to track the dynamic properties of the enhanced backscatter phenomenon to verify its possible application in guidance and beam and image correction through atmospheric turbulence. In this paper, experimental results will be described, and discussions on the principle and applications of the phenomenon will be included. Enhanced backscatter effects are best observed at certain levels of turbulence (Cn^2 ≈ 10^-13 m^-2/3), and show significant potential for providing self-guidance in beam correction that doesn't introduce additional costs (unlike providing a beacon laser). Possible applications of this phenomenon include tracking fast-moving objects with lasers, long distance (>1 km) alignment, and focusing a high-power corrected laser beam over long distances.
High resolution bone mineral densitometry with a gamma camera
NASA Technical Reports Server (NTRS)
Leblanc, A.; Evans, H.; Jhingran, S.; Johnson, P.
1983-01-01
A technique by which the regional distribution of bone mineral can be determined in bone samples from small animals is described. The technique employs an Anger camera interfaced to a medical computer. High resolution imaging is possible by producing magnified images of the bone samples. Regional densitometry of femurs from oophorectomised animals illustrates the detection of regional bone mineral loss.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Michael; Nemati, Bijan; Zhai, Chengxing
We present an approach that significantly increases the sensitivity for finding and tracking small and fast near-Earth asteroids (NEAs). This approach relies on a combined use of a new generation of high-speed cameras which allow short, high frame-rate exposures of moving objects, effectively 'freezing' their motion, and a computationally enhanced implementation of the 'shift-and-add' data processing technique that helps to improve the signal-to-noise ratio (SNR) for detection of NEAs. The SNR of a single short exposure of a dim NEA is insufficient to detect it in one frame, but by computationally searching for an appropriate velocity vector, shifting successive frames relative to each other and then co-adding the shifted frames in post-processing, we synthetically create a long-exposure image as if the telescope were tracking the object. This approach, which we call 'synthetic tracking,' enhances the familiar shift-and-add technique with the ability to do a wide blind search, detect, and track dim and fast-moving NEAs in near real time. We discuss also how synthetic tracking improves the astrometry of fast-moving NEAs. We apply this technique to observations of two known asteroids conducted on the Palomar 200 inch telescope and demonstrate improved SNR and a 10-fold improvement of astrometric precision over the traditional long-exposure approach. In the past 5 yr, about 150 NEAs with absolute magnitudes H = 28 (∼10 m in size) or fainter have been discovered. With an upgraded version of our camera and a field of view of (28 arcmin)^2 on the Palomar 200 inch telescope, synthetic tracking could allow detecting up to 180 such objects per night, including very small NEAs with sizes down to 7 m.
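A much-simplified sketch of the shift-and-add idea behind synthetic tracking: each short exposure is shifted by a candidate velocity and the stack is co-added; the velocity giving the strongest co-added peak is kept. Frame size, the injected source brightness, and the velocity grid are illustrative assumptions, not the pipeline described above.

```python
# Toy shift-and-add / synthetic-tracking demo: a mover too faint for a single
# frame becomes detectable after co-adding frames shifted by the right velocity.
import numpy as np

rng = np.random.default_rng(2)
n_frames, size = 20, 64
true_v = (0.7, -0.4)                     # pixels per frame (x, y)

frames = rng.normal(0.0, 1.0, (n_frames, size, size))   # background noise
for i in range(n_frames):                                # inject a faint mover
    x = int(round(20 + true_v[0] * i)); y = int(round(40 + true_v[1] * i))
    frames[i, y, x] += 1.5                               # low SNR in one frame

def shift_and_add(frames, vx, vy):
    acc = np.zeros_like(frames[0])
    for i, f in enumerate(frames):
        acc += np.roll(f, (int(round(-vy * i)), int(round(-vx * i))), axis=(0, 1))
    return acc

best = max(((vx, vy) for vx in np.arange(-1, 1.01, 0.1)
                      for vy in np.arange(-1, 1.01, 0.1)),
           key=lambda v: shift_and_add(frames, *v).max())
print("recovered velocity (px/frame):", tuple(round(v, 2) for v in best),
      "true:", true_v)
```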
2014-10-15
ISS041E074458 (10/15/2014) --- NASA Flight Engineers Reid Wiseman and Barry Wilmore ventured out to the starboard truss of the International Space Station to remove and replace a power regulator known as a sequential shunt unit, which failed back in mid-May. The two spacewalkers also moved TV and camera equipment in preparation for the relocation of the Leonardo Permanent Multipurpose Module to accommodate the installation of new docking adapters for future commercial crew vehicles.
3D Lasers Increase Efficiency, Safety of Moving Machines
NASA Technical Reports Server (NTRS)
2015-01-01
Canadian company Neptec Design Group Ltd. developed its Laser Camera System, used by shuttles to render 3D maps of their hulls for assessing potential damage. Using NASA funding, the firm incorporated LiDAR technology and created the TriDAR 3D sensor. Its commercial arm, Neptec Technologies Corp., has sold the technology to Orbital Sciences, which uses it to guide its Cygnus spacecraft during rendezvous and dock operations at the International Space Station.
Daniel Barry and Ellen Ochoa on middeck with food
2017-04-20
S96-E-5116 (1 June 1999) --- Astronauts Daniel T. Barry and Ellen Ochoa, both mission specialists, are pictured onboard the Space Shuttle Discovery early on June 1. Most of the seven crew members later moved over to the International Space Station (ISS) to perform tasks designed to ready the station for human tended operations. The scene was recorded with an electronic still camera (ESC) at 04:12:12 GMT, June 1, 1999.
NDT of railway components using induction thermography
NASA Astrophysics Data System (ADS)
Netzelmann, U.; Walle, G.; Ehlen, A.; Lugin, S.; Finckbohner, M.; Bessert, S.
2016-02-01
Induction or eddy current thermography is used to detect surface cracks in ferritic steel. The technique is applied to detect surface cracks in rails from a moving test car. Cracks were detected at a train speed between 2 and 15 km/h. An automated demonstrator system for testing railway wheels after production is described. While the wheel is rotated, a robot guides the detection unit consisting of inductor and infrared camera over the surface.
VizieR Online Data Catalog: The multiplicity of M dwarfs in young moving groups (Shan+, 2017)
NASA Astrophysics Data System (ADS)
Shan, Y.; Yee, J. C.; Bowler, B. P.; Cieza, L. A.; Montet, B. T.; Canovas, H.; Liu, M. C.; Close, L. M.; Hinz, P. M.; Males, J. R.; Morzinski, K. M.; Vaz, A.; Bailey, V. P.; Follette K. B.; MagAO Team
2018-04-01
Adaptive optics observations were conducted on the 6.5m Magellan Clay Telescope at the Las Campanas Observatory in Chile using the MagAO instrument. Images were taken with two science cameras simultaneously: Clio in the near-infrared, and VisAO in the optical. The MagAO/Clio observations in H or Ks bands span 2014 Apr 17-21 to 2015 Nov 26-27. (4 data files).
Object recognition for autonomous robot utilizing distributed knowledge database
NASA Astrophysics Data System (ADS)
Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji
2003-10-01
In this paper we present a novel method of object recognition utilizing a remote knowledge database for an autonomous robot. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch and move the target object from different directions. Referring to a remote knowledge database of geometry and materials, the robot observes and handles the objects to understand them, including their physical characteristics.
Feature Quantization and Pooling for Videos
2014-05-01
does not score high on this metric. The exceptions are videos where objects move, for example, the ice skaters ("ice") and the tennis player, tracked...
NASA Technical Reports Server (NTRS)
2004-01-01
3 February 2004 Wind is the chief agent of change on Mars today. Wind blows dust and it can move coarser sediment such as sand and silt. This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows bright ripples or small dunes on the floors of troughs northeast of Isidis Planitia near 31.1oN, 244.6oW. The picture covers an area 3 km (1.9 mi) wide; sunlight illuminates the scene from the lower left.
Whirlwind Drama During Spirit's 496th Sol
NASA Technical Reports Server (NTRS)
2005-01-01
This movie clip shows a dust devil growing in size and blowing across the plain inside Mars' Gusev Crater. The clip consists of frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the morning of the rover's 496th martian day, or sol (May 26, 2005). Contrast has been enhanced for anything in the images that changes from frame to frame, that is, for the dust moved by wind.
Advanced Standoff Interdiction Weapon and Sensor System. Volume 1
1972-06-15
...was conceived as an interdiction system to counter enemy infiltration along the waterways and roads of Southeast Asia. The sensors were selected to give the helicopter a...controller enabled him to fly the helicopter to intercept the moving target. Mount camera film was exposed while the target was being tracked by the
Duque Domingo, Jaime; Cerrada, Carlos; Valero, Enrique; Cerrada, Jose A
2017-10-20
This work presents an Indoor Positioning System to estimate the location of people navigating in complex indoor environments. The developed technique combines WiFi Positioning Systems and depth maps, delivering promising results in complex inhabited environments, consisting of various connected rooms, where people are freely moving. This is a non-intrusive system in which personal information about subjects is not needed and, although RGB-D cameras are installed in the sensing area, users are only required to carry their smartphones. In this article, the methods developed to combine the above-mentioned technologies and the experiments performed to test the system are detailed. The obtained results show a significant improvement in terms of accuracy and performance with respect to previous WiFi-based solutions as well as an extension in the range of operation.
NASA Astrophysics Data System (ADS)
Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min
2016-01-01
This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed with severe occlusions happening frequently among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. To automatically track the skaters and precisely output their trajectories becomes a challenging task in object tracking. We employ the global rink information to compensate camera motion and obtain the global spatial information of skaters, utilize random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to labelling pixels to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.
Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection
NASA Astrophysics Data System (ADS)
Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2016-10-01
Recent technological advancements in hardware systems have made higher quality cameras possible. State of the art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps).1 Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time.2 In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show the UAV detection from a field trial that we conducted in August 2015.
Camera array based light field microscopy
Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai
2015-01-01
This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-apertures images formed by corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490
Near-field observation platform
NASA Astrophysics Data System (ADS)
Schlemmer, Harry; Baeurle, Constantin; Vogel, Holger
2008-04-01
A miniaturized near-field observation platform is presented comprising a sensitive daylight camera and an uncooled micro-bolometer thermal imager each equipped with a wide angle lens. Both cameras are optimised for a range between a few meters and 200 m. The platform features a stabilised line of sight and can therefore be used also on a vehicle when it is in motion. The line of sight either can be directed manually or the platform can be used in a panoramic mode. The video output is connected to a control panel where algorithms for moving target indication or tracking can be applied in order to support the observer. The near-field platform also can be netted with the vehicle system and the signals can be utilised, e.g. to designate a new target to the main periscope or the weapon sight.
An inexpensive programmable illumination microscope with active feedback
Tompkins, Nathan; Fraden, Seth
2016-01-01
We have developed a programmable illumination system capable of tracking and illuminating numerous objects simultaneously using only low-cost and reused optical components. The active feedback control software allows for a closed-loop system that tracks and perturbs objects of interest automatically. Our system uses a static stage where the objects of interest are tracked computationally as they move across the field of view allowing for a large number of simultaneous experiments. An algorithmically determined illumination pattern can be applied anywhere in the field of view with simultaneous imaging and perturbation using different colors of light to enable spatially and temporally structured illumination. Our system consists of a consumer projector, camera, 35-mm camera lens, and a small number of other optical and scaffolding components. The entire apparatus can be assembled for under $4,000. PMID:27642182
2008-08-12
CAPE CANAVERAL, Fla. – In the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center, technicians move the base of the shipping container holding the Wide Field Camera 3, or WFC3, into the high bay. As Hubble enters the last stage of its life, WFC3 will be Hubble's next evolutionary step, allowing Hubble to peer ever further into the mysteries of the cosmos. WFC3 will study a diverse range of objects and phenomena, from young and extremely distant galaxies, to much more nearby stellar systems, to objects within our very own solar system. WFC3 will take the place of Wide Field Planetary Camera 2, which astronauts will bring back to Earth aboard the shuttle. WFC3 is part of the payload on the fifth and final Hubble servicing mission, STS-125, targeted for launch Oct. 8. Photo credit: NASA/Jack Pfaller
NASA Astrophysics Data System (ADS)
Torres, Juan; Menéndez, José Manuel
2015-02-01
This paper establishes a real-time auto-exposure method to guarantee that surveillance cameras in uncontrolled light conditions take advantage of their whole dynamic range while providing neither under- nor overexposed images. State-of-the-art auto-exposure methods base their control on the brightness of the image measured in a limited region where the foreground objects are mostly located. Unlike these methods, the proposed algorithm establishes a set of indicators based on the image histogram that define its shape and position. Furthermore, since the location of the objects to be inspected is usually unknown in surveillance applications, the whole image is monitored in this approach. To control the camera settings, we defined a parameter function (Ef) that depends linearly on the shutter speed and the electronic gain, and is inversely proportional to the square of the lens aperture diameter. When the current acquired image is not overexposed, our algorithm computes the value of Ef that would move the histogram to the maximum value that does not overexpose the capture. When the current acquired image is overexposed, it computes the value of Ef that would move the histogram to a value that does not underexpose the capture and remains close to the overexposed region. If the image is both under- and overexposed, the whole dynamic range of the camera is already in use, and a default value of Ef that does not overexpose the capture is selected. This decision follows the idea that underexposed images are preferable to overexposed ones, because the noise produced in the lower regions of the histogram can be removed in a post-processing step, while the saturated pixels of the higher regions cannot be recovered. The proposed algorithm was tested on a video surveillance camera placed at an outdoor parking lot surrounded by buildings and trees which produce moving shadows on the ground. During the daytime of seven days, the algorithm ran alternately with a representative auto-exposure algorithm from the recent literature. Besides the sunrises and the nightfalls, multiple weather conditions occurred which produced light changes in the scene: sunny hours that produced sharp shadows and highlights; cloud cover that softened the shadows; and cloudy and rainy hours that dimmed the scene. Several indicators were used to measure the performance of the algorithms. They provided objective quality measures of: the time the algorithms take to recover from an under- or overexposure, the brightness stability, and the deviation from the optimal exposure. The results demonstrated that our algorithm reacts faster to all the light changes than the selected state-of-the-art algorithm. It is also capable of acquiring well-exposed images and keeping the brightness stable for longer. Summing up the results, we concluded that the proposed algorithm provides a fast and stable auto-exposure method that maintains an optimal exposure for video surveillance applications. Future work will involve the evaluation of this algorithm in robotics.
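As a minimal sketch of the exposure parameter function described above (assuming an 8-bit image, a linear sensor response, and a hypothetical back-off step for the overexposed case; the paper's actual indicator set is richer), the update rule could look like:

    import numpy as np

    def exposure_value(shutter_s, gain, aperture_diameter_mm):
        # Ef depends linearly on shutter speed and electronic gain and is
        # inversely proportional to the square of the aperture diameter.
        return shutter_s * gain / aperture_diameter_mm ** 2

    def next_exposure(image, current_ef, target_bin=250):
        # Scale Ef so the brightest occupied histogram bin lands just
        # below saturation without clipping (assumed 8-bit image).
        hist, _ = np.histogram(image, bins=256, range=(0, 256))
        brightest = np.nonzero(hist)[0].max()
        if brightest < 255:                 # not overexposed: push histogram up
            return current_ef * target_bin / max(brightest, 1)
        return current_ef * 0.5             # overexposed: back off (assumed step)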
A system for simulating aerial or orbital TV observations of geographic patterns
NASA Technical Reports Server (NTRS)
Latham, J. P.
1972-01-01
A system which simulates observation of the Earth's surface by aerial or orbiting television devices has been developed. By projecting color slides of photographs taken by aircraft and orbiting sensors onto a rear-screen system, and altering the scale of the projected image, the screen position, or the TV camera position, it is possible to simulate alternative altitudes or optical systems. By altering the scan line pattern of the COHU 3200 series camera from 525 to 945 scan lines, it is possible to study the implications of scan line resolution for the detection and analysis of geographic patterns observed by orbiting TV systems.
Active 3D camera design for target capture on Mars orbit
NASA Astrophysics Data System (ADS)
Cottin, Pierre; Babin, François; Cantin, Daniel; Deslauriers, Adam; Sylvestre, Bruno
2010-04-01
During the ESA Mars Sample Return (MSR) mission, a sample canister launched from Mars will be autonomously captured by an orbiting satellite. We present the concept and the design of an active 3D camera supporting the orbiter navigation system during the rendezvous and capture phase. This camera aims at providing the range and bearing of a 20 cm diameter canister from 2 m to 5 km within a 20° field-of-view without moving parts (scannerless). The concept exploits the sensitivity and the gating capability of a gated intensified camera. It is supported by a pulsed source based on an array of laser diodes with adjustable amplitude and pulse duration (from nanoseconds to microseconds). The ranging capability is obtained by adequately controlling the timing between the acquisition of 2D images and the emission of the light pulses. Three modes of acquisition are identified to accommodate the different levels of ranging and bearing accuracy and the 3D data refresh rate. To come up with a single 3D image, each mode requires a different number of images to be processed. These modes can be applied to the different approach phases. The entire concept of operation of this camera is detailed with an emphasis on the extreme lighting conditions. Its uses for other space missions and terrestrial applications are also highlighted. This design is implemented in a prototype with shorter ranging capabilities for concept validation. Preliminary results obtained with this prototype are also presented. This work is financed by the Canadian Space Agency.
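For the gating described above, the relation between gate delay and target range follows directly from the round-trip time of light; a short sketch using the ranges quoted in the abstract (the prototype's actual timing control is not specified here):

    C = 299_792_458.0  # speed of light, m/s

    def gate_delay_for_range(range_m):
        # Round-trip time of a light pulse reflected from a target at range_m.
        return 2.0 * range_m / C

    def range_from_delay(delay_s):
        return C * delay_s / 2.0

    # A canister at 2 m corresponds to a delay of roughly 13 ns, and one at
    # 5 km to roughly 33 microseconds, spanning the nanosecond-to-microsecond
    # pulse durations mentioned above.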
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable image capture at frame rates 1,000 times higher than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system that includes a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal that carries grayscale images from the STEAM camera; the direct data output from the STEAM camera therefore amounts to a continuous 7.0 GB/s. We use a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilize graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device, and a GPU device, and evaluated its performance in the real-time identification of small particles (beads), serving as surrogate biological cells, flowing through a microfluidic channel.
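The quoted data rate follows directly from the ADC figures; a one-line check (values taken from the abstract):

    samples_per_second = 7.0e9          # 7.0 Gsamples/s ADC
    bits_per_sample = 8                 # 8-bit resolution
    bytes_per_second = samples_per_second * bits_per_sample / 8
    print(f"{bytes_per_second / 1e9:.1f} GB/s")   # 7.0 GB/s continuous stream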
Accuracy of an optical active-marker system to track the relative motion of rigid bodies.
Maletsky, Lorin P; Sun, Junyi; Morton, Nicholas A
2007-01-01
The measurement of relative motion between two moving bones is commonly accomplished for in vitro studies by attaching to each bone a series of either passive or active markers in a fixed orientation to create a rigid body (RB). This work determined the accuracy of motion between two RBs using an Optotrak optical motion capture system with active infrared LEDs. The stationary noise in the system was quantified by recording the apparent change in position with the RBs stationary and was found to be 0.04 degrees and 0.03 mm. Incremental 10-degree rotations and 10-mm translations were made using a tool more precise than the Optotrak. Increasing the camera distance decreased the precision, i.e., increased the range of values observed for a set motion, and increased the rotational bias between the measured and actual rotation. The relative positions of the RBs with respect to the camera-viewing plane had a minimal effect on the kinematics; therefore, for a given distance in the volume less than or close to the precalibrated camera distance, any motion was similarly reliable. For a typical operating set-up, a 10-degree rotation showed a bias of 0.05 degrees and a 95% repeatability limit of 0.67 degrees. A 10-mm translation showed a bias of 0.03 mm and a 95% repeatability limit of 0.29 mm. To achieve a high level of accuracy it is important to keep the distance between the cameras and the markers near the distance to which the cameras were focused during calibration.
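The abstract does not state the exact statistical definitions used; assuming the bias is the mean error and the 95% repeatability limit is 1.96 times the sample standard deviation of the errors, a sketch of the computation is:

    import numpy as np

    def bias_and_repeatability(measured, true_value):
        errors = np.asarray(measured, dtype=float) - true_value
        bias = errors.mean()                          # systematic offset
        repeatability_95 = 1.96 * errors.std(ddof=1)  # spread about the mean
        return bias, repeatability_95

    # Hypothetical repeated 10-degree rotations:
    print(bias_and_repeatability([10.03, 9.98, 10.07, 10.02, 9.95], 10.0))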
Khan, M Salah Uddin; Hossain, Jahangir; Gurley, Emily S; Nahar, Nazmun; Sultana, Rebeca; Luby, Stephen P
2010-12-01
Pteropus bats are commonly infected with Nipah virus, but show no signs of illness. Human Nipah outbreaks in Bangladesh coincide with the date palm sap harvesting season. In epidemiologic studies, drinking raw date palm sap is a risk factor for human Nipah infection. We conducted a study to evaluate bats' access to date palm sap. We mounted infrared cameras that silently captured images upon detection of motion on date palm trees from 5:00 pm to 6:00 am. Additionally, we placed two locally used preventative techniques, bamboo skirts and lime (CaCO₃) smeared on date palm trees, to assess their effectiveness in preventing bats' access to sap. Out of 20 camera-nights of observations, 14 identified 132 visits of bats around the tree, 91 to the shaved surface of the tree where the sap flow originates, 4 at the stream of sap moving toward the collection pot, and none at the tap or on the collection pots; the remaining 6 camera-nights recorded no visits. Of the preventative techniques, the bamboo skirt, placed for four camera-nights, prevented bats' access to sap. This study confirmed that bats commonly visited date palm trees and physically contacted the sap collected for human consumption. This is further evidence that date palm sap is an important link between Nipah virus in bats and Nipah virus in humans. Efforts that prevent bat access to the shaved surface and the sap stream of the tree could reduce Nipah spillovers to the human population.
NASA Astrophysics Data System (ADS)
de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.
2011-05-01
Tracking targets in a panoramic image is in many senses the inverse of tracking targets with a narrow field of view camera on a pan-tilt pedestal. In a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and the areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground truth data-set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques, including level sets, Kalman filters and particle filters, were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques, including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques, were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea glint, low-contrast visibility and sea clutter such as whitecaps is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
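A minimal sketch of the detection idea above (areas the panoramic background model cannot explain become candidate targets), using a simple running-average scene model rather than the level-set, Kalman or particle-filter trackers evaluated in the paper:

    import numpy as np

    def update_background(background, frame, alpha=0.02):
        # Exponential running average as a stand-in for the scene model.
        return (1 - alpha) * background + alpha * frame.astype(float)

    def candidate_targets(background, frame, k=4.0):
        # Pixels that deviate strongly from the model are candidate targets.
        residual = np.abs(frame.astype(float) - background)
        threshold = residual.mean() + k * residual.std()
        return residual > threshold     # boolean mask of candidate pixels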
NASA Technical Reports Server (NTRS)
1978-01-01
The large format camera (LFC) designed as a 30 cm focal length cartographic camera system that employs forward motion compensation in order to achieve the full image resolution provided by its 80 degree field angle lens is described. The feasibility of application of the current LFC design to deployment in the orbiter program as the Orbiter Camera Payload System was assessed and the changes that are necessary to meet such a requirement are discussed. Current design and any proposed design changes were evaluated relative to possible future deployment of the LFC on a free flyer vehicle or in a WB-57F. Preliminary mission interface requirements for the LFC are given.
Embedded processor extensions for image processing
NASA Astrophysics Data System (ADS)
Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy
2008-04-01
The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.
A novel super-resolution camera model
NASA Astrophysics Data System (ADS)
Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli
2015-05-01
Aiming to achieve super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of consecutive low-resolution (LR) images can be obtained and stored instantaneously, reflecting both the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences contain complementary redundant information and particular prior information, so a super-resolution image can be restored faithfully and effectively. A sampling analysis is used to derive the super-resolution reconstruction principle and the theoretically achievable resolution improvement. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm, which models the unknown high-resolution image, the motion parameters, and the unknown model parameters in one hierarchical Bayesian framework, is simulated to reconstruct the low-resolution images with random displacements. Using a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. Reconstruction results from 16 images show that this camera model can double the image resolution, obtaining higher-resolution images with currently available hardware.
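As a much simpler stand-in for the learning-based and variational Bayesian reconstructions described above, a shift-and-add estimate with known sub-pixel displacements illustrates how randomly shifted low-resolution frames can be fused onto a finer grid:

    import numpy as np

    def shift_and_add(lr_images, shifts, factor=2):
        # lr_images: sequence of HxW low-resolution frames (float arrays)
        # shifts: (dy, dx) sub-pixel displacements in LR-pixel units
        h, w = lr_images[0].shape
        acc = np.zeros((h * factor, w * factor))
        count = np.zeros_like(acc)
        for img, (dy, dx) in zip(lr_images, shifts):
            ys = int(round(dy * factor)) % factor   # nearest HR-grid offset
            xs = int(round(dx * factor)) % factor
            acc[ys::factor, xs::factor] += img
            count[ys::factor, xs::factor] += 1
        count[count == 0] = 1                       # leave unobserved cells at zero
        return acc / count                          # averaged high-resolution estimate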
Optimization of digitization procedures in cultural heritage preservation
NASA Astrophysics Data System (ADS)
Martínez, Bea; Mitjà, Carles; Escofet, Jaume
2013-11-01
Digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only wider diffusion and online transmission, but also preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or highly textured flat materials. The technical obsolescence of high-end scanners and the improvement achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in the reproduction work so as to preserve the properties of the original item as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as the best procedures to identify and counteract the effects of residual lens aberrations, sensor aliasing, image illumination, color management and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and black-and-white glass plate photographic negatives.
NASA Astrophysics Data System (ADS)
Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo
2008-11-01
Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from the casings, so we must protect our visualizing equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable so that the camera, a Shimadzu Hypervision HPV-1, could be used for tests in a severe blast environment, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to the images taken by the camera with the lens directly coupled to the camera head. It was confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.
Color constancy by characterization of illumination chromaticity
NASA Astrophysics Data System (ADS)
Nikkanen, Jarno T.
2011-05-01
Computational color constancy algorithms play a key role in achieving the desired color reproduction in digital cameras. Failure to estimate the illumination chromaticity correctly results in an incorrect overall colour cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and a low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on a characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used to demonstrate the good performance of the algorithm in comparison with state-of-the-art color constancy algorithms.
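The characterization data on which the algorithm relies is not reproduced in the abstract; as a baseline only, the sketch below estimates the illumination chromaticity with a gray-world assumption and derives diagonal (von Kries) white-balance gains, which is the kind of quantity such an algorithm outputs:

    import numpy as np

    def estimate_illumination_chromaticity(rgb_image):
        # Gray-world baseline: the mean sensor response approximates the illuminant.
        mean_rgb = rgb_image.reshape(-1, 3).mean(axis=0)
        return mean_rgb / mean_rgb.sum()       # (r, g, b), sums to 1

    def white_balance_gains(chromaticity):
        # Diagonal gains mapping the estimated illuminant to neutral grey.
        r, g, b = chromaticity
        return np.array([g / r, 1.0, g / b])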
Camera Concepts for the Advanced Gamma-Ray Imaging System (AGIS)
NASA Astrophysics Data System (ADS)
Nepomuk Otte, Adam
2009-05-01
The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation observatory in ground-based very high energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. The incorporation of trigger electronics and signal digitization into the camera is under study. Given the size of AGIS, the camera must be reliable, robust, and cost effective. We are investigating several directions, including innovative technologies such as Geiger-mode avalanche photodiodes as a possible detector and switched-capacitor arrays for the digitization.
Mechanically assisted liquid lens zoom system for mobile phone cameras
NASA Astrophysics Data System (ADS)
Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Berge, B.
2006-08-01
Camera systems with a small form factor are an integral part of today's mobile phones, which recently feature auto-focus functionality. Ready-to-market solutions without moving parts have been developed using electrowetting technology. Besides virtually no deterioration, easy control electronics, and simple and therefore cost-effective fabrication, this type of liquid lens enables extremely fast settling times compared to mechanical approaches. As a next evolutionary step, mobile phone cameras will be equipped with zoom functionality. We present first-order considerations for the optical design of a miniaturized zoom system based on liquid lenses and compare it to its mechanical counterpart. We propose a design of a zoom lens with a zoom factor of 2.5 considering state-of-the-art commercially available liquid lens products. The lens possesses auto-focus capability and is based on liquid lenses and one additional mechanical actuator. The combination of liquid lenses and a single mechanical actuator enables extremely short settling times of about 20 ms for the auto focus and a simplified mechanical system design, leading to lower production cost and longer lifetime. The camera system has a mechanical outline of 24 mm in length and 8 mm in diameter. The lens with f/# 3.5 provides market-relevant optical performance and is designed for an image circle of 6.25 mm (1/2.8" format sensor).
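A first-order (thin-lens) illustration of why tuning one liquid-lens group changes the system focal length: for two groups of focal lengths f1 and f2 separated by d, 1/f = 1/f1 + 1/f2 - d/(f1*f2). The numbers below are purely illustrative and are not taken from the proposed design:

    def combined_focal_length(f1_mm, f2_mm, separation_mm):
        # Thin-lens focal length of two groups separated by separation_mm.
        inv_f = 1.0 / f1_mm + 1.0 / f2_mm - separation_mm / (f1_mm * f2_mm)
        return 1.0 / inv_f

    # Sweeping the tunable front group while the rear group and spacing stay
    # fixed changes the effective focal length, the first-order basis of a zoom.
    for f1 in (40.0, 60.0, 100.0):
        print(f1, round(combined_focal_length(f1, 15.0, 10.0), 2))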
NASA Astrophysics Data System (ADS)
Dekemper, E.; Fussen, D.; Vanhellemont, F.; Vanhamel, J.; Pieroux, D.; Berkenbosch, S.
2017-12-01
In an urban environment, nitrogen dioxide is emitted by a multitude of static and moving point sources (cars, industry, power plants, heating systems,…). Air quality models generally rely on a limited number of monitoring stations which neither capture the whole pattern nor allow for full validation. So far, there has been a lack of instruments capable of measuring NO2 fields with the necessary spatio-temporal resolution above major point sources (power plants) or more extended ones (cities). We have developed a new type of passive remote sensing instrument aiming at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with a high spatial (meters) and temporal (minutes) resolution. The measurement principle has some similarities with the popular filter-based SO2 camera (used in volcanic and industrial sulfur emissions monitoring) as it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. But contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. A first prototype was successfully tested with the plume of a coal-fired power plant in Romania, revealing the dynamics of the formation of NO2 in the early plume. A lighter version of the NO2 camera is now being tested on other targets, such as oil refineries and urban air masses.
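For a two-wavelength filter-camera analogue of this measurement (the AOTF instrument resolves the spectral features more finely, and the real retrieval is correspondingly more involved), a Beer-Lambert estimate of the slant column density from an absorbing ("on") and a weakly absorbing ("off") band, referenced to clear-sky images, would look like:

    import numpy as np

    def scd_two_band(i_on, i_off, i0_on, i0_off, delta_sigma_cm2):
        # Apparent absorbance between the two bands; delta_sigma_cm2 is the
        # differential NO2 absorption cross section between them.
        tau = np.log((i_off / i0_off) / (i_on / i0_on))
        return tau / delta_sigma_cm2    # slant column density, molecules/cm^2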
Human image tracking technique applied to remote collaborative environments
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Suzuki, Gen
1993-10-01
To support various kinds of collaboration over long distances using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires support for these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaboration, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When the people move frequently or over a wide area, the need for automatic human tracking increases. Using the movement area of the person or the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area follows the movement of the human head fairly well.
NASA Astrophysics Data System (ADS)
Di, Si; Lin, Hui; Du, Ruxu
2011-05-01
Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The images formed by the two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setup, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
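The displacement algorithm itself is not detailed in the abstract; a generic, calibration-free sketch estimates the 2D shift between two frames from the peak of their phase correlation:

    import numpy as np

    def displacement_by_phase_correlation(frame_a, frame_b):
        # The correlation peak gives the integer-pixel shift aligning frame_b to frame_a.
        fa, fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
        cross_power = fa * np.conj(fb)
        cross_power /= np.abs(cross_power) + 1e-12
        corr = np.fft.ifft2(cross_power).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = frame_a.shape
        if dy > h // 2: dy -= h         # map wrapped indices to signed shifts
        if dx > w // 2: dx -= w
        return dy, dx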
NASA Astrophysics Data System (ADS)
Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri
2012-01-01
An aerial multiple camera tracking paradigm needs to not only spot unknown targets and track them, but also needs to know how to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion allowing it to find targets in motion, even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques. These include Histogram, Spatiogram and Single Gaussian Model. These are tested by simulating a very large number of target losses in six videos over an interval of 1000 frames each from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints. This is how long a fingerprint is good for when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us if a fingerprint method has better accuracy over longer periods. In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model compared with the null hypothesis of <20%. Additionally, the performance for fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to view point and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
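A sketch of the single Gaussian fingerprint idea above: summarize the segmented target's color samples by their mean and covariance, then score a candidate segment against the stored fingerprint. The exact features and matching score are not specified in the abstract, so a mean Mahalanobis distance is assumed here:

    import numpy as np

    def gaussian_fingerprint(pixels_rgb):
        # pixels_rgb: Nx3 array of the target segment's color samples.
        mean = pixels_rgb.mean(axis=0)
        cov = np.cov(pixels_rgb, rowvar=False) + 1e-6 * np.eye(3)
        return mean, cov

    def match_score(fingerprint, candidate_pixels_rgb):
        # Lower is better: mean Mahalanobis distance to the stored Gaussian.
        mean, cov = fingerprint
        diff = candidate_pixels_rgb - mean
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        return np.sqrt(np.maximum(d2, 0)).mean()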
Real-Time Detection of Sporadic Meteors in the Intensified TV Imaging Systems.
Vítek, Stanislav; Nasyrova, Maria
2017-12-29
The automatic observation of the night sky through wide-angle video systems with the aim of detecting meteors and fireballs is now among routine astronomical observations. The observation is usually done in multi-station or network mode, so it is possible to estimate the direction and the speed of the body's flight. The high velocity of a meteor flying through the atmosphere determines the key requirement on the camera systems, namely a high frame rate. Thanks to high frame rates, such imaging systems produce a large amount of data, of which only a small fraction has scientific potential. This paper focuses on methods for the real-time detection of fast-moving objects in video sequences recorded by intensified TV systems with frame rates of about 60 frames per second. The goal of our effort is to remove all unnecessary data during the daytime and free hard-drive capacity for the next observation. The processing of data from the MAIA (Meteor Automatic Imager and Analyzer) system is demonstrated in the paper.
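A minimal sketch of the kind of real-time filtering described above: flag frames containing a fast transient by frame differencing, and discard the rest. The MAIA pipeline itself is not reproduced here, and the thresholds are illustrative:

    import numpy as np

    def transient_mask(prev_frame, frame, k=5.0):
        # A meteor appears as bright pixels present now but not in the previous frame.
        diff = frame.astype(float) - prev_frame.astype(float)
        return diff > diff.mean() + k * diff.std()

    def frame_is_interesting(prev_frame, frame, min_pixels=20):
        # Keep only frames with enough newly bright pixels; the rest can be
        # deleted during the daytime to free disk space for the next night.
        return transient_mask(prev_frame, frame).sum() >= min_pixels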
High throughput imaging cytometer with acoustic focussing.
Zmijan, Robert; Jonnalagadda, Umesh S; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn; Glynne-Jones, Peter
2015-10-31
We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however a longer device would remove this constraint.
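A back-of-envelope relation behind the galvo tracking: motion blur in pixels is roughly the residual image-plane speed times the exposure time divided by the pixel size. All numbers below except the 10 ms exposure are hypothetical:

    def motion_blur_pixels(object_speed_um_s, magnification, exposure_s,
                           pixel_size_um, tracking_efficiency=0.0):
        # tracking_efficiency = 1 means the galvo cancels the motion perfectly.
        image_speed = object_speed_um_s * magnification * (1.0 - tracking_efficiency)
        return image_speed * exposure_s / pixel_size_um

    print(motion_blur_pixels(5_000, 10, 0.010, 6.5))                            # untracked
    print(motion_blur_pixels(5_000, 10, 0.010, 6.5, tracking_efficiency=0.95))  # tracked: a few pixels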
Kumar, Manoj; Vijayakumar, A; Rosen, Joseph
2017-09-14
We present a lensless, interferenceless incoherent digital holography technique based on the principle of coded aperture correlation holography. The digital hologram acquired by this technique contains a three-dimensional image of the observed scene. Light diffracted by a point object (pinhole) is modulated by a random-like coded phase mask (CPM) and the intensity pattern is recorded and composed as a point spread hologram (PSH). A library of PSHs is created using the same CPM by moving the pinhole to all possible axial locations. Intensity diffracted through the same CPM from an object placed within the axial limits of the PSH library is recorded by a digital camera; the recorded intensity is this time composed as the object hologram. The image of the object at any axial plane is reconstructed by cross-correlating the object hologram with the corresponding component of the PSH library. The reconstruction noise attached to the image is suppressed by various methods. The reconstruction results for multiplane and thick objects obtained with this technique are compared with regular lens-based imaging.
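A sketch of the reconstruction step described above, assuming the cross-correlation is computed via FFTs and the PSH library is a list of recorded intensity patterns (the paper's noise-suppression methods are omitted):

    import numpy as np

    def reconstruct_plane(object_hologram, psh_for_plane):
        # Cross-correlate the object hologram with the PSH of one axial plane.
        recon = np.fft.ifft2(np.fft.fft2(object_hologram) *
                             np.conj(np.fft.fft2(psh_for_plane)))
        return np.fft.fftshift(np.abs(recon))

    def reconstruct_stack(object_hologram, psh_library):
        # One reconstruction per stored axial location in the PSH library.
        return [reconstruct_plane(object_hologram, psh) for psh in psh_library]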
2014-05-08
This image is one of the highest-resolution MDIS observations to date! Many craters of varying degradation states are visible, as well as gentle terrain undulations. Very short exposure times are needed to make these low-altitude observations while the spacecraft is moving quickly over the surface; thus the images are slightly noisier than typical MDIS images. This image was acquired as a high-resolution targeted observation. Targeted observations are images of a small area on Mercury's surface at resolutions much higher than the 200-meter/pixel morphology base map. It is not possible to cover all of Mercury's surface at this high resolution, but typically several areas of high scientific interest are imaged in this mode each week. Date acquired: March 15, 2014 Image Mission Elapsed Time (MET): 37173522 Image ID: 5936740 Instrument: Narrow Angle Camera (NAC) of the Mercury Dual Imaging System (MDIS) Center Latitude: 71.91° Center Longitude: 232.7° E Resolution: 5 meters/pixel Scale: The image is approximately 8.3 km (5.2 mi.) across. Incidence Angle: 79.4° Emission Angle: 4.0° Phase Angle: 83.4° http://photojournal.jpl.nasa.gov/catalog/PIA18370
Tracking people and cars using 3D modeling and CCTV.
Edelman, Gerda; Bijhold, Jurrien
2010-10-10
The aim of this study was to find a method for reconstructing the movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed in which video streams are synchronized and displayed in a 3D model using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a record of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we found that the 3D model created more situational awareness, which made it easier to track people across multiple video streams. Based on all experiences from the experimental setup and the case, recommendations are formulated for use in practice. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Plans, Patterns, and Move Categories Guiding a Highly Selective Search
NASA Astrophysics Data System (ADS)
Trippen, Gerhard
In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.
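A sketch of how plan-driven move categories restrict move generation; the plan names, trigger features and category lists below are invented for illustration and are not Rat's actual tables:

    # Hypothetical plans mapped to the move categories they allow.
    PLAN_CATEGORIES = {
        "attack_trap":    ["push_to_trap", "pull_to_trap", "advance_piece"],
        "defend_trap":    ["guard_trap", "retreat_piece"],
        "advance_rabbit": ["advance_rabbit", "clear_path"],
    }

    def select_plan(position_features):
        # Positional evaluation plus pattern matching would choose a plan here;
        # this stub just picks the first plan whose trigger feature is present.
        for plan, trigger in (("attack_trap", "weak_enemy_trap"),
                              ("defend_trap", "own_trap_threatened"),
                              ("advance_rabbit", "open_file_for_rabbit")):
            if position_features.get(trigger):
                return plan
        return "advance_rabbit"

    def generate_moves(position, plan, move_generators):
        # Only generate moves in the categories the plan allows, instead of
        # enumerating all 5,000 to 40,000 legal moves in a middle-game position.
        moves = []
        for category in PLAN_CATEGORIES[plan]:
            moves.extend(move_generators[category](position))
        return moves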